This document is a quick (and incomplete) description of how modes
work, how you might create one yourself, etc.

1. What are modes?

Pmacs uses modes to determine which actions can be applied to a
buffer, which keys invoke which actions, how the buffer should be
highlighted, how the buffer should be indented, and any other
per-buffer configuration. The default mode ("Fundamental") provides
the base functionality which all other modes inherit.

2. Where do they come from?

All modes are loaded and installed in application.py. It would be
nice if there were a configuration file where you could add your own
modes, but there isn't. Loading a mode involves the following (in
Application.__init__):

    self.modes['foo'] = package.Foo
    # ...
    self.mode_paths['/some/full/path'] = 'foo'
    self.mode_basenames['some-filename'] = 'foo'
    self.mode_extensions['.foo'] = 'foo'
    self.mode_detection['foo'] = 'foo'

The last entry is used to detect scripts via the "#!/usr/bin/foo"
syntax.

3. How do they work?

The one thing every mode has in common is that it maps key bindings
to actions. It does this via self.bindings, a dictionary mapping
action names (e.g. 'page-down') to a tuple of key bindings (e.g.
('C-v', 'PG_DN',)). Modes subclass mode2.Fundamental and call
mode2.Fundamental.__init__ to run the standard mode initialization
(including building the default bindings dictionary); they can later
modify or overwrite this dictionary if they choose.

There are at least 3 optional behaviors modes can make use of:

    1. Syntax highlighting
    2. Indentation level detection
    3. Tag (parenthesis, brace, bracket, etc.) matching

Not all modes can (or should) make use of these features: they are
primarily useful in modes for programming languages or other
structured documents.

4. Syntax highlighting

Syntax highlighting uses a hybrid lexing/parsing process to break
each line of the buffer down into one or more lexical tokens; these
tokens are primarily used to highlight parts of the buffer in
different colors. The colors to use are defined by self.colors, a
dictionary mapping token names to a tuple consisting of at least a
foreground color and a background color.

Explaining how to write a Grammar is outside the scope of this
document; see lex2.py and mode/*.py for examples. Some important
points to note:

    * Regexes are applied to only one line of the document at a time.
    * All regexes must match at least one character.
    * All tokens must consist of at least one character.
    * A rule that matches must generate one or more tokens.
    * Any input not matched by a rule will end up in a "null" token.
    * Tokens can't "look" for other tokens (but they can use
      zero-width assertions to test for data on the current line).

5. Indentation level detection

Indentation level detection hooks into the basic 'insert-tab'
action; rather than inserting 4 spaces (the default), it determines
the correct "tab depth" for the current line of the buffer and
inserts or removes spaces at the beginning of the line to reach that
depth.

To implement this, you must create a Tabber class and assign it to
self.tabber in the mode. At a minimum, a tabber must support the
following methods:

    * __init__(self, mode)
    * get_level(self, y)
    * region_added(self, p, lines)
    * region_removed(self, p1, p2)

Tabber classes can often be tricky to implement correctly.
tab2.Tabber provides a lot of base functionality that is probably
useful; you may also want to look at tab2.StackTabber, which
provides even more base support.
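To make the shapes above concrete, here is a rough sketch of a
minimal mode and tabber. Only the names documented above
(self.bindings, self.colors, self.tabber, and the four tabber
methods) come from this document; the Foo and FlatTabber classes,
the constructor arguments, the token and color names, and all the
method bodies are illustrative guesses, not pmacs's real API:

    import mode2, tab2

    class FlatTabber(tab2.Tabber):
        # A toy tabber that wants every line at depth 0.
        def __init__(self, mode):
            tab2.Tabber.__init__(self, mode)
        def get_level(self, y):
            # Return the tab depth (in spaces) line y should have; a
            # real tabber would inspect the buffer's contents here.
            return 0
        def region_added(self, p, lines):
            # Text was inserted at p; a real tabber would invalidate
            # any cached levels at or after p.
            pass
        def region_removed(self, p1, p2):
            # Text between p1 and p2 was removed; likewise, discard
            # any cached state.
            pass

    class Foo(mode2.Fundamental):
        def __init__(self, w):
            # Run the standard mode initialization, which builds the
            # default bindings dictionary...
            mode2.Fundamental.__init__(self, w)
            # ...then adjust or add bindings (section 3).
            self.bindings['page-down'] = ('C-v', 'PG_DN',)
            # Token name -> (foreground, background) for highlighting
            # (section 4); 'string' and the color names are guesses.
            self.colors['string'] = ('green', 'default')
            # Enable indentation level detection (section 5).
            self.tabber = FlatTabber(self)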
6. Tag matching

Tag matching allows a closing tag (such as ")") to "show" its
corresponding opening tag (such as an earlier "(") in order to help
orient the user. Currently, tags are assumed to be single
characters; it's easy to imagine multi-character tags being useful
in the future, but they are not yet supported.

This support is very easy to add, assuming that the mode has a
grammar. In most cases, here is how it works:

  a. In the mode, define the following four attributes:

     * opentokens: tuple of lexical tokens which can be opentags
     * opentags: dictionary mapping opentag strings to closetag
       strings
     * closetokens: tuple of lexical tokens which can be closetags
     * closetags: dictionary mapping closetag strings to opentag
       strings

  b. Also in the mode, create or instantiate actions which subclass
     method.CloseTag (e.g. method.CloseParen) and assign them the
     appropriate binding (e.g. ')').

  c. Enjoy!
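For example, here is a sketch of how a C-like mode might wire up
steps (a) and (b). The token name 'delimiter', the action name
'close-paren', and the self.actions registry are hypothetical
placeholders; substitute whatever your grammar and mode actually
use:

    # Inside the mode's __init__, after mode2.Fundamental.__init__:

    # Step a: the four tag-matching attributes. Use the token names
    # your grammar really produces instead of 'delimiter'.
    self.opentokens  = ('delimiter',)
    self.opentags    = {'(': ')', '[': ']', '{': '}'}
    self.closetokens = ('delimiter',)
    self.closetags   = {')': '(', ']': '[', '}': '{'}

    # Step b: instantiate a method.CloseTag subclass and bind it to
    # the closing character via self.bindings (section 3). The
    # 'close-paren' action name and the self.actions registry are
    # guesses at the real registration mechanism.
    self.actions['close-paren'] = method.CloseParen()
    self.bindings['close-paren'] = (')',)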