
Lean.Parser.Basic

Basic Lean parser infrastructure #

The Lean parser was developed with the following primary goals in mind:

Given these constraints, we decided to implement a combinatoric, non-monadic, lexer-less, memoizing recursive-descent parser. Using combinators instead of some more formal and introspectible grammar representation ensures ultimate flexibility as well as efficient extensibility: there is (almost) no pre-processing necessary when extending the grammar with a new parser. However, because all the results the combinators produce are of the homogeneous Syntax type, the basic parser type is not actually a monad but a monomorphic linear function ParserState → ParserState, avoiding constructing and deconstructing countless monadic return values. Instead of explicitly returning syntax objects, parsers push (zero or more of) them onto a syntax stack inside the linear state. Chaining parsers via >> accumulates their output on the stack. Combinators such as node then pop off all syntax objects produced during their invocation and wrap them in a single Syntax.node object that is again pushed on this stack. Instead of calling node directly, we usually use the macro leading_parser p, which unfolds to node k p where the new syntax node kind k is the name of the declaration being defined.
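
As a minimal sketch (hypothetical parser name and node kind, assuming the Lean.Parser API is in scope), the pattern described above looks like this:

```lean
import Lean
open Lean Parser

-- Hypothetical parser: each string literal and `termParser` pushes its result onto
-- the syntax stack, and `leading_parser` (i.e. `node`) pops everything produced here
-- and wraps it in a single `Syntax.node` whose kind is the declaration name.
def myIf : Parser :=
  leading_parser "if " >> termParser >> " then " >> termParser >> " else " >> termParser
```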

The lack of a dedicated lexer ensures we can modify and replace the lexical grammar at any point, and simplifies detecting and propagating whitespace. The parser still has a concept of "tokens", however, and caches the most recent one for performance: when tokenFn is called twice at the same position in the input, it will reuse the result of the first call. tokenFn recognizes some built-in variable-length tokens such as identifiers as well as any fixed token in the ParserContext's TokenTable (a trie); however, the same cache field and strategy could be reused by custom token parsers. Tokens also play a central role in the prattParser combinator, which selects a leading parser followed by zero or more trailing parsers based on the current token (via peekToken); see the documentation of prattParser for more details. Tokens are specified via the symbol parser, or with symbolNoWs for tokens that should not be preceded by whitespace.
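
For illustration, two hypothetical token parsers; this assumes symbolNoWs takes the token string just as symbol does, as the description above suggests:

```lean
import Lean
open Lean Parser

-- Hypothetical tokens: `symbol` allows whitespace before the token, while
-- `symbolNoWs` requires the token to follow the previous one immediately.
def plusTok : Parser := symbol " + "
def bangTok : Parser := symbolNoWs "!"
```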

The Parser type is extended with additional metadata over the mere parsing function to propagate token information: collectTokens collects all tokens within a parser for registering. firstTokens holds information about the "FIRST" token set used to speed up parser selection in prattParser. This approach of combining static and dynamic information in the parser type is inspired by the paper "Deterministic, Error-Correcting Combinator Parsers" by Swierstra and Duponcheel. If multiple parsers accept the same current token, prattParser tries all of them using the backtracking longestMatchFn combinator. This is the only case where standard parsers might execute arbitrary backtracking. Repeated invocations of the same category or concrete parser at the same position are cached where possible; see withCache.
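
The shape of this metadata can be pictured roughly as follows; this is a deliberately simplified sketch, not the actual core definitions, which carry more information:

```lean
import Lean
open Lean Parser

-- Simplified sketch: a parser value pairs the monomorphic parsing function with
-- static metadata about the tokens it uses and its possible first tokens.
structure ParserInfoSketch where
  collectTokens : List String → List String := id  -- tokens to register
  firstTokens   : List String := []                -- approximation of the FIRST set

structure ParserSketch where
  info : ParserInfoSketch := {}
  fn   : ParserState → ParserState                 -- linear state function
```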

Finally, error reporting follows the standard combinatoric approach of collecting a single unexpected token/... and zero or more expected tokens (see Error below). Expected tokens are e.g. set by symbol and merged by <|>. Combinators running multiple parsers should check if an error message is set in the parser state (hasError) and act accordingly. Error recovery is left to the designer of the specific language; for example, Lean's top-level parseCommand loop skips tokens until the next command keyword on error.

Generate an error at the position saved with the withPosition combinator. If delta == true, the error is reported at the saved position + 1. This is useful to make sure a parser consumed at least one character.

Succeeds if c.prec <= prec.

Succeeds if c.lhsPrec >= prec.

Run p, falling back to q if p failed without consuming any input.

NOTE: In order for the pretty printer to retrace an orelse, p must be a call to node or some other parser producing a single node kind. Nested orelse calls are flattened for this, i.e. (node k1 p1 <|> node k2 p2) <|> ... is fine as well.
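
A hypothetical example of the recommended shape, with each alternative wrapped in node:

```lean
import Lean
open Lean Parser

-- Hypothetical alternatives: each branch is a `node` parser, so the produced syntax
-- records which alternative succeeded and the pretty printer can retrace the orelse.
def numOrIdent : Parser :=
  node `myNum numLit <|> node `myIdent ident
```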

Apply f to the syntax object produced by p.

def Lean.Parser.satisfyFn (p : Char → Bool) (errorMsg : optParam String "unexpected character") : Lean.Parser.ParserFn

partial def Lean.Parser.finishCommentBlock (pushMissingOnError : Bool) (nesting : Nat) : Lean.Parser.ParserFn

Consume whitespace and comments.

Match an arbitrary Parser and return the consumed String in a Syntax.atom.

Push (Syntax.node tk ...) onto the syntax stack if the parse was successful.

Treat keywords as identifiers.

Check if the following token is the symbol or identifier sym. Useful for parsing local tokens that have not been added to the token table (but may have been so by some unrelated code).

For example, the universe max function is parsed using this combinator so that it can still be used as an identifier outside of universes (but registering it as a token in a term syntax would not break the universe parser).
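
A hypothetical sketch of the same pattern:

```lean
import Lean
open Lean Parser

-- Hypothetical parser: "probe" is matched via `nonReservedSymbol`, so it is not
-- registered in the token table and remains usable as an ordinary identifier.
def probeStx : Parser :=
  leading_parser nonReservedSymbol "probe" >> termParser
```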

Auxiliary function used to execute parsers provided to longestMatchFn. Pushes left? onto the stack if it is not none, and then executes p.

Remark: p must produce exactly one syntax node. Remark: left? is not none when we are processing trailing parsers.

def Lean.Parser.longestMatchFnAux (left? : Option Lean.Syntax) (startSize : Nat) (startLhsPrec : Nat) (startPos : String.Pos) (prevPrio : Nat) (ps : List (Lean.Parser.Parser × Nat)) : Lean.Parser.ParserFn

def Lean.Parser.longestMatchFnAux.parse (left? : Option Lean.Syntax) (startSize : Nat) (startLhsPrec : Nat) (startPos : String.Pos) (prevPrio : Nat) (ps : List (Lean.Parser.Parser × Nat)) : Lean.Parser.ParserFn

A multimap indexed by tokens. Used for indexing parsers by their leading token.

The type LeadingIdentBehavior specifies how the parsing table lookup function behaves for identifiers. The function prattParser uses two tables, leadingTable and trailingTable. They map tokens to parsers.

We use LeadingIdentBehavior.symbol, LeadingIdentBehavior.both, and the nonReservedSymbol parser to implement the tactic parsers. The idea is to avoid creating a reserved symbol for each builtin tactic (e.g., apply, assumption, etc.). That is, users may still use these symbols as identifiers (e.g., naming a function).
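
As a sketch of how this choice surfaces to users (hypothetical category name):

```lean
-- Hypothetical syntax category: `behavior := symbol` makes the parsing-table lookup
-- treat a leading identifier like the corresponding symbol, which is how the builtin
-- tactic category keeps tactic names usable as ordinary identifiers.
declare_syntax_cat probe (behavior := symbol)
```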

• declName : Lean.Name

  The name of a declaration which will be used as the target of go-to-definition queries and from which doc strings will be extracted. This is a dummy declaration of type Lean.Parser.Category created by declare_syntax_cat, but for builtin categories the declaration is made manually and passed to registerBuiltinParserAttribute.

• The list of syntax nodes that can parse into this category. This can be used to list all syntaxes in the category.

• The parsing tables, which consist of a dynamic set of parser functions based on the syntaxes that have been declared so far.

• The LeadingIdentBehavior, which specifies how the parsing table lookup function behaves for the first identifier to be parsed. This is used by the tactic parser to avoid creating a reserved symbol for each builtin tactic (e.g., apply, assumption, etc.).

Each parser category is implemented using a Pratt parser. The system comes equipped with the following categories: level, term, tactic, and command. Users and plugins may define extra categories.

The function call

categoryParser `term prec

executes the Pratt parser for category term with precedence prec. That is, only parsers with precedence at least prec are considered. The function termParser prec is equivalent to the call above.
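
For example, a hypothetical notation added to the builtin term category, showing how these precedences interact:

```lean
-- Hypothetical infix-like notation in the `term` category: the rule has precedence 65,
-- its left operand is parsed at precedence 65 and its right operand at 66, so the
-- notation associates to the left.
syntax:65 term:65 " +' " term:66 : term

macro_rules
  | `($a +' $b) => `($a + $b)

#eval (1 +' 2 : Nat)  -- 3
```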

Antiquotations #

Fail if the previous token is immediately followed by ':'.

Define parser for $e (if anonymous == true) and $e:name. kind is embedded in the antiquotation's kind, and checked at syntax match unless isPseudoKind is true. Antiquotations can be escaped as in $$e, which produces the syntax tree for $e.
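
For illustration, a hypothetical macro whose quotation uses these antiquotation forms:

```lean
-- Hypothetical macro: `$e` is an anonymous antiquotation; writing `$e:term` would
-- additionally record the expected kind, and `$$e` would escape the antiquotation,
-- producing the literal syntax `$e`.
macro "dup" e:term : term => `(($e, $e))

#eval dup 3  -- (3, 3)
```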

Optimized version of mkAntiquot ... <|> p.

Parse $[p]suffix, e.g. $[p],*.

Parse suffix after an antiquotation, e.g. $x,*, and put both into a new node.
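
A hypothetical macro using such a suffixed antiquotation splice:

```lean
-- Hypothetical macro: `xs:term,*` parses a comma-separated list of terms, and the
-- antiquotation `$xs,*` splices the collected elements into a list literal.
macro "listOf " xs:term,* : term => `([$xs,*])

#eval listOf 1, 2, 3  -- [1, 2, 3]
```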

End of Antiquotations #

Implements a variant of Pratt's algorithm. In Pratt's algorithm, tokens have a right and left binding power. In our implementation, parsers have precedence instead. This method selects a parser (or more, via longestMatchFn) from leadingTable based on the current token. Note that the unindexed leadingParsers parsers are also tried. We have the unindexed leadingParsers because some parsers do not have a "first token". Example:

syntax term:51 "≤" ident "<" term "|" term : index

In principle, the set of first tokens for this parser is any token that can start a term, but this set is always changing. Thus, this parsing rule is stored as an unindexed leading parser at leadingParsers. After processing the leading parser, we chain with parsers from trailingTable/trailingParsers that have precedence at least c.prec where c is the ParserContext. Recall that c.prec is set by categoryParser.

Note that in the original Pratt's algorithm, precedences are only checked before calling trailing parsers. In our implementation, leading and trailing parsers check the precedence. We claim our algorithm is more flexible, modular, and easier to understand.

antiquotParser should be a mkAntiquot parser (or always fail) and is tried before all other parsers. It should not be added to the regular leading parsers because it would heavily overlap with antiquotation parsers nested inside them.

def Lean.Syntax.foldArgsM {m : Type u_1 → Type u_2} [Monad m] {β : Type u_1} (s : Lean.Syntax) (f : Lean.Syntax → β → m β) (b : β) : m β

def Lean.Syntax.foldArgs {β : Type u_1} (s : Lean.Syntax) (f : Lean.Syntax → β → β) (b : β) : β

def Lean.Syntax.forArgsM {m : Type → Type u_1} [Monad m] (s : Lean.Syntax) (f : Lean.Syntax → m Unit) : m Unit
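
A small usage sketch (hypothetical helper) based on the signatures above:

```lean
import Lean
open Lean

-- Hypothetical helper: count the arguments of a syntax node by folding over them.
def countArgs (stx : Syntax) : Nat :=
  stx.foldArgs (fun _arg n => n + 1) 0
```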