Quasiquotation with Binders: A Lean Metaprogramming Example

In programming languages research we often explore a language feature by formally defining a tiny language that is just big enough to have the properties we want, but is still small enough to make a fully self-contained formalization feasible. These tiny languages are typically not useful for actual programming, but it is often possible to encode more expressive features in the smaller language. Thus, the formal results can in principle apply to larger languages. Small models are also valuable for exploring new implementation techniques.

One nice thing about a formal model in an interactive theorem prover is that it can be played with! Experimentation with a system is a great way to build intuitions about it. While paper formalizations often require laborious pen-and-paper calculations to check an example, the computer can do it much more quickly. Here, notations are important: the closer the mechanization is to the notation in which we communicate with each other, the easier it is to experiment with. This is why Lean and other provers have an extensible notation system.

This post demonstrates one approach to solving both the problem of encoding features in a core language and the problem of concrete syntax: binding-aware quasiquotation. First, I'll describe a small language and show how to give it a nice syntax. This nice syntax supports escaping to Lean from the embedded language, so Lean becomes a macro language for the embedded language. Re-using Lean's own binding forms to implement the embedded language's binding forms makes it possible to use bound variables from the embedded language inside these escapes, but it also leads to leaky abstractions and confusing error messages. Fortunately, Lean's macro hygiene system can be used to fix these leaks.

This post assumes a background in programming language theory and implementation. For the sake of space, the implemented language is not a full formalization, but it exhibits the key challenges that arise when implementing quasiquotation with bindings. This is the first in a series of two posts on this topic; the next post in the series will demonstrate the use of elaborators to further improve the language embedding.

The Language

The choice of example language here is not particularly important, so I'm using the untyped lambda-calculus with built-in let-bindings. Because I first hit upon this method while experimenting with an implementation of András Kovács and Olle Frederiksen's glued evaluator, the example code is based on theirs. Please see the linked file and the smalltt repository for more details on this approach.

Glued evaluation is one approach to implementing the computations that occur in a type checker and elaborator for a dependently typed language. An elaborator converts from a convenient surface syntax to a simpler core language, simultaneously type checking the input and translating it to a form suitable to be checked by a small, simple kernel. While elaborating, it is necessary to compare expressions modulo α-equivalence, β-reduction, and often also η-expansion. At the same time, the system must maintain a collection of metavariables, which represent holes in the program that the user expects the elaborator to fill in. In Lean, for example, some of the uses of metavariables are to represent implicit arguments to functions, type class instances, and explicit user uses of the placeholder _. There is a tension between the computations that are necessary to compare two terms, which may unfold many definitions, and the size of terms that are saved as metavariable solutions, which should ideally be as small as possible. The glued evaluator computes both fully- and partially-unfolded results, sharing as much intermediate work as possible.

Theoretically, the language implemented in this post has the following structure:

M, N ::= c
       | x
       | λ x . M
       | M N
       | let x := M in N

The x terminal represents locally-bound variables, while c represents defined constants drawn from some ambient global environment. To keep the size of elaborated terms manageable, unifiers computed by the elaborator will not unfold global constants, while constants can be unfolded in the process of checking definitional equality. This is implemented through a variation of normalization by evaluation (NbE), with the domain of values suitably extended to represent both unfoldings. In NbE, expressions are first evaluated to some semantic domain that includes the equalities that we'd like to check over syntax, and then quoted or read back to syntax. Because the semantic domain that is the target of evaluation includes the desired equalities, the process of quotation must necessarily compute normal forms with respect to this equality.

In NbE, the bodies of functions are also normalized. This means that the values need to include representations for unknown variables. Because these unknown variables might later be applied to arguments, these extra values must also represent the arguments that are waiting for the variable, should its value as a function become known.

For this language, values may be one of the following:

  • a closure that provides the values of the free variables in a function along with its code, resulting from evaluating a lambda

  • a local application that pairs an unknown local variable with a spine of argument values to which it can be applied when its value becomes known

  • a global application that pairs a global constant and a spine of arguments with a lazy value that will compute the result of the application when forced

Because global application values contain both the result of the function call and a representation of its syntax, the read-back or quotation phase of NbE can choose a more compact representation or a normal form.

To represent the glued evaluator in Lean, the first step is to define c and x from the grammar, as TName and LName respectively:

/-- Top-level names -/
structure TName where
  name : String
deriving BEq, Inhabited, Repr, DecidableEq

/-- Local names -/
structure LName where
  name : String
deriving BEq, Inhabited, Repr, DecidableEq

The definition of terms follows the grammar:

/-- Terms -/
inductive Tm where
  | top : TName → Tm
  | loc : LName → Tm
  | lam : LName → Tm → Tm
  | app : Tm → Tm → Tm
  | let : LName → Tm → Tm → Tm
deriving BEq, Repr, Inhabited

Example: Church Numerals

One of the best ways to test a normalization procedure is to use Church numerals. Church numerals are an encoding of the natural numbers in the untyped lambda calculus, in which a number n is a function that takes a function f and a base value z, applying f to z a total of n times. For instance, two is λf. λz. f (f z) and zero is λf. λz. z. Church numerals are an excellent test of normalization because they are fairly easy to recognize with a bit of practice, and because they use many similar local names, they give a good workout to systems that rely on string representations of names. Addition of Church numerals is a function that takes two encoded numbers followed by f and z, supplying one number's application as the base value of the other: λn. λk. λf. λz. n f (k f z).
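Since these numerals recur throughout the post, it may help to do the arithmetic once by hand. A sketch of the β-reduction of two plus two, in the λ-notation above:

```lean
-- add two two
--   = (λn. λk. λf. λz. n f (k f z)) two two
--   → λf. λz. two f (two f z)
--   → λf. λz. f (f (two f z))      -- unfold the left `two`
--   → λf. λz. f (f (f (f z)))      -- unfold the right `two`
```

The result is the Church numeral for four, which is exactly what the evaluator should later produce.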

The corresponding Tm definitions are:

def Tm.two : Tm :=
  .lam "f" (.lam "z" (.app (.loc "f") (.app (.loc "f") (.loc "z"))))

def Tm.zero : Tm :=
  .lam "f" (.lam "z" (.loc "z"))

def Tm.succ : Tm :=
  .lam "n" (.lam "f" (.lam "z"
    (.app (.loc "f") (.app (.app (.loc "n") (.loc "f")) (.loc "z")))))

def Tm.add : Tm :=
  .lam "n" (.lam "k" (.lam "f" (.lam "z"
    (.app (.app (.loc "n") (.loc "f"))
          (.app (.app (.loc "k") (.loc "f")) (.loc "z"))))))

They're already quite difficult to read. The downside of using Church numerals to test a normalization procedure is that it can be hard to know whether a surprising result is due to a mistake in the input or a mistake in the evaluator. It's worth devising a way to write ordinary numbers instead of Church numerals. One way to convert a natural number to a Church numeral is syntactic: the chain of applications of f in the body is constructed directly:

def toChurch (n : Nat) : Tm := .lam "f" (.lam "z" (mkBody n))
where
  mkBody : Nat → Tm
    | 0 => .loc "z"
    | k + 1 => .app (.loc "f") (mkBody k)

Another is more semantic, using the definitions of zero and successor:

def toChurch' : Nat → Tm
  | 0 => Tm.zero
  | n + 1 => .app Tm.succ (toChurch' n)

While toChurch' is easier to check with a quick glance, toChurch generates much more readable output:

#eval toChurch 3
Tm.lam
  { name := "f" }
  (Tm.lam
    { name := "z" }
    (Tm.app
      (Tm.loc { name := "f" })
      (Tm.app (Tm.loc { name := "f" }) (Tm.app (Tm.loc { name := "f" }) (Tm.loc { name := "z" })))))
#eval toChurch' 3
Tm.app
  (Tm.lam
    { name := "n" }
    (Tm.lam
      { name := "f" }
      (Tm.lam
        { name := "z" }
        (Tm.app
          (Tm.loc { name := "f" })
          (Tm.app (Tm.app (Tm.loc { name := "n" }) (Tm.loc { name := "f" })) (Tm.loc { name := "z" }))))))
  (Tm.app
    (Tm.lam
      { name := "n" }
      (Tm.lam
        { name := "f" }
        (Tm.lam
          { name := "z" }
          (Tm.app
            (Tm.loc { name := "f" })
            (Tm.app (Tm.app (Tm.loc { name := "n" }) (Tm.loc { name := "f" })) (Tm.loc { name := "z" }))))))
    (Tm.app
      (Tm.lam
        { name := "n" }
        (Tm.lam
          { name := "f" }
          (Tm.lam
            { name := "z" }
            (Tm.app
              (Tm.loc { name := "f" })
              (Tm.app (Tm.app (Tm.loc { name := "n" }) (Tm.loc { name := "f" })) (Tm.loc { name := "z" }))))))
      (Tm.lam { name := "f" } (Tm.lam { name := "z" } (Tm.loc { name := "z" })))))

Defining the Evaluator

A traditional McCarthy-style eval-apply evaluator, using closures as the values of functions, works well for NbE. The eval function takes an expression and an environment of values for the expression's free variables, yielding a value. It is defined by mutual recursion with apply, which takes the value of a function and the value of its argument and computes the result of the function call. eval calls apply when it has computed the values of the function and argument positions in an application, and apply calls eval in order to compute with the body of a function.

Environments

The first step in defining the evaluator is to define environments. Because global and local names are syntactically distinct, environments contain both:

/-- Local environments -/
def LocalEnv v := List (LName × v)

/-- Top-level environments -/
def TopEnv v := List (TName × v)

/-- Environments map free variables to their values -/
structure Env (v : Type u) where
  top : List (TName × v)
  loc : List (LName × v)
deriving Repr

Environments can be empty, and they can be extended with additional local or top-level values:

/-- The empty environment -/
def Env.empty : Env v := ⟨[], []⟩

/-- Add a local variable's value to an environment -/
def Env.addLocal (ρ : Env v) (x : LName) (val : v) : Env v :=
  {ρ with loc := (x, val) :: ρ.loc}

/-- Add a top-level constant's definition to an environment -/
def Env.addTop (ρ : Env v) (x : TName) (val : v) : Env v :=
  {ρ with top := (x, val) :: ρ.top}
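As a quick illustration (this snippet is not part of the development, and it assumes the string-to-name coercion that the earlier examples rely on), extending the empty environment adds a binding to the front of the appropriate list:

```lean
-- Extend the empty environment with one local binding, then
-- inspect the local half; the result is a one-element list
-- mapping the `LName` for "x" to 1.
#eval (Env.empty.addLocal "x" 1).loc
```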

The implementation of the language will need to be able to look up names in the environment. This can be made convenient by defining two implementations of GetElem, but the details of the implementation are not really the point of this post. If you're curious, please feel free to read the code, but otherwise, it's fine to skip it.

The details

Using two separate GetElem instances allows either form of name to be looked up, automatically selecting the correct environment.

def HasKey (k : α) : (ρ : List (α × β)) → Prop
  | [] => False
  | (k', _) :: ρ => k = k' ∨ HasKey k ρ

def decideHasKey [DecidableEq k] (x : k) : (ρ : List (k × v)) → Decidable (HasKey x ρ)
  | [] => isFalse fun h => nomatch h
  | (y, _) :: ρ' =>
    if h : x = y then
      isTrue (.inl h)
    else
      match decideHasKey x ρ' with
      | isTrue y => isTrue (.inr y)
      | isFalse n => isFalse fun h' =>
        match h' with
        | .inl heq => h heq
        | .inr hk => n hk

instance : Membership LName (LocalEnv v) where
  mem x ρ := HasKey x ρ

instance : Membership TName (TopEnv v) where
  mem x ρ := HasKey x ρ

instance {k : Type u} [DecidableRel (α := k) (· = ·)] {x : k} {ρ : List (k × v)} :
    Decidable (HasKey x ρ) :=
  decideHasKey _ _

instance {x : LName} {ρ : LocalEnv v} : Decidable (x ∈ ρ) := decideHasKey _ _

instance {x : TName} {ρ : TopEnv v} : Decidable (x ∈ ρ) := decideHasKey _ _

instance : GetElem (LocalEnv v) LName v (fun ρ x => HasKey x ρ) where
  getElem ρ x ok :=
    let rec go : (locals : List (LName × v)) → HasKey x locals → v
      | [], impossible => nomatch impossible
      | (y, v) :: locals', ok' =>
        if h : x = y then v
        else go locals' <|
          match ok' with
          | .inl heq => absurd heq h
          | .inr hk => hk
    go ρ ok

instance : GetElem (Env v) LName v (fun ρ x => HasKey x ρ.loc) where
  getElem ρ x ok :=
    let locals : LocalEnv v := ρ.loc
    locals[x]

instance : GetElem (TopEnv v) TName v (fun ρ x => HasKey x ρ) where
  getElem ρ x ok :=
    let rec go : (tops : List (TName × v)) → HasKey x tops → v
      | [], impossible => nomatch impossible
      | (y, v) :: tops', ok' =>
        if h : x = y then v
        else go tops' <|
          match ok' with
          | .inl heq => absurd heq h
          | .inr hk => hk
    go ρ ok

instance : GetElem (Env v) TName v (fun ρ x => HasKey x ρ.top) where
  getElem ρ x ok :=
    let tops : TopEnv v := ρ.top
    tops[x]

Values

The spine of accumulated arguments to an unknown function is best represented by a snoc-list, as it grows to the right when the value is applied to further arguments. In order to get the desired performance characteristics, spines should be lazy, allowing definitional-equality comparison to abort early without fully evaluating all arguments. Thunk v is a type that wraps a nullary function that yields a v, with special support from the Lean compiler so that the value is cached rather than recomputed. Getting the value of a thunk is called forcing it.

/-- A "spine" of values accumulated as arguments to an unknown function -/
inductive Spine (v : Type u) where
  | empty
  | push : Spine v → Thunk v → Spine v

def Spine.foldl (f : β → Thunk v → β) (start : β) (s : Spine v) : β :=
  match s with
  | .empty => start
  | .push s' arg => f (s'.foldl f start) arg
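The Thunk behavior described above can be seen in a tiny standalone example (not part of the evaluator):

```lean
-- `Thunk.mk` delays a computation; `.get` forces it. The Lean
-- compiler caches the result, so repeated forcing of the same
-- thunk does not recompute.
def delayed : Thunk Nat := Thunk.mk fun () => 2 + 2

#eval delayed.get  -- 4
```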

As described, there are three values: closures (Val.lam) retain an argument name, an environment for the function's body, and the body itself; applications of local variables (Val.loc) pair a local variable with a spine of lazy argument values; and applications of global constants (Val.top) combine the constant's name with a spine of lazy values as well as a lazy computation that actually does the work of applying the top-level constant's value to its arguments.

inductive Val where
  | lam : LName → Env Val → Tm → Val
  | loc : LName → Spine Val → Val
  | top : TName → Spine Val → Thunk Val → Val

instance : Inhabited Val where
  default := .loc "fake" .empty

The Evaluator

The evaluator must be partial, because this is the untyped lambda calculus! The only difference between this evaluator and an ordinary eval-apply evaluator is that apply must account for the fact that functions might be represented by three different kinds of values. Lean's coercion mechanism takes care of wrapping expressions in lazy positions in thunks automatically.

mutual
  partial def eval (ρ : Env Val) : Tm → Val
    | .lam x b => .lam x ρ b
    | .let x e b => eval (ρ.addLocal x (eval ρ e)) b
    | .app e1 e2 => (eval ρ e1).apply (eval ρ e2)
    | .top x => .top x .empty ρ[x]!
    | .loc x => ρ[x]!

  partial def Val.apply : Val → Thunk Val → Val
    | .lam x ρ b, arg => eval (ρ.addLocal x arg.get) b
    | .loc x s, arg => .loc x (s.push arg.get)
    | .top x s v, arg => .top x (s.push arg) (v.get.apply arg)
end

From Evaluation to Normal Forms

Normalizing terms consists of first evaluating them, then converting the value back to syntax. Because the domain of values has been carefully chosen to exhibit all of the equalities desired for the term language, this process must map terms to a canonical member of the equivalence class induced by the desired equalities - that is, the normal form! But the whole point of glued evaluation is to control unfolding, so the quotation process here must take an argument that determines whether to unfold top-level constants (thus computing normal forms) or to leave them as names (thus computing smaller terms in unifiers).

inductive UnfoldTop where
  | unfold
  | noUnfold
deriving Repr

When reading back nested functions, it's important to ensure that the name of the bound variable has not already been used by a surrounding function. Otherwise, occurrences of the variable could end up pointing at the wrong binding site. Because local and global names are separate syntactic categories in this language, and functions bind only local variables, only the locals need to be checked for conflicts:

partial def LocalEnv.fresh (ρ : LocalEnv v) (x : LName) : LName :=
  if x ∈ ρ then
    ρ.fresh {x with name := x.name ++ "'"}
  else x

def Env.fresh (ρ : Env v) (x : LName) : LName :=
  let locals : LocalEnv v := ρ.loc
  locals.fresh x
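For instance (again assuming the string-to-LName coercion used by the earlier examples), freshening skips every name that is already bound:

```lean
-- "f" and "f'" are both taken, so the next candidate "f''" is chosen.
#eval ([("f", ()), ("f'", ())] : LocalEnv Unit).fresh "f"
-- { name := "f''" }
```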

The actual process of quotation replaces each value with the corresponding expression:

  • Closures represent functions, and can be quoted to lambda-expressions by inventing a fresh name for the function's argument. The body of this function is found by quoting the result of applying the closure value to a value that represents the argument.

  • Local variable values represent applications of unknown local functions. They can be quoted by constructing an application expression that applies the variable to the results of quoting each value in the spine.

  • Global constant values can be quoted in one of two ways:

    1. If normal forms are being computed, the thunk that represents the value of the application is forced, and the resulting value is quoted

    2. If smaller results are being computed, then the value is quoted as if it were a local variable application, quoting each value in the spine and constructing an application expression

mutual
  partial def Val.quote (ρ : LocalEnv Val) (unfold : UnfoldTop) : Val → Tm
    | .loc x sp => sp.quote ρ unfold (.loc x)
    | .top x sp v =>
      match unfold with
      | .unfold => v.get.quote ρ unfold
      | .noUnfold => sp.quote ρ unfold (.top x)
    | f@(.lam x ..) =>
      let x' := ρ.fresh x
      let v : Val := .loc x' .empty
      .lam x' ((f.apply v).quote ((x', v) :: ρ) unfold)

  partial def Spine.quote (ρ : LocalEnv Val) (unfold : UnfoldTop)
      (head : Tm) (args : Spine Val) : Tm :=
    args.foldl (mkApp ρ unfold) head

  partial def mkApp (ρ : LocalEnv Val) (unfold : UnfoldTop)
      (quoted : Tm) (arg : Thunk Val) : Tm :=
    .app quoted (arg.get.quote ρ unfold)
end

Only a bit of glue code is needed to run the normalization procedure. Given a set of top-level constant definitions represented by an array of name-expression pairs, evalTop constructs a top-level environment by evaluating each declaration's expression in the context of the prior expressions, and then evaluates a result expression in the resulting top-level environment. nfTop does the same, but additionally quotes the resulting value, yielding a normal form or a partially-evaluated term, as requested.

/-- Evaluate a term under some top-level declarations -/
def evalTop (topDefs : Array (TName × Tm)) (program : Tm) : Val :=
  let ρ : Env Val := topDefs.foldl (fun ρ (x, e) => ρ.addTop x (eval ρ e)) .empty
  eval ρ program

/-- Normalize or semi-normalize a term under some top-level declarations -/
def nfTop (unfold : UnfoldTop) (topDefs : Array (TName × Tm)) (program : Tm) : Tm :=
  (evalTop topDefs program).quote [] unfold

The array examples contains the prior examples as declarations suitable for nfTop:

def examples : Array (TName × Tm) := #[
  ("zero", Tm.zero),
  ("two", Tm.two),
  ("add", Tm.add)
]

Squinting a bit, two plus two is indeed four:

#eval nfTop .unfold examples (.app (.app (.top "add") (.top "two")) (.top "two"))
Tm.lam
  { name := "f" }
  (Tm.lam
    { name := "z" }
    (Tm.app
      (Tm.loc { name := "f" })
      (Tm.app
        (Tm.loc { name := "f" })
        (Tm.app (Tm.loc { name := "f" }) (Tm.app (Tm.loc { name := "f" }) (Tm.loc { name := "z" }))))))

Because add and two are top-level definitions, asking nfTop not to unfold top-level definitions results in the original term:

#eval nfTop .noUnfold examples (.app (.app (.top "add") (.top "two")) (.top "two"))
Tm.app (Tm.app (Tm.top { name := "add" }) (Tm.top { name := "two" })) (Tm.top { name := "two" })

Inspecting the normal forms of the results of toChurch and toChurch' shows that they agree:

#eval nfTop .unfold examples (toChurch 3)
Tm.lam
  { name := "f" }
  (Tm.lam
    { name := "z" }
    (Tm.app
      (Tm.loc { name := "f" })
      (Tm.app (Tm.loc { name := "f" }) (Tm.app (Tm.loc { name := "f" }) (Tm.loc { name := "z" })))))
#eval nfTop .unfold examples (toChurch' 3)
Tm.lam
  { name := "f" }
  (Tm.lam
    { name := "z" }
    (Tm.app
      (Tm.loc { name := "f" })
      (Tm.app (Tm.loc { name := "f" }) (Tm.app (Tm.loc { name := "f" }) (Tm.loc { name := "z" })))))

Indeed, they are equal:

#eval nfTop .unfold examples (toChurch 3) == nfTop .unfold examples (toChurch' 3)
true

The glued evaluator seems to be functioning correctly. But constructing input terms and understanding its output is still tedious and error-prone due to the verbosity of the ASTs. The rest of this post concerns techniques for making the prototype easier to interact with.

Improving the Output

The first step in making the prototype easier to use is to make the output easier to read. The simplest way to do this is to write an improved instance of Repr Tm. A Repr instance uses Lean's built-in pretty-printing library to render a term. Each pretty printer takes a value and an incoming precedence as arguments; the value is used to determine what to display, and the precedence to determine whether parentheses are needed. If the term's own precedence is lower than the precedence demanded by the context in which it's used, then it must be wrapped in parentheses.

open Std (Format)
open Lean.Parser (maxPrec argPrec minPrec)

def argsAndBody : Tm → List LName × Tm
  | .lam x body =>
    let (args, body') := argsAndBody body
    (x :: args, body')
  | other => ([], other)

partial def Tm.format (prec : Nat) : Tm → Format
  | .lam x body =>
    let (moreArgs, body') := argsAndBody body
    let args := x :: moreArgs
    .group <|
      "fun " ++ (Format.text " ").joinSep (args.map (Format.text ·.name)) ++
      " =>" ++ .nest 2 (.line ++ body'.format minPrec)
  | .let x e body =>
    .group <|
      "let" ++ " " ++
      .group (.nest 2 (x.name ++ " :=" ++ .line ++ e.format maxPrec)) ++ .line ++
      "in" ++ .line ++
      .group (.nest 2 <| body.format minPrec)
  | .app e1 e2 =>
    fmtApp <| e1.format argPrec ++ .line ++ e2.format maxPrec
  | .top x => x.name
  | .loc x => x.name
where
  fmtApp (elts : Format) : Format :=
    Repr.addAppParen (.group (.nest 2 elts)) prec

instance : Repr Tm where
  reprPrec tm _ :=
    .group <| .nest 2 ("{{{" ++ .line ++ tm.format minPrec) ++ .line ++ "}}}"

The output is wrapped in {{{ and }}} because this syntax is used later in this post to input embedded language terms. With the improved Repr Tm instance, the result is now much more clearly the Church numeral for four:

#eval nfTop .unfold examples (.app (.app (.top "add") (.top "two")) (.top "two"))
{{{ fun f z => f (f (f (f z))) }}}

Similarly, the version without unfolding is easier to read:

#eval nfTop .noUnfold examples (.app (.app (.top "add") (.top "two")) (.top "two"))
{{{ add two two }}}

With the output now easier to understand, it's time to see what can be done to make better input.

Improving the Input

There are two basic strategies that can be used to make it easier to write down a data structure: coercions and syntax extensions. Coercions allow Lean to automatically convert expressions with one type into another, and a coercion from String to TName or LName could make it a bit shorter to write names as string literals. However, coercions are mostly useful for removing noisy explicit calls to conversion operators, and that isn't the main contributor to Tm being noisy to write. For more difficult cases, syntax extensions allow the Lean parser to be extended essentially arbitrarily, allowing much greater notational flexibility. Because the desired surface syntax is so far from the Tm datatype, this is the right tool for the job.
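For comparison, the coercion approach would look something like this (a sketch; instances along these lines are presumably what let the earlier examples write string literals where TName and LName values are expected):

```lean
-- Sketch: coerce string literals to the two name types.
instance : Coe String TName where
  coe s := ⟨s⟩

instance : Coe String LName where
  coe s := ⟨s⟩

-- With these in scope, `.lam "f" (.loc "f")` elaborates without
-- explicit constructor calls, but the AST noise remains.
```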

The first step is to define a new syntactic category, corresponding to adding a new nonterminal to Lean's grammar. The syntactic category is called embedded, because it is used to represent terms in the embedded language:

/-- Embedded language expression -/ declare_syntax_cat embedded

Each production in the grammar of Tm has a parser rule, except that top-level and local names are not distinguished: both are written as identifiers, so a single rule covers them. The :max and :arg annotations on syntax rules and subterms specify the precedence and associativity of the operators.

/-- A variable reference -/
syntax:max ident : embedded

/-- Grouping of expressions -/
syntax "(" embedded ")" : embedded

/-- Function application -/
syntax:arg embedded:arg embedded:max : embedded

/-- A function -/
syntax:max "fun" ident+ " => " embedded:arg : embedded

/-- A local definition -/
syntax:max "let" ident " := " embedded "in" embedded : embedded

In order to write functions like toChurch, it is also convenient to be able to embed Lean expressions that compute a Tm, making the expression syntax into a form of quasiquotation, just as $ can be used to escape Lean's quotations.

syntax:max "~" term:max : embedded

Similarly, embedded needs to actually be embedded into Lean terms—otherwise, there's no way for the parser to reach it:

syntax "{{{ " embedded " }}}" : term

The current syntax doesn't provide a convenient way to define top-level constants. Keeping with the theme of imitating Lean's own syntax, def can be used for embedded language definitions as well. This also demonstrates the addition of whitespace-sensitivity to the syntax: withPosition saves a parser position, and colGt fails if the current column is not greater than the column of the most recent saved position. In other words, these definitions' bodies must be at least as indented as the def keywords.

/-- Embedded-language declarations -/
declare_syntax_cat decl

/-- A top-level definition -/
syntax withPosition("def" ident " := " colGt embedded) : decl

Finally, the #norm_embedded command provides a syntax for normalizing an expression in the context of a sequence of declarations. The optional parameter unfoldTop allows users to control whether full or partial normalization should be used.

/-- Normalize an expression in the context of some top-level definitions -/
syntax withPosition(
    "#norm_embedded" group("(" &"unfoldTop" ":=" ident ")")?
    (colGt decl)*
    colGt (withPosition("=>" colGt embedded))) : command

Lean's syntax has been extended, but there's still no semantics for the new syntax. Attempting to use it only results in an error:

example := {{{ fun x => x }}}
elaboration function for '«term{{{_}}}»' has not been implemented
  {{{ fun x => x }}}

Lean needs to be given instructions for translating this new syntax into something it understands.

First Attempt: Reusing Lean's Binders

Because the embedded language contains binding forms, it's important to find a way to associate the appropriate TName and LName values with the names that the user writes. In this first attempt, the code that implements embedded-language binding forms will emit Lean let-expressions that bind the surface name to the underlying value. To keep the multiple implementations separate, each is placed in its own namespace so that scoped rules won't interfere with each other:

namespace LeanBinders

The new syntax is given meaning by defining it as a macro. Each case of the macro translates a quotation of the embedded language to Lean code that constructs the appropriate AST. Whenever a variable is bound in the embedded language, it is implemented by a let-binding in the generated code. The placement of this let determines the variable's scope, so it's important that the case for let places the expansion of e outside the scope of the binding for $x.

open Lean in
scoped macro_rules
  | `({{{ ( $e ) }}}) => `({{{ $e }}})
  | `({{{ $e1 $e2 }}}) => `(Tm.app {{{ $e1 }}} {{{ $e2 }}})
  | `({{{ $x:ident }}}) => `($x)
  | `({{{ fun $x => $body }}}) => do
    -- Rather than just using `x`, create a new string
    -- without its source positions to avoid misleading
    -- hovers
    let xName : StrLit := quote (toString x.getId)
    `(let $x := Tm.loc $xName
      Tm.lam $xName {{{ $body }}})
  | `({{{ fun $x $y $xs* => $body }}}) => do
    let xName : StrLit := quote (toString x.getId)
    `(let $x := Tm.loc $xName
      Tm.lam $xName {{{ fun $y $xs* => $body }}})
  | `({{{ let $x := $e in $body }}}) => do
    let xName : StrLit := quote (toString x.getId)
    `(let e := {{{ $e }}}
      let $x := Tm.loc $xName
      Tm.let $xName e {{{ $body }}})
  | `({{{ ~$e }}}) => pure e
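To see the encoding concretely, here is a sketch of what a small quotation expands to under these rules (the exact output of the macro expander may differ in details such as hygiene annotations):

```lean
-- Sketch of the expansion (assuming the `Tm` type from earlier in the post):
--
--   {{{ fun x => x }}}
--
-- first matches the single-binder `fun` case, producing
--
--   let x := Tm.loc "x"
--   Tm.lam "x" {{{ x }}}
--
-- and the remaining quotation expands via the identifier case to `x`,
-- the Lean `let`-bound variable, yielding
--
--   let x := Tm.loc "x"
--   Tm.lam "x" x
```

The `let` provides the scope within which the embedded variable can be mentioned, and the `Tm.lam` constructor records only the string form of the name.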

With this macro in place, toChurch can be implemented using quasiquotations, and the bindings of f and z can cross the boundaries of antiquotations and be used in nested quotations:

def toChurch (n : Nat) : Tm := {{{
  fun f z => ~(let rec mkBody
                 | 0 => {{{ z }}}
                 | k + 1 => {{{ f ~(mkBody k) }}}
               mkBody n)
}}}

#eval toChurch 5
{{{ fun f z => f (f (f (f (f z)))) }}}
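Other inputs behave analogously; assuming the same printing of Tm values used above, smaller Church numerals come out as expected:

```lean
#eval toChurch 0  -- should print {{{ fun f z => z }}}
#eval toChurch 2  -- should print {{{ fun f z => f (f z) }}}
```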

This strategy has a number of nice properties:

  • Because embedded language binding forms are represented by native Lean binding forms, features like occurrence highlighting and go-to-definition work properly for quotations with no additional work.

  • The scoping rules of the embedded language can be read directly from the implementation.

  • Mistakes in variable names are caught while writing the program, rather than while running it.

  • Macros are easier to write than other kinds of metaprograms.

Unfortunately, there is a key downside to this approach that undermines its usability in practice. Even though Lean and the embedded language conceptually have completely separate namespaces, Lean binders intrude into embedded language terms, leading to error messages that leak details about the encoding. One way that this can go wrong is when Lean binders are accidentally referred to from the embedded language:

example (n : Nat) : Tm := {{{ fun xyz => n }}}
application type mismatch
  Tm.lam { name := "xyz" } n
argument
  n
has type
  Nat : Type
but is expected to have type
  Tm : Type

Similarly, embedded language bindings leak into Lean, shadowing Lean's functions:

example (n : Nat) : Tm := {{{ fun n => ~(toChurch n) }}}
application type mismatch
  toChurch n
argument
  n
has type
  Tm : Type
but is expected to have type
  Nat : Type

Declarations

To implement #norm_embedded as a macro, a bit of cleverness is needed. This is because each of the declarations should be in scope for the next, and all of them should be in scope for the final expression. This means that either the macro will need to be complex, generating the necessary code in one fell swoop, or it will need to track scopes carefully while expanding step by step. Usually, the latter is easier to read and maintain. One way to do this is to create a term version of #norm_embedded that includes a position for the environment that is being constructed:

scoped syntax "#norm_embedded" group("(" &"unfoldTop" ":=" ident ")") decl* "⟨" ident "⟩=>" term : term

This new syntax extends terms rather than commands so that the strategy of using local let-bindings for embedded-language bindings is still applicable.

The command macro has two cases:

  1. If no unfolding strategy was specified, the default is added.

  2. Given an unfolding strategy, it emits a #eval command that contains the term version of #norm_embedded. The #eval keyword is given the source location of the original arrow so that the evaluation result is associated with the arrow in the editor.

open Lean in
scoped macro_rules
  | `(#norm_embedded $decls* =>%$arr $e:embedded) =>
    `(#norm_embedded ( unfoldTop := UnfoldTop.unfold ) $decls* =>%$arr $e)
  | `(#norm_embedded ( unfoldTop := $unfold ) $decls* =>%$arr $e:embedded) =>
    `(#eval%$arr
      let ρ : Array (TName × Tm) := #[]
      #norm_embedded ( unfoldTop := $unfold ) $decls* ⟨ ρ ⟩=> {{{ $e }}})
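As a sketch of how these two cases interact, a command that omits the unfolding strategy expands in two steps (the intermediate forms are written as comments, with positions and the exact environment syntax elided; the name id is made up for this illustration):

```lean
-- #norm_embedded
--   def id := fun x => x
--   => id
--
-- Step 1: the default unfolding strategy is inserted:
--
-- #norm_embedded ( unfoldTop := UnfoldTop.unfold )
--   def id := fun x => x
--   => id
--
-- Step 2: the command becomes a term-level #norm_embedded under #eval,
-- seeded with an empty environment:
--
-- #eval
--   let ρ : Array (TName × Tm) := #[]
--   «term-level #norm_embedded with the same declaration, ρ as the
--    environment, and {{{ id }}} as the expression»
```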

For the term macro, there are also two possibilities:

  1. If there are no declarations remaining, it expands to a call to nfTop with the appropriate unfolding flag, environment, and expression.

  2. If there are declarations, the environment is extended with the first one. Then, the declaration's identifier is let-bound around a #norm_embedded term that will generate code for the remaining declarations.

The ordering of let-bindings is important. In this language, declarations are not recursive, so the environment must be extended outside the scope of the variable.

open Lean in
scoped macro_rules
  | `(#norm_embedded ( unfoldTop := $unfold ) ⟨ $ρ ⟩=> $e:term) =>
    `(nfTop (open UnfoldTop in $unfold) $ρ $e)
  | `(#norm_embedded ( unfoldTop := $unfold ) def $x:ident := $e1:embedded $ds:decl* ⟨ $ρ ⟩=>%$arr $e:term) => do
    let xName : StrLit := quote (toString x.getId)
    `(let ρ' := $(ρ).push ($xName, {{{ $e1 }}})
      let $x : Tm := Tm.top $xName
      #norm_embedded ( unfoldTop := $unfold ) $ds* ⟨ ρ' ⟩=> $e)

#norm_embedded
  def one := fun f z => f z
  def two := fun f z => f (f z)
  def plus := fun n k f z => n f (k f z)
  => plus one two
{{{ fun f z => f (f (f z)) }}}
end LeanBinders

Second Attempt: Distinguishing Scopes

namespace ScopedBinders

open Lean

Lean's macros are hygienic: this means that, by default, bindings introduced by a macro expansion will neither capture nor be captured by bindings introduced by other steps of macro expansion. Taking a cue from Matthew Flatt's 2016 paper, Lean distinguishes identifiers introduced at different steps of macro expansion using scopes, which are unique tokens generated for each step. The scope token is added to identifiers in quotations, so binding forms and uses introduced by the macro will refer to each other, but not to anything else. Lean's implementation of hygienic macros is described in detail in a paper by my colleagues Sebastian Ullrich and Leo de Moura.
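The effect of hygiene can be seen in a small, self-contained example that is independent of the embedded language (the macro name twoTimes is made up for this illustration):

```lean
-- The macro introduces its own `x`; hygiene gives it a distinct scope.
macro "twoTimes" e:term : term => `(let x := 2; x * $e)

-- The user's `x` is not captured by the macro's `x`: without hygiene,
-- `twoTimes x` would expand to `let x := 2; x * x` and evaluate to 4.
#eval let x := 10; twoTimes x  -- 20
```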

Even though the hygiene system was designed to prevent bindings introduced by macros from interfering with each other or with other bindings, it can be repurposed to prevent interference between bindings in the embedded language and bindings in Lean. With a bit of trickery, it is possible to obtain a unique scope token that can be added to all embedded language bindings and uses. Adding it to the bindings prevents Lean terms from accessing them, and adding it to the use sites causes them to avoid Lean's bindings and gives them access to the correct bindings.

Each step of macro expansion needs its own scope. To implement this, Lean provides a global supply of macro scopes for each module, and one of them is distinguished as the "current" scope. Between steps of expansion, the current scope is replaced by a fresh one from the supply.

The macro-scope API provides getCurrMacroScope, which returns the current macro scope, and withFreshMacroScope, which runs a monadic action with the current scope set to a unique, fresh scope. These tools can be combined to create a Lean command that defines a top-level constant that refers to a unique scope:

macro "#define_scope" x:ident : command => do
  let sc ← withFreshMacroScope getCurrMacroScope
  let thisMod ← getMainModule
  `(def $x : Name × MacroScope := ($(quote thisMod), $(quote sc)))

Running this command defines specialEmbeddedScope to be the desired scope along with the module's name:

#define_scope specialEmbeddedScope

The next step is to write a function that adds this scope to a name:

def _root_.Lean.Name.addEmbeddedScope (n : Name) : Name :=
  let (mod, sc) := specialEmbeddedScope
  Lean.addMacroScope mod n sc
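As a quick sanity check (a sketch; the name n here is arbitrary), a name carrying the embedded scope is distinct from the plain name, so it cannot collide with user-written identifiers:

```lean
-- The scoped name has extra macro-scope components, so it differs
-- from the plain name:
#eval (`n).addEmbeddedScope == `n  -- false
```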

Because macro scopes are usually added by quotations, there's no particularly easy way to add the scope to an identifier (that is, a piece of syntax that represents a name occurrence). In Lean, the datatype of syntax is essentially an encoding of arbitrary trees, which allows the same type to be used for the results of parsers even when they're extensible with arbitrary new productions. The wrapper TSyntax tracks the syntactic category of a piece of syntax at the type level, which can prevent accidental confusion of syntax and also enables the use of the coercion machinery to automate the translation from one category to another. TSyntax postdates the paper on the macro system, but it's described in detail in section 4.4 of Sebastian Ullrich's thesis. Here, the parsed name inside identifier syntax is enriched with the embedded language's designated scope:

def _root_.Lean.TSyntax.addEmbeddedScope (x : TSyntax `ident) : TSyntax `ident :=
  match x with
  | ⟨.ident info str n _⟩ => ⟨.ident info str n.addEmbeddedScope []⟩
  | other => other

The term quasiquotation macro that uses this designated scope is identical to the previous iteration, except that it calls addEmbeddedScope at each occurrence of an embedded-language identifier.

open Lean in
scoped macro_rules
  | `({{{ ( $e ) }}}) => `({{{ $e }}})
  | `({{{ $e1 $e2 }}}) => `(Tm.app {{{ $e1 }}} {{{ $e2 }}})
  | `({{{ $x:ident }}}) => `($(x.addEmbeddedScope))
  | `({{{ fun $x => $body }}}) => do
    let xName : StrLit := quote (toString x.getId)
    `(let $(x.addEmbeddedScope) := Tm.loc $xName
      Tm.lam $xName {{{ $body }}})
  | `({{{ fun $x $y $xs* => $body }}}) => do
    let xName : StrLit := quote (toString x.getId)
    `(let $(x.addEmbeddedScope) := Tm.loc $xName
      Tm.lam $xName {{{ fun $y $xs* => $body }}})
  | `({{{ let $x := $e in $body }}}) => do
    let xName : StrLit := quote (toString x.getId)
    `(let e := {{{ $e }}}
      let $(x.addEmbeddedScope) := Tm.loc $xName
      Tm.let $xName e {{{ $body }}})
  | `({{{ ~$e }}}) => pure e

Now, implementation details no longer leak through variable capture. The embedded language term correctly fails to refer to the surrounding Lean binding of n:

example (n : Nat) : Tm := {{{ fun xyz => n }}}
unknown identifier 'n✝'

Furthermore, the embedded language's binding of n no longer captures Lean's binding, so this example now works:

example (n : Nat) : Tm := {{{ fun n => ~(toChurch n) }}}

The implementation of the normalization test command is identical to the version without separate scopes, with the exception of the addition of the embedded language scope to identifier names:

open Lean in
scoped macro_rules
  | `(#norm_embedded ( unfoldTop := $unfold ) ⟨ $ρ ⟩=> $e:term) =>
    `(nfTop (open UnfoldTop in $unfold) $ρ $e)
  | `(#norm_embedded ( unfoldTop := $unfold ) def $x:ident := $e1:embedded $ds:decl* ⟨ $ρ ⟩=>%$arr $e:term) => do
    let xName : StrLit := quote (toString x.getId)
    `(let ρ' := $(ρ).push ($xName, {{{ $e1 }}})
      let $(x.addEmbeddedScope) : Tm := Tm.top $xName
      #norm_embedded ( unfoldTop := $unfold ) $ds* ⟨ ρ' ⟩=> $e)

This version similarly keeps embedded language bindings separate from Lean's bindings:

def n := 5

#norm_embedded
  def zero := fun f z => z
  def n := ~(toChurch n)
  def n' := ~(toChurch n)
  def add := fun n k f z => n f (k f z)
  => add n n

{{{ fun f z => f (f (f (f (f (f (f (f (f (f z))))))))) }}}

end ScopedBinders

Next Post

The techniques demonstrated in this post are relatively easy to employ, and they provide a nice user interface for embedded languages. Reusing Lean's binders to represent embedded language binders means that features like occurrence highlighting and rename refactorings automatically work, with no further effort required. After a quite small amount of work, the formal artifact has become a usable prototype, and it is practical to use Lean to translate higher-level features (such as numbers) to lower-level encodings that are suitable for the formalism. For many uses, this technique is sufficient.

However, this implementation is beginning to reach the limits of what can be easily written as a macro. Maintaining the current environment while expanding the list of declarations required an explicit representation of the state. While there is a long history of encoding quite complicated control flow using continuation-passing-style macros in Scheme, the code tends to become difficult to read and maintain over time. At the same time, most languages that are interesting to programming language researchers will require more information about identifiers than just whether they're in scope, such as their type, and there's no easy way for the macro to check these features.

An upcoming post will demonstrate some techniques for associating more information with bound variables, moving from a macro-based implementation to an elaborator, which uses the compiler's API to translate terms directly to Lean's core type theory.