Pikelet đŸ„ž

Pikelet is a small dependently typed language. It doesn't do many interesting things yet, but hopefully that will change in the future!

A small taste

Definitions:

let
    id : (a : Type) -> a -> a;
    id a x = x;

    const : (a b : Type) -> a -> b -> a;
    const a b x y = x;
in
    record {
        id = id;
        const = const;
    }

Interactive REPL:

$ cargo run repl
    ____  _ __        __     __
   / __ \(_) /_____  / /__  / /_
  / /_/ / / //_/ _ \/ / _ \/ __/    Version 0.1.0
 / ____/ / ,< /  __/ /  __/ /_      https://github.com/pikelet-lang/pikelet
/_/   /_/_/|_|\___/_/\___/\__/      :? for help

Pikelet> (\(a : Type) (x : a) => x) String "hello"
"hello" : String
Pikelet> :t Type
Type^1
Pikelet> 1 : S16
1 : S16
Pikelet>

What is a Pikelet?

A pikelet is an odd sort of small (often pre-made) pancake found in Australia and New Zealand. They are commonly sent in school lunches, spread with jam and butter. Handily, the name also includes 'pi' and 'let' as substrings! 😅

Installation

Pikelet is written in Rust and therefore needs to be compiled with Cargo, because we don't yet offer prebuilt binaries. If you haven't already installed Rust, please install it now!
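
If you're installing Rust for the first time, the usual route is via rustup. The command below is the standard one from https://rustup.rs (check there for the current instructions for your platform):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh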

Cloning the source from GitHub

We've not yet published Pikelet on crates.io, so you'll first need to clone the repository using git:

git clone https://github.com/pikelet-lang/pikelet.git
cd pikelet

Running the REPL

After cloning, you can now run the REPL using Cargo:

cargo run repl

You will now need to wait for Cargo to download and build the dependencies, but sooner or later the REPL will be ready for you to interact with!

Language

Comments

Line comments are preceded by a double dash:

-- this is a comment!

Primitive types and their literals

Pikelet has a number of primitive types:

Type     Literals
------   ------------------------------
Bool     true, false
String   "hello there!"
Char     'a', 'b', ..., '\n', '\t', ...
U8       1, 2, 3, ...
U16      1, 2, 3, ...
U32      1, 2, 3, ...
U64      1, 2, 3, ...
S8       1, 2, 3, ...
S16      1, 2, 3, ...
S32      1, 2, 3, ...
S64      1, 2, 3, ...
F32      1, 2, 3, ..., 0.0, 1.0, ...
F64      1, 2, 3, ..., 0.0, 1.0, ...

Note: You can't do much with these primitive types yet. In the future we will add some primitive functions to allow you to manipulate them.

Type annotations

As you may have noticed above, a number of the primitive types share the same literal representations. Pikelet will try to predictably infer the type of a literal, but if it fails to do so you will get an error. In that case you can use the type annotation operator (:) to specify the intended type:

Pikelet> 1
error: ambiguous integer literal
Pikelet> 1 : S32
1 : S32
Pikelet> 1 : F32
1 : F32
Pikelet> 1.0 : F32
1.0 : F32
Pikelet> 1.1 : U64
error: found a floating point literal, but expected a type `U64`
- <repl>:1:1
1 | 1.1 : U64
  | ^^^ the literal

Identifiers

TODO
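
In the meantime, the identifiers used throughout this document give a rough feel for what is accepted. Note that this is just a summary drawn from the examples here, not a formal lexical specification:

id  x  const          -- value-level names are lowercase
Point2d  DArray       -- type-level names are capitalized by convention
bobs-name             -- hyphens may appear inside names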

Keywords

Keyword   Documentation
-------   -------------------------------------
as        internal field names
case      case expressions
else      if-then-else expressions
extern
if        if-then-else expressions
import
in        bindings
let       bindings
record    record values
Record    record types
then      if-then-else expressions
Type      polymorphic functions, types of types
where

Conditionals

If-then-else expressions

if expressions take an expression that evaluates to a Bool (the condition), and two expressions of the same type (the consequent and the alternative). If the condition evaluates to true, then the consequent will be evaluated and returned, otherwise the alternative will be evaluated and returned.

Pikelet> if true then "hello!" else "goodbye!"
"hello!" : String
Pikelet> if false then "hello!" else "goodbye!"
"goodbye!" : String

Case expressions

Pikelet supports case expressions on strings and numbers:

case value {
    "hello" => "goodbye";
    "goodbye" => "hello";
    value => value; -- matches all strings
}
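
Matching on numbers looks much the same. Here is a sketch in the same style, assuming the scrutinee's type can be determined from the surrounding context:

case value {
    0 => "zero";
    1 => "one";
    n => "lots"; -- matches all other numbers
}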

Note that we don't (yet) check that the patterns provided cover all possible cases, which leads to the following embarrassing error:

Pikelet> case "hello" { "hi" => "oh dear" }
error: internal compiler error: no patterns matched the given expression

In the future we plan to fix this, add support for matching on booleans, and also support more complex patterns (e.g. for records).

Functions

Simply typed functions

Here are some simple functions and their types:

Pikelet> :t \x : S32 => x
S32 -> S32
Pikelet> :t \x : String => x
String -> String
Pikelet> :t \x : Char => x
Char -> Char

Note that all of these functions follow the same pattern - they are identity functions! This means that if you pass a value to them, they'll return the same thing without alteration!

Pikelet> (\x : S32 => x) 42
42 : S32
Pikelet> (\x : String => x) "hi"
"hi" : String
Pikelet> (\x : Char => x) 'b'
'b' : Char

Polymorphic functions

Alas, we can't reuse one of these identity functions with other, incompatible types:

Pikelet> (\x : S32 => x) 4.0
error: found a floating point literal, but expected a type `S32`
- <repl>:1:17
1 | (\x : S32 => x) 4.0
  |                 ^^^ the literal

Let's make this identity function polymorphic by adding a parameter for the type of the argument:

Pikelet> :t \(a : Type) (x : a) => x
(a : Type) -> a -> a

We now have a polymorphic identity function! We can specialize this function by applying a type to it:

Pikelet> (\(a : Type) (x : a) => x) String "hello"
"hello" : String
Pikelet> (\(a : Type) (x : a) => x) S32 1
1 : S32

Syntactic sugar for functions

In Pikelet, all functions take a single argument - in order to pass multiple arguments we use currying. The following functions are equivalent:

\(a : Type) (x : a) => x
\(a : Type) => \(x : a) => x

Non-dependent function types can be expressed without explicit parameter names. For example, the following function types are equivalent:

(a : Type) (x : a) -> a
(a : Type) -> (x : a) -> a
(a : Type) -> a -> a
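
For example, asking the REPL for the type of the fully expanded identity function should give back the same polymorphic type we saw earlier:

Pikelet> :t \(a : Type) => \(x : a) => x
(a : Type) -> a -> a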

Records

Record values and record types

You can group together multiple values by using records:

Pikelet> record { x = 3.0 : F32; y = 3.0 : F32 }
record { x = 3; y = 3 } : Record { x : F32; y : F32 }

Take note of the following:

  • record values use the lower case record keyword
  • record types use the upper case Record keyword
  • we have to annotate ambiguous field values

We can make a new definition for point types:

Point2d = Record {
  x : F32;
  y : F32;
};

You can then use this type to make it easier to define a point record:

Pikelet> record { x = 3.0; y = 3.0 } : Point2d
record { x = 3; y = 3 } : Record { x : F32; y : F32 }

Note that we no longer need to annotate each field! Pikelet was able to pick up the type of each field from the type definition during type checking. You can read more about Pikelet's type inference on the type inference page.

Field lookups

You can access the value associated with a field name by using the dot operator:

Pikelet> record { name = "Jane" }.name
"Jane" : String

Dependent record types

Field types can depend on data from previous fields. Here we turn a fixed-length array into a dynamically sized array, using the earlier len field in the annotation of the data field:

DArray (a : Type) = Record {
    len : S32;
    data : Box (Array len a);
};

External vs. internal field names

Occasionally we'll run into cases where a field name shadows a binding from a higher scope. In that case we can give the field a new, internal name using the as notation:

Foo = Record {
  -- external name
  --  |
  --  |    internal name
  --  |        |
  --  v        v
  String as String1 : Type;

  -- refers to the built in `String` type
  --     |
  --     v
  x : String;

  -- refers to the local `String` field
  --     |
  --     v
  y : String1;
};

We define the following terms:

  • external field name: the name that we use when projecting on the record
  • internal field name: the name that we use internally, in dependent fields

Note that most of the time the internal and external field names are the same. For example, the type:

Point2d = Record {
  x : F32;
  y : F32;
};

is actually desugared to:

Point2d = Record {
  x as x : F32;
  y as y : F32;
};

Bindings

Items

Let bindings are made up of items. At the moment these can either be declarations or definitions.

A declaration states the type of an identifier that we expect to see in a subsequent definition. For example:

let
    greeting : String;              -- declaration
    greeting = "hello there!";      -- definition
in
    ...

We can supply a number of forward declarations before providing their associated definitions:

let
    one : S32;
    two : S32;

    one = 1;
    two = 2;
in
    ...

Values that can be inferred do not require a declaration, although a declaration may still be useful for documentation purposes!

let
    string = "hello";  -- ok!
    one = 1 : U16;     -- ok!
    two = 2;           -- error: is this a U8, U16, S64, etc.?
in
    ...

Function definitions

We have some nice syntactic sugar for defining functions. For example:

let
    const : (a b : Type) -> a -> b -> a;
    const = \a b x y => x;
in
    const String S32 "hello" 1

Could be expressed more cleanly as:

let
    const : (a b : Type) -> a -> b -> a;
    const a b x y = x;
in
    const String S32 "hello" 1

Type aliases

Because Pikelet is dependently typed, we need no other mechanism for making type aliases. Instead we just use definitions!

let
    Name : Type;
    Name = String;

    bobs-name : Name;
    bobs-name = "bob";
in
    ...

Doc comments

Documentation can be provided above declarations by using doc comments:

let
    ||| This is a documented definition
    |||
    ||| # Example
    |||
    ||| ```pikelet-repl
    ||| Pikelet> self-aware-string
    ||| "I am a string!" : String
    ||| ```
    self-aware-string : String;
    self-aware-string = "I am a string!";
in
    ...

Type inference

Many statically typed languages perform type inference to varying degrees, and Pikelet is no different! The goal is to reduce the burden of writing type annotations everywhere. Some languages like OCaml and Elm can even infer the types of a whole program without any annotations at all!

Pikelet's type inference follows some very simple rules that you can probably pick up on your own, but we thought it might help to give a deeper explanation of how it works, without getting too bogged down in the theoretical details.

Bidirectional type checking

Pikelet has a rather flexible type system, where expressions can be embedded in types, so we've opted to use an algorithm known as 'bidirectional type checking' as a way to get a decent amount of inference while still remaining relatively predictable to you, the programmer. This means that you may sometimes have to write annotations on top-level definitions, but the types should propagate downwards and inner definitions should not require much annotation at all.

To do this we break the terms of the language into two groups. We call these inferable terms and checkable terms.

Inferable terms

Inferable terms have types that can be inferred on their own, or based on previous definitions.

Each of the following terms carries enough information to determine its type on its own: boolean, string, and character literals have exactly one possible type, Type and record types inhabit a known universe, and records with annotated fields and lambdas with annotated parameters can have their types built up from their parts.

Pikelet> true
Pikelet> "1"
Pikelet> 'a'
Pikelet> Bool
Pikelet> Type
Pikelet> Type^2
Pikelet> record { name = "Jane" }
Pikelet> Record { name : String }
Pikelet> record { x = 3.0 : F32; y = 3.0 : F32 }
Pikelet> \x : S32 => x
Pikelet> (a : Type) -> a

Checkable terms

Checkable terms need extra annotations, or must be used in a position where extra information can be supplied.

The first group of terms below is ambiguous on its own: integer and float literals are shared between several numeric types, records with un-annotated fields inherit that ambiguity, and a lambda without a parameter annotation says nothing about its argument type. The second group shows the same terms with annotations that pin the types down.

Pikelet> 1
Pikelet> 2.0
Pikelet> record { x = 3.0; y = 3.0 }
Pikelet> \x => x
Pikelet> 1 : S32
Pikelet> 2.0 : F32
Pikelet> record { x = 3.0; y = 3.0 } : Record { x : F32; y : F32 }
Pikelet> \x => x : S32 -> S32

Further reading

We describe Pikelet's type checking algorithm more formally in the appendix. If you have a background in programming languages and type theory this might be of interest to you. If not, that's ok - understanding the formal notation is not necessary for developing a high level intuition of type inference in Pikelet.

Universes

Types of types

Types also have types!

Pikelet> :t S32
Type

We call this special 'type of types' a universe.

You might then wonder, “what is the type of Type?” That's a good question! Clever people have figured out that if Type were its own type, i.e. Type : Type, it would lead to situations like Girard's paradox (the type theory equivalent of the better-known Russell's paradox in set theory). There is nothing worse than a paradox in a type system, so instead we create ourselves a new universe called Type^1:

Pikelet> :t Type
Type^1

We keep on going like this, giving us a hierarchy of universes, as many as we need for a given program:

Type : Type^1 : Type^2 : Type^3 : ...

We call the number given to each universe the level of that universe. You can think of these universes as larger and larger collections of things, with the smaller universes being contained within the larger universes:

.- Type^2 -----------------------------------------------------------------------.
|                                                    Array n Type^1              |
|                                                                                |
|  .- Type^1 -----------------------------------------------------------------.  |
|  |                                                                          |  |
|  |  .- Type -------------------------------.                                |  |
|  |  |                Array n String        |                                |  |
|  |  |                                      |                                |  |
|  |  |            Record { x : F32 }        |          Array n Type          |  |
|  |  |                                      |                                |  |
|  |  |  .- S32 --------------.              |                                |  |
|  |  |  | ..., -1, 0, 1, ... |              |       Nat -> Type -> Type      |  |
|  |  |  '--------------------'              |                                |  |
|  |  |                                      |                                |  |
|  |  |  .- Record {} -.   .- Bool -------.  |                                |  |
|  |  |  |  record {}  |   |  true, false |  |          Type -> Type          |  |
|  |  |  '-------------'   '--------------'  |                                |  |
|  |  |                                      |                                |  |
|  |  |  .- String ----------------------.   |  .- Record { t : Type } ----.  |  |
|  |  |  | "hello", "byee!", "hoho", ... |   |  |  record { t = String },  |  |  |
|  |  |  '-------------------------------'   |  |  record { t = U32 }, ... |  |  |
|  |  '--------------------------------------'  '--------------------------'  |  |
|  '--------------------------------------------------------------------------'  |
'--------------------------------------------------------------------------------'

Note that in most regular programming you will rarely see anything above Type, and even more rarely still will you see things above Type^1, so all of this might seem a little excessive. That being said, we believe it is important enough to plug this gap in our type system while we have the chance.

Cumulativity

Because the level of a universe corresponds to some notion of its 'size', larger universes should be able to contain all the things in the universes smaller than themselves. This is reflected in Pikelet too:

Pikelet> Bool : Type        -- ok
Pikelet> Bool : Type^2      -- ok
Pikelet> Type^1 : Type^3    -- ok
Pikelet> Type^3 : Type^1    -- error!

Syntactic sugar

Note that Type is actually just sugar for Type^0:

Pikelet> :t Type^0
Type^1
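
The same sugar continues up the hierarchy, so we'd expect the type of each explicitly shifted universe to be the universe one level above it:

Pikelet> :t Type^1
Type^2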

Shifting universes

Often we'll write definitions in terms of Type, without worrying about the universe levels. For example, the identity function can be defined with a type parameter in the universe of Type:

Pikelet> :let id = \(a : Type) (x : a) => x
id : (a : Type) (x : a) -> a

This then allows us to use it with values:

Pikelet> id String "hello"   -- ok
Pikelet> id S32 1            -- ok

Sadly because of our universe hierarchy, we can't use our identity function at the type level!

Pikelet> id Type String                 -- error!
Pikelet> id ((a : Type) -> a -> a) id   -- error!
Pikelet> id Type^1 Type                 -- error!

This would seem terrible for code reuse - you would need to create a new id function for every universe level! Thankfully we have a simple solution: we allow identifiers to be shifted to the desired universe level using the ^ notation:

Pikelet> :t id^1
(a : Type^1) -> a -> a

We can then use the shifted identity functions like so:

Pikelet> id^1 Type String                    -- ok
Pikelet> id^1 ((a : Type) -> a -> a) id      -- ok
Pikelet> id^2 Type^1 Type                    -- ok

Field projections can also have shifts applied to them:

Pikelet> prelude.id^1 Type String

Appendix

Here you will find additional reference information for understanding the theoretical foundations behind Pikelet.

Design

Design goals

Pikelet should feel:

  • Friendly
  • Liberating
  • Sympathetic

This is a rough list of things that might be interesting to explore with Pikelet:

  • Sympathetic to humans
    • Programs should look pretty, and read clearly
    • Friendly community
    • Helpful interactive tooling
    • Tight feedback loops
  • Sympathetic to machines
    • Predictable performance
    • Predictable memory layout
    • Predictable optimizations
    • Zero (or close to zero) cost abstractions
    • Low level control
    • Minimal runtime
  • Sympathetic to real-world realities
    • Gradual correctness (eg. examples -> generative tests -> solvers -> proofs)
    • Provide clearly marked escape hatches (learn from the lessons of RustBelt?)
    • Automatic upgrades of code - between language and library versions
    • Work for large scale systems
  • Sympathetic to mathematical foundations
    • Simple core language that can be formalized and proven sound
    • Should allow for mathematically inspired patterns of code reuse
    • Allow newcomers to learn mathematical thinking gradually
    • Adaptable to future developments in type theory

It's unclear how many of these will be able to be met, and what priorities these should have, so this list might change over time. Come and chat with us if you'd like to give your input and get involved!

Some key features of interest

  • Dependent types
  • Purely functional
  • Strict evaluation
  • Implicit arguments
  • Dependent records+instance arguments as first class modules/type classes
  • Non-uniform memory layout - custom data layouts for records and variants?
  • Quantitative type theory for erasure and linear types
  • Totality checking
  • Explicit tail-call elimination
  • Excellent editor support with fast, incremental program analysis
  • Interactive program development using holes
  • Refinement types for lightweight verification

Some more hand-wavey ideas:

  • Monomorphization by partial-evaluation of instance arguments
  • Optional/configurable garbage collection
  • Alternatives to currying for function application?
  • First-class declarations (Levitation or Elaborator Reflection could be useful here)

Some other features that may be trickier to integrate given the previous features and design goals:

  • Effect systems/Algebraic Effects
    • could make it easier to integrate async-io without needing to build it in
    • how do we use effects for generative modules?
    • how do we makes this fast for systems programming?
      • should compile down in a similar way to the equivalent procedural code in Rust or C
      • most effect handling does not need to be dynamic
      • most systems cause lots of intermediate allocations or stack switching
  • Combined Effects/Coeffects
    • allow for statically checked compilation configurations
    • explicit variable capture could be modelled as a coeffect
    • could subsume quantitative type theory, implicit arguments, etc
    • not yet integrated into dependent types in the research literature (Granule is working on this)
  • Row polymorphism
    • no research on integrating these with dependent records and inductive data types
  • Program composition via category theory
    • Challenging to integrate in the presence of strict evaluation
    • Similar problems to effect systems: we don't want to allocate intermediate data structures, instead we want to build up stack allocated state machines (like in Rust's future and iterator traits) to be executed later

A possible implementation plan

  1. ~~Start with a simple dependent type system, like LambdaPi~~
  2. Implement additional language extensions needed for actual usefulness
    • ~~dependent records~~
    • ~~let/where bindings~~
    • quantitative type theory
    • implicit arguments
    • instance arguments
    • basic (non-dependent) effect system
    • ~~cumulative universes~~
  3. Implement back end(s)
    • JIT and embeddable runtime (for bootstrapping usage) - possibly with HolyJIT or or CraneLift
      • CraneLift would unlock WebASM, which would be a huge boost
    • Optimizing compiler - Possibly with LLVM or a verified compiler (like CompCert) in the future
      • Figure out how to integrate with libraries written in other languages, like C or Rust

By starting with a JIT we could get initial usage from embedding the language within existing Rust programs, like games. Looking further ahead, it would be nice to move towards implementing a native compiler.

At the moment we are building the language in Rust, but perhaps it would be better to build a verified implementation in Coq/Agda/Lean/Idris/something else. That way we could actually start proving some of the claims we want to make about our system. A concern is that we might go so far down the route of implementation that it becomes extremely challenging to form a solid specification for what we are building. On the other hand, as always, the downside of a verified implementation is that it could take a prohibitive amount of time to complete.

Theory

A formalization of the semantics for type checking and normalizing Pikelet.

Introduction

At its core, Pikelet is a dependently typed lambda calculus with a cumulative universe hierarchy with explicit level shifts.

Note: This document is intended for those who are interested in looking deeper into the formal foundations of Pikelet. You don't need to understand this for general use of Pikelet, so feel free to skip this document if that is easier. We will however make an effort to explain some of the notation we use here, and point to resources that might help if this piques your curiosity!

Notation

We use a combination of some BNF-style syntax definitions with natural deduction rules to define our language. This combination of notation is sometimes referred to as computer science metanotation and is, alas, a little hard to pin down as conventions vary between papers and authors. The general rules stay the same however, and once you learn to read them they are much more succinct than an actual implementation could be, and are an invaluable tool for quickly getting a high-level overview of a programming language's semantics.

TODO: Describe BNF, natural deduction rules, overbars, variable binding, etc.

Where is the soundness proof?

Here we are only defining the rules of our language's type checking and evaluation. Further work needs to be done to verify that our system actually satisfies certain interesting type soundness properties, like progress, preservation, strong normalization, etc. If you would like to discuss this with us, please check out the relevant GitHub issue!

Syntax

Raw terms

\[
% Haskell-style append https://tex.stackexchange.com/questions/4194/how-to-typeset-haskell-operator-and-friends
\newcommand\doubleplus{+\kern-1.3ex+\kern0.8ex}
% Small caps https://github.com/mathjax/MathJax-docs/wiki/Small-caps-%5Ctextsc-in-MathJaxx
\def\sc#1{\dosc#1\csod}
\def\dosc#1#2\csod{{\rm #1{\small #2}}}
\newcommand{\rule}[3]{ \dfrac{ ~~#2~~ }{ ~~#3~~ } & \Tiny{\text{(#1)}} }
\DeclareMathOperator{\max}{max}
\DeclareMathOperator{\field}{field}
\DeclareMathOperator{\fieldty}{fieldty}
\DeclareMathOperator{\fieldsubst}{fieldsubst}
\DeclareMathOperator{\Match}{\sc{MATCH}}
\DeclareMathOperator{\shift}{shift}
% Judgments
\newcommand{\eval}[3]{ #1 \vdash #2 \hookrightarrow #3 }
\newcommand{\check}[4]{ #1 \vdash #2 \uparrow #3 \leadsto #4 }
\newcommand{\infer}[4]{ #1 \vdash #2 \downarrow #3 \leadsto #4 }
\newcommand{\subty}[3]{ #1 \vdash #2 \preccurlyeq #3 }
\newcommand{\match}[3]{ \Match(#1,#2) \Longrightarrow #3 }
\newcommand{\checkpat}[5]{ #1 \vdash #2 \uparrow #3 \leadsto #4 \Longrightarrow #5 }
\newcommand{\inferpat}[5]{ #1 \vdash #2 \downarrow #3 \leadsto #4 \Longrightarrow #5 }
% Metavariables
\newcommand{\rexpr}{r} % raw expressions
\newcommand{\rtype}{R} % raw types
\newcommand{\rpat}{s} % raw patterns
\newcommand{\texpr}{t} % expressions
\newcommand{\ttype}{T} % types
\newcommand{\tpat}{p} % patterns
\newcommand{\vexpr}{v} % value expressions
\newcommand{\vtype}{V} % value types
\newcommand{\wexpr}{w} % whnf expressions
\newcommand{\wtype}{W} % whnf types
\newcommand{\nexpr}{n} % neutral expressions
\newcommand{\ntype}{N} % neutral types
\newcommand{\ctx}{\Gamma} % contexts
% Keywords
\newcommand{\kw}[1]{ \mathsf{#1} }
% Term and Type constructors
\newcommand{\label}{l}
\newcommand{\binder}{x}
\newcommand{\var}[1]{x^\wedge{#1}}
\newcommand{\Type}[1]{\kw{Type}^\wedge{#1}}
\newcommand{\Arrow}[2]{ #1 \rightarrow #2 }
\newcommand{\Pi}[2]{ \Arrow{(#1)}{#2} }
\newcommand{\lam}[2]{ \kw{\lambda} #1 . #2 }
\newcommand{\app}[2]{ #1 ~ #2 }
\newcommand{\case}[2]{ \kw{case} ~ #1 \left\{ #2 \right\} }
\newcommand{\RecordCons}[2]{ \kw{Record} \left\{ #1; #2 \right\} }
\newcommand{\RecordEmpty}{ \kw{Record} \left\{\right\} }
\newcommand{\as}{ ~ \kw{as} ~ }
\newcommand{\record}[1]{ \kw{record} \left\{ #1 \right\} }
\newcommand{\proj}[3]{ #1.#2^\wedge{#3} }
\newcommand{\subst}[3]{ #1 ~ [#2 \rightarrow #3] }
% Items
\newcommand{\declItem}[2]{ #1 : #2 }
\newcommand{\defnItem}[2]{ #1 = #2 }
% Contexts
\newcommand{\emptyCtx}{ \varnothing }
\newcommand{\composeCtx}[2]{ #1 \sim #2 }
\newcommand{\extendCtx}[2]{ #1, #2 }
\begin{array}{rrll}
    \rexpr,\rtype & ::= & \var{i} & \text{variables ($i \in \mathbb{N}$)} \\
    & | & \Type{i} & \text{universe of types ($i \in \mathbb{N}$)} \\
    & | & ? & \text{holes} \\
    & | & \rexpr : \rtype & \text{term annotated with a type} \\
    & | & \Pi{\binder:\rtype_1}{\rtype_2} & \text{dependent function type} \\
    & | & \lam{\binder:\rtype}{\rexpr} & \text{functions} \\
    & | & \app{\rexpr_1}{\rexpr_2} & \text{function application} \\
    & | & \case{\rexpr}{\overline{\rpat_i \rightarrow \rexpr_i}^{;}} & \text{case expressions} \\
    & | & \RecordCons{\label \as \binder:\rtype_1}{\rtype_2} & \text{record type extension} \\
    & | & \RecordEmpty & \text{empty record type} \\
    & | & \record{\label=\rexpr_1, \rexpr_2} & \text{record extension} \\
    & | & \record{} & \text{empty record} \\
    & | & \proj{\rexpr}{\label}{i} & \text{record projection ($i \in \mathbb{N}$)} \\
    \\
    \rpat & ::= & \binder & \text{binder pattern} \\
    & | & \rpat : \rtype & \text{pattern annotated with a type} \\
    % & | & \record{\label=\rpat_1, \rpat_2} & \text{record extension pattern} \\
    % & | & \record{} & \text{empty record pattern} \\
\end{array}
\]

\[ \begin{array}{lrll} \Arrow{\rtype_1}{\rtype_2} & := & \Pi{\binder:\rtype_1}{\rtype_2} & \text{non-dependent function types} \\ \lam{\binder}{\rexpr} & := & \lam{\binder:?}{\rexpr} & \text{functions (without an annotation)} \\ \end{array} \]

Terms

The core term syntax skips holes, ensuring that everything is fully elaborated:

\[ \begin{array}{rrll} \texpr,\ttype & ::= & \var{i} & \text{variables ($i \in \mathbb{N}$)} \\ & | & \Type{i} & \text{universe of types ($i \in \mathbb{N}$)} \\ & | & \texpr : \ttype & \text{term annotated with a type} \\ & | & \Pi{\binder:\ttype_1}{\ttype_2} & \text{dependent function type} \\ & | & \lam{\binder:\ttype}{\texpr} & \text{functions} \\ & | & \app{\texpr_1}{\texpr_2} & \text{function application} \\ & | & \case{\texpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} & \text{case expressions} \\ & | & \RecordCons{\label \as \binder:\ttype_1}{\ttype_2} & \text{record type extension} \\ & | & \RecordEmpty & \text{empty record type} \\ & | & \record{\label=\texpr_1, \texpr_2} & \text{record extension} \\ & | & \record{} & \text{empty record} \\ & | & \proj{\texpr}{\label}{i} & \text{record projection ($i \in \mathbb{N}$)} \\ \\ \tpat & ::= & \binder & \text{binder pattern} \\ & | & \tpat : \ttype & \text{pattern annotated with a type} \\ & | & \record{\label=\tpat_1, \tpat_2} & \text{record extension pattern} \\ & | & \record{} & \text{empty record pattern} \\ \\ \end{array} \]

Values

In order to make it clear what is 'stuck' and what still needs to be evaluated, we separate our syntax into weak head normal forms (\(\wexpr\)), and neutral terms (\(\nexpr\)):

\[ \begin{array}{rrll} \vexpr,\vtype & ::= & \wexpr & \text{weak head normal forms} \\ & | & \nexpr & \text{neutral terms} \\ \\ \nexpr,\ntype & ::= & \var{i} & \text{variables ($i \in \mathbb{N}$)} \\ & | & \app{\nexpr}{\texpr} & \text{function application} \\ & | & \case{\nexpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} & \text{case expressions} \\ & | & \proj{\nexpr}{\label}{i} & \text{record projection ($i \in \mathbb{N}$)} \\ \\ \wexpr,\wtype & ::= & \Type{i} & \text{universe of types ($i \in \mathbb{N}$)} \\ & | & \Pi{\binder:\vtype_1}{\vtype_2} & \text{dependent function type} \\ & | & \lam{\binder:\vtype}{\vexpr} & \text{functions} \\ & | & \RecordCons{\label \as \binder:\vtype_1}{\vtype_2} & \text{record type extension} \\ & | & \RecordEmpty & \text{empty record type} \\ & | & \record{\label=\vexpr_1, \vexpr_2} & \text{record extension} \\ & | & \record{} & \text{empty record} \\ \\ \end{array} \]

Contexts

As we type check terms, we'll be passing over bindings like lambdas and pi types. Contexts allow us to keep track of the bound parameters, even though we don't know the exact values these will eventually take during normalization.

\[ \begin{array}{rrll} \ctx & ::= & \emptyCtx & \text{the empty context} \\ & | & \extendCtx{\ctx}{\declItem{\binder}{\vtype}} & \text{context extended with a declaration} \\ & | & \extendCtx{\ctx}{\defnItem{\binder}{\texpr}} & \text{context extended with a definition} \\ \end{array} \]

Semantics

We take a bidirectional approach to type checking, splitting it into two phases: type checking and type inference. This makes the flow of information through the type checker clear and relatively easy to reason about. Normalization happens after inference, and before types are fed back in to be used during type checking.

With that in mind, the next sections will describe the following judgments:

Name                       | Notation                                                    | Inputs                               | Outputs
normalization              | \(\eval{ \ctx }{ \texpr }{ \vexpr }\)                       | \(\ctx\), \(\texpr\)                 | \(\vexpr\)
type checking              | \(\check{ \ctx }{ \rexpr }{ \vtype }{ \texpr }\)            | \(\ctx\), \(\rexpr\), \(\vtype\)     | \(\texpr\)
type inference             | \(\infer{ \ctx }{ \rexpr }{ \vtype }{ \texpr }\)            | \(\ctx\), \(\rexpr\)                 | \(\vtype\), \(\texpr\)
subtyping                  | \(\subty{ \ctx }{ \vtype_1 }{ \vtype_2 }\)                  | \(\ctx\), \(\vtype_1\), \(\vtype_2\) | (none)
pattern matching           | \(\match{ \wexpr }{ \tpat }{ \theta }\)                     | \(\wexpr\), \(\tpat\)                | \(\theta\)
type checking of patterns  | \(\checkpat{ \ctx }{ \rpat }{ \vtype }{ \tpat }{ \ctx' }\)  | \(\ctx\), \(\rpat\), \(\vtype\)      | \(\tpat\), \(\ctx'\)
type inference of patterns | \(\inferpat{ \ctx }{ \rpat }{ \vtype }{ \tpat }{ \ctx' }\)  | \(\ctx\), \(\rpat\)                  | \(\vtype\), \(\tpat\), \(\ctx'\)

Normalization stands on its own, but both checking and inference are mutually dependent on each other. Care has been taken to design the judgments so that they are syntax-directed, meaning that an algorithm can be clearly derived from them.

Elaboration

Elaboration is the process of filling in missing information that the programmer omitted in the original code, generally based on the results of type inference.

In Pikelet's judgements the elaborated terms are denoted after the \(\leadsto\) arrow. At the moment not much is added - only the missing type annotations on function parameters. In the future this could be extended to filling in type class instances and implicit arguments.
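
For example, checking an un-annotated lambda against a function type fills in the parameter's annotation. This is a sketch of the idea (based on the C-LAM rule below), not literal output from the implementation:

-- what the programmer writes, checked against S32 -> S32
\x => x

-- what elaboration produces
\x : S32 => x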

Normalization

Here we describe how we normalize elaborated terms under the assumptions in the context.

\[ \boxed{ \eval{ \ctx }{ \texpr }{ \vexpr } } \\[2em] \begin{array}{cl} \rule{E-ANN}{ \eval{ \ctx }{ \texpr }{ \vexpr } }{ \eval{ \ctx }{ \texpr:\ttype }{ \vexpr } } \\[2em] \rule{E-TYPE}{}{ \eval{ \ctx }{ \Type{i} }{ \Type{i} } } \\[2em] \rule{E-VAR}{ \defnItem{\binder}{\texpr} \notin \ctx }{ \eval{ \ctx }{ \var{i} }{ \var{i} } } \\[2em] \rule{E-VAR-DEF}{ \defnItem{\binder}{\texpr} \in \ctx \qquad \eval{ \ctx }{ \texpr }{ \vexpr } }{ \eval{ \ctx }{ \var{i} }{ \shift(\vexpr,i) } } \\[2em] \rule{E-PI}{ \eval{ \ctx }{ \ttype_1 }{ \vtype_1 } \qquad \eval{ \ctx }{ \ttype_2 }{ \vtype_2 } }{ \eval{ \ctx }{ \Pi{\binder:\ttype_1}{\ttype_2} }{ \Pi{\binder:\vtype_1}{\vtype_2} } } \\[2em] \rule{E-LAM}{ \eval{ \ctx }{ \ttype }{ \vtype } \qquad \eval{ \ctx }{ \texpr }{ \vexpr } }{ \eval{ \ctx }{ \lam{\binder:\ttype}{\texpr} }{ \lam{\binder:\vtype}{\vexpr} } } \\[2em] \rule{E-APP}{ \eval{ \ctx }{ \texpr_1 }{ \lam{\binder:\vtype_1}{\vexpr_1} } \qquad \eval{ \ctx }{ \subst{\vexpr_1}{\binder}{\texpr_2} }{ \vexpr_3 } }{ \eval{ \ctx }{ \app{\texpr_1}{\texpr_2} }{ \vexpr_3 } } \\[2em] \rule{E-CASE}{ \eval{ \ctx }{ \nexpr }{ \nexpr' } }{ \eval{ \ctx }{ \case{\nexpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} } { \case{\nexpr'}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} } } \\[2em] \rule{E-CASE-MATCH}{ \eval{ \ctx }{ \nexpr }{ \wexpr } \qquad \match{ \wexpr }{ \tpat_i }{ \theta } \qquad \eval{ \ctx }{ \texpr_i ~ \theta }{ \vexpr_i } }{ \eval{ \ctx }{ \case{\nexpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} }{ \vexpr_i } } \\[2em] \rule{E-RECORD-TYPE}{ \eval{ \ctx }{ \ttype_1 }{ \vtype_1 } \qquad \eval{ \ctx }{ \ttype_2 }{ \vtype_2 } }{ \eval{ \ctx }{ \RecordCons{\label \as \binder:\ttype_1}{\ttype_2} }{ \RecordCons{\label \as \binder:\vtype_1}{\vtype_2} } } \\[2em] \rule{E-RECORD}{ \eval{ \ctx }{ \texpr_1 }{ \vexpr_1 } \qquad \eval{ \ctx }{ \texpr_2 }{ \vexpr_2 } }{ \eval{ \ctx }{ \record{\label=\texpr_1, \texpr_2} }{ \record{\label=\vexpr_1, \vexpr_2} } } \\[2em] \rule{E-EMPTY-RECORD-TYPE}{}{ \eval{ \ctx }{ \RecordEmpty }{ \RecordEmpty } } \\[2em] \rule{E-EMPTY-RECORD}{}{ \eval{ \ctx }{ \record{} }{ \record{} } } \\[2em] \rule{E-PROJ}{ \eval{ \ctx }{ \texpr_1 }{ \vexpr_1 } \qquad \vexpr_2 = \field(\label, \vexpr_1) }{ \eval{ \ctx }{ \proj{\texpr_1}{\label}{i} }{ \vexpr_2 } } \\[2em] \end{array} \]

We define \(\field(-,-)\) like so:

\[ \begin{array}{lrll} \field(\label_1, \record{\label_2 = \vexpr_1, \vexpr_2}) & = & \vexpr_1 & \text{if} ~ \label_1 \equiv \label_2 \\ \field(\label_1, \record{\label_2 = \vexpr_1, \vexpr_2}) & = & \field(\label_1, \vexpr_2) \\ \end{array} \]
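
For example, looking up y in a two-field record skips past the x field and then matches. This is a worked instance of the equations above, using made-up values:

\[ \field(\kw{y}, \record{\kw{x} = 1, \record{\kw{y} = 2, \record{}}}) = \field(\kw{y}, \record{\kw{y} = 2, \record{}}) = 2 \]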

Type checking

This judgement checks that the given term has the expected type and returns its elaborated form.

\[ \boxed{ \check{ \ctx }{ \rexpr }{ \vtype }{ \texpr } } \\[2em] \begin{array}{cl} \rule{C-LAM}{ \check{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_1}} }{ \rexpr }{ \vtype_2 }{ \texpr } }{ \check{ \ctx }{ \lam{\binder}{\rexpr} }{ \Pi{\binder:\vtype_1}{\vtype_2} }{ \lam{\binder:\vtype_1}{\texpr} } } \\[2em] \rule{C-CASE}{ \infer{ \ctx }{ \rexpr }{ \vtype_1 }{ \texpr } \qquad \overline{ ~ \checkpat{ \ctx }{ \rpat_i }{ \vtype_1 }{ \tpat_i }{ \ctx' } \qquad \check{ \composeCtx{\ctx}{\ctx'} }{ \rexpr_i }{ \vtype_2 }{ \texpr_i } ~ } }{ \check{ \ctx }{ \case{\rexpr}{\overline{\rpat_i \rightarrow \rexpr_i}^{;}} }{ \vtype_2 } { \case{\texpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}} } } \\[2em] \rule{C-RECORD}{ \label_1 \equiv \label_2 \qquad \check{ \ctx }{ \rexpr_1 }{ \vtype_1 }{ \texpr_1 } \qquad \eval{ \ctx }{ \subst{\vtype_2}{\binder}{\texpr_1} }{ \vtype_3 } \qquad \check{ \ctx }{ \rexpr_2 }{ \vtype_3 }{ \texpr_2 } }{ \check{ \ctx }{ \record{\label_1=\rexpr_1, \rexpr_2} } { \RecordCons{\label_2 \as \binder:\vtype_1}{\vtype_2} } { \record{\label_1=\texpr_1, \texpr_2} } } \\[2em] \rule{C-CONV}{ \infer{ \ctx }{ \rexpr }{ \vtype_2 }{ \texpr } \qquad \subty{ \ctx }{ \vtype_2 }{ \vtype_1 } }{ \check{ \ctx }{ \rexpr }{ \vtype_1 }{ \texpr } } \\[2em] \end{array} \]

Type inference

Here we define a judgement that synthesizes a type from the given term and returns its elaborated form.

\[ \boxed{ \infer{ \ctx }{ \rexpr }{ \vtype }{ \texpr } } \\[2em] \begin{array}{cl} \rule{I-ANN}{ \infer{ \ctx }{ \rtype }{ \Type{i} }{ \ttype } \qquad \eval{ \ctx }{ \ttype }{ \vtype } \qquad \check{ \ctx }{ \rexpr }{ \vtype }{ \texpr } }{ \infer{ \ctx }{ \rexpr:\rtype }{ \vtype }{ \texpr:\ttype } } \\[2em] \rule{I-TYPE}{}{ \infer{ \ctx }{ \Type{i} }{ \Type{(i+1)} }{ \Type{i} } } \\[2em] \rule{I-VAR}{ \declItem{\binder}{\vtype} \in \ctx }{ \infer{ \ctx }{ \var{i} }{ \shift(\vtype,i) }{ \var{i} } } \\[2em] \rule{I-PI}{ \infer{ \ctx }{ \rtype_1 }{ \Type{i} }{ \ttype_1 } \qquad \eval{ \ctx }{ \ttype_1 }{ \vtype_1 } \qquad \infer{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_1}} }{ \rtype_2 }{ \Type{j} }{ \ttype_2 } }{ \infer{ \ctx }{ \Pi{\binder:\rtype_1}{\rtype_2} }{ \Type{\max(i,j)} } { \Pi{\binder:\ttype_1}{\ttype_2} } } \\[2em] \rule{I-LAM}{ \infer{ \ctx }{ \rtype }{ \Type{i} }{ \ttype } \qquad \eval{ \ctx }{ \ttype }{ \vtype_1 } \qquad \infer{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_1}} }{ \rexpr }{ \vtype_2 }{ \texpr } }{ \infer{ \ctx }{ \lam{\binder:\rtype}{\rexpr} } { \Pi{\binder:\vtype_1}{\vtype_2} }{ \lam{\binder:\ttype}{\texpr} } } \\[2em] \rule{I-APP}{ \infer{ \ctx }{ \rexpr_1 }{ \Pi{\binder:\vtype_1}{\vtype_2} }{ \texpr_1 } \qquad \check{ \ctx }{ \rexpr_2 }{ \vtype_1 }{ \texpr_2 } \qquad \eval{ \ctx }{ \subst{\vtype_2}{\binder}{\texpr_2} }{ \vtype_3 } }{ \infer{ \ctx }{ \app{\rexpr_1}{\rexpr_2} }{ \vtype_3 }{ \app{\texpr_1}{\texpr_2} } } \\[2em] \rule{I-RECORD-TYPE}{ \infer{ \ctx }{ \rtype_1 }{ \Type{i} }{ \ttype_1 } \qquad \eval{ \ctx }{ \ttype_1 }{ \vtype_1 } \qquad \infer{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_1}} }{ \rtype_2 }{ \Type{j} }{ \ttype_2 } }{ \infer{ \ctx } { \RecordCons{\label \as \binder:\rtype_1}{\rtype_2} } { \Type{\max(i,j)} } { \RecordCons{\label \as \binder:\ttype_1}{\ttype_2} } } \\[2em] \rule{I-EMPTY-RECORD-TYPE}{}{ \infer{ \ctx }{ \RecordEmpty }{ \Type{0} }{ \RecordEmpty } } \\[2em] \rule{I-RECORD}{ \infer{ \ctx }{ \rexpr_1 }{ \vtype_1 }{ \texpr_1 } \qquad \infer{ \ctx }{ \rexpr_2 }{ \vtype_2 }{ \texpr_2 } \qquad \eval{ \ctx }{ \subst{\vtype_2}{\binder}{\texpr_1} }{ \vtype_3 } }{ \infer{ \ctx }{ \record{\label=\rexpr_1, \rexpr_2} } { \RecordCons{\label \as \binder:\vtype_1}{\vtype_3} } { \record{\label=\texpr_1, \texpr_2} } } \\[2em] \rule{I-EMPTY-RECORD}{}{ \infer{ \ctx }{ \record{} }{ \RecordEmpty }{ \record{} } } \\[2em] \rule{I-PROJ}{ \infer{ \ctx }{ \rexpr }{ \vtype_1 }{ \texpr } \qquad \vtype_2 = \fieldty(\label, \vtype_1) \qquad \theta = \fieldsubst(\texpr, \label, i, \vtype_1) }{ \infer{ \ctx }{ \proj{\rexpr}{\label}{i} }{ \vtype_2 ~ \theta }{ \proj{\texpr}{\label}{i} } } \\[2em] \end{array} \]

We define \(\fieldty(-,-)\) like so:

\[ \begin{array}{lrll} \fieldty(\label_1, \RecordCons{\label_2 : \vtype_1}{\vtype_2}) & = & \vtype_1 & \text{if} ~ \label_1 \equiv \label_2 \\ \fieldty(\label_1, \RecordCons{\label_2 : \vtype_1}{\vtype_2}) & = & \fieldty(\label_1, \vtype_2) \\ \\[2em] \end{array} \]

In order to ensure that we maintain the proper paths to variables when we project on them, we define \(\fieldsubst(-,-,-,-)\) as:

\[ \begin{array}{lrll} \fieldsubst(\texpr, \label_1, i, \RecordCons{\label_2 : \vtype_1}{\vtype_2}) & = & [] & \text{if} ~ \label_1 \equiv \label_2 \\ \fieldsubst(\texpr, \label_1, i, \RecordCons{\label_2 : \vtype_1}{\vtype_2}) & = & \fieldsubst(\texpr, \label_1, i, \vtype_2) \doubleplus [ \label_2 \rightarrow \proj{\texpr}{\label_2}{i} ] \\ \\[2em] \end{array} \]

Subtyping

\[ \boxed{ \subty{ \ctx }{ \vtype_1 }{ \vtype_2 } } \\[2em] \begin{array}{cl} \rule{ST-TYPE}{ i \leqslant j }{ \subty{ \ctx }{ \Type{i} }{ \Type{j} } } \\[2em] \rule{ST-PI}{ \subty{ \ctx }{ \vtype_2 }{ \vtype_1 } \qquad \subty{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_2}} }{ \vtype_3 }{ \vtype_4 } }{ \subty{ \ctx }{ \Pi{\binder:\vtype_1}{\vtype_3} } { \Pi{\binder:\vtype_2}{\vtype_4} } } \\[2em] \rule{ST-RECORD-TYPE}{ \subty{ \ctx }{ \vtype_1 }{ \vtype_3 } \qquad \subty{ \extendCtx{\ctx}{\declItem{\binder}{\vtype_1}} }{ \vtype_2 }{ \vtype_4 } }{ \subty{ \ctx }{ \RecordCons{\label \as \binder:\vtype_1}{\vtype_2} } { \RecordCons{\label \as \binder:\vtype_3}{\vtype_4} } } \\[2em] \rule{ST-EMPTY-RECORD-TYPE}{}{ \subty{ \ctx }{ \RecordEmpty }{ \RecordEmpty } } \\[2em] \rule{ST-ALPHA-EQ}{ \vtype_1 \equiv_{\alpha} \vtype_2 }{ \subty{ \ctx }{ \vtype_1 }{ \vtype_2 } } \\[2em] \end{array} \]

Universe shifting

We implement explicit level shifts, giving us something like what Conor McBride describes in his blog post on universe hierarchies.

We define \(\shift(-,-)\) for values:

\[
\begin{array}{llrl}
    \shift(\var{i}, & j) & = & \var{i} \\
    \shift(\app{\nexpr}{\texpr}, & j) & = & \app{\shift(\nexpr, j)}{\shift(\texpr, j)} \\
    % FIXME: define pattern shifting
    \shift(\case{\nexpr}{\overline{\tpat_i \rightarrow \texpr_i}^{;}}, & j) & = &
        \case{\shift(\nexpr, j)}{\overline{\shift(\tpat_i, j) \rightarrow \shift(\texpr_i, j)}^{;}} \\
    \shift(\proj{\nexpr}{\label}{i}, & j) & = & \proj{\shift(\nexpr, j)}{\label}{i} \\
    \shift(\Type{i}, & j) & = & \Type{(i + j)} \\
    \shift(\Pi{\binder:\vtype_1}{\vtype_2}, & j) & = & \Pi{\binder:\shift(\vtype_1, j)}{\shift(\vtype_2, j)} \\
    \shift(\lam{\binder:\vtype}{\vexpr}, & j) & = & \lam{\binder:\shift(\vtype, j)}{\shift(\vexpr, j)} \\
    \shift(\RecordCons{\label \as \binder:\vtype_1}{\vtype_2}, & j) & = & \RecordCons{\label \as \binder:\shift(\vtype_1, j)}{\shift(\vtype_2, j)} \\
    \shift(\RecordEmpty, & j) & = & \RecordEmpty \\
    \shift(\record{\label=\vexpr_1, \vexpr_2}, & j) & = & \record{\label=\shift(\vexpr_1, j), \shift(\vexpr_2, j)} \\
    \shift(\record{}, & j) & = & \record{} \\
\end{array}
\]

NOTE: We might want to investigate making this shifting operator more expressive and 'first class', perhaps as was described in Dependently typed lambda calculus with a lifting operator. For now this seems to be expressive enough for most use cases that our users might run into.

Pattern matching

This judgement takes an expression \(\wexpr\) in weak head normal form and a pattern \(\tpat\), and returns a substitution \(\theta\) with the matched bindings.

\[
\boxed{ \match{ \wexpr }{ \tpat }{ \theta } }
\\[2em]
\begin{array}{cl}
    \rule{M-VAR}{}{
        \match{ \wexpr }{ \binder }{ [\binder \rightarrow \wexpr] }
    }
    \\[2em]
    % TODO:
    % \rule{M-RECORD}{
    %     \match{ \wexpr_1 }{ \tpat_1 }{ \theta_1 }
    %     \qquad
    %     \match{ \wexpr_2 }{ \tpat_2 }{ \theta_2 }
    % }{
    %     \match{ \record{\label=\wexpr_1, \wexpr_2} }{ \record{\label=\tpat_1, \tpat_2} }{ \theta_1 \doubleplus \theta_2 }
    % }
    % \\[2em]
    % \rule{M-EMPTY-RECORD}{}{
    %     \match{ \record{} }{ \record{} }{ [] }
    % }
    % \\[2em]
\end{array}
\]

Type checking of patterns

\[ \boxed{ \checkpat{ \ctx }{ \rpat }{ \vtype }{ \tpat }{ \ctx' } } \\[2em] \begin{array}{cl} \rule{CP-BINDER}{}{ \checkpat{ \ctx }{ \binder }{ \vtype }{ \binder }{ \binder : \vtype } } \\[2em] \rule{CP-CONV}{ \inferpat{ \ctx }{ \rpat }{ \vtype_2 }{ \tpat }{ \ctx' } \qquad \subty{ \ctx }{ \vtype_2 }{ \vtype_1 } }{ \checkpat{ \ctx }{ \rpat }{ \vtype_1 }{ \tpat }{ \ctx' } } \\[2em] \end{array} \]

Type inference of patterns

\[ \boxed{ \inferpat{ \ctx }{ \rpat }{ \vtype }{ \tpat }{ \ctx' } } \\[2em] \begin{array}{cl} \rule{IP-ANN}{ \infer{ \ctx }{ \rtype }{ \Type{i} }{ \ttype } \qquad \eval{ \ctx }{ \ttype }{ \vtype } \qquad \checkpat{ \ctx }{ \rpat }{ \vtype }{ \tpat }{ \ctx' } }{ \inferpat{ \ctx }{ \rpat : \rtype }{ \vtype }{ \tpat : \ttype }{ \ctx' } } \\[2em] \end{array} \]

TODO:

Influences

Some languages have been inspiring when building Pikelet. We list some of them here, along with the contributions they have made to our thinking. These ideas may or may not make it into the final Pikelet language, but they are worth mentioning!

1ML

Key things we love:

  • focus on simplicity
  • combines module language of ML with dependent records
  • formalized foundation

Agda

Key things we love:

  • interactive editing
  • dependent types
  • dependent records
  • implicit arguments
  • instance arguments for emulating type classes
  • codata for unbounded data
  • totality checking
  • inductive data types

ATS

Key things we love:

  • dependent types
  • proofs can be generated by SMT solvers
  • high level of performance, low level interoperability
  • unboxed data types

D

Key things we love:

  • strong support for static metaprogramming
  • design by introspection (ie. breaking parametricity for performance optimization and metaprogramming purposes)
  • low-level control and high-level abstraction
  • fast compiler, short iteration times

Dhall

Key things we love:

  • simple core language
  • dependent types
  • total language
  • structural records

Discus (formerly DDC)

Elm

Key things we love:

  • focus on usability, and adoption
  • friendly marketing
  • welcoming community
  • best-in-class error messages
  • row polymorphism

F*

Key things we love:

  • combining SMT solvers with explicit proofs
  • combining effects with dependent types

Gluon

Key things we love:

  • strict evaluation
  • focus on simplicity
  • embeddable in Rust programs
  • using records as a basis for the module system

Granule

Key things we love:

  • combining coeffects with effects in one language

Idris

Key things we love:

  • focus on making dependently typed programming practical
  • interactive editing
  • simple core type theory
  • opt-out totality checking
  • nice, accessible documentation, focused on real-world examples
  • universe checking
  • aggressive erasure
  • linear types
  • compilation to native code
  • elaborator reflection
  • ad-hoc interfaces desugar into records
  • effects system as a library
  • state machine library

Ivory

Koka

Key things we love:

  • algebraic effects and handlers
  • nice library documentation, with clickable links, etc.

Lean

Key things we love:

  • focus on responsive interactive development
  • metaprogramming support using Lean
  • simple kernel language

OCaml

Key things we love:

  • module system
  • algebraic effects
  • modular implicits
  • row polymorphism
  • structural records
  • efficient (and fast) code generation

Rust

Key things we love:

  • friendly community
  • non-uniform, unboxed data layout
  • predictable optimizations
  • focus on systems programming
  • having safety and low level code under one roof
  • infectious 'unsafe' keyword
  • strict evaluation
  • opt-in garbage collection
  • minimal runtime
  • bootstrapped itself off a foundation of C libraries

Sixten

Key things we love:

  • non-uniform, unboxed data layout
  • dependent types

Ur

Key things we love:

  • Statically typed metaprogramming with type-level records

References

What follows is a non-exhaustive list of some of the references that were useful when building Pikelet:

  • Christiansen, David Raymond (2013). “Bidirectional Typing Rules: A Tutorial”. [PAPER]
  • Löh, Andres, McBride, Conor and Swierstra, Wouter (2009). “A tutorial implementation of a dependently typed lambda calculus”. [SITE] [PAPER]
  • Norell, Ulf (2007). “Towards a practical programming language based on dependent type theory”. [PAPER]