Set-Theoretic Types for Polymorphic Variants

Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive. In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation, resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm.


Introduction
Polymorphic variants are a useful feature of OCaml, as they balance static safety and code reuse capabilities with a remarkable conciseness. They were originally proposed as a solution to add union types to Hindley-Milner (HM) type systems (Garrigue 2002). Union types have several applications and make it possible to deduce types that are finer grained than algebraic data types, especially in languages with pattern matching. Polymorphic variants cover several of the applications of union types, which explains their success; however, they provide just a limited form of union types: although they offer some sort of subtyping and value sharing that ordinary variants do not, it is still not possible to form unions of values of generic types, but just finite enumerations of tagged values. This is obtained by superimposing on the HM type system a system of kinding constraints, which is used to simulate subtyping without actually introducing it. In general, the current system reuses the ML type system (including unification for type reconstruction) as much as possible. This is the source of several trade-offs which yield significant complexity, make polymorphic variants hard to understand (especially for beginners), and jeopardize expressiveness insofar as they forbid several useful applications that general union types make possible.
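To fix intuitions, here is a small OCaml example of our own (not from the original text): two independent functions accept overlapping sets of tags, and the same tagged values flow to both, without any shared datatype declaration, which ordinary variants would require.

```ocaml
(* A function over a closed set of tags: no type declaration needed. *)
let to_string v = match v with
  | `Red -> "red"
  | `Green -> "green"
  | `Blue -> "blue"

(* A different function over a different (overlapping, open) set of tags:
   the wildcard lets it accept any other tag as well. *)
let warm v = match v with
  | `Red | `Orange -> true
  | _ -> false

let () =
  assert (to_string `Red = "red");
  assert (warm `Orange);
  assert (not (warm `Blue))
```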
We argue that using a different system, one that departs drastically from HM, is advantageous. In this work we advocate the use of full-fledged union types (i.e., the original motivation of polymorphic variants) with standard set-theoretic subtyping. In particular we use semantic subtyping (Frisch et al. 2008), a type system where (i) types are interpreted as sets of values, (ii) they are enriched with unrestrained unions, intersections, and negations interpreted as the corresponding set-theoretic operations on sets of values, and (iii) subtyping corresponds to set containment. Using set-theoretic types and subtyping yields a much more natural and easy-to-understand system in which several key notions (e.g., bounded quantification and the exhaustiveness and redundancy analyses of pattern matching) can be expressed directly by types; conversely, with the current formalization these notions need metatheoretic constructions (in the case of kinding) or they are metatheoretic properties not directly connected to the type theory (as for exhaustiveness and redundancy).
All in all, our proposal is not very original: in order to have the advantages of union types in an implicitly-typed language, we simply add them, instead of simulating them roughly and partially by polymorphic variants. This requires generalizing notions such as instantiation and generalization to cope with subtyping (and, thus, with unions). We chose not to start from scratch, but instead to build on the existing system: therefore we show how to add unions as a modification of the type checker, that is, without disrupting the current syntax of OCaml. Nevertheless, our results can be used to add unions to other implicitly-typed languages of the ML family.
We said that the use of kinding constraints instead of full-fledged unions has several practical drawbacks and that the system may therefore exhibit unintuitive or overly restrictive behaviour. We illustrate this with the following motivating examples in OCaml.
EXAMPLE 1: loss of polymorphism. Let us consider the identity function and its application to a polymorphic variant in OCaml ("#" denotes the interactive toplevel prompt of OCaml, whose input is ended by a double semicolon and followed by the system response). The typing of the function above is tainted by two approximations: (i) the domain of the function should be [< `A | `B ], but, since the argument x is passed to the function id2, OCaml deduces the type [ `A | `B ] (a shorthand for [< `A | `B > `A | `B ]), which is less precise: it loses subtype polymorphism; (ii) the return type states that f yields either `A or `B, while it is easy to see that only the latter is possible (when the argument is `A the function returns `B, and when the argument is `B the function returns the argument, that is, `B). The type the system deduces for f thus exhibits both approximations. To recover the correct type, we need to state explicitly that the second pattern will only be used when y is `B, by using the alias pattern `B as y. This is a minor inconvenience here, but writing the type for y is not always possible and is often more cumbersome.
Likewise, OCaml unduly restricts the type of the function

# let g x = match x with `A → id2 x | _ → x ;;
val g : ([< `A | `B > `A ] as α) → α = <fun>

as it states that g can only be applied to `A or `B; actually, it can be applied safely to, say, `C or any variant value with any other tag.
The system adds the upper bound `A | `B because id2 is applied to x. However, the application is evaluated only when x = `A: hence, this bound is unnecessary. The lower bound `A is unnecessary too. The problem with these two functions is not specific to variant types. It is more general, and it stems from the lack of full-fledged connectives (union, intersection, and negation) in the types, a lack which neither allows the system to type a given pattern-matching branch by taking into account the cases the previous branches have already handled (e.g., the typing of the second branch in f), nor allows it to use the information provided by a pattern to refine the typing of the branch code (e.g., the typing of the first branch in g).
As a matter of fact, we can reproduce the same problem as for g, for instance, on lists:

Outline. Section 2 defines the syntax and semantics of the language we will study throughout this work. Sections 3 and 4 present two different type systems for this language.
In particular, Section 3 briefly describes the K type system we have developed as a formalization of how polymorphic variants are typed in OCaml. Section 4 describes the S type system, which employs set-theoretic types with semantic subtyping: we first give a deductive presentation of the system, and then we compare it to K to show that S can type every program that the K system can type. Section 5 defines a type reconstruction algorithm that is sound and complete with respect to the S type system. Section 6 presents three extensions or modifications of the system: the first is the addition of overloaded functions; the second is a refinement of the typing of pattern matching, which we need to type precisely the functions g and map of Example 2; the third is a restriction which solves a discrepancy between our model and OCaml (the lack of type tagging at runtime in the OCaml implementation).
Finally, Section 7 compares our work with other formalizations of polymorphic variants and with previous work on systems with set-theoretic type connectives, and Section 8 concludes the presentation and points out some directions for future research.
For space reasons we omitted all the proofs as well as some definitions. They can be found in the Appendix.

The language of polymorphic variants
In this section, we define the syntax and semantics of the language with polymorphic variants and pattern matching that we study in this work. In the sections following this one we will define two different type systems for it (one with kinds in Section 3, the other with set-theoretic types in Section 4), as well as a type reconstruction algorithm (Section 5).

Syntax
We assume that there exist a countable set X of expression variables, ranged over by x, y, z, . . . , a set C of constants, ranged over by c, and a set 𝕃 of tags, ranged over by tag. Tags are used to label variant expressions.

Definition 2.1 (Expressions).
An expression e is a term inductively generated by the following grammar:

e ::= x | c | λx. e | e e | (e, e) | tag(e) | match e with (pi → ei)i∈I

where p ranges over the set P of patterns, defined below. We write E to denote the set of all expressions.
We define fv(e) to be the set of expression variables occurring free in the expression e, and we say that e is closed if and only if fv(e) is empty. As customary, we consider expressions up to α-renaming of the variables bound by abstractions and by patterns.
The language is a λ-calculus with constants, pairs, variants, and pattern matching. Constants include a dummy constant ( ) ('unit') to encode variants without arguments; multiple-argument variants are encoded with pairs. Matching expressions specify one or more branches (indexed by a set I) and can be used to encode let-expressions: let x = e0 in e1 is encoded as match e0 with x → e1.
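This encoding is easy to sanity-check in OCaml itself, where a one-branch match on a variable pattern behaves exactly like a let:

```ocaml
(* let x = e0 in e1 versus match e0 with x -> e1: same result *)
let v1 = let x = 6 * 7 in x
let v2 = match 6 * 7 with x -> x

let () = assert (v1 = 42 && v2 = 42)
```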

Definition 2.2 (Patterns).
A pattern p is a term inductively generated by the following grammar:

p ::= _ | x | c | (p1, p2) | tag(p) | p1&p2 | p1|p2

such that (i) in a pair pattern (p1, p2) or an intersection pattern p1&p2, capt(p1) ∩ capt(p2) = ∅; (ii) in a union pattern p1|p2, capt(p1) = capt(p2); where capt(p) denotes the set of expression variables occurring as sub-terms of a pattern p (called the capture variables of p). We write P to denote the set of all patterns.
Patterns have the usual semantics. A wildcard " " accepts any value and generates no bindings; a variable pattern accepts any value and binds the value to the variable. Constants only accept themselves and do not bind. Pair patterns accept pairs if each subpattern accepts the corresponding component, and variant patterns accept variants with the same tag if the argument matches the inner pattern (in both cases, the bindings are those of the sub-patterns). Intersection patterns require the value to match both sub-patterns (they are a generalization of the alias patterns p as x of OCaml), while union patterns require it to match either of the two (the left pattern is tested first).
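These pattern forms correspond closely to constructs OCaml already has: union patterns are OCaml's or-patterns (tried left to right), and alias patterns p as x are the special case of intersection patterns mentioned above. A small illustration of ours:

```ocaml
(* Or-pattern with an alias: y is bound to the matched value itself. *)
let rank v = match v with
  | `A -> 0
  | (`B | `C) as y -> if y = `B then 1 else 2

let () =
  assert (rank `A = 0);
  assert (rank `B = 1);
  assert (rank `C = 2)
```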

Semantics
We now define a small-step operational semantics for this calculus. First, we define the values of the language.

Definition 2.3 (Values). A value v is a closed expression inductively generated by the following grammar.
v ::= c | λx. e | (v, v) | tag(v)

We now formalize the intuitive semantics of patterns that we have presented above.
Bindings are expressed in terms of expression substitutions, ranged over by ς: we write [v1/x1, . . . , vn/xn] for the substitution that replaces free occurrences of xi with vi, for each i. We write eς for the application of the substitution ς to an expression e; we write ς1 ∪ ς2 for the union of disjoint substitutions.
The semantics of pattern matching we have described is formalized by the definition of v/p given in Figure 1. In a nutshell, v/p is the result of matching a value v against a pattern p. We have either v/p = ς, where ς is a substitution defined on the variables in capt(p), or v/p = Ω. In the former case, we say that v matches p (or that p accepts v); in the latter, we say that matching fails.
Note that the unions of substitutions in the definition are always disjoint because of our linearity condition on pair and intersection patterns. The condition that sub-patterns of a union pattern p1|p2 must have the same capture variables ensures that v/p1 and v/p2 will be defined on the same variables.
Finally, we describe the reduction relation. It is defined by the following two notions of reduction

(λx. e) v ⇝ e[v/x]
match v with (pi → ei)i∈I ⇝ ej ς    if v/pj = ς and ∀i < j. v/pi = Ω

applied with a leftmost-outermost strategy which reduces neither inside λ-abstractions nor in the branches of match expressions. The first reduction rule is the ordinary rule for call-by-value β-reduction. It states that the application of an abstraction λx. e to a value v reduces to the body e of the abstraction, where x is replaced by v. The second rule states that a match expression on a value v reduces to the branch ej corresponding to the first pattern pj for which matching is successful. The obtained substitution is applied to ej, replacing the capture variables of pj with sub-terms of v. If no pattern accepts v, the expression is stuck.
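The matching function v/p lends itself to a direct implementation. The following OCaml sketch (our own encoding; the constructor names Cst, Tagged, PTag, etc. are ours) returns Some ς on success and None for Ω:

```ocaml
(* Values and patterns of the calculus (abstractions omitted for brevity). *)
type value =
  | Cst of int
  | Pair of value * value
  | Tagged of string * value

type pattern =
  | Wild                        (* _  *)
  | Var of string
  | PCst of int
  | PPair of pattern * pattern
  | PTag of string * pattern
  | And of pattern * pattern
  | Or of pattern * pattern

(* v/p: substitutions are association lists from variables to values. *)
let rec vmatch v p =
  match v, p with
  | _, Wild -> Some []
  | _, Var x -> Some [ (x, v) ]
  | Cst c, PCst c' when c = c' -> Some []
  | Pair (v1, v2), PPair (p1, p2) ->
      (match vmatch v1 p1, vmatch v2 p2 with
       | Some s1, Some s2 -> Some (s1 @ s2)   (* disjoint by linearity *)
       | _ -> None)
  | Tagged (t, v'), PTag (t', p') when t = t' -> vmatch v' p'
  | _, And (p1, p2) ->
      (match vmatch v p1, vmatch v p2 with
       | Some s1, Some s2 -> Some (s1 @ s2)
       | _ -> None)
  | _, Or (p1, p2) ->
      (match vmatch v p1 with Some s -> Some s | None -> vmatch v p2)
  | _ -> None                                 (* matching fails: Ω *)

let () =
  (* A(3) matched against A(x) binds x to 3 *)
  assert (vmatch (Tagged ("A", Cst 3)) (PTag ("A", Var "x"))
          = Some [ ("x", Cst 3) ]);
  (* constants accept only themselves *)
  assert (vmatch (Cst 1) (PCst 2) = None)
```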

Typing variants with kinding constraints
In this section, we formalize K, the type system with kinding constraints for polymorphic variants as featured in OCaml; we will use it to gauge the merits of S, our type system with set-theoretic types. This formalization is derived from, and extends, the published systems based on structural polymorphism (Garrigue 2002, 2015). To our knowledge, no formalization in the literature includes polymorphic variants, let-polymorphism, and full-fledged pattern matching (see Section 7), which is why we give a new one here. While based on existing work, the formalization is far from trivial (which with hindsight explains its absence), and thus we needed to prove all its properties from scratch. For space reasons we outline just the features that distinguish our formalization, namely variants, pattern matching, and type generalization for pattern capture variables. The Appendix presents the full definitions and proofs of all properties.
The system consists essentially of the core ML type system with the addition of a kinding system to distinguish normal type variables from constrained ones. Unlike normal variables, constrained ones cannot be instantiated into any type, but only into other constrained variables with compatible constraints. They are used to type variant expressions: there are no 'variant types' per se. Constraints are recorded in kinds and kinds in a kinding environment (i.e., a mapping from type variables to kinds) which is included in typing judgments. An important consequence of using kinding constraints is that they implicitly introduce (a limited form of) recursive types, since a constrained type variable may occur in its constraints.
We assume that there exists a countable set V of type variables, ranged over by α, β, γ, . . . . We also consider a finite set B of basic types, ranged over by b, and a function b(·) from constants to basic types. For instance, we might take B = {bool, int, unit}, with b(true) = bool, b(( )) = unit, and so on.
Definition 3.1 (Types). A type τ is a term inductively generated by the following grammar.
The system only uses the types of core ML: all additional information is encoded in the kinds of type variables.
Kinds have two forms: the unconstrained kind "•" classifies "normal" variables, while variables used to type variants are given a constrained kind. Constrained kinds are triples describing which tags may or may not appear (presence information) and which argument types are associated with each tag (typing information). The presence information is split into two parts, a lower and an upper bound. This is necessary to provide an equivalent to both covariant and contravariant subtyping (without actually having subtyping in the system), that is, to allow both variant values and functions defined on variant values to be polymorphic.

Definition 3.2 (Kinds).
A kind κ is either the unconstrained kind "•" or a constrained kind, that is, a triple (L, U, T) where:
- L is a finite set of tags {tag1, . . . , tagn};
- U is either a finite set of tags or the set 𝕃 of all tags;
- T is a finite set of pairs of a tag and a type, written {tag1 : τ1, . . . , tagn : τn} (its domain dom(T) is the set of tags occurring in it);
and where the following conditions hold:
- tags in L have a single type in T, that is, if tag ∈ L, whenever both tag : τ1 ∈ T and tag : τ2 ∈ T, we have τ1 = τ2.
In OCaml, kinds are written with the typing information inlined in the lower and upper bounds. These are introduced by > and < respectively and, if missing, ∅ is assumed for the lower bound and 𝕃 for the upper. For instance, [< `A of int | `B of unit > `A ] denotes a type variable whose kind is ({A}, {A, B}, {A : int, B : unit}).

We identify a type scheme ∀∅. ∅ ⊲ τ, which quantifies no variable, with the type τ itself. We consider type schemes up to renaming of the variables they bind and disregard useless quantification (i.e., quantification of variables that do not occur in the type).
Type schemes single out, by quantifying them, the variables of a type which can be instantiated. In ML without kinds, the quantified variables of a scheme can be instantiated with any type. The addition of kinds changes this: variables with constrained kinds may only be instantiated into other variables with equally strong or stronger constraints. This relation on constraints is formalized by an entailment relation κ1 ≼ κ2, meaning that κ1 is a constraint stronger than κ2. This relation is used to select the type substitutions (ranged over by θ) that are admissible, that is, that are sound with respect to kinding.

Definition 3.4 (Admissibility of a type substitution). A type substitution θ is admissible between two kinding environments K and K′, written K ⊢ θ : K′, if and only if, for every type variable α such that K(α) = (L, U, T), αθ is a type variable such that K′(αθ) = (L′, U′, T′) and (L′, U′, T′) ≼ (L, U, Tθ).
In words, whenever α is constrained in K, then αθ must be a type variable constrained in K ′ by a kind that entails the substitution instance of the kind of α in K.
The set of instances of a type scheme is now obtained by applying only admissible substitutions.
Definition 3.5 (Instances of a type scheme). The set of instances of a type scheme ∀A. K′ ⊲ τ in a kinding environment K is the set of all types τθ, where θ ranges over the substitutions of the variables in A that are admissible (in the sense of Definition 3.4) from K, K′ to K.

As customary, this set is used in the rule that types expression variables. Notice that typing judgments are of the form K; Γ ⊢K e : τ: the premises include a type environment Γ but also, which is new, a kinding environment K (the K subscript in the turnstile symbol is to distinguish this relation from ⊢S, the relation for the set-theoretic type system of the next section).
The typing rules for constants, abstractions, applications, and pairs are straightforward. There remain the rules for variants and for pattern matching, which are the only interesting ones.

Tk-Tag
The typing of variant expressions uses the kinding environment. Rule Tk-Tag states that tag(e) can be typed by any variable α such that α has a constrained kind in K which entails the "minimal" kind for this expression. Specifically, if K(α) = (L, U, T ), then we require tag ∈ L and tag : τ ∈ T , where τ is a type for e. Note that T may not assign more than one type to tag, since tag ∈ L.
The typing of pattern matching is by far the most complex part of the type system and it is original to our system.

Tk-Match
Let us describe each step that the rule above implies. First, the rule deduces the type τ0 of the matched expression (K; Γ ⊢K e0 : τ0). Second, for each pattern pi, it generates the type environment Γi which assigns types to the capture variables of pi, assuming pi is matched against a value known to be of type τ0. This is done by deducing the judgment K ⊢ pi : τ0 ⇒ Γi, whose inference system is mostly straightforward (see Figure 8 in the Appendix); for instance, for variable patterns we have K ⊢ x : τ0 ⇒ {x : τ0}. The only subtle point of this inference system is the rule for patterns of the form tag(p), which, after generating the environment for the capture variables of p, checks whether the type of the matched expression is a variant type (i.e., a variable) with the right constraints for tag. Third, the rule Tk-Match types each branch ei with type τ, in a type environment updated with genK;Γ(Γi), that is, with the generalization of the Γi generated by K ⊢ pi : τ0 ⇒ Γi. The definition of generalization is standard: it corresponds to quantifying all the variables that do not occur free in the environment Γ. The subtle point is the definition of the free variables of a type (and hence of an environment), which we omit for space reasons: it must navigate the kinding environment K to collect all the variables that can be reached by following the constraints; hence, the gen function takes K as an argument as well as Γ.
Finally, the premises of the rule also include the exhaustiveness condition τ0 ⊩K { pi | i ∈ I }, which checks whether every possible value that e0 can produce matches at least one pattern pi. The definition of exhaustiveness is quite convoluted.
Definition 3.6 (Exhaustiveness). We say that a set of patterns P is exhaustive with respect to a type τ in a kinding environment K, and we write τ ⊩K P, when, for every K′, θ, and v, if K ⊢ θ : K′ and v has type τθ in K′, then v/p ≠ Ω for some p ∈ P. In words, P is exhaustive when every value that can be typed with any admissible substitution of τ is accepted by at least one pattern in P. OCaml does not impose exhaustiveness (it just signals non-exhaustiveness with a warning), but our system does. We do so in order to have a simpler statement of soundness and to facilitate the comparison with the system of the next section. We do not discuss how exhaustiveness can be computed effectively; for more information on how OCaml checks it, see Garrigue (2004) and Maranget (2007).
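For intuition, here is a toy OCaml example of ours: the first function is exhaustive only for the closed set of tags OCaml infers for it, while the wildcard makes the second exhaustive for every possible tag.

```ocaml
(* Exhaustive over the closed type [< `F | `T ] that OCaml infers. *)
let neg = function `T -> `F | `F -> `T

(* The wildcard makes this exhaustive for any tag whatsoever. *)
let to_bool = function `T -> true | _ -> false

let () =
  assert (neg `T = `F);
  assert (to_bool `F = false);
  assert (to_bool `Whatever = false)
```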
We conclude this section by stating the type soundness property of the K type system.

Typing variants with set-theoretic types
We now describe S, a type system for the language of Section 2 based on set-theoretic types. The approach we take in its design is drastically different from that followed for K. Rather than adding a kinding system to record information that types cannot express, we directly enrich the syntax of types so they can express all the notions we need. Moreover, we add subtyping-using a semantic definition-rather than encoding it via instantiation. We exploit type connectives and subtyping to represent variant types as unions and to encode bounded quantification by union and intersection. We argue that S has several advantages with respect to the previous system. It is more expressive: it is able to type some programs that K rejects though they are actually type safe, and it can derive more precise types than K. It is arguably a simpler formalization: typing works much like in ML except for the addition of subtyping, we have explicit types for variants, and we can type pattern matching precisely and straightforwardly. Indeed, as regards pattern matching, an advantage of the S system is that it can express exhaustiveness and non-redundancy checking as subtyping checks, while they cannot be expressed at the level of types in K.
Naturally, subtyping brings its own complications. We do not discuss its definition here, since we reuse the relation defined by Castagna and Xu (2011). The use of semantic subtyping makes the definition of a typing algorithm challenging: Castagna et al. (2014, 2015) show how to define one in an explicitly-typed setting. Conversely, we study here an implicitly-typed language and hence study the problem of type reconstruction (in the next section).
While this system is based on that described by Castagna et al. (2014, 2015), there are significant differences, which we discuss in Section 7. Notably, intersection types play a more limited role in our system (no rule allows the derivation of an intersection of arrow types for a function), making our type reconstruction complete.

Types and subtyping
As before, we consider a set V of type variables (ranged over by α, β, γ, . . . ) and the sets C, L, and B of language constants, tags, and basic types (ranged over by c, tag, and b respectively).

Definition 4.1 (Types).
A type t is a term coinductively produced by the following grammar:

t ::= α | b | c | t → t | t × t | tag(t) | t ∨ t | ¬t | 0

where types of the forms b, c, t → t, t × t, and tag(t) are called atoms (respectively: basic, constant, arrow, product, or variant). We require types to be regular (to have finitely many distinct sub-terms) and contractive (every infinite branch must contain infinitely many occurrences of atoms).

We introduce the following abbreviations:

t1 ∧ t2 ≝ ¬(¬t1 ∨ ¬t2)    t1 \ t2 ≝ t1 ∧ ¬t2    1 ≝ ¬0

With respect to the types in Definition 3.1, we add several new forms. We introduce set-theoretic connectives (union, intersection, and negation), as well as bottom (the empty type 0) and top (1) types. We add general (uniform) recursive types by interpreting the grammar coinductively, while K introduces recursion via kinds. Contractivity is imposed to rule out ill-formed types such as those satisfying the equation t = t ∨ t (which gives no information on the set of values it represents) or t = ¬t (which cannot represent any set of values).
We introduce explicit types for variants. These types have the form tag(t): the type of variant expressions with tag tag and an argument of type t. 3 Type connectives allow us to represent all variant types of K by combining types of this form, as we describe in detail below. Finally, we add singleton types for constants (e.g., a type true which is a subtype of bool), which we use to type pattern matching precisely.
Variant types and bounded quantification. K uses constrained variables to type variants; when these variables are quantified in a type scheme, their kind constrains the possible instantiations of the scheme. This is essentially a form of bounded quantification: a variable of kind (L, U, T ) may only be instantiated by other variables which fall within the bounds-the lower bound being determined by L and T , the upper one by U and T .
In S, we can represent these bounds as unions of variant types tag(t). For instance, consider in K a constrained variable α of kind ({A}, {A,B}, {A : bool, B : int}). If we quantify α, we can then instantiate it with variables whose kinds entail that of α. Using our variant types and unions, we write the lower bound as tL = A(bool) and the upper one as tU = A(bool) ∨ B(int). In our system, α should be a variable with bounded quantification, which can only be instantiated by types t such that tL ≤ t ≤ tU.
However, we do not need to introduce bounded quantification as a feature of our language: we can use type connectives to encode it as proposed by Castagna and Xu (2011, cf. Footnote 4 therein). The possible instantiations of α (with the bounds above) and the possible instantiations of (tL ∨ β) ∧ tU, with no bound on β, are equivalent. We use the latter form: we internalize the bounds in the type itself by union and intersection. In this way, we need no system of constraints extraneous to types.
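As a concrete instance of this encoding, the bounds of the example above can be built in a toy type AST (all constructor names below are our own, not the paper's syntax):

```ocaml
(* A toy fragment of the set-theoretic type algebra. *)
type ty =
  | TVar of string
  | TBool
  | TInt
  | TTag of string * ty          (* variant type tag(t) *)
  | Or of ty * ty                (* union *)
  | And of ty * ty               (* intersection *)

let t_l = TTag ("A", TBool)                          (* tL = A(bool) *)
let t_u = Or (TTag ("A", TBool), TTag ("B", TInt))   (* tU = A(bool) ∨ B(int) *)

(* Internalizing the bounds: β can be substituted freely, yet the
   resulting type always lies between t_l and t_u. *)
let encoded beta = And (Or (t_l, beta), t_u)

let () =
  assert (encoded (TVar "β")
          = And (Or (TTag ("A", TBool), TVar "β"),
                 Or (TTag ("A", TBool), TTag ("B", TInt))))
```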
Subtyping. There exists a subtyping relation between types. We write t1 ≤ t2 when t1 is a subtype of t2; we write t1 ≃ t2 when t1 and t2 are equivalent with respect to subtyping, that is, when t1 ≤ t2 and t2 ≤ t1. The definition and properties of this relation are studied by Castagna and Xu (2011), except for variant types which, for this purpose, we encode as pairs (cf. Footnote 3). In brief, subtyping is given a semantic definition, in the sense that t1 ≤ t2 holds if and only if ⟦t1⟧ ⊆ ⟦t2⟧, where ⟦·⟧ is an interpretation function mapping types to sets of elements of some domain (intuitively, the set of values of the language). The interpretation is "set-theoretic" in that it interprets union types as unions, negation as complementation, and products as Cartesian products.

Ts-Var
In general, in the semantic-subtyping approach, we consider a type to denote the set of all values that have that type (we will say that some type "is" the set of values of that type). In particular, for arrow types, t1 → t2 is the type of function values (i.e., λ-abstractions) which, if they are given an argument in t1 and they do not diverge, yield a result in t2. Hence, all types of the form 0 → t, for any t, are equivalent (as only diverging expressions can have type 0): any of them is the type of all functions. Conversely, 1 → 0 is the type of functions that (provably) diverge on all inputs: a function of this type should yield a value in the empty type whenever it terminates, and that is impossible.
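Incidentally, OCaml itself exhibits an HM analogue of this "diverges on all inputs" reading: an everywhere-diverging function gets the scheme 'a -> 'b, so its (never-produced) result unifies with any type. A small check of ours:

```ocaml
(* loop never returns, so OCaml infers val loop : 'a -> 'b:
   its result type is unconstrained, like a result in the empty type. *)
let rec loop x = loop x

(* 'b unifies with int here; the diverging branch is never taken. *)
let () = assert ((if false then loop () else 0) = 0)
```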
The presence of variables complicates the definition of semantic subtyping. Here, we just recall from Castagna and Xu (2011) that subtyping is preserved by type substitutions: t1 ≤ t2 implies t1θ ≤ t2θ for every type substitution θ.

Type system
We present S focusing on the differences with respect to the system of OCaml (i.e., K); full definitions are in the Appendix. Unlike in K, type schemes here are defined just as in ML as we no longer need kinding constraints. As in K, we identify a type scheme ∀∅. t with the type t itself, we consider type schemes up to renaming of the variables they bind, and we disregard useless quantification.
We write var(t) for the set of type variables occurring in a type t; we say they are the free variables of t, and we say that t is ground or closed if and only if var(t) is empty. The (coinductive) definition of var can be found in Castagna et al. (2014, Definition A.2).
Unlike in ML, types in our system can contain variables which are irrelevant to the meaning of the type. For instance, α × 0 is equivalent to 0 (with respect to subtyping), as we interpret product types as Cartesian products. Thus, α is irrelevant in α × 0. To capture this concept, we introduce the notion of the meaningful variables of a type t. We define these to be the set

mvar(t) = { α ∈ var(t) | t[0/α] ≄ t }

where the choice of 0 to replace α is arbitrary (any other closed type yields the same definition). Equivalent types have exactly the same meaningful variables. To define generalization, we allow quantifying variables which are free in the type environment but are meaningless in it (intuitively, we act as if types were in a canonical form without irrelevant variables).
We extend var to type schemes as var(∀A. t) = var(t) \ A, and do likewise for mvar.
Type substitutions are defined in a standard way by coinduction; there being no kinding system, we do not need the admissibility condition of K.
We define type environments Γ as usual. The operations of generalization of types and instantiation of type schemes, instead, must account for the presence of irrelevant variables and of subtyping.
Generalization with respect to Γ quantifies all variables in a type except those that are free and meaningful in Γ:

genΓ(t) = ∀A. t    where A = var(t) \ mvar(Γ)

We extend gen pointwise to sets of bindings {x1 : t1, . . . , xn : tn}.
The set of instances of a type scheme is given by

inst(∀A. t) = { tθ | dom(θ) ⊆ A }

and we say that a type scheme s1 is more general than a type scheme s2, written s1 ⊑ s2, if

∀t2 ∈ inst(s2). ∃t1 ∈ inst(s1). t1 ≤ t2.    (1)

Notice that the use of subtyping in the definition above generalizes the corresponding definition of ML (which uses equality) and subsumes the notion of "admissibility" of K by a far simpler and more natural relation (cf. Definitions 3.4 and 3.5).

Figure 2 defines the typing relation Γ ⊢S e : t of the S type system (we use the S subscript in the turnstile symbol to distinguish this relation from that for K). All rules except that for pattern matching are straightforward. Note that Ts-Const is more precise than in K since we have singleton types, and that Ts-Tag uses the types we have introduced for variants.
The rule Ts-Match involves two new concepts that we present below. We start by typing the expression to be matched, e0, with some type t0. We also require every branch ei to be well-typed with some type t′i: the type of the whole match expression is the union of all the t′i. We type each branch in an environment expanded with types for the capture variables of pi: this environment is generated by the function ti//pi (described below) and is generalized.
The advantage of our richer types here is that, given any pattern, the set of values it accepts is always described precisely by a type.
For well-typed values v, we have v/p ≠ Ω ⇐⇒ ∅ ⊢S v : ⦇p⦈, where ⦇p⦈ denotes the accepted type of p. We use accepted types to express the condition of exhaustiveness: t0 ≤ ⋁i∈I ⦇pi⦈ ensures that every value e0 can reduce to (i.e., every value in t0) will match at least one pattern (i.e., is in the accepted type of some pattern). We also use them to compute precisely the subtypes of t0 corresponding to the values which will trigger each branch. In the rule, ti is the type of all values which will be selected by the i-th branch: those in t0 (i.e., generated by e0), not in ⦇pj⦈ for any j < i (i.e., not captured by any previous pattern), and in ⦇pi⦈ (i.e., accepted by pi). These types ti allow us to express non-redundancy checks: if ti ≤ 0 for some i, then the corresponding pattern will never be selected (which likely means the programmer has made some mistake and should receive a warning). 4

The last element we must describe is the generation of types for the capture variables of each pattern by the ti//pi function. Here, our use of ti means we exploit the shape of the pattern pi and of the previous ones to generate more precise types; environment generation in K essentially uses only t0 and is therefore less precise.
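Although the full definition of accepted types is not reproduced in this excerpt, the natural structural computation can be sketched over a toy type algebra (all names below are our own encoding, not the paper's):

```ocaml
(* A toy type algebra: top, singletons, products, variants, unions, intersections. *)
type ty =
  | Any                          (* 1 *)
  | TCst of int                  (* singleton type for a constant *)
  | TPair of ty * ty
  | TTag of string * ty
  | TOr of ty * ty
  | TAnd of ty * ty

type pat =
  | PWild
  | PVar of string
  | PCst of int
  | PPair of pat * pat
  | PTag of string * pat
  | PAnd of pat * pat
  | POr of pat * pat

(* Wildcards and variables accept every value; the other cases
   mirror the structure of the pattern. *)
let rec accepted = function
  | PWild | PVar _ -> Any
  | PCst c -> TCst c
  | PPair (p1, p2) -> TPair (accepted p1, accepted p2)
  | PTag (t, p) -> TTag (t, accepted p)
  | PAnd (p1, p2) -> TAnd (accepted p1, accepted p2)
  | POr (p1, p2) -> TOr (accepted p1, accepted p2)

let () =
  (* the accepted type of A(_) | B(_) is A(1) ∨ B(1) *)
  assert (accepted (POr (PTag ("A", PWild), PTag ("B", PWild)))
          = TOr (TTag ("A", Any), TTag ("B", Any)))
```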
Environment generation relies on two functions π1 and π2 which extract the first and second component of a type t ≤ 1 × 1. For instance, if t = (α×β)∨(bool×int), we have π1(t) = α∨bool and π2(t) = β ∨ int. Given any tag tag, πtag does likewise for variant types with that tag. See Castagna et al. (2014, Appendix C.2.1) and Petrucciani (2015) for the full details.
Definition 4.4 (Pattern environment generation). Given a pattern p and a type t ≤ ⦅p⦆, the type environment t//p generated by pattern matching is defined inductively on the structure of p.

The S type system is sound, as stated by the following properties.
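The inductive cases are not reproduced in this excerpt; as a sketch, the representative clauses, reconstructed from the projection operators π1, π2, and πtag described above, should look roughly as follows (our reconstruction, not the paper's figure):

```latex
t \,/\!/\, x = \{x : t\} \qquad
t \,/\!/\, \_ \,= \varnothing \qquad
t \,/\!/\, c = \varnothing
\\[4pt]
t \,/\!/\, (p_1, p_2) = (\pi_1(t) \,/\!/\, p_1) \cup (\pi_2(t) \,/\!/\, p_2)
\qquad
t \,/\!/\, \mathit{tag}(p) = \pi_{\mathit{tag}}(t) \,/\!/\, p
```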
Theorem 4.1 (Progress). Let e be a well-typed, closed expression (i.e., ∅ ⊢S e : t holds for some t). Then, either e is a value or there exists an expression e′ such that e ↝ e′.
Theorem 4.2 (Subject reduction). Let e be an expression and t a type such that Γ ⊢S e : t. If e ↝ e′, then Γ ⊢S e′ : t.

Corollary 4.3 (Type soundness). Let e be a well-typed, closed expression, that is, such that ∅ ⊢S e : t holds for some t. Then, either e diverges or it reduces to a value v such that ∅ ⊢S v : t.

Comparison with K
Our type system S extends K in the sense that every well-typed program of K is also well-typed in S: we say that S is complete with respect to K.
To show completeness, we define a translation ⟦·⟧K which maps k-types (i.e., types of K) to s-types (types of S). The translation is parameterized by a kinding environment K to make sense of type variables.
Definition 4.5 (Translation of types). Given a k-type τ in a non-recursive kinding environment K, its translation is the s-type ⟦τ⟧K defined inductively by the rules in Figure 3.
We define the translation of type schemes as ⟦∀A. K′ ⊲ τ⟧K = ∀A. ⟦τ⟧K,K′ and that of type environments by translating each type scheme pointwise.
The only complex case is the translation of a constrained variable. We translate it to the same variable, in union with the translation of its lower bound and in intersection with that of its upper bound. Lower bounds, and upper bounds that are finite sets of tags, are represented by a union of variant types. In K, a tag in U may be associated with more than one argument type, in which case its argument should have all of these types. This is a somewhat surprising feature of the type system of OCaml (for details, see Garrigue 2002, 2015), but here we can simply take the intersection of all the argument types. For instance, the OCaml type [< 'A of int | 'B of unit > 'A ] as α, represented in K by the type variable α with kind ({A}, {A, B}, {A : int, B : unit}), is translated to (α ∨ A(int)) ∧ (A(int) ∨ B(unit)).

The translation of an unbounded upper bound (when U is the set of all tags) is more involved. Ideally, we need the type (⋁tag∈T tag(ttag)) ∨ (⋁tag∉T tag(1)), which states that tags mentioned in T can only appear with arguments of the proper type, whereas tags not in T can appear with any argument. However, the union on the right is infinite and cannot be represented in our system; hence, in the definition in Figure 3, we use its complement with respect to the top type of variants 1V.⁵ In practice, a type (tL ∨ α) ∧ tU can be replaced by its lower bound tL (respectively, its upper bound tU) if α only appears in covariant (resp., contravariant) position.
We state the completeness property as follows.

Theorem 4.6 (Completeness). Let e be an expression, K a non-recursive kinding environment, Γ a k-type environment, and τ a k-type. If K; Γ ⊢K e : τ, then ⟦Γ⟧K ⊢S e : ⟦τ⟧K.

⁵ The type 1V can itself be defined by complementation, as the type of values which are neither constants, nor abstractions, nor pairs.
Notice that we have defined ⟦·⟧K by induction. Therefore, strictly speaking, we have only proved that S deduces all the judgments provable for non-recursive types in K: indeed, the statement requires the kinding environment K to be non-recursive.⁶ We conjecture that the result also holds with recursive kindings and that it can be proven by coinductive techniques.

Type reconstruction
In this section, we study type reconstruction for the S type system. We build on the work of Castagna et al. (2015), who study local type inference and type reconstruction for the polymorphic version of CDuce. In particular, we reuse their work on the resolution of the tallying problem, which plays in our system the same role as unification in ML.
Our contribution is threefold: (i) we prove type reconstruction for our system to be both sound and complete, while in Castagna et al. (2015) it is only proven to be sound for CDuce (indeed, we rely on the restricted role of intersection types in our system to obtain this result); (ii) we describe reconstruction with let-polymorphism and use structured constraints to separate constraint generation from constraint solving; (iii) we define reconstruction for full pattern matching. Both let-polymorphism and pattern matching are omitted in Castagna et al. (2015).
Type reconstruction for a program (a closed expression) e consists in finding a type t such that ∅ ⊢S e : t can be derived; we see it as finding a type substitution θ such that ∅ ⊢S e : αθ holds for some fresh variable α. We generalize this to non-closed expressions and to reconstruction of types that are partially known. Thus, we say that type reconstruction consists, given an expression e, a type environment Γ, and a type t, in computing a type substitution θ such that Γθ ⊢S e : tθ holds, if any such θ exists. Reconstruction in our system proceeds in two main phases. In the first, constraint generation (Section 5.1), we generate from an expression e and a type t a set of constraints that record the conditions under which e may be given type t. In the second phase, constraint solving (Sections 5.2-5.3), we solve (if possible) these constraints to obtain a type substitution θ.
We keep these two phases separate, following an approach inspired by presentations of HM(X) (Pottier and Rémy 2005): we use structured constraints which contain expression variables, so that constraint generation does not depend on the type environment Γ that e is to be typed in; Γ is only used later, during constraint solving. Constraint solving is itself made up of two steps: constraint rewriting (Section 5.2) and type-constraint solving (Section 5.3).

⁶ We say K is non-recursive if it does not contain any cycle α, α1, . . . , αn, α such that the kind of each variable in the sequence mentions the next.
In the former, we convert a set of structured constraints into a simpler set of subtyping constraints. In the latter, we solve this set of subtyping constraints to obtain a set of type substitutions; this latter step is analogous to unification in ML and is computed using the tallying algorithm of Castagna et al. (2015). Constraint rewriting also uses type-constraint solving internally; hence, these two steps are actually intertwined in practice.

Constraint generation
Given an expression e and a type t, constraint generation computes a finite set of constraints of the form defined below.

Definition 5.1 (Constraints).
A constraint c is a term inductively generated by the following grammar:

c ::= t ≤ t′ | x ≤ t | def Γ in C | let [C0] (Γi in Ci)i∈I

where C ranges over constraint sets, that is, finite sets of constraints, and where the range of every type environment Γ in constraints of the form def or let only contains types (i.e., trivial type schemes).
A constraint of the form t ≤ t′ requires tθ ≤ t′θ to hold for the final substitution θ. One of the form x ≤ t constrains the type of x (more precisely, an instantiation of its type scheme with fresh variables) in the same way. A definition constraint def Γ in C introduces new expression variables, as abstractions do; these variables may then occur in C. We use def constraints to introduce monomorphic bindings (environments containing types, not type schemes).
Finally, let constraints introduce polymorphic bindings. We use them for pattern matching: hence, we define them with multiple branches (the constraint sets Ci), each with its own environment (binding the capture variables of each pattern to types). To solve a constraint let [C0](Γi in Ci)i∈I, we first solve C0 to obtain a substitution θ; then we apply θ to all types in each Γi and generalize the resulting types; finally, we solve each Ci in an environment expanded with the generalization of Γiθ.
We define constraint generation as a relation e : t ⇒ C, given by the rules in Figure 4. We assume all variables introduced by the rules to be fresh (see the Appendix for the formal treatment of freshness: cf. Definition A.39 and Figures 13 and 14). Constraint generation for variables and constants (rules TRs-Var and TRs-Const) just yields a subtyping constraint. For an abstraction λx. e (rule TRs-Abstr), we generate constraints for the body and wrap them into a definition constraint binding x to a fresh variable α; we add a subtyping constraint to ensure that λx. e has type t by subsumption. The rules for applications, pairs, and tags are similar.
For pattern-matching expressions (rule TRs-Match), we use an auxiliary relation t///p ⇒ (Γ, C) to generate the pattern type environment Γ, together with a set of constraints C in case the environment contains new type variables. The full definition is in the Appendix; as an excerpt, consider the rules for variable and tag patterns.
The rule for variable patterns produces no constraints (and the empty environment). Conversely, the rule for tags must introduce a new variable α to stand for the argument type: the constraint produced mirrors the use of the projection operator πtag in the deductive system. To generate constraints for a pattern-matching expression, we generate them for the matched expression and for each branch separately. All of these are combined in a let constraint, together with the constraints generated by the patterns and with a constraint bounding the type of the matched expression by ⋁i∈I ⦅pi⦆ (the union of the accepted types), which ensures exhaustiveness.

Constraint rewriting
The first step of constraint solving consists in rewriting the constraint set into a simpler form that contains only subtyping constraints, that is, into a set of the form {t1 ≤ t′1, . . . , tn ≤ t′n} (i.e., no let or def constraints, and no expression variables). We call such sets type-constraint sets (ranged over by D).
Constraint rewriting is defined as a relation Γ ⊢ C ⇝ D between type environments, constraints (or constraint sets), and type-constraint sets. It is given by the rules in Figure 5.
We rewrite constraint sets pointwise. We leave subtyping constraints unchanged. In variable type constraints, we replace the variable x with an instantiation of the type scheme Γ(x) with the variables β1, . . . , βn, which we assume to be fresh. We rewrite def constraints by expanding the environment and rewriting the inner constraint set.
The complex case is that of let constraints, which is where rewriting already performs type-constraint solving. We first rewrite the constraint set C0 into a type-constraint set D0. Then we extract a solution θ0 (if any exists) by the tally algorithm described below. The algorithm can produce multiple alternative solutions; hence, this step is non-deterministic. Finally, we rewrite each of the Ci in an expanded environment. We perform generalization, so let constraints may introduce polymorphic bindings. The resulting type-constraint set is the union of the type-constraint sets obtained for each branch plus equiv(θ0), which is defined as

equiv(θ0) = ⋃α∈dom(θ0) { α ≤ αθ0, αθ0 ≤ α }.

We add the constraints of equiv(θ0) because tallying might generate multiple incompatible solutions for the constraints in D0. The choice of θ0 is arbitrary, but we must force subsequent steps of constraint solving to abide by it. Adding equiv(θ0) ensures that every solution θ to the resulting type-constraint set will satisfy αθ ≃ αθ0θ for every α, and hence will not contradict our choice. Castagna et al. (2015) define the tallying problem as the problem (in our terminology) of finding a substitution that satisfies a given type-constraint set.

Type-constraint solving
Definition 5.2. We say that a type substitution θ satisfies a type-constraint set D, written θ ⊩ D, if tθ ≤ t′θ holds for every t ≤ t′ in D. When θ satisfies D, we say it is a solution to the tallying problem of D.
The tallying problem is the analogue in our system of the unification problem in ML. However, there is a very significant difference: while unification admits principal solutions, tallying does not. Indeed, the algorithm to solve the tallying problem for a type-constraint set produces a finite set of type substitutions. The algorithm is sound in that all the substitutions it generates are solutions. It is complete in the sense that any other solution is less general than one of those in the set: we have a finite number of solutions which are principal when taken together, but not necessarily a single solution that is principal on its own. This is a consequence of our semantic definition of subtyping. As an example, consider subtyping for product types: with a straightforward syntactic definition, a constraint t1 × t′1 ≤ t2 × t′2 would simplify to the conjunction of two constraints t1 ≤ t2 and t′1 ≤ t′2. With semantic subtyping, where products are seen as Cartesian products, that simplification is sound, but it is not the only possible choice: either t1 ≤ 0 or t′1 ≤ 0 is also enough to ensure t1 × t′1 ≤ t2 × t′2, since both ensure t1 × t′1 ≃ 0. The three possible choices can produce incomparable solutions. Castagna et al. (2015, Section 3.2 and Appendix C.1) define a sound, complete, and terminating algorithm to solve the tallying problem, which can be adapted to our types by encoding variants as pairs. We refer to this algorithm here as tally (it is Sol∅ in the referenced work) and state its properties.
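The three alternatives for the product constraint can be written out explicitly; for plain product types the decomposition is:

```latex
t_1 \times t'_1 \leq t_2 \times t'_2
\iff
(t_1 \leq 0) \;\lor\; (t'_1 \leq 0) \;\lor\;
(t_1 \leq t_2 \,\wedge\, t'_1 \leq t'_2)
```

The first two disjuncts make the left-hand product empty (hence a subtype of anything); only the third corresponds to the syntactic simplification.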

Property 5.3 (Tallying algorithm).
There exists a terminating algorithm tally such that, for any type-constraint set D, tally(D) is a finite, possibly empty, set of type substitutions.
Theorem 5.1 (Soundness and completeness of tally). Let D be a type-constraint set. For any type substitution θ: (soundness) if θ ∈ tally(D), then θ ⊩ D; (completeness) if θ ⊩ D, then there exist θ′ ∈ tally(D) and a substitution θ″ such that αθ ≃ αθ′θ″ for every type variable α.

Hence, given a type-constraint set, we can use tally either to find a set of solutions or to determine that none exists: tally(D) = ∅ occurs if and only if there is no θ such that θ ⊩ D.

Properties of type reconstruction
Type reconstruction as a whole consists in generating a constraint set C from an expression, rewriting this set into a type-constraint set D (which can require solving intermediate type-constraint sets) and finally solving D by the tally algorithm. Type reconstruction is both sound and complete with respect to the deductive type system S. We state these properties in terms of constraint rewriting.

Theorem 5.2 (Soundness of constraint generation and rewriting).
Let e be an expression, t a type, and Γ a type environment. If e : t ⇒ C, Γ ⊢ C ⇝ D, and θ ⊩ D, then Γθ ⊢S e : tθ.

Theorem 5.3 (Completeness of constraint generation and rewriting). Let e be an expression, t a type, and Γ a type environment. Let θ be a type substitution such that Γθ ⊢S e : tθ, and let e : t ⇒ C. Then there exist a type-constraint set D and a type substitution θ′ such that Γ ⊢ C ⇝ D and θ′ ⊩ D, with θ′ agreeing with θ (up to equivalence) on the type variables of Γ and t.

These theorems and the properties above express soundness and completeness for the reconstruction system. Decidability is a direct consequence of the termination of the tallying algorithm.

Practical issues
As compared to reconstruction in ML, our system has the disadvantage of being non-deterministic: in practice, an implementation should check every solution that tallying generates at each step of type-constraint solving until it finds a choice that makes the whole program well-typed. This must be done at every step of generalization (that is, for every match expression) and might cripple efficiency. Whether this is significant in practice is a question that requires further study and experimentation. Testing multiple solutions cannot be avoided, since our system does not admit principal types. For instance, one can write a function that has both type ('A,'A) → 'C and type ('B,'B) → 'C (and neither is better than the other), but for which it is not possible to deduce the least upper bound of the two, (('A,'A) ∨ ('B,'B)) → 'C, which would be principal.
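The function itself is elided in this excerpt; a plausible OCaml sketch with the stated behaviour (our reconstruction, not the paper's code) is:

```ocaml
(* A function over pairs of variants: in S it can be typed both as
   ('A,'A) -> 'C and as ('B,'B) -> 'C, and neither type is better than
   the other. OCaml itself infers a single kinded type for it and warns
   that the match is not exhaustive (e.g., (`A, `B) is unmatched). *)
let f = function
  | (`A, `A) -> `C
  | (`B, `B) -> `C
```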
Multiple solutions often arise from instantiating some type variables with the empty type. Such solutions are in many cases subsumed by other, more general solutions, but not always. For instance, consider the α list data type (encoded as the recursive type X = (α, X) ∨ []) together with the classic map function over lists, whose type is (α → β) → α list → β list. The application of map to the successor function succ : int → int has type int list → int list, but also type [] → [] (obtained by instantiating all the variables of the type of map with the empty type). The latter type is correct and cannot be derived (by instantiation and/or subtyping) from the former, but it is seldom useful: it just states that map(succ) maps the empty list to the empty list. It should therefore be possible to define some preferred choice of solution (i.e., the solution that does not involve empty types) which is likely to be the most useful in practice. Indeed, we would like to try restricting the system so that it only considers solutions without empty types. While this would make us lose completeness with respect to S, it would be interesting to compare the restricted system with ML (with respect to which it could still be complete).
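In OCaml syntax, the map example reads as follows (a local map definition makes the block self-contained; succ_int is our name for the successor function):

```ocaml
(* map : ('a -> 'b) -> 'a list -> 'b list *)
let rec map f = function
  | [] -> []
  | x :: rest -> f x :: map f rest

let succ_int x = x + 1

(* In S, map succ_int has type int list -> int list, but also the
   seldom-useful [] -> [], obtained by instantiating every type
   variable of map's type with the empty type. *)
let example = map succ_int [1; 2; 3]
```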

Extensions
In this section, we present three extensions or modifications to the S type system; the presentation is just sketched for space reasons: the details of all three can be found in the Appendix.
The first is the introduction of overloaded functions typed via intersection types, as done in CDuce. The second is a refinement of the typing of pattern matching, which we have shown as part of Example 2 (the function g and our definition of map). Finally, the third is a restriction of our system to adapt it to the semantics of the OCaml implementation which, unlike our calculus, cannot compare safely untagged values of different types at runtime.

Overloaded functions
CDuce allows the use of intersection types to type overloaded functions precisely: for example, it can type the negation function with the type (true → false) ∧ (false → true), which is more precise than bool → bool. We can add this feature by changing the rule for λ-abstractions to: if ∀j ∈ J. Γ, {x : t′j} ⊢ e : tj, then Γ ⊢ λx. e : ⋀j∈J (t′j → tj), which types the abstraction with an intersection of arrow types, provided that each of them can be derived for it. This rule roughly corresponds to the one introduced by Reynolds for the language Forsythe (Reynolds 1997). With this rule alone, however, one only obtains so-called coherent overloading (Pierce 1991), that is, the possibility of assigning different types to the same piece of code, yielding an intersection type. In full-fledged overloading, instead, different pieces of code are executed for different types of the input. This possibility was first introduced by CDuce (Frisch et al. 2002; Benzaken et al. 2003) and is obtained by typing pattern matching without taking into account the branches that cannot be selected for a given input type. Indeed, the function "not" above cannot be given the type we want if we just add the rule above: it can be typed neither as true → false nor as false → true.
To use intersections effectively for pattern matching, we need to exclude redundant patterns from typing. We do so by changing the rule Ts-Match (in Figure 2): when for some branch i we have ti ≤ 0, we do not type that branch at all, and we do not consider it in the result type (that is, we set t′i = 0). In this way, if we take t′j = true, we can derive tj = false (and vice versa): if we assume that the argument is true, the second branch can never be selected, so it is sound not to type it at all. This typing technique is peculiar to CDuce's overloading; however, functions in CDuce are explicitly typed. As type reconstruction is undecidable for unrestricted intersection type systems, this extension would make annotations necessary in our system as well. We plan to study the extension of our system with intersection types for functions and to adapt reconstruction so that it also considers explicit annotations.
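For reference, the negation function in OCaml syntax; OCaml infers bool -> bool, while the intersection (true → false) ∧ (false → true) is only derivable in the extended system sketched above:

```ocaml
let not_ = function
  | true -> false
  | false -> true
(* With the modified Ts-Match: assuming the argument has the singleton
   type true, the second branch is unselectable (its t_i is empty), so
   it is left untyped, and the result type is exactly false; symmetrically
   for the singleton type false. Taking the intersection of the two
   derivations yields (true -> false) /\ (false -> true). *)
```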

Refining the type of expressions in pattern matching
Two of our motivating examples concerning pattern matching (from Section 1, Example 2) involved a refinement of the typing of pattern matching that we have not described yet, but which can be added as a small extension of the S system. Recall the function g defined as λx. match x with A → id2 x | _ → x, where id2 has domain A ∨ B. Like OCaml, S requires the type of x to be a subtype of A ∨ B, but this constraint is unnecessary because id2 x is only computed when x = A. To capture this, we need pattern matching to introduce more precise types for variables occurring in the matched expression; this is a form of occurrence typing (Tobin-Hochstadt and Felleisen 2010) or flow typing (Pearce 2013).
We first consider pattern matching on a variable. In an expression match x with (pi → ei)i∈I, we can obtain this increased precision by using the type ti (actually, its generalization) for x while typing the i-th branch. In the case of g, the first branch is typed assuming that x has type t0 ∧ A, where t0 is the type we have derived for x. As a result, the constraint t0 ∧ A ≤ A ∨ B does not restrict t0.
We can express this so as to reuse pattern environment generation. Let ⌊·⌋ : E → P be the function such that ⌊x⌋ = x and ⌊e⌋ = _ when e is not a variable. Then, we obtain the typing above if we use Γ, genΓ(ti//⌊e0⌋), genΓ(ti//pi) as the type environment in which we type the i-th branch, rather than Γ, genΓ(ti//pi).
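The function g can be written in OCaml as follows (our transcription of the example); the inferred domain illustrates the restriction that the refinement removes:

```ocaml
(* id2 has domain `A ∨ `B; OCaml infers [< `A | `B ] -> [> `A | `B ]. *)
let id2 = function `A -> `A | `B -> `B

(* OCaml (like unrefined S) forces the type of x below id2's domain,
   so g gets a closed domain and an application such as g `C is
   rejected. With the refined rule, the first branch is typed assuming
   x : t0 ∧ `A, leaving t0 unconstrained. *)
let g x = match x with
  | `A -> id2 x
  | _ -> x
```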
We generalize this approach to also refine the types of variables occurring inside pairs and variants. To do so, we redefine ⌊·⌋.
On variants, we let ⌊tag(e)⌋ = tag(⌊e⌋). On pairs, ideally we want ⌊(e1, e2)⌋ = (⌊e1⌋, ⌊e2⌋); however, pair patterns cannot contain repeated variables, while (⌊e1⌋, ⌊e2⌋) might. We therefore introduce a new form of pair pattern ⟨p1, p2⟩ (for internal use only) which admits repeated variables: environment generation for such patterns intersects the types it obtains for each occurrence of a variable.

Applicability to OCaml
A thesis of this work is that the type system of OCaml, specifically the part dealing with polymorphic variants and pattern matching, could be profitably replaced by an alternative, set-theoretic system. Of course, we need the set-theoretic system to still be type safe.
In Section 4, we stated that S is sound with respect to the semantics we gave in Section 2. However, this semantics is not precise enough, as it does not correspond to the behaviour of the OCaml implementation on ill-typed terms. Notably, OCaml does not record type information at runtime: values of different types cannot be compared safely, and constants of different basic types might have the same representation (as, for instance, 1 and true). Consider as an example two functions, one matching its argument against the pattern true, the other against the pattern (true, true). Both can be given the type 1 → bool in S, which is indeed safe in our semantics. Hence, we can apply both of them to 1, and both return false. In OCaml, conversely, the first would return true and the second would cause a crash. The types bool → bool and bool × bool → bool, respectively, would be safe for these functions in OCaml.
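Concretely, in OCaml syntax, two such functions might be (our reconstruction of the elided pair):

```ocaml
(* In S both could be typed 1 -> bool (1 being the top type); applied
   to the value 1, both return false in the calculus' semantics. In
   OCaml, where true and 1 share their runtime representation, the
   first would return true and the second would crash, so OCaml
   assigns the narrower types noted below. *)
let f1 = function true -> true | _ -> false          (* bool -> bool *)
let f2 = function (true, true) -> true | _ -> false  (* bool * bool -> bool *)
```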
To model OCaml more faithfully, we define an alternative semantics where matching a value v against a pattern p can have three outcomes rather than two: it can succeed (v/p is a substitution ς), fail (v/p = Ω), or be undefined (v/p = ℧). Matching is undefined whenever it would be unsafe in OCaml: for instance, 1/true = 1/(true, true) = ℧ (see Appendix A.5.3 for the full definition).
We keep the same definition of reduction as before (see Section 2.2). Note that a match expression on a value reduces to the first branch whose matching succeeds, provided the result is Ω for all previous branches. If matching for a branch is undefined, no branch after it can be selected; hence, there are fewer possible reductions under this semantics. Adapting the type system requires us to restrict the typing of pattern matching so that undefined results cannot arise. We define the compatible type ⌈p⌉ of a pattern p as the type of the values v which can be safely matched against it: those for which v/p ≠ ℧. For instance, ⌈1⌉ = int. The rule for pattern matching should then require the type t0 of the matched expression to be a subtype of every ⌈pi⌉.
Note that this restricts the use of union types in the system. For instance, if we have a value of type bool ∨ int, we can no longer use pattern matching to discriminate between the two cases. This is to be expected in a language without runtime type tagging: indeed, union types are primarily used for variants, which reintroduce tagging explicitly. Nevertheless, having unions of non-variant types in the system is still useful, both internally (to type pattern matching) and externally (see Example 3 in Section 1, for instance).

Related work
We discuss here the differences between our system and other formalizations of variants in ML. We also compare our work with the work on CDuce and other union/intersection type systems.

Variants in ML: formal models and OCaml
K is based on the framework of structural polymorphism and more specifically on the presentations by Garrigue (2002, 2015). There exist several other systems with structural polymorphism: for instance, the earlier one by Garrigue (1998) and more expressive constraint-based frameworks, like the presentation of HM(X) by Pottier and Rémy (2005). We have chosen as a starting point the system which corresponds most closely to the actual implementation in OCaml.
With respect to the system in Garrigue (2002, 2015), K differs mainly in three respects. First, Garrigue's system describes constraints more abstractly and can accommodate different forms of polymorphic typing of variants and of records. We only consider variants and, as a result, give a more concrete presentation. Second, we model full pattern matching instead of "shallow" case analysis. To our knowledge, pattern matching on polymorphic variants in OCaml is only treated in Garrigue (2004), and only as concerns some problems with type reconstruction. We have chosen to formalize it in order to compare K to our set-theoretic type system S, which admits a simpler formalization and more precise typing. However, we have omitted a feature of OCaml that allows refinement of variant types in alias patterns and which is modeled in Garrigue (2002) by a split construct. While this feature makes OCaml more precise than K, it is subsumed in S by the precise typing of capture variables. Third, we did not study type inference for K. Since S is more expressive than K and since we describe complete reconstruction for it, extending Garrigue's inference system to pattern matching was unnecessary for the goals of this work.
As compared to OCaml itself (or, more precisely, to the fragment we consider), our formalization differs in that it requires exhaustiveness; this might not always be practical in K, but non-exhaustive pattern matching is no longer useful once we introduce more precise types, as in S. Other differences include not considering variant refinement in alias patterns, as noted above, and the handling of conjunctive types, where OCaml is more restrictive than we are in order to infer more intuitive types (as discussed in Garrigue 2004, Section 4.1).

S and the CDuce calculus
S reuses the subtyping relation defined by Castagna and Xu (2011) and some of the work described in Castagna et al. (2014, 2015), notably the encoding of bounded polymorphism via type connectives and the algorithm to solve the tallying problem. Here, we explore the application of these elements to a markedly different language. Castagna et al. (2014, 2015) study polymorphic typing for the CDuce language, which features type-cases. Significantly, such type-cases can discriminate between functions of different types; pattern matching in ML cannot (indeed, it cannot even distinguish between functions and non-functional values). As a result, the runtime semantics of CDuce is quite involved and, unlike ours, not type-erasing; our setting has allowed us to simplify the type system too. Moreover, most of the work in Castagna et al. (2014, 2015) studies an explicitly-typed language (where functions can be typed with intersection types). In contrast, our language is implicitly typed. We focus our attention on type reconstruction and prove it sound and complete, thanks to the limited use we make of intersections. We have also introduced differences in presentation to conform our system to standard descriptions of the Hindley-Milner system.

Union types and pattern matching
The use of union and intersection types in ML has been studied in the literature of refinement type systems. For example, the theses of Davies (2005) and Dunfield (2007) describe systems where declared datatypes (such as the ordinary variants of OCaml) are refined by finite discriminated unions. Here we study a very different setting, because we consider polymorphic variants and, above all, we focus on providing complete type reconstruction, while the cited works describe forms of bidirectional type checking which require type annotations. Conversely, our system makes a more limited use of intersection types, since it does not allow the derivation of intersection types for functions. Refinement type systems are closer in spirit to the work on CDuce which is why we refer the reader to Section 7 on related work in Castagna et al. (2014) for a comprehensive comparison.
As far as programming languages are concerned, we are not aware of any implicitly-typed language with full-fledged union types. The closest match to our work is probably Typed Racket (Tobin-Hochstadt and Felleisen 2008, 2010), which represents datatypes as unions of tagged types, as we do. However, it does not perform type reconstruction: it is an explicitly-typed language with local type inference, that is, the very same setting studied for CDuce in Castagna et al. (2015), whose Section 6 contains a thorough comparison with the type system of Typed Racket. Typed Racket also features occurrence typing, which refines the types of variables according to the results of tests (combinations of predicates on base types and selectors) to give a form of flow sensitivity. We introduced a similar feature in Section 6.2: we use pattern matching and hence consider tests which are as expressive as theirs, but we do not allow them to be abstracted out as functions.

Conclusion
This work shows how to add general union, intersection and difference types in implicitly-typed languages that traditionally use the HM type system. Specifically, we showed how to improve the current OCaml type system of polymorphic variants in four different aspects: its formalization, its meta-theoretic properties, the expressiveness of the system, and its practical ramifications. These improvements are obtained by a drastic departure from the current unification-based approach and by the injection in the system of set-theoretic types and semantic subtyping.
Our approach arguably improves the formalization of polymorphic variants: in our system we directly encode all meta-theoretic notions in a core (albeit rich) type theory, while the current OCaml system must introduce sophisticated ad hoc constructions (e.g., the definition of constrained kinds, cf. Definition 3.2) to simulate subtyping. Thus, in our approach, bounded polymorphism can be encoded in terms of union and intersection types, and meta-theoretic properties such as exhaustiveness and redundancy in pattern matching can be internalized and expressed in terms of types and subtyping. Likewise, the most pleasant surprise of our formalization is the definition of the generality relation ⊑ on type schemes (cf. equation (1)): the current OCaml formalization requires complicated definitions such as the admissibility of type substitutions, while in our system it turns out to be the straightforward and natural generalization to subtyping of the usual relation of ML. A similar consideration applies to unification, which is generalized here by the notion of tallying.
In the end we obtain a type system which is very natural: if we set aside the technicalities of the rule for pattern matching, the type system really is what one expects it to be: all (and only) the classic typing rules plus a subsumption rule. And even the rule Ts-Match, the most complicated one, is in the end what one should expect it to be: (1) type the matched expression e0; (2) check whether the patterns are exhaustive; (3) for each branch, (3.i) compute the set of the results of e0 that are captured by the pattern of the branch, (3.ii) use them to deduce the types of the capture variables of the pattern, and (3.iii) generalize the types of these variables in order to type the body of the branch; and (4) return the union of the types of the branches.
The advantages of our approach are not limited to the formalization. The resulting system is more expressive (it types more programs while preserving static type safety) and more natural, insofar as it removes the pathological behaviours we outlined in the introduction as well as problems found in real life (e.g., Nicollet 2011; Wegrzanowski 2006). The solution can be even more satisfactory if we extend the current syntax of OCaml types. For instance, Nicollet (2011) shows the OCaml function function `A → `B | x → x, which transforms `A into `B and leaves any other constructor unchanged. OCaml gives this function the somewhat nonsensical type ([> `A | `B ] as α) → α. Our reconstruction algorithm deduces instead the type α → (`B | (α \ `A)): it correctly deduces that the result can be either `B or the variant in input, but can never be `A (for further examples of the use of difference types see  2007; [CAML-LIST 4] 2004). If we want to preserve the current syntax of OCaml types, this type must be approximated as ([> `B ] as α) → α; however, if we extend the syntax with differences (which in our system come for free), we gain the expressiveness that the kinding approach can only achieve with explicit row variables and that is needed, for instance, to encode exceptions (Blume et al. 2008). But we can do more: by also allowing intersections in the syntax of OCaml types, we could give Nicollet's function the type (`A → `B) & ((α \ `A) → (α \ `A)), which is exact since it states that the function maps `A to `B and leaves any argument other than `A unchanged. As an aside, notice that types of this form provide an exact typing of exception handlers as intended by Blume et al. (2008): Nicollet's function can be seen as a handler that catches the exception `A yielding `B and lets all other values pass through.
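The behaviour discussed above can be checked directly in current OCaml; the snippet below is ours and only illustrates the inferred type ([> `A | `B ] as α) → α and the runtime behaviour, not the set-theoretic typing.

```ocaml
(* Nicollet's function: rewrite the tag `A into `B and let any other
   variant pass through unchanged. Current OCaml infers for it the type
   ([> `A | `B ] as 'a) -> 'a discussed above. *)
let f = function
  | `A -> `B
  | x -> x

let () =
  assert (f `A = `B);   (* `A is rewritten to `B *)
  assert (f `C = `C)    (* any other tag is left unchanged *)
```

Note that the polymorphic row type lets `f` be applied to tags such as `C that are mentioned in neither branch.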
Finally, our work improves some aspects of the theory of semantic subtyping as well: our type reconstruction copes with let-polymorphism and pattern matching, and it is proven to be not only sound but also complete, properties that the system in Castagna et al. (2015) does not possess. Furthermore, the refinement we proposed in Section 6.2 applies to CDuce patterns as well, and it has already been implemented in the development version of CDuce.
This work is just the first step of a long-term research effort. Our short-term plan is to finish an ongoing implementation and test it, especially as concerns the messages shown to the programmer. We also need to extend the subtyping relation used here to cope with types containing cyclic values (e.g., along the lines of the work of Bonsangue et al. (2014)): the subtyping relation of Castagna and Xu (2011) assumes that types contain only finite values, but cyclic values can be defined in OCaml.
The interest of this work is not limited to polymorphic variants. In the long term we plan to check whether building on this work it is possible to extend the syntax of OCaml patterns and types, so as to encode XML document types and provide the OCaml programmer with processing capabilities for XML documents like those that can be found in XML-centred programming languages such as CDuce. Likewise we want to explore the addition of intersection types to OCaml (or Haskell) in order to allow the programmer to define refinement types and check how such an integration blends with existing features, notably GADTs.

A. Appendix
In this Appendix, we present full definitions of the language and type systems we have described, together with complete proofs of all results.

A.1.1 Syntax
We assume that there exist a countable set X of expression variables, ranged over by x, y, z, . . . , a set C of constants, ranged over by c, and a set 𝓛 of tags, ranged over by tag.
Definition A.1 (Expressions). An expression e is a term inductively generated by the following grammar: e ::= x | c | λx. e | e e | (e, e) | tag(e) | match e with (pi → ei)i∈I where p ranges over the set P of patterns, defined below. We write E to denote the set of all expressions.
We define fv(e) to be the set of expression variables occurring free in the expression e, and we say that e is closed if and only if fv(e) is empty.
As customary, we consider expressions up to α-renaming of the variables bound by abstractions and by patterns.
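For reference, the grammar of Definition A.1, together with the fv and capt operations, can be transcribed directly as OCaml datatypes; the constructor names below are ours, not the paper's.

```ocaml
(* Patterns: _  x  c  (p1, p2)  tag(p)  p1 & p2  p1 | p2 *)
type pat =
  | PWild
  | PVar of string
  | PConst of string
  | PPair of pat * pat
  | PTag of string * pat
  | PAnd of pat * pat
  | POr of pat * pat

(* Expressions of Definition A.1. *)
type expr =
  | Var of string
  | Const of string
  | Abstr of string * expr                 (* λx. e *)
  | Appl of expr * expr
  | Pair of expr * expr
  | Tag of string * expr                   (* tag(e) *)
  | Match of expr * (pat * expr) list      (* match e with (pi → ei) *)

(* capt(p): the capture variables of a pattern. *)
let rec capt = function
  | PWild | PConst _ -> []
  | PVar x -> [ x ]
  | PPair (p1, p2) | PAnd (p1, p2) | POr (p1, p2) -> capt p1 @ capt p2
  | PTag (_, p) -> capt p

(* fv(e): free expression variables; abstractions and patterns bind. *)
let rec fv = function
  | Var x -> [ x ]
  | Const _ -> []
  | Abstr (x, e) -> List.filter (fun y -> y <> x) (fv e)
  | Appl (e1, e2) | Pair (e1, e2) -> fv e1 @ fv e2
  | Tag (_, e) -> fv e
  | Match (e0, branches) ->
      fv e0
      @ List.concat_map
          (fun (p, e) ->
            let bound = capt p in
            List.filter (fun y -> not (List.mem y bound)) (fv e))
          branches
```

For instance, fv(λx. (x, y)) = {y}, since the abstraction binds x.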

Definition A.2 (Patterns).
A pattern p is a term inductively generated by the following grammar:

p ::= _ | x | c | (p, p) | tag(p) | p&p | p|p

where, for pair and intersection patterns (p1, p2) and p1&p2, we require capt(p1) ∩ capt(p2) = ∅, and for union patterns p1|p2 we require capt(p1) = capt(p2); capt(p) denotes the set of expression variables occurring as sub-terms in a pattern p (called the capture variables of p).
We write P to denote the set of all patterns.

A.1.2 Semantics

Definition A.3 (Values). A value v is a closed expression inductively generated by the following grammar:

v ::= c | λx. e | (v, v) | tag(v)

Definition A.4 (Expression substitution). An expression substitution ς is a partial mapping of expression variables to values. We write [ v i/x i | i ∈ I ] for the substitution which replaces free occurrences of xi with vi, for each i ∈ I. We write eς for the application of the substitution to an expression e. We write ς1 ∪ ς2 for the union of disjoint substitutions.
Definition A.5 (Semantics of pattern matching). We write v/p for the result of matching a value v against a pattern p. We have either v/p = ς, where ς is a substitution defined on the variables in capt(p), or v/p = Ω. In the former case, we say that v matches p (or that p accepts v); in the latter, we say that matching fails. The definition of v/p is given inductively in Figure 6.
Definition A.6 (Evaluation contexts). Let the symbol [ ] denote a hole. An evaluation context E is a term inductively generated by the following grammar:

E ::= [ ] | E e | v E | (E, e) | (v, E) | tag(E) | match E with (pi → ei)i∈I

We write E[ e ] for the expression obtained by replacing the hole in E with the expression e.

Definition A.7 (Reduction). The reduction relation ⇝ is defined by the rules in Figure 7.
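Definition A.5 can be read as a recursive function: matching a value against a pattern returns either a substitution for the capture variables or failure Ω. A minimal executable sketch (our names; values restricted to constants, pairs, and tags) is:

```ocaml
(* v/p: Some ς on success, None for Ω. Union patterns try the left
   alternative first; intersection patterns combine the two (disjoint)
   substitutions, mirroring ς1 ∪ ς2. *)
type pat = PWild | PVar of string | PConst of int
         | PPair of pat * pat | PTag of string * pat
         | PAnd of pat * pat | POr of pat * pat

type value = VConst of int | VPair of value * value | VTag of string * value

let rec matches v p : (string * value) list option =
  match v, p with
  | _, PWild -> Some []
  | _, PVar x -> Some [ (x, v) ]
  | VConst c, PConst c' when c = c' -> Some []
  | VPair (v1, v2), PPair (p1, p2) ->
      (match matches v1 p1, matches v2 p2 with
       | Some s1, Some s2 -> Some (s1 @ s2)   (* ς1 ∪ ς2, capt disjoint *)
       | _ -> None)
  | VTag (t, v1), PTag (t', p1) when t = t' -> matches v1 p1
  | _, PAnd (p1, p2) ->
      (match matches v p1, matches v p2 with
       | Some s1, Some s2 -> Some (s1 @ s2)
       | _ -> None)
  | _, POr (p1, p2) ->
      (match matches v p1 with Some s -> Some s | None -> matches v p2)
  | _ -> None                                  (* v/p = Ω *)
```

In a match expression, the first branch whose pattern accepts the value is selected, exactly as in the rule R-Match.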

A.2.1 Definition of the K type system
We assume that there exists a countable set V of type variables, ranged over by α, β, γ, . . . . We also consider a finite set B of basic types, ranged over by b, and a function b (·) from constants to basic types.

Definition A.8 (Types). A type τ is a term inductively generated by the following grammar:

τ ::= α | b | τ → τ | τ × τ

Definition A.9 (Kinds). A kind κ is either the unconstrained kind • or a constrained kind of the form (L, U, T ), where L is a finite set of tags, U is either a finite set of tags or the set 𝓛 of all tags, and T is a finite set of bindings of the form tag : τ , and where the following conditions hold:

• L ⊆ U and, if U ≠ 𝓛, U ⊆ dom(T );
• tags in L have a single type in T , that is, if tag ∈ L, whenever both tag : τ1 ∈ T and tag : τ2 ∈ T , we have τ1 = τ2.
Definition A.10 (Kind entailment). The entailment relation · ⊩ · between constrained kinds is defined as follows.

Definition A.11 (Kinding environments). A kinding environment K is a partial mapping from type variables to kinds. We write kinding environments as K = {α1 :: κ1, . . . , αn :: κn}. We write K, K′ for the updating of the kinding environment K with the new bindings in K′. It is defined as follows.
We say that a kinding environment is closed if all the type variables that appear in the types in its range also appear in its domain. We say it is canonical if it is infinite and contains infinitely many variables of every kind.

Definition A.12 (Type schemes). A type scheme σ is of the form ∀A. K ⊲ τ , where: • A is a finite set {α1, . . . , αn} of type variables; • K is a kinding environment such that dom(K) = A.
We identify a type scheme ∀∅. ∅ ⊲ τ , which quantifies no variable, with the type τ itself. We consider type schemes up to renaming of the variables they bind and disregard useless quantification (i.e., quantification of variables that do not occur in the type).
Definition A.13 (Free variables). The set of free variables varK (σ) of a type scheme σ with respect to a kinding environment K is the minimum set satisfying the following equations.
We say that a type τ is ground or closed if and only if var∅(τ ) is empty. We say that a type or a type scheme is closed in a kinding environment K if all its free variables are in the domain of K.
Definition A.14 (Type substitutions). A type substitution θ is a finite mapping of type variables to types. We write [ τ i/α i | i ∈ I ] for the type substitution which simultaneously replaces αi with τi, for each i ∈ I. We write τ θ for the application of the substitution θ to the type τ , which is defined as follows.
We extend the var operation to substitutions as follows. We extend application of substitutions to the typing component of a constrained kind (L, U, T ): T θ is given by the pointwise application of θ to all types in T . We extend it to kinding environments: Kθ is given by the pointwise application of θ to the typing component of every constrained kind in the range of K. We extend it to type schemes ∀A. K ⊲ τ : by renaming quantified variables, we assume A ∩ (dom(θ) ∪ var∅(θ)) = ∅, and we have (∀A. K ⊲ τ )θ = ∀A. Kθ ⊲ τ θ.
We write θ1 ∪ θ2 for the union of disjoint substitutions and θ1 • θ2 for the composition of substitutions.
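The operations of Definition A.14 (application, union of disjoint substitutions, and composition θ1 • θ2) can be sketched over a small inductive type grammar; the names below are ours, and the coinductive types of the S system (Definition A.26) would require a lazier representation.

```ocaml
(* A fragment of the type grammar, and substitutions as finite maps. *)
type ty = TVar of string | TBasic of string
        | TArrow of ty * ty | TProd of ty * ty

type subst = (string * ty) list   (* [τi/αi | i ∈ I] *)

(* τθ: structural application of a substitution. *)
let rec apply (th : subst) (t : ty) : ty =
  match t with
  | TVar a -> (try List.assoc a th with Not_found -> TVar a)
  | TBasic _ -> t
  | TArrow (t1, t2) -> TArrow (apply th t1, apply th t2)
  | TProd (t1, t2) -> TProd (apply th t1, apply th t2)

(* θ1 ∪ θ2: union of substitutions with disjoint domains. *)
let union (th1 : subst) (th2 : subst) : subst =
  assert (List.for_all (fun (a, _) -> not (List.mem_assoc a th2)) th1);
  th1 @ th2

(* θ1 • θ2: apply θ2 first, then θ1, so that t(θ1 • θ2) = (tθ2)θ1. *)
let compose (th1 : subst) (th2 : subst) : subst =
  List.map (fun (a, t) -> (a, apply th1 t)) th2
  @ List.filter (fun (a, _) -> not (List.mem_assoc a th2)) th1
```

The composition order matches the use θ′ = θ • θ0 in the proof of stability under substitutions, where a renaming θ0 is performed before θ.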
We write Γ, Γ ′ for the updating of the type environment Γ with the new bindings in Γ ′ . It is defined as follows.
We extend the var operation to type environments as follows.

Definition A.17 (Generalization). We define the generalization of a type τ with respect to a kinding environment K and a type environment Γ as the type scheme genK;Γ(τ ), which quantifies the variables in varK(τ ) \ varK(Γ). We extend this definition to type environments which only contain types (i.e., trivial type schemes) by generalizing pointwise.

Definition A.18 (Instances of a type scheme). The set instK(σ) of instances of a type scheme σ with respect to a kinding environment K is defined as follows. We say that a type scheme σ1 is more general than a type scheme σ2 in K, and we write σ1 ⊑K σ2, if instK(σ1) ⊇ instK(σ2).
We extend this notion to type environments pointwise.

Definition A.19 (Pattern environment generation). The environment generated by pattern matching is given by the relation K ⊢ p : τ ⇒ Γ (the pattern p can match type τ in K, producing the bindings in Γ), defined by the rules in Figure 8.
Definition A.20 (Exhaustiveness). We say that a set of patterns P is exhaustive with respect to a type τ in a kinding environment K, and we write τ ⊩K P , when the following condition holds.

Definition A.21 (Typing relation). The typing relation K; Γ ⊢K e : τ (e is given type τ in the kinding environment K and the type environment Γ) is defined by the rules in Figure 9, where we require K to be closed and Γ and τ to be closed with respect to K. We also assume that K is canonical.
Proof. The typing rules are syntax-directed, so the last rule applied to type a value is fixed by its form. All these rules derive types of different forms, thus the form of the type assigned to a value determines the last rule used. In each case the premises of the rule entail the consequences above.

Cases TPk-And and TPk-Or Straightforward application of the induction hypothesis, analogous to the case of pair patterns.

Lemma A.4 (Stability of exhaustiveness under type substitutions).
If τ ⊩K P , then τ θ ⊩K′ P for any type substitution θ such that K ⊢ θ : K′.
Proof. By induction on the derivation of K; Γ2 ⊢ K e : τ . We reason by cases on the last applied rule.
Hence we may apply the induction hypothesis for all i to derive K; Γ1, genK;Γ1(Γi) ⊢K ei : τ and then apply Tk-Match to conclude.

Lemma A.7 (Stability of typing under type substitutions). Let K, K′ be two closed, canonical kinding environments and θ a type substitution such that K ⊢ θ : K′. If K; Γ ⊢K e : τ , then K′; Γθ ⊢K e : τ θ.
Proof. By induction on the derivation of K; Γ ⊢ K e : τ . We reason by cases on the last applied rule.

Case Tk-Var We have
By α-renaming we can assume that θ does not involve A, that is, A ∩ dom(θ) = ∅ and A ∩ var∅(θ) = ∅, and also that A ∩ (dom(K ′ ) ∪ var∅(K ′ )) = ∅, that is, that the variables in A are not assigned a kind in K ′ nor do they appear in the types in the typing component of the kinds in K ′ .
Since dom(θ′x) ⊆ A holds, we only need to establish that K′, Kxθ ⊢ θ′x : K′. This requires proving, for each α such that (K′, Kxθ)(α) = (L, U, T ), that αθ′x is a type variable whose kind entails (L, U, T θ′x). Such an α can either be in the domain of Kxθ (if and only if it is in A) or in the domain of K′. In the latter case, we have αθ′x = α, since α ∉ A, and hence its kind in K′ is the same as in K′, Kxθ. We must prove (L, U, T ) ⊩ (L, U, T θ′x), which holds because the variables in A do not appear in T since (L, U, T ) ∈ range(K′).
This assumption is justified by the following observations. The variables in A only appear quantified in the environment used for the typing derivation for e1. Therefore we may assume that they do not appear in τ : if they do, it is because they have been chosen when instantiating some type scheme and, since K is canonical, we might have chosen some other variable of the same kind. As for the occurrences of the variables in A in the derivation for e0, a similar reasoning applies. These variables do not appear free in the environment (neither directly in a type in Γ, nor in the kinds of variables which appear free in Γ). Therefore, if they occur in τ0 it is because they have been chosen either during instantiation of a type scheme or when typing an abstraction, and in both cases we might have chosen a different variable. Now we rename these variables so that θ will have no effect on them. Let B = {β1, . . . , βn} be a set of type variables such that B ∩ (dom(θ) ∪ var∅(θ)) = ∅ and B ∩ var∅(Γ) = ∅. Let θ0 = [ β1/α1 , . . . , βn/αn ] and θ′ = θ • θ0. Since K′ is canonical, we can choose each βi so that, if K(αi) = •, then K′(βi) = •, and if K(αi) = (L, U, T ), then K′(βi) = (L, U, T θ′). As for A, we choose B so that the kinds in K′ for variables not in B do not contain variables of B.
Since θ′ is admissible, by the induction hypothesis applied to θ′, we derive K; Γθ′ ⊢K e0 : τ0θ′. Since the variables in A do not appear in Γ, we have Γθ′ = Γθ. We choose τ̂0 to be τ0θ′.
We apply weakening (Lemma A.6) to derive from the latter the typing we need. To do so we must show the two premises of the weakening lemma. The latter holds because varK′(Γθ, genK′;Γθ({x : τ0θ′})) ⊆ varK′(Γθ).
As for the former, we prove genK′;Γθ({x : τ0θ′}) ⊑K′ (genK;Γ({x : τ0}))θ. By α-renaming of the quantified variables, it suffices to show B ⊆ C, which concludes the proof (because the kinding environments are both restrictions of K′). Consider βi ∈ B. We have αi ∈ varK(τ0) \ varK(Γ). Then βi = αiθ′ ∈ varK′(τ0θ′). Furthermore βi ∉ varK′(Γθ) holds because Γθ does not contain variables in B (Γ does not contain them and θ does not introduce them) and variables in B do not appear in the kinds of other variables which are not themselves in B.
We proceed as before also for the derivations for each branch. The difference is that, to apply weakening, we must prove the two premises for the environments and not for τ0 alone. The condition on variables is straightforward, as before. For the other we prove, for each x ∈ capt(pi) and assuming Γi(x) = τx, Γθ, gen K ′ ;Γθ (τxθ ′ ) ⊑ K ′ Γθ, (gen K;Γ (τx))θ .
Proof. By induction on the derivation of K; Γ, Γ ′ ⊢ K e : τ . We reason by cases on the last applied rule.
By the induction hypothesis, each of e1 and e2 either is a value or may reduce. If e1 ⇝ e′1, then e1 e2 ⇝ e′1 e2. If e1 is a value and e2 ⇝ e′2, then e1 e2 ⇝ e1 e′2. If both are values then, by Lemma A.1, e1 has the form λx. e3 for some e3. Then, we can apply R-Appl and e1 e2 ⇝ e3[ e2/x ].
Case Tk-Tag We have K; ∅ ⊢K tag(e1) : α with premise K; ∅ ⊢K e1 : τ1. Analogously to the previous case, by the induction hypothesis we have that either e1 is a value or e1 ⇝ e′1. In the former case, tag(e1) is a value as well. In the latter, we have tag(e1) ⇝ tag(e′1).

Case Tk-Match By the induction hypothesis, either e0 is a value or it may reduce. In the latter case, if e0 ⇝ e′0, then match e0 with (pi → ei)i∈I ⇝ match e′0 with (pi → ei)i∈I .
If e0 is a value, on the other hand, the expression may reduce by application of R-Match. Since τ0 ⊩K { pi | i ∈ I } and e0 is a value of type τ0 (and therefore satisfies the premises of the definition of exhaustiveness, with θ = [ ] and K′ = K), there exists at least one i ∈ I such that e0/pi = ς for some substitution ς. Let j be the least such i and ςj the corresponding substitution; then match e0 with (pi → ei)i∈I ⇝ ejςj .
Theorem A.10 (Subject reduction). Let e be an expression and τ a type such that K; Γ ⊢ K e : τ . If e e ′ , then K; Γ ⊢ K e ′ : τ .
Proof. By induction on the derivation of K; Γ ⊢ K e : τ . We reason by cases on the last applied rule.
Cases Tk-Var, Tk-Const, and Tk-Abstr These cases may not occur: variables, constants, and abstractions never reduce.
In the first case, we derive by the induction hypothesis that K; Γ ⊢ K e ′ 1 : τ ′ → τ and conclude by applying Tk-Appl again. The second case is analogous.

Case Tk-Pair We have
(e1, e2) ⇝ e′ occurs either because e1 ⇝ e′1 and e′ = (e′1, e2), or because e1 is a value, e2 ⇝ e′2, and e′ = (e1, e′2). In either case, the induction hypothesis allows us to derive that the type of the component that reduces is preserved; therefore, we can apply Tk-Pair again to conclude.
Case Tk-Tag Analogously to the previous case, a variant expression only reduces if its argument does, so we apply the induction hypothesis and Tk-Tag to conclude.
Proof. Consequence of Theorem A.9 and Theorem A.10.

A.3.1 Definition of the S type system
We consider a set V of type variables (ranged over by α, β, γ, . . . ) and the sets C, 𝓛, and B of language constants, tags, and basic types (ranged over by c, tag, and b respectively).

Definition A.22 (Types).
A type t is a term coinductively produced by the following grammar:

t ::= α | b | c | t → t | t × t | tag(t) | t ∨ t | ¬t | 0

which satisfies two additional constraints:

• (regularity) the term must have a finite number of different sub-terms; • (contractivity) every infinite branch must contain an infinite number of occurrences of atoms
(i.e., a type variable or the immediate application of a type constructor: basic, constant, arrow, product, or variant).
We introduce the following abbreviations: t1 ∧ t2 = ¬(¬t1 ∨ ¬t2), t1 \ t2 = t1 ∧ ¬t2, and 1 = ¬0.

Definition A.23 (Type schemes). A type scheme s is of the form ∀A. t, where A is a finite set {α1, . . . , αn} of type variables.
We identify a type scheme ∀∅. t with the type t itself. Furthermore, we consider type schemes up to renaming of the variables they bind, and we disregard useless quantification.
Definition A.24 (Free variables). We write var(t) for the set of type variables occurring in a type t; we say they are the free variables of t, and we say that t is ground or closed if and only if var(t) is empty.
We extend the definition to type schemes as var(∀A. t) = var(t) \ A.
The (coinductive) definition of var can be found in Castagna et al. (2014, Definition A.2).

Definition A.25 (Meaningful variables).
We define the set mvar(t) of meaningful variables of a type t as follows. We extend the definition to type schemes as mvar(∀A. t) = mvar(t) \ A.

Definition A.26 (Type substitutions).
A type substitution θ is a finite mapping of type variables to types. We write [ t i/α i | i ∈ I ] for the type substitution which simultaneously replaces αi with ti, for each i ∈ I. We write tθ for the application of the substitution θ to the type t; application is defined coinductively by the following equations.
We extend the var operation to substitutions as var(θ) = ⋃α∈dom(θ) var(αθ).
We write θ1 ∪ θ2 for the union of disjoint substitutions and θ1 • θ2 for the composition of substitutions.
We write Γ, Γ ′ for the updating of the type environment Γ with the new bindings in Γ ′ . It is defined as follows.
We extend the var operation to type environments as var(Γ) = ⋃x∈dom(Γ) var(Γ(x)), and we extend mvar likewise.
Definition A.28 (Generalization). We define the generalization of a type t with respect to the type environment Γ as the type scheme gen Γ (t) = ∀A. t where A = var(t) \ mvar(Γ).
We extend this definition to type environments which only contain types (i.e., trivial type schemes) by generalizing pointwise.

Definition A.29 (Instances of a type scheme). The set of instances of a type scheme ∀A. t is defined as inst(∀A. t) = { tθ | dom(θ) ⊆ A }. We say that a type scheme s1 is more general than a type scheme s2, and we write s1 ⊑ s2, if ∀t2 ∈ inst(s2). ∃t1 ∈ inst(s1). t1 ≤ t2 .
We extend this notion to type environments pointwise.

Definition A.30 (Accepted type). The accepted type ⟬p⟭ of a pattern p is defined inductively as: ⟬_⟭ = ⟬x⟭ = 1, ⟬c⟭ = c, ⟬(p1, p2)⟭ = ⟬p1⟭ × ⟬p2⟭, ⟬tag(p1)⟭ = tag(⟬p1⟭), ⟬p1&p2⟭ = ⟬p1⟭ ∧ ⟬p2⟭, and ⟬p1|p2⟭ = ⟬p1⟭ ∨ ⟬p2⟭.

The projection operators π1 and π2 for product types are defined by Castagna et al. (2014, Appendix C.2.1). We do not repeat the definition, but we state below the properties we need in the proofs. The projection operators for variant types correspond to π2 if we encode variant types as pairs; we therefore rephrase the same properties for them.

Property A.31 (Projections of product types). There exist two functions π1 and π2 which, given a type t ≤ 1 × 1, yield types π1(t) and π2(t) such that: • for all type substitutions θ, πi(tθ) ≤ πi(t)θ.
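On an inductive fragment of the types above, Definitions A.24, A.28, and A.29 admit a direct transcription; here mvar(Γ) is approximated by the plain free variables of the environment (which is sound on a fragment without connectives), and all names are ours.

```ocaml
(* Free variables, generalization gen_Γ(t) = ∀A. t with A = var(t) \ mvar(Γ),
   and instantiation of a type scheme. *)
type ty = TVar of string | TBasic of string
        | TArrow of ty * ty | TProd of ty * ty

type scheme = { bound : string list; body : ty }   (* ∀A. t *)

let rec var = function
  | TVar a -> [ a ]
  | TBasic _ -> []
  | TArrow (t1, t2) | TProd (t1, t2) -> var t1 @ var t2

let gen (env_vars : string list) (t : ty) : scheme =
  { bound =
      List.sort_uniq compare
        (List.filter (fun a -> not (List.mem a env_vars)) (var t));
    body = t }

(* An instance tθ of ∀A. t, for a substitution θ with dom(θ) ⊆ A. *)
let instantiate (s : scheme) (th : (string * ty) list) : ty =
  assert (List.for_all (fun (a, _) -> List.mem a s.bound) th);
  let rec apply = function
    | TVar a -> (try List.assoc a th with Not_found -> TVar a)
    | TBasic _ as t -> t
    | TArrow (t1, t2) -> TArrow (apply t1, apply t2)
    | TProd (t1, t2) -> TProd (apply t1, apply t2)
  in
  apply s.body
```

Only variables absent from the environment are quantified, exactly as in ML-style let-generalization.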

Property A.32 (Projections of variant arguments).
For every tag tag there exists a function πtag which, given a type t ≤ tag(1), yields a type πtag(t) such that: • if t ≤ t′ ≤ tag(1), then πtag(t) ≤ πtag(t′); • for all type substitutions θ, πtag(tθ) ≤ πtag(t)θ.

Definition A.33 (Pattern environment generation). Given a pattern p and a type t ≤ ⟬p⟭, the type environment t//p generated by pattern matching is defined inductively as follows.

Definition A.34 (Typing relation). The typing relation Γ ⊢S e : t (e is given type t in the type environment Γ) is defined by the rules in Figure 10.

A.3.2 Properties of the S type system
Lemma A.12 (Generation for values). Let v be a value. Then:

Figure 10. Typing relation of the S type system.
Proof. By induction on the typing derivation: values must be typed by an application of the rule corresponding to their form to appropriate premises, possibly followed by applications of Ts-Subsum.
The base cases are straightforward. In the inductive step, we just apply the induction hypothesis; for abstractions, the result follows from the behaviour of subtyping on arrow types.
We state the next three lemmas without proof, as they rely on the model of types which we have not discussed. Details can be found in Frisch et al. (2008) and Castagna and Xu (2011), as well as in Alain Frisch's PhD thesis.

Lemma A.13. For each i ∈ I, let pi be a pattern. If Γ ⊢S v : ⋁i∈I ⟬pi⟭, then there exists an i ∈ I such that Γ ⊢S v : ⟬pi⟭.

Lemma A.14. Let t be a type. Let t′ be a type such that either t′ = ⟬p⟭ or t′ = ¬⟬p⟭, for some pattern p.
Lemma A.15. Let v be a well-typed value (i.e., ∅ ⊢S v : t holds for some t) and p a pattern. Then:

Lemma A.16. Let p be a pattern and t, t′ two types. If t ≤ t′ ≤ ⟬p⟭, then, for all x ∈ capt(p), (t//p)(x) ≤ (t′//p)(x).

Proof. By structural induction on p.
Cases p = _ and p = c There is nothing to prove since capt(p) = ∅.

Case p = x We must show (t//p)(x) ≤ (t′//p)(x), that is, t ≤ t′, which we know by hypothesis.
We have t1θ1 = t1θ2 because the two substitutions differ only on variables in var(t2) \ var(t1) (which do not appear in t1 at all) or in mvar(Γ1) \ mvar(Γ2) (which is empty). Finally, we have t1θ2 ≤ t2θ2 because subtyping is preserved by substitutions.
Proof. By induction on the derivation of Γ2 ⊢ S e : t. We reason by cases on the last applied rule.

Case Ts-Var
and hence, since Γ1 ⊑ Γ2, there exists a t ′ ∈ inst(Γ1(x)) such that t ′ ≤ t. We apply Ts-Var to derive Γ1 ⊢ S x : t ′ and Ts-Subsum to conclude.
Hence we may apply the induction hypothesis for all i to derive Γ1, gen Γ 1 (ti//pi) ⊢ S ei : t ′ i and then apply Ts-Match to conclude.
Lemma A.23 (Stability of typing under type substitutions). Let θ be a type substitution. If Γ ⊢ S e : t, then Γθ ⊢ S e : tθ.
Proof. By induction on the derivation of Γ ⊢ S e : t. We reason by cases on the last applied rule.
Case Ts-Var We have Γ ⊢S x : t, with t ∈ inst(Γ(x)), and must show Γθ ⊢S x : tθ.
Cases Ts-Appl, Ts-Pair, and Ts-Tag Straightforward application of the induction hypothesis.
By the induction hypothesis, using θ′, we derive Γθ′ ⊢S e0 : t0θ′. From it, we derive Γθ ⊢S e0 : t0θ′ by weakening (Lemma A.22); we prove the required premises below. We take t̂0 = t0θ′: note that the exhaustiveness condition is satisfied because substitutions preserve subtyping (and all accepted types of patterns are closed). We have t̂i = tiθ′ for all i.
The instances of this type scheme are all types txθ ′ θx, with dom(θx) ⊆ B|J . Given such a type, we must construct an instance of gen Γθ (t ′ x ) that is a subtype of it. Let θ ′ x be the restriction of θx to variables in var(t ′ x ) \ mvar(Γθ).
We have t′xθ′x = t′xθx: the two substitutions differ only on variables in B|J \ var(t′x) (variables which do not appear in the type at all) and on variables in B|J ∩ mvar(Γθ) (which is empty, because B was chosen fresh). We then conclude by Lemma A.16.

Case Ts-Subsum The conclusion follows from the induction hypothesis since substitutions preserve subtyping.
We show t ≃ tθ, which implies both conditions (by Lemma A.21 and Lemma A.19). The equivalence holds by Lemma A.20: dom(θ) ∩ mvar(t) = ∅ because every α ∈ mvar(t) is either in A or in mvar(Γ), and in both cases this means it cannot be in dom(θ).

Lemma A.25. If Γ, Γ′ ⊢S e : t and, for all k ∈ {1, . . . , n} and for all tk ∈ inst(sk), Γ ⊢S vk : tk, then Γ ⊢S eς : t.
Proof. By induction on the derivation of Γ, Γ ′ ⊢ S e : t. We reason by cases on the last applied rule.
We must then prove Γ ⊢ S v k : t, which we know by hypothesis since t ∈ inst(s k ).
Theorem A.26 (Progress). Let e be a well-typed, closed expression (i.e., ∅ ⊢ S e : t holds for some t). Then, either e is a value or there exists an expression e ′ such that e e ′ .
Proof. By hypothesis we have ∅ ⊢ S e : t. The proof is by induction on its derivation; we reason by cases on the last applied rule.
Case Ts-Var This case does not occur because variables are not closed.
Case Ts-Const In this case e is a constant c and therefore a value.
Case Ts-Abstr In this case e is an abstraction λx. e1. Since it is also closed, it is a value.
Case Ts-Appl By the induction hypothesis, each of e1 and e2 either is a value or may reduce. If e1 ⇝ e′1, then e1 e2 ⇝ e′1 e2. If e1 is a value and e2 ⇝ e′2, then e1 e2 ⇝ e1 e′2. If both are values then, by Lemma A.12, e1 has the form λx. e3 for some e3. Then, we can apply R-Appl and e1 e2 ⇝ e3[ e2/x ].

Case Ts-Tag
Analogously to the previous case, by the induction hypothesis we have that either e1 is a value or e1 ⇝ e′1. In the former case, tag(e1) is a value as well. In the latter, we have tag(e1) ⇝ tag(e′1).

Case Ts-Match By the induction hypothesis, either e0 is a value or it may reduce. In the latter case, if e0 ⇝ e′0, then match e0 with (pi → ei)i∈I ⇝ match e′0 with (pi → ei)i∈I . If e0 is a value, on the other hand, the expression may reduce by application of R-Match. Since t0 ≤ ⋁i∈I ⟬pi⟭, ∅ ⊢S e0 : ⋁i∈I ⟬pi⟭ holds by subsumption. Hence, since e0 is a value, ∅ ⊢S e0 : ⟬pi⟭ holds for at least one i (by Lemma A.13); for each such i we have e0/pi = ςi (by Lemma A.15). Let j be the least such i; then match e0 with (pi → ei)i∈I ⇝ ejςj .
Case Ts-Subsum Straightforward application of the induction hypothesis.
Theorem A.27 (Subject reduction). Let e be an expression and t a type such that Γ ⊢ S e : t. If e e ′ , then Γ ⊢ S e ′ : t.
Proof. By induction on the derivation of Γ ⊢ S e : t. We reason by cases on the last applied rule.
Cases Ts-Var, Ts-Const, and Ts-Abstr These cases do not occur: variables, constants, and abstractions never reduce.
In the first case, we derive by the induction hypothesis that Γ ⊢ S e ′ 1 : t ′ → t and conclude by applying Ts-Appl again. The second case is analogous.
In the third case, we know by Lemma A.12 that Γ, {x : t ′ } ⊢ S e3 : t. We also know that e2 is a value such that Γ ⊢ S e2 : t ′ . Then, by Lemma A.25, Γ ⊢ S e3[ e 2/x] : t.

Case Ts-Pair We have
(e1, e2) ⇝ e′ occurs either because e1 ⇝ e′1 and e′ = (e′1, e2), or because e1 is a value, e2 ⇝ e′2, and e′ = (e1, e′2). In either case, the induction hypothesis allows us to derive that the type of the component that reduces is preserved; therefore, we can apply Ts-Pair again to conclude.
Case Ts-Tag Analogously to the previous case, a variant expression only reduces if its argument does, so we apply the induction hypothesis and Ts-Tag to conclude.

Figure 11. Rules for the fixed-point combinator (R-Fix).
Case Ts-Match The reduction match e0 with (pi → ei)i∈I ⇝ e′ occurs either because e0 ⇝ e′0 and e′ = match e′0 with (pi → ei)i∈I , or because e0 is a value and e′ = ejς, where e0/pj = ς and, for all i < j, e0/pi = Ω. In the former case, we apply the induction hypothesis and conclude by Ts-Match.
Case Ts-Subsum Straightforward application of the induction hypothesis.
Corollary A.28 (Type soundness). Let e be a well-typed, closed expression, that is, such that ∅ ⊢ S e : t holds for some t. Then, either e diverges or it reduces to a value v such that ∅ ⊢ S v : t.
Proof. Consequence of Theorem A.26 and Theorem A.27.

A.3.3 Completeness of S with respect to K
In the proof of completeness, we consider a calculus and type systems extended with the addition of a fixed-point combinator Y: this simplifies the proof (as it allows us to assume that all arrow types are inhabited) and it would be desirable anyway in order to use the system in practice. We add a new production Y e to the grammar defining expressions, a new production Y E to the grammar of evaluation contexts, and the new reduction rule R-Fix in Figure 11. We extend K and S with the addition, respectively, of the rules Tk-Fix and Ts-Fix in Figure 11.
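In OCaml, a call-by-value analogue of the combinator Y and its rule R-Fix can be defined with let rec; the eta-expansion delays unfolding so that reduction terminates under call-by-value. The definition below is a standard encoding, not the paper's.

```ocaml
(* A call-by-value fixed-point operator: fix f behaves like Y applied to f. *)
let fix : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b =
  fun f -> let rec g x = f g x in g

(* Usage: factorial obtained as a fixed point of its own functional. *)
let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))
```

The annotation shows that fix inhabits every type of the form ((a → b) → a → b) → a → b, matching the typing rules Tk-Fix and Ts-Fix.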
As mentioned in Section 4.3, we prove completeness of S with respect to K using inductive techniques which do not account for the presence of recursion in kinds: we therefore have to restrict ourselves to kinding environments which do not feature recursion (the non-recursive environments defined below). We conjecture that coinductive techniques could be used instead to prove the result for general kinding environments.

Definition A.35 (Non-recursive kinding environments). We say that a kinding environment K is non-recursive if, for all α such that K(α) = (L, U, T ), we have α ∉ ⋃tag:τ∈T varK(τ ).
Definition A.36. We define a function w which, given a k-type τ in a non-recursive kinding environment K, yields the measure w(τ, K) of τ in K. It is defined by the following equations.
We define the translation of type schemes as ⟦∀A. K′ ⊲ τ⟧K = ∀A. ⟦τ⟧K,K′ and of type environments by translating each type scheme pointwise.
If tag ∉ U (in which case U ≠ 𝓛), then all summands have the form tag1(τ′) with tag1 ≠ tag: hence, the intersection is empty and thus we have t ≤ 0 ≃ tag(0). Then πtag(t) ≤ 0 ≤ ⟦τ1⟧K.
Case p = p1&p2 We directly apply the induction hypothesis to both sub-patterns and conclude.
If tag ∉ dom(T ), then tag ∉ dom(T ′), in which case both u and u′ admit it with any argument type. If tag ∈ dom(T ), either tag ∈ dom(T ′) or not. In the former case, u admits a smaller argument type than u′ because T ′θ ⊆ T . The same occurs in the latter case, since u′ admits tag with any argument type.
If U = 𝓛, then U ′ could be 𝓛 or not. In either case we can prove, for each tag ∈ U , that u′ admits tag with a larger argument type than u does.
Case τ = b Straightforward, since a basic type is translated into itself and is never affected by substitutions.
Case τ = τ1 → τ2 By the induction hypothesis we have τi K,K ′ θ ′ ≃ τiθ K for both i. Then Case τ = τ1 × τ2 Analogous to the previous case.
Proof. By structural induction on v.
Note that values are always typed by an application of the typing rule corresponding to their form (Ts-Const, Ts-Abstr, Ts-Pair, or Ts-Tag) to appropriate premises, possibly followed by applications of Ts-Subsum. Hence, if ∅ ⊢ S v : t, there is a type t ′ ≤ t such that ∅ ⊢ S v : t ′ and that the last typing rule used to derive ∅ ⊢ S v : t ′ is one of the four above, given by the form of v.
Case v = c We have ⟦τ⟧K ≥ c. Hence τ = bc, since the translation of any other τ is disjoint from c. Then we can take v′ = v.
Case v = (v1, v2) We have ⟦τ⟧K ≥ t1 × t2 for some t1 and t2. Hence τ = τ1 × τ2: any other τ would translate to a type disjoint from all products. Therefore ∅ ⊢S v : ⟦τ1⟧K × ⟦τ2⟧K. By Lemma A.12 we have ∅ ⊢S vi : ⟦τi⟧K for both i; then by the induction hypothesis we find v′i for both i and let v′ = (v′1, v′2).

Case v = tag(v1) We have ⟦τ⟧K ≥ tag(t1) and ∅ ⊢S v : tag(t1) for some t1 ≄ 0 (since t1 types the value v1). Therefore, by the same reasoning as above, τ = α with K(α) = (L, U, T). Since ⟦τ⟧K ≥ tag(t1), we have tag ∈ L and therefore tag : τ1 ∈ T for some τ1 such that t1 ≤ ⟦τ1⟧K. Then we have ∅ ⊢S v1 : ⟦τ1⟧K; we may apply the induction hypothesis to find a value v′1 and let v′ = tag(v′1).

Case v = λx. e Note that an abstraction is only accepted by patterns which accept any value, so any two abstractions match, and fail to match, exactly the same patterns.
We have ∅ ⊢S v : t1 → t2 for some t1 → t2 ≤ ⟦τ⟧K. Hence we know that τ is of the form τ1 → τ2; thus we have ∅ ⊢S v : ⟦τ1⟧K → ⟦τ2⟧K. We take v′ to be the function λx. (Y (λf. λx. f x)) x, which never terminates and can therefore be assigned any arrow type.
Lemma A.33. Let K be a kinding environment, τ a k-type, and P a set of patterns. If τ ⊨K P (that is, if the set of patterns P is exhaustive for τ in K), then ⟦τ⟧K ≤ ⋁p∈P ⦅p⦆.
Proof. By contradiction, assume that τ ⊨K P holds but ⟦τ⟧K ≰ ⋁p∈P ⦅p⦆. The latter condition implies that there exists a value v in the interpretation of ⟦τ⟧K which is not in the interpretation of ⋁p∈P ⦅p⦆. Because the definition of accepted type is exact with respect to the semantics of pattern matching, we have v/p = Ω for all p ∈ P. We also have ∅ ⊢S v : ⟦τ⟧K, since v is in the interpretation of that type (typing is complete with respect to the interpretation if we restrict ourselves to translations of k-types).
By Lemma A.32, from v we can build a value v′ such that K; ∅ ⊢K v′ : τ and, for every pattern p, v/p = Ω ⟺ v′/p = Ω. We reach a contradiction: τ ⊨K P and K; ∅ ⊢K v′ : τ imply that there exists a p ∈ P such that v′/p ≠ Ω, whereas we have v′/p = Ω for all p ∈ P.
Theorem A.34 (Preservation of typing). Let e be an expression, K a non-recursive kinding environment, Γ a k-type environment, and τ a k-type. If K; Γ ⊢K e : τ, then ⟦Γ⟧K ⊢S e : ⟦τ⟧K.
Proof. By induction on the derivation of K; Γ ⊢ K e : τ . We reason by cases on the last applied rule.

Case Tk-Var We have K; Γ ⊢K x : τ and must show ⟦Γ⟧K ⊢S x : ⟦τ⟧K. Since ⟦Γ⟧K(x) = ∀A. ⟦τx⟧K,Kx, by Ts-Var we can derive ⟦Γ⟧K ⊢S x : ⟦τ⟧K by instantiating the variables in A appropriately.

Case Tk-Match We must show ⟦Γ⟧K ⊢S match e0 with (pi → ei)i∈I : ⟦τ⟧K, which we prove by establishing, for some types t0 and ti, t′i for each i, that

  ⟦Γ⟧K ⊢S e0 : t0      t0 ≤ ⋁i∈I ⦅pi⦆      ti = (t0 \ ⋁j<i ⦅pj⦆) ∧ ⦅pi⦆
  ∀i ∈ I. ⟦Γ⟧K, gen⟦Γ⟧K(ti//pi) ⊢S ei : t′i      ⋁i∈I t′i ≤ ⟦τ⟧K

and then applying Ts-Match, followed by Ts-Subsum if necessary.
By the induction hypothesis we derive ⟦Γ⟧K ⊢S e0 : ⟦τ0⟧K and hence take t0 = ⟦τ0⟧K. By Lemma A.33, we have t0 ≤ ⋁i∈I ⦅pi⦆. For every branch, ti ≤ t0 and ti ≤ ⦅pi⦆: therefore, we can apply Lemma A.30 and derive that (ti//pi)(x) ≤ ⟦Γi(x)⟧K holds for every x ∈ capt(pi).
We can thus choose t′i = ⟦τ⟧K for all branches, satisfying ⋁i∈I t′i ≤ ⟦τ⟧K.

A.4 Type reconstruction
A.4.1 Definition of type reconstruction for S

Definition A.38 (Constraints). A constraint c is a term inductively generated by the following grammar:

  c ::= t ≤ t | x ≤ t | def Γ in C | let [C] (Γi in Ci)i∈I

where C ranges over constraint sets, that is, finite sets of constraints, and where the range of every type environment Γ in constraints of the form def or let contains only types (i.e., trivial type schemes).
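The constraint grammar of Definition A.38 can be transcribed directly as an OCaml datatype. The sketch below is illustrative only: `ty` and `env` are hypothetical simplified stand-ins for the set-theoretic types and type environments of S.

```ocaml
(* Transcription of the constraint grammar of Definition A.38.
   [ty] and [env] are simplified placeholders, not the real types of S. *)
type ty =
  | TVar of string              (* type variable *)
  | TBasic of string            (* basic type *)
  | TArrow of ty * ty           (* t -> t *)

type env = (string * ty) list   (* Γ: trivial type schemes only *)

type constr =
  | Sub of ty * ty                                  (* t ≤ t' *)
  | VarSub of string * ty                           (* x ≤ t *)
  | Def of env * constr list                        (* def Γ in C *)
  | Let of constr list * (env * constr list) list   (* let [C] (Γi in Ci)i∈I *)
```

Constraint sets are represented here as lists; def and let constraints then nest naturally as values of `constr`.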
We give definitions of constraint generation and rewriting that differ from those in Section 5: here we keep track explicitly of the new variables introduced during the derivation, rather than informally requiring them to be fresh. For instance, in e : t ⇒A C, A is the set of variables which appear in C but not in t. We omit it in soundness proofs, where it is not relevant.

Figure 14. Constraint generation rules with explicit variable introduction.

We use the symbol ⊎ to denote the union of two disjoint sets: when we write A1 ⊎ A2, we require A1 and A2 to be disjoint. When we require this for sets of type variables, the condition is always satisfiable by an appropriate choice of variables, since there is an infinite supply to choose from.
Definition A.39 (Freshness). We say that a type variable α is fresh with respect to a set of type variables A, and write α ♯ A, if α / ∈ A. We write A ♯ A ′ if ∀α ∈ A. α ♯ A ′ . We extend this to define freshness with respect to types, type environments, and type substitutions: we write α ♯ t if α ♯ var(t), α ♯ Γ if α ♯ var(Γ), and α ♯ θ if α ♯ (dom(θ) ∪ var(θ)).
Definition A.40 (Environment generation for pattern matching). The environment generation relation for pattern matching t///p ⇒A (Γ, C) is defined by the rules in Figure 13.

Definition A.41 (Constraint generation). The constraint generation relation e : t ⇒A C is defined by the rules in Figure 14. Note that all rules include a constraint of the form (·) ≤ t. We add this constraint everywhere to streamline the proofs; in practice, it can be dropped from TRs-Appl and TRs-Match by using t directly instead of β to generate constraints for the sub-expressions.
Definition A.42 (Type-constraint set). A type-constraint set D is a set of constraints of the form t≤ t ′ , where t and t ′ are types.
We say that a type substitution θ satisfies a type-constraint set D, written θ ⊩ D, if tθ ≤ t′θ holds for every t ≤ t′ in D.
Furthermore, if θ ∈ tally(D), then dom(θ) is the set of variables appearing in D and var(θ) is a set of fresh variables of the same cardinality. In the completeness property above, for θ ′′ we can take θ ∪ θ ′′′ where dom(θ ′′′ ) = var(θ ′ ).

A.4.2 Properties of type reconstruction for S
Lemma A.36. Given a constraint set C, we write var(C) for the set of variables appearing in it. The following properties hold.

Proof. Straightforward proofs by induction on the derivations.

Lemma A.37 (Correctness of environment reconstruction). Let p be a pattern and t, t′ two types, with t′ ≤ ⦅p⦆. Let t///p ⇒ (Γ, C). If θ is a type substitution such that θ ⊩ C and t′ ≤ tθ, then, for all x ∈ capt(p), (t′//p)(x) ≤ Γ(x)θ.
Proof. By structural induction on p.
Cases p = _ and p = c There is nothing to prove, since capt(p) = ∅.
Case p = p1&p2 Every x ∈ capt(p) is either in capt(p1) or in capt(p2). Let x ∈ capt(pi); then, we apply the induction hypothesis to pi to conclude.
In the remaining cases, by the induction hypothesis applied to both p1 and p2 we derive, for all x, the required bound (t′//p)(x) ≤ Γ(x)θ.

Lemma A.38 (Precise solution to environment reconstruction constraints). Let p be a pattern, t a type, and θ a type substitution such that tθ ≤ ⦅p⦆. Let t///p ⇒A (Γ, C), with A ♯ dom(θ).
There exists a type substitution θ′ such that dom(θ′) = A, that (θ ∪ θ′) ⊩ C, and that, for all x ∈ capt(p), Γ(x)(θ ∪ θ′) ≃ (tθ//p)(x).

Proof. By structural induction on p.
Cases p = _ and p = c In both cases we take θ′ = [ ].
Case p = p1&p2 For both i, we apply the induction hypothesis to pi, t, and θ to derive θ′i. We take θ′ = θ′1 ∪ θ′2.

Case p = p1|p2 We apply the induction hypothesis to p1, t ∧ ⦅p1⦆, and θ to derive θ′1. We apply it to p2, t \ ⦅p1⦆, and θ to derive θ′2; here, note that tθ ≤ ⦅p1⦆ ∨ ⦅p2⦆ implies tθ \ ⦅p1⦆ ≤ ⦅p2⦆. We take θ′ = θ′1 ∪ θ′2. We have (θ ∪ θ′) ⊩ C since it satisfies both C1 and C2. Furthermore, for all x, the required property holds for the bindings coming from Γ1 and from Γ2, since A1 and A2 are disjoint and both are disjoint from var(t).

Theorem A.39 (Soundness of constraint generation and rewriting). Let e be an expression, t a type, and Γ a type environment. If e : t ⇒ C, Γ ⊢ C ⇝ D, and θ ⊩ D, then Γθ ⊢S e : tθ.
Proof. By structural induction on e.
Case e = tag(e1) Analogous to the previous case. We apply the induction hypothesis, then Ts-Tag, then subsumption.
We take t̃0 = αθ0θ⋆θ. We have αθ0θ⋆θ ≤ ⋁i∈I ⦅pi⦆, because θ0 ⊩ D′0 implies αθ0 ≤ ⋁i∈I ⦅pi⦆ and because subtyping is preserved by substitutions (recall that the accepted types of patterns are closed). We also have t̃i = tiθ0θ⋆θ for all i.
Theorem A.40 (Completeness of constraint generation and rewriting). Let e be an expression, t a type, and Γ a type environment. Let θ be a type substitution such that Γθ ⊢ S e : tθ.
Let e : t ⇒A C, with A ♯ Γ and A ♯ dom(θ). There exist a type-constraint set D, a set A′ of fresh type variables, and a type substitution θ′, with dom(θ′) = A ∪ A′, such that Γ ⊢ C ⇝A′ D and (θ ∪ θ′) ⊩ D.
By applying the induction hypothesis to each branch i, we therefore find Di, A′′i (of fresh variables), and θ′i satisfying the conditions above. We build θ′ from θ0 and the θ′i; it has the correct domain, so we must only show that (θ ∪ θ′) ⊩ equiv(θ0) ∪ (⋃i∈I Di) ∪ {β ≤ t}.

A.5 Extensions
We give full definitions for the three variants of the S system that we have sketched in Section 6.

A.5.1 Overloaded functions
To remove the restriction on the use of intersection types for functions, we change the typing rule Ts-Abstr: we allow the derivation of an intersection of arrow types for a λ-abstraction if each of these types is derivable. The modified rule is the following.
Ts-Abstr
  ∀j ∈ J.  Γ, {x : t′j} ⊢ e : tj
  ─────────────────────────────────
  Γ ⊢ λx. e : ⋀j∈J (t′j → tj)

Furthermore, we change the typing rule for pattern matching so that redundant branches are excluded from typing. This is necessary to use intersections effectively for pattern matching: in practice, to be able to assign to a function defined by pattern matching one arrow type for each branch.

Ts-Match
  Γ ⊢S e0 : t0      t0 ≤ ⋁i∈I ⦅pi⦆      ti = (t0 \ ⋁j<i ⦅pj⦆) ∧ ⦅pi⦆
  ∀i ∈ I.  t′i = 0 if ti ≤ 0,  otherwise Γ, genΓ(ti//pi) ⊢S ei : t′i
  ─────────────────────────────────────────────────────────────────
  Γ ⊢S match e0 with (pi → ei)i∈I : ⋁i∈I t′i

Finally, we also change the rule Ts-Var for variables: we allow a variable to be typed with any intersection of its instantiations, rather than just with a single instantiation.
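To see what the modified rules buy, consider the classic example below, written in ordinary OCaml syntax. With the new Ts-Abstr and the Ts-Match rule that skips redundant branches, the extended system could assign this function one arrow per branch, i.e. the intersection type (`True → `False) ∧ (`False → `True); a single arrow can only say that the result is some element of [> `False | `True ]. The code itself is plain OCaml; the intersection type is what the extension would derive, not what OCaml infers.

```ocaml
(* A function on polymorphic variants that the extended system could type
   with an intersection of arrows, one per branch:
   (`True -> `False) ∧ (`False -> `True). *)
let negate = function
  | `True -> `False
  | `False -> `True

(* With the intersection type, [negate `True] would be known statically to
   be exactly `False, not merely an element of [`False | `True]. *)
let example = negate `True
```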

A.5.2 Refining the type of expressions in pattern matching
The extension we present here improves the typing of pattern matching by introducing more precise types for some variables in the matched expression when typing the branches. These refined types take into account which patterns have been selected and which have not; they are introduced for variables that appear in the matched expression, possibly below pairs or variant constructors, but not inside applications or match constructs. We reuse pattern environment generation to describe the derivation of these refined types. However, we need to introduce a new production for patterns, used when translating expressions to patterns: p ::= · · · | ⟨p, p⟩.
Patterns of the form ⟨p1, p2⟩ should not occur in programs; they are only for internal use in the type system. Unlike normal pair patterns, they may contain repeated variables. We need not define the dynamic semantics of these patterns, as it is never used. We define their accepted type as ⦅⟨p1, p2⟩⦆ = ⦅p1⦆ × ⦅p2⦆ and environment generation as t//⟨p1, p2⟩ = (π1(t)//p1) ∧∧ (π2(t)//p2), where ∧∧, defined pointwise by (Γ1 ∧∧ Γ2)(x) = Γ1(x) ∧ Γ2(x) on shared variables, is the pointwise intersection of type environments.
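The pointwise intersection ∧∧ can be sketched as follows. This is a toy model: set-theoretic types are hypothetically simplified to finite sets of admitted tags, so that the intersection of types becomes plain set intersection; where only one environment binds a variable, its binding is kept.

```ocaml
module Tags = Set.Make (String)

(* Toy stand-in for set-theoretic types: a type is a set of admitted tags. *)
type ty = Tags.t
type env = (string * ty) list

(* Pointwise intersection ∧∧ of type environments. Where a variable is
   bound in both environments its types are intersected; this mirrors how
   t // <p1, p2> combines the bindings of p1 and p2, which may capture
   the same variables. *)
let ( &&& ) (g1 : env) (g2 : env) : env =
  let only_g2 = List.filter (fun (x, _) -> not (List.mem_assoc x g1)) g2 in
  List.map
    (fun (x, t1) ->
      match List.assoc_opt x g2 with
      | Some t2 -> (x, Tags.inter t1 t2)   (* bound in both: intersect *)
      | None -> (x, t1))                   (* bound only in g1: keep *)
    g1
  @ only_g2
```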
We define a translation ⌊·⌋ of expressions to patterns. It preserves variables and variant constructors, converts pairs to the new form ⟨·, ·⟩, and turns everything else into a wildcard.
We change the typing rule for pattern matching as follows.

Ts-Match
  Γ ⊢S e0 : t0      t0 ≤ ⋁i∈I ⦅pi⦆ ∧ ⦅⌊e0⌋⦆      ti = (t0 \ ⋁j<i ⦅pj⦆) ∧ ⦅pi⦆
  ∀i ∈ I.  Γ, genΓ(ti//⌊e0⌋), genΓ(ti//pi) ⊢S ei : t′i
  ─────────────────────────────────────────────────────
  Γ ⊢S match e0 with (pi → ei)i∈I : ⋁i∈I t′i

The main difference is the addition of the type environment genΓ(ti//⌊e0⌋), which provides the refined types for the variables in ⌊e0⌋. This environment is added before the usual one for the pattern pi: hence, the capture variables of pi still take precedence.
We also add the requirement t0 ≤ ⦅⌊e0⌋⦆ to ensure that ti//⌊e0⌋ is well defined. This is not restrictive, because any well-typed e0 can be typed with a subtype of ⦅⌊e0⌋⦆.
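A small example of the effect of this extension, written in ordinary OCaml syntax. Here ⌊x⌋ = x, so in the second branch the refinement gives x itself the type t0 \ ⦅`A⦆; the result type of f would thus be `B ∨ (t0 \ `A), which excludes `A. Plain OCaml (without the extension) infers roughly the coarser type ([> `A | `B ] as 'a) -> 'a, which does not rule `A out of the result. The code is plain OCaml; the refined type is what the extension would derive.

```ocaml
(* In the second branch, the refinement extension would assign x the type
   t0 \ `A (the input type minus the already-matched pattern), so applying
   f could be known never to return `A. *)
let f x = match x with
  | `A -> `B
  | _ -> x
```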

A.5.3 Applicability to OCaml
We change the semantics of pattern matching to include undefined results. These occur when matching constants of different basic types or when matching different constructors (for instance, a constant and a pair). We use the following definition.
Definition A.47 (Semantics of pattern matching). We write v/p for the result of matching a value v against a pattern p. We have either v/p = ς, where ς is a substitution defined on the variables in capt(p), v/p = Ω, or v/p = ℧. In the first case, we say that v matches p (or that p accepts v); in the second, we say that matching fails; in the third, we say that it is undefined.
The definition of v/p is given inductively in Figure 16.
Recall that the function b (·) (used here for v/c) assigns a basic type bc to each constant c.
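The three-valued matching of Definition A.47 can be sketched as follows, with toy value and pattern grammars. The propagation of ℧ through pair patterns is an assumption of this sketch, since Figure 16 is not reproduced here.

```ocaml
type value =
  | VInt of int
  | VBool of bool
  | VPair of value * value
  | VTag of string * value

type pattern =
  | PWild
  | PVar of string
  | PInt of int
  | PBool of bool
  | PPair of pattern * pattern
  | PTag of string * pattern

(* v/p: a substitution (Match), a failure (Fail, i.e. Ω), or an undefined
   result (Undef, i.e. ℧); the latter arises when comparing constants of
   different basic types or values of different shapes. *)
type result = Match of (string * value) list | Fail | Undef

let rec matches v p =
  match v, p with
  | _, PWild -> Match []
  | _, PVar x -> Match [ (x, v) ]
  | VInt n, PInt m -> if n = m then Match [] else Fail
  | VBool b, PBool c -> if b = c then Match [] else Fail
  | VTag (t, v1), PTag (t', p1) ->
      if t = t' then matches v1 p1 else Fail   (* different tags: plain failure *)
  | VPair (v1, v2), PPair (p1, p2) ->
      (match matches v1 p1, matches v2 p2 with
       | Match s1, Match s2 -> Match (s1 @ s2)
       | Undef, _ | _, Undef -> Undef          (* assumption: ℧ propagates *)
       | _, _ -> Fail)
  | _, _ -> Undef   (* e.g. an int against a boolean or pair pattern: ℧ *)
```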
The notions of reduction are unchanged, but the rule R-Match is made more restrictive by the changed definition of v/p: a match expression reduces only if matching succeeds for some branch and fails (rather than being undefined) for all previous branches. The type system should therefore ensure that, in a well-typed expression match v with (pi → ei)i∈I, v/pi = ℧ never occurs. While this is true for K, S must be restricted to ensure it.

Ts-Match
  Γ ⊢S e0 : t0      t0 ≤ ⋁i∈I ⦅pi⦆ ∧ ⋀i∈I ⌈pi⌉      ti = (t0 \ ⋁j<i ⦅pj⦆) ∧ ⦅pi⦆
  ∀i ∈ I.  Γ, genΓ(ti//pi) ⊢S ei : t′i
  ─────────────────────────────────────
  Γ ⊢S match e0 with (pi → ei)i∈I : ⋁i∈I t′i

Note that this condition is somewhat more restrictive than necessary: patterns which follow a catch-all (wildcard or variable) pattern, or which are in general useless because previous patterns already cover all cases, can be left out of the intersection. The precise condition would be t0 ≤ ⋁i∈I (⦅pi⦆ ∧ ⋀j<i ⌈pj⌉), but we choose the simpler condition, since the two differ only when there is redundancy.