Changeset 9a38436c
- Timestamp: Jan 21, 2019, 5:09:46 PM (6 years ago)
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: f2c5726
- Parents: 2240ec4
- Files: 1 edited
doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex
The main task of the \CFACC{} type-checker is \emph{expression resolution}, determining which declarations the identifiers in each expression correspond to.
Resolution is a straightforward task in C, as no declarations share identifiers, but in \CFA{} the name overloading features discussed in Section~\ref{overloading-sec} generate multiple candidate declarations for each identifier.
I refer to a given matching between identifiers and declarations in an expression as an \emph{interpretation}; an interpretation also includes information about polymorphic type bindings and implicit casts to support the \CFA{} features discussed in Sections~\ref{poly-func-sec} and~\ref{implicit-conv-sec}, each of which increases the proportion of feasible candidate interpretations.
To choose among feasible interpretations, \CFA{} defines a \emph{conversion cost} to rank interpretations; the expression resolution problem is thus to find the unique minimal-cost interpretation for an expression, reporting an error if no such interpretation exists.

\section{Expression Resolution}

\subsection{Type Unification}

The polymorphism features of \CFA{} require binding of concrete types to polymorphic type variables.
Briefly, \CFACC{} keeps a mapping from type variables to the concrete types they are bound to as an auxiliary data structure during expression resolution; Chapter~\ref{env-chap} describes this \emph{environment} data structure in more detail.
A \emph{unification} algorithm is used to simultaneously check two types for equivalence with respect to the substitutions in an environment and to update that environment.
Essentially, unification recursively traverses the structure of both types, checking them for equivalence, and when it encounters a type variable it replaces it with the concrete type it is bound to; if the type variable has not yet been bound, the unification algorithm assigns the equivalent type as the bound type of the variable, after performing various consistency checks.
Ditchfield~\cite{Ditchfield92} and Bilson~\cite{Bilson03} describe the semantics of \CFA{} unification in more detail.
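
To make the shape of this algorithm concrete, the following minimal sketch unifies a toy type representation; it is illustrative only, not \CFACC{}'s implementation, and the !Kind!, !Type!, and !Env! definitions are invented for this example. For brevity it omits the occurs check and the other consistency checks mentioned above.

\begin{cfa}
#include <stdbool.h>
#include <string.h>

enum Kind { VAR, BASE, PTR };	$\C{// type variable, named base type, pointer}$
struct Type { enum Kind kind; int var; const char * base; struct Type * inner; };
#define NVARS 16
typedef struct Type * Env[NVARS];	$\C{// binding of each type variable, or NULL if free}$

struct Type * deref(Env env, struct Type * t) {	$\C{// follow existing variable bindings}$
	while (t->kind == VAR && env[t->var]) t = env[t->var];
	return t;
}

bool unify(Env env, struct Type * a, struct Type * b) {
	a = deref(env, a); b = deref(env, b);
	if (a->kind == VAR && b->kind == VAR && a->var == b->var) return true;	$\C{// same unbound variable}$
	if (a->kind == VAR) { env[a->var] = b; return true; }	$\C{// bind free variable}$
	if (b->kind == VAR) { env[b->var] = a; return true; }
	if (a->kind != b->kind) return false;	$\C{// structural mismatch}$
	if (a->kind == BASE) return strcmp(a->base, b->base) == 0;	$\C{// same named type?}$
	return unify(env, a->inner, b->inner);	$\C{// pointers: recurse into pointed-at types}$
}
\end{cfa}

Unifying !T*! with !int*! under this sketch recurses through the pointer constructor and binds !T! to !int! in the environment; a subsequent attempt to unify !T! with !long! then fails, as the environment already maps !T! to !int!.
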
\subsection{Conversion Cost}

…

Loosely defined, the conversion cost counts the implicit conversions utilized by an interpretation.
With more specificity, the cost is a lexicographically-ordered tuple, where each element corresponds to a particular kind of conversion.
In Bilson's \CFA{} design, conversion cost is a 3-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degrees of safe (widening) conversions.
The following example lists the cost in the Bilson model of calling each of the following functions with two !int! arguments:

…

Note that safe and unsafe conversions are handled differently; \CFA{} counts the ``distance'' of safe conversions (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), while counting only the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1).

As part of adding reference types to \CFA{} (see Section~\ref{type-features-sec}), Schluntz added a new $reference$ element to the cost tuple, which counts the number of implicit reference-to-rvalue conversions performed, so that candidate interpretations can be distinguished by how closely they match the nesting of reference types; since references are meant to act almost indistinguishably from lvalues, this $reference$ element is the least significant in the lexicographic comparison of cost tuples.

…

The random-access iterator has more type constraints, but should be chosen whenever those constraints can be satisfied.
As such, I have added a $specialization$ element to the \CFA{} cost tuple, the values of which are always negative.
Each type assertion subtracts 1 from $specialization$, so that more-constrained functions cost less, and are thus chosen over less-constrained functions, all else being equal.
A more sophisticated design would define a partial order over sets of type assertions by set inclusion (\ie{} one function would only cost less than another if it had a strict superset of assertions, rather than just more total assertions), but I did not judge the added complexity of computing and testing this order to be worth the gain in specificity.
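
To illustrate, the sketch below shows forward and random-access versions of an !advance! function; the assertion names !step! and !skip! are invented stand-ins for the usual iterator operations, not code from the \CFA{} prelude:

\begin{cfa}
forall(otype Iter | { void step(Iter &); })
void advance(Iter & it, int n);	$\C{// forward: one assertion, specialization -1}$

forall(otype Iter | { void step(Iter &); void skip(Iter &, int); })
void advance(Iter & it, int n);	$\C{// random-access: two assertions, specialization -2}$
\end{cfa}

For an iterator type providing both !step! and !skip!, both candidates match at identical conversion cost, but the extra assertion on the random-access version lowers its $specialization$ element, so it is selected; its assertion set is also a strict superset of the forward version's, so the simple count and the set-inclusion partial order agree on this example.
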
I have also incorporated an unimplemented aspect of Ditchfield's earlier cost model.
In the example below, adapted from \cite[89]{Ditchfield92}, Bilson's cost model only distinguished between the first two cases by accounting extra cost for the extra set of !otype! parameters, which, as discussed above, is not a desirable solution:

\begin{cfa}
forall(otype T, otype U) void f(T, U);	$\C[3.25in]{// polymorphic}$
forall(otype T) void f(T, T);	$\C[3.25in]{// less polymorphic}$
forall(otype T) void f(T, int);	$\C[3.25in]{// even less polymorphic}$
forall(otype T) void f(T*, int);	$\C[3.25in]{// least polymorphic}$
\end{cfa}

…

In my modified model, each level of constraint on a polymorphic type in the parameter list results in a decrement of the $specialization$ cost element.
Thus, all else equal, if both a binding to !T! and a binding to !T*! are available, \CFA{} will pick the more specific !T*! binding.
This process is recursive, such that !T**! produces a -2 specialization cost, as opposed to the -1 cost for !T*!.
This works similarly for generic types, \eg{} !box(T)! also has specialization cost -1.
For multi-argument generic types, the least-specialized polymorphic parameter sets the specialization cost, \eg{} the specialization cost of !pair(T, S*)! is -1 (from !T!) rather than -2 (from !S!).
Since the user programmer provides the arguments to a call, but cannot provide guidance on its return type, specialization cost is not counted for the return-type list.
Since both $vars$ and $specialization$ are properties of the declaration rather than of any particular interpretation, they are prioritized less than the interpretation-specific conversion costs from Bilson's original 3-tuple.
The current \CFA{} cost tuple is thus as follows:

\begin{equation*}
	(unsafe, poly, safe, vars, specialization, reference)
\end{equation*}

\subsection{Expression Cost}

The mapping from \CFA{} expressions to cost tuples is described by Bilson~\cite{Bilson03}, and remains effectively unchanged modulo the refinements to the cost tuple described above.
Nonetheless, some salient details are repeated here for the sake of completeness.

On a theoretical level, the resolver algorithm treats most expressions as if they were function calls.
Operators in \CFA{} (both those existing in C and added features like constructors) are all modelled as function calls.
In terms of the core argument-parameter matching algorithm, the overloaded variables of \CFA{} are not handled differently from zero-argument function calls, aside from a different pool of candidate declarations and setup for different code generation.
Similarly, an aggregate member expression !a.m! can be modelled as a unary function !m! that takes one argument of the aggregate type.
Literals do not require sophisticated resolution, as the syntactic form of each implies its result type (\eg{} !42! is !int!, !"hello"! is !char*!, \etc{}), though struct literals require resolution of the implied constructor call.
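
For example (with invented declarations), an overloaded variable and a member access can both be viewed through this function-call lens:

\begin{cfa}
short MAX = 32767;	$\C{// overloaded variable, modelled as: short MAX()}$
int MAX = 2147483647;	$\C{// ... and: int MAX()}$
int m = MAX;	$\C{// selects the int interpretation at zero conversion cost}$

struct vec2 { double x, y; };
struct vec2 v;
double d = v.x;	$\C{// as if calling a unary accessor: double x(struct vec2)}$
\end{cfa}

Initializing !m! from the !short MAX! would require a safe widening conversion, so the exact-match !int MAX! is the minimal-cost interpretation.
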
Since most expressions can be treated as function calls, nested function calls are the primary component of expression resolution problem instances.
Each function call has an \emph{identifier} that must match the name of the corresponding declaration, and a possibly-empty list of \emph{arguments}.
These arguments may be function-call expressions themselves, producing a tree of function-call expressions to resolve, where the leaf expressions are generally nullary functions, variable expressions, or literals.
A single instance of expression resolution consists of matching declarations to all the identifiers in the expression tree of a top-level expression, along with inserting any conversions and assertions necessary for that matching.
The cost of a function-call expression is the sum of the conversion costs of each argument type to the corresponding parameter, plus the total cost of each subexpression, recursively calculated.
\CFA{} expression resolution must produce either the unique lowest-cost interpretation of the top-level expression, or an appropriate error message if no such interpretation exists.
The cost model of \CFA{} precludes a simple bottom-up resolution pass, as constraints and costs introduced by calls higher in the expression tree can change the interpretation of those lower in the tree, as in the following example:

\begin{cfa}
void f(int);
double g(int);	$\C{// g1}$
int g(double);	$\C{// g2}$

f( g(42) );
\end{cfa}

!g1! is the cheapest interpretation of !g(42)!, with cost $(0,0,0,0,0,0)$ since the argument type is an exact match, but to downcast the return type of !g1! to an !int! suitable for !f! requires an unsafe conversion, for a total cost of $(1,0,0,0,0,0)$.
If !g2! is chosen, on the other hand, there is a safe upcast from the !int! type of !42! to !double!, but no cast on the return of !g!, for a total cost of $(0,0,1,0,0,0)$; as this is cheaper, !g2! is chosen.
Due to this design, in general all feasible interpretations of subexpressions must be propagated to the top of the expression tree before any can be eliminated, a lazy form of expression resolution, as opposed to the eager expression resolution allowed by C, where each expression can be resolved given only the resolution of its immediate subexpressions.

If there are no feasible interpretations of the top-level expression, expression resolution fails and must produce an appropriate error message.
If any subexpression has no feasible interpretations, the process can be short-circuited and the error produced at that time.
If there are multiple feasible interpretations of a top-level expression, ties are broken based on the conversion cost, calculated as above.
If there are multiple minimal-cost feasible interpretations of a top-level expression, that expression is said to be \emph{ambiguous}, and an error must be produced.
Multiple minimal-cost interpretations of a subexpression do not necessarily imply an ambiguous top-level expression, however, as the subexpression interpretations may be disambiguated based on their return type, or by selecting a more-expensive interpretation of that subexpression to reduce the overall expression cost, as above.

The \CFA{} resolver uses type assertions to filter out otherwise-feasible subexpression interpretations.
An interpretation can only be selected if all the type assertions in the !forall! clause on the corresponding declaration can be satisfied with a unique minimal-cost set of satisfying declarations.
Type assertion satisfaction is tested by performing type unification on the type of the assertion and the type of the declaration satisfying the assertion.
That is, a declaration which satisfies a type assertion must have the same name and type as the assertion after applying the substitutions in the type environment.
Assertion-satisfying declarations may be polymorphic functions with assertions of their own, which must be satisfied recursively.
This recursive assertion satisfaction has the potential to introduce infinite loops into the type-resolution algorithm, a situation which \CFACC{} avoids by imposing a hard limit on the depth of recursive assertion satisfaction (currently 4); this approach is also taken by \CC{} to prevent infinite recursion in template expansion, and has proven to be both effective and not unduly restrictive of the language's expressive power.
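
As an illustration of recursive satisfaction, consider the following sketch, which assumes an invented generic !list! type and !print! functions:

\begin{cfa}
forall(otype T) struct list { T * data; int size; };

void print(int x);
forall(otype T | { void print(T); })
void print(list(T) l);	$\C{// assertion must itself be satisfied}$

list(list(int)) ll;
print(ll);	$\C{// needs print(list(int)), which in turn needs print(int)}$
\end{cfa}

Each additional level of nesting in the argument type adds another level of recursive assertion satisfaction, so sufficiently deeply-nested types eventually exceed the depth limit described above.
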
Cast expressions must be treated somewhat differently from function calls, for backwards compatibility with C.
In C, cast expressions can serve two purposes: \emph{conversion} (\eg{} !(int)3.14!), which semantically converts a value to another value in a different type with a different bit representation, or \emph{coercion} (\eg{} !void* p; (int*)p;!), which assigns a different type to the same bit value.
C provides a set of built-in conversions and coercions, and user programmers are able to force a coercion over a conversion, if desired, by casting pointers.
The overloading features in \CFA{} introduce a third cast semantics, \emph{ascription} (\eg{} !int x; double x; (int)x;!), which selects the overload that most closely matches the cast type.
However, since ascription does not exist in C, due to the lack of overloadable identifiers, if a cast argument has an unambiguous interpretation as a conversion argument then it must be interpreted as such, even if the ascription interpretation would have a lower overall cost, as in the following example, adapted from the C standard library:

\begin{cfa}
unsigned long long x;
(unsigned)(x >> 32);
\end{cfa}

In C semantics, this example is unambiguously upcasting !32! to !unsigned long long!, performing the shift, then downcasting the result to !unsigned!, at total cost $(1,0,4,0,0,0)$.
If ascription were allowed to be a first-class interpretation of a cast expression, it would be cheaper to select the !unsigned! interpretation of !?>>?! by downcasting !x! to !unsigned! and upcasting !32! to !unsigned!, at a total cost of $(1,0,1,0,0,0)$.
However, this deviation from C semantics would break backwards compatibility, so the \CFA{} resolver selects the lowest-cost interpretation of the cast argument for which a conversion or coercion to the target type exists (upcasting to !unsigned long long! in the example above, due to the lack of unsafe downcasts), using the cost of the conversion itself only as a tie-breaker.
For example, in !int x; double x; (int)x;!, both declarations have zero-cost interpretations as !x!, but the !int x! interpretation is cheaper to cast to !int!, and is thus selected.
Thus, in contrast to the lazy resolution of nested function-call expressions discussed above, where final interpretations for each subexpression are not chosen until the top-level expression is reached, cast expressions introduce eager resolution of their argument subexpressions, as if the argument were itself a top-level expression.

\section{Resolution Algorithms}

\subsection{Related Work}

\subsection{Contributions}

One improvement I have implemented to the assertion resolution scheme in \CFACC{} is to change the search for satisfying declarations from depth-first to breadth-first.
If there are $n$ candidates to satisfy a single assertion, the candidate with the lowest conversion cost among those that have all their own assertions satisfied should be chosen.
Bilson's \CFACC{} would immediately attempt to recursively satisfy the assertions of each candidate that passed type unification; this work is wasted if that candidate is not selected to satisfy the assertion, and the wasted work may be quite substantial if the candidate function produces deeply-recursive assertions.
I have modified \CFACC{} to first sort assertion-satisfaction candidates by conversion cost, and to resolve recursive assertions only until a unique minimal-cost candidate is found or an ambiguity is detected.
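
In outline, the modified search looks something like the following sketch; the !Candidate! type and the helper functions are hypothetical stand-ins for the corresponding \CFACC{} machinery:

\begin{cfa}
Candidate * choose(Candidate cands[], int n) {
	sort_by_conversion_cost(cands, n);	$\C{// cheapest candidates first}$
	for (int i = 0; i < n; ++i) {
		if (!satisfy_assertions_recursively(&cands[i])) continue;
		for (int j = i + 1; j < n && equal_cost(&cands[i], &cands[j]); ++j)
			if (satisfy_assertions_recursively(&cands[j]))
				return 0;	$\C{// ambiguous: two minimal-cost candidates}$
		return &cands[i];	$\C{// unique minimal-cost satisfiable candidate}$
	}
	return 0;	$\C{// no candidate's assertions can be satisfied}$
}
\end{cfa}

Because the candidates are visited in cost order, recursive satisfaction is only attempted on candidates that are still contenders, and candidates that lose on conversion cost alone never have their assertion trees explored.
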
I have experimented with using expression resolution, rather than type unification, to choose assertion resolutions; this path should be investigated further in future work.
This approach is more flexible than type unification, allowing conversions to be applied to functions to satisfy assertions.
Anecdotally, this flexibility matches user-programmer expectations better, as small type differences (\eg{} the presence or absence of a reference type, or the usual conversion from !int! to !long!) no longer break assertion satisfaction.
Practically, the resolver prototype uses this model of assertion satisfaction, with no apparent deficit in performance; the generated expressions that are resolved to satisfy the assertions are simpler than the general case because they never have nested subexpressions, which eliminates many of the theoretical differences between unification and resolution.
The main challenge in implementing this approach in \CFACC{} would be applying the implicit conversions generated by the resolution process in the code generation for the thunk functions that \CFACC{} uses to pass type assertions with the proper signatures.

% Mention relevance of work to C++20 concepts
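
The practical difference between the two models can be seen in a sketch like the following (invented declarations), where the assertion on !dump! cannot be satisfied by unification but can be by resolution:

\begin{cfa}
forall(otype T | { void print(T); })
void dump(T x);

void print(long x);	$\C{// note: no print(int) is declared}$

int i = 42;
dump(i);	$\C{// unification: needs void print(int) exactly, so it fails;}$
	$\C{// resolution: print(long) suffices via a safe int-to-long conversion}$
\end{cfa}
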