# Changeset 7b1bfc5 for doc/aaron_comp_II/comp_II.tex

Timestamp:
Aug 20, 2016, 5:32:55 AM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, ctor, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
4a7d895
Parents:
b1848a0
Message:

Updates to Comp II draft per Peter, round 2

File:
1 edited

\CFA\footnote{Pronounced ``C-for-all'', and written \CFA or \CFL.} is an evolutionary modernization of the C programming language currently being designed and built at the University of Waterloo by a team led by Peter Buhr.
\CFA both fixes existing design problems and adds multiple new features to C, including name overloading, user-defined operators, parametric-polymorphic routines, and type constructors and destructors, among others.
The new features make \CFA more powerful and expressive than C, but impose a compile-time cost, particularly in the expression resolver, which must evaluate the typing rules of a significantly more complex type-system.
The primary goal of this research project is to develop a sufficiently performant expression resolution algorithm, experimentally validate its performance, and integrate it into CFA, the \CFA reference compiler.

It is important to note that \CFA is not an object-oriented language.
\CFA does have a system of (possibly implicit) type conversions derived from C's type conversions; while these conversions may be thought of as something like an inheritance hierarchy, the underlying semantics are significantly different and such an analogy is loose at best.
Particularly, \CFA has no concept of ``subclass'', and thus no need to integrate an inheritance-based form of polymorphism with its parametric and overloading-based polymorphism.
The graph structure of the \CFA type conversions is also markedly different from an inheritance graph; it has neither a top nor a bottom type, and does not satisfy the lattice properties typical of inheritance graphs.

As an example of \CFA's parametric-polymorphic routines, consider the ©identity© function below:
\begin{lstlisting}
forall(otype T)
T identity( T x ) {
	return x;
}
\end{lstlisting}
The ©identity© function above can be applied to any complete object type (or ``©otype©'').
The type variable ©T© is transformed into a set of additional implicit parameters to ©identity©, which encode sufficient information about ©T© to create and return a variable of that type.
The current \CFA implementation passes the size and alignment of the type represented by an ©otype© parameter, as well as an assignment operator, constructor, copy constructor and destructor.
Here, the runtime cost of polymorphism is spread over each polymorphic call, due to passing more arguments to polymorphic functions; preliminary experiments have shown this overhead to be similar to \CC virtual function calls.
Determining if packaging all polymorphic arguments to a function into a virtual function table would reduce the runtime overhead of polymorphic calls is an open research question.
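As a rough illustration of this implementation strategy, ©identity© might be lowered to ordinary C along the following lines; this is a hypothetical sketch only, with invented parameter names, and is not the code the CFA compiler actually emits:
\begin{lstlisting}
// hypothetical lowering sketch; the type parameter T is erased in favour of
// explicit size, alignment, and lifetime-function parameters
void _identity( size_t _sizeof_T, size_t _alignof_T,
		void (*_ctor_T)( void* ),           // default constructor for T
		void (*_copy_T)( void*, void* ),    // copy constructor for T
		void (*_assign_T)( void*, void* ),  // assignment operator for T
		void (*_dtor_T)( void* ),           // destructor for T
		void* _retval, void* x ) {
	// "return x" becomes a copy into a caller-provided return buffer,
	// since the size of T is unknown at compile time
	_copy_T( _retval, x );
}
\end{lstlisting}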
Since bare polymorphic types do not provide a great range of available operations, \CFA provides a \emph{type assertion} mechanism to provide further information about a type:
\begin{lstlisting}
forall(otype T ®| { T twice(T); }®)
T four_times( T x ) {
	return twice( twice( x ) );
}
\end{lstlisting}
These type assertions may be either variable or function declarations that depend on a polymorphic type variable.
©four_times© can only be called with an argument for which there exists a function named ©twice© that can take that argument and return another value of the same type; a pointer to the appropriate ©twice© function is passed as an additional implicit parameter to the call of ©four_times©.

Monomorphic specializations of polymorphic functions can themselves be used to satisfy type assertions.
The compiler accomplishes this by creating a wrapper function calling ©twice // (2)© with ©S© bound to ©double©, then providing this wrapper function to ©four_times©\footnote{©twice // (2)© could also have had a type parameter named ©T©; \CFA specifies renaming of the type parameters, which would avoid the name conflict with the type variable ©T© of ©four_times©.}.

Finding appropriate functions to satisfy type assertions is essentially a recursive case of expression resolution, as it takes a name (that of the type assertion) and attempts to match it to a suitable declaration \emph{in the current scope}.
If a polymorphic function can be used to satisfy one of its own type assertions, this recursion may not terminate, as it is possible that function is examined as a candidate for its own type assertion an unbounded number of times.
To avoid infinite loops, the current CFA compiler imposes a fixed limit on the possible depth of recursion, similar to that employed by most \CC compilers for template expansion; this restriction means that there are some semantically well-typed expressions that cannot be resolved by CFA.

\CFA also provides \emph{traits}, which name a group of type assertions, as with ©has_magnitude© below:
\begin{lstlisting}
trait has_magnitude(otype M) {
	int ?<?( M, M );  // comparison operator for M
	M abs( M );       // absolute value of M
};

forall(otype M | has_magnitude(M))
M max_magnitude( M a, M b ) {
	return abs(a) < abs(b) ? b : a;
}
\end{lstlisting}
Semantically, traits are simply named lists of type assertions, but they may be used for many of the same purposes that interfaces in Java or abstract base classes in \CC are used for.
Unlike Java interfaces or \CC base classes, \CFA types do not explicitly state any inheritance relationship to traits they satisfy; this can be considered a form of structural inheritance, similar to implementation of an interface in Go, as opposed to the nominal inheritance model of Java and \CC.
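For instance (a small usage sketch, assuming the ©has_magnitude© trait above), ©int© satisfies ©has_magnitude© purely structurally once a suitable ©abs© is in scope, with no declared relationship to the trait:
\begin{lstlisting}
int abs( int x ) { return x < 0 ? -x : x; }  // with the built-in ?<?, int now satisfies has_magnitude
int furthest = max_magnitude( -7, 3 );       // M bound to int; returns -7
\end{lstlisting}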
Nominal inheritance can be simulated with traits using marker variables or functions:
\begin{lstlisting}
trait nominal(otype T) {
	®T is_nominal;®
};

int is_nominal;  // ©int© now satisfies ©nominal©
{
	char is_nominal;  // ©char© satisfies ©nominal© inside this scope
}
// ©char© no longer satisfies ©nominal© here
\end{lstlisting}

Traits, however, are significantly more powerful than nominal-inheritance interfaces; firstly, due to the scoping rules of the declarations that satisfy a trait's type assertions, a type may not satisfy a trait everywhere that the type is declared, as with ©char© and the ©nominal© trait above.
Secondly, traits may be used to declare a relationship among multiple types, a property that may be difficult or impossible to represent in nominal-inheritance type systems:
\begin{lstlisting}
trait pointer_like(®otype Ptr, otype El®) {
	lvalue El *?( Ptr );  // Ptr can be dereferenced into a modifiable value of type El
};

struct list {
	int value;
	list *next;  // may omit "struct" on type names
};

typedef list *list_iterator;

lvalue int *?( list_iterator it ) {
	return it->value;
}
\end{lstlisting}
In the example above, ©(list_iterator, int)© satisfies ©pointer_like© by the user-defined dereference function, and ©(list_iterator, list)© also satisfies ©pointer_like© by the built-in dereference operator for pointers.
Given a declaration ©list_iterator it©, ©*it© can be either an ©int© or a ©list©, with the meaning disambiguated by context (\eg ©int x = *it;© interprets ©*it© as an ©int©, while ©(*it).value = 42;© interprets ©*it© as a ©list©).
While a nominal-inheritance system with associated types could model one of those two relationships by making ©El© an associated type of ©Ptr© in the ©pointer_like© implementation, few such systems could model both relationships simultaneously.

The flexibility of \CFA's implicit trait-satisfaction mechanism provides programmers with a great deal of power, but also blocks some optimization approaches for expression resolution.
The ability of types to begin to or cease to satisfy traits when declarations go into or out of scope makes caching of trait satisfaction judgements difficult, and the ability of traits to take multiple type parameters can lead to a combinatorial explosion of work in any attempt to pre-compute trait satisfaction relationships.
On the other hand, the addition of a nominal inheritance mechanism to \CFA's type system, or replacement of \CFA's trait satisfaction system with a more object-oriented inheritance model, together with investigation of possible expression resolution optimizations for such a system, may be an interesting avenue of further research.
\subsection{Name Overloading}
In C, no more than one variable or function in the same scope may share the same name\footnote{Technically, C has multiple separated namespaces, one holding ©struct©, ©union©, and ©enum© tags, one holding labels, one holding typedef names, variable, function, and enumerator identifiers, and one for each ©struct© or ©union© type holding the field names.}, and variable or function declarations in inner scopes with the same name as a declaration in an outer scope hide the outer declaration.
This restriction makes finding the proper declaration to match to a variable expression or function application a simple matter of symbol-table lookup, which can be easily and efficiently implemented.
\CFA, on the other hand, allows overloading of variable and function names, so long as the overloaded declarations do not have the same type, avoiding the multiplication of variable and function names for different types common in the C standard library, as in the following example:
\begin{lstlisting}
int max(int a, int b) { return a < b ? b : a; }  // (1)
double max(double a, double b) { return a < b ? b : a; }  // (2)

int max = INT_MAX;     // (3)
double max = DBL_MAX;  // (4)

max(7, -max);   // uses (1) and (3), by matching int type of the constant 7
max(max, 3.14); // uses (2) and (4), by matching double type of the constant 3.14
max(max, -max);  // ERROR: ambiguous
int m = max(max, -max); // uses (1) once and (3) twice, by matching return type
\end{lstlisting}

\subsection{Implicit Conversions}
In addition to the multiple interpretations of an expression produced by name overloading, \CFA must support all of the implicit conversions present in C for backward compatibility, producing further candidate interpretations for expressions.
C does not have an inheritance hierarchy of types, but the C standard's rules for the ``usual arithmetic conversions'' define which of the built-in types are implicitly convertible to which other types, and the relative cost of any pair of such conversions from a single source type.
\CFA adds to the usual arithmetic conversions rules defining the cost of binding a polymorphic type variable in a function call; such bindings are cheaper than any \emph{unsafe} (narrowing) conversion, \eg ©int© to ©char©, but more expensive than any \emph{safe} (widening) conversion, \eg ©int© to ©double©.
The expression resolution problem, then, is to find the unique minimal-cost interpretation of each expression in the program, where all identifiers must be matched to a declaration, and implicit conversions or polymorphic bindings of the result of an expression may increase the cost of the expression.
Note that which subexpression interpretation is minimal-cost may require contextual information to disambiguate.
For instance, in the example in the previous subsection, ©max(max, -max)© cannot be unambiguously resolved, but ©int m = max(max, -max)© has a single minimal-cost resolution.
While the interpretation ©int m = (int)max((double)max, -(double)max)© is also a valid interpretation, it is not minimal-cost due to the unsafe cast from the ©double© result of ©max© to ©int© (the two ©double© casts function as type ascriptions selecting ©double max© rather than casts from ©int max© to ©double©, and as such are zero-cost).

\subsubsection{User-generated Implicit Conversions}
Another possible feature for the \CFA type-system is user-generated implicit conversions.
Such a conversion system should be simple for programmers to utilize, and fit naturally with the existing design of implicit conversions in C; ideally it would also be sufficiently powerful to encode C's usual arithmetic conversions itself, so that \CFA only has one set of rules for conversions.

Ditchfield~\cite{Ditchfield:conversions} laid out a framework for using polymorphic-conversion-constructor functions to create a directed acyclic graph (DAG) of conversions.
A monomorphic variant of these functions can be used to mark a conversion arc in the DAG as only usable as the final step in a conversion.
With these two types of conversion arcs, separate DAGs can be created for the safe and the unsafe conversions, and conversion cost can be represented as the length of the shortest path through the DAG from one type to another.
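To make this cost calculation concrete, the following is a minimal sketch of conversion cost as shortest-path search over such a DAG, where ``final'' arcs may only be taken as the last step of a conversion chain; this code is purely illustrative and not drawn from any \CFA implementation, with the type indices, adjacency matrix, and ©conv_cost© function all invented for the example:
\begin{lstlisting}
#include <limits.h>

enum { NTYPES = 8 };                       // hypothetical number of types in one DAG
enum { NO_ARC = 0, ARC = 1, FINAL_ARC = 2 };
static int arc[NTYPES][NTYPES];            // adjacency matrix for one conversion DAG

// cost of converting type s to type t: length of the shortest path from s to t,
// where a FINAL_ARC may only appear as the last arc; returns -1 if no conversion
int conv_cost( int s, int t ) {
	if ( s == t ) return 0;
	int dist[NTYPES], queue[NTYPES], head = 0, tail = 0, best = INT_MAX;
	for ( int i = 0; i < NTYPES; ++i ) dist[i] = INT_MAX;
	dist[s] = 0;
	queue[tail++] = s;
	while ( head < tail ) {                // breadth-first search over non-final arcs
		int u = queue[head++];
		for ( int v = 0; v < NTYPES; ++v ) {
			if ( arc[u][v] != NO_ARC && v == t && dist[u] + 1 < best )
				best = dist[u] + 1;        // any arc may be the last step of the path
			if ( arc[u][v] == ARC && dist[v] == INT_MAX ) {
				dist[v] = dist[u] + 1;     // final arcs are never extended further
				queue[tail++] = v;
			}
		}
	}
	return best == INT_MAX ? -1 : best;    // -1: no implicit conversion exists
}
\end{lstlisting}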
\begin{figure}[h]
\centering
\includegraphics{conversion_dag}
\caption{A portion of the implicit conversion DAG for built-in types.}\label{fig:conv_dag}
\end{figure}

As can be seen in Figure~\ref{fig:conv_dag}, there are either safe or unsafe paths between each of the arithmetic types listed; the ``final'' arcs are important both to avoid creating cycles in the signed-unsigned conversions, and to disambiguate potential diamond conversions (\eg, if the ©int© to ©unsigned int© conversion was not marked final there would be two length-two paths from ©int© to ©unsigned long©, making it impossible to choose which one; however, since the ©unsigned int© to ©unsigned long© arc cannot be traversed after the final ©int© to ©unsigned int© arc, there is a single unambiguous conversion path from ©int© to ©unsigned long©).

Open research questions on this topic include:
\begin{itemize}
\item Can a conversion graph be generated that represents each allowable conversion in C with a unique minimal-length path such that the path lengths accurately represent the relative costs of the conversions?
\item Can such a graph representation be usefully augmented to include user-defined types as well as built-in types?
\item Can the graph be efficiently represented and used in the expression resolver?
\end{itemize}

\subsection{Constructors and Destructors}
Rob Shluntz, a current member of the \CFA research team, has added constructors and destructors to \CFA.
Each type has an overridable default-generated zero-argument constructor, copy constructor, assignment operator, and destructor.
For ©struct© types these functions each call their equivalents on each field of the ©struct©.
This feature affects expression resolution because an ©otype© type variable ©T© implicitly adds four type assertions, one for each of these four functions, so assertion resolution is pervasive in \CFA polymorphic functions, even those without any explicit type assertions.
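In other words, a declaration such as ©forall(otype T) void g(T);© behaves as if it carried the four lifetime assertions explicitly, roughly as in the sketch below; this is illustrative only (the precise desugaring is internal to the compiler, and ©g© is an invented name):
\begin{lstlisting}
// sketch: the four assertions implicitly carried by an otype parameter T
forall( otype T | { void ?{}(T*); void ?{}(T*, T); T ?=?(T*, T); void ^?{}(T*); } )
void g( T );  // equivalent in assertion content to: forall(otype T) void g(T);
\end{lstlisting}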
The following example shows the implicitly-generated code in green:
\begin{lstlisting}
struct kv {
	int key;
	char *value;
};

¢void ?{}(kv *this) {  // default constructor
	?{}(&(this->key));  // call recursively on members
	?{}(&(this->value));
}
void ?{}(kv *this, kv that) {  // copy constructor
	?{}(&(this->key), that.key);
	?{}(&(this->value), that.value);
}
kv ?=?(kv *this, kv that) {  // assignment operator
	?=?(&(this->key), that.key);
	?=?(&(this->value), that.value);
	return *this;
}
void ^?{}(kv *this) {  // destructor
	^?{}(&(this->key));
	^?{}(&(this->value));
}¢
\end{lstlisting}

These implicitly-generated declarations can interact with the ©void*© conversion to produce unbounded recursion in assertion resolution, as in the following example:
\begin{lstlisting}
forall(otype T) struct box { T x; };

void f( void* );  // (1)
forall(otype S) void f( box(S)* );  // (2)

f( ®(void*)0® );
\end{lstlisting}
\begin{itemize}
\item Since there is an implicit conversion from ©void*© to any pointer type, the highlighted expression can be interpreted as either a ©void*©, matching ©f // (1)©, or a ©box(S)*© for some type ©S©, matching ©f // (2)©.
\item To determine the cost of the ©box(S)© interpretation, a type must be found for ©S© that satisfies the ©otype© implicit type assertions (assignment operator, default and copy constructors, and destructor); one option is ©box(S2)© for some type ©S2©.
\item The assignment operator, default and copy constructors, and destructor of ©box(T)© are also polymorphic functions, each of which requires the type parameter ©T© to have an assignment operator, default and copy constructors, and destructor.
When choosing an interpretation for ©S2©, one option is ©box(S3)©, for some type ©S3©.
\item The previous step repeats until stopped, with four times as much work performed at each step.
\end{itemize}
This problem can occur in any resolution context where a polymorphic function that can satisfy its own type assertions is required for a possible interpretation of an expression with no constraints on its type, and is thus not limited to combinations of generic types with ©void*© conversions.
However, constructors for generic types often satisfy their own assertions and a polymorphic conversion such as the ©void*© conversion to a polymorphic variable is a common way to create an expression with no constraints on its type.
As discussed above, the \CFA expression resolver must handle this possible infinite recursion somehow, and it occurs fairly naturally in code like the above that uses generic types.

\subsection{Tuple Types}
\CFA adds \emph{tuple types} to C, a syntactic facility for referring to lists of values anonymously or with a single identifier.
An identifier may name a tuple, and a function may return one.
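As a brief sketch of these two facilities (the ©div_mod© function here is invented for illustration):
\begin{lstlisting}
[int, int] div_mod( int x, int y ) {
	return [ x / y, x % y ];  // a function may return a tuple
}
[int, int] qr = div_mod( 13, 5 );  // the identifier qr names the tuple [2, 3]
\end{lstlisting}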
Particularly relevantly for resolution, a tuple may be implicitly \emph{destructured} into a list of values, as in the call to ©swap©:
\begin{lstlisting}
[char, char] x = [ '!', '?' ];  // (1)
int x = 42;  // (2)
forall(otype T) [T, T] swap( T a, T b ) { return [b, a]; }  // (3)

x = swap( x ); // destructure [char, char] x into two elements of parameter list
 // cannot use int x for parameter, not enough arguments to swap

void swap( int, char, char ); // (4)

swap( x, x ); // resolved as (4) on (2) and (1)
 // (3) on (2) and (2) is close, but the polymorphic binding makes it not minimal-cost
\end{lstlisting}
Tuple destructuring means that the mapping from the position of a subexpression in the argument list to the position of a parameter in the function declaration is not straightforward, as some arguments may be expandable to different numbers of parameters, like ©x© above.
In the second example, the second ©x© argument can be resolved starting at the second or third parameter of ©swap©, depending on which interpretation of ©x© was chosen for the first argument.

\subsection{Reference Types}
I have been designing \emph{reference types} for \CFA, in collaboration with the rest of the \CFA research team.
Given some type ©T©, a ©T&© (``reference to ©T©'') is essentially an automatically dereferenced pointer; with these semantics most of the C standard's discussions of lvalues can be expressed in terms of references instead, with the benefit of being able to express the difference between the reference and non-reference version of a type in user code.
References preserve C's existing qualifier-dropping lvalue-to-rvalue conversion (\eg a ©const volatile int&© can be implicitly converted to a bare ©int©).
The reference proposal also adds an rvalue-to-lvalue conversion to \CFA, implemented by storing the value in a new compiler-generated temporary and passing a reference to the temporary.
These two conversions can chain, producing a qualifier-dropping conversion for references, for instance converting a reference to a ©const int© into a reference to a non-©const int© by copying the originally referred-to value into a fresh temporary and taking a reference to this temporary, as in:
\begin{lstlisting}
void print_inc( int& i );  // assumed declaration for this example: increments its argument

const int magic = 42;

print_inc( magic ); // legal; implicitly generated code in green below:
¢int tmp = magic;¢ // to safely strip const-qualifier
¢print_inc( tmp );¢ // tmp is incremented, magic is unchanged
\end{lstlisting}
These reference conversions may also chain with the other implicit type-conversions.
The main implication of the reference conversions for expression resolution is the multiplication of available implicit conversions, though given the restricted context reference conversions may be able to be treated efficiently as a special case of implicit conversions.

\subsection{Special Literal Types}
Another proposal currently under consideration for the \CFA type-system is assigning special types to the literal values ©0© and ©1©.
Implicit conversions from these types allow ©0© and ©1© to be considered as values of many different types, depending on context, allowing expression desugarings like ©if ( x ) {}© $\Rightarrow$ ©if ( x != 0 ) {}© to be implemented efficiently and precisely.
This approach is a generalization of C's existing behaviour of treating ©0© as either an integer zero or a null pointer constant, and treating either of those values as boolean false.
The main implication for expression resolution is that the frequently encountered expressions ©0© and ©1© may have a large number of valid interpretations.

\subsection{Deleted Function Declarations}
Another \CC feature being considered for \CFA is the \CCeleven \emph{deleted function declaration}:
\begin{lstlisting}
int somefn(char) = delete;
\end{lstlisting}
This feature is typically used in \CCeleven to make a type non-copyable by deleting its copy constructor and assignment operator\footnote{In previous versions of \CC a type could be made non-copyable by declaring a private copy constructor and assignment operator, but not defining either.
This idiom is well-known, but depends on some rather subtle and \CC-specific rules about private members and implicitly-generated functions; the deleted-function form is both clearer and less verbose.}, or forbidding some interpretations of a polymorphic function by specifically deleting the forbidden overloads\footnote{Specific polymorphic function overloads can also be forbidden in previous \CC versions through use of template metaprogramming techniques, though this advanced usage is beyond the skills of many programmers.
A similar effect can be produced on an ad-hoc basis at the appropriate call sites through use of casts to determine the function type.
In both cases, the deleted-function form is clearer and more concise.}.
To add a similar feature to \CFA involves including the deleted function declarations in expression resolution along with the normal declarations, but producing a compiler error if the deleted function is the best resolution.
How conflicts should be handled between resolution of an expression to both a deleted and a non-deleted function is a small but open research question.
%TODO: look up and lit review

The second area of investigation is minimizing dependencies between argument-parameter matches; the current CFA compiler attempts to match entire argument combinations against functions at once, potentially attempting to match the same argument against the same parameter multiple times.
Whether the feature set of \CFA admits an expression resolution algorithm where arguments can be matched to parameters independently of other arguments in the same function application is an area of open research; polymorphic type parameters produce enough cross-argument dependencies that the problem is not trivial.
If cross-argument resolution dependencies cannot be completely eliminated, effective caching strategies to reduce duplicated work between equivalent argument-parameter matches in different combinations may mitigate the asymptotic deficits of the whole-combination matching approach.
The final area of investigation is heuristics and algorithmic approaches to reduce the number of argument interpretations considered in the common case; if argument-parameter matches cannot be made independent, even small reductions in $i$ should yield significant reductions in the $i^{p+1}$ resolver runtime factor.

\subsection{Argument-Parameter Matching}
The first axis for consideration is the argument-parameter matching direction --- whether the type matching for a candidate function to a set of candidate arguments is directed by the argument types or the parameter types.
For programming languages without implicit conversions, argument-parameter matching is essentially the entirety of the expression resolution problem, and is generally referred to as ``overload resolution'' in the literature.
All expression-resolution algorithms form a DAG of interpretations, some explicitly, some implicitly; in this DAG, arcs point from function-call interpretations to argument interpretations, as in Figure~\ref{fig:res_dag}:
\begin{figure}[h]
\centering
\includegraphics{resolution_dag}
\caption{Portion of an expression interpretation DAG.}\label{fig:res_dag}
\end{figure}
Note that some interpretations may be part of more than one super-interpretation, as with the second $p_i$ in the bottom row, while some valid subexpression interpretations, like $f_d$ in the middle row, are not used in any interpretation of their superexpression.

\subsubsection{Argument-directed (Bottom-up)}
A reasonable hybrid approach might take a top-down approach when the expression to be matched has a fixed type, and a bottom-up approach in untyped contexts.
This approach may involve switching from one type to another at different levels of the expression tree.
For instance, in:
\begin{lstlisting}
forall(otype T)
int f( T x );  // (1)

void* f( char y );  // (2)

int x = f( f( '!' ) );
\end{lstlisting}
the outer call to ©f© must have a return type that is (implicitly convertible to) ©int©, so a top-down approach is used to select \textit{(1)} as the proper interpretation of ©f©.
\textit{(1)}'s parameter ©x©, however, is an unbound type variable, and can thus take a value of any complete type, providing no guidance for the choice of candidate for the inner call to ©f©.
The leaf expression ©'!'©, however, determines a zero-cost interpretation of the inner ©f© as \textit{(2)}, providing a minimal-cost expression resolution where ©T© is bound to ©void*©.

Deciding when to switch between bottom-up and top-down resolution to minimize wasted work in a hybrid algorithm is a necessarily heuristic process, and finding good heuristics for which subexpressions to switch matching strategies on is an open question.
One reasonable approach might be to set a threshold $t$ for the number of candidate functions, and to use top-down resolution for any subexpression with fewer than $t$ candidate functions, to minimize the number of unmatchable argument interpretations computed, but to use bottom-up resolution for any subexpression with at least $t$ candidate functions, to reduce duplication in argument interpretation computation between the different candidate functions.

Ganzinger and Ripken~\cite{Ganzinger80} propose an approach (later refined by Pennello~\etal~\cite{Pennello80}) that uses a top-down filtering pass followed by a bottom-up filtering pass to reduce the number of candidate interpretations; they prove that for the Ada programming language a small number of such iterations is sufficient to converge to a solution for the expression resolution problem.
Persch~\etal~\cite{PW:overload} developed a similar two-pass approach where the bottom-up pass is followed by the top-down pass.
These algorithms differ from the hybrid approach under investigation in that they take multiple passes over the expression tree to yield a solution, and that they also apply both filtering heuristics to all expression nodes; \CFA's polymorphic functions and implicit conversions make the approach of filtering out invalid types taken by all of these algorithms infeasible.

\subsubsection{Common Subexpression Caching}
\CC~\cite{ANSI98:C++} includes both name overloading and implicit conversions in its expression resolution specification, though unlike \CFA it does complete type-checking on a generated monomorphization of template functions, where \CFA simply checks a list of type constraints.
The upcoming Concepts standard~\cite{C++concepts} defines a system of type constraints similar in principle to \CFA's.
Cormack and Wright~\cite{Cormack90} present an algorithm that integrates overload resolution with a polymorphic type inference approach very similar to \CFA's.
However, their algorithm does not account for implicit conversions other than polymorphic type binding and their discussion of their overload resolution algorithm is not sufficiently detailed to classify it with the other argument-parameter matching approaches\footnote{Their overload resolution algorithm is possibly a variant of Ganzinger and Ripken~\cite{Ganzinger80} or Pennello~\etal~\cite{Pennello80}, modified to allow for polymorphic type binding.}.

Bilson does account for implicit conversions in his algorithm, but it is unclear if the approach is optimal.
His algorithm integrates checking for valid implicit conversions into the argument-parameter-matching step, essentially trading more expensive matching for a smaller number of argument interpretations.
Calculating implicit conversions on parameters pairs naturally with a top-down approach to expression resolution, though it can also be used in a bottom-up approach, as Bilson demonstrates.
This approach may result in the same subexpression being checked for a type match with the same type multiple times, though again memoization may mitigate this cost; however, this approach does not generate implicit conversions that are not useful to match the containing function.

\subsubsection{On Arguments}
Another approach is to generate a set of possible implicit conversions for each set of interpretations of a given argument.
This approach has the benefit of detecting ambiguous interpretations of arguments at the level of the argument rather than its containing call, never finds more than one interpretation of the argument with a given type, and re-uses calculation of implicit conversions between function candidates.
On the other hand, this approach may unnecessarily generate argument interpretations that never match any parameter, wasting work.
Furthermore, in the presence of tuple types, this approach may lead to a combinatorial explosion of argument interpretations considered, unless the tuple can be considered as a sequence of elements rather than a unified whole.
Calculating implicit conversions on arguments is a viable approach for bottom-up expression resolution, though it may be difficult to apply in a top-down approach due to the presence of a target type for the expression interpretation.

\subsection{Candidate Set Generation}
All the algorithms discussed to this point generate the complete set of candidate argument interpretations before attempting to match the containing function-call expression.
However, given that the top-level expression interpretation that is ultimately chosen is the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is wasted work.
Under the assumption that programmers generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the next higher-cost argument interpretations.

\subsubsection{Eager}
This comparison closes Baker's open research question, as well as potentially improving Bilson's \CFA compiler.
Rather than testing all of these algorithms in-place in the \CFA compiler, a resolver prototype is being developed that acts on a simplified input language encapsulating the essential details of the \CFA type-system\footnote{Note this simplified input language is not a usable programming language.}.
Multiple variants of this resolver prototype will be implemented, each encapsulating a different expression resolution variant, sharing as much code as feasible.
These variants will be instrumented to test runtime performance, and run on a variety of input files; the input files may be generated programmatically or from existing code in \CFA or similar languages.
As an example, there are currently multiple open proposals for how implicit conversions should interact with polymorphic type binding in \CFA, each with distinct levels of expressive power; if the resolver prototype is modified to support each proposal, the optimal algorithm for each proposal can be compared, providing an empirical demonstration of the trade-off between expressive power and compiler runtime.

This proposed project should provide valuable data on how to implement a performant compiler for programming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions.
This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming Concepts standard~\cite{C++concepts} for \CC template constraints, for instance.