Changeset e93bc13 for doc/aaron_comp_II/comp_II.tex
- Timestamp: Aug 3, 2016, 2:28:26 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, ctor, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: ea5daeb
- Parents: a14187f
- git-author: Aaron Moss <a3moss@…> (08/03/16 12:07:20)
- git-committer: Aaron Moss <a3moss@…> (08/03/16 14:28:26)
- File: 1 edited
Legend:
- Unmodified (no marker)
- Added (+)
- Removed (-)
doc/aaron_comp_II/comp_II.tex (diff from a14187f to e93bc13)
@@ -84 +84 @@
\section{Introduction}

-\CFA\footnote{Pronounced ``C-for-all'', and written \CFA, CFA, or \CFL.} is an evolutionary modernization of the C programming language currently being designed and built at the University of Waterloo by a team led by Peter Buhr.
-\CFA adds multiple features to C, including name overloading, user-defined operators, parametric-polymorphic routines, and type constructors and destructors, among others.
-These features make \CFA significantly more powerful and expressive than C, but impose a significant compile-time cost to support, particularly in the expression resolver, which must evaluate the typing rules of a much more complex type system.
-The primary goal of this proposed research project is to develop a sufficiently performant expression resolution algorithm, experimentally validate its performance, and integrate it into \Index*{CFA-CC}, the \CFA reference compiler.
-Secondary goals of this project include the development of various new language features for \CFA; parametric-polymorphic (``generic'') types have already been designed and implemented, and reference types and user-defined conversions are under design consideration.
-The experimental performance-testing architecture for resolution algorithms will also be used to determine the compile-time cost of adding such new features to the \CFA type system.
-More broadly, this research should provide valuable data for implementers of compilers for other programming languages with similarly powerful static type systems.
+\CFA\footnote{Pronounced ``C-for-all'', and written \CFA or \CFL.} is an evolutionary modernization of the C programming language currently being designed and built at the University of Waterloo by a team led by Peter Buhr.
+\CFA both fixes existing design problems and adds multiple new features to C, including name overloading, user-defined operators, parametric-polymorphic routines, and type constructors and destructors, among others.
+The new features make \CFA significantly more powerful and expressive than C, but impose a significant compile-time cost, particularly in the expression resolver, which must evaluate the typing rules of a much more complex type-system.
+
+The primary goal of this research project is to develop a sufficiently performant expression resolution algorithm, experimentally validate its performance, and integrate it into CFA, the \CFA reference compiler.
+Secondary goals of this project include the development of various new language features for \CFA: parametric-polymorphic (``generic'') types have already been designed and implemented, and reference types and user-defined conversions are under design consideration.
+An experimental performance-testing architecture for resolution algorithms is under development to determine the relative performance of different expression resolution algorithms, as well as the compile-time cost of adding various new features to the \CFA type-system.
+More broadly, this research should provide valuable data for implementers of compilers for other programming languages with similarly powerful static type-systems.

\section{\CFA}

-To make the scope of the proposed expression resolution problem more explicit, it is necessary to define the features of both C and \CFA (both current and proposed) which affect this algorithm.
-In some cases the interactions of multiple features make expression resolution a significantly more complex problem than any individual feature would; in others a feature which does not by itself add any complexity to expression resolution will trigger previously rare edge cases much more frequently.
+To make the scope of the proposed expression resolution problem more explicit, it is necessary to define the features of both C and \CFA (both current and proposed) that affect this algorithm.
+In some cases the interactions of multiple features make expression resolution a significantly more complex problem than any individual feature would; in other cases a feature that does not by itself add any complexity to expression resolution triggers previously rare edge cases more frequently.
+
+It is important to note that \CFA is not an object-oriented language.
+\CFA does have a system of (possibly implicit) type conversions derived from C's type conversions; while these conversions may be thought of as something like an inheritance hierarchy the underlying semantics are significantly different and such an analogy is loose at best.
+Particularly, \CFA has no concept of ``subclass'', and thus no need to integrate an inheritance-based form of polymorphism with its parametric and overloading-based polymorphism.
+The graph structure of the \CFA type conversions is also markedly different than an inheritance graph; it has neither a top nor a bottom type, and does not satisfy the lattice properties typical of inheritance graphs.

\subsection{Polymorphic Functions}

@@ -101 +107 @@
Such functions are written using a ©forall© clause (which gives the language its name):
\begin{lstlisting}
-forall(otype T)
+®forall(otype T)®
T identity(T x) {
    return x;

@@ -110 +116 @@
The ©identity© function above can be applied to any complete object type (or ``©otype©'').
The type variable ©T© is transformed into a set of additional implicit parameters to ©identity© which encode sufficient information about ©T© to create and return a variable of that type.
-The current \CFA implementation passes the size and alignment of the type represented by an ©otype© parameter, as well as an assignment operator, constructor, copy constructor \& destructor.
+The current \CFA implementation passes the size and alignment of the type represented by an ©otype© parameter, as well as an assignment operator, constructor, copy constructor and destructor.

Since bare polymorphic types do not provide a great range of available operations, \CFA also provides a \emph{type assertion} mechanism to provide further information about a type:
\begin{lstlisting}
-forall(otype T | { T twice(T); })
+forall(otype T ®| { T twice(T); }®)
T four_times(T x) {
    return twice( twice(x) );
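The hunk above says each otype parameter is passed as a size, an alignment, and a set of lifecycle operations. As a rough C++ sketch of that lowering — the struct layout, names, and calling convention here are invented for illustration, not CFA-CC's actual code generation — forall(otype T) T identity(T x) could compile to something like:

{{{#!cpp
// Illustrative sketch only: one way a compiler could lower
//     forall(otype T) T identity(T x)
// by passing the otype information as hidden arguments.
// Names and layout are invented for this example, not CFA-CC's real convention.
#include <cstddef>
#include <cstring>

struct otype_info {
    size_t size;                          // sizeof(T)
    size_t align;                         // alignof(T)
    void (*ctor)(void *);                 // default constructor
    void (*copy)(void *, const void *);   // copy constructor
    void (*assign)(void *, const void *); // assignment operator
    void (*dtor)(void *);                 // destructor
};

// T identity(T x): the result is written through a caller-allocated buffer,
// since the callee does not know sizeof(T) at compile time.
void identity(const otype_info *T, void *ret, const void *x) {
    T->copy(ret, x);   // construct the result as a copy of the argument
}

// The caller supplies the hidden otype_info argument, e.g. for int:
static void int_ctor(void *p) { *static_cast<int *>(p) = 0; }
static void int_copy(void *dst, const void *src) { std::memcpy(dst, src, sizeof(int)); }
static void int_dtor(void *) {}

int main() {
    otype_info int_info = { sizeof(int), alignof(int), int_ctor, int_copy, int_copy, int_dtor };
    int x = 42, y = 0;
    identity(&int_info, &y, &x);   // y == 42
}
}}}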
@@ -137 +143 @@
Finding appropriate functions to satisfy type assertions is essentially a recursive case of expression resolution, as it takes a name (that of the type assertion) and attempts to match it to a suitable declaration in the current scope.
If a polymorphic function can be used to satisfy one of its own type assertions, this recursion may not terminate, as it is possible that function will be examined as a candidate for its own type assertion unboundedly repeatedly.
-To avoid infinite loops, the current \Index*{CFA-CC} compiler imposes a fixed limit on the possible depth of recursion, similar to that employed by most \Index*[C++]{\CC} compilers for template expansion; this restriction means that there are some semantically well-typed expressions which cannot be resolved by {CFA-CC}.
+To avoid infinite loops, the current CFA compiler imposes a fixed limit on the possible depth of recursion, similar to that employed by most \CC compilers for template expansion; this restriction means that there are some semantically well-typed expressions which cannot be resolved by CFA.
One area of potential improvement this project proposes to investigate is the possibility of using the compiler's knowledge of the current set of declarations to more precisely determine when further type assertion satisfaction recursion will not produce a well-typed expression.
+
+\subsubsection{Traits}
+\CFA provides \emph{traits} as a means to name a group of type assertions, as in the example below:
+\begin{lstlisting}
+trait has_magnitude(otype T) {
+    bool ?<?(T, T);         // comparison operator for T
+    T -?(T);                // negation operator for T
+    void ?{}(T*, zero_t);   // constructor from 0 literal
+};
+
+forall(otype M | has_magnitude(M))
+int sgn( M m ) {
+    M zero = 0;   // uses zero_t constructor from trait
+    if ( m < zero ) return -1;
+    if ( zero < m ) return 1;
+    return 0;
+}
+
+// TODO write another function
+\end{lstlisting}

\subsection{Name Overloading}
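The fixed recursion limit mentioned in this hunk can be pictured with a small C++ sketch. The Decl/Scope structures, the name-based matching, and the cut-off value are hypothetical stand-ins, not CFA-CC's actual representation or limit:

{{{#!cpp
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for the resolver's data structures.
struct Decl {
    std::string name;                     // name the declaration can satisfy
    std::vector<std::string> assertions;  // assertions the declaration itself requires
};

struct Scope {
    std::vector<Decl> decls;
};

constexpr int kMaxAssertionDepth = 4;     // arbitrary illustrative cut-off

// Can assertion `name` be satisfied from `scope` within `depth` further levels of
// recursion?  A candidate may introduce assertions of its own (possibly including
// `name` again), which is what makes unrestricted recursion potentially unbounded.
bool satisfiable(const std::string& name, const Scope& scope, int depth = 0) {
    if (depth > kMaxAssertionDepth) return false;   // give up rather than loop forever
    for (const Decl& cand : scope.decls) {
        if (cand.name != name) continue;
        bool ok = true;
        for (const std::string& sub : cand.assertions)
            if (!satisfiable(sub, scope, depth + 1)) { ok = false; break; }
        if (ok) return true;
    }
    return false;
}

int main() {
    Scope scope{{ {"twice", {}}, {"four_times", {"twice"}} }};
    bool ok = satisfiable("four_times", scope);   // true: twice is available
    (void)ok;
}
}}}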
@@ -174 +200 @@
Open research questions on this topic include whether a conversion graph can be generated that represents each allowable conversion in C with a unique minimal-length path, such that the path lengths accurately represent the relative costs of the conversions, whether such a graph representation can be usefully augmented to include user-defined types as well as built-in types, and whether the graph can be efficiently represented and included in the expression resolver.

-\subsection{Constructors \& Destructors}
+\subsection{Constructors and Destructors}
Rob Shluntz, a current member of the \CFA research team, has added constructors and destructors to \CFA.
Each type has an overridable default-generated zero-argument constructor, copy constructor, assignment operator, and destructor; for struct types these functions each call their equivalents on each field of the struct.
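One naive encoding of the conversion graph discussed at the top of this hunk treats types as nodes and each allowable conversion as a weighted edge, with conversion cost taken as minimal path length; whether such an encoding can faithfully reproduce C's conversion costs is exactly the open question above. The C++ sketch below uses made-up types and weights purely for illustration:

{{{#!cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Types are nodes; each allowable conversion is a weighted edge.
using Graph = std::map<std::string, std::vector<std::pair<std::string, int>>>;

// Minimal conversion cost from `from` to every reachable type (Dijkstra's algorithm).
std::map<std::string, int> conversionCosts(const Graph& g, const std::string& from) {
    std::map<std::string, int> dist;
    using Item = std::pair<int, std::string>;                        // (cost, type)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[from] = 0;
    pq.push({0, from});
    while (!pq.empty()) {
        auto [d, t] = pq.top(); pq.pop();
        if (d > dist[t]) continue;                                   // stale queue entry
        auto it = g.find(t);
        if (it == g.end()) continue;
        for (auto& [next, w] : it->second) {
            int nd = d + w;
            if (!dist.count(next) || nd < dist[next]) { dist[next] = nd; pq.push({nd, next}); }
        }
    }
    return dist;   // missing entries mean "no conversion path"
}

int main() {
    // Placeholder conversions and weights, not C's actual rules:
    Graph g = { {"char", {{"int", 1}}}, {"int", {{"long", 1}, {"double", 2}}}, {"long", {{"double", 1}}} };
    auto cost = conversionCosts(g, "char");   // cost["double"] == 3 with these made-up weights
    (void)cost;
}
}}}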
@@ -180 +206 @@

\subsection{Generic Types}
-The author has added a generic type capability to \CFA, designed to efficiently and naturally integrate with \CFA's existing polymorphic functions.
+I have already added a generic type capability to \CFA, designed to efficiently and naturally integrate with \CFA's existing polymorphic functions.
A generic type can be declared by placing a ©forall© specifier on a struct or union declaration, and instantiated using a parenthesized list of types after the type name:
\begin{lstlisting}

@@ -195 +221 @@
\end{lstlisting}
For \emph{concrete} generic types, that is, those where none of the type parameters depend on polymorphic type variables (like ©pair(const char*, int)© above), the struct is essentially template expanded to a new struct type; for \emph{polymorphic} generic types (such as ©pair(const char*, T)© above), member access is handled by a runtime calculation of the field offset, based on the size and alignment information of the polymorphic parameter type.
-The default-generated constructors, destructor \& assignment operator for a generic type are polymorphic functions with the same list of type parameters as the generic type definition.
+The default-generated constructors, destructor and assignment operator for a generic type are polymorphic functions with the same list of type parameters as the generic type definition.

-Aside from giving users the ability to create more parameterized types than just the built-in pointer, array \& function types, the combination of generic types with polymorphic functions and implicit conversions makes the edge case where a polymorphic function can match its own assertions much more common, as follows:
+Aside from giving users the ability to create more parameterized types than just the built-in pointer, array and function types, the combination of generic types with polymorphic functions and implicit conversions makes the edge case where a polymorphic function can match its own assertions much more common, as follows:
\begin{itemize}
\item Given an expression in an untyped context, such as a top-level function call with no assignment of return values, apply a polymorphic implicit conversion to the expression that can produce multiple types (the built-in conversion from ©void*© to any other pointer type is one, but not the only).

@@ -221 +247 @@

\subsection{Reference Types}
-The author, in collaboration with the rest of the \CFA research team, has been designing \emph{reference types} for \CFA.
+I have been designing \emph{reference types} for \CFA, in collaboration with the rest of the \CFA research team.
Given some type ©T©, a ©T&© (``reference to ©T©'') is essentially an automatically dereferenced pointer; with these semantics most of the C standard's discussions of lvalues can be expressed in terms of references instead, with the benefit of being able to express the difference between the reference and non-reference version of a type in user code.
References preserve C's existing qualifier-dropping lvalue-to-rvalue conversion (\eg a ©const volatile int&© can be implicitly converted to a bare ©int©); the reference proposal also adds an rvalue-to-lvalue conversion to \CFA, implemented by storing the value in a new compiler-generated temporary and passing a reference to the temporary.
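Looking back at the generic-types hunk above, the runtime field-offset calculation it describes for polymorphic generic types amounts to ordinary alignment arithmetic over each field's size and alignment. The following C++ sketch shows the general technique only; CFA-CC's actual layout code may differ:

{{{#!cpp
#include <cstddef>

struct FieldInfo { size_t size; size_t align; };

size_t roundUp(size_t offset, size_t align) {
    return (offset + align - 1) / align * align;
}

// Offset of field `index` in a struct whose fields have the given sizes/alignments.
size_t fieldOffset(const FieldInfo fields[], size_t index) {
    size_t offset = 0;
    for (size_t i = 0; i <= index; ++i) {
        offset = roundUp(offset, fields[i].align);   // pad to the field's alignment
        if (i == index) return offset;
        offset += fields[i].size;
    }
    return offset;   // unreachable for a valid index
}

int main() {
    // pair(const char*, T) where T is only known at runtime to have size 2, alignment 2:
    FieldInfo fields[] = { {sizeof(const char*), alignof(const char*)}, {2, 2} };
    size_t second = fieldOffset(fields, 1);   // == 8 on a typical 64-bit target
    (void)second;
}
}}}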
@@ -229 +255 @@

\subsection{Literal Types}
-Another proposal currently under consideration for the \CFA type system is assigning special types to the literal values ©0© and ©1©.%, say ©zero_t© and ©one_t©.
+Another proposal currently under consideration for the \CFA type-system is assigning special types to the literal values ©0© and ©1©.%, say ©zero_t© and ©one_t©.
Implicit conversions from these types would allow ©0© and ©1© to be considered as values of many different types, depending on context, allowing expression desugarings like ©if ( x ) {}© $\Rightarrow$ ©if ( x != 0 ) {}© to be implemented efficiently and precisely.
This is a generalization of C's existing behaviour of treating ©0© as either an integer zero or a null pointer constant, and treating either of those values as boolean false.

@@ -248 +274 @@
Expression resolution is somewhat unavoidably exponential in $p$, the number of function parameters, and $d$, the depth of the expression tree, but these values are fixed by the user programmer, and generally bounded by reasonably small constants.
$k$, on the other hand, is mostly dependent on the representation of types in the system and the efficiency of type assertion checking; if a candidate argument combination can be compared to a function parameter list in linear time in the length of the list (\ie $k = 1$), then the $p^{k \cdot d}$ term is linear in the input size of the source code for the expression, otherwise the resolution algorithm will exhibit super-linear runtime growth on code containing more-deeply nested expressions.
-The number of valid interpretations of any subexpression, $i$, is bounded by the number of types in the system, which is possibly infinite, though practical resolution algorithms for \CFA must be able to place some finite bound on $i$, possibly at the expense of type system completeness.
+The number of valid interpretations of any subexpression, $i$, is bounded by the number of types in the system, which is possibly infinite, though practical resolution algorithms for \CFA must be able to place some finite bound on $i$, possibly at the expense of type-system completeness.

The research goal of this project is to develop a performant expression resolver for \CFA; this analysis suggests two primary areas of investigation to accomplish that end.
-The first is efficient argument-parameter matching; Bilson\cite{Bilson03} mentions significant optimization opportunities available in the current literature to improve on the existing {CFA-CC} compiler.
+The first is efficient argument-parameter matching; Bilson\cite{Bilson03} mentions significant optimization opportunities available in the current literature to improve on the existing CFA compiler.
%TODO: look up and lit review
The second, and likely more fruitful, area of investigation is heuristics and algorithmic approaches to reduce the number of argument interpretations considered in the common case; given the large ($p+1$) exponent on number of interpretations considered in the runtime analysis, even small reductions here could have a significant effect on overall resolver runtime.
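The statement above that $k = 1$ makes the $p^{k \cdot d}$ term linear in the input size of the expression leaves one step implicit. Filling it in — assuming a complete call tree in which every call has $p$ arguments, and using only quantities already named in this section:

{{{
% An expression tree of depth $d$ in which every call has $p$ arguments contains
\[
    \sum_{j=0}^{d} p^{j} \;=\; \frac{p^{d+1} - 1}{p - 1} \;=\; \Theta(p^{d})
\]
% call nodes (for fixed $p \ge 2$), and each node must be written out in the source,
% so the input size is $n = \Omega(p^{d})$.  Therefore
\[
    p^{k \cdot d} \;=\; \bigl(p^{d}\bigr)^{k} \;=\; O(n^{k}),
\]
% which is linear in $n$ exactly when $k = 1$, and grows super-linearly otherwise.
}}}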
@@ -299 +325 @@
Another approach would be to generate a set of possible implicit conversions for each set of interpretations of a given argument.
This would have the benefit of detecting ambiguous interpretations of arguments at the level of the argument rather than its containing call, would also never find more than one interpretation of the argument with a given type, and would re-use calculation of implicit conversions between function candidates.
On the other hand, this approach may unnecessarily generate argument interpretations that will never match a parameter, wasting work.
+Further, in the presence of tuple types this approach may lead to a combinatorial explosion of argument interpretations considered, unless the tuple can be considered as a sequence of elements rather than a unified whole.

\subsection{Candidate Set Generation}
-Cormack\cite{Cormack81}, Baker\cite{Baker82} \& Bilson\cite{Bilson03} all generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
+Cormack\cite{Cormack81}, Baker\cite{Baker82} and Bilson\cite{Bilson03} all generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
However, given that the top-level expression interpretation that is ultimately chosen will be the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is in some sense wasted work.
If we assume that user programmers will generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the higher-cost argument interpretations.

\subsubsection{Eager}
-Within the eager approach taken by Cormack, Baker \& Bilson, there are still variants to explore.
-Cormack \& Baker do not account for implicit conversions, and thus do not account for the possibility of multiple valid interpretations with distinct costs; Bilson, on the other hand, sorts the list of interpretations to aid in finding minimal-cost interpretations.
+Within the eager approach taken by Cormack, Baker and Bilson, there are still variants to explore.
+Cormack and Baker do not account for implicit conversions, and thus do not account for the possibility of multiple valid interpretations with distinct costs; Bilson, on the other hand, sorts the list of interpretations to aid in finding minimal-cost interpretations.
Sorting the lists of argument or function call interpretations by cost at some point during resolution may provide useful opportunities to short-circuit expression evaluation when a minimal-cost interpretation is found, though it is not clear if this short-circuiting behaviour would justify the cost of the sort.

@@ -315 +342 @@
However, if user programmers actually use relatively few implicit conversions, then the ``on arguments'' approach to implicit conversions will generate a large number of high-cost interpretations which may never be used.
The essence of the lazy approach to candidate set generation is to wrap the matching algorithm into the element generator of a lazy list type, only generating as few elements at a time as possible to ensure that the next-smallest-cost interpretation has been generated.
-Assuming that argument interpretations are provided to the parameter matching algorithm in sorted order, a sorted list of function call interpretations can be produced by generating combinations of arguments sorted by total cost\footnote{The author has developed a lazy $n$-way combination generation algorithm that can perform this task.}, then generating function call interpretations in the order suggested by this list.
+Assuming that argument interpretations are provided to the parameter matching algorithm in sorted order, a sorted list of function call interpretations can be produced by generating combinations of arguments sorted by total cost\footnote{I have already developed a lazy $n$-way combination generation algorithm to perform this task.}, then generating function call interpretations in the order suggested by this list.
Note that the function call interpretation chosen may have costs of its own, for instance polymorphic type binding, so in some cases a number of argument combinations (any combination whose marginal cost does not exceed the cost of the function call interpretation itself) may need to be considered to determine the next-smallest-cost function call interpretation.
Ideally, this candidate generation approach will lead to very few unused candidates being generated (in the expected case where the user programmer has, in fact, provided a validly-typable program), but this research project will need to determine whether or not the overheads of lazy generation exceed the benefit produced from considering fewer interpretations.
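The footnote above mentions a lazy $n$-way combination generation algorithm without giving it. Purely as an illustration of the idea — this is a textbook best-first enumeration, not the author's algorithm — the C++ sketch below yields combinations of per-argument interpretation indices in nondecreasing total cost, producing each combination only on demand:

{{{#!cpp
#include <queue>
#include <set>
#include <vector>

// `args[j]` holds the costs of argument j's interpretations; each list is assumed
// non-empty and already sorted in ascending order of cost.
class CostOrderedCombinations {
    const std::vector<std::vector<int>>& args;
    struct Node {
        int cost;
        std::vector<int> idx;
        bool operator>(const Node& o) const { return cost > o.cost; }
    };
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> frontier;
    std::set<std::vector<int>> seen;
public:
    explicit CostOrderedCombinations(const std::vector<std::vector<int>>& a) : args(a) {
        std::vector<int> first(a.size(), 0);
        int cost = 0;
        for (auto& v : a) cost += v.front();            // cheapest interpretation of each argument
        frontier.push({cost, first});
        seen.insert(first);
    }
    // Returns the next-cheapest combination of interpretation indices, one per argument.
    std::vector<int> next() {
        Node cur = frontier.top(); frontier.pop();
        for (size_t j = 0; j < args.size(); ++j) {      // successors: bump one argument's index
            if (cur.idx[j] + 1 >= (int)args[j].size()) continue;
            Node succ = cur;
            succ.idx[j] += 1;
            succ.cost += args[j][succ.idx[j]] - args[j][succ.idx[j] - 1];
            if (seen.insert(succ.idx).second) frontier.push(succ);
        }
        return cur.idx;
    }
    bool exhausted() const { return frontier.empty(); }
};

int main() {
    std::vector<std::vector<int>> argCosts = { {0, 2}, {0, 1, 5} };   // sorted costs per argument
    CostOrderedCombinations gen(argCosts);
    while (!gen.exhausted()) gen.next();   // {0,0},{0,1},{1,0},{1,1},{0,2},{1,2} by total cost
}
}}}

Feeding combinations to the parameter-matching step in this order is what allows resolution to stop at the first valid minimal-cost call interpretation, modulo the marginal-cost caveat noted above.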
@@ -333 +360 @@
%\subsection{Parameter-Directed}
%\textbf{TODO: Richard's algorithm isn't Baker (Cormack?), disentangle from this section \ldots}.
-%The expression resolution algorithm used by the existing iteration of {CFA-CC} is based on Baker's\cite{Baker82} algorithm for overload resolution in Ada.
+%The expression resolution algorithm used by the existing iteration of CFA is based on Baker's\cite{Baker82} algorithm for overload resolution in Ada.
%The essential idea of this algorithm is to first find the possible interpretations of the most deeply nested subexpressions, then to use these interpretations to recursively generate valid interpretations of their superexpressions.
%To simplify matters, the only expressions considered in this discussion of the algorithm are function application and literal expressions; other expression types can generally be considered to be variants of one of these for the purposes of the resolver, \eg variables are essentially zero-argument functions.

@@ -341 +368 @@
%\textbf{TODO: Figure}
%
-%Baker's algorithm was designed to account for name overloading; Richard Bilson\cite{Bilson03} extended this algorithm to also handle polymorphic functions, implicit conversions \& multiple return types when designing the original \CFA compiler.
+%Baker's algorithm was designed to account for name overloading; Richard Bilson\cite{Bilson03} extended this algorithm to also handle polymorphic functions, implicit conversions and multiple return types when designing the original \CFA compiler.
%The core of the algorithm is a function which Baker refers to as $gen\_calls$.
%$gen\_calls$ takes as arguments the name of a function $f$ and a list containing the set of possible subexpression interpretations $S_j$ for each argument of the function and returns a set of possible interpretations of calling that function on those arguments.
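The commented-out draft above, together with the earlier description of the eager approach, outlines the common bottom-up shape of these resolvers: resolve the most deeply nested subexpressions first, then combine their interpretations into interpretations of each enclosing call. The C++ skeleton below is schematic only — placeholder types and a trivial stand-in overload table, neither Baker's $gen\_calls$ nor CFA-CC's resolver:

{{{#!cpp
#include <string>
#include <vector>

struct Type   { std::string name; };
struct Expr   { std::string fn; std::vector<Expr> args; };   // a variable is a zero-argument call
struct Interp { Type result; int cost; };

// Placeholder overload table: accept any call, cost = sum of argument costs + 1.
// A real resolver would consult the symbol table and check parameter compatibility here.
std::vector<Interp> applicableOverloads(const std::string& fn, const std::vector<Interp>& argChoice) {
    int cost = 1;
    for (const Interp& a : argChoice) cost += a.cost;
    return { Interp{ Type{fn + "_result"}, cost } };
}

// Enumerate every combination of one interpretation per argument (naive / eager).
void forEachCombination(const std::vector<std::vector<Interp>>& argInterps,
                        std::vector<Interp>& choice, size_t j,
                        const std::string& fn, std::vector<Interp>& out) {
    if (j == argInterps.size()) {
        for (const Interp& i : applicableOverloads(fn, choice)) out.push_back(i);
        return;
    }
    for (const Interp& cand : argInterps[j]) {
        choice.push_back(cand);
        forEachCombination(argInterps, choice, j + 1, fn, out);
        choice.pop_back();
    }
}

// Bottom-up resolution: interpretations of an expression are built from the
// interpretations of its subexpressions.
std::vector<Interp> resolve(const Expr& e) {
    std::vector<std::vector<Interp>> argInterps;
    for (const Expr& a : e.args) argInterps.push_back(resolve(a));
    std::vector<Interp> out, choice;
    forEachCombination(argInterps, choice, 0, e.fn, out);
    return out;   // the caller keeps the minimal-cost valid interpretation(s)
}

int main() {
    Expr e{"f", { Expr{"x", {}}, Expr{"y", {}} }};   // f(x, y)
    auto interps = resolve(e);
    (void)interps;
}
}}}

The lazy, cost-ordered generator sketched earlier would replace the naive forEachCombination enumeration here.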
@@ -363 +390 @@
\section{Proposal}
Baker\cite{Baker82} discussed various expression resolution algorithms that could handle name overloading, but left experimental comparison of those algorithms to future work; Bilson\cite{Bilson03} described one extension of Baker's algorithm to handle implicit conversions, but did not fully explore the space of algorithmic approaches to handle both overloaded names and implicit conversions.
-This project is intended to experimentally test a number of expression resolution algorithms which are powerful enough to handle the \CFA type system, including both name overloading and implicit conversions.
+This project is intended to experimentally test a number of expression resolution algorithms which are powerful enough to handle the \CFA type-system, including both name overloading and implicit conversions.
This comparison will close Baker's open research question, as well as potentially improving on Bilson's \CFA compiler.

-Rather than testing all of these algorithms in-place in the \CFA compiler, a resolver prototype will be developed which acts on a simplified input language encapsulating the essential details of the \CFA type system\footnote{Note that this simplified input language is not required to be a usable programming language.}.
+Rather than testing all of these algorithms in-place in the \CFA compiler, a resolver prototype will be developed which acts on a simplified input language encapsulating the essential details of the \CFA type-system\footnote{Note that this simplified input language is not required to be a usable programming language.}.
Multiple variants of this resolver prototype will be implemented, each encapsulating a different expression resolution variant, sharing as much code as feasible.
These variants will be instrumented to test runtime performance, and run on a variety of input files; the input files may be generated programmatically or from existing code in \CFA or similar languages.
-These experimental results will allow the research team to determine the algorithm likely to be most performant in practical use, and replace {CFA-CC}'s existing expression resolver with that code.
+These experimental results will allow the research team to determine the algorithm likely to be most performant in practical use, and replace CFA's existing expression resolver with that code.
The experimental results will also provide some empirical sense of the compile-time cost of various language features by comparing the results of the most performant resolver variant that supports the feature with the most performant resolver variant that doesn't, a useful capability to guide language design.

-This proposed project should provide valuable data on how to implement a performant compiler for modern programming languages such as \CFA with powerful static type systems, specifically targeting the feature interaction between name overloading and implicit conversions.
+This proposed project should provide valuable data on how to implement a performant compiler for modern programming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions.

\appendix

@@ -379 +406 @@
\begin{center}
\begin{tabular}{ | r @{--} l | p{4in} | }
-\hline May 2015 & April 2016 & Project familiarization and generic types design \& implementation. \\
-\hline May 2016 & April 2017 & Design \& implement resolver prototype and run performance experiments. \\
-\hline May 2017 & August 2017 & Integrate new language features and best-performing resolver prototype into {CFA-CC}. \\
-\hline September 2017 & January 2018 & Thesis writing \& defense. \\
+\hline May 2015 & April 2016 & Project familiarization and generic types design and implementation. \\
+\hline May 2016 & April 2017 & Design and implement resolver prototype and run performance experiments. \\
+\hline May 2017 & August 2017 & Integrate new language features and best-performing resolver prototype into CFA. \\
+\hline September 2017 & January 2018 & Thesis writing and defense. \\
\hline
\end{tabular}