Changeset d30790f
- Timestamp: Aug 22, 2016, 10:05:14 AM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, ctor, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: 8d2844a
- Parents: 4f147cc (diff), 80722d0 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
- Files: 5 edited
doc/aaron_comp_II/comp_II.tex
r4f147cc rd30790f 91 91 \CFA\footnote{Pronounced ``C-for-all'', and written \CFA or \CFL.} is an evolutionary modernization of the C programming language currently being designed and built at the University of Waterloo by a team led by Peter Buhr. 92 92 \CFA both fixes existing design problems and adds multiple new features to C, including name overloading, user-defined operators, parametric-polymorphic routines, and type constructors and destructors, among others. 93 The new features make \CFA significantly more powerful and expressive than C, but impose a significant compile-time cost, particularly in the expression resolver, which must evaluate the typing rules of a muchmore complex type-system.93 The new features make \CFA more powerful and expressive than C, but impose a compile-time cost, particularly in the expression resolver, which must evaluate the typing rules of a significantly more complex type-system. 94 94 95 95 The primary goal of this research project is to develop a sufficiently performant expression resolution algorithm, experimentally validate its performance, and integrate it into CFA, the \CFA reference compiler. … … 104 104 105 105 It is important to note that \CFA is not an object-oriented language. 106 \CFA does have a system of (possibly implicit) type conversions derived from C's type conversions; while these conversions may be thought of as something like an inheritance hierarchy the underlying semantics are significantly different and such an analogy is loose at best.106 \CFA does have a system of (possibly implicit) type conversions derived from C's type conversions; while these conversions may be thought of as something like an inheritance hierarchy, the underlying semantics are significantly different and such an analogy is loose at best. 107 107 Particularly, \CFA has no concept of ``subclass'', and thus no need to integrate an inheritance-based form of polymorphism with its parametric and overloading-based polymorphism. 108 108 The graph structure of the \CFA type conversions is also markedly different than an inheritance graph; it has neither a top nor a bottom type, and does not satisfy the lattice properties typical of inheritance graphs. … … 120 120 \end{lstlisting} 121 121 The ©identity© function above can be applied to any complete object type (or ``©otype©''). 122 The type variable ©T© is transformed into a set of additional implicit parameters to ©identity© which encode sufficient information about ©T© to create and return a variable of that type.122 The type variable ©T© is transformed into a set of additional implicit parameters to ©identity©, which encode sufficient information about ©T© to create and return a variable of that type. 123 123 The current \CFA implementation passes the size and alignment of the type represented by an ©otype© parameter, as well as an assignment operator, constructor, copy constructor and destructor. 124 124 Here, the runtime cost of polymorphism is spread over each polymorphic call, due to passing more arguments to polymorphic functions; preliminary experiments have shown this overhead to be similar to \CC virtual function calls. 125 125 Determining if packaging all polymorphic arguments to a function into a virtual function table would reduce the runtime overhead of polymorphic calls is an open research question. 
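The implicit otype information described above (size, alignment, assignment operator, constructor, copy constructor, destructor) and the open question about packaging it into a single virtual-function-table-like argument can be pictured with a small C++ sketch. The names and layout below are hypothetical illustrations, not the CFA compiler's actual generated code or ABI.

    #include <cstddef>

    struct otype_info {                       // packaged form of the implicit parameters
        std::size_t size, align;              // layout of the bound type T
        void (*assign)(void*, const void*);   // assignment operator
        void (*ctor)(void*);                  // default constructor
        void (*copy)(void*, const void*);     // copy constructor
        void (*dtor)(void*);                  // destructor
    };

    // separate-argument form: one extra implicit argument per piece of information
    void identity_separate(void* ret, const void* x, std::size_t size, std::size_t align,
                           void (*copy)(void*, const void*)) {
        (void)size; (void)align;              // unused in this toy body
        copy(ret, x);                         // construct the return value from x
    }

    // packaged form: a single extra argument, at the cost of one indirection per use
    void identity_packaged(void* ret, const void* x, const otype_info* info) {
        info->copy(ret, x);
    }

The trade-off the open question asks about is visible in the signatures: fewer implicit arguments per call versus an extra pointer dereference each time a passed operation is used.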
126 126 127 Since bare polymorphic types do not provide a great range of available operations, \CFA alsoprovides a \emph{type assertion} mechanism to provide further information about a type:127 Since bare polymorphic types do not provide a great range of available operations, \CFA provides a \emph{type assertion} mechanism to provide further information about a type: 128 128 \begin{lstlisting} 129 129 forall(otype T ®| { T twice(T); }®) … … 137 137 \end{lstlisting} 138 138 These type assertions may be either variable or function declarations that depend on a polymorphic type variable. 139 ©four_times© can only be called with an argument for which there exists a function named ©twice© that can take that argument and return another value of the same type; a pointer to the appropriate ©twice© function is passed as an additional implicit parameter to the call to©four_times©.139 ©four_times© can only be called with an argument for which there exists a function named ©twice© that can take that argument and return another value of the same type; a pointer to the appropriate ©twice© function is passed as an additional implicit parameter to the call of ©four_times©. 140 140 141 141 Monomorphic specializations of polymorphic functions can themselves be used to satisfy type assertions. … … 148 148 The compiler accomplishes this by creating a wrapper function calling ©twice // (2)© with ©S© bound to ©double©, then providing this wrapper function to ©four_times©\footnote{©twice // (2)© could also have had a type parameter named ©T©; \CFA specifies renaming of the type parameters, which would avoid the name conflict with the type variable ©T© of ©four_times©.}. 149 149 150 Finding appropriate functions to satisfy type assertions is essentially a recursive case of expression resolution, as it takes a name (that of the type assertion) and attempts to match it to a suitable declaration in the current scope.150 Finding appropriate functions to satisfy type assertions is essentially a recursive case of expression resolution, as it takes a name (that of the type assertion) and attempts to match it to a suitable declaration \emph{in the current scope}. 151 151 If a polymorphic function can be used to satisfy one of its own type assertions, this recursion may not terminate, as it is possible that function is examined as a candidate for its own type assertion unboundedly repeatedly. 152 152 To avoid infinite loops, the current CFA compiler imposes a fixed limit on the possible depth of recursion, similar to that employed by most \CC compilers for template expansion; this restriction means that there are some semantically well-typed expressions that cannot be resolved by CFA. … … 170 170 forall(otype M | has_magnitude(M)) 171 171 M max_magnitude( M a, M b ) { 172 M aa = abs(a), ab = abs(b); 173 return aa < ab ? b : a; 172 return abs(a) < abs(b) ? b : a; 174 173 } 175 174 \end{lstlisting} 176 175 177 176 Semantically, traits are simply a named lists of type assertions, but they may be used for many of the same purposes that interfaces in Java or abstract base classes in \CC are used for. 
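The wrapper-function technique for satisfying the twice assertion, described in the chunk above, can be approximated in plain C++ by passing the assertion as an explicit function pointer. The function names and the use of overloading in place of implicit assertion parameters below are illustrative assumptions, not the compiler's real calling convention.

    static int twice_int(int x) { return x + x; }                    // monomorphic twice, playing the role of (1)

    template<typename S> static S twice_poly(S x) { return x + x; }  // playing the role of polymorphic twice (2)

    static double twice_wrapper(double x) { return twice_poly<double>(x); } // compiler-style wrapper binding S to double

    static int    four_times(int x,    int    (*twice)(int))    { return twice(twice(x)); }
    static double four_times(double x, double (*twice)(double)) { return twice(twice(x)); }

    int main() {
        int    a = four_times(2,   twice_int);      // assertion satisfied directly by the monomorphic twice
        double b = four_times(1.5, twice_wrapper);  // assertion satisfied via the wrapper around the polymorphic twice
        return (a == 8 && b == 6.0) ? 0 : 1;
    }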
178 Unlike Java interfaces or \CC base classes, \CFA types do not explicitly state any inheritance relationship to traits they satisfy; this can be considered a form of structural inheritance, similar to i nterface implementationin Go, as opposed to the nominal inheritance model of Java and \CC.177 Unlike Java interfaces or \CC base classes, \CFA types do not explicitly state any inheritance relationship to traits they satisfy; this can be considered a form of structural inheritance, similar to implementation of an interface in Go, as opposed to the nominal inheritance model of Java and \CC. 179 178 Nominal inheritance can be simulated with traits using marker variables or functions: 180 179 \begin{lstlisting} … … 190 189 \end{lstlisting} 191 190 192 Traits, however, are significantly more powerful than nominal-inheritance interfaces; firstly, due to the scoping rules of the declarations whichsatisfy a trait's type assertions, a type may not satisfy a trait everywhere that the type is declared, as with ©char© and the ©nominal© trait above.193 Secondly, traits may be used to declare a relationship between multiple types, a property whichmay be difficult or impossible to represent in nominal-inheritance type systems:191 Traits, however, are significantly more powerful than nominal-inheritance interfaces; firstly, due to the scoping rules of the declarations that satisfy a trait's type assertions, a type may not satisfy a trait everywhere that the type is declared, as with ©char© and the ©nominal© trait above. 192 Secondly, traits may be used to declare a relationship among multiple types, a property that may be difficult or impossible to represent in nominal-inheritance type systems: 194 193 \begin{lstlisting} 195 194 trait pointer_like(®otype Ptr, otype El®) { … … 202 201 }; 203 202 204 typedef list *list_iterator;203 typedef list *list_iterator; 205 204 206 205 lvalue int *?( list_iterator it ) { … … 209 208 \end{lstlisting} 210 209 211 In the example above, ©(list_iterator, int)© satisfies ©pointer_like© by the given function, and ©(list_iterator, list)© also satisfies ©pointer_like© by the built-in pointer dereference operator. 210 In the example above, ©(list_iterator, int)© satisfies ©pointer_like© by the user-defined dereference function, and ©(list_iterator, list)© also satisfies ©pointer_like© by the built-in dereference operator for pointers. 211 Given a declaration ©list_iterator it©, ©*it© can be either an ©int© or a ©list©, with the meaning disambiguated by context (\eg ©int x = *it;© interprets ©*it© as an ©int©, while ©(*it).value = 42;© interprets ©*it© as a ©list©). 212 212 While a nominal-inheritance system with associated types could model one of those two relationships by making ©El© an associated type of ©Ptr© in the ©pointer_like© implementation, few such systems could model both relationships simultaneously. 213 213 214 The flexibility of \CFA's implicit trait 215 The ability of types to begin to or cease to satisfy traits when declarations go into or out of scope makes caching of trait satisfaction judgements difficult, and the ability of traits to take multiple type parameters c ouldlead to a combinatorial explosion of work in any attempt to pre-compute trait satisfaction relationships.214 The flexibility of \CFA's implicit trait-satisfaction mechanism provides programmers with a great deal of power, but also blocks some optimization approaches for expression resolution. 
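For comparison, the two-parameter pointer_like trait above has a close structural analogue in a C++20 concept. The sketch below is hypothetical and only captures the built-in-dereference half of the example, since C++ has no user-defined dereference on a raw pointer for the (list_iterator, int) case.

    #include <concepts>

    template<typename Ptr, typename El>
    concept pointer_like = requires(Ptr p) {
        { *p } -> std::convertible_to<El&>;   // corresponds to: lvalue El *?( Ptr )
    };

    struct list { int value; list* next; };
    using list_iterator = list*;

    static_assert(pointer_like<list_iterator, list>);  // satisfied by the built-in dereference
    static_assert(pointer_like<int*, int>);            // likewise for plain pointers
    // Unlike CFA, C++ cannot also make (list_iterator, int) satisfy the concept,
    // which is part of why the multi-parameter relationship is hard to express nominally.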
215 The ability of types to begin to or cease to satisfy traits when declarations go into or out of scope makes caching of trait satisfaction judgements difficult, and the ability of traits to take multiple type parameters can lead to a combinatorial explosion of work in any attempt to pre-compute trait satisfaction relationships. 216 216 On the other hand, the addition of a nominal inheritance mechanism to \CFA's type system or replacement of \CFA's trait satisfaction system with a more object-oriented inheritance model and investigation of possible expression resolution optimizations for such a system may be an interesting avenue of further research. 217 217 218 218 \subsection{Name Overloading} 219 219 In C, no more than one variable or function in the same scope may share the same name\footnote{Technically, C has multiple separated namespaces, one holding ©struct©, ©union©, and ©enum© tags, one holding labels, one holding typedef names, variable, function, and enumerator identifiers, and one for each ©struct© or ©union© type holding the field names.}, and variable or function declarations in inner scopes with the same name as a declaration in an outer scope hide the outer declaration. 220 This makes finding the proper declaration to match to a variable expression or function application a simple matter of symboltable lookup, which can be easily and efficiently implemented.220 This restriction makes finding the proper declaration to match to a variable expression or function application a simple matter of symbol-table lookup, which can be easily and efficiently implemented. 221 221 \CFA, on the other hand, allows overloading of variable and function names, so long as the overloaded declarations do not have the same type, avoiding the multiplication of variable and function names for different types common in the C standard library, as in the following example: 222 222 \begin{lstlisting} … … 229 229 double max = DBL_MAX; // (4) 230 230 231 max(7, -max); // uses (1) and (3), by matching int type of 7232 max(max, 3.14); // uses (2) and (4), by matching double type of 3.14231 max(7, -max); // uses (1) and (3), by matching int type of the constant 7 232 max(max, 3.14); // uses (2) and (4), by matching double type of the constant 3.14 233 233 234 234 max(max, -max); // ERROR: ambiguous 235 int m = max(max, -max); // uses (1) and (3) twice, byreturn type235 int m = max(max, -max); // uses (1) once and (3) twice, by matching return type 236 236 \end{lstlisting} 237 237 … … 239 239 240 240 \subsection{Implicit Conversions} 241 In addition to the multiple interpretations of an expression produced by name overloading, \CFA also supports all of the implicit conversions present in C, producing further candidate interpretations for expressions.242 C does not have a traditionally-definedinheritance hierarchy of types, but the C standard's rules for the ``usual arithmetic conversions'' define which of the built-in types are implicitly convertable to which other types, and the relative cost of any pair of such conversions from a single source type.243 \CFA adds to the usual arithmetic conversions rules for determining the cost of binding a polymorphic type variable in a function call; such bindings are cheaper than any \emph{unsafe} (narrowing) conversion, \eg ©int© to ©char©, but more expensive than any \emph{safe} (widening) conversion, \eg ©int© to ©double©.244 245 The expression resolution problem, then, is to find the unique minimal-cost interpretation of each expression in the program, where all 
identifiers must be matched to a declaration and implicit conversions or polymorphic bindings of the result of an expression may increase the cost of the expression.241 In addition to the multiple interpretations of an expression produced by name overloading, \CFA must support all of the implicit conversions present in C for backward compatibility, producing further candidate interpretations for expressions. 242 C does not have a inheritance hierarchy of types, but the C standard's rules for the ``usual arithmetic conversions'' define which of the built-in types are implicitly convertable to which other types, and the relative cost of any pair of such conversions from a single source type. 243 \CFA adds to the usual arithmetic conversions rules defining the cost of binding a polymorphic type variable in a function call; such bindings are cheaper than any \emph{unsafe} (narrowing) conversion, \eg ©int© to ©char©, but more expensive than any \emph{safe} (widening) conversion, \eg ©int© to ©double©. 244 245 The expression resolution problem, then, is to find the unique minimal-cost interpretation of each expression in the program, where all identifiers must be matched to a declaration, and implicit conversions or polymorphic bindings of the result of an expression may increase the cost of the expression. 246 246 Note that which subexpression interpretation is minimal-cost may require contextual information to disambiguate. 247 For instance, in the example in the previous subsection, ©max(max, -max)© cannot be unambiguously resolved, but ©int m = max(max, -max) ;© has a single minimal-cost resolution.248 ©int m = (int)max((double)max, -(double)max)© is also be a valid interpretation, but is not minimal-cost due to the unsafe cast from the ©double© result of ©max© to ©int© (the two ©double© casts function as type ascriptions selecting ©double max© rather than casts from ©int max© to ©double©, and as such are zero-cost).247 For instance, in the example in the previous subsection, ©max(max, -max)© cannot be unambiguously resolved, but ©int m = max(max, -max)© has a single minimal-cost resolution. 248 While the interpretation ©int m = (int)max((double)max, -(double)max)© is also a valid interpretation, it is not minimal-cost due to the unsafe cast from the ©double© result of ©max© to ©int© (the two ©double© casts function as type ascriptions selecting ©double max© rather than casts from ©int max© to ©double©, and as such are zero-cost). 249 249 250 250 \subsubsection{User-generated Implicit Conversions} … … 252 252 Such a conversion system should be simple for programmers to utilize, and fit naturally with the existing design of implicit conversions in C; ideally it would also be sufficiently powerful to encode C's usual arithmetic conversions itself, so that \CFA only has one set of rules for conversions. 253 253 254 Ditchfield~\cite{Ditchfield:conversions} haslaid out a framework for using polymorphic-conversion-constructor functions to create a directed acyclic graph (DAG) of conversions.254 Ditchfield~\cite{Ditchfield:conversions} laid out a framework for using polymorphic-conversion-constructor functions to create a directed acyclic graph (DAG) of conversions. 255 255 A monomorphic variant of these functions can be used to mark a conversion arc in the DAG as only usable as the final step in a conversion. 
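The final-step marking and the path-length cost model just described can be sketched as a small graph search. The Arc/ConvDag types and the breadth-first formulation below are assumptions made for illustration, not the representation proposed for the resolver.

    #include <map>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    struct Arc { std::string to; bool final; };            // final: usable only as the last step
    using ConvDag = std::map<std::string, std::vector<Arc>>;

    // breadth-first search yields the shortest (minimal-cost) conversion path; -1 if none exists
    int conversionCost(const ConvDag& dag, const std::string& from, const std::string& to) {
        std::queue<std::pair<std::string, int>> work;
        std::map<std::string, bool> seen;
        work.push({from, 0});
        seen[from] = true;
        while (!work.empty()) {
            auto [type, cost] = work.front();
            work.pop();
            if (type == to) return cost;
            auto arcs = dag.find(type);
            if (arcs == dag.end()) continue;
            for (const Arc& a : arcs->second) {
                if (a.final && a.to != to) continue;        // a final arc cannot be followed by another arc
                if (!seen[a.to]) { seen[a.to] = true; work.push({a.to, cost + 1}); }
            }
        }
        return -1;
    }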
256 With these two types of conversion arcs, separate DAGs can be created for the safe and the unsafe conversions, and conversion cost can be represented as path length through the DAG.256 With these two types of conversion arcs, separate DAGs can be created for the safe and the unsafe conversions, and conversion cost can be represented the length of the shortest path through the DAG from one type to another. 257 257 \begin{figure}[h] 258 258 \centering 259 259 \includegraphics{conversion_dag} 260 \caption{A portion of the implicit conversion DAG for built-in types.} 260 \caption{A portion of the implicit conversion DAG for built-in types.}\label{fig:conv_dag} 261 261 \end{figure} 262 As can be seen in the example DAG above, there are either safe or unsafe paths between each of the arithmetic types listed; the ``final'' arcs are important both to avoid creating cycles in the signed-unsigned conversions, and to disambiguate potential diamond conversions (\eg, if the ©int© to ©unsigned int© conversion was not marked final there would be two length-two paths from ©int© to ©unsigned long©, and it would beimpossible to choose which one; however, since the ©unsigned int© to ©unsigned long© arc can not be traversed after the final ©int© to ©unsigned int© arc, there is a single unambiguous conversion path from ©int© to ©unsigned long©).262 As can be seen in Figure~\ref{fig:conv_dag}, there are either safe or unsafe paths between each of the arithmetic types listed; the ``final'' arcs are important both to avoid creating cycles in the signed-unsigned conversions, and to disambiguate potential diamond conversions (\eg, if the ©int© to ©unsigned int© conversion was not marked final there would be two length-two paths from ©int© to ©unsigned long©, making it impossible to choose which one; however, since the ©unsigned int© to ©unsigned long© arc can not be traversed after the final ©int© to ©unsigned int© arc, there is a single unambiguous conversion path from ©int© to ©unsigned long©). 263 263 264 264 Open research questions on this topic include: 265 265 \begin{itemize} 266 266 \item Can a conversion graph be generated that represents each allowable conversion in C with a unique minimal-length path such that the path lengths accurately represent the relative costs of the conversions? 267 \item Can such a graph representation canbe usefully augmented to include user-defined types as well as built-in types?268 \item Can the graph canbe efficiently represented and used in the expression resolver?267 \item Can such a graph representation be usefully augmented to include user-defined types as well as built-in types? 268 \item Can the graph be efficiently represented and used in the expression resolver? 269 269 \end{itemize} 270 270 271 271 \subsection{Constructors and Destructors} 272 272 Rob Shluntz, a current member of the \CFA research team, has added constructors and destructors to \CFA. 273 Each type has an overridable default-generated zero-argument constructor, copy constructor, assignment operator, and destructor; for ©struct© types these functions each call their equivalents on each field of the ©struct©. 274 This affects expression resolution because an ©otype© type variable ©T© implicitly adds four type assertions, one for each of these four functions, so assertion resolution is pervasive in \CFA polymorphic functions, even those without any explicit type assertions. 
273 Each type has an overridable default-generated zero-argument constructor, copy constructor, assignment operator, and destructor. 274 For ©struct© types these functions each call their equivalents on each field of the ©struct©. 275 This feature affects expression resolution because an ©otype© type variable ©T© implicitly adds four type assertions, one for each of these four functions, so assertion resolution is pervasive in \CFA polymorphic functions, even those without any explicit type assertions. 275 276 The following example shows the implicitly-generated code in green: 276 277 \begin{lstlisting} … … 280 281 }; 281 282 282 ¢void ?{}(kv *this) { 283 ?{}(& this->key);284 ?{}(& this->value);285 } 286 void ?{}(kv *this, kv that) { 287 ?{}(& this->key, that.key);288 ?{}(& this->value, that.value);289 } 290 kv ?=?(kv *this, kv that) { 291 ?=?(& this->key, that.key);292 ?=?(& this->value, that.value);283 ¢void ?{}(kv *this) { // default constructor 284 ?{}(&(this->key)); // call recursively on members 285 ?{}(&(this->value)); 286 } 287 void ?{}(kv *this, kv that) { // copy constructor 288 ?{}(&(this->key), that.key); 289 ?{}(&(this->value), that.value); 290 } 291 kv ?=?(kv *this, kv that) { // assignment operator 292 ?=?(&(this->key), that.key); 293 ?=?(&(this->value), that.value); 293 294 return *this; 294 295 } 295 void ^?{}(kv *this) { 296 ^?{}(& this->key);297 ^?{}(& this->value);296 void ^?{}(kv *this) { // destructor 297 ^?{}(&(this->key)); 298 ^?{}(&(this->value)); 298 299 }¢ 299 300 … … 335 336 \begin{itemize} 336 337 \item Since there is an implicit conversion from ©void*© to any pointer type, the highlighted expression can be interpreted as either a ©void*©, matching ©f // (1)©, or a ©box(S)*© for some type ©S©, matching ©f // (2)©. 337 \item To determine the cost of the ©box(S)© interpretation, a type must be found for ©S© whichsatisfies the ©otype© implicit type assertions (assignment operator, default and copy constructors, and destructor); one option is ©box(S2)© for some type ©S2©.338 \item To determine the cost of the ©box(S)© interpretation, a type must be found for ©S© that satisfies the ©otype© implicit type assertions (assignment operator, default and copy constructors, and destructor); one option is ©box(S2)© for some type ©S2©. 338 339 \item The assignment operator, default and copy constructors, and destructor of ©box(T)© are also polymorphic functions, each of which require the type parameter ©T© to have an assignment operator, default and copy constructors, and destructor. When choosing an interpretation for ©S2©, one option is ©box(S3)©, for some type ©S3©. 339 340 \item The previous step repeats until stopped, with four times as much work performed at each step. 340 341 \end{itemize} 341 This problem can occur in any resolution context where a polymorphic function that can satisfy its own type assertions is required for a possible interpretation of an expression with no constraints on its type, and is thus not limited to combinations of generic types with ©void*© conversions, though constructors for generic types often satisfy their own assertions and a polymorphic conversion such as the ©void*© conversion to a polymorphic variable can create an expression with no constraints on its type. 
342 This problem can occur in any resolution context where a polymorphic function can satisfy its own type assertions is required for a possible interpretation of an expression with no constraints on its type, and is thus not limited to combinations of generic types with ©void*© conversions. 343 However, constructors for generic types often satisfy their own assertions and a polymorphic conversion such as the ©void*© conversion to a polymorphic variable is a common way to create an expression with no constraints on its type. 342 344 As discussed above, the \CFA expression resolver must handle this possible infinite recursion somehow, and it occurs fairly naturally in code like the above that uses generic types. 343 345 … … 345 347 \CFA adds \emph{tuple types} to C, a syntactic facility for referring to lists of values anonymously or with a single identifier. 346 348 An identifier may name a tuple, and a function may return one. 347 Particularly relevantly for resolution, a tuple may be implicitly \emph{destructured} into a list of values, as in the call to ©swap© below:348 \begin{lstlisting} 349 [char, char] x = [ '!', '?' ]; 350 int x = 42; 351 352 forall(otype T) [T, T] swap( T a, T b ) { return [b, a]; } 349 Particularly relevantly for resolution, a tuple may be implicitly \emph{destructured} into a list of values, as in the call to ©swap©: 350 \begin{lstlisting} 351 [char, char] x = [ '!', '?' ]; // (1) 352 int x = 42; // (2) 353 354 forall(otype T) [T, T] swap( T a, T b ) { return [b, a]; } // (3) 353 355 354 356 x = swap( x ); // destructure [char, char] x into two elements of parameter list 355 357 // cannot use int x for parameter, not enough arguments to swap 356 \end{lstlisting} 357 Tuple destructuring means that the mapping from the position of a subexpression in the argument list to the position of a paramter in the function declaration is not straightforward, as some arguments may be expandable to different numbers of parameters, like ©x© above. 358 359 void swap( int, char, char ); // (4) 360 361 swap( x, x ); // resolved as (4) on (2) and (1) 362 // (3) on (2) and (2) is close, but the polymorphic binding makes it not minimal-cost 363 \end{lstlisting} 364 Tuple destructuring means that the mapping from the position of a subexpression in the argument list to the position of a paramter in the function declaration is not straightforward, as some arguments may be expandable to different numbers of parameters, like ©x© above. 365 In the second example, the second ©x© argument can be resolved starting at the second or third parameter of ©swap©, depending which interpretation of ©x© was chosen for the first argument. 358 366 359 367 \subsection{Reference Types} 360 368 I have been designing \emph{reference types} for \CFA, in collaboration with the rest of the \CFA research team. 361 369 Given some type ©T©, a ©T&© (``reference to ©T©'') is essentially an automatically dereferenced pointer; with these semantics most of the C standard's discussions of lvalues can be expressed in terms of references instead, with the benefit of being able to express the difference between the reference and non-reference version of a type in user code. 362 References preserve C's existing qualifier-dropping lvalue-to-rvalue conversion (\eg a ©const volatile int&© can be implicitly converted to a bare ©int©); the reference proposal also adds a rvalue-to-lvalue conversion to \CFA, implemented by storing the value in a new compiler-generated temporary and passing a reference to the temporary. 
363 These two conversions can chain, producing a qualifier-dropping conversion for references, for instance converting a reference to a ©const int© into a reference to a non-©const int© by copying the originally refered to value into a fresh temporary and taking a reference to this temporary, as below: 370 References preserve C's existing qualifier-dropping lvalue-to-rvalue conversion (\eg a ©const volatile int&© can be implicitly converted to a bare ©int©). 371 The reference proposal also adds a rvalue-to-lvalue conversion to \CFA, implemented by storing the value in a new compiler-generated temporary and passing a reference to the temporary. 372 These two conversions can chain, producing a qualifier-dropping conversion for references, for instance converting a reference to a ©const int© into a reference to a non-©const int© by copying the originally refered to value into a fresh temporary and taking a reference to this temporary, as in: 364 373 \begin{lstlisting} 365 374 const int magic = 42; … … 369 378 print_inc( magic ); // legal; implicitly generated code in green below: 370 379 371 ¢int tmp = magic;¢ // copiesto safely strip const-qualifier380 ¢int tmp = magic;¢ // to safely strip const-qualifier 372 381 ¢print_inc( tmp );¢ // tmp is incremented, magic is unchanged 373 382 \end{lstlisting} 374 These reference conversions may also chain with the other implicit type 375 The main implication of th is for expression resolution is the multiplication of available implicit conversions, though in a restricted context that may be able to be treated efficiently as a special case.383 These reference conversions may also chain with the other implicit type-conversions. 384 The main implication of the reference conversions for expression resolution is the multiplication of available implicit conversions, though given the restricted context reference conversions may be able to be treated efficiently as a special case of implicit conversions. 376 385 377 386 \subsection{Special Literal Types} 378 387 Another proposal currently under consideration for the \CFA type-system is assigning special types to the literal values ©0© and ©1©. 379 Implicit conversions from these types allow ©0© and ©1© to be considered as values of many different types, depending on context, allowing expression desugarings like ©if ( x ) {}© $\Rightarrow$ ©if ( x != 0 ) {}© to be implemented efficiently and preci cely.388 Implicit conversions from these types allow ©0© and ©1© to be considered as values of many different types, depending on context, allowing expression desugarings like ©if ( x ) {}© $\Rightarrow$ ©if ( x != 0 ) {}© to be implemented efficiently and precisely. 380 389 This approach is a generalization of C's existing behaviour of treating ©0© as either an integer zero or a null pointer constant, and treating either of those values as boolean false. 381 390 The main implication for expression resolution is that the frequently encountered expressions ©0© and ©1© may have a large number of valid interpretations. 
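The special-literal-type proposal above can be mimicked in C++ to show the intended effect on expression desugaring. The names zero_t, fraction, and truthy below are invented for the sketch and are not part of any CFA or C++ design.

    #include <cstddef>

    struct zero_t {                                            // stand-in for the type of the literal 0
        constexpr operator int() const { return 0; }           // integer zero
        constexpr operator std::nullptr_t() const { return nullptr; } // null pointer constant
    };

    struct fraction { int num, den; };
    constexpr bool operator!=(fraction f, zero_t) { return f.num != 0; }

    constexpr bool truthy(fraction f) { return f != zero_t{}; }   // if ( x ) => if ( x != 0 )
    static_assert(truthy(fraction{3, 4}));
    static_assert(!truthy(fraction{0, 4}));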
… … 386 395 int somefn(char) = delete; 387 396 \end{lstlisting} 388 This feature is typically used in \CCeleven to make a type non-copyable by deleting its copy constructor and assignment operator , or forbidding some interpretations of a polymorphic function by specifically deleting the forbidden overloads.389 To add a similar feature to \CFA would involve including the deleted function declarations in expression resolution along with the normal declarations, but producing a compiler error if the deleted function was the best resolution.397 This feature is typically used in \CCeleven to make a type non-copyable by deleting its copy constructor and assignment operator\footnote{In previous versions of \CC a type could be made non-copyable by declaring a private copy constructor and assignment operator, but not defining either. This idiom is well-known, but depends on some rather subtle and \CC-specific rules about private members and implicitly-generated functions; the deleted-function form is both clearer and less verbose.}, or forbidding some interpretations of a polymorphic function by specifically deleting the forbidden overloads\footnote{Specific polymorphic function overloads can also be forbidden in previous \CC versions through use of template metaprogramming techniques, though this advanced usage is beyond the skills of many programmers. A similar effect can be produced on an ad-hoc basis at the appropriate call sites through use of casts to determine the function type. In both cases, the deleted-function form is clearer and more concise.}. 398 To add a similar feature to \CFA involves including the deleted function declarations in expression resolution along with the normal declarations, but producing a compiler error if the deleted function is the best resolution. 390 399 How conflicts should be handled between resolution of an expression to both a deleted and a non-deleted function is a small but open research question. 391 400 … … 404 413 %TODO: look up and lit review 405 414 The second area of investigation is minimizing dependencies between argument-parameter matches; the current CFA compiler attempts to match entire argument combinations against functions at once, potentially attempting to match the same argument against the same parameter multiple times. 406 Whether the feature set of \CFA admits an expression resolution algorithm where arguments can be matched to parameters independently of other arguments in the same function application is an area of open research; polymorphic type paramters produce enough of a cross-argument dependencythat the problem is not trivial.415 Whether the feature set of \CFA admits an expression resolution algorithm where arguments can be matched to parameters independently of other arguments in the same function application is an area of open research; polymorphic type paramters produce enough cross-argument dependencies that the problem is not trivial. 407 416 If cross-argument resolution dependencies cannot be completely eliminated, effective caching strategies to reduce duplicated work between equivalent argument-parameter matches in different combinations may mitigate the asymptotic defecits of the whole-combination matching approach. 
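One way to picture the caching strategy mentioned at the end of the chunk above is a memo table keyed by (argument interpretation, parameter type), so the same pair is never re-checked across candidate combinations. The MatchCache below is a hypothetical stand-in for whatever keying the real resolver would use.

    #include <map>
    #include <string>
    #include <utility>

    struct MatchResult { bool ok; int cost; };

    class MatchCache {
        std::map<std::pair<std::string, std::string>, MatchResult> memo; // (argument id, parameter type)
      public:
        template<typename Check>
        MatchResult match(const std::string& argId, const std::string& paramType, Check&& check) {
            auto key = std::make_pair(argId, paramType);
            auto it = memo.find(key);
            if (it != memo.end()) return it->second;   // duplicated work avoided
            MatchResult r = check();                   // the expensive conversion/binding test
            memo.emplace(key, r);
            return r;
        }
    };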
408 417 The final area of investigation is heuristics and algorithmic approaches to reduce the number of argument interpretations considered in the common case; if argument-parameter matches cannot be made independent, even small reductions in $i$ should yield significant reductions in the $i^{p+1}$ resolver runtime factor. … … 412 421 413 422 \subsection{Argument-Parameter Matching} 414 The first axis for consideration is argument-parameter matching direction --- whether the type matching for a candidate function to a set of candidate arguments is directed by the argument types or the parameter types.423 The first axis for consideration is the argument-parameter matching direction --- whether the type matching for a candidate function to a set of candidate arguments is directed by the argument types or the parameter types. 415 424 For programming languages without implicit conversions, argument-parameter matching is essentially the entirety of the expression resolution problem, and is generally referred to as ``overload resolution'' in the literature. 416 All expression resolution algorithms form a DAG of interpretations, some explicitly, some implicitly; in this DAG, arcs point from function-call interpretations to argument interpretations, as below:425 All expression-resolution algorithms form a DAG of interpretations, some explicitly, some implicitly; in this DAG, arcs point from function-call interpretations to argument interpretations, as in Figure~\ref{fig:res_dag}: 417 426 \begin{figure}[h] 418 427 \centering … … 433 442 \end{figure} 434 443 435 Note that some interpretations may be part of more than one super-interpretation, as with $p_i$ in the bottom row, while some valid subexpression interpretations, like $f_d$ in the middle row, are not used in any interpretation of their containingexpression.444 Note that some interpretations may be part of more than one super-interpretation, as with the second $p_i$ in the bottom row, while some valid subexpression interpretations, like $f_d$ in the middle row, are not used in any interpretation of their superexpression. 436 445 437 446 \subsubsection{Argument-directed (Bottom-up)} … … 451 460 A reasonable hybrid approach might take a top-down approach when the expression to be matched has a fixed type, and a bottom-up approach in untyped contexts. 452 461 This approach may involve switching from one type to another at different levels of the expression tree. 453 For instance :462 For instance, in: 454 463 \begin{lstlisting} 455 464 forall(otype T) … … 460 469 int x = f( f( '!' ) ); 461 470 \end{lstlisting} 462 The outer call to ©f© must have a return type that is (implicitly convertable to) ©int©, so a top-down approach is used to select \textit{(1)} as the proper interpretation of ©f©. \textit{(1)}'s parameter ©x©, however, is an unbound type variable, and can thus take a value of any complete type, providing no guidance for the choice of candidate for the inner call to ©f©. The leaf expression ©'!'©, however, determines a zero-cost interpretation of the inner ©f© as \textit{(2)}, providing a minimal-cost expression resolution where ©T© is bound to ©void*©. 
463 464 Deciding when to switch between bottom-up and top-down resolution to minimize wasted work in a hybrid algorithm is a necessarily heuristic process, and though finding good heuristics for which subexpressions to swich matching strategies on is an open question, one reasonable approach might be to set a threshold $t$ for the number of candidate functions, and to use top-down resolution for any subexpression with fewer than $t$ candidate functions, to minimize the number of unmatchable argument interpretations computed, but to use bottom-up resolution for any subexpression with at least $t$ candidate functions, to reduce duplication in argument interpretation computation between the different candidate functions. 471 the outer call to ©f© must have a return type that is (implicitly convertable to) ©int©, so a top-down approach is used to select \textit{(1)} as the proper interpretation of ©f©. \textit{(1)}'s parameter ©x©, however, is an unbound type variable, and can thus take a value of any complete type, providing no guidance for the choice of candidate for the inner call to ©f©. The leaf expression ©'!'©, however, determines a zero-cost interpretation of the inner ©f© as \textit{(2)}, providing a minimal-cost expression resolution where ©T© is bound to ©void*©. 472 473 Deciding when to switch between bottom-up and top-down resolution to minimize wasted work in a hybrid algorithm is a necessarily heuristic process, and finding good heuristics for which subexpressions to swich matching strategies on is an open question. 474 One reasonable approach might be to set a threshold $t$ for the number of candidate functions, and to use top-down resolution for any subexpression with fewer than $t$ candidate functions, to minimize the number of unmatchable argument interpretations computed, but to use bottom-up resolution for any subexpression with at least $t$ candidate functions, to reduce duplication in argument interpretation computation between the different candidate functions. 465 475 466 476 Ganzinger and Ripken~\cite{Ganzinger80} propose an approach (later refined by Pennello~\etal~\cite{Pennello80}) that uses a top-down filtering pass followed by a bottom-up filtering pass to reduce the number of candidate interpretations; they prove that for the Ada programming language a small number of such iterations is sufficient to converge to a solution for the expression resolution problem. 467 477 Persch~\etal~\cite{PW:overload} developed a similar two-pass approach where the bottom-up pass is followed by the top-down pass. 468 These algorithms differ from the hybrid approach under investigation in that they take multiple passes over the expression tree to yield a solution, and also in that theyapply both filtering heuristics to all expression nodes; \CFA's polymorphic functions and implicit conversions make the approach of filtering out invalid types taken by all of these algorithms infeasible.478 These algorithms differ from the hybrid approach under investigation in that they take multiple passes over the expression tree to yield a solution, and that they also apply both filtering heuristics to all expression nodes; \CFA's polymorphic functions and implicit conversions make the approach of filtering out invalid types taken by all of these algorithms infeasible. 
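The threshold heuristic proposed above for switching between the two matching directions reduces to a small dispatch function. The callbacks for the top-down and bottom-up strategies and the default threshold of 8 below are placeholders, not values taken from the proposal.

    #include <cstddef>
    #include <functional>
    #include <vector>

    template<typename Call, typename Interp>
    std::vector<Interp> resolveHybrid(
            const Call& call,
            std::size_t candidateCount,                                      // candidate functions visible for this call
            const std::function<std::vector<Interp>(const Call&)>& topDown,  // parameter-directed strategy
            const std::function<std::vector<Interp>(const Call&)>& bottomUp, // argument-directed strategy
            std::size_t threshold = 8) {
        // few candidates: top-down avoids computing argument interpretations no candidate can use
        if (candidateCount < threshold) return topDown(call);
        // many candidates: bottom-up shares argument interpretations between the candidates
        return bottomUp(call);
    }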
469 479 470 480 \subsubsection{Common Subexpression Caching} … … 480 490 \CC~\cite{ANSI98:C++} includes both name overloading and implicit conversions in its expression resolution specification, though unlike \CFA it does complete type-checking on a generated monomorphization of template functions, where \CFA simply checks a list of type constraints. 481 491 The upcoming Concepts standard~\cite{C++concepts} defines a system of type constraints similar in principle to \CFA's. 482 Cormack and Wright~\cite{Cormack90} present an algorithm whichintegrates overload resolution with a polymorphic type inference approach very similar to \CFA's.492 Cormack and Wright~\cite{Cormack90} present an algorithm that integrates overload resolution with a polymorphic type inference approach very similar to \CFA's. 483 493 However, their algorithm does not account for implicit conversions other than polymorphic type binding and their discussion of their overload resolution algorithm is not sufficiently detailed to classify it with the other argument-parameter matching approaches\footnote{Their overload resolution algorithm is possibly a variant of Ganzinger and Ripken~\cite{Ganzinger80} or Pennello~\etal~\cite{Pennello80}, modified to allow for polymorphic type binding.}. 484 494 … … 486 496 Bilson does account for implicit conversions in his algorithm, but it is unclear if the approach is optimal. 487 497 His algorithm integrates checking for valid implicit conversions into the argument-parameter-matching step, essentially trading more expensive matching for a smaller number of argument interpretations. 488 This approach may result in the same subexpression being checked for a type match with the same type multiple times, though again memoization may mitigate this cost, and this approach does not generate implicit conversions that are not useful to match the containing function. 489 Calculating implicit conversions on parameters pairs naturally with a top-down approach to expression resolution, though it can also be used in a bottom-up approach, as Bilson demonstrates. 498 This approach may result in the same subexpression being checked for a type match with the same type multiple times, though again memoization may mitigate this cost; however, this approach does not generate implicit conversions that are not useful to match the containing function. 490 499 491 500 \subsubsection{On Arguments} 492 501 Another approach is to generate a set of possible implicit conversions for each set of interpretations of a given argument. 493 502 This approach has the benefit of detecting ambiguous interpretations of arguments at the level of the argument rather than its containing call, never finds more than one interpretation of the argument with a given type, and re-uses calculation of implicit conversions between function candidates. 494 On the other hand, this approach may unncessarily generate argument interpretations that never match any parameter, wasting work. 495 Further, in the presence of tuple types this approach may lead to a combinatorial explosion of argument interpretations considered, unless the tuple can be considered as a sequence of elements rather than a unified whole. 496 Calculating implicit conversions on arguments is a viable approach for bottom-up expression resolution, though it may be difficult to apply in a top-down approach due to the presence of a target type for the expression interpretation. 
503 On the other hand, this approach may unnecessarily generate argument interpretations that never match any parameter, wasting work. 504 Furthermore, in the presence of tuple types, this approach may lead to a combinatorial explosion of argument interpretations considered, unless the tuple can be considered as a sequence of elements rather than a unified whole. 497 505 498 506 \subsection{Candidate Set Generation} 499 All the algorithms discussed to this point generate the complete set of candidate argument interpretations before attempting to match the containing function 500 However, given that the top-level expression interpretation that is ultimately chosen is the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is in some sensewasted work.501 Under the assumption that thatprogrammers generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the next higher-cost argument interpretations.507 All the algorithms discussed to this point generate the complete set of candidate argument interpretations before attempting to match the containing function-call expression. 508 However, given that the top-level expression interpretation that is ultimately chosen is the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is wasted work. 509 Under the assumption that programmers generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the next higher-cost argument interpretations. 502 510 503 511 \subsubsection{Eager} … … 563 571 This comparison closes Baker's open research question, as well as potentially improving Bilson's \CFA compiler. 564 572 565 Rather than testing all of these algorithms in-place in the \CFA compiler, a resolver prototype is being developed whichacts on a simplified input language encapsulating the essential details of the \CFA type-system\footnote{Note this simplified input language is not a usable programming language.}.573 Rather than testing all of these algorithms in-place in the \CFA compiler, a resolver prototype is being developed that acts on a simplified input language encapsulating the essential details of the \CFA type-system\footnote{Note this simplified input language is not a usable programming language.}. 566 574 Multiple variants of this resolver prototype will be implemented, each encapsulating a different expression resolution variant, sharing as much code as feasible. 567 575 These variants will be instrumented to test runtime performance, and run on a variety of input files; the input files may be generated programmatically or from exisiting code in \CFA or similar languages. … … 571 579 As an example, there are currently multiple open proposals for how implicit conversions should interact with polymorphic type binding in \CFA, each with distinct levels of expressive power; if the resolver prototype is modified to support each proposal, the optimal algorithm for each proposal can be compared, providing an empirical demonstration of the trade-off between expressive power and compiler runtime. 
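The staged candidate-generation heuristic discussed above (generate only the cheapest argument interpretations first, and escalate to the next cost tier only on failure) can be outlined as follows. The tiered generator, the resolver callback, and the tier granularity are all assumptions for the sketch.

    #include <functional>
    #include <optional>
    #include <vector>

    template<typename Interp>
    std::optional<Interp> resolveStaged(
            const std::function<std::vector<Interp>(int tier)>& generateTier,  // interpretations of cost tier 0, 1, ...
            const std::function<std::optional<Interp>(const std::vector<Interp>&)>& tryResolve,
            int maxTier) {
        std::vector<Interp> candidates;
        for (int tier = 0; tier <= maxTier; ++tier) {
            std::vector<Interp> next = generateTier(tier);          // lazily produce the next-cheapest interpretations
            candidates.insert(candidates.end(), next.begin(), next.end());
            if (auto result = tryResolve(candidates))               // succeed as soon as a valid interpretation exists
                return result;
        }
        return std::nullopt;                                        // no valid interpretation at any cost tier
    }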
572 580 573 This proposed project should provide valuable data on how to implement a performant compiler for modernprogramming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions.581 This proposed project should provide valuable data on how to implement a performant compiler for programming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions. 574 582 This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming Concepts standard~\cite{C++concepts} for \CC template constraints, for instance. 575 583 -
src/Common/Assert.cc
r4f147cc rd30790f 10 10 // Created On : Thu Aug 18 13:26:59 2016 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Thu Aug 18 23:40:29201613 // Update Count : 812 // Last Modified On : Fri Aug 19 17:07:08 2016 13 // Update Count : 10 14 14 // 15 15 16 16 #include <assert.h> 17 #include <cstdarg> 18 #include <cstdio> 19 #include <cstdlib> 17 #include <cstdarg> // varargs 18 #include <cstdio> // fprintf 19 #include <cstdlib> // abort 20 20 21 21 extern const char * __progname; // global name of running executable (argv[0]) … … 26 26 void __assert_fail( const char *assertion, const char *file, unsigned int line, const char *function ) { 27 27 fprintf( stderr, CFA_ASSERT_FMT ".\n", __progname, function, line, file ); 28 exit( EXIT_FAILURE);28 abort(); 29 29 } 30 30 … … 35 35 va_start( args, fmt ); 36 36 vfprintf( stderr, fmt, args ); 37 exit( EXIT_FAILURE);37 abort(); 38 38 } 39 39 -
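The substantive change in this file replaces exit( EXIT_FAILURE ) with abort() in both assertion-failure paths; a plausible motivation is that abort() terminates abnormally by raising SIGABRT, so a debugger or core dump can capture the failing stack, which exit() does not allow. Below is a standalone sketch of the same pattern, using a hypothetical macro and message format rather than the project's CFA_ASSERT_FMT.

    #include <cstdio>
    #include <cstdlib>

    [[noreturn]] static void assert_fail(const char* expr, const char* file, int line, const char* func) {
        std::fprintf(stderr, "Assertion \"%s\" failed in %s at %s:%d.\n", expr, func, file, line);
        std::abort();                 // unlike exit(EXIT_FAILURE), terminates abnormally and can dump core
    }

    #define MY_ASSERT(expr) ((expr) ? (void)0 : assert_fail(#expr, __FILE__, __LINE__, __func__))

    int main() {
        MY_ASSERT(1 + 1 == 2);        // passes silently
        MY_ASSERT(sizeof(int) == 0);  // fails: prints the message, then aborts
    }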
src/Makefile.am
r4f147cc rd30790f 11 11 ## Created On : Sun May 31 08:51:46 2015 12 12 ## Last Modified By : Peter A. Buhr 13 ## Last Modified On : Thu Aug 18 17:47:06201614 ## Update Count : 7 013 ## Last Modified On : Sat Aug 20 11:13:12 2016 14 ## Update Count : 71 15 15 ############################################################################### 16 16 … … 43 43 driver_cfa_cpp_SOURCES = ${SRC} 44 44 driver_cfa_cpp_LDADD = ${LEXLIB} # yywrap 45 driver_cfa_cpp_CXXFLAGS = -Wno-deprecated -Wall -DDEBUG_ALL - I${abs_top_srcdir}/src/include45 driver_cfa_cpp_CXXFLAGS = -Wno-deprecated -Wall -DDEBUG_ALL -rdynamic -I${abs_top_srcdir}/src/include 46 46 47 47 MAINTAINERCLEANFILES += ${libdir}/${notdir ${cfa_cpplib_PROGRAMS}} -
src/Makefile.in
r4f147cc rd30790f 427 427 driver_cfa_cpp_SOURCES = ${SRC} 428 428 driver_cfa_cpp_LDADD = ${LEXLIB} # yywrap 429 driver_cfa_cpp_CXXFLAGS = -Wno-deprecated -Wall -DDEBUG_ALL - I${abs_top_srcdir}/src/include429 driver_cfa_cpp_CXXFLAGS = -Wno-deprecated -Wall -DDEBUG_ALL -rdynamic -I${abs_top_srcdir}/src/include 430 430 all: $(BUILT_SOURCES) 431 431 $(MAKE) $(AM_MAKEFLAGS) all-am -
src/main.cc
r4f147cc rd30790f 10 10 // Created On : Fri May 15 23:12:02 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Aug 19 08:31:22 201613 // Update Count : 35012 // Last Modified On : Sat Aug 20 12:52:22 2016 13 // Update Count : 403 14 14 // 15 15 16 16 #include <iostream> 17 17 #include <fstream> 18 #include <getopt.h> 18 #include <signal.h> // signal 19 #include <getopt.h> // getopt 20 #include <execinfo.h> // backtrace, backtrace_symbols_fd 21 #include <cxxabi.h> // __cxa_demangle 22 23 using namespace std; 24 19 25 #include "Parser/lex.h" 20 26 #include "Parser/parser.h" … … 35 41 #include "InitTweak/FixInit.h" 36 42 #include "Common/UnimplementedError.h" 37 38 43 #include "../config.h" 39 44 40 45 using namespace std; 41 46 42 #define OPTPRINT(x) if ( errorp ) std::cerr << x << std::endl;47 #define OPTPRINT(x) if ( errorp ) cerr << x << endl; 43 48 44 49 … … 68 73 static void parse_cmdline( int argc, char *argv[], const char *& filename ); 69 74 static void parse( FILE * input, LinkageSpec::Spec linkage, bool shouldExit = false ); 70 static void dump( std::list< Declaration * > & translationUnit, std::ostream & out = std::cout ); 75 static void dump( list< Declaration * > & translationUnit, ostream & out = cout ); 76 77 void sigSegvBusHandler( int sig_num ) { 78 enum { Frames = 50 }; 79 void * array[Frames]; 80 int size = backtrace( array, Frames ); 81 82 cerr << "*CFA runtime error* program cfa-cpp terminated with " 83 << (sig_num == SIGSEGV ? "segment fault" : "bus error") 84 << " backtrace:" << endl; 85 86 char ** messages = backtrace_symbols( array, size ); 87 88 // skip first stack frame (points here) 89 for ( int i = 2; i < size - 2 && messages != nullptr; i += 1 ) { 90 char * mangled_name = nullptr, * offset_begin = nullptr, * offset_end = nullptr; 91 for ( char *p = messages[i]; *p; ++p ) { // find parantheses and +offset 92 if (*p == '(') { 93 mangled_name = p; 94 } else if (*p == '+') { 95 offset_begin = p; 96 } else if (*p == ')') { 97 offset_end = p; 98 break; 99 } // if 100 } // for 101 102 // if line contains symbol, attempt to demangle 103 if ( mangled_name && offset_begin && offset_end && mangled_name < offset_begin ) { 104 *mangled_name++ = '\0'; 105 *offset_begin++ = '\0'; 106 *offset_end++ = '\0'; 107 108 int status; 109 char * real_name = __cxxabiv1::__cxa_demangle( mangled_name, 0, 0, &status ); 110 if ( status == 0 ) { // demangling successful ? 
111 cerr << "(" << i - 2 << ") " << messages[i] << " : " 112 << real_name << "+" << offset_begin << offset_end << endl; 113 114 } else { // otherwise, output mangled name 115 cerr << "(" << i - 2 << ") " << messages[i] << " : " 116 << mangled_name << "+" << offset_begin << offset_end << endl; 117 } // if 118 free( real_name ); 119 } else { // otherwise, print the whole line 120 cerr << "(" << i - 2 << ") " << messages[i] << endl; 121 } // if 122 } // for 123 free( messages ); 124 exit( EXIT_FAILURE ); 125 } // sigSegvBusHandler 71 126 72 127 int main( int argc, char * argv[] ) { 73 128 FILE * input; // use FILE rather than istream because yyin is FILE 74 std::ostream *output = & std::cout; 75 std::list< Declaration * > translationUnit; 129 ostream *output = & cout; 76 130 const char *filename = nullptr; 131 list< Declaration * > translationUnit; 132 133 signal( SIGSEGV, sigSegvBusHandler ); 134 signal( SIGBUS, sigSegvBusHandler ); 77 135 78 136 parse_cmdline( argc, argv, filename ); // process command-line arguments … … 122 180 123 181 if ( parsep ) { 124 parseTree->printList( std::cout );182 parseTree->printList( cout ); 125 183 delete parseTree; 126 184 return 0; … … 144 202 145 203 if ( expraltp ) { 146 ResolvExpr::AlternativePrinter printer( std::cout );204 ResolvExpr::AlternativePrinter printer( cout ); 147 205 acceptAll( translationUnit, printer ); 148 206 return 0; … … 210 268 CodeGen::generate( translationUnit, *output, ! noprotop ); 211 269 212 if ( output != & std::cout ) {270 if ( output != &cout ) { 213 271 delete output; 214 272 } // if 215 273 } catch ( SemanticError &e ) { 216 274 if ( errorp ) { 217 std::cerr << "---AST at error:---" << std::endl;218 dump( translationUnit, std::cerr );219 std::cerr << std::endl << "---End of AST, begin error message:---\n" << std::endl;220 } // if 221 e.print( std::cerr );222 if ( output != & std::cout ) {275 cerr << "---AST at error:---" << endl; 276 dump( translationUnit, cerr ); 277 cerr << endl << "---End of AST, begin error message:---\n" << endl; 278 } // if 279 e.print( cerr ); 280 if ( output != &cout ) { 223 281 delete output; 224 282 } // if 225 283 return 1; 226 284 } catch ( UnimplementedError &e ) { 227 std::cout << "Sorry, " << e.get_what() << " is not currently implemented" << std::endl;228 if ( output != & std::cout ) {285 cout << "Sorry, " << e.get_what() << " is not currently implemented" << endl; 286 if ( output != &cout ) { 229 287 delete output; 230 288 } // if 231 289 return 1; 232 290 } catch ( CompilerError &e ) { 233 std::cerr << "Compiler Error: " << e.get_what() << std::endl;234 std::cerr << "(please report bugs to " << std::endl;235 if ( output != & std::cout ) {291 cerr << "Compiler Error: " << e.get_what() << endl; 292 cerr << "(please report bugs to " << endl; 293 if ( output != &cout ) { 236 294 delete output; 237 295 } // if … … 369 427 } // notPrelude 370 428 371 static void dump( std::list< Declaration * > & translationUnit, std::ostream & out ) {372 std::list< Declaration * > decls;429 static void dump( list< Declaration * > & translationUnit, ostream & out ) { 430 list< Declaration * > decls; 373 431 374 432 if ( noprotop ) { 375 filter( translationUnit.begin(), translationUnit.end(), std::back_inserter( decls ), notPrelude );433 filter( translationUnit.begin(), translationUnit.end(), back_inserter( decls ), notPrelude ); 376 434 } else { 377 435 decls = translationUnit;
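The new sigSegvBusHandler above walks the stack with backtrace/backtrace_symbols and demangles each frame with __cxa_demangle; the -rdynamic flag added to the Makefiles is what typically allows function names from the executable itself to be resolved at run time. Below is a compact, self-contained variant of the same idea for reference only: it is an illustrative sketch, not the project's code, writes mangled names via backtrace_symbols_fd (demangle offline with c++filt), and, like the original handler, favours readable output over strict async-signal-safety.

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>
    #include <execinfo.h>   // backtrace, backtrace_symbols_fd
    #include <unistd.h>     // STDERR_FILENO

    static void crashHandler(int sig) {
        enum { Frames = 50 };
        void* frames[Frames];
        int depth = backtrace(frames, Frames);
        std::fprintf(stderr, "fatal signal %d, backtrace (%d frames):\n", sig, depth);
        backtrace_symbols_fd(frames, depth, STDERR_FILENO);  // mangled names, one per line
        std::_Exit(EXIT_FAILURE);                            // async-signal-safe termination
    }

    int main() {
        std::signal(SIGSEGV, crashHandler);
        std::signal(SIGBUS, crashHandler);
        volatile int* p = nullptr;
        return *p;          // deliberate fault to exercise the handler
    }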