Changeset 41cd57c0 for doc/generic_types


Timestamp:
Mar 27, 2017, 9:00:38 PM (8 years ago)
Author:
Aaron Moss <bruceiv@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
c1fb1f2f, f32a013
Parents:
626da644
Message:

Finish initial integration of tuples work

File:
1 edited

  • doc/generic_types/generic_types.tex

    r626da644 r41cd57c0  
    139139\item Extensions introduced by \CFA{} must be translated in the most efficient way possible.
    140140\end{enumerate}
    141 The purpose of these goals is to ensure that existing C codebases can be converted to \CFA{} incrementally and with minimal effort, and that programmers who already know C can productively produce \CFA{} code without extensive training beyond the extension features they wish to employ.
     141The purpose of these goals is to ensure that existing C codebases can be converted to \CFA{} incrementally and with minimal effort, and that programmers who already know C can productively produce \CFA{} code without extensive training beyond the extension features they wish to employ. In its current implementation, \CFA{} is compiled by translating it to GCC-dialect C, allowing it to leverage the portability and code optimizations provided by GCC, meeting goals (1)-(3).
    142142
    143143\CFA{} has been previously extended with polymorphic functions and name overloading (including operator overloading) \citep{Bilson03}, and deterministically-executed constructors and destructors \citep{Schluntz17}. This paper describes how generic and tuple types are designed and implemented in \CFA{} in accordance with both the backward compatibility goals and existing features described above.
     
    178178\end{lstlisting}
    179179This version of @twice@ works for any type @S@ that has an addition operator defined for it, and it could have been used to satisfy the type assertion on @four_times@.
    180 The compiler accomplishes this by creating a wrapper function calling @twice // (2)@ with @S@ bound to @double@, then providing this wrapper function to @four_times@\footnote{\lstinline@twice // (2)@ could also have had a type parameter named \lstinline@T@; \CFA{} specifies renaming of the type parameters, which would avoid the name conflict with the type variable \lstinline@T@ of \lstinline@four_times@.}.
     180The translator accomplishes this by creating a wrapper function calling @twice // (2)@ with @S@ bound to @double@, then providing this wrapper function to @four_times@\footnote{\lstinline@twice // (2)@ could also have had a type parameter named \lstinline@T@; \CFA{} specifies renaming of the type parameters, which would avoid the name conflict with the type variable \lstinline@T@ of \lstinline@four_times@.}.
    181181
    182182\subsection{Traits}
     
    299299\CFA{} classifies generic types as either \emph{concrete} or \emph{dynamic}. Dynamic generic types vary in their in-memory layout depending on their type parameters, while concrete generic types have a fixed memory layout regardless of type parameters. A type may have polymorphic parameters but still be concrete; \CFA{} refers to such types as \emph{dtype-static}. Polymorphic pointers are an example of dtype-static types -- @forall(dtype T) T*@ is a polymorphic type, but for any @T@ chosen, @T*@ will have exactly the same in-memory representation as a @void*@, and can therefore be represented by a @void*@ in code generation.
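As an illustrative sketch (the struct and variable names here are invented for exposition, not taken from the paper), a generic type whose layout uses its type parameters only behind pointers is dtype-static, so every instantiation can share one generated struct:
\begin{lstlisting}
forall(dtype T) struct span { T* data; size_t count; };  // layout independent of T
span(int) si;     // both instantiations have the same in-memory representation,
span(double) sd;  // equivalent to struct { void* data; size_t count; }
\end{lstlisting}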
    300300
    301 The \CFA{} compiler instantiates concrete generic types by template-expanding them to fresh struct types; concrete generic types can therefore be used with zero runtime overhead. To enable interoperation between equivalent instantiations of a generic type, the compiler saves the set of instantiations currently in scope and re-uses the generated struct declarations where appropriate. As an example, the concrete instantiation for @pair(const char*, int)@ would look something like this:
     301\subsection{Concrete Generic Types}
     302
     303The \CFA{} translator instantiates concrete generic types by template-expanding them to fresh struct types; concrete generic types can therefore be used with zero runtime overhead. To enable interoperation between equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and re-uses the generated struct declarations where appropriate. As an example, the concrete instantiation for @pair(const char*, int)@ would look something like this:
    302304\begin{lstlisting}
    303305struct _pair_conc1 {
     
    315317\end{lstlisting}
    316318
     319\subsection{Dynamic Generic Types}
     320
    317321Though \CFA{} implements concrete generic types efficiently, it also has a fully-general system for computing with dynamic generic types. As mentioned in Section~\ref{sec:poly-fns}, @otype@ function parameters (in fact all @sized@ polymorphic parameters) come with implicit size and alignment parameters provided by the caller. Dynamic generic structs also come with an \emph{offset array} which contains the offsets of each member of the struct\footnote{Dynamic generic unions need no such offset array, as all members are at offset 0.}. Access to members\footnote{The \lstinline@offsetof@ macro is implemented similarly.} of a dynamic generic struct is provided by adding the corresponding member of the offset array to the struct pointer at runtime, essentially moving a compile-time offset calculation to runtime where necessary.
    318322
    319 These offset arrays are staticly generated where possible. If a dynamic generic type is declared to be passed or returned by value from a polymorphic function, the compiler can safely assume that the generic type is complete (that is, has a known layout) at any call-site, and the offset array is passed from the caller; if the generic type is concrete at the call site this offset array can even be statically generated using the C @offsetof@ macro. As an example, @p.second@ in the @value@ function above is implemented as @*(p + _offsetof_pair[1])@, where @p@ is a @void*@, and @_offsetof_pair@ is the offset array passed in to @value@ for @pair(const char*, T)@. The offset array @_offsetof_pair@ is generated at the call site as @size_t _offsetof_pair[] = { offsetof(_pair_conc1, first), offsetof(_pair_conc1, second) };@.
    320 
    321 In some cases the offset arrays cannot be staticly generated. For instance, modularity is generally produced in C by including an opaque forward-declaration of a struct and associated accessor and mutator routines in a header file, with the actual implementations in a separately-compiled \texttt{.c} file. \CFA{} supports this pattern for generic types, and in tis instance the caller does not know the actual layout or size of the dynamic generic type, and only holds it by pointer. The \CFA{} compiler automatically generates \emph{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed in to a function from that function's caller. These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all @sized@ parameters to the generic struct (un-@sized@ parameters are forbidden from the language from being used in a context which affects layout). Results of these layout functions are cached so that they are only computed once per type per function.%, as in the example below for @pair@.
     323These offset arrays are statically generated where possible. If a dynamic generic type is declared to be passed or returned by value from a polymorphic function, the translator can safely assume that the generic type is complete (that is, has a known layout) at any call site, and the offset array is passed from the caller; if the generic type is concrete at the call site this offset array can even be statically generated using the C @offsetof@ macro. As an example, @p.second@ in the @value@ function above is implemented as @*(p + _offsetof_pair[1])@, where @p@ is a @void*@, and @_offsetof_pair@ is the offset array passed in to @value@ for @pair(const char*, T)@. The offset array @_offsetof_pair@ is generated at the call site as @size_t _offsetof_pair[] = { offsetof(_pair_conc1, first), offsetof(_pair_conc1, second) };@.
     324
     325In some cases the offset arrays cannot be statically generated. For instance, modularity is generally achieved in C by including an opaque forward-declaration of a struct and associated accessor and mutator routines in a header file, with the actual implementations in a separately-compiled \texttt{.c} file. \CFA{} supports this pattern for generic types, and in this instance the caller does not know the actual layout or size of the dynamic generic type, and only holds it by pointer. The \CFA{} translator automatically generates \emph{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed in to a function from that function's caller. These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all @sized@ parameters to the generic struct (un-@sized@ parameters are forbidden by the language from being used in a context that affects layout). Results of these layout functions are cached so that they are only computed once per type per function.%, as in the example below for @pair@.
    322326% \begin{lstlisting}
    323327% static inline void _layoutof_pair(size_t* _szeof_pair, size_t* _alignof_pair, size_t* _offsetof_pair,
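As an illustrative sketch of what such a generated layout function for @pair@ might look like (the body and the trailing size/alignment parameters are assumptions for exposition; the actual generated code differs in detail):
\begin{lstlisting}
static inline void _layoutof_pair(size_t* _szeof_pair, size_t* _alignof_pair,
    size_t* _offsetof_pair, size_t _szeof_T1, size_t _alignof_T1,
    size_t _szeof_T2, size_t _alignof_T2) {
  size_t off = 0;
  _offsetof_pair[0] = off;                 // first member at offset 0
  off += _szeof_T1;
  if (off % _alignof_T2) off += _alignof_T2 - off % _alignof_T2;  // pad to second member's alignment
  _offsetof_pair[1] = off;
  off += _szeof_T2;
  size_t align = _alignof_T1 > _alignof_T2 ? _alignof_T1 : _alignof_T2;
  if (off % align) off += align - off % align;  // round total size up to the struct alignment
  *_szeof_pair = off;
  *_alignof_pair = align;
}
\end{lstlisting}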
     
    348352Whether a type is concrete, dtype-static, or dynamic is decided based solely on the type parameters and @forall@ clause on the struct declaration. This allows opaque forward declarations of generic types like @forall(otype T) struct Box;@ -- like in C, all uses of @Box(T)@ can be in a separately compiled translation unit, and callers from other translation units will know the proper calling conventions to use. If the definition of a struct type were included in the determination of dynamic or concrete, some further types could be recognized as dtype-static (\eg, @forall(otype T) struct unique_ptr { T* p };@ does not depend on @T@ for its layout, but the existence of an @otype@ parameter means that it \emph{could}), but preserving separate compilation (and the associated C compatibility) in this way is judged to be an appropriate trade-off.
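As an illustrative sketch of this separate-compilation pattern (the @Box@ member and accessor names are invented for exposition), a generic type can be declared opaquely in a header while its layout is known only in a separately compiled translation unit:
\begin{lstlisting}
// box.h -- callers see only the forward declaration and hold Box(T) by pointer
forall(otype T) struct Box;
forall(otype T) void put(Box(T) * b, T x);
forall(otype T) T get(Box(T) * b);

// box.c -- the layout is known only here
forall(otype T) struct Box { T contents; };
forall(otype T) void put(Box(T) * b, T x) { b->contents = x; }
forall(otype T) T get(Box(T) * b) { return b->contents; }
\end{lstlisting}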
    349353
     354\subsection{Applications}
     355
    350356The re-use of dtype-static struct instantiations enables some useful programming patterns at zero runtime cost. The most important such pattern is using @forall(dtype T) T*@ as a type-checked replacement for @void*@, as in this example, which takes a @qsort@ or @bsearch@-compatible comparison routine and creates a similar lexicographic comparison for pairs of pointers:
    351357\begin{lstlisting}
     
    567573Note that a cast is not a function call in \CFA{}, so flattening and structuring conversions do not occur for cast expressions\footnote{User-defined conversions have been considered, but for compatibility with C and the existing use of casts as type ascription, any future design for such conversions would require more precise matching of types than allowed for function arguments and parameters.}. As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3. Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid. That is, it is invalid to cast @[int, int]@ to @[int, int, int]@.
    568574
    569 \TODO{}
    570 
    571 In \CFA{}, functions can be declared to return multiple values with an extension to the function declaration syntax. Multiple return values are declared as a comma-separated list of types in square brackets in the same location that the return type appears in standard C function declarations. The ability to return multiple values from a function requires a new syntax for the @return@ statement. For consistency, the @return@ statement in \CFA{} accepts a comma-separated list of expressions in square brackets. The expression resolution phase of the \CFA{} compiler ensures that the correct form is used depending on the values being returned and the return type of the current function. A multiple-returning function with return type @T@ can return any expression that is implicitly convertible to @T@. As an example, a function returning the most frequent character in a string and its frequency can be written in \CFA{} as:
    572 \begin{lstlisting}
    573 [int, char] most_frequent(const char * str) {  // tuple return type
    574   char freqs [26] = { 0 };
    575   int ret_freq = 0;
    576   char ret_ch = 'a';
    577   for (int i = 0; str[i] != '\0'; ++i) {
    578     if (isalpha(str[i])) {  // only count letters
    579       int ch = tolower(str[i]);  // convert to lower case
    580       int idx = ch-'a';
    581       if (++freqs[idx] > ret_freq) {  // update on new max
    582         ret_freq = freqs[idx];
    583         ret_ch = ch;
    584       }
    585     }
    586   }
    587   return [ret_freq, ret_ch];  // return tuple expression
    588 }
    589 \end{lstlisting}
    590 This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type.
    591 
    592 The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site. The simplest mechanism for retaining a return value in C is variable assignment. By assigning the return value into a variable, its value can be retrieved later at any point in the program. As such, \CFA{} allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions on the left side.
    593 \begin{lstlisting}
    594 const char * str = "hello world";
    595 int freq;
    596 char ch;
    597 [freq, ch] = most_frequent(str);  // assign into multiple variables
    598 printf("%s -- %d %c\n", str, freq, ch);
    599 \end{lstlisting}
    600 
    601 It is also common to use a function's output as the input to another function. \CFA{} also allows this case, without any new syntax. When a function call is passed as an argument to another call, the expression resolver attempts to find the best match of actual arguments to formal parameters given all of the possible expression interpretations in the current scope \citep{Bilson03}. For example,
    602 \begin{lstlisting}
    603 void process(int);       // (1)
    604 void process(char);      // (2)
    605 void process(int, char); // (3)
    606 void process(char, int); // (4)
    607 
    608 process(most_frequent("hello world"));  // selects (3)
    609 \end{lstlisting}
    610 In this case, there is only one option for a function named @most_frequent@ that takes a string as input. This function returns two values, one @int@ and one @char@. There are four options for a function named @process@, but only two which accept two arguments, and of those the best match is (3), which is also an exact match. This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively.
    611 
    612 
     575\subsection{Polymorphism}
     576
     577Tuples also integrate with \CFA{} polymorphism as a special sort of generic type. Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types.
     578\begin{lstlisting}
     579forall(otype T, dtype U)
     580void f(T x, U * y);
     581
     582f([5, "hello"]);
     583\end{lstlisting}
     584In this example, @[5, "hello"]@ is flattened, so that the argument list appears as @5, "hello"@. The argument matching algorithm binds @T@ to @int@ and @U@ to @const char@, and calls the function as normal.
     585
     586Tuples, however, can contain polymorphic components. For example, a plus operator can be written to add two triples of a type together.
     587\begin{lstlisting}
     588forall(otype T | { T ?+?(T, T); })
     589[T, T, T] ?+?([T, T, T] x, [T, T, T] y) {
     590  return [x.0+y.0, x.1+y.1, x.2+y.2];
     591}
     592[int, int, int] x;
     593int i1, i2, i3;
     594[i1, i2, i3] = x + ([10, 20, 30]);
     595\end{lstlisting}
     596
     597Flattening and restructuring conversions are also applied to tuple types in polymorphic type assertions. Previously in \CFA{}, it was assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (\ie{}, no implicit conversions are applied to assertion arguments). Consider the example below:
     598\begin{lstlisting}
     599int f([int, double], double);
     600forall(otype T, otype U | { T f(T, U, U); })
     601void g(T, U);
     602g(5, 10.21);
     603\end{lstlisting}
     604If assertion arguments must match exactly, then the call to @g@ cannot be resolved, since the expected type of @f@ is flat, while the only @f@ in scope requires a tuple type. Since tuples are fluid, this requirement reduces the usability of tuples in polymorphic code. To ease this pain point, function parameter and return lists are flattened for the purposes of type unification, which allows the previous example to pass expression resolution.
     605
     606This relaxation is made possible by extending the existing thunk generation scheme, as described by \citet{Bilson03}. Now, whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function:
     607\begin{lstlisting}
     608int _thunk(int _p0, double _p1, double _p2) {
     609  return f([_p0, _p1], _p2);
     610}
     611\end{lstlisting}
     612Essentially, this provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism.
     613
     614\subsection{Variadic Tuples}
     615
     616To define variadic functions, \CFA{} adds a new kind of type parameter, @ttype@. Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types. A given parameter list may contain at most one @ttype@ parameter, and it must occur last; otherwise, by the previous rule, the call could never resolve. This idea essentially matches normal variadic semantics, and strongly resembles \CCeleven{} variadic templates. As such, @ttype@ variables are also referred to as \emph{argument packs}.
     617
     618Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is through recursion. Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful. Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.
     619
     620For example, the C @sum@ function at the beginning of this section could be written using @ttype@ as:
     621\begin{lstlisting}
     622int sum(){ return 0; }        // (0)
     623forall(ttype Params | { int sum(Params); })
     624int sum(int x, Params rest) { // (1)
     625  return x+sum(rest);
     626}
     627sum(10, 20, 30);
     628\end{lstlisting}
     629Since (0) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
     630In order to call (1), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list, binding @Params@ to @[20, 30]@.
     631In order to finish the resolution of @sum@, an assertion parameter which matches @int sum(int, int)@ is required.
     632As in the previous iteration, (0) is not a valid candidate, so (1) is examined with @Params@ bound to @[int]@, requiring the assertion @int sum(int)@.
     633Next, (0) fails, and to satisfy (1) @Params@ is bound to @[]@, requiring an assertion @int sum()@.
     634Finally, (0) matches and (1) fails, which terminates the recursion.
     635Effectively, this traces as @sum(10, 20, 30)@ $\rightarrow$ @10+sum(20, 30)@ $\rightarrow$ @10+(20+sum(30))@ $\rightarrow$ @10+(20+(30+sum()))@ $\rightarrow$ @10+(20+(30+0))@.
     636
     637A point of note is that this version does not require any form of argument descriptor, since the \CFA{} type system keeps track of all of these details. It might be reasonable to take the @sum@ function a step further to enforce a minimum number of arguments, which could be done simply:
     638\begin{lstlisting}
     639int sum(int x, int y){
     640  return x+y;
     641}
     642forall(ttype Params | { int sum(int, Params); })
     643int sum(int x, int y, Params rest) {
     644  return sum(x+y, rest);
     645}
     646sum(10, 20, 30);
     647\end{lstlisting}
     648
     649One more iteration permits the summation of any summable type, as long as all arguments are the same type:
     650\begin{lstlisting}
     651trait summable(otype T) {
     652  T ?+?(T, T);
     653};
     654forall(otype R | summable(R))
     655R sum(R x, R y){
     656  return x+y;
     657}
     658forall(otype R, ttype Params
     659  | summable(R)
     660  | { R sum(R, Params); })
     661R sum(R x, R y, Params rest) {
     662  return sum(x+y, rest);
     663}
     664sum(3, 10, 20, 30);
     665\end{lstlisting}
     666Unlike C, it is not necessary to hard code the expected type. This is naturally open to extension, in that any user-defined type with a @?+?@ operator can automatically be used with the @sum@ function. That is to say, the programmer who writes @sum@ does not need full program knowledge of every possible data type, unlike what is necessary to write an equivalent function using the standard C mechanisms. Summing arbitrary heterogeneous lists is possible with similar code by adding the appropriate type variables and addition operators, as sketched below.
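As an illustrative sketch of such a heterogeneous variant (this code is not from the original text; the extra type variables are assumptions, and appropriate two-argument base cases are also needed), one more level of polymorphism requires only that each adjacent pair of arguments has a suitable addition operator:
\begin{lstlisting}
forall(otype T, otype U, otype R, ttype Params
  | { R ?+?(T, U); }
  | { R sum(R, Params); })
R sum(T x, U y, Params rest) {
  return sum(x+y, rest);  // the result type R drives the recursive assertion
}
\end{lstlisting}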
     667
     668It is possible to write a type-safe variadic print routine, which can replace @printf@:
     669\begin{lstlisting}
     670struct S { int x, y; };
     671forall(otype T, ttype Params |
     672  { void print(T); void print(Params); })
     673void print(T arg, Params rest) {
     674  print(arg);
     675  print(rest);
     676}
     677void print(char * x) { printf("%s", x); }
     678void print(int x) { printf("%d", x);  }
     679void print(S s) { print("{ ", s.x, ",", s.y, " }"); }
     680print("s = ", (S){ 1, 2 }, "\n");
     681\end{lstlisting}
     682This example routine showcases a variadic-template-like decomposition of the provided argument list. The individual @print@ routines allow printing a single element of a given type. The polymorphic @print@ allows printing any list of arguments, as long as each argument's type has a @print@ function. The individual print functions can be used to build up more complicated @print@ routines, such as the one for @S@, which is something that cannot be done with @printf@ in C.
     683
     684It is also possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions. For example, it is possible to write @new@ as a library function:
     685\begin{lstlisting}
     686struct Pair(otype R, otype S);
     687forall(otype R, otype S)
     688void ?{}(Pair(R, S) *, R, S);  // (1)
     689
     690forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
     691T * new(Params p) {
     692  return ((T*)malloc( sizeof(T) )){ p }; // construct into result of malloc
     693}
     694Pair(int, char) * x = new(42, '!');
     695\end{lstlisting}
     696The @new@ function provides the combination of type-safe @malloc@ with a constructor call, so that it becomes impossible to forget to construct dynamically allocated objects. This provides the type-safety of @new@ in \CC{}, without the need to specify the allocated type, thanks to return-type inference.
     697
     698In the call to @new@, @Pair(int, char)@ is selected to match @T@, and @Params@ is expanded to match @[int, char]@. To satisfy the assertion, a constructor with an interface compatible with @void ?{}(Pair(int, char) *, int, char)@ must exist in the current scope, and the generic constructor (1) may be specialized to match it.
    613699
    614700\TODO{} Check if we actually can use ttype parameters on generic types (if they set the complete flag, it should work, or nearly so).
    615701
     702\subsection{Implementation}
     703
     704Tuples are implemented in the \CFA{} translator via a transformation into generic types. For each $N$, the first time an $N$-tuple is seen in a scope, a generic type with $N$ type parameters is generated. For example:
     705\begin{lstlisting}
     706[int, int] f() {
     707  [double, double] x;
     708  [int, double, int] y;
     709}
     710\end{lstlisting}
     711Is transformed into:
     712\begin{lstlisting}
     713forall(dtype T0, dtype T1 | sized(T0) | sized(T1))
     714struct _tuple2 {  // generated before the first 2-tuple
     715  T0 field_0;
     716  T1 field_1;
     717};
     718_tuple2_(int, int) f() {
     719  _tuple2_(double, double) x;
     720  forall(dtype T0, dtype T1, dtype T2 | sized(T0) | sized(T1) | sized(T2))
     721  struct _tuple3 {  // generated before the first 3-tuple
     722    T0 field_0;
     723    T1 field_1;
     724    T2 field_2;
     725  };
     726  _tuple3_(int, double, int) y;
     727}
     728\end{lstlisting}
     729
     730Tuple expressions are then converted directly into compound literals:
     731\begin{lstlisting}
     732[5, 'x', 1.24];
     733\end{lstlisting}
     734Becomes:
     735\begin{lstlisting}
     736(_tuple3_(int, char, double)){ 5, 'x', 1.24 };
     737\end{lstlisting}
     738
     739Since tuples are essentially structures, tuple indexing expressions are just field accesses:
     740\begin{lstlisting}
     741void f(int, [double, char]);
     742[int, double] x;
     743
     744x.0+x.1;
     745printf("%d %g\n", x);
     746f(x, 'z');
     747\end{lstlisting}
     748Is transformed into:
     749\begin{lstlisting}
     750void f(int, _tuple2_(double, char));
     751_tuple2_(int, double) x;
     752
     753x.field_0+x.field_1;
     754printf("%d %g\n", x.field_0, x.field_1);
     755f(x.field_0, (_tuple2_(double, char)){ x.field_1, 'z' });
     756\end{lstlisting}
     757Note that due to flattening, @x@ used in the argument position is converted into the list of its fields. In the call to @f@, the second and third argument components are structured into a tuple argument. Similarly, tuple member expressions are recursively expanded into a list of member access expressions.
     758
     759Expressions which may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. Each unique expression is assigned an identifier and is guaranteed to be executed exactly once:
     760\begin{lstlisting}
     761void g(int, double);
     762[int, double] h();
     763g(h());
     764\end{lstlisting}
     765Internally, this is converted to two variables and an expression:
     766\begin{lstlisting}
     767void g(int, double);
     768[int, double] h();
     769
     770_Bool _unq0_finished_ = 0;
     771[int, double] _unq0;
     772g(
     773  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
     774  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1
     775);
     776\end{lstlisting}
     777Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order. The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true. Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression. Tuple member expressions also take advantage of unique expressions in the case of possible impurity.
     778
     779Currently, the \CFA{} translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure. This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions; the translator could also analyze function bodies, when they are available, to recursively detect purity and thereby eliminate some unique expressions.
     780
     781The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions. A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, \eg{} in a unique expression. The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language. However, there are other places where the \CFA{} translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.
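As an illustrative sketch of this scheme (the generated names and exact form are assumptions for exposition, not the actual translator output), a multiple assignment such as @[x, y] = z@, where @z@ has type @[int, double]@, might expand to a statement expression along these lines:
\begin{lstlisting}
({
  _tuple2_(int, double) _tmp = z;  // temporary holds the value produced by the expression
  x = _tmp.field_0;                // assign each component
  y = _tmp.field_1;
  _tmp;                            // the statement expression yields the assigned tuple value
});
\end{lstlisting}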
     782
     783Type assertions in \CFA{} generally require exact matching of parameter and return types, with no implicit conversions allowed. Tuple flattening and restructuring is an exception to this, however, allowing functions to match assertions without regard to the presence of tuples; this is particularly useful for structuring trailing arguments to match a @ttype@ parameter or flattening assertions based on @ttype@ type variables to match programmer-defined functions. This relaxation is made possible by extending the existing thunk generation scheme, as described by \citet{Bilson03}. Whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function, as in this conversion from a function @f@ with signature @int (*)([int, char], char)@ to a thunk with the flattened signature @int (*)(int, char, char)@:
     784\begin{lstlisting}
     785int _thunk(int _p0, char _p1, char _p2) {
     786  return f([_p0, _p1], _p2);
     787}
     788\end{lstlisting}
     789These thunks take advantage of GCC nested functions to produce closures that have the usual function-pointer signature.
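As an illustrative sketch of where such a thunk lives (the enclosing function and the call to @g@ are invented for exposition), the thunk is emitted as a nested function inside the calling function, so it can be passed wherever a function pointer of the flattened type is expected:
\begin{lstlisting}
void example_caller() {
  int _thunk(int _p0, char _p1, char _p2) {
    return f([_p0, _p1], _p2);  // restructure the flat arguments into f's tuple parameter
  }
  g(_thunk);  // _thunk has type int (*)(int, char, char), satisfying the assertion
}
\end{lstlisting}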
     790
    616791\section{Related Work}
    617792
     
    622797\section{Conclusion \& Future Work}
    623798
    624 In conclusion, the authors' design for generic types and tuples imposes minimal runtime overhead while still supporting a full range of C features, including separately-compiled modules. There is ongoing work on a wide range of \CFA{} feature extensions, including reference types, exceptions, and concurent programming primitives. In addition to this work, there are some interesting future directions the polymorphism design could take. Notably, \CC{} template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions. \CFA{} polymorphic functions, by contrast, use an approach which is essentially dynamic virtual dispatch. The runtime overhead of this approach is low, but not as low as \CC{} template functions, and it may be beneficial to provide a mechanism for particularly performance-sensitive code to close this gap. Further research is needed, but two promising approaches are to allow an annotation on polymorphic function call sites that tells the compiler to create a template-specialization of the function (provided the code is visible in the current translation unit) or placing an annotation on polymorphic function definitions that instantiates a version of the polymorphic function specialized to some set of types. These approaches are not mutually exclusive, and would allow these performance optimizations to be applied only where most useful to increase performance, without suffering the code bloat or loss of generality of a template expansion approach where it is unneccessary.
     799In conclusion, the authors' design for generic types and tuples imposes minimal runtime overhead while still supporting a full range of C features, including separately-compiled modules. There is ongoing work on a wide range of \CFA{} feature extensions, including reference types, exceptions, and concurrent programming primitives. In addition to this work, there are some interesting future directions the polymorphism design could take. Notably, \CC{} template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions. \CFA{} polymorphic functions, by contrast, use an approach which is essentially dynamic virtual dispatch. The runtime overhead of this approach is low, but not as low as \CC{} template functions, and it may be beneficial to provide a mechanism for particularly performance-sensitive code to close this gap. Further research is needed, but two promising approaches are to allow an annotation on polymorphic function call sites that tells the translator to create a template-specialization of the function (provided the code is visible in the current translation unit) or placing an annotation on polymorphic function definitions that instantiates a version of the polymorphic function specialized to some set of types. These approaches are not mutually exclusive, and would allow these performance optimizations to be applied only where most useful to increase performance, without suffering the code bloat or loss of generality of a template expansion approach where it is unnecessary.
    625800
    626801\bibliographystyle{ACM-Reference-Format}