# Changeset 6a8ac0b for doc/generic_types/generic_types.tex

Timestamp:
Apr 25, 2017, 8:21:10 AM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
89a9be2
Parents:
b3d70eb
Message:

more cleanup

File:
1 edited

### Legend:

Unmodified context lines are shown without a prefix; removed lines are prefixed with "-" and added lines with "+".
int comp( const void * t1, const void * t2 ) { return *(double *)t1 < *(double *)t2 ? -1 : *(double *)t2 < *(double *)t1 ? 1 : 0; }
-double key = 5.0, vals[10] = { /* 10 floating-point values */ };
+double key = 5.0, vals[10] = { /* 10 sorted floating-point values */ };
double * val = (double *)bsearch( &key, vals, 10, sizeof(vals[0]), comp );      $\C{// search sorted array}$
\end{lstlisting}
One of the known shortcomings of standard C is that it does not provide reusable type-safe abstractions for generic data structures and algorithms.
Broadly speaking, there are three approaches to implement abstract data-structures in C.
-One approach is to write bespoke data structures for each context in which they are needed.
+One approach is to write bespoke data-structures for each context in which they are needed.
While this approach is flexible and supports integration with the C type-checker and tooling, it is also tedious and error-prone, especially for more complex data structures.
A second approach is to use @void *@--based polymorphism, \eg the C standard-library functions @bsearch@ and @qsort@; an approach which does allow reuse of code for common functionality.

\end{lstlisting}
where the tuple variable-name serves the same purpose as the parameter name(s).
-Tuple variables can be composed of any types, except for array types, since array sizes are generally unknown.
+Tuple variables can be composed of any types, except for array types, since array sizes are generally unknown in C.
One way to access the tuple-variable components is with assignment or composition:
\begin{lstlisting}
[int, int] * p = &qr;                           $\C{// tuple pointer}$
int rem = qr.1;                                 $\C{// access remainder}$
int quo = div( 13, 5 ).0;                       $\C{// access quotient}$
p->0 = 5;                                       $\C{// change quotient}$
bar( qr.1, qr );                                $\C{// pass remainder and quotient/remainder}$
rem = [42, div( 13, 5 )].0.1;                   $\C{// access 2nd component of 1st component of tuple expression}$
\end{lstlisting}

This semantics means mass assignment differs from C cascading assignment (\eg @a = b = c@) in that conversions are applied in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, yielding @y == 3.14@ and @x == 3@;
-whereas C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in @y@ and @x@.
+whereas, C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in @y@ and @x@.
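As an aside (not part of this changeset), the conversion loss the paper attributes to C cascading assignment can be checked with a few lines of plain C; the variable names below are purely illustrative:

\begin{lstlisting}
#include <stdio.h>

int main( void ) {
	int x;
	double y;
	// cascading assignment: x = 3.14 truncates to 3, and y then receives the truncated int value
	y = x = 3.14;
	printf( "x = %d, y = %g\n", x, y );	// prints: x = 3, y = 3
}
\end{lstlisting}

By contrast, the \CFA mass assignment @[y, x] = 3.14@ described above assigns the full @3.14@ to @y@ and the truncated @3@ to @x@, because the conversion is applied independently in each component assignment.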
Finally, tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, just like all other assignment expressions in C.
This example shows mass, multiple, and cascading assignment used in one expression:
\end{lstlisting}

Hence, function parameter and return lists are flattened for the purposes of type unification allowing the example to pass expression resolution.
-This relaxation is possible by extending the thunk scheme described by \citet{Bilson03}.
+This relaxation is possible by extending the thunk scheme described by~\citet{Bilson03}.
Whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function:
\begin{lstlisting}
\end{lstlisting}
so the thunk provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism.
-These thunks take advantage of GCC C nested-functions to produce closures that have the usual function pointer signature.
+These thunks take advantage of GCC C nested-functions to produce closures that have the usual function-pointer signature.

\subsection{Implementation}
-Tuples are implemented in the \CFA translator via a transformation into generic types.
+Tuples are implemented in the \CFA translator via a transformation into \emph{generic types}.
For each $N$, the first time an $N$-tuple is seen in a scope a generic type with $N$ type parameters is generated, \eg:
\begin{lstlisting}

Finally, we demonstrate that \CFA performance for some idiomatic cases is better than C and close to \CC, showing the design is practically applicable.
-There is ongoing work on a wide range of \CFA feature extensions, including reference types, exceptions, concurrent primitives and modules.
+There is ongoing work on a wide range of \CFA feature extensions, including reference types, arrays with size, exceptions, concurrent primitives and modules.
(While all examples in the paper compile and run, a public beta-release of \CFA will take another 8--12 months to finalize these additional extensions.)
In addition, there are interesting future directions for the polymorphism design.
\CFA polymorphic functions use dynamic virtual-dispatch; the runtime overhead of this approach is low, but not as low as inlining, and it may be beneficial to provide a mechanism for performance-sensitive code.
-Two promising approaches are an @inline@ annotation at polymorphic function call sites to create a template-specialization of the function (provided the code is visible) or placing an @inline@ annotation on polymorphic function-definitions to instantiate a specialized version for some set of types.
+Two promising approaches are an @inline@ annotation at polymorphic function call sites to create a template-specialization of the function (provided the code is visible) or placing an @inline@ annotation on polymorphic function-definitions to instantiate a specialized version for some set of types (\CC template specialization).
These approaches are not mutually exclusive and allow performance optimizations to be applied only when necessary, without suffering global code-bloat.
In general, we believe separate compilation, producing smaller code, works well with loaded hardware-caches, which may offset the benefit of larger inlined-code.

Throughout, @/***/@ designates a counted redundant type annotation.
-\medskip\noindent
+\smallskip\noindent
\CFA
\begin{lstlisting}[xleftmargin=2\parindentlnth,aboveskip=0pt,belowskip=0pt]