\chapter{Generic Types} \label{generic-chap} A significant shortcoming in standard C is the lack of reusable type-safe abstractions for generic data structures and algorithms. Broadly speaking, there are three approaches to implementing abstract data structures in C. One approach is to write bespoke data structures for each context in which they are needed. While this approach is flexible and supports integration with the C type checker and tooling, it is also tedious and error-prone, especially for more complex data structures. A second approach is to use !void*!-based polymorphism, \eg{} the C standard library functions !bsearch! and !qsort!, which allow for the reuse of common functionality. However, basing all polymorphism on !void*! eliminates the type checker's ability to ensure that argument types are properly matched, and often requires extra function parameters, pointer indirection, and dynamic allocation that is otherwise unnecessary. A third approach to generic code is to use preprocessor macros, which allows the generated code to be both generic and type-checked, but errors in such code may be difficult to locate and debug. Furthermore, writing and using preprocessor macros is unnatural and inflexible. Figure~\ref{bespoke-generic-fig} demonstrates the bespoke approach for a simple linked list with !insert! and !head! operations, while Figure~\ref{void-generic-fig} and Figure~\ref{macro-generic-fig} show the same example using !void*! and !#define!-based polymorphism, respectively.
\begin{figure} \begin{cfa} #include <stdlib.h> $\C{// for malloc}$ #include <stdio.h> $\C{// for printf}$ struct int_list { int value; struct int_list* next; }; void int_list_insert( struct int_list** ls, int x ) { struct int_list* node = malloc(sizeof(struct int_list)); node->value = x; node->next = *ls; *ls = node; } int int_list_head( const struct int_list* ls ) { return ls->value; } $\C[\textwidth]{// all code must be duplicated for every generic instantiation}$ struct string_list { const char* value; struct string_list* next; }; void string_list_insert( struct string_list** ls, const char* x ) { struct string_list* node = malloc(sizeof(struct string_list)); node->value = x; node->next = *ls; *ls = node; } const char* string_list_head( const struct string_list* ls ) { return ls->value; } $\C[\textwidth]{// use is efficient and idiomatic}$ int main() { struct int_list* il = NULL; int_list_insert( &il, 42 ); printf("%d\n", int_list_head(il)); struct string_list* sl = NULL; string_list_insert( &sl, "hello" ); printf("%s\n", string_list_head(sl)); } \end{cfa} \caption{Bespoke code for linked list implementation.} \label{bespoke-generic-fig} \end{figure} \begin{figure} \begin{cfa} #include <stdlib.h> $\C{// for malloc}$ #include <stdio.h> $\C{// for printf}$ #include <string.h> $\C{// for strdup}$ // single code implementation struct list { void* value; struct list* next; }; $\C[\textwidth]{// internal memory management requires helper functions}$ void list_insert( struct list** ls, void* x, void* (*copy)(void*) ) { struct list* node = malloc(sizeof(struct list)); node->value = copy(x); node->next = *ls; *ls = node; } void* list_head( const struct list* ls ) { return ls->value; } $\C[\textwidth]{// helpers duplicated per type}$ void* int_copy(void* x) { int* n = malloc(sizeof(int)); *n = *(int*)x; return n; } void* string_copy(void* x) { return strdup((const char*)x); } int main() { struct list* il = NULL; int i = 42; list_insert( &il, &i, int_copy ); printf("%d\n", *(int*)list_head(il)); $\C[2in]{// unsafe type cast}$ struct list* sl = NULL;
list_insert( &sl, "hello", string_copy ); printf("%s\n", (char*)list_head(sl)); $\C[2in]{// unsafe type cast}$ } \end{cfa} \caption{\lstinline{void*}-polymorphic code for linked list implementation.} \label{void-generic-fig} \end{figure} \begin{figure} \begin{cfa} #include <stdlib.h> $\C{// for malloc}$ #include <stdio.h> $\C{// for printf}$ $\C[\textwidth]{// code is nested in macros}$ #define list(N) N ## _list #define list_insert(N) N ## _list_insert #define list_head(N) N ## _list_head #define define_list(N, T) $\C[0.25in]{ \textbackslash }$ struct list(N) { T value; struct list(N)* next; }; $\C[0.25in]{ \textbackslash }$ $\C[0.25in]{ \textbackslash }$ void list_insert(N)( struct list(N)** ls, T x ) { $\C[0.25in]{ \textbackslash }$ struct list(N)* node = malloc(sizeof(struct list(N))); $\C[0.25in]{ \textbackslash }$ node->value = x; node->next = *ls; $\C[0.25in]{ \textbackslash }$ *ls = node; $\C[0.25in]{ \textbackslash }$ } $\C[0.25in]{ \textbackslash }$ $\C[0.25in]{ \textbackslash }$ T list_head(N)( const struct list(N)* ls ) { return ls->value; } define_list(int, int); $\C[3in]{// defines int\_list}$ define_list(string, const char*); $\C[3in]{// defines string\_list}$ $\C[\textwidth]{// use is efficient, but syntactically idiosyncratic}$ int main() { struct list(int)* il = NULL; $\C[3in]{// does not match compiler-visible name}$ list_insert(int)( &il, 42 ); printf("%d\n", list_head(int)(il)); struct list(string)* sl = NULL; list_insert(string)( &sl, "hello" ); printf("%s\n", list_head(string)(sl)); } \end{cfa} \caption{Macros for generic linked list implementation.} \label{macro-generic-fig} \end{figure} \CC{}, Java, and other languages use \emph{generic types} to produce type-safe abstract data types. Design and implementation of generic types for \CFA{} is the first major contribution of this thesis; a summary of this work is published in \cite{Moss18}, on which this chapter is closely based.
\CFA{} generic types integrate efficiently and naturally with the existing polymorphic functions in \CFA{}, while retaining backward compatibility with C in layout and support for separate compilation. A generic type can be declared in \CFA{} by placing a !forall! specifier on a !struct! or !union! declaration, and instantiated using a parenthesized list of types after the generic name. An example comparable to the C polymorphism examples in Figures~\ref{bespoke-generic-fig}, \ref{void-generic-fig}, and \ref{macro-generic-fig} can be seen in Figure~\ref{cfa-generic-fig}. \begin{figure} \begin{cfa} #include <stdlib.hfa> $\C{// for alloc}$ #include <stdio.h> $\C{// for printf}$ forall(otype T) struct list { T value; list(T)* next; }; $\C[\textwidth]{// single polymorphic implementation of each function}$ $\C[\textwidth]{// overloading reduces need for namespace prefixes}$ forall(otype T) void insert( list(T)** ls, T x ) { list(T)* node = alloc(); $\C{// type-inferring alloc}$ (*node){ x, *ls }; $\C{// concise constructor syntax}$ *ls = node; } forall(otype T) T head( const list(T)* ls ) { return ls->value; } $\C[\textwidth]{// use is clear and efficient}$ int main() { list(int)* il = 0; insert( &il, 42 ); $\C{// inferred polymorphic T}$ printf("%d\n", head(il)); list(const char*)* sl = 0; insert( &sl, "hello" ); printf("%s\n", head(sl)); } \end{cfa} \caption{\CFA{} generic linked list implementation.} \label{cfa-generic-fig} \end{figure} \section{Design} Though a number of languages have some implementation of generic types, backward compatibility with both C and existing \CFA{} polymorphism presents some unique design constraints for \CFA{} generics. The guiding principle is to maintain an unsurprising language model for C programmers without compromising runtime efficiency. A key insight for this design is that C already possesses a handful of built-in generic types (\emph{derived types} in the language of the standard \cite[\S{}6.2.5]{C11}), notably pointer (!T*!)
and array (!T[]!), and that user-definable generics should act similarly. \subsection{Related Work} One approach to the design of generic types is that taken by \CC{} templates \cite{C++}. The template approach is closely related to the macro-expansion approach to C polymorphism demonstrated in Figure~\ref{macro-generic-fig}, but where the macro-expansion syntax has been given first-class language support. Template expansion has the benefit of generating code with near-optimal runtime efficiency, as distinct optimizations can be applied for each instantiation of the template. On the other hand, template expansion can also lead to significant code bloat, exponential in the worst case \cite{Haberman16}, and the costs of increased compilation time and instruction cache pressure cannot be ignored. The most significant restriction of the \CC{} template model is that it breaks separate compilation and C's translation-unit-based encapsulation mechanisms. Because a \CC{} template is not actually code, but rather a ``recipe'' to generate code, template code must be visible at its call site to be used. Furthermore, \CC{} template code cannot be type-checked without instantiating it, a time-consuming process with no hope of improvement until \CC{} concepts \cite{C++Concepts} are standardized in \CCtwenty{}. C code, by contrast, only needs a function declaration to call that function or a !struct! declaration to use (by-pointer) values of that type, desirable properties to maintain in \CFA{}. Java \cite{Java8} provides another prominent implementation of generic types, introduced in Java~5 and based on an approach significantly different from that of \CC{}. The Java approach has much more in common with the !void*!-polymorphism shown in Figure~\ref{void-generic-fig}; since in Java nearly all data is stored by reference, the Java approach to polymorphic data is to store pointers to arbitrary data and insert type-checked implicit casts at compile-time.
This process of \emph{type erasure} has the benefit of allowing a single instantiation of polymorphic code, but relies heavily on Java's object model and garbage collector. To use this model, a more C-like language such as \CFA{} would be required to dynamically allocate internal storage for variables, track their lifetime, and properly clean them up afterward. Cyclone \cite{Grossman06} extends C and also provides capabilities for polymorphic functions and existential types which are similar to \CFA{}'s !forall! functions and generic types. Cyclone existential types can include function pointers in a construct similar to a virtual function table, but these pointers must be explicitly initialized at some point in the code, which is tedious and error-prone compared to \CFA{}'s implicit assertion satisfaction. Furthermore, Cyclone's polymorphic functions and types are restricted to abstraction over types with the same layout and calling convention as !void*!, \ie{} only pointer types and !int!. In the \CFA{} terminology discussed in Section~\ref{generic-impl-sec}, all Cyclone polymorphism must be dtype-static. While the Cyclone polymorphism design provides the efficiency benefits discussed in Section~\ref{dtype-static-sec} for dtype-static polymorphism, it is more restrictive than the more general model of \CFA{}. Many other languages include some form of generic types. As a brief survey, ML \cite{ML} was the first language to support parametric polymorphism, but unlike \CFA{} does not support the use of assertions and traits to constrain type arguments. Haskell \cite{Haskell10} combines ML-style polymorphism with the notion of type classes, similar to \CFA{} traits, but requiring an explicit association with their implementing types, unlike \CFA{}. 
Objective-C \cite{obj-c-book} is an extension to C that has had some industrial success; however, it did not support type-checked generics until recently \cite{xcode7}, and its garbage-collected, message-passing object-oriented model is a radical departure from C. Go \cite{Go} and Rust \cite{Rust} are modern compiled languages with abstraction features similar to \CFA{} traits: \emph{interfaces} in Go and \emph{traits} in Rust. Go has implicit interface implementation and uses a ``fat pointer'' construct to pass polymorphic objects to functions, similar in principle to \CFA{}'s implicit !forall! parameters. Go does not, however, allow user code to define generic types, restricting Go programmers to the small set of generic types defined by the compiler. Rust has powerful abstractions for generic programming, including explicit implementation of traits and options for both separately-compiled virtual dispatch and template-instantiated static dispatch in functions. On the other hand, the safety guarantees of Rust's \emph{lifetime} abstraction and borrow checker impose a distinctly idiosyncratic programming style and steep learning curve; \CFA{}, with its more modest safety features, allows direct ports of C code while maintaining the idiomatic style of the original source. \subsection{\CFA{} Generics} The generic types design in \CFA{} draws inspiration from both \CC{} and Java generics, capturing useful aspects of each. Like \CC{} template types, generic !struct! and !union! types in \CFA{} have macro-expanded storage layouts, but, like Java generics, \CFA{} generic types can be used with separately-compiled polymorphic functions without requiring either the type or function definition to be visible to the other.
The fact that the storage layout of any instantiation of a \CFA{} generic type is identical to that of the monomorphic type produced by simple macro replacement of the generic type parameters is important to provide consistent and predictable runtime performance, and to not impose any undue abstraction penalty on generic code. As an example, consider the following generic type and function: % TODO whatever the bug is with initializer-expressions not working, it affects this \begin{cfa} forall( otype R, otype S ) struct pair { R first; S second; }; pair(const char*, int) with_len( const char* s ) { return (pair(const char*, int)){ s, strlen(s) }; } \end{cfa} In this example, !with_len! is defined at the same scope as !pair!, but it could be called from any context that can see the definition of !pair! and a declaration of !with_len!. If its return type were !pair(const char*, int)*!, callers of !with_len! would only need the declaration !forall(otype R, otype S) struct pair! to call it, in accordance with the usual C rules for opaque types. !with_len! is itself a monomorphic function, returning a type that is structurally identical to !struct { const char* first; int second; }!, and as such could be called from C given appropriate re-declarations and demangling flags. However, the definition of !with_len! depends on a polymorphic function call to the !pair! constructor, which only needs to be written once (in this case, implicitly by the compiler according to the usual \CFA{} constructor generation \cite{Schluntz17}) and can be re-used for a wide variety of !pair! instantiations. Since the parameters to this polymorphic constructor call are all statically known, compiler inlining can in principle eliminate any runtime overhead of this polymorphic call. \CFA{} deliberately does not support \CC{}-style partial specializations of generic types. 
A particularly infamous example in the \CC{} standard library is !vector<bool>!, which is represented as a bit-string rather than the array representation of the other !vector! instantiations. Complications from this inconsistency (chiefly the fact that a single bit is not addressable, unlike an array element) make the \CC{} !vector<bool>! unpleasant to use in generic contexts due to the break in its public interface. Rather than attempting to plug leaks in the template specialization abstraction with a detailed method interface, \CFA{} takes the more consistent position that two types with an unrelated data layout are in fact unrelated types, and should be handled with different code. Of course, to the degree that distinct types are similar enough to share an interface, the \CFA{} !trait! system allows such an interface to be defined, and objects of types implementing that !trait! to be operated on using the same polymorphic functions. Since \CFA{} polymorphic functions can operate over polymorphic generic types, functions over such types can be partially or completely specialized using the usual overload selection rules. As an example, the following generalization of !with_len! is a semantically-equivalent function that works for any type with a !len! function declared, making use of both the ad-hoc (overloading) and parametric (!forall!) polymorphism features of \CFA{}: \begin{cfa} forall(otype T, otype I | { I len(T); }) pair(T, I) with_len( T s ) { return (pair(T,I)){ s, len(s) }; } \end{cfa} \CFA{} generic types also support type constraints, as in !forall! functions. For example, the following declaration of a sorted set type ensures that the set key implements equality and relational comparison: \begin{cfa} forall(otype Key | { int ?==?(Key, Key); int ?<?(Key, Key); }) struct sorted_set; \end{cfa}

\section{Implementation} \label{generic-impl-sec}

The \CFA{} translator instantiates each generic type used with only concrete (compile-time-known) type arguments as a monomorphic structure definition, equivalent to the manual expansion in Figure~\ref{macro-generic-fig}; such instantiations are \emph{concrete} types. Instantiations involving polymorphic type variables are instead \emph{dynamic}: their size, alignment, and member offsets are computed at runtime and passed to polymorphic code as implicit parameters, with member access performed through the resulting offset arrays.

\subsection{Dtype-static Types} \label{dtype-static-sec}

An important intermediate case is a \emph{dtype-static} type, one whose polymorphic type parameters are used only under a pointer. Since all pointers share a common size and alignment, every instantiation of a dtype-static type shares a single structure definition, and a single compiled implementation of functions over it, as in the following lexicographic comparison over pairs of pointers: \begin{cfa} forall(dtype T) int lexcmp( pair(T*, T*)* a, pair(T*, T*)* b, int (*cmp)(T*, T*) ) { int c = cmp( a->first, b->first ); return c ? c : cmp( a->second, b->second ); } \end{cfa} Since !pair(T*, T*)!
is a concrete type, there are no implicit parameters passed to !lexcmp!; hence, the generated code is identical to a function written in standard C using !void*!, yet the \CFA{} version is type-checked to ensure members of both pairs and arguments to the comparison function match in type. Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \emph{tag structures}. Sometimes, information is only used for type checking and can be omitted at runtime. In the example below, !scalar! is a dtype-static type; hence, all uses have a single structure definition containing !unsigned long! and can share the same implementations of common functions, like !?+?!. These implementations may even be separately compiled, unlike \CC{} template functions. However, the \CFA{} type checker ensures matching types are used by all calls to !?+?!, preventing nonsensical computations like adding a length to a volume. \begin{cfa} forall(dtype Unit) struct scalar { unsigned long value; }; struct metres {}; struct litres {}; forall(dtype U) scalar(U) ?+?(scalar(U) a, scalar(U) b) { return (scalar(U)){ a.value + b.value }; } scalar(metres) half_marathon = { 21098 }; scalar(litres) pool = { 2500000 }; scalar(metres) marathon = half_marathon + half_marathon; `marathon + pool;` $\C[4in]{// compiler ERROR, mismatched types}$ \end{cfa} \section{Performance Experiments} \label{generic-performance-sec} To validate the practicality of this generic type design, microbenchmark-based tests were conducted against a number of comparable code designs in C and \CC{}, first published in \cite{Moss18}. Since these languages are all C-based and compiled with the same compiler backend, maximal-performance benchmarks should show little runtime variance, differing only in length and clarity of source code. A more illustrative comparison measures the costs of idiomatic usage of each language's features. 
The code below shows the \CFA{} benchmark tests for a generic stack based on a singly-linked list; the test suite is equivalent for the other languages, code for which is included in Appendix~\ref{generic-bench-app}. The experiment uses element types !int! and !pair(short, char)! and pushes $N = 40M$ elements onto a generic stack, copies the stack, clears one of the stacks, and finds the maximum value in the other stack. \begin{cfa} #define N 40000000 int main() { int max = 0, val = 42; stack( int ) si, ti; REPEAT_TIMED( "push_int", N, push( si, val ); ) TIMED( "copy_int", ti{ si }; ) TIMED( "clear_int", clear( si ); ) REPEAT_TIMED( "pop_int", N, int x = pop( ti ); if ( x > max ) max = x; ) pair( short, char ) max = { 0h, '\0' }, val = { 42h, 'a' }; stack( pair( short, char ) ) sp, tp; REPEAT_TIMED( "push_pair", N, push( sp, val ); ) TIMED( "copy_pair", tp{ sp }; ) TIMED( "clear_pair", clear( sp ); ) REPEAT_TIMED( "pop_pair", N, pair(short, char) x = pop( tp ); if ( x > max ) max = x; ) } \end{cfa} The four versions of the benchmark implemented are C with !void*!-based polymorphism, \CFA{} with parametric polymorphism, \CC{} with templates, and \CC{} using only class inheritance for polymorphism, denoted \CCV{}. The \CCV{} variant illustrates an alternative object-oriented idiom where all objects inherit from a base !object! class, mimicking a Java-like interface; in particular, runtime checks are necessary to safely downcast objects. The most notable difference among the implementations is the memory layout of generic types: \CFA{} and \CC{} inline the stack and pair elements into corresponding list and pair nodes, while C and \CCV{} lack such capability and, instead, must store generic objects via pointers to separately allocated objects. Note that the C benchmark uses unchecked casts as C has no runtime mechanism to perform such checks, whereas \CFA{} and \CC{} provide type safety statically.
Figure~\ref{generic-eval-fig} and Table~\ref{generic-eval-table} show the results of running the described benchmark. The graph plots the median of five consecutive runs of each program, with an initial warm-up run omitted. All code is compiled at \texttt{-O2} by gcc or g++ 6.4.0, with all \CC{} code compiled as \CCfourteen{}. The benchmarks are run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency. I conjecture that these results scale across most uses of generic types, given the constant underlying polymorphism implementation. \begin{figure} \centering \input{generic-timing} \caption[Benchmark timing results]{Benchmark timing results (smaller is better)} \label{generic-eval-fig} \end{figure} \begin{table} \caption{Properties of benchmark code} \label{generic-eval-table} \centering \newcommand{\CT}[1]{\multicolumn{1}{c}{#1}} \begin{tabular}{lrrrr} & \CT{C} & \CT{\CFA} & \CT{\CC} & \CT{\CCV} \\ maximum memory usage (MB) & 10\,001 & 2\,502 & 2\,503 & 11\,253 \\ source code size (lines) & 201 & 191 & 125 & 294 \\ redundant type annotations (lines) & 27 & 0 & 2 & 16 \\ binary size (KB) & 14 & 257 & 14 & 37 \\ \end{tabular} \end{table} The C and \CCV{} variants are generally the slowest and have the largest memory footprint, due to their less-efficient memory layout and the pointer indirection necessary to implement generic types in those languages; this inefficiency is exacerbated by the second level of generic types in the pair benchmarks. By contrast, the \CFA{} and \CC{} variants run in roughly equivalent time for both the integer and pair because of the equivalent storage layout, with the inlined libraries (\ie{} no separate compilation) and greater maturity of the \CC{} compiler contributing to its lead. 
\CCV{} is slower than C largely due to the cost of runtime type checking of downcasts (implemented with !dynamic_cast!); the outlier for \CFA{}, pop !pair!, results from the complexity of the generated-C polymorphic code. The gcc compiler is unable to optimize some dead code and condense nested calls; a compiler designed for \CFA{} could more easily perform these optimizations. Finally, the binary size for \CFA{} is larger because of static linking with \CFA{} libraries. \CFA{} is also competitive in terms of source code size, measured as a proxy for programmer effort. The line counts in Table~\ref{generic-eval-table} include implementations of !pair! and !stack! types for all four languages for purposes of direct comparison, although it should be noted that \CFA{} and \CC{} have prewritten data structures in their standard libraries that programmers would generally use instead. Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA{} and \CC{} code to 39 and 42 lines, respectively. The difference between the \CFA{} and \CC{} line counts is primarily declaration duplication to implement separate compilation; a header-only \CFA{} library is similar in length to the \CC{} version. On the other hand, due to the language shortcomings mentioned at the beginning of the chapter, C does not have a generic collections library in its standard distribution, resulting in frequent re-implementation of such collection types by C programmers. \CCV{} does not use the \CC{} standard template library by construction, and, in fact, includes the definition of !object! and wrapper classes for !char!, !short!, and !int! in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library. 
I justify the given line count by noting that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or wrapper types, which may be similarly verbose. Line count is a fairly rough measure of code complexity; another important factor is how much type information the programmer must specify manually, especially where that information is not type-checked. Such unchecked type information produces a heavier documentation burden and increased potential for runtime bugs; it is much less common in \CFA{} than in C, with its manually specified function pointer arguments and format codes, or \CCV{}, with its extensive use of un-type-checked downcasts, \eg{} !object! to !integer! when popping a stack. To quantify this manual typing, the ``redundant type annotations'' line in Table~\ref{generic-eval-table} counts the number of lines on which the known type of a variable is re-specified, either as a format specifier, explicit downcast, type-specific function, or by name in a !sizeof!, !struct! literal, or !new! expression. The \CC{} benchmark uses two redundant type annotations to create new stack nodes, whereas the C and \CCV{} benchmarks have several such annotations spread throughout their code. The \CFA{} benchmark is able to eliminate \emph{all} redundant type annotations through use of the return-type polymorphic !alloc! function in the \CFA{} standard library. \section{Future Work} The generic types presented here are already sufficiently expressive to implement a variety of useful library types. However, some other features based on this design could further improve \CFA{}. The most pressing addition is the ability to have non-type generic parameters. C already supports fixed-length array types, \eg{} !int[10]!; these types are essentially generic types with unsigned integer parameters (\ie{} array dimension), and allowing \CFA{} users to build similar types is a requested feature.
% More exotically, the ability to have these non-type parameters depend on dynamic runtime values rather than static compile-time constants opens up interesting opportunities for type-checking problematic code patterns. % For example, if a collection iterator was parameterized over the pointer to the collection it was drawn from, then a sufficiently powerful static analysis pass could ensure that that iterator was only used for that collection, eliminating one source of hard-to-find bugs. The implementation mechanisms behind generic types can also be used to add new features to \CFA{}. One such potential feature is \emph{field assertions}, an addition to the existing function and variable assertions on polymorphic type variables. These assertions could be specified using this proposed syntax: \begin{cfa} trait hasXY(dtype T) { int T.x; $\C{// T has a field x of type int}$ int T.y; $\C{// T has a field y of type int}$ }; \end{cfa} Implementation of these field assertions would be based on the same code that supports member access by dynamic offset calculation for dynamic generic types. Simulating field access can already be done more flexibly in \CFA{} by declaring a trait containing an accessor function to be called from polymorphic code, but these accessor functions impose some overhead both to write and call, and directly providing field access via an implicit offset parameter would be both more concise and more efficient. Of course, there are language design trade-offs to such an approach, notably that providing the two similar features of field and function assertions would impose a burden of choice on programmers writing traits, with field assertions more efficient, but function assertions more general; given this open design question a decision on field assertions is deferred until \CFA{} is more mature. If field assertions are included in the language, a natural extension would be to provide a structural inheritance mechanism for every !struct! 
type that simply turns the list of !struct! fields into a list of field assertions, allowing monomorphic functions over that type to be generalized to polymorphic functions over other similar types with added or reordered fields, for example: \begin{cfa} struct point { int x, y; }; $\C{// traitof(point) is equivalent to hasXY above}$ struct coloured_point { int x, y; enum { RED, BLACK } colour; }; // works for both point and coloured_point forall(dtype T | traitof(point)(T) ) double hypot( T& p ) { return sqrt( p.x*p.x + p.y*p.y ); } \end{cfa} \CFA{} could also support a packed or otherwise size-optimized representation for generic types based on a similar mechanism --- nothing in the use of the offset arrays implies that the field offsets need to be monotonically increasing. With respect to the broader \CFA{} polymorphism design, the experimental results in Section~\ref{generic-performance-sec} demonstrate that though the runtime impact of \CFA{}'s dynamic virtual dispatch is low, it is not as low as the static dispatch of \CC{} template inlining. However, rather than subject all \CFA{} users to the compile-time costs of ubiquitous template expansion, it is better to target performance-sensitive code more precisely. Two promising approaches are an !inline! annotation at polymorphic function call sites to create a template specialization of the function (provided the code is visible), and a different !inline! annotation on polymorphic function definitions to instantiate a specialized version of the function for some set of types. These approaches are complementary and allow performance optimizations to be applied only when necessary, without suffering global code bloat.