% take off review (for line numbers) and anonymous (for anonymization) on submission
\documentclass[format=acmlarge,anonymous,review]{acmart}
% \documentclass[format=acmlarge,review]{acmart}
\usepackage{xspace,calc,comment}
\usepackage{upquote} % switch curled `'" to straight
\usepackage{listings} % format program code
\makeatletter
% parindent is relative, i.e., toggled on/off in environments like itemize, so store the value for
% use rather than use \parindent directly.
\newlength{\parindentlnth}
\setlength{\parindentlnth}{\parindent}
\newlength{\gcolumnposn} % temporary hack because lstlisting does not handle tabs correctly
\newlength{\columnposn}
\setlength{\gcolumnposn}{2.75in}
\setlength{\columnposn}{\gcolumnposn}
\newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@commentstyle{#2}}}
\newcommand{\CRT}{\global\columnposn=\gcolumnposn}
\newcommand{\TODO}[1]{\textbf{TODO}: {\itshape #1}} % TODO included
%\newcommand{\TODO}[1]{} % TODO elided
% Latin abbreviation
\newcommand{\abbrevFont}{\textit} % set empty for no italics
\newcommand*{\eg}{%
\@ifnextchar{,}{\abbrevFont{e}.\abbrevFont{g}.}%
{\@ifnextchar{:}{\abbrevFont{e}.\abbrevFont{g}.}%
{\abbrevFont{e}.\abbrevFont{g}.,\xspace}}%
}%
\newcommand*{\ie}{%
\@ifnextchar{,}{\abbrevFont{i}.\abbrevFont{e}.}%
{\@ifnextchar{:}{\abbrevFont{i}.\abbrevFont{e}.}%
{\abbrevFont{i}.\abbrevFont{e}.,\xspace}}%
}%
\newcommand*{\etc}{%
\@ifnextchar{.}{\abbrevFont{etc}}%
{\abbrevFont{etc}.\xspace}%
}%
\newcommand{\etal}{%
\@ifnextchar{.}{\abbrevFont{et~al}}%
{\abbrevFont{et al}.\xspace}%
}%
% \newcommand{\eg}{\textit{e}.\textit{g}.,\xspace}
% \newcommand{\ie}{\textit{i}.\textit{e}.,\xspace}
% \newcommand{\etc}{\textit{etc}.,\xspace}
\makeatother

% Useful macros
\newcommand{\CFA}{C$\mathbf\forall$\xspace} % Cforall symbolic name
\newcommand{\CC}{\rm C\kern-.1em\hbox{+\kern-.25em+}\xspace} % C++ symbolic name
\newcommand{\CCeleven}{\rm C\kern-.1em\hbox{+\kern-.25em+}11\xspace} % C++11 symbolic name
\newcommand{\CCfourteen}{\rm C\kern-.1em\hbox{+\kern-.25em+}14\xspace} % C++14 symbolic name
\newcommand{\CCseventeen}{\rm C\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name
\newcommand{\CCtwenty}{\rm C\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name
\newcommand{\CCV}{\rm C\kern-.1em\hbox{+\kern-.25em+}obj\xspace} % C++ virtual symbolic name
\newcommand{\CS}{C\raisebox{-0.7ex}{\Large$^\sharp$}\xspace}
\newcommand{\Textbf}[1]{{\color{red}\textbf{#1}}}

% CFA programming language, based on ANSI C (with some gcc additions)
\lstdefinelanguage{CFA}[ANSI]{C}{
	morekeywords={_Alignas,_Alignof,__alignof,__alignof__,asm,__asm,__asm__,_At,_Atomic,__attribute,__attribute__,auto,
		_Bool,catch,catchResume,choose,_Complex,__complex,__complex__,__const,__const__,disable,dtype,enable,__extension__,
		fallthrough,fallthru,finally,forall,ftype,_Generic,_Imaginary,inline,__label__,lvalue,_Noreturn,one_t,otype,restrict,_Static_assert,
		_Thread_local,throw,throwResume,trait,try,ttype,typeof,__typeof,__typeof__,zero_t},
}%

\lstset{
language=CFA,
columns=fullflexible,
basicstyle=\linespread{0.9}\sf, % reduce line spacing and use sans-serif font
stringstyle=\tt, % use typewriter font
tabsize=4, % 4 space tabbing
xleftmargin=\parindentlnth, % indent code to paragraph indentation
%mathescape=true, % LaTeX math escape in CFA code $...$
escapechar=\$, % LaTeX escape in CFA code
keepspaces=true, %
showstringspaces=false, % do not show spaces with cup
showlines=true, % show
blank lines at end of code
aboveskip=4pt, % spacing above/below code block
belowskip=3pt,
% replace/adjust listing characters that look bad in sans-serif
literate={-}{\raisebox{-0.15ex}{\texttt{-}}}1 {^}{\raisebox{0.6ex}{$\scriptscriptstyle\land\,$}}1
	{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 {_}{\makebox[1.2ex][c]{\rule{1ex}{0.1ex}}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2,
moredelim=**[is][\color{red}]{`}{`},
}% lstset
% inline code @...@
\lstMakeShortInline@%

% ACM Information
\citestyle{acmauthoryear}
\acmJournal{PACMPL}

\title{Generic and Tuple Types with Efficient Dynamic Layout in \CFA}

\author{Aaron Moss}
\email{a3moss@uwaterloo.ca}
\author{Robert Schluntz}
\email{rschlunt@uwaterloo.ca}
\author{Peter Buhr}
\email{pabuhr@uwaterloo.ca}
\affiliation{%
	\institution{University of Waterloo}
	\department{David R. Cheriton School of Computer Science}
	\streetaddress{Davis Centre, University of Waterloo}
	\city{Waterloo}
	\state{ON}
	\postcode{N2L 3G1}
	\country{Canada}
}

\terms{generic, tuple, variadic, types}
\keywords{generic types, tuple types, variadic types, polymorphic functions, C, Cforall}

\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10011007.10011006.10011008.10011024.10011025</concept_id>
<concept_desc>Software and its engineering~Polymorphism</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10011007.10011006.10011008.10011024.10011028</concept_id>
<concept_desc>Software and its engineering~Data types and structures</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10011007.10011006.10011041.10011047</concept_id>
<concept_desc>Software and its engineering~Source code generation</concept_desc>
<concept_significance>300</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Software and its engineering~Polymorphism}
\ccsdesc[500]{Software and its engineering~Data types and structures}
\ccsdesc[300]{Software and its engineering~Source code generation}

\begin{abstract}
The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. This installed base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. Nonetheless, C, first standardized over thirty years ago, lacks many features that make programming in more modern languages safer and more productive. The goal of the \CFA project is to create an extension of C that provides modern safety and productivity features while still ensuring strong backwards compatibility with C and its programmers. Prior projects have attempted similar goals but failed to honour C programming-style; for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers. Specifically, \CFA is designed to have an orthogonal feature-set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code-bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and engineers. This paper describes two \CFA extensions, generic and tuple types, details how their design avoids shortcomings of similar features in C and other C-like languages, and presents experimental results validating the design.
\end{abstract}

\begin{document}
\maketitle

\section{Introduction and Background}

The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects.
This installed base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. The TIOBE index~\citep{TIOBE} ranks the top 5 most popular programming languages as: Java 16\%, \Textbf{C 7\%}, \Textbf{\CC 5\%}, \CS 4\%, Python 4\% = 36\%, where the next 50 languages are less than 3\% each with a long tail. The top 3 rankings over the past 30 years are:
\lstDeleteShortInline@%
\begin{center}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}r|c|c|c|c|c|c|c@{}}
& 2017 & 2012 & 2007 & 2002 & 1997 & 1992 & 1987 \\ \hline
Java & 1 & 1 & 1 & 1 & 12 & - & - \\ \hline
\Textbf{C} & \Textbf{2}& \Textbf{2}& \Textbf{2}& \Textbf{2}& \Textbf{1}& \Textbf{1}& \Textbf{1} \\ \hline
\CC & 3 & 3 & 3 & 3 & 2 & 2 & 4 \\
\end{tabular}
\end{center}
\lstMakeShortInline@%
Love it or hate it, C is extremely popular, highly used, and one of the few systems languages. In many cases, \CC is used solely as a better C. Nonetheless, C, first standardized over thirty years ago, lacks many features that make programming in more modern languages safer and more productive.

\CFA (pronounced ``C-for-all'', and written \CFA or Cforall) is an evolutionary extension of the C programming language that aims to add modern language features to C while maintaining both source compatibility with C and a familiar programming model for programmers. The four key design goals for \CFA~\citep{Bilson03} are: (1) The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler; (2) Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler; (3) \CFA code must be at least as portable as standard C code; (4) Extensions introduced by \CFA must be translated in the most efficient way possible. These goals ensure existing C code-bases can be converted to \CFA incrementally with minimal effort, and C programmers can productively generate \CFA code without training beyond the features being used. Unfortunately, \CC is actively diverging from C, so incremental additions require significant effort and training, coupled with multiple legacy design-choices that cannot be updated.

\CFA is currently implemented as a source-to-source translator from \CFA to the GCC-dialect of C~\citep{GCCExtensions}, allowing it to leverage the portability and code optimizations provided by GCC, meeting goals (1)--(3). Ultimately, a compiler is necessary for advanced features and optimal performance.

This paper identifies shortcomings in existing approaches to generic and variadic data types in C-like languages and presents a design for generic and variadic types avoiding those shortcomings. Specifically, the solution is both reusable and type-checked, as well as conforming to the design goals of \CFA with ergonomic use of existing C abstractions. The new constructs are empirically compared with both standard C and \CC; the results show the new design is comparable in performance.

\subsection{Polymorphic Functions}
\label{sec:poly-fns}

\CFA's polymorphism was originally formalized by \citet{Ditchfield92}, and first implemented by \citet{Bilson03}.
The signature feature of \CFA is parametric-polymorphic functions, where functions are generalized using a @forall@ clause (giving the language its name):
\begin{lstlisting}
`forall( otype T )` T identity( T val ) { return val; }
int forty_two = identity( 42 ); $\C{// T is bound to int, forty\_two == 42}$
\end{lstlisting}
The @identity@ function above can be applied to any complete \emph{object type} (or @otype@). The type variable @T@ is transformed into a set of additional implicit parameters encoding sufficient information about @T@ to create and return a variable of that type. The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor, and destructor. If this extra information is not needed, \eg for a pointer, the type parameter can be declared as a \emph{data type} (or @dtype@).

In \CFA, the polymorphism runtime-cost is spread over each polymorphic call, due to passing more arguments to polymorphic functions; preliminary experiments show this overhead is similar to \CC virtual-function calls. An advantage of this design is that, unlike \CC template-functions, \CFA polymorphic-functions are compatible with C \emph{separate compilation}, avoiding the code bloat and long compilation times of template expansion.

Since bare polymorphic-types provide only a narrow set of available operations, \CFA provides a \emph{type assertion} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type-variable. For example, the function @twice@ can be defined using the \CFA syntax for operator overloading:
\begin{lstlisting}
forall( otype T `| { T ?+?(T, T); }` ) T twice( T x ) { return x + x; } $\C{// ? denotes operands}$
int val = twice( twice( 3.7 ) );
\end{lstlisting}
which works for any type @T@ with a matching addition operator. The polymorphism is achieved by creating a wrapper function for calling @+@ with @T@ bound to @double@, then passing this function to the first call of @twice@. There is now the option of using the same @twice@ and converting the result to @int@ on assignment, or creating another @twice@ with type parameter @T@ bound to @int@ because \CFA uses the return type, as in~\citep{Ada}, in its type analysis. The first approach has a late conversion from @double@ to @int@ on the final assignment, while the second has an eager conversion to @int@. \CFA minimizes the number of conversions and their potential to lose information, so it selects the first approach, which corresponds with C-programmer intuition.

Crucial to the design of a new programming language are the libraries to access thousands of external software features. Like \CC, \CFA inherits a massive compatible library-base, where other programming languages must rewrite or provide fragile inter-language communication with C. A simple example is leveraging the existing type-unsafe (@void *@) C @bsearch@ to binary search a sorted floating-point array:
\begin{lstlisting}
void * bsearch( const void * key, const void * base, size_t nmemb, size_t size,
				int (* compar)( const void *, const void * ));
int comp( const void * t1, const void * t2 ) { return *(double *)t1 < *(double *)t2 ? -1 :
				*(double *)t2 < *(double *)t1 ?
1 : 0; }
double vals[10] = { /* 10 floating-point values */ };
double key = 5.0;
double * val = (double *)bsearch( &key, vals, 10, sizeof(vals[0]), comp ); $\C{// search sorted array}$
\end{lstlisting}
which can be augmented simply with generalized, type-safe, \CFA-overloaded wrappers:
\begin{lstlisting}
forall( otype T | { int ?<?( T, T ); } ) T * bsearch( T key, const T * arr, size_t size ) {
	int comp( const void * t1, const void * t2 ) { /* as above, with double replaced by T */ }
	return (T *)bsearch( &key, arr, size, sizeof(T), comp );
}
double * val = bsearch( 5.0, vals, 10 ); $\C{// search sorted array}$
\end{lstlisting}
where the nested function @comp@ provides the hidden interface from typed \CFA to untyped (@void *@) C. As well, call-site inferencing and nested functions provide a localized form of inheritance; for example, the \CFA @qsort@ only sorts in ascending order using @<@, but this behaviour can be overridden locally:
\begin{lstlisting}
forall( otype T | { int ?<?( T, T ); } ) void qsort( const T * arr, size_t size ) { /* use C qsort */ }
{
	int ?<?( double x, double y ) { return x `>` y; } $\C{// locally override behaviour}$
	qsort( vals, size ); $\C{// descending sort}$
}
\end{lstlisting}
Within the block, the nested version of @<@ performs @>@ and this local version overrides the built-in @<@ so it is passed to @qsort@. Hence, programmers can easily form local environments, adding and modifying appropriate functions, to maximize reuse of other existing functions and types.

Finally, \CFA allows variable overloading:
\lstDeleteShortInline@%
\par\smallskip
\begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}}
\begin{lstlisting}
short int MAX = ...;
int MAX = ...;
double MAX = ...;
\end{lstlisting}
&
\begin{lstlisting}
short int s = MAX; // select correct MAX
int i = MAX;
double d = MAX;
\end{lstlisting}
\end{tabular}
\smallskip\par\noindent
\lstMakeShortInline@%
Hence, the single name @MAX@ replaces all the C type-specific names: @SHRT_MAX@, @INT_MAX@, @DBL_MAX@. As well, restricted constant overloading is allowed for the values @0@ and @1@, which have special status in C, \eg the value @0@ is both an integer and a pointer literal, so its meaning depends on context. In addition, several operations are defined in terms of the values @0@ and @1@, \eg:
\begin{lstlisting}
int x;
if (x) x++; $\C{// if (x != 0) x += 1;}$
\end{lstlisting}
Every @if@ statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. Due to these rewrite rules, the values @0@ and @1@ have the types @zero_t@ and @one_t@ in \CFA, which allows overloading various operations for new types that seamlessly connect to all special @0@ and @1@ contexts. The types @zero_t@ and @one_t@ have special built-in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal.
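To illustrate, consider a hypothetical user-defined @fraction@ type (a sketch, not one of the paper's library examples); overloading the inequality operator against @zero_t@ is enough for the new type to work in all C condition contexts:
\begin{lstlisting}
struct fraction { int num, den; };
int ?!=?( fraction f, zero_t ) { return f.num != 0; } $\C{// compare fraction against 0 literal}$
fraction f = { 3, 4 };
if ( f ) { /* executed: condition rewritten as f != 0 */ }
\end{lstlisting}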
\subsection{Traits}

\CFA provides \emph{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each function declaration:
\begin{lstlisting}
trait summable( otype T ) {
	void ?{}( T *, zero_t ); $\C{// constructor from 0 literal}$
	T ?+?( T, T ); $\C{// assortment of additions}$
	T ?+=?( T *, T );
	T ++?( T * );
	T ?++( T * );
};
forall( otype T `| summable( T )` ) T sum( T a[$\,$], size_t size ) { // use trait
	`T` total = { `0` }; $\C{// instantiate T from 0 by calling its constructor}$
	for ( unsigned int i = 0; i < size; i += 1 )
		total `+=` a[i]; $\C{// select appropriate +}$
	return total;
}
\end{lstlisting}

In fact, the set of trait operators is incomplete, as there is no assignment requirement for type @T@, but @otype@ is syntactic sugar for the following implicit trait:
\begin{lstlisting}
trait otype( dtype T | sized(T) ) { // sized is a pseudo-trait for types with known size and alignment
	void ?{}( T * ); $\C{// default constructor}$
	void ?{}( T *, T ); $\C{// copy constructor}$
	void ?=?( T *, T ); $\C{// assignment operator}$
	void ^?{}( T * ); $\C{// destructor}$
};
\end{lstlisting}
Given the information provided for an @otype@, variables of polymorphic type can be treated as if they were a complete type: stack-allocatable, default or copy-initialized, assigned, and deleted.

In summary, the \CFA type-system uses \emph{nominal typing} for concrete types, matching with the C type-system, and \emph{structural typing} for polymorphic types. Hence, trait names play no part in type equivalence; the names are simply macros for a list of polymorphic assertions, which are expanded at usage sites. Nevertheless, trait names form a logical subtype-hierarchy with @dtype@ at the top, where traits often contain overlapping assertions, \eg operator @+@. Traits are used like interfaces in Java or abstract base-classes in \CC, but without the nominal inheritance-relationships. Instead, each polymorphic function (or generic type) defines the structural type needed for its execution (polymorphic type-key), and this key is fulfilled at each call site from the lexical environment, which is similar to Go~\citep{Go} interfaces. Hence, new lexical scopes and nested functions are used extensively to create local subtypes, as in the @qsort@ example, without having to manage a nominal-inheritance hierarchy. (Nominal inheritance can be approximated with traits using marker variables or functions, as is done in Go.)
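Returning to @summable@, a hypothetical sketch (not from the paper's examples) shows this structural satisfaction: declaring the required operations for a new complex-number type in the lexical scope makes the earlier @sum@ applicable, with no explicit declaration connecting @cplx@ to the trait:
\begin{lstlisting}
struct cplx { double re, im; };
void ?{}( cplx * c, zero_t ) { c->re = 0.0; c->im = 0.0; } $\C{// constructor from 0 literal}$
cplx ?+?( cplx a, cplx b ) { return (cplx){ a.re + b.re, a.im + b.im }; }
cplx ?+=?( cplx * a, cplx b ) { return *a = *a + b; }
cplx ++?( cplx * a ) { return *a += (cplx){ 1.0, 0.0 }; } $\C{// increment by 1}$
cplx ?++( cplx * a ) { cplx t = *a; *a += (cplx){ 1.0, 0.0 }; return t; }
cplx v[3] = { { 1.0, 2.0 }, { 3.0, 4.0 }, { 5.0, 6.0 } };
cplx total = sum( v, 3 ); $\C{// summable assertions satisfied from the lexical environment}$
\end{lstlisting}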
% Nominal inheritance can be simulated with traits using marker variables or functions:
% \begin{lstlisting}
% trait nominal(otype T) {
%	T is_nominal;
% };
% int is_nominal; $\C{// int now satisfies the nominal trait}$
% \end{lstlisting}
%
% Traits, however, are significantly more powerful than nominal-inheritance interfaces; most notably, traits may be used to declare a relationship \emph{among} multiple types, a property that may be difficult or impossible to represent in nominal-inheritance type systems:
% \begin{lstlisting}
% trait pointer_like(otype Ptr, otype El) {
%	lvalue El *?(Ptr); $\C{// Ptr can be dereferenced into a modifiable value of type El}$
% }
% struct list {
%	int value;
%	list *next; $\C{// may omit "struct" on type names as in \CC}$
% };
% typedef list *list_iterator;
%
% lvalue int *?( list_iterator it ) { return it->value; }
% \end{lstlisting}
% In the example above, @(list_iterator, int)@ satisfies @pointer_like@ by the user-defined dereference function, and @(list_iterator, list)@ also satisfies @pointer_like@ by the built-in dereference operator for pointers. Given a declaration @list_iterator it@, @*it@ can be either an @int@ or a @list@, with the meaning disambiguated by context (\eg @int x = *it;@ interprets @*it@ as an @int@, while @(*it).value = 42;@ interprets @*it@ as a @list@).
% While a nominal-inheritance system with associated types could model one of those two relationships by making @El@ an associated type of @Ptr@ in the @pointer_like@ implementation, few such systems could model both relationships simultaneously.

\section{Generic Types}

One of the known shortcomings of standard C is that it does not provide reusable type-safe abstractions for generic data structures and algorithms. Broadly speaking, there are three approaches to create data structures in C. One approach is to write bespoke data structures for each context in which they are needed. While this approach is flexible and supports integration with the C type-checker and tooling, it is also tedious and error-prone, especially for more complex data structures. A second approach is to use @void *@-based polymorphism, \eg the C standard-library functions @bsearch@ and @qsort@, which does allow the reuse of common code for common functionality. However, basing all polymorphism on @void *@ eliminates the type-checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that would not otherwise be needed. A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type-checked, but errors may be difficult to interpret. Furthermore, writing and using preprocessor macros can be unnatural and inflexible (see the sketch below).

Other languages, \eg \CC and Java, use \emph{generic types} to produce type-safe abstract data-types. \CFA also implements generic types that integrate efficiently and naturally with the existing polymorphic functions, while retaining backwards compatibility with C and providing separate compilation. However, for known concrete parameters, the generic type can be inlined, like \CC templates.
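To make the macro approach concrete, a brief standard-C sketch (with hypothetical names) shows the manual instantiation and name management it requires:
\begin{lstlisting}
#define PAIR( R, S ) struct pair_##R##_##S { R first; S second; }
PAIR( int, char ); $\C{// must be expanded once per type combination}$
PAIR( double, float );
struct pair_int_char p = { 42, '!' }; $\C{// mangled name managed by the programmer}$
\end{lstlisting}
Errors in such code are reported in terms of the expanded text, and pointer or qualified types cannot be used as macro arguments without extra @typedef@s.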
A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration, and instantiated using a parenthesized list of types after the type name:
\begin{lstlisting}
forall( otype R, otype S ) struct pair {
	R first;
	S second;
};
forall( otype T ) T value( pair( const char *, T ) p ) { return p.second; }
forall( dtype F, otype T ) T value_p( pair( F *, T * ) p ) { return *p.second; }
pair( const char *, int ) p = { "magic", 42 };
int magic = value( p );
pair( void *, int * ) q = { 0, &p.second };
magic = value_p( q );
double d = 1.0;
pair( double *, double * ) r = { &d, &d };
d = value_p( r );
\end{lstlisting}

\CFA classifies generic types as either \emph{concrete} or \emph{dynamic}. Concrete types have a fixed memory layout regardless of type parameters, while dynamic types vary in memory layout depending on their type parameters. A type may have polymorphic parameters but still be concrete, called \emph{dtype-static}. Polymorphic pointers are an example of dtype-static types, \eg @forall(dtype T) T *@ is a polymorphic type, but for any @T@, @T *@ is a fixed-size pointer, and therefore, can be represented by a @void *@ in code generation.

\CFA generic types also allow checked argument-constraints. For example, the following declaration of a sorted set-type ensures the set key supports equality and relational comparison:
\begin{lstlisting}
forall( otype Key | { _Bool ?==?(Key, Key); _Bool ?<?(Key, Key); } ) struct sorted_set;
\end{lstlisting}

\subsection{Applications}
\label{sec:generic-apps}

The reuse of dtype-static structure instantiations enables useful programming patterns at zero runtime cost. For example, a generic lexicographic comparison for pairs of pointers can be written once and shared by all pointer instantiations:
\begin{lstlisting}
forall(dtype T)
int lexcmp( pair( T *, T * ) * a, pair( T *, T * ) * b, int (* cmp)( T *, T * ) ) {
	return cmp( a->first, b->first ) ? : cmp( a->second, b->second );
}
\end{lstlisting}
% int c = cmp( a->first, b->first );
% if ( c == 0 ) c = cmp( a->second, b->second );
% return c;
Since @pair(T *, T * )@ is a concrete type, there are no implicit parameters passed to @lexcmp@, so the generated code is identical to a function written in standard C using @void *@, yet the \CFA version is type-checked to ensure the fields of both pairs and the arguments to the comparison function match in type.

Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \emph{tag-structures}. Sometimes information is only used for type-checking and can be omitted at runtime, \eg:
\begin{lstlisting}
forall(dtype Unit) struct scalar { unsigned long value; };
struct metres {};
struct litres {};
forall(dtype U) scalar(U) ?+?( scalar(U) a, scalar(U) b ) {
	return (scalar(U)){ a.value + b.value };
}
scalar(metres) half_marathon = { 21093 };
scalar(litres) swimming_pool = { 2500000 };
scalar(metres) marathon = half_marathon + half_marathon;
scalar(litres) two_pools = swimming_pool + swimming_pool;
marathon + swimming_pool; $\C{// compilation ERROR}$
\end{lstlisting}
@scalar@ is a dtype-static type, so all uses have a single structure definition, containing @unsigned long@, and can share the same implementations of common functions like @?+?@. These implementations may even be separately compiled, unlike \CC template functions. However, the \CFA type-checker ensures matching types are used by all calls to @?+?@, preventing nonsensical computations like adding a length to a volume.

\section{Tuples}
\label{sec:tuples}

In many languages, functions can return at most one value; however, many operations have multiple outcomes, some exceptional. Consider C's @div@ and @remquo@ functions, which return the quotient and remainder for a division of integer and floating-point values, respectively.
\begin{lstlisting}
typedef struct { int quo, rem; } div_t;
div_t div( int num, int den );
double remquo( double num, double den, int * quo );
div_t qr = div( 13, 5 ); $\C{// return quotient/remainder aggregate}$
int q;
double r = remquo( 13.5, 5.2, &q ); $\C{// return remainder, alias quotient}$
\end{lstlisting}
@div@ aggregates the quotient/remainder in a structure, while @remquo@ aliases a parameter to an argument. Both approaches are awkward. Alternatively, a programming language can directly support returning multiple values, \eg in \CFA:
\begin{lstlisting}
[ int, int ] div( int num, int den ); $\C{// return two integers}$
[ double, double ] div( double num, double den ); $\C{// return two doubles}$
int q, r; $\C{// overload variable names}$
double q, r;
[ q, r ] = div( 13, 5 ); $\C{// select appropriate div and q, r}$
[ q, r ] = div( 13.5, 5.2 );
\end{lstlisting}
Clearly, this approach is straightforward to understand and use; therefore, why do so few programming languages support this obvious feature, or provide it only awkwardly? The answer is that there are complex consequences that cascade through multiple aspects of the language, especially the type-system. This section shows these consequences and how \CFA deals with them.

\subsection{Tuple Expressions}

The addition of multiple-return-value functions (MRVFs) is useless without a syntax for accepting multiple values at the call-site. The simplest mechanism for capturing the return values is variable assignment, allowing the values to be retrieved directly. As such, \CFA allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions (as above), called a \emph{tuple}.

However, functions also use \emph{composition} (nested calls), with the direct consequence that MRVFs must also support composition to be orthogonal with single-return-value functions (SRVFs), \eg:
\begin{lstlisting}
printf( "%d %d\n", div( 13, 5 ) ); $\C{// return values separated into arguments}$
\end{lstlisting}
Here, the values returned by @div@ are composed with the call to @printf@. However, the \CFA type-system must support significantly more complex composition:
\begin{lstlisting}
[ int, int ] foo$\(_1\)$( int );
[ double ] foo$\(_2\)$( int );
void bar( int, double, double );
bar( foo( 3 ), foo( 3 ) );
\end{lstlisting}
The type-resolver only has the tuple return-types to resolve the call to @bar@ as the @foo@ parameters are identical, which involves unifying the possible @foo@ functions with @bar@'s parameter list. No combination of @foo@s is an exact match with @bar@'s parameters, so the resolver applies C conversions. The minimal cost is @bar( foo@$_1$@( 3 ), foo@$_2$@( 3 ) )@, giving (@int@, {\color{green}@int@}, @double@) to (@int@, {\color{green}@double@}, @double@) with one {\color{green}safe} (widening) conversion from @int@ to @double@ versus ({\color{red}@double@}, {\color{green}@int@}, {\color{green}@int@}) to ({\color{red}@int@}, {\color{green}@double@}, {\color{green}@double@}) with one {\color{red}unsafe} (narrowing) conversion from @double@ to @int@ and two safe conversions.

\subsection{Tuple Variables}

An important observation from function composition is that new variable names are not required to initialize parameters from an MRVF; however, it can be awkward to declare multiple variables of different types solely to capture a function's return values.
As a consequence, \CFA allows declaration of \emph{tuple variables} that can be initialized from an MRVF, \eg:
\begin{lstlisting}
[ int, int ] qr = div( 13, 5 ); $\C{// tuple-variable declaration and initialization}$
[ double, double ] qr = div( 13.5, 5.2 );
\end{lstlisting}
where the tuple variable-name serves the same purpose as the parameter name(s). Tuple variables can be composed of any types, except for array types, since array sizes are generally unknown. One way to access the tuple-variable components is with assignment or composition:
\begin{lstlisting}
[ q, r ] = qr; $\C{// access tuple-variable components}$
printf( "%d %d\n", qr );
\end{lstlisting}
\CFA also supports \emph{tuple indexing} to access single components of a tuple expression:
\begin{lstlisting}
[int, int] * p = &qr; $\C{// tuple pointer}$
int rem = qr.1; $\C{// access remainder}$
int quo = div( 13, 5 ).0; $\C{// access quotient}$
p->0 = 5; $\C{// change quotient}$
bar( qr.1, qr ); $\C{// pass remainder and quotient/remainder}$
rem = [div( 13, 5 ), 42].0.1; $\C{// access 2nd component of 1st component of tuple expression}$
\end{lstlisting}

\subsection{Flattening and Restructuring}

In function call contexts, tuples support implicit flattening and restructuring conversions. Tuple flattening recursively expands a tuple into the list of its basic components. Tuple structuring packages a list of expressions into a value of tuple type, \eg:
\lstDeleteShortInline@%
\par\smallskip
\begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}}
\begin{lstlisting}
int f( int, int );
int g( [int, int] );
int h( int, [int, int] );
[int, int] x;
\end{lstlisting}
&
\begin{lstlisting}
int y;
f( x ); $\C[1in]{// flatten}$
g( y, 10 ); $\C{// structure}$
h( x, y ); $\C{// flatten and structure}\CRT{}$
\end{lstlisting}
\end{tabular}
\smallskip\par\noindent
\lstMakeShortInline@%
In the call to @f@, @x@ is implicitly flattened so the components of @x@ are passed as the two arguments. In the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the parameter type of @g@. Finally, in the call to @h@, @x@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@. The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both SRVFs and MRVFs, and with any number of arguments of arbitrarily complex structure.

\subsection{Tuple Assignment}

An assignment where the left side is a tuple type is called \emph{tuple assignment}. There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{multiple} and \emph{mass assignment}, respectively.
\lstDeleteShortInline@%
\par\smallskip
\begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}}
\begin{lstlisting}
int x = 10;
double y = 3.5;
[int, double] z;
\end{lstlisting}
&
\begin{lstlisting}
z = [x, y]; $\C[1in]{// multiple assignment}$
[x, y] = z; $\C{// multiple assignment}$
z = 10; $\C{// mass assignment}$
[y, x] = 3.14; $\C{// mass assignment}\CRT{}$
\end{lstlisting}
\end{tabular}
\smallskip\par\noindent
\lstMakeShortInline@%
Both kinds of tuple assignment have parallel semantics, so that each value on the left and right sides is evaluated before any assignments occur.
As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function, \eg, @[x, y] = [y, x]@. This semantics means mass assignment differs from C cascading assignment (\eg @a = b = c@) in that conversions are applied in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment. For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, yielding @y == 3.14@ and @x == 3@; whereas C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in both @y@ and @x@. Finally, tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, just like all other assignment expressions in C. This example shows mass, multiple, and cascading assignment used in one expression:
\begin{lstlisting}
void f( [int, int] );
f( [x, y] = z = 1.5 ); $\C{// assignments in parameter list}$
\end{lstlisting}

\subsection{Member Access}

It is also possible to access multiple fields from a single expression using a \emph{member tuple expression}. The result is a single tuple-valued expression whose type is the tuple of the types of the members, \eg:
\begin{lstlisting}
struct S { int x; double y; char * z; } s;
s.[x, y, z] = 0;
\end{lstlisting}
Here, the mass assignment sets all members of @s@ to zero. Since tuple-index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (\eg rearrange, drop, and duplicate components).
\lstDeleteShortInline@%
\par\smallskip
\begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}}
\begin{lstlisting}
[int, int, long, double] x;
void f( double, long );
\end{lstlisting}
&
\begin{lstlisting}
x.[0, 1] = x.[1, 0]; $\C[1in]{// rearrange: [x.0, x.1] = [x.1, x.0]}$
f( x.[0, 3] ); $\C{// drop: f(x.0, x.3)}\CRT{}$
[int, int, int] y = x.[2, 0, 2]; // duplicate: [y.0, y.1, y.2] = [x.2, x.0, x.2]
\end{lstlisting}
\end{tabular}
\smallskip\par\noindent
\lstMakeShortInline@%
It is also possible for a member access to contain other member accesses, \eg:
\begin{lstlisting}
struct A { double i; int j; };
struct B { int * k; short l; };
struct C { int x; A y; B z; } v;
v.[x, y.[i, j], z.k]; $\C{// [v.x, [v.y.i, v.y.j], v.z.k]}$
\end{lstlisting}

\begin{comment}
\subsection{Casting}
In C, the cast operator is used to explicitly convert between types. In \CFA, the cast operator has a secondary use as type ascription. That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function:
\begin{lstlisting}
int f();     // (1)
double f();  // (2)

f();         // ambiguous - (1),(2) both equally viable
(int)f();    // choose (1)
\end{lstlisting}
Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples. Taking a look at standard C provides some guidance with respect to the way casts should work with tuples:
\begin{lstlisting}
int f();
void g();

(void)f();   // (1)
(int)g();    // (2)
\end{lstlisting}
In C, (1) is a valid cast, which calls @f@ and discards its result. On the other hand, (2) is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
Generalizing these principles, any cast wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid.

Formally, a cast to a tuple type is valid when $n \leq m$, where $n$ is the number of components in the target type $T$ and $m$ is the number of components in the source type $S$, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$. Excess elements ($S_j$ for all $j$ in $[n, m)$) are evaluated, but their values are discarded so that they are not included in the result expression. This approach follows naturally from the way that a cast to @void@ works in C.

For example, in
\begin{lstlisting}
[int, int, int] f();
[int, [int, int], int] g();

([int, double])f(); $\C{// (1)}$
([int, int, int])g(); $\C{// (2)}$
([void, [int, int]])g(); $\C{// (3)}$
([int, int, int, int])g(); $\C{// (4)}$
([int, [int, int, int]])g(); $\C{// (5)}$
\end{lstlisting}
(1) discards the last element of the return value and converts the second element to @double@. Since @int@ is effectively a 1-element tuple, (2) discards the second component of the second element of the return value of @g@. If @g@ is free of side effects, this expression is equivalent to @[(int)(g().0), (int)(g().1.0), (int)(g().2)]@. Since @void@ is effectively a 0-element tuple, (3) discards the first and third return values, which is effectively equivalent to @[(int)(g().1.0), (int)(g().1.1)]@.

Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions\footnote{User-defined conversions have been considered, but for compatibility with C and the existing use of casts as type ascription, any future design for such conversions would require more precise matching of types than allowed for function arguments and parameters.}. As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3. Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid. That is, it is invalid to cast @[int, int]@ to @[int, int, int]@.
\end{comment}

\subsection{Polymorphism}

Tuples also integrate with \CFA polymorphism as a kind of generic type. Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types, \eg:
\begin{lstlisting}
forall(otype T, dtype U) void f( T x, U * y );
f( [5, "hello"] );
\end{lstlisting}
where @[5, "hello"]@ is flattened, giving argument list @5, "hello"@, and @T@ binds to @int@ and @U@ binds to @const char@. Tuples, however, may contain polymorphic components. For example, a plus operator can be written to add two triples together.
\begin{lstlisting}
forall(otype T | { T ?+?( T, T ); }) [T, T, T] ?+?( [T, T, T] x, [T, T, T] y ) {
	return [x.0 + y.0, x.1 + y.1, x.2 + y.2];
}
[int, int, int] x;
int i1, i2, i3;
[i1, i2, i3] = x + ([10, 20, 30]);
\end{lstlisting}

Flattening and restructuring conversions are also applied to tuple types in polymorphic type assertions.
\begin{lstlisting}
int f( [int, double], double );
forall(otype T, otype U | { T f( T, U, U ); }) void g( T, U );
g( 5, 10.21 );
\end{lstlisting}
Hence, function parameter and return lists are flattened for the purposes of type unification, allowing the example to pass expression resolution. This relaxation is possible by extending the thunk scheme described by \citet{Bilson03}.
Whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function:
\begin{lstlisting}
int _thunk( int _p0, double _p1, double _p2 ) { return f( [_p0, _p1], _p2 ); }
\end{lstlisting}
so the thunk provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism. These thunks take advantage of GCC C nested-functions to produce closures that have the usual function pointer signature.

\subsection{Variadic Tuples}
\label{sec:variadic-tuples}

To define variadic functions, \CFA adds a new kind of type parameter, @ttype@ (tuple type). Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types. In a given parameter list, there must be at most one @ttype@ parameter, and it must occur last, which matches normal variadic semantics and is strongly reminiscent of \CCeleven variadic templates. As such, @ttype@ variables are also called \emph{argument packs}.

Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is via recursion. Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful. Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled. For example, a generalized @sum@ function written using @ttype@:
\begin{lstlisting}
int sum$\(_0\)$() { return 0; }
forall(ttype Params | { int sum( Params ); } ) int sum$\(_1\)$( int x, Params rest ) {
	return x + sum( rest );
}
sum( 10, 20, 30 );
\end{lstlisting}
Since @sum@\(_0\) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@. In order to call @sum@\(_1\), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list and @Params@ is bound to @[20, 30]@. The process continues until @Params@ is bound to @[]@, requiring an assertion @int sum()@, which matches @sum@\(_0\) and terminates the recursion. Effectively, this algorithm traces as @sum(10, 20, 30)@ $\rightarrow$ @10 + sum(20, 30)@ $\rightarrow$ @10 + (20 + sum(30))@ $\rightarrow$ @10 + (20 + (30 + sum()))@ $\rightarrow$ @10 + (20 + (30 + 0))@.

It is reasonable to take the @sum@ function a step further to enforce a minimum number of arguments:
\begin{lstlisting}
int sum( int x, int y ) { return x + y; }
forall(ttype Params | { int sum( int, Params ); } ) int sum( int x, int y, Params rest ) {
	return sum( x + y, rest );
}
\end{lstlisting}
One more step permits the summation of any summable type with all arguments of the same type:
\begin{lstlisting}
trait summable(otype T) {
	T ?+?( T, T );
};
forall(otype R | summable( R ) ) R sum( R x, R y ) {
	return x + y;
}
forall(otype R, ttype Params | summable(R) | { R sum(R, Params); } ) R sum(R x, R y, Params rest) {
	return sum( x + y, rest );
}
\end{lstlisting}
Unlike C variadic functions, it is unnecessary to hard-code the number and expected types of the arguments. Furthermore, this code is extendable, so any user-defined type with a @?+?@ operator can be summed. Summing arbitrary heterogeneous lists is possible with similar code by adding the appropriate type variables and addition operators, as sketched below.
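One possible sketch of that generalization (hypothetical code, not from the \CFA library) fixes a result type @R@ while letting each element have its own type:
\begin{lstlisting}
forall( otype R ) R sum( R x ) { return x; } $\C{// base case}$
forall( otype R, otype S, ttype Params | { R ?+?( R, S ); R sum( R, Params ); } )
R sum( R x, S y, Params rest ) { return sum( x + y, rest ); } $\C{// heterogeneous step}$
\end{lstlisting}
Each step folds one element of possibly different type @S@ into the running result of type @R@, provided a suitable @?+?( R, S )@ is in scope for every element type in the list.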
It is also possible to write a type-safe variadic print function to replace @printf@:
\begin{lstlisting}
struct S { int x, y; };
forall(otype T, ttype Params | { void print(T); void print(Params); }) void print(T arg, Params rest) {
	print(arg);
	print(rest);
}
void print( char * x ) { printf( "%s", x ); }
void print( int x ) { printf( "%d", x ); }
void print( S s ) { print( "{ ", s.x, ",", s.y, " }" ); }
print( "s = ", (S){ 1, 2 }, "\n" );
\end{lstlisting}
This example showcases a variadic-template-like decomposition of the provided argument list. The individual @print@ functions allow printing a single element of a given type. The polymorphic @print@ allows printing any list of types, as long as each individual type has a @print@ function. The individual print functions can be used to build up more complicated @print@ functions, such as the one for @S@, which is something that cannot be done with @printf@ in C.

Finally, it is possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions. For example, it is possible to write @new@ as a library function:
\begin{lstlisting}
struct pair( otype R, otype S );
forall( otype R, otype S ) void ?{}( pair(R, S) *, R, S );
forall( dtype T, ttype Params | sized(T) | { void ?{}( T *, Params ); } ) T * new( Params p ) {
	return ((T*)malloc( sizeof(T) )){ p }; // construct into result of malloc
}
pair( int, char ) * x = new( 42, '!' );
\end{lstlisting}
The @new@ function provides the combination of type-safe @malloc@ with a \CFA constructor call, making it impossible to forget to construct dynamically allocated objects. This function provides the type-safety of @new@ in \CC, without the need to specify the allocated type again, thanks to return-type inference.

\subsection{Implementation}

Tuples are implemented in the \CFA translator via a transformation into generic types. For each $N$, the first time an $N$-tuple is seen in a scope, a generic type with $N$ type parameters is generated, \eg:
\begin{lstlisting}
[int, int] f() {
	[double, double] x;
	[int, double, int] y;
}
\end{lstlisting}
is transformed into:
\begin{lstlisting}
// generated before the first 2-tuple
forall(dtype T0, dtype T1 | sized(T0) | sized(T1)) struct _tuple2 {
	T0 field_0;
	T1 field_1;
};
_tuple2(int, int) f() {
	_tuple2(double, double) x;
	// generated before the first 3-tuple
	forall(dtype T0, dtype T1, dtype T2 | sized(T0) | sized(T1) | sized(T2)) struct _tuple3 {
		T0 field_0;
		T1 field_1;
		T2 field_2;
	};
	_tuple3(int, double, int) y;
}
\end{lstlisting}
Tuple expressions are then converted directly into compound literals:
\begin{lstlisting}
[5, 'x', 1.24];
\end{lstlisting}
becomes:
\begin{lstlisting}
(_tuple3(int, char, double)){ 5, 'x', 1.24 };
\end{lstlisting}

\begin{comment}
Since tuples are essentially structures, tuple indexing expressions are just field accesses:
\begin{lstlisting}
void f(int, [double, char]);
[int, double] x;

x.0+x.1;
printf("%d %g\n", x);
f(x, 'z');
\end{lstlisting}
Is transformed into:
\begin{lstlisting}
void f(int, _tuple2(double, char));
_tuple2(int, double) x;

x.field_0+x.field_1;
printf("%d %g\n", x.field_0, x.field_1);
f(x.field_0, (_tuple2){ x.field_1, 'z' });
\end{lstlisting}
Note that due to flattening, @x@ used in the argument position is converted into the list of its fields. In the call to @f@, the second and third argument components are structured into a tuple argument. Similarly, tuple member expressions are recursively expanded into a list of member access expressions.
Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. Each unique expression is assigned an identifier and is guaranteed to be executed exactly once:
\begin{lstlisting}
void g(int, double);
[int, double] h();
g(h());
\end{lstlisting}
Internally, this expression is converted to two variables and an expression:
\begin{lstlisting}
void g(int, double);
[int, double] h();

_Bool _unq0_finished_ = 0;
[int, double] _unq0;
g(
	(_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
	(_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1
);
\end{lstlisting}
Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order. The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true. Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression. Tuple member expressions also take advantage of unique expressions in the case of possible impurity. Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure. This notion could be made more precise for certain intrinsic, auto-generated, and builtin functions, and could analyze function bodies when they are available to recursively detect impurity, to eliminate some unique expressions.

The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions. A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, \eg in a unique expression. The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language. However, there are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this restriction is not new.
\end{comment}

\section{Evaluation}

Though \CFA provides significant added functionality over C, these features have a low runtime penalty. In fact, \CFA's features for generic programming can enable faster runtime execution than idiomatic @void *@-based C code. This claim is demonstrated through a set of generic-code-based micro-benchmarks in C, \CFA, and \CC (see source code in Appendix~\ref{sec:BenchMarks}). Since all these languages share a subset comprising most of standard C, maximal-performance benchmarks would show little runtime variance, other than in length and clarity of source code. Instead, the presented benchmarks show the costs of idiomatic use of each language's features to examine common usage. Figure~\ref{fig:MicroBenchmark} shows the benchmark tests for a generic stack based on a singly linked-list, a generic pair-data-structure, and a variadic @print@ routine similar to that in Section~\ref{sec:variadic-tuples}. The experiments are:
\begin{enumerate}
\item N stack pushes of int, where N = 40M
\item copy int stack
\item clear int stack
\item N stack pops of int
\end{enumerate}
The structure of each implementation is: C with @void *@-based polymorphism, \CFA with the different presented features, \CC with templates, and \CC using only class inheritance for polymorphism, called \CCV.
The \CCV variant illustrates an alternative object-oriented idiom where all objects inherit from a base @object@ class, mimicking a Java-like interface; hence runtime checks are necessary to safely down-cast objects. The most notable difference among the implementations is in optimizations: \CFA and \CC inline the stack and pair elements into the corresponding list and pair nodes, while C and \CCV lack the generic-type capability to do so, and must instead store generic objects via pointers to separately-allocated objects. For the print benchmark, idiomatic printing is used: the C and \CFA variants use @stdio.h@, while the \CC and \CCV variants use @iostream@. Preliminary tests show the difference has little runtime effect. Finally, the C @rand@ function is used to generate random numbers.
\begin{figure}
\begin{lstlisting}[xleftmargin=3\parindentlnth,aboveskip=0pt,belowskip=0pt,numbers=left,numberstyle=\tt\small,numberblanklines=false]
int main( int argc, char *argv[] ) {
	int max = 0;
	stack(int) s, t;
	REPEAT_TIMED( "push_int", push( &s, 42 ); )
	TIMED( "copy_int", t = s; )
	TIMED( "clear_int", clear( &s ); )
	REPEAT_TIMED( "pop_int", max = max( max, pop( &t ) ); )
	stack(pair(_Bool, char)) s, t;
	pair(_Bool, char) max = { (_Bool)0, '\0' };
	REPEAT_TIMED( "push_pair", push( &s, (pair(_Bool, char)){ 42, 42 } ); )
	TIMED( "copy_pair", t = s; )
	TIMED( "clear_pair", clear( &s ); )
	REPEAT_TIMED( "pop_pair", max = max( max, pop( &t ) ); )
	FILE * out = fopen( "cfa-out.txt", "w" );
	REPEAT_TIMED( "print_int", print( out, 42, ":", 42, "\n" ); )
	REPEAT_TIMED( "print_pair", print( out, (pair(_Bool, char)){ 42, 42 }, ":", (pair(_Bool, char)){ 42, 42 }, "\n" ); )
	fclose(out);
}
\end{lstlisting}
\caption{Micro-Benchmark}
\label{fig:MicroBenchmark}
\end{figure}

\begin{figure}
\centering
\input{timing}
\caption{Benchmark Timing Results (smaller is better)}
\label{fig:eval}
\end{figure}

\begin{table}
\caption{Properties of benchmark code}
\label{tab:eval}
\newcommand{\CT}[1]{\multicolumn{1}{c}{#1}}
\begin{tabular}{r|rrrr}
& \CT{C} & \CT{\CFA} & \CT{\CC} & \CT{\CCV} \\ \hline
maximum memory usage (MB) & 10001 & 2501 & 2503 & 11253 \\
source code size (lines) & 301 & 224 & 188 & 437 \\
binary size (KB) & 18 & 234 & 18 & 42 \\
\end{tabular}
\end{table}

Figure~\ref{fig:eval} and Table~\ref{tab:eval} show the benchmark results. Each data point is the time for 40M function calls, repeated where appropriate. The five functions are $N$ stack pushes of randomly generated elements, deep copy of an $N$ element stack, clearing all nodes of an $N$ element stack, $N/2$ variadic @print@ calls each containing two constant strings and two stack elements \TODO{right now $N$ fresh elements: FIX}, and $N$ stack pops, keeping a running record of the maximum element to ensure that the object copies are not optimized out. These five functions are run first for a stack of integers, and second for a stack of generic pairs of a boolean and a @char@. The data shown is the median of 5 consecutive runs of each program, with an initial warm-up run omitted. All code was compiled at \texttt{-O2} by GCC or G++ 6.2.0, with all \CC code compiled as \CCfourteen. The benchmarks were run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency.
The C and \CCV variants are generally the slowest and most memory-hungry, due to their less-efficient memory layout and the pointer-indirection necessary to implement generic types in these languages; this problem is exacerbated by the second level of generic types in the pair-based benchmarks. By contrast, the \CFA and \CC variants run in roughly equivalent time for both the integer and pair of boolean and char tests, which makes sense given that an integer is actually larger than the pair in both languages. \CC performs best because it uses header-only inlined libraries (\ie no separate compilation).

\CFA and \CC have the advantage of a pre-written generic @pair@ type to reduce line count, while C and \CCV require it to be written by the programmer. The definition of @object@ and wrapper classes for @bool@, @char@, @int@, and @const char *@ is included in the line count for \CCV, which somewhat inflates it, as an actual object-oriented language would include these in the standard library, and with their omission the \CCV line count is similar to C. We justify the given line count by the fact that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or boilerplate-filled wrapper types, which may be similarly verbose. Raw line-count, however, is a fairly rough measure of code complexity; another important factor is how much type information the programmer must manually specify, especially where that information is not checked by the compiler. Such un-checked type information produces a heavier documentation burden and increased potential for runtime bugs, and is much less common in \CFA than C, with its manually specified function-pointer arguments and format codes, or \CCV, with its extensive use of un-type-checked downcasts (\eg @object@ to @integer@ when popping a stack, or @object@ to @printable@ when printing the elements of a @pair@) \TODO{Actually calculate this; I want to put a distinctive comment in the source code and grep for it}.

\section{Related Work}

\subsection{Polymorphism}

\CC is the closest language to \CFA; both are incremental extensions to C with source and runtime backwards compatibility. The fundamental difference is in their engineering approach to C compatibility and programmer expectation. While \CC provides good backwards compatibility with C, it has a steep learning curve for many of its extensions. For example, polymorphism is provided via three disjoint mechanisms: overloading, inheritance, and templates. The overloading is restricted because resolution does not use the return type, inheritance requires learning object-oriented programming and coping with a restricted nominal-inheritance hierarchy, templates cannot be separately compiled, resulting in compilation/code bloat and poor error messages, and determining how these mechanisms interact and which to use is confusing. In contrast, \CFA has a single facility for polymorphic code supporting type-safe separate-compilation of polymorphic functions and generic (opaque) types, uniformly leveraging the C procedural paradigm. The key mechanism to support separate compilation is \CFA's \emph{explicit} use of assumed properties for a type.
Until \CC concepts~\citep{C++Concepts} are standardized (anticipated for \CCtwenty), \CC provides no way to specify the requirements of a generic function in code beyond compilation errors during template expansion; furthermore, \CC concepts are restricted to template polymorphism.

Cyclone~\citep{Grossman06} also provides capabilities for polymorphic functions and existential types, similar to \CFA's @forall@ functions and generic types. Cyclone existential types can include function pointers in a construct similar to a virtual function-table, but these pointers must be explicitly initialized at some point in the code, a tedious and potentially error-prone process. Furthermore, Cyclone's polymorphic functions and types are restricted to abstraction over types with the same layout and calling convention as @void *@, \ie only pointer types and @int@. In \CFA terms, all Cyclone polymorphism must be dtype-static. While the Cyclone design provides the efficiency benefits discussed in Section~\ref{sec:generic-apps} for dtype-static polymorphism, it is more restrictive than \CFA's general model.

Objective-C~\citep{obj-c-book} is an industrially successful extension to C. However, Objective-C is a radical departure from C, using an object-oriented model with message-passing. Objective-C did not support type-checked generics until recently~\citep{xcode7}, historically using less-efficient and more error-prone runtime checking of object types. The GObject framework~\citep{GObject} also adds object-oriented programming with runtime type-checking and reference-counting garbage-collection to C; these features are more intrusive additions than those provided by \CFA, in addition to the runtime overhead of reference-counting. Vala~\citep{Vala} compiles to GObject-based C, and so adds the burden of learning a separate language syntax to the aforementioned demerits of GObject as a modernization path for existing C code-bases. Java~\citep{Java8} has provided generic types since Java~5; Java's generic types are type-checked at compilation and type-erased at runtime, similar to \CFA's approach. However, in Java, each object carries its own table of method pointers, while \CFA passes the method pointers separately to maintain a C-compatible layout. Java is also a garbage-collected, object-oriented language, with the associated resource usage and C-interoperability burdens.

D~\citep{D}, Go~\citep{Go}, and Rust~\citep{Rust} are modern, compiled languages with abstraction features similar to \CFA traits: \emph{interfaces} in D and Go, and \emph{traits} in Rust. However, each language represents a significant departure from C in terms of language model, and none has the same level of compatibility with C as \CFA. D and Go are garbage-collected languages, imposing the associated runtime overhead. The necessity of accounting for data transfer between managed runtimes and the unmanaged C runtime complicates foreign-function interfaces to C. Furthermore, while generic types and functions are available in Go, they are limited to a small fixed set provided by the compiler, with no language facility to define more. D restricts garbage collection to its own heap by default, while Rust is not garbage-collected, and thus has a lighter-weight runtime more interoperable with C. Rust also possesses much more powerful abstraction capabilities for writing generic code than Go.
On the other hand, Rust's borrow-checker provides strong safety guarantees but is complex and difficult to learn, and imposes a distinctly idiomatic programming style. \CFA, with its more modest safety features, allows direct ports of C code while maintaining the idiomatic style of the original source.

\subsection{Tuples/Variadics}

Many programming languages have some form of tuple construct and/or variadic functions, \eg SETL, C, KW-C, \CC, D, Go, Java, ML, and Scala. SETL~\citep{SETL} is a high-level mathematical programming language, with tuples being one of the primary data types. Tuples in SETL allow subscripting, dynamic expansion, and multiple assignment. C provides variadic functions through @va_list@ objects, but the programmer is responsible for managing the number of arguments and their types, so the mechanism is not type-safe. KW-C~\citep{Buhr94a}, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, taking much of its inspiration from SETL. The main contributions of that work were adding MRVFs, tuple mass and multiple assignment, and record-field access. \CCeleven introduced @std::tuple@ as a library variadic template structure. Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values. Operations include @std::get@ to extract values, @std::tie@ to create a tuple of references used for assignment, and lexicographic comparisons. \CCseventeen proposes \emph{structured bindings}~\citep{Sutter15} to eliminate pre-declaring variables and the use of @std::tie@ for binding the results. This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism. Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables. Like \CC, D provides tuples through a library variadic-template structure. Go does not have tuples but supports MRVFs. Java's variadic functions appear similar to C's but are type-safe using homogeneous arrays, which are less useful than \CFA's heterogeneously-typed variadic functions. Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML~\citep{sml} and Scala~\citep{Scala}, which decompose tuples using pattern matching.

\section{Conclusion \& Future Work}

There is ongoing work on a wide range of \CFA feature extensions, including reference types, exceptions, and concurrent programming primitives. In addition to this work, there are some interesting future directions the polymorphism design could take. Notably, \CC template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions. \CFA polymorphic functions, by contrast, use an approach that is essentially dynamic virtual dispatch. The runtime overhead of this approach is low, but not as low as \CC template functions, and it may be beneficial to provide a mechanism for particularly performance-sensitive code to close this gap. Further research is needed, but two promising approaches are allowing an annotation on polymorphic function call-sites that tells the translator to create a template-specialization of the function (provided the code is visible in the current translation unit), or placing an annotation on polymorphic function definitions that instantiates a version of the polymorphic function specialized to some set of types.
These approaches are not mutually exclusive, and would allow these optimizations to be applied only where most useful, without suffering the code bloat or loss of generality of a template-expansion approach where it is unnecessary.

In conclusion, the authors' design for generic types and tuples, unlike those available in existing work, is both reusable and type-checked, while still supporting a full range of C features, including separately-compiled modules. We have experimentally validated the performance of our design against both \CC and standard C, showing it is \TODO{shiny, cap'n}.

\begin{acks}
The authors would like to thank Magnus Madsen for valuable editorial feedback. This work is supported in part by a corporate partnership with \grantsponsor{Huawei}{Huawei Ltd.}{http://www.huawei.com}\ and the first author's \grantsponsor{NSERC-PGS}{NSERC PGS D}{http://www.nserc-crsng.gc.ca/Students-Etudiants/PG-CS/BellandPostgrad-BelletSuperieures_eng.asp} scholarship.
\end{acks}

\appendix

\section{Benchmarks}
\label{sec:BenchMarks}
TODO

\bibliographystyle{ACM-Reference-Format}
\bibliography{cfa}

\end{document}

% Local Variables: %
% tab-width: 4 %
% compile-command: "make" %
% End: %