Changeset 3895b8b5
- Timestamp: Apr 14, 2017, 12:41:06 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: 3fb7f5e, bbe856c
- Parents: 1a16e9d
- Location: doc
- Files: 1 added, 2 deleted, 4 edited
doc/bibliography/cfa.bib
(r1a16e9d → r3895b8b5)

@@ around line 4564 @@
   @manual{obj-c-book,
-      keywords     …
-      contributor  …
-      author       = {{Apple Computer Inc.}},
-      title        …
-      publisher    = {Apple Computer Inc.},
-      address      …
-      year         …
+      keywords     = {objective-c},
+      contributor  = {a3moss@uwaterloo.ca},
+      author       = {{Apple Computer}},
+      title        = {The {Objective-C} Programming Language},
+      organization = {Apple Computer Inc.},
+      address      = {Cupertino, CA},
+      year         = 2003
   }

@@ around line 4894 @@
   year = 1980,
   month = nov, volume = 15, number = 11, pages = {47-56},
-  note = {Proceedings of the ACM-SIGPLAN Symposium on the {Ada} Programming
-          Language},
+  note = {Proceedings of the ACM-SIGPLAN Symposium on the {Ada} Programming Language},
   comment = {
       The two-pass (bottom-up, then top-down) algorithm, with a proof
       …

@@ around line 5880 @@
   keywords = {Rust programming language},
   contributer = {pabuhr@plg},
+  author = {{Rust Programming Language}},
   title = {The {Rust} Programming Language},
   organization= {The Rust Project Developers},
doc/generic_types/Makefile
(r1a16e9d → r3895b8b5)

@@ around line 21 @@
   GRAPHS = ${addsuffix .tex, \
+      timing \
   }

@@ around line 45 @@
   #${DOCUMENT} : Makefile ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} ${basename ${DOCUMENT}}.tex \
-  ${basename ${DOCUMENT}}.dvi : Makefile ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} ${basename ${DOCUMENT}}.tex \
-      ../LaTeXmacros/common.tex ../LaTeXmacros/indexstyle ../bibliography/cfa.bib
+  ${basename ${DOCUMENT}}.dvi : Makefile ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} ${basename ${DOCUMENT}}.tex ../bibliography/cfa.bib
   # Conditionally create an empty *.idx (index) file for inclusion until makeindex is run.
       if [ ! -r ${basename $@}.idx ] ; then touch ${basename $@}.idx ; fi

@@ around line 66 @@
   ## Define the default recipes.

+  ${GRAPHS} : evaluation/timing.gp evaluation/timing.csv
+      gnuplot evaluation/timing.gp
+
   %.tex : %.fig
       fig2dev -L eepic $< > $@
doc/generic_types/evaluation/timing.csv
(r1a16e9d → r3895b8b5)

-  "400 million repetitions","C"," Cforall","C++","C++obj","units"
+  "400 million repetitions","C","\\CFA{}","\\CC{}","\\CC{obj}","units"
   "push\nint",3379,2616,1928,3527,"ms"
   "copy\nint",3036,2268,1564,3182,"ms"
doc/generic_types/generic_types.tex
r1a16e9d r3895b8b5 6 6 \usepackage{upquote} % switch curled `'" to straight 7 7 \usepackage{listings} % format program code 8 \usepackage{graphicx}9 8 10 9 \makeatletter … … 142 141 143 142 \begin{abstract} 144 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. Nonetheless, C, first standardized over thirty years ago, lacks many features that make programming in more modern languages safer and more productive. The goal of the \CFA project is to create an extension of C that provides modern safety and productivity features while still ensuring strong backwards compatibility with C and its programmers. Prior projects have attempted similar goals but failed to honour C programming-style; for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers. Specifically, \CFA is designed to have an orthogonal feature-set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code-bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and engineers. This paper describes two \CFA extensions, generic and tuple types, details how their design avoids shortcomings of similar features in C and other C-like languages, and presents experimental results validating the design. 143 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. 144 This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. 145 Nonetheless, C, first standardized over thirty years ago, lacks many features that make programming in more modern languages safer and more productive. 146 The goal of the \CFA project is to create an extension of C that provides modern safety and productivity features while still ensuring strong backwards compatibility with C and its programmers. 147 Prior projects have attempted similar goals but failed to honour C programming-style; for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers. 148 Specifically, \CFA is designed to have an orthogonal feature-set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code-bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and engineers. 149 This paper describes two \CFA extensions, generic and tuple types, details how their design avoids shortcomings of similar features in C and other C-like languages, and presents experimental results validating the design. 145 150 \end{abstract} 146 151 … … 151 156 \section{Introduction and Background} 152 157 153 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. 
This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. 154 The \citet{TIOBE} ranks the top 5 most popular programming languages as: Java 16\%, \Textbf{C 7\%}, \Textbf{\CC 5\%}, \CS 4\%, Python 4\% = 36\%, where the next 50 languages are less than 3\% each with a long tail. The top 3 rankings over the past 30 years are: 158 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. 159 This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. 160 The \citet{TIOBE} ranks the top 5 most popular programming languages as: Java 16\%, \Textbf{C 7\%}, \Textbf{\CC 5\%}, \CS 4\%, Python 4\% = 36\%, where the next 50 languages are less than 3\% each with a long tail. 161 The top 3 rankings over the past 30 years are: 155 162 \lstDeleteShortInline@% 156 163 \begin{center} … … 171 178 Nonetheless, C, first standardized over thirty years ago, lacks many features that make programming in more modern languages safer and more productive. 172 179 173 \CFA (pronounced ``C-for-all'', and written \CFA or Cforall) is an evolutionary extension of the C programming language that aims to add modern language features to C while maintaining both source compatibility with C and a familiar programming model for programmers. The four key design goals for \CFA~\citep{Bilson03} are: 180 \CFA (pronounced ``C-for-all'', and written \CFA or Cforall) is an evolutionary extension of the C programming language that aims to add modern language features to C while maintaining both source compatibility with C and a familiar programming model for programmers. 181 The four key design goals for \CFA~\citep{Bilson03} are: 174 182 (1) The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler; 175 183 (2) Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler; … … 179 187 Unfortunately, \CC is actively diverging from C, so incremental additions require significant effort and training, coupled with multiple legacy design-choices that cannot be updated. 180 188 181 \CFA is currently implemented as a source-to-source translator from \CFA to the GCC-dialect of C~\citep{GCCExtensions}, allowing it to leverage the portability and code optimizations provided by GCC, meeting goals (1)-(3). Ultimately, a compiler is necessary for advanced features and optimal performance. 182 183 This paper identifies shortcomings in existing approaches to generic and variadic data types in C-like languages and presents a design for generic and variadic types avoiding those shortcomings. Specifically, the solution is both reusable and type-checked, as well as conforming to the design goals of \CFA with ergonomic use of existing C abstractions. The new constructs are empirically compared with both standard C and \CC; the results show the new design is comparable in performance. 189 \CFA is currently implemented as a source-to-source translator from \CFA to the GCC-dialect of C~\citep{GCCExtensions}, allowing it to leverage the portability and code optimizations provided by GCC, meeting goals (1)-(3). 190 Ultimately, a compiler is necessary for advanced features and optimal performance. 
191 192 This paper identifies shortcomings in existing approaches to generic and variadic data types in C-like languages and presents a design for generic and variadic types avoiding those shortcomings. 193 Specifically, the solution is both reusable and type-checked, as well as conforming to the design goals of \CFA with ergonomic use of existing C abstractions. 194 The new constructs are empirically compared with both standard C and \CC; the results show the new design is comparable in performance. 184 195 185 196 … … 187 198 \label{sec:poly-fns} 188 199 189 \CFA's polymorphism was originally formalized by \citet{Ditchfield92}, and first implemented by \citet{Bilson03}. The signature feature of \CFA is parametric-polymorphic functions where functions are generalized using a @forall@ clause (giving the language its name): 200 \CFA's polymorphism was originally formalized by \citet{Ditchfield92}, and first implemented by \citet{Bilson03}. 201 The signature feature of \CFA is parametric-polymorphic functions where functions are generalized using a @forall@ clause (giving the language its name): 190 202 \begin{lstlisting} 191 203 `forall( otype T )` T identity( T val ) { return val; } 192 204 int forty_two = identity( 42 ); $\C{// T is bound to int, forty\_two == 42}$ 193 205 \end{lstlisting} 194 The @identity@ function above can be applied to any complete \emph{object type} (or @otype@). The type variable @T@ is transformed into a set of additional implicit parameters encoding sufficient information about @T@ to create and return a variable of that type. The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor and destructor. If this extra information is not needed, \eg for a pointer, the type parameter can be declared as a \emph{data type} (or @dtype@). 195 196 In \CFA, the polymorphism runtime-cost is spread over each polymorphic call, due to passing more arguments to polymorphic functions; preliminary experiments show this overhead is similar to \CC virtual-function calls. An advantage of this design is that, unlike \CC template-functions, \CFA polymorphic-functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat. 197 198 Since bare polymorphic-types provide only a narrow set of available operations, \CFA provides a \emph{type assertion} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type-variable. For example, the function @twice@ can be defined using the \CFA syntax for operator overloading: 206 The @identity@ function above can be applied to any complete \emph{object type} (or @otype@). 207 The type variable @T@ is transformed into a set of additional implicit parameters encoding sufficient information about @T@ to create and return a variable of that type. 208 The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor and destructor. 209 If this extra information is not needed, \eg for a pointer, the type parameter can be declared as a \emph{data type} (or @dtype@). 210 211 In \CFA, the polymorphism runtime-cost is spread over each polymorphic call, due to passing more arguments to polymorphic functions; preliminary experiments show this overhead is similar to \CC virtual-function calls. 
212 An advantage of this design is that, unlike \CC template-functions, \CFA polymorphic-functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat. 213 214 Since bare polymorphic-types provide only a narrow set of available operations, \CFA provides a \emph{type assertion} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type-variable. 215 For example, the function @twice@ can be defined using the \CFA syntax for operator overloading: 199 216 \begin{lstlisting} 200 217 forall( otype T `| { T ?+?(T, T); }` ) T twice( T x ) { return x + x; } $\C{// ? denotes operands}$ 201 218 int val = twice( twice( 3.7 ) ); 202 219 \end{lstlisting} 203 which works for any type @T@ with a matching addition operator. The polymorphism is achieved by creating a wrapper function for calling @+@ with @T@ bound to @double@, then passing this function to the first call of @twice@. There is now the option of using the same @twice@ and converting the result to @int@ on assignment, or creating another @twice@ with type parameter @T@ bound to @int@ because \CFA uses the return type, as in~\cite{Ada}, in its type analysis. 204 The first approach has a late conversion from @double@ to @int@ on the final assignment, while the second has an eager conversion to @int@. \CFA minimizes the number of conversions and their potential to lose information, so it selects the first approach, which corresponds with C-programmer intuition. 220 which works for any type @T@ with a matching addition operator. 221 The polymorphism is achieved by creating a wrapper function for calling @+@ with @T@ bound to @double@, then passing this function to the first call of @twice@. 222 There is now the option of using the same @twice@ and converting the result to @int@ on assignment, or creating another @twice@ with type parameter @T@ bound to @int@ because \CFA uses the return type, as in~\cite{Ada}, in its type analysis. 223 The first approach has a late conversion from @double@ to @int@ on the final assignment, while the second has an eager conversion to @int@. 224 \CFA minimizes the number of conversions and their potential to lose information, so it selects the first approach, which corresponds with C-programmer intuition. 205 225 206 226 Crucial to the design of a new programming language are the libraries to access thousands of external software features. … … 242 262 where the return type supplies the type/size of the allocation, which is impossible in most type systems. 243 263 244 Call-site inferencing and nested functions provide a localized form of inheritance. For example, the \CFA @qsort@ only sorts in ascending order using @<@. However, it is trivial to locally change this behaviour: 264 Call-site inferencing and nested functions provide a localized form of inheritance. 265 For example, the \CFA @qsort@ only sorts in ascending order using @<@. 266 However, it is trivial to locally change this behaviour: 245 267 \begin{lstlisting} 246 268 forall( otype T | { int ?<?( T, T ); } ) void qsort( const T * arr, size_t size ) { /* use C qsort */ } … … 307 329 \end{lstlisting} 308 330 Given the information provided for an @otype@, variables of polymorphic type can be treated as if they were a complete type: stack-allocatable, default or copy-initialized, assigned, and deleted. 
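To make the lowering concrete: the commented-out example removed just below sketched the generated code for an absolute-value function over an @otype M@. A lightly cleaned-up C version of that sketch follows; names such as @_sizeof_M@ and @_ctor_M_zero@ are illustrative, not the translator's actual mangling, and the constructor/destructor bookkeeping is simplified.

    #include <stddef.h>
    #include <alloca.h>     // stack allocation, GCC dialect (the translator's target)

    // Sketch of generated C for:  forall( otype M | { int ?<?( M, M ); M -?( M ); } ) M abs( M m )
    // An otype parameter becomes explicit size/alignment arguments plus pointers to the
    // type's lifetime functions and to the asserted operators.
    void _abs_M( size_t _sizeof_M, size_t _alignof_M,   // sizeof(M), _Alignof(M)
            void (*_ctor_M)( void * ),                  // default constructor (unused here)
            void (*_copy_M)( void *, void * ),          // copy constructor
            void (*_assign_M)( void *, void * ),        // assignment (unused here)
            void (*_dtor_M)( void * ),                  // destructor
            int (*_lt_M)( void *, void * ),             // int ?<?( M, M )
            void (*_neg_M)( void *, void * ),           // M -?( M )
            void (*_ctor_M_zero)( void *, int ),        // construct an M from the zero constant
            void * m, void * _rtn ) {                   // parameter and return passed as void *
        (void)_alignof_M; (void)_ctor_M; (void)_assign_M;
        void * zero = alloca( _sizeof_M );              // M zero = { 0 };
        _ctor_M_zero( zero, 0 );
        if ( _lt_M( m, zero ) ) {                       // return m < zero ? -m : m;
            void * tmp = alloca( _sizeof_M );
            _neg_M( m, tmp );                           // -m into a temporary
            _copy_M( _rtn, tmp );                       // copy-construct the return value
            _dtor_M( tmp );
        } else {
            _copy_M( _rtn, m );
        }
        _dtor_M( zero );                                // destroy the temporary zero
    }

Each call site supplies @sizeof@/@_Alignof@ of the concrete type plus pointers to the matching operations, which is exactly the per-call cost the surrounding text compares to \CC virtual dispatch.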
309 % As an example, the @sum@ function produces generated code something like the following (simplified for clarity and brevity)\TODO{fix example, maybe elide, it's likely too long with the more complicated function}:310 % \begin{lstlisting}311 % void abs( size_t _sizeof_M, size_t _alignof_M,312 % void (*_ctor_M)(void*), void (*_copy_M)(void*, void*),313 % void (*_assign_M)(void*, void*), void (*_dtor_M)(void*),314 % _Bool (*_lt_M)(void*, void*), void (*_neg_M)(void*, void*),315 % void (*_ctor_M_zero)(void*, int),316 % void* m, void* _rtn ) { $\C{// polymorphic parameter and return passed as void*}$317 % $\C{// M zero = { 0 };}$318 % void* zero = alloca(_sizeof_M); $\C{// stack allocate zero temporary}$319 % _ctor_M_zero(zero, 0); $\C{// initialize using zero\_t constructor}$320 % $\C{// return m < zero ? -m : m;}$321 % void *_tmp = alloca(_sizeof_M);322 % _copy_M( _rtn, $\C{// copy-initialize return value}$323 % _lt_M( m, zero ) ? $\C{// check condition}$324 % (_neg_M(m, _tmp), _tmp) : $\C{// negate m}$325 % m);326 % _dtor_M(_tmp); _dtor_M(zero); $\C{// destroy temporaries}$327 % }328 % \end{lstlisting}329 331 330 332 In summation, the \CFA type-system uses \emph{nominal typing} for concrete types, matching with the C type-system, and \emph{structural typing} for polymorphic types. … … 364 366 \section{Generic Types} 365 367 366 One of the known shortcomings of standard C is that it does not provide reusable type-safe abstractions for generic data structures and algorithms. Broadly speaking, there are three approaches to create data structures in C. One approach is to write bespoke data structures for each context in which they are needed. While this approach is flexible and supports integration with the C type-checker and tooling, it is also tedious and error-prone, especially for more complex data structures. 367 A second approach is to use @void *@--based polymorphism, \eg the C standard-library functions @bsearch@ and @qsort@, and does allow the use of common code for common functionality. However, basing all polymorphism on @void *@ eliminates the type-checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that would not otherwise be needed. 368 A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type-checked, but errors may be difficult to interpret. Furthermore, writing and using preprocessor macros can be unnatural and inflexible. 369 370 Other languages use \emph{generic types}, \eg \CC and Java, to produce type-safe abstract data-types. \CFA also implements generic types that integrate efficiently and naturally with the existing polymorphic functions, while retaining backwards compatibility with C and providing separate compilation. However, for known concrete parameters, the generic type can be inlined, like \CC templates. 368 One of the known shortcomings of standard C is that it does not provide reusable type-safe abstractions for generic data structures and algorithms. 369 Broadly speaking, there are three approaches to create data structures in C. 370 One approach is to write bespoke data structures for each context in which they are needed. 371 While this approach is flexible and supports integration with the C type-checker and tooling, it is also tedious and error-prone, especially for more complex data structures. 
372 A second approach is to use @void *@--based polymorphism, \eg the C standard-library functions @bsearch@ and @qsort@, and does allow the use of common code for common functionality. 373 However, basing all polymorphism on @void *@ eliminates the type-checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that would not otherwise be needed. 374 A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type-checked, but errors may be difficult to interpret. 375 Furthermore, writing and using preprocessor macros can be unnatural and inflexible. 376 377 Other languages use \emph{generic types}, \eg \CC and Java, to produce type-safe abstract data-types. 378 \CFA also implements generic types that integrate efficiently and naturally with the existing polymorphic functions, while retaining backwards compatibility with C and providing separate compilation. 379 However, for known concrete parameters, the generic type can be inlined, like \CC templates. 371 380 372 381 A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration, and instantiated using a parenthesized list of types after the type name: … … 387 396 \end{lstlisting} 388 397 389 \CFA classifies generic types as either \emph{concrete} or \emph{dynamic}. Concrete have a fixed memory layout regardless of type parameters, while dynamic vary in memory layout depending on their type parameters. A type may have polymorphic parameters but still be concrete, called \emph{dtype-static}. Polymorphic pointers are an example of dtype-static types, \eg @forall(dtype T) T *@ is a polymorphic type, but for any @T@, @T *@ is a fixed-sized pointer, and therefore, can be represented by a @void *@ in code generation. 390 391 \CFA generic types also allow checked argument-constraints. For example, the following declaration of a sorted set-type ensures the set key supports equality and relational comparison: 398 \CFA classifies generic types as either \emph{concrete} or \emph{dynamic}. 399 Concrete have a fixed memory layout regardless of type parameters, while dynamic vary in memory layout depending on their type parameters. 400 A type may have polymorphic parameters but still be concrete, called \emph{dtype-static}. 401 Polymorphic pointers are an example of dtype-static types, \eg @forall(dtype T) T *@ is a polymorphic type, but for any @T@, @T *@ is a fixed-sized pointer, and therefore, can be represented by a @void *@ in code generation. 402 403 \CFA generic types also allow checked argument-constraints. 404 For example, the following declaration of a sorted set-type ensures the set key supports equality and relational comparison: 392 405 \begin{lstlisting} 393 406 forall( otype Key | { _Bool ?==?(Key, Key); _Bool ?<?(Key, Key); } ) struct sorted_set; … … 397 410 \subsection{Concrete Generic-Types} 398 411 399 The \CFA translator template-expands concrete generic-types into new structure types, affording maximal inlining. To enable inter-operation among equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate. For example, a function declaration that accepts or returns a concrete generic-type produces a declaration for the instantiated struct in the same scope, which all callers may reuse. 
For example, the concrete instantiation for @pair( const char *, int )@ is: 412 The \CFA translator template-expands concrete generic-types into new structure types, affording maximal inlining. 413 To enable inter-operation among equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate. 414 For example, a function declaration that accepts or returns a concrete generic-type produces a declaration for the instantiated struct in the same scope, which all callers may reuse. 415 For example, the concrete instantiation for @pair( const char *, int )@ is: 400 416 \begin{lstlisting} 401 417 struct _pair_conc1 { … … 405 421 \end{lstlisting} 406 422 407 A concrete generic-type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations. In the above example, the @pair( F *, T * )@ parameter to @value_p@ is such a type; its expansion is below and it is used as the type of the variables @q@ and @r@ as well, with casts for member access where appropriate: 423 A concrete generic-type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations. 424 In the above example, the @pair( F *, T * )@ parameter to @value_p@ is such a type; its expansion is below and it is used as the type of the variables @q@ and @r@ as well, with casts for member access where appropriate: 408 425 \begin{lstlisting} 409 426 struct _pair_conc0 { … … 428 445 The offset array @_offsetof_pair@ is generated at the call site as @size_t _offsetof_pair[] = { offsetof(_pair_conc1, first), offsetof(_pair_conc1, second) }@. 429 446 430 In some cases the offset arrays cannot be statically generated. For instance, modularity is generally provided in C by including an opaque forward-declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately-compiled @.c@ file. 447 In some cases the offset arrays cannot be statically generated. 448 For instance, modularity is generally provided in C by including an opaque forward-declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately-compiled @.c@ file. 431 449 \CFA supports this pattern for generic types, but the caller does not know the actual layout or size of the dynamic generic-type, and only holds it by a pointer. 432 450 The \CFA translator automatically generates \emph{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed into a function from that function's caller. 433 451 These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all @sized@ parameters to the generic structure (un@sized@ parameters are forbidden from being used in a context that affects layout). 434 452 Results of these layout functions are cached so that they are only computed once per type per function. %, as in the example below for @pair@. 
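The commented-out @pair@ example removed just below shows the shape of such a layout function; a lightly cleaned-up C sketch of it (illustrative names, assuming power-of-two alignments) is:

    #include <stddef.h>

    // Layout function for forall( otype R, otype S ) struct pair { R first; S second; };
    // Sizes/alignments of the sized parameters R and S are passed in; the member
    // offsets are written into a caller-allocated array.
    static inline void _layoutof_pair( size_t * _sizeof_pair, size_t * _alignof_pair,
            size_t * _offsetof_pair,
            size_t _sizeof_R, size_t _alignof_R, size_t _sizeof_S, size_t _alignof_S ) {
        *_sizeof_pair = 0;                              // default values
        *_alignof_pair = 1;

        // offset, size, and alignment of the first member
        _offsetof_pair[0] = *_sizeof_pair;
        *_sizeof_pair += _sizeof_R;
        if ( *_alignof_pair < _alignof_R ) *_alignof_pair = _alignof_R;

        // padding, then offset, size, and alignment of the second member
        if ( *_sizeof_pair & (_alignof_S - 1) )
            *_sizeof_pair += _alignof_S - ( *_sizeof_pair & (_alignof_S - 1) );
        _offsetof_pair[1] = *_sizeof_pair;
        *_sizeof_pair += _sizeof_S;
        if ( *_alignof_pair < _alignof_S ) *_alignof_pair = _alignof_S;

        // pad the total size to the struct alignment
        if ( *_sizeof_pair & (*_alignof_pair - 1) )
            *_sizeof_pair += *_alignof_pair - ( *_sizeof_pair & (*_alignof_pair - 1) );
    }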
435 % \begin{lstlisting}436 % static inline void _layoutof_pair(size_t* _szeof_pair, size_t* _alignof_pair, size_t* _offsetof_pair,437 % size_t _szeof_R, size_t _alignof_R, size_t _szeof_S, size_t _alignof_S) {438 % *_szeof_pair = 0; // default values439 % *_alignof_pair = 1;440 %441 % // add offset, size, and alignment of first field442 % _offsetof_pair[0] = *_szeof_pair;443 % *_szeof_pair += _szeof_R;444 % if ( *_alignof_pair < _alignof_R ) *_alignof_pair = _alignof_R;445 %446 % // padding, offset, size, and alignment of second field447 % if ( *_szeof_pair & (_alignof_S - 1) )448 % *_szeof_pair += (_alignof_S - ( *_szeof_pair & (_alignof_S - 1) ) );449 % _offsetof_pair[1] = *_szeof_pair;450 % *_szeof_pair += _szeof_S;451 % if ( *_alignof_pair < _alignof_S ) *_alignof_pair = _alignof_S;452 %453 % // pad to struct alignment454 % if ( *_szeof_pair & (*_alignof_pair - 1) )455 % *_szeof_pair += ( *_alignof_pair - ( *_szeof_pair & (*_alignof_pair - 1) ) );456 % }457 % \end{lstlisting}458 453 Layout functions also allow generic types to be used in a function definition without reflecting them in the function signature. 459 454 For instance, a function that strips duplicate values from an unsorted @vector(T)@ would likely have a pointer to the vector as its only explicit parameter, but use some sort of @set(T)@ internally to test for duplicate values. … … 468 463 \label{sec:generic-apps} 469 464 470 The reuse of dtype-static structure instantiations enables useful programming patterns at zero runtime cost. The most important such pattern is using @forall(dtype T) T *@ as a type-checked replacement for @void *@, \eg creating a lexicographic comparison for pairs of pointers used by @bsearch@ or @qsort@: 465 The reuse of dtype-static structure instantiations enables useful programming patterns at zero runtime cost. 466 The most important such pattern is using @forall(dtype T) T *@ as a type-checked replacement for @void *@, \eg creating a lexicographic comparison for pairs of pointers used by @bsearch@ or @qsort@: 471 467 \begin{lstlisting} 472 468 forall(dtype T) int lexcmp( pair( T *, T * ) * a, pair( T *, T * ) * b, int (* cmp)( T *, T * ) ) { … … 670 666 x.[0, 1] = x.[1, 0]; $\C[1in]{// rearrange: [x.0, x.1] = [x.1, x.0]}$ 671 667 f( x.[0, 3] ); $\C{// drop: f(x.0, x.3)}\CRT{}$ 672 [int, int, int] y = x.[2, 0, 2]; // duplicate: [y.0, y.1, y.2] = [x.2, x.0. x.2] 668 [int, int, int] y = x.[2, 0, 2]; // duplicate: [y.0, y.1, y.2] = [x.2, x.0. 669 x.2] 673 670 \end{lstlisting} 674 671 \end{tabular} … … 687 684 \subsection{Casting} 688 685 689 In C, the cast operator is used to explicitly convert between types. In \CFA, the cast operator has a secondary use as type ascription. That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function: 686 In C, the cast operator is used to explicitly convert between types. 687 In \CFA, the cast operator has a secondary use as type ascription. 688 That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function: 690 689 \begin{lstlisting} 691 690 int f(); // (1) … … 696 695 \end{lstlisting} 697 696 698 Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples. 
Taking a look at standard C provides some guidance with respect to the way casts should work with tuples: 697 Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples. 698 Taking a look at standard C provides some guidance with respect to the way casts should work with tuples: 699 699 \begin{lstlisting} 700 700 int f(); … … 704 704 (int)g(); // (2) 705 705 \end{lstlisting} 706 In C, (1) is a valid cast, which calls @f@ and discards its result. On the other hand, (2) is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical. Generalizing these principles, any cast wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid. 707 708 Formally, a cast to tuple type is valid when $T_n \leq S_m$, where $T_n$ is the number of components in the target type and $S_m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$. Excess elements ($S_j$ for all $j$ in $[n, m)$) are evaluated, but their values are discarded so that they are not included in the result expression. This approach follows naturally from the way that a cast to @void@ works in C. 706 In C, (1) is a valid cast, which calls @f@ and discards its result. 707 On the other hand, (2) is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical. 708 Generalizing these principles, any cast wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid. 709 710 Formally, a cast to tuple type is valid when $T_n \leq S_m$, where $T_n$ is the number of components in the target type and $S_m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$. 711 Excess elements ($S_j$ for all $j$ in $[n, m)$) are evaluated, but their values are discarded so that they are not included in the result expression. 712 This approach follows naturally from the way that a cast to @void@ works in C. 709 713 710 714 For example, in … … 720 724 \end{lstlisting} 721 725 722 (1) discards the last element of the return value and converts the second element to @double@. Since @int@ is effectively a 1-element tuple, (2) discards the second component of the second element of the return value of @g@. If @g@ is free of side effects, this expression is equivalent to @[(int)(g().0), (int)(g().1.0), (int)(g().2)]@. 726 (1) discards the last element of the return value and converts the second element to @double@. 727 Since @int@ is effectively a 1-element tuple, (2) discards the second component of the second element of the return value of @g@. 728 If @g@ is free of side effects, this expression is equivalent to @[(int)(g().0), (int)(g().1.0), (int)(g().2)]@. 723 729 Since @void@ is effectively a 0-element tuple, (3) discards the first and third return values, which is effectively equivalent to @[(int)(g().1.0), (int)(g().1.1)]@). 
724 730 725 Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions\footnote{User-defined conversions have been considered, but for compatibility with C and the existing use of casts as type ascription, any future design for such conversions would require more precise matching of types than allowed for function arguments and parameters.}. As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3. Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid. That is, it is invalid to cast @[int, int]@ to @[int, int, int]@. 731 Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions\footnote{User-defined conversions have been considered, but for compatibility with C and the existing use of casts as type ascription, any future design for such conversions would require more precise matching of types than allowed for function arguments and parameters.}. 732 As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3. 733 Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid. 734 That is, it is invalid to cast @[int, int]@ to @[int, int, int]@. 726 735 \end{comment} 727 736 … … 843 852 This function provides the type-safety of @new@ in \CC, without the need to specify the allocated type again, thanks to return-type inference. 844 853 845 % In the call to @new@, @pair(double, char)@ is selected to match @T@, and @Params@ is expanded to match @[double, char]@. The constructor (1) may be specialized to satisfy the assertion for a constructor with an interface compatible with @void ?{}(pair(int, char) *, int, char)@.846 847 854 848 855 \subsection{Implementation} 849 856 850 857 Tuples are implemented in the \CFA translator via a transformation into generic types. 851 For each $N$, the first time an $N$-tuple is seen in a scope a generic type with $N$ type parameters is generated .\eg:858 For each $N$, the first time an $N$-tuple is seen in a scope a generic type with $N$ type parameters is generated, \eg: 852 859 \begin{lstlisting} 853 860 [int, int] f() { … … 902 909 f(x.field_0, (_tuple2){ x.field_1, 'z' }); 903 910 \end{lstlisting} 904 Note that due to flattening, @x@ used in the argument position is converted into the list of its fields. In the call to @f@, the second and third argument components are structured into a tuple argument. Similarly, tuple member expressions are recursively expanded into a list of member access expressions. 905 906 Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. Each unique expression is assigned an identifier and is guaranteed to be executed exactly once: 911 Note that due to flattening, @x@ used in the argument position is converted into the list of its fields. 912 In the call to @f@, the second and third argument components are structured into a tuple argument. 913 Similarly, tuple member expressions are recursively expanded into a list of member access expressions. 914 915 Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. 
916 Each unique expression is assigned an identifier and is guaranteed to be executed exactly once: 907 917 \begin{lstlisting} 908 918 void g(int, double); … … 922 932 ); 923 933 \end{lstlisting} 924 Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order. The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true. Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression. Tuple member expressions also take advantage of unique expressions in the case of possible impurity. 925 926 Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure. This notion could be made more precise for certain intrinsic, auto-generated, and builtin functions, and could analyze function bodies when they are available to recursively detect impurity, to eliminate some unique expressions. 927 928 The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions. A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, \eg in a unique expression. The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language. However, there are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this restriction is not new. 934 Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order. 935 The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true. 936 Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression. 937 Tuple member expressions also take advantage of unique expressions in the case of possible impurity. 938 939 Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure. 940 This notion could be made more precise for certain intrinsic, auto-generated, and builtin functions, and could analyze function bodies when they are available to recursively detect impurity, to eliminate some unique expressions. 941 942 The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions. 943 A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, \eg in a unique expression. 944 The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language. 945 However, there are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this restriction is not new. 
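One way the evaluate-exactly-once behaviour described above can be realized with the GNU C extensions the translator already depends on is a guarded statement expression. The following sketch uses illustrative names (@_unq0@, @_unq0_done@) rather than actual translator output:

    #include <stdio.h>

    int h( void ) { printf( "h called\n" ); return 42; }       // side-effecting expression
    void g( int x, int y ) { printf( "%d %d\n", x, y ); }

    int main( void ) {
        int _unq0 = 0;                  // cached result of the unique expression
        _Bool _unq0_done = 0;           // has it been evaluated yet?
        // h() appears twice after flattening, but runs once regardless of the
        // (unspecified) argument evaluation order; ({ ... }) is a GNU C
        // statement expression.
        g(
            ({ if ( ! _unq0_done ) { _unq0 = h(); _unq0_done = 1; } _unq0; }),
            ({ if ( ! _unq0_done ) { _unq0 = h(); _unq0_done = 1; } _unq0; })
        );
        return 0;
    }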
929 946 \end{comment} 930 947 … … 932 949 \section{Evaluation} 933 950 934 Though \CFA provides significant added functionality over C, these added features do not impose a significant runtime penalty. In fact, \CFA's features for generic programming can enable runtime execution that is faster than idiomatic @void*@-based C code. We have produced a set of generic-code-based micro-benchmarks to demonstrate these claims, source code for which may be found in \TODO{Appendix A}. These benchmarks test a generic stack based on a singly-linked-list, a generic pair data structure, and a variadic @print@ routine similar to that shown in Section~\ref{sec:variadic-tuples}. Each benchmark has been implemented in C with @void*@-based polymorphism, \CFA with the features discussed in this paper, \CC with templates, and \CC using only class inheritance for polymorphism (``\CCV''). The intention of these benchmarks is to represent the costs of idiomatic use of each language's features, rather than the strict maximal performance obtainable by code written in each language -- as all the languages considered have a shared subset comprising most of standard C, a set of maximal-performance benchmarks would presumably show very little runtime variance, and would differ primarily in length and clarity of source code. Particularly, in the \CCV variant of the benchmark all objects inherit from a base @object@ class, explicitly implement interfaces defined as abstract base classes, and must do runtime checks in generic code to safely down-cast objects; this is not an idiomatic programming pattern for \CC, but is meant to represent the design of a simple object-oriented programming language. The most notable difference between the implementations is memory layout; the \CFA and \CC variants inline the stack and pair elements into their corresponding list and pair nodes, while the C and \CCV versions are forced by their lack of a generic type capability to store generic objects via pointers to separately-allocated objects. For more idiomatic language use, the C and \CFA variants used \texttt{cstdio.h} for printing, while the \CC and \CCV variants used \texttt{iostream}, though preliminary experiments showed this distinction to make little runtime difference. For consistency in testing, all implementations used the C @rand()@ function for random number generation. 951 Though \CFA provides significant added functionality over C, these added features have a low runtime penalty. 952 In fact, \CFA's features for generic programming can enable faster runtime execution than idiomatic @void *@-based C code. 953 This claim is demonstrated through a set of generic-code-based micro-benchmarks in C, \CFA, and \CC (see source code in Appendix~\ref{sec:BenchMarks}). 954 Since all these languages share a subset comprising most of standard C, maximal-performance benchmarks would show little runtime variance, other than in length and clarity of source code. 955 Instead, the presented benchmarks show the costs of idiomatic use of each language's features to examine common usage. 956 The benchmarks test a generic stack based on a singly linked-list, a generic pair-data-structure, and a variadic @print@ routine similar to that in Section~\ref{sec:variadic-tuples}. 957 The structure of each implemented is: C with @void *@-based polymorphism, \CFA with the different presented features, \CC with templates, and \CC using only class inheritance for polymorphism, called \CCV. 
958 The \CCV variant illustrates an alternative object-oriented idiom where all objects inherit from a base @object@ class, mimicking a Java-like interface; 959 hence runtime checks are necessary to safely down-cast objects. 960 The most notable difference among the implementations is in optimizations: \CFA and \CC inline the stack and pair elements into corresponding list and pair nodes, while the C and \CCV lack generic-type capability {\color{red}(AWKWARD) to store generic objects via pointers to separately-allocated objects}. 961 For the print benchmark, idiomatic printing is used: the C and \CFA variants used @cstdio.h@, while the \CC and \CCV variants used @iostream@. 962 Preliminary tests show the difference has little runtime effect. 963 Finally, the C @rand@ function is used generate random numbers. 935 964 936 965 \begin{figure} 937 966 \centering 938 \in cludegraphics{evaluation/timing}939 \caption{ Timing Results for benchmarks}967 \input{timing} 968 \caption{Benchmark Timing Results (smaller is better)} 940 969 \label{fig:eval} 941 970 \end{figure} … … 944 973 \caption{Properties of benchmark code} 945 974 \label{tab:eval} 946 \begin{tabular}{lrrrr} 947 & C & \CFA & \CC & \CCV \\ \hline 948 maximum memory usage (MB) & 10001 & 2501 & 2503 & 11253 \\ 949 source code size (lines) & 301 & 224 & 188 & 437 \\ 950 binary size (KB) & 18.46 & 234.22 & 18.42 & 42.10 \\ 975 \newcommand{\CT}[1]{\multicolumn{1}{c}{#1}} 976 \begin{tabular}{r|rrrr} 977 & \CT{C} & \CT{\CFA} & \CT{\CC} & \CT{\CCV} \\ \hline 978 maximum memory usage (MB) & 10001 & 2501 & 2503 & 11253 \\ 979 source code size (lines) & 301 & 224 & 188 & 437 \\ 980 binary size (KB) & 18 & 234 & 18 & 42 \\ 951 981 \end{tabular} 952 982 \end{table} 953 983 954 The results of running the benchmarks can be seen in Figure~\ref{fig:eval} and Table~\ref{tab:eval}; each result records the time taken by a single function call, repeated $N = 40,000,000$ times where appropriate. The five functions are $N$ stack pushes of randomly generated elements, deep copy of an $N$ element stack, clearing all nodes of an $N$ element stack, $N/2$ variadic @print@ calls each containing two constant strings and two stack elements \TODO{right now $N$ fresh elements: FIX}, and $N$ stack pops, keeping a running record of the maximum element to ensure that the object copies are not optimized out. These five functions are run first for a stack of integers, and second for a stack of generic pairs of a boolean and a @char@. \TODO{} The data shown is the median of 5 consecutive runs of each program, with an initial warm-up run omitted. All code was compiled at \texttt{-O2} by GCC or G++ 6.2.0, with all \CC code compiled as \CCfourteen. The benchmarks were run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency. The C and \CCV variants are generally the slowest and most memory-hungry, due to their less-efficient memory layout and the pointer-indirection necessary to implement generic types in these languages; this problem is exacerbated by the second level of generic types in the pair-based benchmarks. By contrast, the \CFA and \CC variants run in roughly equivalent time for both the integer and pair of boolean and char tests, which makes sense given that an integer is actually larger than the pair in both languages. 
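The memory-layout difference driving these numbers can be seen in a small sketch (illustrative only, not the appendix benchmark sources): a @void *@-based C stack must allocate every element separately and reach it through a pointer, whereas the \CFA and \CC versions store the element inline in the node.

    #include <stdlib.h>
    #include <string.h>

    // void*-based C node: the element lives in its own allocation
    struct stack_node { void * item; struct stack_node * next; };

    // push copies the element into a second, separately-malloc'ed block
    // (error handling elided for brevity)
    void stack_push( struct stack_node ** top, const void * elem, size_t size ) {
        struct stack_node * n = malloc( sizeof( *n ) );
        n->item = malloc( size );
        memcpy( n->item, elem, size );
        n->next = *top;
        *top = n;
    }

    // by contrast, a generic node instantiated for int can inline the element:
    struct stack_node_int { int item; struct stack_node_int * next; };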
955 956 The \CC code is the shortest largely due to its use of header-only libraries, as template code cannot be separately compiled, the \CFA line count would shrink to \TODO{} if it used a header-only approach instead of the more idiomatic separate compilation. \CFA and \CC also have the advantage of a more extensive standard library; as part of the standard library neither language's generic @pair@ type is included in the line count, while this type must be written by the user programmer in both C and \CCV. The definition of @object@ and wrapper classes for @bool@, @char@, @int@, and @const char*@ are included in the line count for \CCV, which somewhat inflates its line count, as an actual object-oriented language would include these in the standard library and with their omission the \CCV line count is similar to C; we justify the given line count by the fact that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or boilerplate-filled wrapper types, which may be similarly verbose. Raw line-count, however, is a fairly rough measure of code complexity; another important factor is how much type information the programmer must manually specify, especially where that information is not checked by the compiler. Such un-checked type information produces a heavier documentation burden and increased potential for runtime bugs, and is much less common in \CFA than C, with its manually specified function pointers arguments and format codes, or \CCV, with its extensive use of un-type-checked downcasts (\eg @object@ to @integer@ when popping a stack, or @object@ to @printable@ when printing the elements of a @pair@) \TODO{Actually calculate this; I want to put a distinctive comment in the source code and grep for it}. 984 Figure~\ref{fig:eval} and Table~\ref{tab:eval} show the benchmark results. 985 Each data point is the time for 40M function call, repeated times where appropriate. 986 The five functions are $N$ stack pushes of randomly generated elements, deep copy of an $N$ element stack, clearing all nodes of an $N$ element stack, $N/2$ variadic @print@ calls each containing two constant strings and two stack elements \TODO{right now $N$ fresh elements: FIX}, and $N$ stack pops, keeping a running record of the maximum element to ensure that the object copies are not optimized out. 987 These five functions are run first for a stack of integers, and second for a stack of generic pairs of a boolean and a @char@. 988 \TODO{} The data shown is the median of 5 consecutive runs of each program, with an initial warm-up run omitted. 989 All code was compiled at \texttt{-O2} by GCC or G++ 6.2.0, with all \CC code compiled as \CCfourteen. 990 The benchmarks were run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency. 991 The C and \CCV variants are generally the slowest and most memory-hungry, due to their less-efficient memory layout and the pointer-indirection necessary to implement generic types in these languages; this problem is exacerbated by the second level of generic types in the pair-based benchmarks. 992 By contrast, the \CFA and \CC variants run in roughly equivalent time for both the integer and pair of boolean and char tests, which makes sense given that an integer is actually larger than the pair in both languages. 993 994 \CC performs best because it uses header-only inlined libraries (i.e., no separate compilation). 
995 \CFA and \CC have the advantage of a pre-written generic @pair@ type to reduce line count, while C and \CCV require it to written by the programmer. {\color{red} Why?} 996 The definition of @object@ and wrapper classes for @bool@, @char@, @int@, and @const char *@ are included in the line count for \CCV, which somewhat inflates its line count, as an actual object-oriented language would include these in the standard library and with their omission the \CCV line count is similar to C; 997 we justify the given line count by the fact that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or boilerplate-filled wrapper types, which may be similarly verbose. 998 Raw line-count, however, is a fairly rough measure of code complexity; 999 another important factor is how much type information the programmer must manually specify, especially where that information is not checked by the compiler. 1000 Such un-checked type information produces a heavier documentation burden and increased potential for runtime bugs, and is much less common in \CFA than C, with its manually specified function pointers arguments and format codes, or \CCV, with its extensive use of un-type-checked downcasts (\eg @object@ to @integer@ when popping a stack, or @object@ to @printable@ when printing the elements of a @pair@) \TODO{Actually calculate this; I want to put a distinctive comment in the source code and grep for it}. 1001 957 1002 958 1003 \section{Related Work} 959 1004 960 1005 961 \subsection{Generics} 962 963 \CC is the existing language it is most natural to compare \CFA to, as they are both more modern extensions to C with backwards source compatibility. The most fundamental difference in approach between \CC and \CFA is their approach to this C compatibility. \CC does provide fairly strong source backwards compatibility with C, but is a dramatically more complex language than C, and imposes a steep learning curve to use many of its extension features. For instance, in a break from general C practice, template code is typically written in header files, with a variety of subtle restrictions implied on its use by this choice, while the other polymorphism mechanism made available by \CC, class inheritance, requires programmers to learn an entirely new object-oriented programming paradigm; the interaction between templates and inheritance is also quite complex. \CFA, by contrast, has a single facility for polymorphic code, one which supports separate compilation and the existing procedural paradigm of C code. A major difference between the approaches of \CC and \CFA to polymorphism is that the set of assumed properties for a type is \emph{explicit} in \CFA. One of the major limiting factors of \CC's approach is that templates cannot be separately compiled, and, until concepts~\citep{C++Concepts} are standardized (currently anticipated for \CCtwenty), \CC provides no way to specify the requirements of a generic function in code beyond compilation errors for template expansion failures. By contrast, the explicit nature of assertions in \CFA allows polymorphic functions to be separately compiled, and for their requirements to be checked by the compiler; similarly, \CFA generic types may be opaque, unlike \CC template classes. 964 965 Cyclone also provides capabilities for polymorphic functions and existential types~\citep{Grossman06}, similar in concept to \CFA's @forall@ functions and generic types. 
Cyclone existential types can include function pointers in a construct similar to a virtual function table, but these pointers must be explicitly initialized at some point in the code, a tedious and potentially error-prone process. Furthermore, Cyclone's polymorphic functions and types are restricted in that they may only abstract over types with the same layout and calling convention as @void*@, in practice only pointer types and @int@ - in \CFA terms, all Cyclone polymorphism must be dtype-static. This design provides the efficiency benefits discussed in Section~\ref{sec:generic-apps} for dtype-static polymorphism, but is more restrictive than \CFA's more general model. 966 967 Apple's Objective-C \citep{obj-c-book} is another industrially successful set of extensions to C. The Objective-C language model is a fairly radical departure from C, adding object-orientation and message-passing. Objective-C implements variadic functions using the C @va_arg@ mechanism, and did not support type-checked generics until recently \citep{xcode7}, historically using less-efficient and more error-prone runtime checking of object types instead. The GObject framework \citep{GObject} also adds object-orientation with runtime type-checking and reference-counting garbage-collection to C; these are much more intrusive feature additions than those provided by \CFA, in addition to the runtime overhead of reference-counting. The Vala programming language \citep{Vala} compiles to GObject-based C, and so adds the burden of learning a separate language syntax to the aforementioned demerits of GObject as a modernization path for existing C code-bases. Java \citep{Java8} has had generic types and variadic functions since Java~5; Java's generic types are type-checked at compilation and type-erased at runtime, similar to \CFA's, though in Java each object carries its own table of method pointers, while \CFA passes the method pointers separately so as to maintain a C-compatible struct layout. Java variadic functions are simply syntactic sugar for an array of a single type, and therefore less useful than \CFA's heterogeneously-typed variadic functions. Java is also a garbage-collected, object-oriented language, with the associated resource usage and C-interoperability burdens. 968 969 D \citep{D}, Go \citep{Go}, and Rust \citep{Rust} are modern, compiled languages with abstraction features similar to \CFA traits, \emph{interfaces} in D and Go and \emph{traits} in Rust. However, each language represents dramatic departures from C in terms of language model, and none has the same level of compatibility with C as \CFA. D and Go are garbage-collected languages, imposing the associated runtime overhead. The necessity of accounting for data transfer between the managed Go runtime and the unmanaged C runtime complicates foreign-function interface between Go and C. Furthermore, while generic types and functions are available in Go, they are limited to a small fixed set provided by the compiler, with no language facility to define more. D restricts garbage collection to its own heap by default, while Rust is not garbage-collected, and thus has a lighter-weight runtime that is more easily interoperable with C. Rust also possesses much more powerful abstraction capabilities for writing generic code than Go. On the other hand, Rust's borrow-checker, while it does provide strong safety guarantees, is complex and difficult to learn, and imposes a distinctly idiomatic programming style on Rust. 
1006	\subsection{Polymorphism}
1007	
1008	\CC is the closest language to \CFA;
1009	both are incremental extensions to C with source and runtime backwards compatibility.
1010	The fundamental difference is in their engineering approach to C compatibility and programmer expectation.
1011	While \CC provides good backwards compatibility with C, it has a steep learning curve for many of its extensions.
1012	For example, polymorphism is provided via three disjoint mechanisms: overloading, inheritance, and templates.
1013	Overloading is restricted because resolution does not use the return type, inheritance requires learning object-oriented programming and coping with a restricted nominal-inheritance hierarchy, templates cannot be separately compiled, resulting in compilation/code bloat and poor error messages, and determining how these mechanisms interact and which to use is confusing.
1014	In contrast, \CFA has a single facility for polymorphic code supporting type-safe separate-compilation of polymorphic functions and generic (opaque) types, which uniformly leverages the C procedural paradigm.
1015	The key mechanism to support separate compilation is \CFA's \emph{explicit} use of assumed properties for a type.
1016	Until \CC concepts~\citep{C++Concepts} are standardized (anticipated for \CCtwenty), \CC provides no way to specify the requirements of a generic function in code beyond compilation errors during template expansion;
1017	furthermore, \CC concepts are restricted to template polymorphism.
1018	
1019	Cyclone~\citep{Grossman06} also provides capabilities for polymorphic functions and existential types, similar to \CFA's @forall@ functions and generic types.
1020	Cyclone existential types can include function pointers in a construct similar to a virtual function-table, but these pointers must be explicitly initialized at some point in the code, a tedious and potentially error-prone process.
1021	Furthermore, Cyclone's polymorphic functions and types are restricted to abstraction over types with the same layout and calling convention as @void *@, \ie only pointer types and @int@.
1022	In \CFA terms, all Cyclone polymorphism must be dtype-static.
1023	While the Cyclone design provides the efficiency benefits discussed in Section~\ref{sec:generic-apps} for dtype-static polymorphism, it is more restrictive than \CFA's general model.
1024	
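To make the explicit-assertion model concrete, the following sketch uses the @trait@ and @forall@ features described earlier; the @ordered@ trait and @minimum@ function are invented for illustration and are not taken from the \CFA standard library:
\begin{lstlisting}
// "ordered" names the single property assumed of T (illustrative trait)
trait ordered( otype T ) {
	int ?<?( T, T );                     // less-than operator over T
};

// the assertion is part of the interface, so minimum can be type-checked
// and compiled separately from its callers
forall( otype T | ordered( T ) )
T minimum( T x, T y ) {
	return x < y ? x : y;
}

void example( void ) {
	int    i = minimum( 2, 3 );          // T bound to int
	double d = minimum( 3.7, 2.9 );      // T bound to double
}
\end{lstlisting}
Because the assertion names exactly the operations @minimum@ relies on, the requirement is checked at the definition rather than discovered through expansion errors at each use, and, unlike Cyclone, @T@ is not restricted to pointer-like types.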
1025	Objective-C~\citep{obj-c-book} is an industrially successful extension to C.
1026	However, Objective-C is a radical departure from C, using an object-oriented model with message-passing.
1027	Objective-C did not support type-checked generics until recently~\citep{xcode7}, historically using less-efficient and more error-prone runtime checking of object types.
1028	The GObject framework~\citep{GObject} also adds object-oriented programming with runtime type-checking and reference-counting garbage-collection to C;
1029	these features are more intrusive additions than those provided by \CFA, in addition to the runtime overhead of reference-counting.
1030	Vala~\citep{Vala} compiles to GObject-based C, and so adds the burden of learning a separate language syntax to the aforementioned demerits of GObject as a modernization path for existing C code-bases.
1031	Java~\citep{Java8} included generic types in Java~5;
1032	Java's generic types are type-checked at compilation and type-erased at runtime, similar to \CFA's.
1033	However, in Java, each object carries its own table of method pointers, while \CFA passes the method pointers separately to maintain a C-compatible layout.
1034	Java is also a garbage-collected, object-oriented language, with the associated resource usage and C-interoperability burdens.
1035	
1036	D~\citep{D}, Go~\citep{Go}, and Rust~\citep{Rust} are modern, compiled languages with abstraction features similar to \CFA traits, \emph{interfaces} in D and Go and \emph{traits} in Rust.
1037	However, each language represents a significant departure from C in terms of language model, and none has the same level of compatibility with C as \CFA.
1038	D and Go are garbage-collected languages, imposing the associated runtime overhead.
1039	The necessity of accounting for data transfer between managed runtimes and the unmanaged C runtime complicates foreign-function interfaces to C.
1040	Furthermore, while generic types and functions are available in Go, they are limited to a small fixed set provided by the compiler, with no language facility to define more.
1041	D restricts garbage collection to its own heap by default, while Rust is not garbage-collected, and thus has a lighter-weight runtime more interoperable with C.
1042	Rust also possesses much more powerful abstraction capabilities for writing generic code than Go.
1043	On the other hand, Rust's borrow-checker provides strong safety guarantees, but is complex and difficult to learn, and imposes a distinctly idiomatic programming style.
1044	\CFA, with its more modest safety features, allows direct porting of C code while maintaining the idiomatic style of the original source.
970	1045	
971	1046	
…	…	
975	1050	SETL~\cite{SETL} is a high-level mathematical programming language, with tuples being one of the primary data types.
976	1051	Tuples in SETL allow subscripting, dynamic expansion, and multiple assignment.
1052	C provides variadic functions through @va_list@ objects, but the programmer is responsible for managing the number of arguments and their types.
977	1053	KW-C~\cite{Buhr94a}, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, taking much of its inspiration from SETL.
978	1054	The main contributions of that work were adding MRVF, tuple mass and multiple assignment, and record-field access.
…	…	
985	1061	Like \CC, D provides tuples through a library variadic-template structure.
986	1062	Go does not have tuples but supports MRVF.
987	Java's variadic functions appear similar to C's but are type-safe using arrays.
1063	Java's variadic functions appear similar to C's but are type-safe using homogeneous arrays, which are less useful than \CFA's heterogeneously-typed variadic functions.
988	1064	Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML~\cite{sml} and Scala~\cite{Scala}, which decompose tuples using pattern matching.
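As a small illustration of the tuple facilities compared above (the @div@ function is invented for the example and assumes the tuple syntax presented earlier in the paper), a multiple-returning-value function and a multiple assignment look roughly like:
\begin{lstlisting}
#include <stdio.h>

// MRVF: div returns both quotient and remainder as a tuple
[ int, int ] div( int x, int y ) {
	return [ x / y, x % y ];
}

int main( void ) {
	int q, r;
	[ q, r ] = div( 13, 5 );        // multiple assignment from the returned tuple
	printf( "%d %d\n", q, r );      // prints 2 3
}
\end{lstlisting}
The same call in plain C would need either an output-parameter convention or a named @struct@ for each combination of result types.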
989	1065	
990	1066	
991	1067	\section{Conclusion \& Future Work}
992	1068	
993	There is ongoing work on a wide range of \CFA feature extensions, including reference types, exceptions, and concurrent programming primitives. In addition to this work, there are some interesting future directions the polymorphism design could take. Notably, \CC template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions. \CFA polymorphic functions, by contrast, use an approach that is essentially dynamic virtual dispatch. The runtime overhead of this approach is low, but not as low as \CC template functions, and it may be beneficial to provide a mechanism for particularly performance-sensitive code to close this gap. Further research is needed, but two promising approaches are to allow an annotation on polymorphic function call sites that tells the translator to create a template-specialization of the function (provided the code is visible in the current translation unit) or placing an annotation on polymorphic function definitions that instantiates a version of the polymorphic function specialized to some set of types. These approaches are not mutually exclusive, and would allow these performance optimizations to be applied only where most useful to increase performance, without suffering the code bloat or loss of generality of a template expansion approach where it is unnecessary.
994	
995	In conclusion, the authors' design for generic types and tuples, unlike those available in existing work, is both reusable and type-checked, while still supporting a full range of C features, including separately-compiled modules. We have experimentally validated the performance of our design against both \CC and standard C, showing it is \TODO{shiny, cap'n}.
1069	There is ongoing work on a wide range of \CFA feature extensions, including reference types, exceptions, and concurrent programming primitives.
1070	In addition to this work, there are some interesting future directions the polymorphism design could take.
1071	Notably, \CC template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions.
1072	\CFA polymorphic functions, by contrast, use an approach that is essentially dynamic virtual dispatch.
1073	The runtime overhead of this approach is low, but not as low as \CC template functions, and it may be beneficial to provide a mechanism for particularly performance-sensitive code to close this gap.
1074	Further research is needed, but two promising approaches are to allow an annotation on polymorphic function call sites that tells the translator to create a template-specialization of the function (provided the code is visible in the current translation unit) or to place an annotation on polymorphic function definitions that instantiates a version of the polymorphic function specialized to some set of types.
1075	These approaches are not mutually exclusive, and would allow these performance optimizations to be applied only where most useful, without suffering the code bloat or loss of generality of a template expansion approach where it is unnecessary.
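As a rough analogy for this trade-off in plain C (a hand-written illustration, not \CFA translator output), the polymorphic path resembles the indirect call below, while a per-type specialization resembles the direct call, which the compiler is free to inline:
\begin{lstlisting}
// indirect: comparable to dispatching through an assertion parameter at runtime
static int min_indirect( int x, int y, int (*lt)( int, int ) ) {
	return lt( x, y ) ? x : y;
}

// direct: comparable to a specialized instantiation, amenable to inlining
static int min_int( int x, int y ) { return x < y ? x : y; }

static int lt_int( int x, int y ) { return x < y; }

int use( int a, int b ) {
	return min_indirect( a, b, lt_int )   // one extra indirect call per use
	     + min_int( a, b );               // no indirection
}
\end{lstlisting}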
1076	
1077	In conclusion, the authors' design for generic types and tuples, unlike those available in existing work, is both reusable and type-checked, while still supporting a full range of C features, including separately-compiled modules.
1078	We have experimentally validated the performance of our design against both \CC and standard C, showing it is \TODO{shiny, cap'n}.
1079	
996	1080	
997	1081	\begin{acks}
…	…	
1000	1084	This work is supported in part by a corporate partnership with \grantsponsor{Huawei}{Huawei Ltd.}{http://www.huawei.com}\ and the first author's \grantsponsor{NSERC-PGS}{NSERC PGS D}{http://www.nserc-crsng.gc.ca/Students-Etudiants/PG-CS/BellandPostgrad-BelletSuperieures_eng.asp} scholarship.
1001	1085	\end{acks}
1086	
1087	
1088	\appendix
1089	
1090	
1091	\section{Benchmarks}
1092	\label{sec:BenchMarks}
1093	
1094	TODO
1095	
1002	1096	
1003	1097	\bibliographystyle{ACM-Reference-Format}