%====================================================================== \chapter{Introduction} %====================================================================== \section{\protect\CFA Background} \label{s:background} \CFA \footnote{Pronounced ``C-for-all'', and written \CFA or Cforall.} is a modern non-object-oriented extension to the C programming language. As it is an extension of C, there is already a wealth of existing C code and principles that govern the design of the language. Among the goals set out in the original design of \CFA, four points stand out \cite{Bilson03}. \begin{enumerate} \item The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler. \item Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler. \item \CFA code must be at least as portable as standard C code. \item Extensions introduced by \CFA must be translated in the most efficient way possible. \end{enumerate} Therefore, these design principles must be kept in mind throughout the design and development of new language features. In order to appeal to existing C programmers, great care must be taken to ensure that new features naturally feel like C. These goals ensure existing C code-bases can be converted to \CFA incrementally with minimal effort, and C programmers can productively generate \CFA code without training beyond the features being used. Unfortunately, \CC is actively diverging from C, so incremental additions require significant effort and training, coupled with multiple legacy design-choices that cannot be updated. The current implementation of \CFA is a source-to-source translator from \CFA to GNU C \cite{GCCExtensions}. The remainder of this section describes some of the important features that currently exist in \CFA, to give the reader the necessary context in which the new features presented in this thesis must dovetail. 
\subsection{C Background}
\label{sub:c_background}
In the context of this work, the term \emph{object} refers to a region of data storage in the execution environment, the contents of which can represent values \cite[p.~6]{C11}.

One of the lesser-known features of standard C is \emph{designations}. Designations are similar to named parameters in languages such as Python and Scala, except that they only apply to aggregate initializers. Note that in \CFA, designations use a colon separator, rather than an equals sign as in C, because this syntax is one of the few places that conflicts with the new language features.
\begin{cfacode}
struct A { int w, x, y, z; };
A a0 = { .x:4, .z:1, .x:8 };
A a1 = { 1, .y:7, 6 };
A a2[4] = { [2]:a0, [0]:a1, { .z:3 } };
// equivalent to
// A a0 = { 0, 8, 0, 1 };
// A a1 = { 1, 0, 7, 6 };
// A a2[4] = { a1, { 0, 0, 0, 3 }, a0, { 0, 0, 0, 0 } };
\end{cfacode}
Designations allow specifying the field to initialize by name, rather than by position. Any field not explicitly initialized is initialized as if it had static storage duration \cite[p.~141]{C11}. A designator specifies the current object for initialization, and as such any undesignated sub-objects pick up where the last initialization left off. For example, in the initialization of @a1@, the initializer of @y@ is @7@, and the unnamed initializer @6@ initializes the next sub-object, @z@. Later initializers override earlier initializers, so a sub-object for which there is more than one initializer is only initialized by its last initializer. These semantics can be seen in the initialization of @a0@, where @x@ is designated twice, and thus initialized to @8@.

C also provides \emph{compound literal} expressions, which provide a first-class mechanism for creating unnamed objects.
\begin{cfacode}
struct A { int x, y; };
int f(A, int);
int g(int *);

f((A){ 3, 4 }, (int){ 5 } = 10);
g((int[]){ 1, 2, 3 });
g(&(int){ 0 });
\end{cfacode}
Compound literals create an unnamed object, and result in an lvalue, so it is legal to assign a value into a compound literal or to take its address \cite[p.~86]{C11}. Syntactically, compound literals look like a cast operator followed by a brace-enclosed initializer, but semantically are different from a C cast, which only applies basic conversions and coercions and is never an lvalue.

The \CFA translator makes use of several GNU C extensions, including \emph{nested functions} and \emph{attributes}. Nested functions make it possible to access data that is lexically in scope in the nested function's body.
\begin{cfacode}
int f() {
  int x = 0;
  void g() {
    x++;
  }
  g(); // changes x
}
\end{cfacode}
Nested functions come with the usual C caveat that they should not leak into the containing environment, since they are only valid as long as the containing function's stack frame is active.

Attributes make it possible to inform the compiler of certain properties of the code. For example, a function can be marked as deprecated, so that legacy APIs can be identified and slowly removed, or as \emph{hot}, so that the compiler knows the function is called frequently and should be aggressively optimized.
\begin{cfacode}
__attribute__((deprecated("foo is deprecated, use bar instead")))
void foo();
__attribute__((hot)) void bar(); // heavily optimized

foo(); // warning
bar();
\end{cfacode}

\subsection{Overloading}
\label{sub:overloading}
Overloading is the ability to specify multiple entities with the same name. The most common form of overloading is function overloading, wherein multiple functions can be defined with the same name, but with different signatures. C provides a small amount of built-in overloading, \eg @+@ is overloaded for the basic types.
Like in \CC, \CFA allows user-defined overloading based both on the number of parameters and on the types of parameters.
\begin{cfacode}
void f(void);  // (1)
void f(int);   // (2)
void f(char);  // (3)

f('A'); // selects (3)
\end{cfacode}
In this case, there are three @f@ procedures, where @f@ takes either 0 or 1 arguments, and if an argument is provided then it may be of type @int@ or of type @char@. Exactly which procedure is executed depends on the number and types of arguments passed. If there is no exact match available, \CFA attempts to find a suitable match by examining the C built-in conversion heuristics. The \CFA expression resolution algorithm uses a cost function to determine the interpretation that uses the fewest conversions and polymorphic type bindings.
\begin{cfacode}
void g(long long);

g(12345);
\end{cfacode}
In the above example, there is only one instance of @g@, which expects a single parameter of type @long long@. Here, the argument provided has type @int@, but since all possible values of type @int@ can be represented by a value of type @long long@, there is a safe conversion from @int@ to @long long@, and so \CFA calls the provided @g@ routine.

Overloading solves the problem present in C where there can only be one function with a given name, requiring multiple names for functions that perform the same operation but take in different types. This can be seen in the example of the absolute value functions in C:
\begin{cfacode}
// stdlib.h
int abs(int);
long int labs(long int);
long long int llabs(long long int);
\end{cfacode}
In \CFA, the functions @labs@ and @llabs@ are replaced by appropriate overloads of @abs@.

In addition to this form of overloading, \CFA also allows overloading based on the number and types of \emph{return} values. This extension is a feature that is not available in \CC, but is available in other programming languages such as Ada \cite{Ada95}.
\begin{cfacode}
int g();    // (1)
double g(); // (2)

int x = g(); // selects (1)
\end{cfacode}
Here, the only difference between the signatures of the different versions of @g@ is in the return values. The result context is used to select an appropriate routine definition. In this case, the result of @g@ is assigned into a variable of type @int@, so \CFA prefers the routine that returns a single @int@, because it is an exact match. Return-type overloading solves similar problems to parameter-list overloading, in that multiple functions that perform similar operations can have the same name, but produce different values. One use case for this feature is to provide two versions of the @bsearch@ routine, one returning a pointer to the matching element and the other returning its index:
\begin{cfacode}
forall(otype T | { int ?<?(T, T); })
T * bsearch(T key, const T * arr, size_t size) {
  int comp(const void * t1, const void * t2) { ... }
  return (T *)bsearch(&key, arr, size, sizeof(T), comp); // call C bsearch
}
forall(otype T | { int ?<?(T, T); })
unsigned int bsearch(T key, const T * arr, size_t size) {
  T * result = bsearch(key, arr, size);
  return result ? result - arr : size;
}
double vals[10] = { ... };
double * val = bsearch(5.0, vals, 10); // selection based on return type
int posn = bsearch(5.0, vals, 10);
\end{cfacode}
Finally, \CFA also permits overloading variable identifiers. This feature is not available in \CC.
\begin{cfacode}
struct Rational { int numer, denom; };
int x = 3;              // (1)
double x = 1.27;        // (2)
Rational x = { 4, 11 }; // (3)

void g(double);

x += 1;         // chooses (1)
g(x);           // chooses (2)
Rational y = x; // chooses (3)
\end{cfacode}
In this example, there are three definitions of the variable @x@. Based on the context, \CFA attempts to choose the variable whose type best matches the expression context. When used judiciously, this feature allows names like @MAX@, @MIN@, and @PI@ to apply across many types.

Finally, the values @0@ and @1@ have special status in standard C. In particular, the value @0@ is both an integer and a pointer literal, and thus its meaning depends on the context. In addition, several operations can be redefined in terms of other operations and the values @0@ and @1@. For example,
\begin{cfacode}
int x;
if (x) { // if (x != 0)
  x++;   // x += 1;
}
\end{cfacode}
Every if- and iteration-statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result.
Due to these rewrite rules, the values @0@ and @1@ have the types \zero and \one in \CFA, which allow for overloading various operations that connect to @0@ and @1@\footnote{In the original design of \CFA, @0@ and @1@ were overloadable names \cite[p.~7]{cforall-refrat}.}. The types \zero and \one have special built-in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal.
\begin{cfacode}
// lvalue is similar to returning a reference in C++
lvalue Rational ?+=?(Rational *a, Rational b);
Rational ?=?(Rational * dst, zero_t) {
  return *dst = (Rational){ 0, 1 };
}

Rational sum(Rational *arr, int n) {
  Rational r;
  r = 0; // use rational-zero_t assignment
  for (; n > 0; n--) {
    r += arr[n-1];
  }
  return r;
}
\end{cfacode}
This function takes an array of @Rational@ objects and produces the @Rational@ representing the sum of the array. Note the use of an overloaded assignment operator to set an object of type @Rational@ to an appropriate @0@ value.

\subsection{Polymorphism}
\label{sub:polymorphism}
In its most basic form, polymorphism grants the ability to write a single block of code that accepts different types. In particular, \CFA supports the notion of parametric polymorphism. Parametric polymorphism allows a function to be written generically, for all values of all types, without regard to the specifics of a particular type. For example, in \CC, the simple identity function for all types can be written as:
\begin{cppcode}
template<typename T>
T identity(T x) { return x; }
\end{cppcode}
\CC uses the template mechanism to support parametric polymorphism. In \CFA, an equivalent function can be written as:
\begin{cfacode}
forall(otype T)
T identity(T x) { return x; }
\end{cfacode}
Once again, the only visible difference in this example is syntactic. Fundamental differences can be seen by examining more interesting examples.
In \CC, a generic sum function is written as follows:
\begin{cppcode}
template<typename T>
T sum(T *arr, int n) {
  T t; // default construct => 0
  for (; n > 0; n--) t += arr[n-1];
  return t;
}
\end{cppcode}
Here, the code assumes the existence of a default constructor, assignment operator, and an addition operator over the provided type @T@. If any of these required operators are not available, the \CC compiler produces an error message stating which operators could not be found. A similar sum function can be written in \CFA as follows:
\begin{cfacode}
forall(otype T | { T ?=?(T *, zero_t); T ?+=?(T *, T); })
T sum(T *arr, int n) {
  T t = 0;
  for (; n > 0; n--) t += arr[n-1];
  return t;
}
\end{cfacode}
The first thing to note here is that immediately following the declaration of @otype T@ is a list of \emph{type assertions} that specify restrictions on acceptable choices of @T@. In particular, the assertions above specify that there must be an assignment from \zero to @T@ and an addition assignment operator from @T@ to @T@. The existence of an assignment operator from @T@ to @T@ and the ability to create an object of type @T@ are assumed implicitly by declaring @T@ with the @otype@ type-class.

In addition to @otype@, there are currently two other type-classes. @dtype@, short for \emph{data type}, serves as the top type for object types; any object type, complete or incomplete, can be bound to a @dtype@ type variable. To contrast, @otype@, short for \emph{object type}, is a @dtype@ with known size, alignment, and an assignment operator, and thus binds only to complete object types. With this extra information, complete objects can be used in polymorphic code in the same way they are used in monomorphic code, providing familiarity and ease of use. The third type-class is @ftype@, short for \emph{function type}, matching only function types. The three type parameter kinds are summarized in \autoref{table:types}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|}
\hline
name & object type & incomplete type & function type & can assign & can create & has size \\ \hline
@otype@ & X & & & X & X & X \\ \hline
@dtype@ & X & X & & & & \\ \hline
@ftype@ & & & X & & & \\ \hline
\end{tabular}
\end{center}
\caption{\label{table:types} The different kinds of type parameters in \protect\CFA}
\end{table}
A major difference between the approaches of \CC and \CFA to polymorphism is that the set of assumed properties for a type is \emph{explicit} in \CFA. One of the major limiting factors of \CC's approach is that templates cannot be separately compiled. In contrast, the explicit nature of assertions allows \CFA's polymorphic functions to be separately compiled, as the function prototype states all necessary requirements separate from the implementation. For example, the prototype for the previous sum function is
\begin{cfacode}
forall(otype T | { T ?=?(T *, zero_t); T ?+=?(T *, T); })
T sum(T *arr, int n);
\end{cfacode}
With this prototype, a caller in another translation unit knows all of the constraints on @T@, and thus knows all of the operations that need to be made available to @sum@.

In \CFA, a set of assertions can be factored into a \emph{trait}.
\begin{cfacode}
trait Addable(otype T) {
  T ?+?(T, T);
  T ++?(T);
  T ?++(T);
}
forall(otype T | Addable(T)) void f(T);
forall(otype T | Addable(T) | { T --?(T); }) T g(T);
forall(otype T, U | Addable(T) | { T ?/?(T, U); }) U h(T, U);
\end{cfacode}
This capability allows specifying the same set of assertions in multiple locations, without the repetition and likelihood of mistakes that come with manually writing them out for each function declaration. An interesting application of return-type resolution and polymorphism is a polymorphic version of @malloc@.
\begin{cfacode}
forall(dtype T | sized(T))
T * malloc() {
  return (T*)malloc(sizeof(T)); // call C malloc
}
int * x = malloc();    // malloc(sizeof(int))
double * y = malloc(); // malloc(sizeof(double))

struct S { ... };
S * s = malloc();      // malloc(sizeof(S))
\end{cfacode}
The built-in trait @sized@ ensures that size and alignment information for @T@ is available in the body of @malloc@ through @sizeof@ and @_Alignof@ expressions respectively. In calls to @malloc@, the type @T@ is bound based on call-site information, allowing \CFA code to allocate memory without the potential for errors introduced by manually specifying the size of the allocated block.

\subsection{Planned Features}
One of the planned features of \CFA is \emph{reference types}. At a high level, the current proposal is to add references as a way to clean up pointer syntax. With references, it will be possible to store any address, as with a pointer, with the key difference being that references are automatically dereferenced.
\begin{cfacode}
int x = 0;
int * p = &x;  // needs &
int & ref = x; // no &

printf("%d %d\n", *p, ref); // pointer needs *, ref does not
\end{cfacode}
It is possible to add new functions or shadow existing functions for the duration of a scope, using normal C scoping rules. One application of this feature is to reverse the order of @qsort@.
\begin{cfacode}
forall(otype T | { int ?<?(T, T); })
void qsort(const T * arr, size_t size);
{
  int ?<?(double x, double y) { return x > y; } // locally override comparison
  double vals[10] = { ... };
  qsort(vals, 10); // descending sort
}
\end{cfacode}
Currently, there is no way to \emph{remove} a function from consideration for the duration of a scope. For example, it may be desirable to eliminate assignment from a scope, to reduce accidental mutation. To address this desire, \emph{deleted functions} are a planned feature for \CFA.
\begin{cfacode}
forall(otype T) void f(T *);

int x = 0;
f(&x); // might modify x
{
  int ?=?(int *, int) = delete;
  f(&x); // error, no assignment for int
}
\end{cfacode}
Now, if the deleted function is chosen as the best match, the expression resolver emits an error.
\section{Invariants}
An \emph{invariant} is a logical assertion that is true for some duration of a program's execution. Invariants help a programmer to reason about code correctness and prove properties of programs.

\begin{sloppypar}
In object-oriented programming languages, type invariants are typically established in a constructor and maintained throughout the object's lifetime. These assertions are typically achieved through a combination of access-control modifiers and a restricted interface. Typically, data that requires the maintenance of an invariant is hidden from external sources using the \emph{private} modifier, which restricts reads and writes to a select set of trusted routines, including member functions. It is these trusted routines that perform all modifications to internal data in a way that is consistent with the invariant, by ensuring that the invariant holds true at the end of the routine call.
\end{sloppypar}

In C, the @assert@ macro is often used to ensure invariants are true. Using @assert@, the programmer can check a condition and abort execution if the condition is not true. This powerful tool forces the programmer to deal with logical inconsistencies as they occur. For production, assertions can be removed by simply defining the preprocessor macro @NDEBUG@, making it simple to ensure that assertions are 0-cost for a performance-intensive application.
\begin{cfacode}
struct Rational { int n, d; };
struct Rational create_rational(int n, int d) {
  assert(d != 0); // precondition
  if (d < 0) {
    n *= -1;
    d *= -1;
  }
  assert(d > 0); // postcondition
  // rational invariant: d > 0
  return (struct Rational) { n, d };
}
struct Rational rat_abs(struct Rational r) {
  assert(r.d > 0); // check invariant, since no access control
  r.n = abs(r.n);
  assert(r.d > 0); // ensure function preserves invariant on return value
  return r;
}
\end{cfacode}
Some languages, such as D, provide language-level support for specifying program invariants.
In addition to providing a C-like @assert@ expression, D allows specifying type invariants that are automatically checked at the end of a constructor, beginning of a destructor, and at the beginning and end of every public member function.
\begin{dcode}
import std.math;
struct Rational {
  invariant {
    assert(d > 0, "d <= 0");
  }
  int n, d;
  this(int n, int d) { // constructor
    assert(d != 0);
    this.n = n;
    this.d = d;
    // implicitly check invariant
  }
  Rational abs() {
    // implicitly check invariant
    return Rational(std.math.abs(n), d);
    // implicitly check invariant
  }
}
\end{dcode}
The D compiler is able to assume that assertions and invariants hold true and perform optimizations based on those assumptions. Note, these invariants are internal to the type's correct behaviour. Types also have external invariants with the state of the execution environment, including the heap, the open-file table, the state of global variables, etc. Since resources are finite and shared (concurrency), it is important to ensure that objects clean up properly when they are finished, restoring the execution environment to a stable state so that new objects can reuse resources.

\section{Resource Management}
\label{s:ResMgmt}
Resource management is a problem that pervades every programming language. In standard C, resource management is largely a manual effort on the part of the programmer, with a notable exception to this rule being the program stack. The program stack grows and shrinks automatically with each function call, as needed for local variables. However, whenever a program needs a variable to outlive the block it is created in, the storage must be allocated dynamically with @malloc@ and later released with @free@. This pattern is extended to more complex objects, such as files and sockets, which can also outlive the block where they are created, and thus require their own resource management.
Once allocated storage escapes\footnote{In garbage collected languages, such as Java, escape analysis \cite{Choi:1999:EAJ:320385.320386} is used to determine when dynamically allocated objects are strictly contained within a function, which allows the optimizer to allocate them on the stack.} a block, the responsibility for deallocating the storage is not specified in a function's type; for example, nothing in the type states that the return value is owned by the caller. This implicit convention is provided only through documentation about the expectations of functions.

In other languages, a hybrid situation exists where resources escape the allocation block, but ownership is precisely controlled by the language. This pattern requires a strict interface and protocol for a data structure, consisting of a pre-initialization and a post-termination call, and all intervening access is done via interface routines. This kind of encapsulation is popular in object-oriented programming languages, and like the stack, it takes care of a significant portion of resource-management cases.

For example, \CC directly supports this pattern through class types and an idiom known as RAII\footnote{Resource Acquisition Is Initialization} by means of constructors and destructors. Constructors and destructors are special routines that are automatically inserted into the appropriate locations to bookend the lifetime of an object. Constructors allow the designer of a type to establish invariants for objects of that type, since it is guaranteed that every object must be initialized through a constructor. In particular, constructors allow a programmer to ensure that all objects are initially set to a valid state. On the other hand, destructors provide a simple mechanism for tearing down an object and resetting the environment in which the object lived. RAII ensures that if all resources are acquired in a constructor and released in a destructor, there are no resource leaks, even in exceptional circumstances.
A type with at least one non-trivial constructor or destructor is henceforth referred to as a \emph{managed type}. In the context of \CFA, a non-trivial constructor is either a user-defined constructor or an auto-generated constructor that calls a non-trivial constructor.

For the remaining resource ownership cases, a programmer must follow a brittle, explicit protocol for freeing resources or an implicit protocol enforced by the programming language.

In garbage collected languages, such as Java, resources are largely managed by the garbage collector. Still, garbage collectors typically focus only on memory management. There are many kinds of resources that the garbage collector does not understand, such as sockets, open files, and database connections. In particular, Java supports \emph{finalizers}, which are similar to destructors. Unfortunately, finalizers are only guaranteed to be called before an object is reclaimed by the garbage collector \cite[p.~373]{Java8}, which may not happen if memory use is not contentious. Due to operating-system resource-limits, this is unacceptable for many long-running programs. Instead, the paradigm in Java requires programmers to manually keep track of all resources \emph{except} memory, leading many novices and experts alike to forget to close files, etc. Complicating the picture, uncaught exceptions can cause control flow to change dramatically, leaking a resource that appears on first glance to be released.
\begin{javacode}
void write(String filename, String msg) throws Exception {
  FileOutputStream out = new FileOutputStream(filename);
  FileOutputStream log = new FileOutputStream("log.txt");
  out.write(msg.getBytes());
  log.write(msg.getBytes());
  log.close();
  out.close();
}
\end{javacode}
Any line in this program can throw an exception, which leads to a profusion of finally blocks around many function bodies, since it is not always clear when an exception may be thrown.
\begin{javacode}
public void write(String filename, String msg) throws Exception {
  FileOutputStream out = new FileOutputStream(filename);
  try {
    FileOutputStream log = new FileOutputStream("log.txt");
    try {
      out.write(msg.getBytes());
      log.write(msg.getBytes());
    } finally {
      log.close();
    }
  } finally {
    out.close();
  }
}
\end{javacode}
In Java 7, a new \emph{try-with-resources} construct was added to alleviate most of the pain of working with resources, but ultimately it still places the burden squarely on the user rather than on the library designer. Furthermore, for complete safety this pattern requires nested objects to be declared separately, otherwise resources that can throw an exception on close can leak nested resources\footnote{Since close is only guaranteed to be called on objects declared in the try-list and not objects passed as constructor parameters, the @B@ object may not be closed in @new A(new B())@ if @A@'s close raises an exception.} \cite{TryWithResources}.
\begin{javacode}
public void write(String filename, String msg) throws Exception {
  try ( // try-with-resources
    FileOutputStream out = new FileOutputStream(filename);
    FileOutputStream log = new FileOutputStream("log.txt");
  ) {
    out.write(msg.getBytes());
    log.write(msg.getBytes());
  } // automatically closes out and log in every exceptional situation
}
\end{javacode}
Variables declared as part of a try-with-resources statement must conform to the @AutoCloseable@ interface, and the compiler implicitly calls @close@ on each of the variables at the end of the block. Depending on when the exception is raised, both @out@ and @log@ are null, only @log@ is null, or both are non-null; therefore, the cleanup for these variables at the end is automatically guarded and conditionally executed to prevent null-pointer exceptions.
While Rust \cite{Rust} does not enforce the use of a garbage collector, it does provide a manual memory management environment, with a strict ownership model that automatically frees allocated memory and prevents common memory management errors. In particular, a variable has ownership over its associated value, which is freed automatically when the owner goes out of scope. Furthermore, values are \emph{moved} by default on assignment, rather than copied, which invalidates the previous variable binding.
\begin{rustcode}
struct S {
  x: i32
}
let s = S { x: 123 };
let z = s;           // move, invalidate s
println!("{}", s.x); // error, s has been moved
\end{rustcode}
Types can be made copyable by implementing the @Copy@ trait. Rust allows multiple unowned views into an object through references, also known as borrows, provided that a reference does not outlive its referent. A mutable reference is allowed only if it is the only reference to its referent, preventing data race errors and iterator invalidation errors.
\begin{rustcode}
let mut x = 10;
{
  let y = &x;
  let z = &x;
  println!("{} {}", y, z); // prints 10 10
}
{
  let y = &mut x;
  // let z1 = &x;     // not allowed, have mutable reference
  // let z2 = &mut x; // not allowed, have mutable reference
  *y = 5;
  println!("{}", y);  // prints 5
}
println!("{}", x);    // prints 5
\end{rustcode}
Since references are not owned, they do not release resources when they go out of scope. There is no runtime cost imposed on these restrictions, since they are enforced at compile-time. Rust provides RAII through the @Drop@ trait, allowing arbitrary code to execute when the object goes out of scope, providing automatic clean up of auxiliary resources, much like a \CC program.
\begin{rustcode}
struct S {
  name: &'static str
}
impl Drop for S { // RAII for S
  fn drop(&mut self) { // destructor
    println!("dropped {}", self.name);
  }
}
{
  let x = S { name: "x" };
  let y = S { name: "y" };
} // prints "dropped y" "dropped x"
\end{rustcode}
% D has constructors and destructors that are worth a mention (under classes) https://dlang.org/spec/spec.html
% also https://dlang.org/spec/struct.html#struct-constructor
% these are declared in the struct, so they're closer to C++ than to CFA, at least syntactically. Also do not allow for default constructors
% D has a GC, which already makes the situation quite different from C/C++
The programming language D also manages resources with constructors and destructors \cite{D}. In D, @struct@s are stack allocatable and managed via scoping like in \CC, whereas @class@es are managed automatically by the garbage collector. Like Java, using the garbage collector means that destructors are called indeterminately, requiring the use of finally statements to ensure dynamically allocated resources that are not managed by the garbage collector, such as open files, are cleaned up. Since D supports RAII, it is possible to use the same techniques as in \CC to ensure that resources are released in a timely manner. Finally, D provides a scope guard statement, which allows an arbitrary statement to be executed at normal scope exit with \emph{success}, at exceptional scope exit with \emph{failure}, or at normal and exceptional scope exit with \emph{exit}.
% https://dlang.org/spec/statement.html#ScopeGuardStatement
It has been shown that the \emph{exit} form of the scope guard statement can be implemented in a library in \CC \cite{ExceptSafe}.

To provide managed types in \CFA, new kinds of constructors and destructors are added to \CFA and discussed in Chapter 2.

\section{Tuples}
\label{s:Tuples}
In mathematics, tuples are finite-length sequences which, unlike sets, are ordered and allow duplicate elements.
In programming languages, tuples provide fixed-sized heterogeneous lists of elements. Many programming languages have tuple constructs, such as SETL, \KWC, ML, and Scala. \KWC, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, rather than as a full-blown data type \cite{Till89}. In particular, Till noted that C already contains a tuple context in the form of function parameter lists. The main contributions of that work were in the form of adding tuple contexts to assignment in the form of multiple assignment and mass assignment (discussed in detail in section \ref{s:TupleAssignment}), function return values (see section \ref{s:MRV_Functions}), and record field access (see section \ref{s:MemberAccessTuple}). Adding tuples to \CFA has previously been explored by Esteves \cite{Esteves04}.

The design of tuples in \KWC took much of its inspiration from SETL \cite{SETL}. SETL is a high-level mathematical programming language, with tuples being one of the primary data types. Tuples in SETL allow a number of operations, including subscripting, dynamic expansion, and multiple assignment.

\CCeleven introduced @std::tuple@ as a library variadic template struct. Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values.
\begin{cppcode}
tuple<int, int, int> triple(10, 20, 30);
get<1>(triple); // access component 1 => 20

tuple<int, double> f();
int i;
double d;
tie(i, d) = f(); // assign fields of return value into local variables

tuple<int, int, int> greater(11, 0, 0);
triple < greater; // true
\end{cppcode}
Tuples are simple data structures with few specific operations. In particular, it is possible to access a component of a tuple using @std::get@. Another interesting feature is @std::tie@, which creates a tuple of references, allowing assignment of the results of a tuple-returning function into separate local variables, without requiring a temporary variable.
Tuples also support lexicographic comparisons, making it simple to write aggregate comparators using @std::tie@.

There is a proposal for \CCseventeen called \emph{structured bindings} \cite{StructuredBindings}, which introduces new syntax to eliminate the need to pre-declare variables and use @std::tie@ for binding the results from a function call.
\begin{cppcode}
tuple<int, double> f();
auto [i, d] = f(); // unpacks into new variables i, d

tuple<int, int, int> triple(10, 20, 30);
auto & [t1, t2, t3] = triple;
t2 = 0; // changes middle element of triple

struct S { int x; double y; };
S s = { 10, 22.5 };
auto [x, y] = s; // unpack s
\end{cppcode}
Structured bindings allow unpacking any structure with all public non-static data members into fresh local variables.
The use of @&@ allows declaring the new variables as references, which is something that cannot be done with @std::tie@, since \CC references do not support rebinding.
This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism.
Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables.

Like \CC, D provides tuples through a library variadic-template structure.
In D, it is possible to name the fields of a tuple type, which creates a distinct type.
% http://dlang.org/phobos/std_typecons.html
\begin{dcode}
Tuple!(float, "x", float, "y") point2D;
Tuple!(float, float) float2; // different type from point2D

point2D[0]; // access first element
point2D.x;  // access first element

float f(float x, float y) {
  return x+y;
}
f(point2D.expand);
\end{dcode}
Tuples are 0-indexed and can be subscripted using an integer or field name, if applicable.
The @expand@ method produces the components of the tuple as a list of separate values, making it possible to call a function that takes $N$ arguments using a tuple with $N$ components.
Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML \cite{sml}.
A function in SML always accepts exactly one argument.
There are two ways to mimic multiple-argument functions: the first through currying and the second by accepting tuple arguments.
\begin{smlcode}
fun fact (n : int) =
  if (n = 0) then 1
  else n*fact(n-1)

fun binco (n: int, k: int) =
  real (fact n) / real (fact k * fact (n-k))
\end{smlcode}
Here, the function @binco@ appears to take 2 arguments, but it actually takes a single argument which is implicitly decomposed via pattern matching.
Tuples are a foundational tool in SML, allowing the creation of arbitrarily-complex structured data-types.

Scala, like \CC, provides tuple types through the standard library \cite{Scala}.
Scala provides tuples of size 1 through 22 inclusive, through generic data structures.
Tuples support named access and subscript access, among a few other operations.
\begin{scalacode}
val a = new Tuple3(0, "Text", 2.1) // explicit creation
val b = (6, 'a', 1.1f) // syntactic sugar: Tuple3[Int, Char, Float]
val (i, _, d) = a // extractor syntax, ignore middle element

println(a._2) // named access => print "Text"
println(b.productElement(0)) // subscript access => print 6
\end{scalacode}
In Scala, tuples are primarily used as simple data structures for carrying around multiple values or for returning multiple values from a function.
The 22-element restriction is an odd and arbitrary choice, but in practice it does not cause problems, since large tuples are uncommon.
Subscript access is provided through the @productElement@ method, which returns a value of the top-type @Any@, since it is impossible to receive a more precise type from a general subscripting method due to type erasure.
The disparity between named access beginning at @_1@ and subscript access starting at @0@ is likewise an oddity, but subscript access is typically avoided since it discards type information.
Due to the language's pattern-matching facilities, it is possible to extract the values from a tuple into named variables, which is a more idiomatic way of accessing the components of a tuple.

\Csharp also has tuples, but with a similarly arbitrary limitation: tuples of at most 7 components.
% https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx
The officially supported workaround for this shortcoming is to nest tuples in the 8th component.
\Csharp allows accessing a component of a tuple by using the field @Item$N$@ for components 1 through 7, and @Rest@ for the nested tuple.

In Python \cite{Python}, tuples are immutable sequences that provide packing and unpacking operations.
While the tuple itself is immutable, and thus does not allow assignment to its components, there is nothing preventing a component from being internally mutable.
The components of a tuple can be accessed by unpacking into multiple variables, by indexing, or, as in D, by field name.
Tuples support multiple assignment through a combination of packing and unpacking, in addition to the common sequence operations.

Swift \cite{Swift}, like D, provides named tuples, with components accessed by name, index, or via extractors.
Tuples are primarily used for returning multiple values from a function.
In Swift, @Void@ is an alias for the empty tuple, and there are no single-element tuples.

Tuples comparable to those described above are added to \CFA and discussed in Chapter 3.

\section{Variadic Functions}
\label{sec:variadic_functions}

In statically-typed programming languages, functions are typically defined to receive a fixed number of arguments of specified types.
Variadic argument functions provide the ability to define a function that can receive a theoretically unbounded number of arguments.

C provides a simple implementation of variadic functions.
A function whose parameter list ends with @, ...@ is a variadic function.
Among the most common variadic functions is @printf@.
\begin{cfacode}
int printf(const char * fmt, ...);
printf("%d %g %c %s", 10, 3.5, 'X', "a string");
\end{cfacode}
Through the use of a format string, C programmers can communicate argument-type information to @printf@, making it possible to print any of the standard C data types.
Still, @printf@ is extremely limited, since the format codes are specified by the C standard, meaning users cannot define their own format codes to extend @printf@ for new data types or new formatting rules.

\begin{sloppypar}
C provides manipulation of variadic arguments through the @va_list@ data type, which abstracts the details of variadic-argument manipulation.
Since the variadic arguments are untyped, it is up to the function to interpret any data that is passed in.
Additionally, the interface for manipulating @va_list@ objects is essentially limited to advancing to the next argument, without any built-in facility to determine when the last argument has been read.
This limitation requires the use of an \emph{argument descriptor} to pass information to the function about the structure of the argument list, including the number of arguments and their types.
The format string in @printf@ is one such example of an argument descriptor.
\begin{cfacode}
int f(const char * fmt, ...) {
  va_list args;
  va_start(args, fmt);  // initialize va_list
  for (const char * c = fmt; *c != '\0'; ++c) {
    if (*c == '%') {
      ++c;
      switch (*c) {
        case 'd': {
          int i = va_arg(args, int);  // have to specify type
          // ...
          break;
        }
        case 'g': {
          double d = va_arg(args, double);
          // ...
          break;
        }
        ...
      }
    }
  }
  va_end(args);
  return ...;
}
\end{cfacode}
Every case must be handled explicitly, since the @va_arg@ macro requires a type argument to determine how the next set of bytes is to be interpreted.
Furthermore, if the user makes a mistake, compile-time checking is typically restricted to standard format codes and their corresponding types.
In general, this means that C's variadic functions are not type-safe, making them difficult to use properly.
\end{sloppypar}

% When arguments are passed to a variadic function, they undergo \emph{default argument promotions}.
% Specifically, this means that

\CCeleven added support for \emph{variadic templates}, which add much-needed type-safety to C's variadic landscape.
It is possible to use variadic templates to define variadic functions and variadic data types.
\begin{cppcode}
void print(int);
void print(char);
void print(double);
...
void f() {} // base case
template<typename T, typename... Args>
void f(const T & arg, const Args &... rest) {
  print(arg);  // print the current element
  f(rest...);  // handle remaining arguments recursively
}
\end{cppcode}
Variadic templates work largely through recursion on the \emph{parameter pack}, which is the argument with @...@ following its type.
A parameter pack matches 0 or more elements, which can be types or expressions depending on the context.
Like other templates, variadic template functions rely on an implicit set of constraints on a type, in this example a @print@ routine.
That is, it is possible to use the @f@ routine on any type provided there is a corresponding @print@ routine, making variadic templates fully open to extension, unlike variadic functions in C.
Recent \CC standards (\CCfourteen, \CCseventeen) expand on the basic premise by allowing variadic template variables and providing convenient expansion syntax to remove the need for recursion in some cases, amongst other things.
% D has variadic templates that deserve a mention http://dlang.org/ctarguments.html

In Java, a variadic function appears similar to a C variadic function in syntax.
\begin{javacode}
int sum(int... args) {
  int s = 0;
  for (int x : args) {
    s += x;
  }
  return s;
}

void print(Object... objs) {
  for (Object obj : objs) {
    System.out.print(obj);
  }
}

print("The sum from 1 to 10 is ", sum(1,2,3,4,5,6,7,8,9,10), ".\n");
\end{javacode}
The key difference is that Java variadic functions are type-safe, because they specify the type of the arguments immediately prior to the ellipsis.
In Java, variadic arguments are syntactic sugar for arrays, allowing access to the length, subscripting operations, and for-each iteration on the variadic arguments, among other things.
Since the argument type is specified explicitly, the top-type @Object@ can be used to accept arguments of any type, but doing anything interesting with the argument requires a down-cast to a more specific type, landing Java in a similar situation to C in that writing a function open to extension is difficult.
The other option is to restrict the number of types that can be passed to the function by using a more specific type.
Unfortunately, Java's use of nominal inheritance means that types must explicitly inherit from classes or implement interfaces in order to be considered subtypes.
The combination of these two issues greatly restricts the usefulness of variadic functions in Java.

Type-safe variadic functions are added to \CFA and discussed in Chapter 4.

\section{Contributions}
\label{s:contributions}

No prior work on constructors or destructors had been done for \CFA; I did both the design and the implementation work.
While the overall design is based on constructors and destructors in object-oriented \CC, it had to be re-engineered into non-object-oriented \CFA.
I also had to make changes to the \CFA expression-resolver to integrate constructors and destructors into the type system.

Prior work on the design of tuples for \CFA was done by Till, and some initial implementation work by Esteves.
I largely followed Till's design, but added tuple indexing, which exists in a number of programming languages with tuples, simplified the implicit tuple conversions, and integrated tuples with the \CFA polymorphism and assertion-satisfaction model.
I did a new implementation of tuples, and extensively augmented the initial work by Bilson to incorporate tuples into the \CFA expression-resolver and type-unifier.

No prior work on variadic functions had been done for \CFA; I did both the design and the implementation work.
While the overall design is based on variadic templates in \CC, my design is novel in the way it is incorporated into the \CFA polymorphism model, and it is engineered into \CFA so that it dovetails with tuples.