%======================================================================
\chapter{Tuples}
%======================================================================

\section{Introduction}
% TODO: named return values are not currently implemented in CFA - tie in with named tuples? (future work)
% TODO: no passing input parameters by assignment, instead will have reference types => this is not a very C-like model and greatly complicates syntax for likely little gain (and would cause confusion with already supported return-by-reference)
% TODO: tuples are allowed in expressions, exact meaning is defined by operator overloading (e.g. can add tuples). An important caveat to note is that it is currently impossible to allow adding two triples but prevent adding a pair with a quadruple (single flattening/structuring conversions are implicit, only total number of components matters). May be able to solve this with more nuanced conversion rules (future work)
% TODO: benefits (conclusion) by Till: reduced number of variables and statements; no specified order of execution for multiple assignment (more optimization freedom); can store parameter lists in variable; MRV routines (natural code); more convenient assignment statements; simple and efficient access of record fields; named return values more legible and efficient in use of storage

\section{Multiple-Return-Value Functions}
\label{s:MRV_Functions}
In standard C, functions can return at most one value.
This restriction results in code that emulates functions with multiple return values by \emph{aggregation} or by \emph{aliasing}.
In the former situation, the function designer creates a record type that combines all of the return values into a single type.
For example, consider a function returning the most frequently occurring letter in a string, and its frequency.
% TODO: consider simplifying the example!
% Two things I like about this example: % * it uses different types to illustrate why an array is insufficient (this is not necessary, but is nice) % * it's complicated enough to show the uninitialized pitfall that exists in the aliasing example. % Still, it may be a touch too complicated. Is there a simpler example with these two properties? \begin{cfacode} struct mf_ret { int freq; char ch; }; struct mf_ret most_frequent(const char * str) { char freqs [26] = { 0 }; struct mf_ret ret = { 0, 'a' }; for (int i = 0; str[i] != '\0'; ++i) { if (isalpha(str[i])) { // only count letters int ch = tolower(str[i]); // convert to lower case int idx = ch-'a'; if (++freqs[idx] > ret.freq) { // update on new max ret.freq = freqs[idx]; ret.ch = ch; } } } return ret; } const char * str = "hello world"; struct mf_ret ret = most_frequent(str); printf("%s -- %d %c\n", str, ret.freq, ret.ch); \end{cfacode} Of note, the designer must come up with a name for the return type and for each of its fields. Unnecessary naming is a common programming language issue, introducing verbosity and a complication of the user's mental model. That is, adding another named type creates another association in the programmer's mind that needs to be kept track of when reading and writing code. As such, this technique is effective when used sparingly, but can quickly get out of hand if many functions need to return different combinations of types. In the latter approach, the designer simulates multiple return values by passing the additional return values as pointer parameters. The pointer parameters are assigned inside of the routine body to emulate a return. 
Using the same example, \begin{cfacode} int most_frequent(const char * str, char * ret_ch) { char freqs [26] = { 0 }; int ret_freq = 0; for (int i = 0; str[i] != '\0'; ++i) { if (isalpha(str[i])) { // only count letters int ch = tolower(str[i]); // convert to lower case int idx = ch-'a'; if (++freqs[idx] > ret_freq) { // update on new max ret_freq = freqs[idx]; *ret_ch = ch; // assign to out parameter } } } return ret_freq; // only one value returned directly } const char * str = "hello world"; char ch; // pre-allocate return value int freq = most_frequent(str, &ch); // pass return value as parameter printf("%s -- %d %c\n", str, freq, ch); \end{cfacode} Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values. This complicates the call site with a sequence of variable declarations leading up to the call. Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the routine signature whether the callee expects such a parameter to be initialized before the call. Furthermore, while many C routines that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, there are many C routines that are not null-safe. On a related note, C does not provide a standard mechanism to state that a parameter is going to be used as an additional return value, which makes the job of ensuring that a value is returned more difficult for the compiler. There is a subtle bug in the previous example, in that @ret_ch@ is never assigned for a string that does not contain any letters, which can lead to undefined behaviour. As with the previous approach, this technique can simulate multiple return values, but in practice it is verbose and error prone. In \CFA, functions can be declared to return multiple values with an extension to the function declaration syntax. 
Multiple return values are declared as a comma-separated list of types in square brackets in the same location that the return type appears in standard C function declarations.
The ability to return multiple values from a function requires a new syntax for the return statement.
For consistency, the return statement in \CFA accepts a comma-separated list of expressions in square brackets.
The expression resolution phase of the \CFA translator ensures that the correct form is used depending on the values being returned and the return type of the current function.
A multiple-returning function with return type @T@ can return any expression that is implicitly convertible to @T@.
Using the running example, the @most_frequent@ function can be written using multiple return values as follows,
\begin{cfacode}
[int, char] most_frequent(const char * str) {
  char freqs [26] = { 0 };
  int ret_freq = 0;
  char ret_ch = 'a';
  for (int i = 0; str[i] != '\0'; ++i) {
    if (isalpha(str[i])) {          // only count letters
      int ch = tolower(str[i]);     // convert to lower case
      int idx = ch-'a';
      if (++freqs[idx] > ret_freq) { // update on new max
        ret_freq = freqs[idx];
        ret_ch = ch;
      }
    }
  }
  return [ret_freq, ret_ch];
}
\end{cfacode}
This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type.
The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site.
The simplest mechanism for retaining a return value in C is variable assignment.
By assigning the return value into a variable, its value can be retrieved later at any point in the program.
As such, \CFA allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions on the left side.
\begin{cfacode} const char * str = "hello world"; int freq; char ch; [freq, ch] = most_frequent(str); // assign into multiple variables printf("%s -- %d %c\n", str, freq, ch); \end{cfacode} It is also common to use a function's output as the input to another function. \CFA also allows this case, without any new syntax. When a function call is passed as an argument to another call, the expression resolver attempts to find the best match of actual arguments to formal parameters given all of the possible expression interpretations in the current scope \cite{Bilson03}. For example, \begin{cfacode} void process(int); // (1) void process(char); // (2) void process(int, char); // (3) void process(char, int); // (4) process(most_frequent("hello world")); // selects (3) \end{cfacode} In this case, there is only one option for a function named @most_frequent@ that takes a string as input. This function returns two values, one @int@ and one @char@. There are four options for a function named @process@, but only two which accept two arguments, and of those the best match is (3), which is also an exact match. This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively. \section{Tuple Expressions} Multiple-return-value functions provide \CFA with a new syntax for expressing a combination of expressions in the return statement and a combination of types in a function signature. These notions can be generalized to provide \CFA with \emph{tuple expressions} and \emph{tuple types}. A tuple expression is an expression producing a fixed-size, ordered list of values of heterogeneous types. The type of a tuple expression is the tuple of the subexpression types, or a \emph{tuple type}. In \CFA, a tuple expression is denoted by a comma-separated list of expressions enclosed in square brackets. For example, the expression @[5, 'x', 10.5]@ has type @[int, char, double]@. 
The previous expression has 3 \emph{components}. Each component in a tuple expression can be any \CFA expression, including another tuple expression. % TODO: Tuple expressions can appear anywhere that any other expression can appear (...?) The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization. It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used. Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}. % TODO: does this statement still apply, and if so to what extent? % Tuples are a compile-time phenomenon and have little to no run-time presence. \subsection{Tuple Variables} The call-site of the @most_frequent@ routine has a notable blemish, in that it required the preallocation of return variables in a manner similar to the aliasing example, since it is impossible to declare multiple variables of different types in the same declaration in standard C. In \CFA, it is possible to overcome this restriction by declaring a \emph{tuple variable}. \begin{cfacode}[emph=ret, emphstyle=\color{red}] const char * str = "hello world"; [int, char] ret = most_frequent(str); // initialize tuple variable printf("%s -- %d %c\n", str, ret); \end{cfacode} It is now possible to accept multiple values into a single piece of storage, in much the same way that it was previously possible to pass multiple values from one function call to another. These variables can be used in any of the contexts where a tuple expression is allowed, such as in the @printf@ function call. As in the @process@ example, the components of the tuple value are passed as separate parameters to @printf@, allowing very simple printing of tuple expressions. If the individual components are required, they can be extracted with a simple assignment, as in previous examples. 
\begin{cfacode}
int freq;
char ch;
[freq, ch] = ret;
\end{cfacode}
In addition to variables of tuple type, it is also possible to have pointers to tuples, and arrays of tuples.
Tuple types can be composed of any types, except for array types, since arrays do not carry their size around, which makes tuple assignment difficult when a tuple contains an array.
\begin{cfacode}
[double, int] di;
[double, int] * pdi;
[double, int] adi[10];
\end{cfacode}
This example declares a variable of type @[double, int]@, a variable of type pointer to @[double, int]@, and an array of ten @[double, int]@.
\subsection{Tuple Indexing}
At times, it is desirable to access a single component of a tuple-valued expression without creating unnecessary temporary variables to assign to.
Given a tuple-valued expression @e@ and a compile-time constant integer $i$ where $0 \leq i < n$, where $n$ is the number of components in @e@, @e.i@ accesses the $i$\textsuperscript{th} component of @e@.
For example,
\begin{cfacode}
[int, double] x;
[char *, int] f();
void g(double, int);
[int, double] * p;

int y = x.0;  // access int component of x
y = f().1;    // access int component of f
p->0 = 5;     // access int component of tuple pointed-to by p
g(x.1, x.0);  // rearrange x to pass to g
double z = [x, f()].0.1;  // access second component of first component
                          // of tuple expression
\end{cfacode}
As seen above, tuple-index expressions can occur on any tuple-typed expression, including tuple-returning functions, square-bracketed tuple expressions, and other tuple-index expressions, provided the retrieved component is also a tuple.
This feature was proposed for \KWC but never implemented \cite[p.~45]{Till89}.
\subsection{Flattening and Structuring}
As evident in previous examples, tuples in \CFA do not have a rigid structure.
In function call contexts, tuples support implicit flattening and restructuring conversions.
Tuple flattening recursively expands a tuple into the list of its basic components.
Tuple structuring packages a list of expressions into a value of tuple type.
\begin{cfacode}
int f(int, int);
int g([int, int]);
int h(int, [int, int]);
[int, int] x;
int y;

f(x);      // flatten
g(y, 10);  // structure
h(x, y);   // flatten & structure
\end{cfacode}
In \CFA, each of these calls is valid.
In the call to @f@, @x@ is implicitly flattened so that the components of @x@ are passed as the two arguments to @f@.
For the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the type of the parameter of @g@.
Finally, in the call to @h@, @x@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@.
The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both single- and multiple-return-value functions, and with any number of arguments of arbitrarily complex structure.

In \KWC \cite{Buhr94a,Till89}, a precursor to \CFA, there were 4 tuple coercions: opening, closing, flattening, and structuring.
Opening coerces a tuple value into a tuple of values, while closing converts a tuple of values into a single tuple value.
Flattening coerces a nested tuple into a flat tuple, i.e. it takes a tuple with tuple components and expands it into a tuple with only non-tuple components.
Structuring moves in the opposite direction, i.e. it takes a flat tuple value and provides structure by introducing nested tuple components.

In \CFA, the design has been simplified to require only the two conversions previously described, which trigger only in function call and return situations.
Specifically, the expression resolution algorithm examines all of the possible alternatives for an expression to determine the best match.
In resolving a function call expression, each combination of function value and list of argument alternatives is examined.
Given a particular argument list and function value, the list of argument alternatives is flattened to produce a list of non-tuple valued expressions.
Then the flattened list of expressions is compared with each value in the function's parameter list.
If the parameter's type is not a tuple type, then the current argument value is unified with the parameter type, and on success the next argument and parameter are examined.
If the parameter's type is a tuple type, then the structuring conversion takes effect, recursively applying the parameter matching algorithm using the tuple's component types as the parameter list types.
Assuming a successful unification, eventually the algorithm gets to the end of the tuple type, which causes all of the matching expressions to be consumed and structured into a tuple expression.
For example, in
\begin{cfacode}
int f(int, [double, int]);
f([5, 10.2], 4);
\end{cfacode}
there is only a single definition of @f@, and 3 arguments with only single interpretations.
First, the argument alternative list @[5, 10.2], 4@ is flattened to produce the argument list @5, 10.2, 4@.
Next, the parameter matching algorithm begins, with $P = $@int@ and $A = $@int@, which unifies exactly.
Moving to the next parameter and argument, $P = $@[double, int]@ and $A = $@double@.
This time, the parameter is a tuple type, so the algorithm applies recursively with $P' = $@double@ and $A = $@double@, which unifies exactly.
Then $P' = $@int@ and $A = $@int@, which again unifies exactly.
At this point, the end of $P'$ has been reached, so the arguments @10.2, 4@ are structured into the tuple expression @[10.2, 4]@.
Finally, the end of the parameter list $P$ has also been reached, so the final expression is @f(5, [10.2, 4])@.
\section{Tuple Assignment} \label{s:TupleAssignment} An assignment where the left side of the assignment operator has a tuple type is called tuple assignment. There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called Multiple and Mass Assignment, respectively. \begin{cfacode} int x; double y; [int, double] z; [y, x] = 3.14; // mass assignment [x, y] = z; // multiple assignment z = 10; // mass assignment z = [x, y]; // multiple assignment \end{cfacode} Let $L_i$ for $i$ in $[0, n)$ represent each component of the flattened left side, $R_i$ represent each component of the flattened right side of a multiple assignment, and $R$ represent the right side of a mass assignment. For a multiple assignment to be valid, both tuples must have the same number of elements when flattened. Multiple assignment assigns $R_i$ to $L_i$ for each $i$. That is, @?=?(&$L_i$, $R_i$)@ must be a well-typed expression. In the previous example, @[x, y] = z@, @z@ is flattened into @z.0, z.1@, and the assignments @x = z.0@ and @y = z.1@ happen. A mass assignment assigns the value $R$ to each $L_i$. For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression. This differs from C cascading assignment (e.g. @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment. For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@. On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well. Both kinds of tuple assignment have parallel semantics, such that each value on the left side and right side is evaluated \emph{before} any assignments occur. 
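These parallel semantics can be pictured as evaluating every value into a compiler temporary before any assignment occurs.
As a sketch only, using the declarations above and hypothetical temporaries @t0@ and @t1@, the multiple assignment @[x, y] = z@ behaves like,
\begin{cfacode}
int t0 = z.0;     // evaluate all right-side values first
double t1 = z.1;
x = t0;           // only then perform the assignments
y = t1;
\end{cfacode}
No assignment is performed until every component value has been computed.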
As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function,
\begin{cfacode}
int x = 10, y = 20;
[x, y] = [y, x];
\end{cfacode}
After executing this code, @x@ has the value @20@ and @y@ has the value @10@.

In \CFA, tuple assignment is an expression where the result type is the type of the left side of the assignment, as in normal assignment.
That is, a tuple assignment produces the value of the left-hand side after assignment.
These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted.
These semantics are a change from the original tuple design in \KWC \cite{Till89}, wherein tuple assignment was a statement that allowed cascading assignments as a special case.
This decision was made in an attempt to fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position.
While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility.
Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user.
In another language, tuple assignment as a statement could be reasonable, but it would be inconsistent for tuple assignment to be the only kind of assignment that is not an expression.
In addition, \KWC permits the compiler to optimize tuple assignment as a block copy, since it does not support user-defined assignment operators.
This optimization could be implemented in \CFA, but it requires the compiler to verify that the selected assignment operator is trivial.
The following example shows multiple, mass, and cascading assignment used in one expression,
\begin{cfacode}
int a, b;
double c, d;
[void] f([int, int]);
f([c, a] = [b, d] = 1.5);  // assignments in parameter list
\end{cfacode}
The tuple expression begins with a mass assignment of @1.5@ into @[b, d]@, which assigns @1.5@ into @b@, which is truncated to @1@, and @1.5@ into @d@, producing the tuple @[1, 1.5]@ as a result.
That tuple is used as the right side of the multiple assignment (i.e., @[c, a] = [1, 1.5]@) that assigns @1@ into @c@ and @1.5@ into @a@, which is truncated to @1@, producing the result @[1, 1]@.
Finally, the tuple @[1, 1]@ is used as an expression in the call to @f@.

\subsection{Tuple Construction}
Tuple construction and destruction follow the same rules and semantics as tuple assignment, except that in the case where there is no right side, the default constructor or destructor is called on each component of the tuple.
\begin{cfacode}
struct S;
void ?{}(S *);          // (1)
void ?{}(S *, int);     // (2)
void ?{}(S *, double);  // (3)
void ?{}(S *, S);       // (4)

[S, S] x = [3, 6.28];  // uses (2), (3)
[S, S] y;              // uses (1), (1)
[S, S] z = x.0;        // uses (4), (4)
\end{cfacode}
In this example, @x@ is initialized by the multiple constructor calls @?{}(&x.0, 3)@ and @?{}(&x.1, 6.28)@, while @y@ is initialized by two default constructor calls @?{}(&y.0)@ and @?{}(&y.1)@.
@z@ is initialized by mass copy constructor calls @?{}(&z.0, x.0)@ and @?{}(&z.1, x.0)@.
Finally, @x@, @y@, and @z@ are destructed, i.e. the calls @^?{}(&x.0)@, @^?{}(&x.1)@, @^?{}(&y.0)@, @^?{}(&y.1)@, @^?{}(&z.0)@, and @^?{}(&z.1)@.

It is possible to define constructors and assignment functions for tuple types that provide new semantics, if the existing semantics do not fit the needs of an application.
For example, the function @void ?{}([T, U] *, S);@ can be defined to allow a tuple variable to be constructed from a value of type @S@.
\begin{cfacode}
struct S { int x; double y; };
void ?{}([int, double] * this, S s) {
  this->0 = s.x;
  this->1 = s.y;
}
\end{cfacode}
Due to the structure of generated constructors, it is possible to pass a tuple to a generated constructor for a type with a member prefix that matches the type of the tuple.
For example,
\begin{cfacode}
struct S { int x; double y; int z; };
[int, double] t;
S s = t;
\end{cfacode}
The initialization of @s@ with @t@ works by default, because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.

\section{Member-Access Tuple Expression}
\label{s:MemberAccessTuple}
It is possible to access multiple fields from a single expression using a \emph{Member-Access Tuple Expression}.
The result is a single tuple-valued expression whose type is the tuple of the types of the members.
For example,
\begin{cfacode}
struct S { int x; double y; char * z; } s;
s.[x, y, z];
\end{cfacode}
Here, the type of @s.[x, y, z]@ is @[int, double, char *]@.
A member tuple expression has the form @a.[x, y, z];@ where @a@ is an expression whose type @T@ supports member-access expressions, and @x, y, z@ are all members of @T@ with types @T$_x$@, @T$_y$@, and @T$_z$@ respectively.
Then the type of @a.[x, y, z]@ is @[T_x, T_y, T_z]@.

Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (e.g. rearrange components, drop components, duplicate components, etc.).
\begin{cfacode}
[int, int, long, double] x;
void f(double, long);

f(x.[0, 3]);          // f(x.0, x.3)
x.[0, 1] = x.[1, 0];  // [x.0, x.1] = [x.1, x.0]
[long, int, long] y = x.[2, 0, 2];
\end{cfacode}
It is possible for a member tuple expression to contain other member access expressions.
For example,
\begin{cfacode}
struct A { double i; int j; };
struct B { int * k; short l; };
struct C { int x; A y; B z; } v;
v.[x, y.[i, j], z.k];
\end{cfacode}
This expression is equivalent to @[v.x, [v.y.i, v.y.j], v.z.k]@.
That is, the aggregate expression is effectively distributed across the tuple, which allows simple and easy access to multiple components in an aggregate, without repetition.
It is guaranteed that the aggregate expression to the left of the @.@ in a member tuple expression is evaluated exactly once.
As such, it is safe to use member tuple expressions on the result of a side-effecting function.
\begin{cfacode}
[int, float, double] f();
[double, float] x = f().[2, 1];
\end{cfacode}
In \KWC, member tuple expressions are known as \emph{record field tuples} \cite{Till89}.
Since \CFA permits these tuple-access expressions using structures, unions, and tuples, \emph{member tuple expression} or \emph{field tuple expression} is more appropriate.

It could be possible to extend member-access expressions further.
Currently, a member-access expression whose member is a name requires that the aggregate is a structure or union, while a constant integer member requires the aggregate to be a tuple.
In the interest of orthogonal design, \CFA could apply some meaning to the remaining combinations as well.
For example,
\begin{cfacode}
struct S { int x, y; } s;
[S, S] z;

s.x;  // access member
z.0;  // access component

s.1;  // ???
z.y;  // ???
\end{cfacode}
One possibility is for @s.1@ to select the second member of @s@.
Under this interpretation, it becomes possible to not only access members of a struct by name, but also by position.
Likewise, it seems natural to open this mechanism to enumerations as well, wherein the left side would be a type, rather than an expression.
One benefit of this interpretation is familiarity, since it is extremely reminiscent of tuple-index expressions.
On the other hand, it could be argued that this interpretation is brittle, in that changing the order of members or adding new members to a structure silently changes the meaning of existing positional accesses.
This problem is less of a concern with tuples, since modifying a tuple affects only the code which directly uses that tuple, whereas modifying a structure has far-reaching consequences with every instance of the structure.

As for @z.y@, a natural interpretation would be to extend the meaning of member tuple expressions.
That is, currently the tuple must occur as the member, i.e. to the right of the dot.
Allowing tuples to the left of the dot could distribute the member across the elements of the tuple, in much the same way that member tuple expressions distribute the aggregate across the member tuple.
In this example, @z.y@ expands to @[z.0.y, z.1.y]@, allowing what is effectively a very limited compile-time field-sections map operation, where the argument must be a tuple containing only aggregates having a member named @y@.
It is questionable how useful this would actually be in practice, since generally structures are not designed to have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions, to maximize the amount of overlap between different types.
Perhaps more useful would be to allow arrays on the left side of the dot, which would likewise allow mapping a field access across the entire array, producing an array of the contained fields.
The immediate problem with this idea is that C arrays do not carry around their size, which would make it impossible to use this extension for anything other than a simple stack-allocated array.

Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member access expressions versus member tuple expressions.
\begin{cfacode}
struct S { int x, y; };
[S, S] z;
z.[x, y];  // ???
// => [z.0, z.1].[x, y]
// => [z.0.x, z.0.y, z.1.x, z.1.y]
// or
// => [z.x, z.y]
// => [[z.0, z.1].x, [z.0, z.1].y]
// => [z.0.x, z.1.x, z.0.y, z.1.y]
\end{cfacode}
Depending on exactly how the two tuples are combined, different results can be achieved.
As such, a specific ordering would need to be imposed in order for this feature to be useful.
Furthermore, this addition moves a member tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple.
Ultimately, both of these extensions introduce complexity into the model, with relatively little perceived benefit.

\section{Casting}
In C, the cast operator is used to explicitly convert between types.
In \CFA, the cast operator has a secondary use, which is type ascription.
That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function.
\begin{cfacode}
int f();     // (1)
double f();  // (2)

f();       // ambiguous - (1),(2) both equally viable
(int)f();  // choose (1)
\end{cfacode}
Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples.
Taking a look at standard C provides some guidance with respect to the way casts should work with tuples.
\begin{cfacode}[numbers=left]
int f();
void g();

(void)f();
(int)g();

struct A { int x; };
(struct A)f();
\end{cfacode}
In C, line 4 is a valid cast, which calls @f@ and discards its result.
On the other hand, line 5 is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite{C11}.
For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts which have the same or fewer number of components may be valid.

Formally, a cast to tuple type is valid when $T_n \leq S_m$, where $T_n$ is the number of components in the target type and $S_m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$.
Excess elements ($S_j$ for all $j$ in $[n, m)$) are evaluated, but their values are discarded so that they are not included in the result expression.
This discarding naturally follows the way that a cast to void works in C.

For example,
\begin{cfacode}
[int, int, int] f();
[int, [int, int], int] g();

([int, double])f();           // (1)
([int, int, int])g();         // (2)
([void, [int, int]])g();      // (3)
([int, int, int, int])g();    // (4)
([int, [int, int, int]])g();  // (5)
\end{cfacode}
(1) discards the last element of the return value and converts the second element to type double.
Since @int@ is effectively a 1-element tuple, (2) discards the second component of the second element of the return value of @g@.
If @g@ is free of side effects, this is equivalent to @[(int)(g().0), (int)(g().1.0), (int)(g().2)]@.
Since @void@ is effectively a 0-element tuple, (3) discards the first and third return values, which is effectively equivalent to @[(int)(g().1.0), (int)(g().1.1)]@.
% will this always hold true? probably, as constructors should give all of the conversion power we need. if casts become function calls, what would they look like? would need a way to specify the target type, which seems awkward. Also, C++ basically only has this because classes are closed to extension, while we don't have that problem (can have floating constructors for any type).
Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions.
As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3.
Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid.
That is, it is invalid to cast @[int, int]@ to @[int, int, int]@.

\section{Polymorphism}
Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types.
\begin{cfacode}
forall(otype T, dtype U)
void f(T x, U * y);

f([5, "hello"]);
\end{cfacode}
In this example, @[5, "hello"]@ is flattened, so that the argument list appears as @5, "hello"@.
The argument matching algorithm binds @T@ to @int@ and @U@ to @const char@, and calls the function as normal.

Tuples can contain @otype@ and @dtype@ components.
For example, a plus operator can be written to add two triples of a type together.
\begin{cfacode}
forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?([T, T, T] x, [T, T, T] y) {
  return [x.0+y.0, x.1+y.1, x.2+y.2];
}

[int, int, int] x;
int i1, i2, i3;
[i1, i2, i3] = x + ([10, 20, 30]);
\end{cfacode}
Note that due to the implicit tuple conversions, this function is not restricted to the addition of two triples.
A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type which can bind to @T@, with a pairwise @?+?@ over @T@.
For example, these expressions also succeed and produce the same value.
\begin{cfacode}
([x.0, x.1]) + ([x.2, 10, 20, 30]);
x.0 + ([x.1, x.2, 10, 20, 30]);
\end{cfacode}
This presents a potential problem if structure is important, as these three expressions look like they should have different meanings.
Further, these calls can be made ambiguous by adding seemingly different functions.
\begin{cfacode}
forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?([T, T] x, [T, T, T, T]);

forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?(T x, [T, T, T, T, T]);
\end{cfacode}
It is also important to note that these calls could be disambiguated if the function return types were different, as they likely would be for a reasonable implementation of @?+?@, since the return type is used in overload resolution.
Still, this is a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate.
It is possible that this could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions.
Care would be needed in this case to ensure that exact matches do not incur such a cost.
\begin{cfacode}
void f([int, int], int, int);

f([0, 0], 0, 0);    // no cost
f(0, 0, 0, 0);      // cost for structuring
f([0, 0], [0, 0]);  // cost for flattening
f([0, 0, 0], 0);    // cost for flattening and structuring
\end{cfacode}

Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e. no implicit conversions are applied to assertion arguments).
This decision presents a conflict with the flexibility of tuples.
\subsection{Assertion Inference}
\begin{cfacode}
int f([int, double], double);

forall(otype T, otype U | { T f(T, U, U); })
void g(T, U);

g(5, 10.21);
\end{cfacode}
If assertion arguments must match exactly, then the call to @g@ cannot be resolved, since the expected type of @f@ is flat, while the only @f@ in scope requires a tuple type.
Since tuples are fluid, this requirement reduces the usability of tuples in polymorphic code.
To ease this pain point, function parameter and return lists are flattened for the purposes of type unification, which allows the previous example to pass expression resolution.
This relaxation is made possible by extending the existing thunk generation scheme, as described by Bilson \cite{Bilson03}.
Now, whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function.
\begin{cfacode}
int _thunk(int _p0, double _p1, double _p2) {
  return f([_p0, _p1], _p2);
}
\end{cfacode}
Essentially, this provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism.

\section{Implementation}
Tuples are implemented in the \CFA translator via a transformation into generic types.
The first time an $N$-tuple is seen for each $N$ in a scope, a generic type with $N$ type parameters is generated.
For example,
\begin{cfacode}
[int, int] f() {
  [double, double] x;
  [int, double, int] y;
}
\end{cfacode}
Is transformed into
\begin{cfacode}
forall(dtype T0, dtype T1 | sized(T0) | sized(T1))
struct _tuple2 {  // generated before the first 2-tuple
  T0 field_0;
  T1 field_1;
};
_tuple2_(int, int) f() {
  _tuple2_(double, double) x;
  forall(dtype T0, dtype T1, dtype T2 | sized(T0) | sized(T1) | sized(T2))
  struct _tuple3 {  // generated before the first 3-tuple
    T0 field_0;
    T1 field_1;
    T2 field_2;
  };
  _tuple3_(int, double, int) y;
}
\end{cfacode}
Tuple expressions are then converted directly into compound literals
\begin{cfacode}
[5, 'x', 1.24];
\end{cfacode}
Becomes
\begin{cfacode}
(_tuple3_(int, char, double)){ 5, 'x', 1.24 };
\end{cfacode}
Since tuples are essentially structures, tuple indexing expressions are just field accesses.
\begin{cfacode}
void f(int, [double, char]);
[int, double] x;

x.0+x.1;
printf("%d %g\n", x);
f(x, 'z');
\end{cfacode}
Is transformed into
\begin{cfacode}
void f(int, _tuple2_(double, char));
_tuple2_(int, double) x;

x.field_0+x.field_1;
printf("%d %g\n", x.field_0, x.field_1);
f(x.field_0, (_tuple2_(double, char)){ x.field_1, 'z' });
\end{cfacode}
Note that due to flattening, @x@ used in the argument position is converted into the list of its fields.
In the call to @f@, the second and third argument components are structured into a tuple argument.

Expressions which may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion.
Each unique expression is assigned an identifier and is guaranteed to be executed exactly once.
\begin{cfacode}
void g(int, double);
[int, double] h();

g(h());
\end{cfacode}
Internally, this is converted to
\begin{cfacode}
void g(int, double);
[int, double] h();

let unq<0> = h() : g(unq<0>.0, unq<0>.1);
\end{cfacode}
Ultimately, unique expressions are converted into two variables and an expression.
\begin{cfacode}
void g(int, double);
[int, double] h();

_Bool _unq0_finished_ = 0;
[int, double] _unq0;
g(
  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1
);
\end{cfacode}
Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order.
The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true.
Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression.
Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure.
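The guard-variable scheme can be sketched directly in \CC: a flag plus a cached value guarantee that the effectful expression runs exactly once, no matter which component is accessed first or how often. This is an illustrative analogue (the names @h@, @g@, and the counters are hypothetical), not the translator's actual output:

```cpp
#include <cassert>
#include <tuple>

static int h_calls = 0;                  // observe how often h runs

std::tuple<int, double> h() { ++h_calls; return {7, 2.5}; }

static int g_first; static double g_second;
void g(int x, double y) { g_first = x; g_second = y; }

void call_g_with_h() {
    bool unq0_finished = false;          // plays the role of _unq0_finished_
    std::tuple<int, double> unq0;        // plays the role of _unq0
    auto unq = [&]() -> std::tuple<int, double>& {
        if (!unq0_finished) { unq0 = h(); unq0_finished = true; }
        return unq0;                     // later accesses reuse the cache
    };
    // g(h()) after flattening: each component access goes through unq()
    g(std::get<0>(unq()), std::get<1>(unq()));
}
```

Each call of @unq()@ corresponds to one occurrence of the unique expression; the flag exists only because the translator conservatively treats every function call as potentially impure.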
This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions, and could analyze function bodies, when they are available, to recursively detect impurity and eliminate some unique expressions.
It is possible that unique expressions could be exposed to the user through a language feature, but it is not immediately obvious that there is a benefit to doing so.

Tuple member expressions are recursively expanded into a list of member access expressions.
\begin{cfacode}
[int, [double, int, double], int] x;
x.[0, 1.[0, 2]];
\end{cfacode}
Which becomes
\begin{cfacode}
[x.0, [x.1.0, x.1.2]];
\end{cfacode}
Tuple member expressions also take advantage of unique expressions in the case of possible impurity.

Finally, the various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
For example, a mass assignment
\begin{cfacode}
int x, z;
double y;
[double, double] f();

[x, y, z] = 1.5;  // mass assignment
\end{cfacode}
Generates the following
\begin{cfacode}
// [x, y, z] = 1.5;
_tuple3_(int, double, int) _tmp_stmtexpr_ret0;
({
  // assign LHS address temporaries
  int *__massassign_L0 = &x;     // ?{}
  double *__massassign_L1 = &y;  // ?{}
  int *__massassign_L2 = &z;     // ?{}

  // assign RHS value temporary
  double __massassign_R0 = 1.5;  // ?{}

  ({ // tuple construction - construct statement expr return variable
    // assign LHS address temporaries
    int *__multassign_L0 = (int *)&_tmp_stmtexpr_ret0.0;        // ?{}
    double *__multassign_L1 = (double *)&_tmp_stmtexpr_ret0.1;  // ?{}
    int *__multassign_L2 = (int *)&_tmp_stmtexpr_ret0.2;        // ?{}

    // assign RHS value temporaries and perform mass assignment to L0, L1, L2
    int __multassign_R0 = (*__massassign_L0=(int)__massassign_R0);  // ?{}
    double __multassign_R1 = (*__massassign_L1=__massassign_R0);    // ?{}
    int __multassign_R2 = (*__massassign_L2=(int)__massassign_R0);  // ?{}

    // perform construction of statement expr return variable using
    // RHS value temporary
    ((*__multassign_L0 = __multassign_R0 /* ?{} */),
     (*__multassign_L1 = __multassign_R1 /* ?{} */),
     (*__multassign_L2 = __multassign_R2 /* ?{} */));
  });
  _tmp_stmtexpr_ret0;
});
({ // tuple destruction - destruct assign expr value
  int *__massassign_L3 = (int *)&_tmp_stmtexpr_ret0.0;        // ?{}
  double *__massassign_L4 = (double *)&_tmp_stmtexpr_ret0.1;  // ?{}
  int *__massassign_L5 = (int *)&_tmp_stmtexpr_ret0.2;        // ?{}
  ((*__massassign_L3 /* ^?{} */),
   (*__massassign_L4 /* ^?{} */),
   (*__massassign_L5 /* ^?{} */));
});
\end{cfacode}
A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, e.g. in a unique expression.
$N$ LHS variables are generated and constructed using the addresses of the tuple components, and a single RHS variable is generated to store the value of the RHS without any loss of precision.
A nested statement expression is generated that performs the individual assignments and constructs the return value using the results of the individual assignments.
Finally, the statement expression temporary is destroyed at the end of the expression.

Similarly, a multiple assignment
\begin{cfacode}
[x, y, z] = [f(), 3];  // multiple assignment
\end{cfacode}
Generates
\begin{cfacode}
// [x, y, z] = [f(), 3];
_tuple3_(int, double, int) _tmp_stmtexpr_ret0;
({
  // assign LHS address temporaries
  int *__multassign_L0 = &x;     // ?{}
  double *__multassign_L1 = &y;  // ?{}
  int *__multassign_L2 = &z;     // ?{}

  // assign RHS value temporaries
  _tuple2_(double, double) _tmp_cp_ret0;
  _Bool _unq0_finished_ = 0;
  double __multassign_R0 =
    (_unq0_finished_ ? _tmp_cp_ret0 :
      (_tmp_cp_ret0=f(), _unq0_finished_=1, _tmp_cp_ret0)).0;  // ?{}
  double __multassign_R1 =
    (_unq0_finished_ ? _tmp_cp_ret0 :
      (_tmp_cp_ret0=f(), _unq0_finished_=1, _tmp_cp_ret0)).1;  // ?{}
  ({ // tuple destruction - destruct f() return temporary
    // assign LHS address temporaries
    double *__massassign_L3 = (double *)&_tmp_cp_ret0.0;  // ?{}
    double *__massassign_L4 = (double *)&_tmp_cp_ret0.1;  // ?{}
    // perform destructions - intrinsic, so NOP
    ((*__massassign_L3 /* ^?{} */),
     (*__massassign_L4 /* ^?{} */));
  });
  int __multassign_R2 = 3;  // ?{}

  ({ // tuple construction - construct statement expr return variable
    // assign LHS address temporaries
    int *__multassign_L3 = (int *)&_tmp_stmtexpr_ret0.0;        // ?{}
    double *__multassign_L4 = (double *)&_tmp_stmtexpr_ret0.1;  // ?{}
    int *__multassign_L5 = (int *)&_tmp_stmtexpr_ret0.2;        // ?{}
    // assign RHS value temporaries and perform multiple assignment to L0, L1, L2
    int __multassign_R3 = (*__multassign_L0=(int)__multassign_R0);  // ?{}
    double __multassign_R4 = (*__multassign_L1=__multassign_R1);    // ?{}
    int __multassign_R5 = (*__multassign_L2=__multassign_R2);       // ?{}
    // perform construction of statement expr return variable using
    // RHS value temporaries
    ((*__multassign_L3=__multassign_R3 /* ?{} */),
     (*__multassign_L4=__multassign_R4 /* ?{} */),
     (*__multassign_L5=__multassign_R5 /* ?{} */));
  });
  _tmp_stmtexpr_ret0;
});
({ // tuple destruction - destruct assign expr value
  // assign LHS address temporaries
  int *__massassign_L5 = (int *)&_tmp_stmtexpr_ret0.0;        // ?{}
  double *__massassign_L6 = (double *)&_tmp_stmtexpr_ret0.1;  // ?{}
  int *__massassign_L7 = (int *)&_tmp_stmtexpr_ret0.2;        // ?{}
  // perform destructions - intrinsic, so NOP
  ((*__massassign_L5 /* ^?{} */),
   (*__massassign_L6 /* ^?{} */),
   (*__massassign_L7 /* ^?{} */));
});
\end{cfacode}
The difference here is that $N$ RHS values are stored into separate temporary variables.
The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
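For comparison, \CC expresses a similar multiple assignment with library tuples rather than generated statement expressions: @std::tie@ builds a tuple of references, @std::tuple_cat@ performs the flattening of @f()@'s pair, and assignment then proceeds componentwise, narrowing @double@ to @int@ just as the generated code does. A sketch under hypothetical names (@f@, @multiple_assign@), not \CFA output:

```cpp
#include <tuple>

std::tuple<double, double> f() { return {1.5, 2.5}; }

std::tuple<int, double, int> multiple_assign() {
    int x; double y; int z;
    // [x, y, z] = [f(), 3];  -- flatten f()'s pair, then assign componentwise
    std::tie(x, y, z) = std::tuple_cat(f(), std::make_tuple(3));
    return {x, y, z};        // x == 1 (1.5 narrowed to int), y == 2.5, z == 3
}
```

The library tuple supplies the temporaries that the \CFA translator instead obtains from GNU C statement expressions.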
There are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.

\section{Variadic Functions}
% TODO: should this maybe be its own chapter?
C provides variadic functions through the manipulation of @va_list@ objects.
A variadic function is one which contains at least one parameter, followed by @...@ as the last token in the parameter list.
In particular, some form of \emph{argument descriptor} is needed to inform the function of the number of arguments and their types.
Two common argument descriptors are format strings and counter parameters.
It is important to note that both of these mechanisms are inherently redundant, because they require the user to specify information that the compiler already knows.
This required repetition is error prone, because it is easy for the user to add or remove arguments without updating the argument descriptor.
In addition, C requires the programmer to hard code all of the possible expected types.
As a result, it is cumbersome to write a function that is open to extension.
For example, a simple function which sums $N$ @int@s,
\begin{cfacode}
int sum(int N, ...) {
  va_list args;
  va_start(args, N);
  int ret = 0;
  while(N) {
    ret += va_arg(args, int);  // have to specify type
    N--;
  }
  va_end(args);
  return ret;
}

sum(3, 10, 20, 30);  // need to keep counter in sync
\end{cfacode}
The @va_list@ type is a special C data type that abstracts variadic argument manipulation.
The @va_start@ macro initializes a @va_list@, given the last named parameter.
Each use of the @va_arg@ macro allows access to the next variadic argument, given a type.
Since the function signature does not provide any information on what types can be passed to a variadic function, the compiler does not perform any error checks on a variadic call.
As such, it is possible to pass any value to the @sum@ function, including pointers, floating-point numbers, and structures.
In the case where the provided type is not compatible with the argument's actual type after default argument promotions, or if too many arguments are accessed, the behaviour is undefined \cite{C11}.
Furthermore, there is no way to perform the necessary error checks in the @sum@ function at run-time, since type information is not carried into the function body.
Since they rely on programmer convention rather than compile-time checks, variadic functions are generally unsafe.

In practice, compilers can provide warnings to help mitigate some of the problems.
For example, GCC provides the @format@ attribute to specify that a function uses a format string, which allows the compiler to perform some checks related to the standard format specifiers.
Unfortunately, this does not permit extensions to the format string syntax, so a programmer cannot extend the attribute to warn for mismatches with custom types.

Needless to say, C's variadic functions are a deficient language feature.
Two options were examined to provide better, type-safe variadic functions in \CFA.
\subsection{Whole Tuple Matching}
Option 1 is to change the argument matching algorithm, so that type parameters can match whole tuples, rather than just their components.
This option could be implemented with two phases of argument matching when a function contains type parameters and the argument list contains tuple arguments.
If flattening and structuring fail to produce a match, a second attempt at matching the function and argument combination is made where tuple arguments are not expanded and structure must match exactly, modulo non-tuple implicit conversions.
For example:
\begin{cfacode}
forall(otype T, otype U | { T g(U); })
void f(T, U);

[int, int] g([int, int, int, int]);

f([1, 2], [3, 4, 5, 6]);
\end{cfacode}
With flattening and structuring, the call is first transformed into @f(1, 2, 3, 4, 5, 6)@.
Since the first argument of type @T@ does not have a tuple type, unification decides that @T=int@ and @1@ is matched as the first parameter.
Likewise, @U@ does not have a tuple type, so @U=int@ and @2@ is accepted as the second parameter.
There are now no remaining formal parameters, but there are remaining arguments and the function is not variadic, so the match fails.
With the addition of an exact matching attempt, @T=[int,int]@ and @U=[int,int,int,int]@, and so the arguments type check.
Likewise, when inferring assertion @g@, an exact match is found.

This approach is strict with respect to argument structure by nature, which makes it syntactically awkward to use in ways that the existing tuple design is not.
For example, consider a @new@ function that allocates memory using @malloc@ and constructs the result, using arbitrary arguments.
\begin{cfacode}
struct Array;
void ?{}(Array *, int, int, int);

forall(dtype T, otype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p };
}

Array * x = new([1, 2, 3]);
\end{cfacode}
The call to @new@ is not particularly appealing, since it requires the use of square brackets at the call-site, which is not required in any other function call.
This shifts the burden from the compiler to the programmer, which is almost always wrong, and creates an odd inconsistency within the language.
Similarly, in order to pass 0 variadic arguments, an explicit empty tuple must be passed into the argument list, since otherwise the exact matching rule would not have an argument to bind against.

It should also be noted that the addition of an exact matching rule only affects the outcome for polymorphic type binding when tuples are involved.
For non-tuple arguments, exact matching and flattening \& structuring are equivalent.
For tuple arguments to a function without polymorphic formal parameters, flattening and structuring work whenever an exact match would have worked, since the tuple is flattened and implicitly restructured to its original structure.
Thus there is nothing to be gained from permitting the exact matching rule to take effect when a function does not contain polymorphism and none of the arguments are tuples.
Overall, this option takes a step in the right direction, but is contrary to the flexibility of the existing tuple design.

\subsection{A New Typeclass}
A second option is the addition of another kind of type parameter, @ttype@.
Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
A parameter list may contain at most one @ttype@ parameter, which must occur last; otherwise, by the previous rule, a call to the function could never resolve.
% TODO: a similar rule exists in C/C++ for "..."
This idea essentially matches normal variadic semantics, with a strong feeling of similarity to \CCeleven variadic templates.
As such, @ttype@ variables are also referred to as argument packs.
This is the option that has been added to \CFA.

Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is through recursion.
Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful.
Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.
For example, a simple translation of the C sum function using @ttype@ would look like
\begin{cfacode}
int sum() { return 0; }  // (0)
forall(ttype Params | { int sum(Params); })
int sum(int x, Params rest) {  // (1)
  return x+sum(rest);
}

sum(10, 20, 30);
\end{cfacode}
Since (0) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
In order to call (1), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list, binding @Params@ to @[20, 30]@.
In order to finish the resolution of @sum@, an assertion parameter which matches @int sum(int, int)@ is required.
As in the previous iteration, (0) is not a valid candidate, so (1) is examined with @Params@ bound to @[int]@, requiring the assertion @int sum(int)@.
Next, (0) fails, and to satisfy (1) @Params@ is bound to @[]@, requiring an assertion @int sum()@.
Finally, (0) matches and (1) fails, which terminates the recursion.
Effectively, this traces as @sum(10, 20, 30)@ $\rightarrow$ @10+sum(20, 30)@ $\rightarrow$ @10+(20+sum(30))@ $\rightarrow$ @10+(20+(30+sum()))@ $\rightarrow$ @10+(20+(30+0))@.
A point of note is that this version does not require any form of argument descriptor, since the \CFA type system keeps track of all of these details.

It might be reasonable to take the @sum@ function a step further to enforce a minimum number of arguments, which can be done simply
\begin{cfacode}
int sum(int x, int y) { return x+y; }
forall(ttype Params | { int sum(int, Params); })
int sum(int x, int y, Params rest) {
  return sum(x+y, rest);
}

sum(10, 20, 30);
\end{cfacode}
One more iteration permits the summation of any summable type, as long as all arguments are the same type.
\begin{cfacode}
trait summable(otype T) {
  T ?+?(T, T);
};
forall(otype R | summable(R))
R sum(R x, R y) {
  return x+y;
}
forall(otype R, ttype Params | summable(R) | { R sum(R, Params); })
R sum(R x, R y, Params rest) {
  return sum(x+y, rest);
}

sum(3, 10, 20, 30);
\end{cfacode}
Unlike C, it is not necessary to hard code the expected type.
This is naturally open to extension, in that any user-defined type with a @?+?@ operator is automatically able to be used with the @sum@ function.
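The recursive @ttype@ pattern above maps almost one-for-one onto \CCeleven variadic templates; a sketch of the same-type @sum@ for comparison (the overloads below are hypothetical, and each instantiation is generated at compile time, where the \CFA versions are separately compiled):

```cpp
#include <cassert>

// base case: two arguments, mirroring CFA's minimum-argument variant
template<typename T>
T sum(T x, T y) { return x + y; }

// recursive case: peel two arguments, recurse on the remaining pack
template<typename T, typename... Rest>
T sum(T x, T y, Rest... rest) {
    return sum(x + y, rest...);
}
```

As in the \CFA version, extension is automatic: any type with a suitable @+@ operator participates, and mixed-type argument lists are rejected because @T@ cannot be deduced consistently.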
That is to say, the programmer who writes @sum@ does not need full program knowledge of every possible data type, unlike what is necessary to write an equivalent function using the standard C mechanisms.
Going one last step, it is possible to achieve full generality in \CFA, allowing the summation of arbitrary lists of summable types.
\begin{cfacode}
trait summable(otype T1, otype T2, otype R) {
  R ?+?(T1, T2);
};
forall(otype T1, otype T2, otype R | summable(T1, T2, R))
R sum(T1 x, T2 y) {
  return x+y;
}
forall(otype T1, otype T2, otype T3, ttype Params, otype R
  | summable(T1, T2, T3) | { R sum(T3, Params); })
R sum(T1 x, T2 y, Params rest) {
  return sum(x+y, rest);
}

sum(3, 10.5, 20, 30.3);
\end{cfacode}
The \CFA translator requires adding explicit @double ?+?(int, double)@ and @double ?+?(double, int)@ functions for this call to work, since implicit conversions are not supported for assertions.

C variadic syntax and @ttype@ polymorphism probably should not be mixed, since it is not clear where to draw the line to decide which arguments belong where.
Furthermore, it might be desirable to disallow polymorphic functions from using C variadic syntax, to encourage a \CFA style.
Aside from calling C variadic functions, it is not obvious that there is anything that can be done with C variadics that could not also be done with @ttype@ parameters.

Variadic templates in \CC require an ellipsis token to express that a parameter is a parameter pack and to expand a parameter pack.
\CFA does not need an ellipsis in either case, since the type class @ttype@ is only used for variadics.
An alternative design could have used an ellipsis combined with an existing type class.
This approach was not taken because the largest benefit of the ellipsis token in \CC is the ability to expand a parameter pack within an expression, e.g. in fold expressions, which requires compile-time knowledge of the structure of the parameter pack, which is not available in \CFA.
\begin{cppcode}
template<typename... Args>
void f(Args &... args) {
  g(&args...);  // expand to addresses of pack elements
}
\end{cppcode}
As such, the addition of an ellipsis token would be purely an aesthetic change in \CFA today.

It is possible to write a type-safe variadic print routine which can replace @printf@
\begin{cfacode}
struct S { int x, y; };
forall(otype T, ttype Params | { void print(T); void print(Params); })
void print(T arg, Params rest) {
  print(arg);
  print(rest);
}
void print(char * x) { printf("%s", x); }
void print(int x) { printf("%d", x); }
void print(S s) { print("{ ", s.x, ",", s.y, " }"); }

print("s = ", (S){ 1, 2 }, "\n");
\end{cfacode}
This example routine showcases a variadic-template-like decomposition of the provided argument list.
The individual @print@ routines allow printing a single element of a type.
The polymorphic @print@ allows printing any list of types, as long as each individual type has a @print@ function.
The individual print functions can be used to build up more complicated @print@ routines, such as the one for @S@, which is something that cannot be done with @printf@ in C.

It is also possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions.
For example, it is possible to write @new@ as a library function, combining type-safe @malloc@ with constructor calls.
\begin{cfacode}
struct Array;
void ?{}(Array *, int, int, int);

forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p };  // construct result of malloc
}

Array * x = new(1, 2, 3);
\end{cfacode}
The @new@ function provides the combination of type-safe @malloc@ with a constructor call, so that it becomes impossible to forget to construct dynamically allocated objects.
This provides the type-safety of @new@ in \CC, without the need to specify the allocated type, thanks to return-type inference.
In the call to @new@, @Array@ is selected to match @T@, and @Params@ is expanded to match @[int, int, int]@.
To satisfy the assertions, a constructor with an interface compatible with @void ?{}(Array *, int, int, int)@ must exist in the current scope.

\subsection{Implementation}
The definition of @new@
\begin{cfacode}
forall(dtype T | sized(T)) T * malloc();

forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p };  // construct result of malloc
}
\end{cfacode}
Generates the following
\begin{cfacode}
void *malloc(long unsigned int _sizeof_T, long unsigned int _alignof_T);

void *new(
  void (*_adapter_)(void (*)(), void *, void *),
  long unsigned int _sizeof_T,
  long unsigned int _alignof_T,
  long unsigned int _sizeof_Params,
  long unsigned int _alignof_Params,
  void (*_ctor_T)(void *, void *),
  void *p
){
  void *_retval_new;
  void *_tmp_cp_ret0;
  void *_tmp_ctor_expr0;
  _retval_new =
    (_adapter_(_ctor_T,
      (_tmp_ctor_expr0 =
        (_tmp_cp_ret0 = malloc(_sizeof_T, _alignof_T), _tmp_cp_ret0)),
      p), _tmp_ctor_expr0);  // ?{}
  *(void **)&_tmp_cp_ret0;   // ^?{}
  return _retval_new;
}
\end{cfacode}
The constructor for @T@ is called indirectly through the adapter function on the result of @malloc@ and the parameter pack.
The variable that was allocated and constructed is then returned from @new@.
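The adapter technique itself can be sketched in portable terms: a uniform entry point receives the real function as a generic function pointer plus @void *@ arguments, and casts both back to their true types before the call. This is an illustrative reconstruction under hypothetical names (@S@, @ctor_S@, @adapter@), not the translator's exact output; round-tripping a function pointer through another function-pointer type and calling it at its original type is well defined:

```cpp
#include <cassert>

struct S { int x, y; };

// plays the role of the constructor ?{}(S *, int, int)
void ctor_S(S * p, int a, int b) { p->x = a; p->y = b; }

// the generic function-pointer type the polymorphic caller traffics in
typedef void (*generic_fn)();

void adapter(generic_fn adaptee, void * p0, void * p1, void * p2) {
    // cast the adaptee back to its real signature, and the erased
    // void * arguments back to their real types, before calling
    ((void (*)(S *, int, int))adaptee)(
        (S *)p0, *(int *)p1, *(int *)p2);
}
```

A polymorphic function compiled separately from its callers can only see the generic signature, which is why every call to a type parameter's operation goes through such an adapter.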
A call to @new@
\begin{cfacode}
struct S { int x, y; };
void ?{}(S *, int, int);

S * s = new(3, 4);
\end{cfacode}
Generates the following
\begin{cfacode}
struct _tuple2_ {  // _tuple2_(T0, T1)
  void *field_0;
  void *field_1;
};
struct _conc__tuple2_0 {  // _tuple2_(int, int)
  int field_0;
  int field_1;
};
struct _conc__tuple2_0 _tmp_cp1;  // tuple argument to new
struct S *_tmp_cp_ret1;           // return value from new

void _thunk0(  // ?{}(S *, [int, int])
  struct S *_p0,
  struct _conc__tuple2_0 _p1
){
  _ctor_S(_p0, _p1.field_0, _p1.field_1);  // restructure tuple parameter
}

void _adapter(void (*_adaptee)(), void *_p0, void *_p1){
  // apply adaptee to arguments after casting to actual types
  ((void (*)(struct S *, struct _conc__tuple2_0))_adaptee)(
    _p0,
    *(struct _conc__tuple2_0 *)_p1
  );
}

struct S *s = (struct S *)(_tmp_cp_ret1 =
  new(
    _adapter,
    sizeof(struct S),
    __alignof__(struct S),
    sizeof(struct _conc__tuple2_0),
    __alignof__(struct _conc__tuple2_0),
    (void (*)(void *, void *))&_thunk0,
    (({ // copy construct tuple argument to new
      int *__multassign_L0 = (int *)&_tmp_cp1.field_0;
      int *__multassign_L1 = (int *)&_tmp_cp1.field_1;
      int __multassign_R0 = 3;
      int __multassign_R1 = 4;
      ((*__multassign_L0=__multassign_R0 /* ?{} */),
       (*__multassign_L1=__multassign_R1 /* ?{} */));
    }), &_tmp_cp1)
  ), _tmp_cp_ret1);
*(struct S **)&_tmp_cp_ret1;  // ^?{} -- destroy return value from new
({ // destroy argument temporary
  int *__massassign_L0 = (int *)&_tmp_cp1.field_0;
  int *__massassign_L1 = (int *)&_tmp_cp1.field_1;
  ((*__massassign_L0 /* ^?{} */),
   (*__massassign_L1 /* ^?{} */));
});
\end{cfacode}
Of note, @_thunk0@ is generated to translate calls to @?{}(S *, [int, int])@ into calls to @?{}(S *, int, int)@.
The call to @new@ constructs a tuple argument using the supplied arguments.
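For reference, \CC reaches a similar end (allocation plus guaranteed construction) with a forwarding function template; a sketch of the analogue under a hypothetical name @make@, in the style of @std::make_unique@:

```cpp
#include <cassert>
#include <utility>

struct S { int x, y; S(int a, int b) : x(a), y(b) {} };

// allocate a T and construct it from the forwarded argument pack,
// so an unconstructed allocation cannot be observed
template<typename T, typename... Args>
T * make(Args &&... args) {
    return new T(std::forward<Args>(args)...);
}
```

One notable difference: \CC cannot select overloads by return type, so the allocated type must be spelled at the call site (@make<S>(3, 4)@), where \CFA's @new@ deduces it from the context of the call.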
The @print@ function
\begin{cfacode}
forall(otype T, ttype Params | { void print(T); void print(Params); })
void print(T arg, Params rest) {
  print(arg);
  print(rest);
}
\end{cfacode}
Generates
\begin{cfacode}
void print_variadic(
  void (*_adapterF_7tParams__P)(void (*)(), void *),
  void (*_adapterF_2tT__P)(void (*)(), void *),
  void (*_adapterF_P2tT2tT__MP)(void (*)(), void *, void *),
  void (*_adapterF2tT_P2tT2tT_P_MP)(void (*)(), void *, void *, void *),
  long unsigned int _sizeof_T,
  long unsigned int _alignof_T,
  long unsigned int _sizeof_Params,
  long unsigned int _alignof_Params,
  void *(*_assign_TT)(void *, void *),
  void (*_ctor_T)(void *),
  void (*_ctor_TT)(void *, void *),
  void (*_dtor_T)(void *),
  void (*print_T)(void *),
  void (*print_Params)(void *),
  void *arg,
  void *rest
){
  void *_tmp_cp0 = __builtin_alloca(_sizeof_T);
  _adapterF_2tT__P(  // print(arg)
    ((void (*)())print_T),
    (_adapterF_P2tT2tT__MP(  // copy construct argument
      ((void (*)())_ctor_TT),
      _tmp_cp0,
      arg
    ), _tmp_cp0)
  );
  _dtor_T(_tmp_cp0);  // destroy argument temporary
  _adapterF_7tParams__P(  // print(rest)
    ((void (*)())print_Params),
    rest
  );
}
\end{cfacode}
The @print_T@ routine is called indirectly through an adapter function with a copy constructed argument, followed by an indirect call to @print_Params@.
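By contrast, the \CCeleven rendering of the same @print@ needs no adapters: the compiler stamps out one concrete instantiation per argument-list shape. A sketch, writing into a string so the behaviour is observable (the names @out@, @print@ are hypothetical):

```cpp
#include <sstream>
#include <string>

static std::ostringstream out;   // capture output for inspection

void print() {}                  // base case: empty argument pack

// recursive case: print the first argument, recurse on the rest
template<typename T, typename... Rest>
void print(const T & arg, Rest... rest) {
    out << arg;                  // print(arg)
    print(rest...);              // print(rest)
}
```

Each distinct call shape (e.g. @print("x = ", 123, ".")@) produces its own instantiation at compile time, which is exactly the pack-structure knowledge that separate compilation denies to \CFA and that the adapter scheme above works around.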
A call to print
\begin{cfacode}
void print(const char * x) { printf("%s", x); }
void print(int x) { printf("%d", x); }

print("x = ", 123, ".\n");
\end{cfacode}
Generates the following
\begin{cfacode}
void print_string(const char *x){
  int _tmp_cp_ret0;
  (_tmp_cp_ret0=printf("%s", x)) , _tmp_cp_ret0;
  *(int *)&_tmp_cp_ret0;  // ^?{}
}
void print_int(int x){
  int _tmp_cp_ret1;
  (_tmp_cp_ret1=printf("%d", x)) , _tmp_cp_ret1;
  *(int *)&_tmp_cp_ret1;  // ^?{}
}

struct _tuple2_ {  // _tuple2_(T0, T1)
  void *field_0;
  void *field_1;
};
struct _conc__tuple2_0 {  // _tuple2_(int, const char *)
  int field_0;
  const char *field_1;
};
struct _conc__tuple2_0 _tmp_cp6;  // _tuple2_(int, const char *)

const char *_thunk0(const char **_p0, const char *_p1){
  // const char * ?=?(const char **, const char *)
  return *_p0=_p1;
}
void _thunk1(const char **_p0){  // void ?{}(const char **)
  *_p0;  // ?{}
}
void _thunk2(const char **_p0, const char *_p1){
  // void ?{}(const char **, const char *)
  *_p0=_p1;  // ?{}
}
void _thunk3(const char **_p0){  // void ^?{}(const char **)
  *_p0;  // ^?{}
}
void _thunk4(struct _conc__tuple2_0 _p0){  // void print([int, const char *])
  struct _tuple1_ {  // _tuple1_(T0)
    void *field_0;
  };
  struct _conc__tuple1_1 {  // _tuple1_(const char *)
    const char *field_0;
  };
  void _thunk5(struct _conc__tuple1_1 _pp0){  // void print([const char *])
    print_string(_pp0.field_0);  // print(rest.0)
  }
  void _adapter_i_pii_(void (*_adaptee)(), void *_ret, void *_p0, void *_p1){
    *(int *)_ret=((int (*)(int *, int))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_pii_(void (*_adaptee)(), void *_p0, void *_p1){
    ((void (*)(int *, int ))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_i_(void (*_adaptee)(), void *_p0){
    ((void (*)(int))_adaptee)(*(int *)_p0);
  }
  void _adapter_tuple1_5_(void (*_adaptee)(), void *_p0){
    ((void (*)(struct _conc__tuple1_1 ))_adaptee)(*(struct _conc__tuple1_1 *)_p0);
  }
  print_variadic(
    _adapter_tuple1_5_,
    _adapter_i_,
    _adapter_pii_,
    _adapter_i_pii_,
    sizeof(int),
    __alignof__(int),
    sizeof(struct
_conc__tuple1_1), __alignof__(struct _conc__tuple1_1), (void *(*)(void *, void *))_assign_i, // int ?=?(int *, int) (void (*)(void *))_ctor_i, // void ?{}(int *) (void (*)(void *, void *))_ctor_ii, // void ?{}(int *, int) (void (*)(void *))_dtor_ii, // void ^?{}(int *) (void (*)(void *))print_int, // void print(int) (void (*)(void *))&_thunk5, // void print([const char *]) &_p0.field_0, // rest.0 &(struct _conc__tuple1_1 ){ _p0.field_1 } // [rest.1] ); } struct _tuple1_ { // _tuple1_(T0) void *field_0; }; struct _conc__tuple1_6 { // _tuple_1(const char *) const char *field_0; }; const char *_temp0; _temp0="x = "; void _adapter_pstring_pstring_string( void (*_adaptee)(), void *_ret, void *_p0, void *_p1 ){ *(const char **)_ret= ((const char *(*)(const char **, const char *))_adaptee)( _p0, *(const char **)_p1 ); } void _adapter_pstring_string(void (*_adaptee)(), void *_p0, void *_p1){ ((void (*)(const char **, const char *))_adaptee)(_p0, *(const char **)_p1); } void _adapter_string_(void (*_adaptee)(), void *_p0){ ((void (*)(const char *))_adaptee)(*(const char **)_p0); } void _adapter_tuple2_0_(void (*_adaptee)(), void *_p0){ ((void (*)(struct _conc__tuple2_0 ))_adaptee)(*(struct _conc__tuple2_0 *)_p0); } print_variadic( _adapter_tuple2_0_, _adapter_string_, _adapter_pstring_string_, _adapter_pstring_pstring_string_, sizeof(const char *), __alignof__(const char *), sizeof(struct _conc__tuple2_0 ), __alignof__(struct _conc__tuple2_0 ), (void *(*)(void *, void *))&_thunk0, // const char * ?=?(const char **, const char *) (void (*)(void *))&_thunk1, // void ?{}(const char **) (void (*)(void *, void *))&_thunk2, // void ?{}(const char **, const char *) (void (*)(void *))&_thunk3, // void ^?{}(const char **) (void (*)(void *))print_string, // void print(const char *) (void (*)(void *))&_thunk4, // void print([int, const char *]) &_temp0, // "x = " (({ // copy construct tuple argument to print int *__multassign_L0 = (int *)&_tmp_cp6.field_0; const char **__multassign_L1 
= (const char **)&_tmp_cp6.field_1; int __multassign_R0 = 123; const char *__multassign_R1 = ".\n"; ((*__multassign_L0=__multassign_R0 /* ?{} */), (*__multassign_L1=__multassign_R1 /* ?{} */)); }), &_tmp_cp6) // [123, ".\n"] ); ({ // destroy argument temporary int *__massassign_L0 = (int *)&_tmp_cp6.field_0; const char **__massassign_L1 = (const char **)&_tmp_cp6.field_1; ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */)); }); \end{cfacode} The type @_tuple2_@ is generated to allow passing the @rest@ argument to @print_variadic@. Thunks 0 through 3 provide wrappers for the @otype@ parameters for @const char *@, while @_thunk4@ translates a call to @print([int, const char *])@ into a call to @print_variadic(int, [const char *])@. This all builds to a call to @print_variadic@, with the appropriate copy construction of the tuple argument. \section{Future Work}