%======================================================================
\chapter{Tuples}
%======================================================================
\section{Multiple-Return-Value Functions}
\label{s:MRV_Functions}
In standard C, functions can return at most one value.
This restriction results in code that emulates functions with multiple return values by \emph{aggregation} or by \emph{aliasing}.
In the former approach, the function designer creates a record type that combines all of the return values into a single type.
For example, consider a function returning the most frequently occurring letter in a string, and its frequency.
This example is complex enough to illustrate that an array is insufficient, since arrays are homogeneous, and demonstrates a potential pitfall that exists with aliasing.
\begin{cfacode}
struct mf_ret {
    int freq;
    char ch;
};
struct mf_ret most_frequent(const char * str) {
    char freqs[26] = { 0 };
    struct mf_ret ret = { 0, 'a' };
    for (int i = 0; str[i] != '\0'; ++i) {
        if (isalpha(str[i])) {             // only count letters
            int ch = tolower(str[i]);      // convert to lower case
            int idx = ch-'a';
            if (++freqs[idx] > ret.freq) { // update on new max
                ret.freq = freqs[idx];
                ret.ch = ch;
            }
        }
    }
    return ret;
}
const char * str = "hello world";
struct mf_ret ret = most_frequent(str);
printf("%s -- %d %c\n", str, ret.freq, ret.ch);
\end{cfacode}
Of note, the designer must come up with a name for the return type and for each of its fields.
Unnecessary naming is a common programming-language issue, introducing verbosity and complicating the user's mental model.
That is, adding another named type creates another association in the programmer's mind that must be tracked when reading and writing code.
As such, this technique is effective when used sparingly, but can quickly get out of hand if many functions need to return different combinations of types.
In the latter approach, the designer simulates multiple return values by passing the additional return values as pointer parameters.
The pointer parameters are assigned inside of the routine body to emulate a return.
Using the same example,
\begin{cfacode}
int most_frequent(const char * str, char * ret_ch) {
    char freqs[26] = { 0 };
    int ret_freq = 0;
    for (int i = 0; str[i] != '\0'; ++i) {
        if (isalpha(str[i])) {             // only count letters
            int ch = tolower(str[i]);      // convert to lower case
            int idx = ch-'a';
            if (++freqs[idx] > ret_freq) { // update on new max
                ret_freq = freqs[idx];
                *ret_ch = ch;              // assign to out parameter
            }
        }
    }
    return ret_freq; // only one value returned directly
}
const char * str = "hello world";
char ch; // pre-allocate return value
int freq = most_frequent(str, &ch); // pass return value as out parameter
printf("%s -- %d %c\n", str, freq, ch);
\end{cfacode}
Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values, which complicates the call site with a sequence of variable declarations leading up to the call.
Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the routine signature whether the callee expects such a parameter to be initialized before the call.
Furthermore, while many C routines that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, there are many C routines that are not null-safe.
On a related note, C does not provide a standard mechanism to state that a parameter is going to be used as an additional return value, which makes the job of ensuring that a value is returned more difficult for the compiler.
Interestingly, there is a subtle bug in the previous example, in that @ret_ch@ is never assigned for a string that does not contain any letters, which can lead to undefined behaviour.
In this particular case, the frequency return value doubles as an error code, where a frequency of 0 means the character return value should be ignored.
Still, not every routine with multiple return values should be required to return an error code, and error codes are easily ignored, so this is not a satisfying solution.
As with the previous approach, this technique can simulate multiple return values, but in practice it is verbose and error prone.
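For example, a careful caller of the aliasing version must check the error code before reading the out parameter; a sketch of such defensive call-site code:
\begin{cfacode}
const char * str = "12345"; // contains no letters
char ch;                    // uninitialized out parameter
int freq = most_frequent(str, &ch);
if (freq > 0) {             // ch is only valid when a letter was counted
    printf("%s -- %d %c\n", str, freq, ch);
} else {
    printf("%s -- no letters\n", str);
}
\end{cfacode}
Nothing in the routine's signature forces the caller to perform this check, so the burden of avoiding the uninitialized read falls entirely on programmer discipline.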
In \CFA, functions can be declared to return multiple values with an extension to the function declaration syntax.
Multiple return values are declared as a comma-separated list of types in square brackets in the same location that the return type appears in standard C function declarations.
The ability to return multiple values from a function requires a new syntax for the return statement.
For consistency, the return statement in \CFA accepts a comma-separated list of expressions in square brackets.
The expression resolution phase of the \CFA translator ensures that the correct form is used depending on the values being returned and the return type of the current function.
A multiple-returning function with return type @T@ can return any expression that is implicitly convertible to @T@.
Using the running example, the @most_frequent@ function can be written using multiple return values as such,
\begin{cfacode}
[int, char] most_frequent(const char * str) {
    char freqs[26] = { 0 };
    int ret_freq = 0;
    char ret_ch = 'a'; // arbitrary default value for consistent results
    for (int i = 0; str[i] != '\0'; ++i) {
        if (isalpha(str[i])) {             // only count letters
            int ch = tolower(str[i]);      // convert to lower case
            int idx = ch-'a';
            if (++freqs[idx] > ret_freq) { // update on new max
                ret_freq = freqs[idx];
                ret_ch = ch;
            }
        }
    }
    return [ret_freq, ret_ch];
}
\end{cfacode}
This approach provides the benefits of compile-time checking for appropriate return statements, as in aggregation, but without the verbosity of declaring a new named type, and it precludes the uninitialized-out-parameter bug seen with aliasing.
The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site.
The simplest mechanism for retaining a return value in C is variable assignment.
By assigning the return value into a variable, its value can be retrieved later at any point in the program.
As such, \CFA allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions on the left side.
\begin{cfacode}
const char * str = "hello world";
int freq;
char ch;
[freq, ch] = most_frequent(str); // assign into multiple variables
printf("%s -- %d %c\n", str, freq, ch);
\end{cfacode}
It is also common to use a function's output as the input to another function.
\CFA also allows this case, without any new syntax.
When a function call is passed as an argument to another call, the expression resolver attempts to find the best match of actual arguments to formal parameters given all of the possible expression interpretations in the current scope \cite{Bilson03}.
For example,
\begin{cfacode}
void process(int); // (1)
void process(char); // (2)
void process(int, char); // (3)
void process(char, int); // (4)
process(most_frequent("hello world")); // selects (3)
\end{cfacode}
In this case, there is only one option for a function named @most_frequent@ that takes a string as input.
This function returns two values, one @int@ and one @char@.
There are four options for a function named @process@, but only two that accept two arguments, and of those the best match is (3), which is also an exact match.
This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively.
\section{Tuple Expressions}
Multiple-return-value functions provide \CFA with a new syntax for expressing a combination of expressions in the return statement and a combination of types in a function signature.
These notions can be generalized to provide \CFA with \emph{tuple expressions} and \emph{tuple types}.
A tuple expression is an expression producing a fixed-size, ordered list of values of heterogeneous types.
The type of a tuple expression is the tuple of the subexpression types, or a \emph{tuple type}.
In \CFA, a tuple expression is denoted by a comma-separated list of expressions enclosed in square brackets.
For example, the expression @[5, 'x', 10.5]@ has type @[int, char, double]@.
The previous expression has 3 \emph{components}.
Each component in a tuple expression can be any \CFA expression, including another tuple expression.
The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization.
It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used.
Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}.
\subsection{Tuple Variables}
The call-site of the @most_frequent@ routine has a notable blemish, in that it requires the preallocation of return variables in a manner similar to the aliasing example, since it is impossible to declare multiple variables of different types in the same declaration in standard C.
In \CFA, it is possible to overcome this restriction by declaring a \emph{tuple variable}.
\begin{cfacode}[emph=ret, emphstyle=\color{red}]
const char * str = "hello world";
[int, char] ret = most_frequent(str); // initialize tuple variable
printf("%s -- %d %c\n", str, ret);
\end{cfacode}
It is now possible to accept multiple values into a single piece of storage, in much the same way that it was previously possible to pass multiple values from one function call to another.
These variables can be used in any of the contexts where a tuple expression is allowed, such as in the @printf@ function call.
As in the @process@ example, the components of the tuple value are passed as separate parameters to @printf@, allowing very simple printing of tuple expressions.
One way to access the individual components is with a simple assignment, as in previous examples.
\begin{cfacode}
int freq;
char ch;
[freq, ch] = ret;
\end{cfacode}
\begin{sloppypar}
In addition to variables of tuple type, it is also possible to have pointers to tuples, and arrays of tuples.
Tuple types can be composed of any types, except for array types, since array assignment is disallowed, which makes tuple assignment difficult when a tuple contains an array.
\begin{cfacode}
[double, int] di;
[double, int] * pdi;
[double, int] adi[10];
\end{cfacode}
This example declares a variable of type @[double, int]@, a variable of type pointer to @[double, int]@, and an array of ten @[double, int]@.
\end{sloppypar}
\subsection{Tuple Indexing}
At times, it is desirable to access a single component of a tuple-valued expression without creating unnecessary temporary variables to assign to.
Given a tuple-valued expression @e@ and a compile-time constant integer $i$ where $0 \leq i < n$, where $n$ is the number of components in @e@, @e.i@ accesses the $i$\textsuperscript{th} component of @e@.
For example,
\begin{cfacode}
[int, double] x;
[char *, int] f();
void g(double, int);
[int, double] * p;
int y = x.0; // access int component of x
y = f().1; // access int component of f
p->0 = 5; // access int component of tuple pointed-to by p
g(x.1, x.0); // rearrange x to pass to g
double z = [x, f()].0.1; // access second component of first component
// of tuple expression
\end{cfacode}
As seen above, tuple-index expressions can occur on any tuple-typed expression, including tuple-returning functions, square-bracketed tuple expressions, and other tuple-index expressions, provided the retrieved component is also a tuple.
This feature was proposed for \KWC but never implemented \cite[p.~45]{Till89}.
\subsection{Flattening and Structuring}
As evident in previous examples, tuples in \CFA do not have a rigid structure.
In function call contexts, tuples support implicit flattening and restructuring conversions.
Tuple flattening recursively expands a tuple into the list of its basic components.
Tuple structuring packages a list of expressions into a value of tuple type.
\begin{cfacode}
int f(int, int);
int g([int, int]);
int h(int, [int, int]);
[int, int] x;
int y;
f(x); // flatten
g(y, 10); // structure
h(x, y); // flatten & structure
\end{cfacode}
In \CFA, each of these calls is valid.
In the call to @f@, @x@ is implicitly flattened so that the components of @x@ are passed as the two arguments to @f@.
For the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the type of the parameter of @g@.
Finally, in the call to @h@, @x@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@.
The flexible structure of tuples permits a simple and expressive function-call syntax to work seamlessly with both single- and multiple-return-value functions, and with any number of arguments of arbitrarily complex structure.
In \KWC \cite{Buhr94a,Till89}, there were 4 tuple coercions: opening, closing, flattening, and structuring.
Opening coerces a tuple value into a tuple of values, while closing converts a tuple of values into a single tuple value.
Flattening coerces a nested tuple into a flat tuple, \ie it takes a tuple with tuple components and expands it into a tuple with only non-tuple components.
Structuring moves in the opposite direction, \ie it takes a flat tuple value and provides structure by introducing nested tuple components.
In \CFA, the design has been simplified to require only the two conversions previously described, which trigger only in function call and return situations.
This simplification is a primary contribution of this thesis to the design of tuples in \CFA.
Specifically, the expression resolution algorithm examines all of the possible alternatives for an expression to determine the best match.
In resolving a function call expression, each combination of function value and list of argument alternatives is examined.
Given a particular argument list and function value, the list of argument alternatives is flattened to produce a list of non-tuple valued expressions.
Then the flattened list of expressions is compared with each value in the function's parameter list.
If the parameter's type is not a tuple type, then the current argument value is unified with the parameter type, and on success the next argument and parameter are examined.
If the parameter's type is a tuple type, then the structuring conversion takes effect, recursively applying the parameter matching algorithm using the tuple's component types as the parameter list types.
Assuming a successful unification, eventually the algorithm gets to the end of the tuple type, which causes all of the matching expressions to be consumed and structured into a tuple expression.
For example, in
\begin{cfacode}
int f(int, [double, int]);
f([5, 10.2], 4);
\end{cfacode}
there is only a single definition of @f@, and the 3 arguments each have only a single interpretation.
First, the argument alternative list @[5, 10.2], 4@ is flattened to produce the argument list @5, 10.2, 4@.
Next, the parameter matching algorithm begins, with $P = $@int@ and $A = $@int@, which unifies exactly.
Moving to the next parameter and argument, $P = $@[double, int]@ and $A = $@double@.
This time, the parameter is a tuple type, so the algorithm applies recursively with $P' = $@double@ and $A = $@double@, which unifies exactly.
Then $P' = $@int@ and $A = $@int@, which again unifies exactly.
At this point, the end of $P'$ has been reached, so the arguments @10.2, 4@ are structured into the tuple expression @[10.2, 4]@.
Finally, the end of the parameter list $P$ has also been reached, so the final expression is @f(5, [10.2, 4])@.
\section{Tuple Assignment}
\label{s:TupleAssignment}
An assignment where the left side of the assignment operator has a tuple type is called tuple assignment.
There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{Multiple} and \emph{Mass} Assignment, respectively.
\begin{cfacode}
int x;
double y;
[int, double] z;
[y, x] = 3.14; // mass assignment
[x, y] = z; // multiple assignment
z = 10; // mass assignment
z = [x, y]; // multiple assignment
\end{cfacode}
Let $L_i$ for $i$ in $[0, n)$ represent each component of the flattened left side, $R_i$ represent each component of the flattened right side of a multiple assignment, and $R$ represent the right side of a mass assignment.
For a multiple assignment to be valid, both tuples must have the same number of elements when flattened.
For example, the following is invalid because the number of components on the left does not match the number of components on the right.
\begin{cfacode}
[int, int] x, y, z;
[x, y] = z; // multiple assignment, invalid 4 != 2
\end{cfacode}
Multiple assignment assigns $R_i$ to $L_i$ for each $i$.
That is, @?=?(&$L_i$, $R_i$)@ must be a well-typed expression.
In the previous example, @[x, y] = z@, @z@ is flattened into @z.0, z.1@, and the assignments @x = z.0@ and @y = z.1@ happen.
A mass assignment assigns the value $R$ to each $L_i$.
For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression.
These semantics differ from C cascading assignment (\eg @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@.
On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well.
Both kinds of tuple assignment have parallel semantics, such that each value on the left side and right side is evaluated \emph{before} any assignments occur.
As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function.
\begin{cfacode}
int x = 10, y = 20;
[x, y] = [y, x];
\end{cfacode}
After executing this code, @x@ has the value @20@ and @y@ has the value @10@.
In \CFA, tuple assignment is an expression where the result type is the type of the left side of the assignment, as in normal assignment.
That is, a tuple assignment produces the value of the left-hand side after assignment.
These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted.
These semantics are a change from the original tuple design in \KWC \cite{Till89}, wherein tuple assignment was a statement that allows cascading assignments as a special case.
Restricting tuple assignment to statements was an attempt to fix what was seen as a problem with side-effects, wherein assignment can be used in many different locations, such as in function-call argument position.
While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility.
Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user.
In another language, tuple assignment as a statement could be reasonable, but it would be inconsistent for tuple assignment to be the only kind of assignment that is not an expression.
In addition, \KWC permits the compiler to optimize tuple assignment as a block copy, since it does not support user-defined assignment operators.
This optimization could be implemented in \CFA, but it requires the compiler to verify that the selected assignment operator is trivial.
The following example shows multiple, mass, and cascading assignment used in one expression.
\begin{cfacode}
int a, b;
double c, d;
[void] f([int, int]);
f([c, a] = [b, d] = 1.5); // assignments in parameter list
\end{cfacode}
The tuple expression begins with a mass assignment of @1.5@ into @[b, d]@, which assigns @1.5@ into @b@, which is truncated to @1@, and @1.5@ into @d@, producing the tuple @[1, 1.5]@ as a result.
That tuple is used as the right side of the multiple assignment (\ie, @[c, a] = [1, 1.5]@) that assigns @1@ into @c@ and @1.5@ into @a@, which is truncated to @1@, producing the result @[1, 1]@.
Finally, the tuple @[1, 1]@ is used as an expression in the call to @f@.
\subsection{Tuple Construction}
Tuple construction and destruction follow the same rules and semantics as tuple assignment, except that in the case where there is no right side, the default constructor or destructor is called on each component of the tuple.
As constructors and destructors did not exist in previous versions of \CFA or in \KWC, this is a primary contribution of this thesis to the design of tuples.
\begin{cfacode}
struct S;
void ?{}(S *); // (1)
void ?{}(S *, int); // (2)
void ?{}(S *, double); // (3)
void ?{}(S *, S); // (4)
[S, S] x = [3, 6.28]; // uses (2), (3), specialized constructors
[S, S] y; // uses (1), (1), default constructor
[S, S] z = x.0; // uses (4), (4), copy constructor
\end{cfacode}
In this example, @x@ is initialized by the multiple constructor calls @?{}(&x.0, 3)@ and @?{}(&x.1, 6.28)@, while @y@ is initialized by two default constructor calls @?{}(&y.0)@ and @?{}(&y.1)@.
@z@ is initialized by mass copy constructor calls @?{}(&z.0, x.0)@ and @?{}(&z.1, x.0)@.
Finally, @x@, @y@, and @z@ are destructed, \ie the calls @^?{}(&x.0)@, @^?{}(&x.1)@, @^?{}(&y.0)@, @^?{}(&y.1)@, @^?{}(&z.0)@, and @^?{}(&z.1)@.
It is possible to define constructors and assignment functions for tuple types that provide new semantics, if the existing semantics do not fit the needs of an application.
For example, the function @void ?{}([T, U] *, S);@ can be defined to allow a tuple variable to be constructed from a value of type @S@.
\begin{cfacode}
struct S { int x; double y; };
void ?{}([int, double] * this, S s) {
    this->0 = s.x;
    this->1 = s.y;
}
\end{cfacode}
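Given this constructor, a tuple variable can then be initialized directly from a value of type @S@; a small usage sketch:
\begin{cfacode}
S s = { 3, 1.5 };
[int, double] t = s; // calls ?{}([int, double] *, S)
\end{cfacode}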
Due to the structure of generated constructors, it is possible to pass a tuple to a generated constructor for a type with a member prefix that matches the type of the tuple.
For example,
\begin{cfacode}
struct S { int x; double y; int z; };
[int, double] t;
S s = t;
\end{cfacode}
The initialization of @s@ with @t@ works by default because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.
\section{Member-Access Tuple Expression}
\label{s:MemberAccessTuple}
It is possible to access multiple fields from a single expression using a \emph{Member-Access Tuple Expression}.
The result is a single tuple-valued expression whose type is the tuple of the types of the members.
For example,
\begin{cfacode}
struct S { int x; double y; char * z; } s;
s.[x, y, z];
\end{cfacode}
Here, the type of @s.[x, y, z]@ is @[int, double, char *]@.
A member tuple expression has the form @a.[x, y, z]@, where @a@ is an expression with type @T@, @T@ supports member-access expressions, and @x, y, z@ are all members of @T@ with types @T$_x$@, @T$_y$@, and @T$_z$@ respectively.
Then the type of @a.[x, y, z]@ is @[T$_x$, T$_y$, T$_z$]@.
Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (\eg, rearrange components, drop components, duplicate components, etc.).
\begin{cfacode}
[int, int, long, double] x;
void f(double, long);
f(x.[0, 3]); // f(x.0, x.3)
x.[0, 1] = x.[1, 0]; // [x.0, x.1] = [x.1, x.0]
[long, int, long] y = x.[2, 0, 2];
\end{cfacode}
It is possible for a member tuple expression to contain other member access expressions.
For example,
\begin{cfacode}
struct A { double i; int j; };
struct B { int * k; short l; };
struct C { int x; A y; B z; } v;
v.[x, y.[i, j], z.k];
\end{cfacode}
This expression is equivalent to @[v.x, [v.y.i, v.y.j], v.z.k]@.
That is, the aggregate expression is effectively distributed across the tuple, which allows simple and easy access to multiple components in an aggregate, without repetition.
It is guaranteed that the aggregate expression to the left of the @.@ in a member tuple expression is evaluated exactly once.
As such, it is safe to use member tuple expressions on the result of a side-effecting function.
\begin{cfacode}
[int, float, double] f();
[double, float] x = f().[2, 1];
\end{cfacode}
In \KWC, member tuple expressions are known as \emph{record field tuples} \cite{Till89}.
Since \CFA permits these tuple-access expressions using structures, unions, and tuples, \emph{member tuple expression} or \emph{field tuple expression} is more appropriate.
It is possible to extend member-access expressions further.
Currently, a member-access expression whose member is a name requires that the aggregate is a structure or union, while a constant integer member requires the aggregate to be a tuple.
In the interest of orthogonal design, \CFA could apply some meaning to the remaining combinations as well.
For example,
\begin{cfacode}
struct S { int x, y; } s;
[S, S] z;
s.x; // access member
z.0; // access component
s.1; // ???
z.y; // ???
\end{cfacode}
One possibility is for @s.1@ to select the second member of @s@.
Under this interpretation, it becomes possible to not only access members of a struct by name, but also by position.
Likewise, it seems natural to open this mechanism to enumerations as well, wherein the left side would be a type, rather than an expression.
One benefit of this interpretation is familiarity, since it is extremely reminiscent of tuple-index expressions.
On the other hand, it could be argued that this interpretation is brittle, in that changing the order of members or adding new members to a structure silently changes the meaning of existing positional accesses.
This problem is less of a concern with tuples, since modifying a tuple affects only the code that directly uses the tuple, whereas modifying a structure has far reaching consequences for every instance of the structure.
As for @z.y@, one interpretation is to extend the meaning of member tuple expressions.
That is, currently the tuple must occur as the member, \ie to the right of the dot.
Allowing tuples to the left of the dot could distribute the member across the elements of the tuple, in much the same way that member tuple expressions distribute the aggregate across the member tuple.
In this example, @z.y@ expands to @[z.0.y, z.1.y]@, allowing what is effectively a very limited compile-time field-access map operation, where the argument must be a tuple containing only aggregates having a member named @y@.
It is questionable how useful this would actually be in practice, since structures often do not have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions to maximize the amount of overlap between different types.
Perhaps more useful would be to allow arrays on the left side of the dot, which would likewise allow mapping a field access across the entire array, producing an array of the contained fields.
The immediate problem with this idea is that C arrays do not carry around their size, which would make it impossible to use this extension for anything other than a simple stack allocated array.
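As a point of comparison, the manual C equivalent of such a field map over a fixed-size array might look as follows (the names are purely illustrative):
\begin{cfacode}
struct P { int x, y; };
struct P ps[3] = { { 1, 2 }, { 3, 4 }, { 5, 6 } };
int ys[3];
for (int i = 0; i < 3; ++i) ys[i] = ps[i].y; // ps.y would produce { 2, 4, 6 }
\end{cfacode}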
Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member-access expressions versus member-tuple expressions.
\begin{cfacode}
struct S { int x, y; };
[S, S] z;
z.[x, y]; // ???
// => [z.0, z.1].[x, y]
// => [z.0.x, z.0.y, z.1.x, z.1.y]
// or
// => [z.x, z.y]
// => [[z.0, z.1].x, [z.0, z.1].y]
// => [z.0.x, z.1.x, z.0.y, z.1.y]
\end{cfacode}
Depending on exactly how the two tuples are combined, different results can be achieved.
As such, a specific ordering would need to be imposed to make this feature useful.
Furthermore, this addition moves a member-tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple.
A second possibility is for \CFA to have named tuples, as they exist in Swift and D.
\begin{cfacode}
typedef [int x, int y] Point2D;
Point2D p1, p2;
p1.x + p1.y + p2.x + p2.y;
p1.0 + p1.1 + p2.0 + p2.1; // equivalent
\end{cfacode}
In this simpler interpretation, a tuple type carries with it a list of possibly empty identifiers.
This approach fits naturally with the named return-value feature, and would likely go a long way towards implementing it.
Ultimately, the first two extensions introduce complexity into the model, with relatively little perceived benefit, and so were dropped from consideration.
Named tuples are a potentially useful addition to the language, provided they can be parsed with a reasonable syntax.
\section{Casting}
In C, the cast operator is used to explicitly convert between types.
In \CFA, the cast operator has a secondary use, which is type ascription, since it forces the expression resolution algorithm to choose the lowest cost conversion to the target type.
That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function.
\begin{cfacode}
int f(); // (1)
double f(); // (2)
f(); // ambiguous - (1),(2) both equally viable
(int)f(); // choose (1)
\end{cfacode}
Since casting is a fundamental operation in \CFA, casts need to be given a meaningful interpretation in the context of tuples.
Standard C provides some guidance on how casts should work with tuples.
\begin{cfacode}[numbers=left]
int f();
void g();
(void)f(); // valid, ignore results
(int)g(); // invalid, void cannot be converted to int
struct A { int x; };
(struct A)f(); // invalid, int cannot be converted to A
\end{cfacode}
In C, line 3 is a valid cast, which calls @f@ and discards its result.
On the other hand, line 4 is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
Finally, line 6 is also invalid, because in C casts only provide conversion between scalar types \cite[p.~91]{C11}.
For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts with the same number of components or fewer may be valid.
Formally, a cast to tuple type is valid when $n \leq m$, where $n$ is the number of components in the target type and $m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$.
Excess elements ($S_j$ for all $j$ in $[n, m)$) are evaluated, but their values are discarded so that they are not included in the result expression.
This discarding naturally follows the way that a cast to void works in C.
For example,
\begin{cfacode}
[int, int, int] f();
[int, [int, int], int] g();
([int, double])f(); // (1) valid
([int, int, int])g(); // (2) valid
([void, [int, int]])g(); // (3) valid
([int, int, int, int])g(); // (4) invalid
([int, [int, int, int]])g(); // (5) invalid
\end{cfacode}
(1) discards the last element of the return value and converts the second element to type double.
Since @int@ is effectively a 1-element tuple, (2) discards the second component of the second element of the return value of @g@.
If @g@ is free of side effects, this is equivalent to @[(int)(g().0), (int)(g().1.0), (int)(g().2)]@.
Since @void@ is effectively a 0-element tuple, (3) discards the first and third return values, which is effectively equivalent to @[(int)(g().1.0), (int)(g().1.1)]@.
% will this always hold true? probably, as constructors should give all of the conversion power we need. if casts become function calls, what would they look like? would need a way to specify the target type, which seems awkward. Also, C++ basically only has this because classes are closed to extension, while we don't have that problem (can have floating constructors for any type).
Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions.
As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3.
Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid.
That is, it is invalid to cast @[int, int]@ to @[int, int, int]@.
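For readers more comfortable with plain C, the component-count rule can be sketched by modelling tuples as structs (all names here are hypothetical, chosen only for illustration): a 3-component source can be "cast" to a 2-component target by converting the leading components and discarding the rest, whereas no projection exists that manufactures a fourth component.

```c
#include <assert.h>

/* A plain-C sketch of the component-count rule (hypothetical names):
   a 3-component source may be cast to a 2-component target by
   converting the first two components and discarding the rest. */
struct src3 { int a, b, c; };     /* stands in for [int, int, int] */
struct dst2 { int a; double b; }; /* stands in for [int, double]   */

static struct dst2 cast_3_to_2(struct src3 s) {
    /* s.c is evaluated as part of s, but its value is discarded */
    struct dst2 d = { s.a, (double)s.b };
    return d;
}
```

The reverse projection, from @src3@ to a hypothetical 4-component target, has no sensible definition, which is exactly why case (4) above is rejected.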
\section{Polymorphism}
Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types.
The integration of polymorphism, type assertions, and monomorphic specialization of tuple-assertions is a primary contribution of this thesis to the design of tuples.
\begin{cfacode}
forall(otype T, dtype U)
void f(T x, U * y);
f([5, "hello"]);
\end{cfacode}
In this example, @[5, "hello"]@ is flattened, so that the argument list appears as @5, "hello"@.
The argument matching algorithm binds @T@ to @int@ and @U@ to @const char@, and calls the function as normal.
Tuples can contain @otype@ and @dtype@ components.
For example, a plus operator can be written to add two triples of a type together.
\begin{cfacode}
forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?([T, T, T] x, [T, T, T] y) {
return [x.0+y.0, x.1+y.1, x.2+y.2];
}
[int, int, int] x;
int i1, i2, i3;
[i1, i2, i3] = x + ([10, 20, 30]);
\end{cfacode}
Note that due to the implicit tuple conversions, this function is not restricted to the addition of two triples.
A call to this plus operator type-checks as long as a total of 6 non-tuple arguments are passed after flattening, all of the arguments share a common type that can bind to @T@, and a @?+?@ operator is defined over @T@.
For example, these expressions also succeed and produce the same value.
\begin{cfacode}
([x.0, x.1]) + ([x.2, 10, 20, 30]); // x + ([10, 20, 30])
x.0 + ([x.1, x.2, 10, 20, 30]); // x + ([10, 20, 30])
\end{cfacode}
This presents a potential problem if structure is important, as these three expressions look like they should have different meanings.
Furthermore, these calls can be made ambiguous by introducing seemingly different functions.
\begin{cfacode}
forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?([T, T] x, [T, T, T, T]);
forall(otype T | { T ?+?(T, T); })
[T, T, T] ?+?(T x, [T, T, T, T, T]);
\end{cfacode}
It is also important to note that these calls could be disambiguated if the function return types were different, as they likely would be for a reasonable implementation of @?+?@, since the return type is used in overload resolution.
Still, these semantics are a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate.
These issues could be rectified by applying an appropriate conversion cost to the structuring and flattening conversions, which are currently 0-cost conversions in the expression resolver.
Care would be needed in this case to ensure that exact matches do not incur such a cost.
\begin{cfacode}
void f([int, int], int, int);
f([0, 0], 0, 0); // no cost
f(0, 0, 0, 0); // cost for structuring
f([0, 0], [0, 0]); // cost for flattening
f([0, 0, 0], 0); // cost for flattening and structuring
\end{cfacode}
Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (\ie, no implicit conversions are applied to assertion arguments).
This decision presents a conflict with the flexibility of tuples.
\subsection{Assertion Inference}
\begin{cfacode}
int f([int, double], double);
forall(otype T, otype U | { T f(T, U, U); })
void g(T, U);
g(5, 10.21);
\end{cfacode}
If assertion arguments must match exactly, then the call to @g@ cannot be resolved, since the expected type of @f@ is flat, while the only @f@ in scope requires a tuple type.
Since tuples are fluid, this requirement reduces the usability of tuples in polymorphic code.
To ease this pain point, function parameter and return lists are flattened for the purposes of type unification, which allows the previous example to pass expression resolution.
This relaxation is made possible by extending the existing thunk generation scheme, as described by Bilson \cite{Bilson03}.
Now, whenever a candidate function's parameter structure does not exactly match the shape of an assertion parameter, a thunk is generated to specialize calls to the actual function.
\begin{cfacode}
int _thunk(int _p0, double _p1, double _p2) {
return f([_p0, _p1], _p2);
}
\end{cfacode}
Essentially, this provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism.
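The generated thunk can also be viewed in plain C terms, with the tuple @[int, double]@ modelled as a struct (a hedged sketch with hypothetical names, not the translator's literal output): the actual function expects a structured first parameter, while the thunk exposes the flat signature that the assertion demands and restructures on the way in.

```c
#include <assert.h>

/* Hypothetical C sketch of the thunk: the actual function takes a
   2-component struct (standing in for the tuple [int, double]) plus a
   double, while the assertion expects a flat (int, double, double)
   signature. */
struct pair_id { int a; double b; };  /* stands in for [int, double] */

static int f_actual(struct pair_id p, double d) {
    return p.a + (int)(p.b + d);
}

/* generated thunk: restructures the flat arguments into a tuple */
static int _thunk(int _p0, double _p1, double _p2) {
    return f_actual((struct pair_id){ _p0, _p1 }, _p2);
}
```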
\section{Implementation}
Tuples are implemented in the \CFA translator via a transformation into generic types.
Generic types are an independent contribution developed at the same time.
The transformation into generic types and the generation of tuple-specific code are primary contributions of this thesis to tuples.
The first time an $N$-tuple is seen for each $N$ in a scope, a generic type with $N$ type parameters is generated.
For example,
\begin{cfacode}
[int, int] f() {
[double, double] x;
[int, double, int] y;
}
\end{cfacode}
is transformed into
\begin{cfacode}
forall(dtype T0, dtype T1 | sized(T0) | sized(T1))
struct _tuple2_ { // generated before the first 2-tuple
T0 field_0;
T1 field_1;
};
_tuple2_(int, int) f() {
_tuple2_(double, double) x;
forall(dtype T0, dtype T1, dtype T2 | sized(T0) | sized(T1) | sized(T2))
struct _tuple3_ { // generated before the first 3-tuple
T0 field_0;
T1 field_1;
T2 field_2;
};
_tuple3_(int, double, int) y;
}
\end{cfacode}
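After generic-type instantiation, each distinct tuple instance ultimately becomes an ordinary C struct in the generated code. The following is a hedged sketch of what that monomorphized output might look like (the mangled struct names are hypothetical):

```c
#include <assert.h>

/* Hypothetical monomorphized output: each distinct instantiation of the
   generic _tuple2_ becomes a concrete C struct. */
struct _tuple2_int_int       { int field_0;    int field_1;    };
struct _tuple2_double_double { double field_0; double field_1; };

static struct _tuple2_int_int f(void) {
    struct _tuple2_double_double x = { 1.5, 2.5 };
    struct _tuple2_int_int r = { (int)x.field_0, (int)x.field_1 };
    return r;
}
```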
Tuple expressions are then converted directly into compound literals
\begin{cfacode}
[5, 'x', 1.24];
\end{cfacode}
becomes
\begin{cfacode}
(_tuple3_(int, char, double)){ 5, 'x', 1.24 };
\end{cfacode}
Since tuples are essentially structures, tuple indexing expressions are just field accesses.
\begin{cfacode}
void f(int, [double, char]);
[int, double] x;
x.0+x.1;
printf("%d %g\n", x);
f(x, 'z');
\end{cfacode}
is transformed into
\begin{cfacode}
void f(int, _tuple2_(double, char));
_tuple2_(int, double) x;
x.field_0+x.field_1;
printf("%d %g\n", x.field_0, x.field_1);
f(x.field_0, (_tuple2_(double, char)){ x.field_1, 'z' });
\end{cfacode}
Note that due to flattening, @x@ used in the argument position is converted into the list of its fields.
In the call to @f@, the second and third argument components are structured into a tuple argument.
Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion.
Each unique expression is assigned an identifier and is guaranteed to be executed exactly once.
\begin{cfacode}
void g(int, double);
[int, double] h();
g(h());
\end{cfacode}
Internally, this is converted to pseudo-\CFA
\begin{cfacode}
void g(int, double);
[int, double] h();
lazy [int, double] unq0 = h(); // deferred execution
g(unq0.0, unq0.1); // execute h() once
\end{cfacode}
That is, the function @h@ is evaluated lazily and its result is stored for subsequent accesses.
Ultimately, unique expressions are converted into two variables and an expression.
\begin{cfacode}
void g(int, double);
[int, double] h();
_Bool _unq0_finished_ = 0;
[int, double] _unq0;
g(
(_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
(_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1
);
\end{cfacode}
Since argument evaluation order is not specified by the C programming language, this scheme is built to work regardless of evaluation order.
The first time a unique expression is executed, the actual expression is evaluated and the accompanying boolean is set to true.
Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression.
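The once-only guarantee can be demonstrated directly in plain C (a sketch with hypothetical names; a call counter stands in for an arbitrary side effect):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical C rendering of the unique-expression scheme: h() is
   guarded by a flag so it runs exactly once, no matter how many of its
   components are read or in what order. */
static int call_count = 0;
struct pair { int a; double b; };

static struct pair h(void) {
    ++call_count;                      /* observable side effect */
    return (struct pair){ 42, 3.14 };
}

static bool _unq0_finished_ = false;
static struct pair _unq0;

/* the guarded access: evaluate h() on first use, reuse thereafter */
static struct pair unq0(void) {
    return _unq0_finished_
        ? _unq0
        : (_unq0 = h(), _unq0_finished_ = true, _unq0);
}
```

Reading @unq0().a@ and then @unq0().b@ leaves @call_count@ at 1, which is precisely the behaviour required when a flattened tuple argument is expanded into several component accesses.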
Currently, the \CFA translator has a very broad, imprecise definition of impurity (side-effects), where every function call is assumed to be impure.
This notion could be made more precise for certain intrinsic, auto-generated, and built-in functions, and could analyze function bodies, when they are available, to recursively detect impurity, to eliminate some unique expressions.
It is possible that lazy evaluation could be exposed to the user through a @lazy@ keyword with little additional effort.
Tuple-member expressions are recursively expanded into a list of member-access expressions.
\begin{cfacode}
[int, [double, int, double], int] x;
x.[0, 1.[0, 2]];
\end{cfacode}
becomes
\begin{cfacode}
[x.0, [x.1.0, x.1.2]];
\end{cfacode}
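In plain C terms, the expansion bottoms out in ordinary field accesses on the underlying structs; the following hedged sketch (hypothetical struct and field names) shows the accesses corresponding to the selection above:

```c
#include <assert.h>

/* Hypothetical C rendering of the expanded member-tuple access:
   x.[0, 1.[0, 2]] selects x.0, x.1.0, and x.1.2, which become plain
   field accesses on the struct encoding. */
struct inner { double d0; int i1; double d2; }; /* [double, int, double] */
struct outer { int f0; struct inner f1; int f2; };

/* consume the three selected components (here, by summing them) */
static double sum_selected(struct outer x) {
    return x.f0 + x.f1.d0 + x.f1.d2;
}
```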
Tuple-member expressions also take advantage of unique expressions in the case of possible impurity.
Finally, the various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
For example, a mass assignment
\begin{cfacode}
int x, z;
double y;
[double, double] f();
[x, y, z] = 1.5; // mass assignment
\end{cfacode}
generates the following
\begin{cfacode}
// [x, y, z] = 1.5;
_tuple3_(int, double, int) _tmp_stmtexpr_ret0;
({ // GNU C statement expression
// assign LHS address temporaries
int *__massassign_L0 = &x; // ?{}
double *__massassign_L1 = &y; // ?{}
int *__massassign_L2 = &z; // ?{}
// assign RHS value temporary
double __massassign_R0 = 1.5; // ?{}
({ // tuple construction - construct statement expr return variable
// assign LHS address temporaries
int *__multassign_L0 = (int *)&_tmp_stmtexpr_ret0.0; // ?{}
double *__multassign_L1 = (double *)&_tmp_stmtexpr_ret0.1; // ?{}
int *__multassign_L2 = (int *)&_tmp_stmtexpr_ret0.2; // ?{}
// assign RHS value temporaries and mass-assign to L0, L1, L2
int __multassign_R0 = (*__massassign_L0=(int)__massassign_R0); // ?{}
double __multassign_R1 = (*__massassign_L1=__massassign_R0); // ?{}
int __multassign_R2 = (*__massassign_L2=(int)__massassign_R0); // ?{}
// perform construction of statement expr return variable using
// RHS value temporary
((*__multassign_L0 = __multassign_R0 /* ?{} */),
(*__multassign_L1 = __multassign_R1 /* ?{} */),
(*__multassign_L2 = __multassign_R2 /* ?{} */));
});
_tmp_stmtexpr_ret0;
});
({ // tuple destruction - destruct assign expr value
int *__massassign_L3 = (int *)&_tmp_stmtexpr_ret0.0; // ?{}
double *__massassign_L4 = (double *)&_tmp_stmtexpr_ret0.1; // ?{}
int *__massassign_L5 = (int *)&_tmp_stmtexpr_ret0.2; // ?{}
((*__massassign_L3 /* ^?{} */),
(*__massassign_L4 /* ^?{} */),
(*__massassign_L5 /* ^?{} */));
});
\end{cfacode}
A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, \eg, in a unique expression.
$N$ LHS variables are generated and constructed using the address of the tuple components, and a single RHS variable is generated to store the value of the RHS without any loss of precision.
A nested statement expression is generated that performs the individual assignments and constructs the return value using the results of the individual assignments.
Finally, the statement expression temporary is destroyed at the end of the expression.
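Stripped of constructor and destructor bookkeeping, the core of the generated mass assignment reduces to a small GNU C statement expression; the following is a hedged sketch (hypothetical temporary names) of that core, not the translator's literal output:

```c
#include <assert.h>

/* GNU C sketch of mass assignment [x, y, z] = 1.5: the LHS addresses
   and the single RHS value are captured in temporaries inside a
   statement expression, mirroring the generated code. */
static int x, z;
static double y;

static void mass_assign(void) {
    ({
        int *L0 = &x; double *L1 = &y; int *L2 = &z; /* LHS addresses  */
        double R0 = 1.5;                             /* one RHS value  */
        *L0 = (int)R0; *L1 = R0; *L2 = (int)R0;      /* component assigns */
    });
}
```

Capturing @R0@ once ensures the RHS is evaluated a single time even though it is assigned, with per-component conversions, to three targets.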
Similarly, a multiple assignment
\begin{cfacode}
[x, y, z] = [f(), 3]; // multiple assignment
\end{cfacode}
generates the following
\begin{cfacode}
// [x, y, z] = [f(), 3];
_tuple3_(int, double, int) _tmp_stmtexpr_ret0;
({
// assign LHS address temporaries
int *__multassign_L0 = &x; // ?{}
double *__multassign_L1 = &y; // ?{}
int *__multassign_L2 = &z; // ?{}
// assign RHS value temporaries
_tuple2_(double, double) _tmp_cp_ret0;
_Bool _unq0_finished_ = 0;
double __multassign_R0 =
(_unq0_finished_ ?
_tmp_cp_ret0 :
(_tmp_cp_ret0=f(), _unq0_finished_=1, _tmp_cp_ret0)).0; // ?{}
double __multassign_R1 =
(_unq0_finished_ ?
_tmp_cp_ret0 :
(_tmp_cp_ret0=f(), _unq0_finished_=1, _tmp_cp_ret0)).1; // ?{}
({ // tuple destruction - destruct f() return temporary
// assign LHS address temporaries
double *__massassign_L3 = (double *)&_tmp_cp_ret0.0; // ?{}
double *__massassign_L4 = (double *)&_tmp_cp_ret0.1; // ?{}
// perform destructions - intrinsic, so NOP
((*__massassign_L3 /* ^?{} */),
(*__massassign_L4 /* ^?{} */));
});
int __multassign_R2 = 3; // ?{}
({ // tuple construction - construct statement expr return variable
// assign LHS address temporaries
int *__multassign_L3 = (int *)&_tmp_stmtexpr_ret0.0; // ?{}
double *__multassign_L4 = (double *)&_tmp_stmtexpr_ret0.1; // ?{}
int *__multassign_L5 = (int *)&_tmp_stmtexpr_ret0.2; // ?{}
// assign RHS value temporaries and multiple-assign to L0, L1, L2
int __multassign_R3 = (*__multassign_L0=(int)__multassign_R0); // ?{}
double __multassign_R4 = (*__multassign_L1=__multassign_R1); // ?{}
int __multassign_R5 = (*__multassign_L2=__multassign_R2); // ?{}
// perform construction of statement expr return variable using
// RHS value temporaries
((*__multassign_L3=__multassign_R3 /* ?{} */),
(*__multassign_L4=__multassign_R4 /* ?{} */),
(*__multassign_L5=__multassign_R5 /* ?{} */));
});
_tmp_stmtexpr_ret0;
});
({ // tuple destruction - destruct assign expr value
// assign LHS address temporaries
int *__massassign_L5 = (int *)&_tmp_stmtexpr_ret0.0; // ?{}
double *__massassign_L6 = (double *)&_tmp_stmtexpr_ret0.1; // ?{}
int *__massassign_L7 = (int *)&_tmp_stmtexpr_ret0.2; // ?{}
// perform destructions - intrinsic, so NOP
((*__massassign_L5 /* ^?{} */),
(*__massassign_L6 /* ^?{} */),
(*__massassign_L7 /* ^?{} */));
});
\end{cfacode}
The difference here is that $N$ RHS values are stored into separate temporary variables.
The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
There are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.