\chapter{Recent Features Introduced to \CFA}
\label{c:content1}

This chapter discusses some recent additions to the \CFA language and their interactions with the type system.

\section{Reference Types}

Reference types were added to \CFA by Robert Schluntz and Aaron Moss~\cite{Moss18}.
The \CFA reference type generalizes the \CC reference type (and its equivalent in other modern programming languages) by providing both mutable and immutable forms and cascading referencing and dereferencing.
Specifically, \CFA attempts to extend programmer intuition about pointers to references.
That is, use a pointer when its primary purpose is manipulating the address of storage, \eg a top/head/tail pointer or link field in a mutable data structure.
Here, manipulating the pointer address is the primary operation, while dereferencing the pointer to its value is the secondary operation.
For example, \emph{within} a data structure, \eg stack or queue, all operations involve pointer addresses and the pointer may never be dereferenced because the referenced object is opaque.
Alternatively, use a reference when its primary purpose is to alias a value, \eg a function parameter that does not copy the argument (for performance reasons).
Here, manipulating the value is the primary operation, while changing the pointer address is the secondary operation.
Succinctly, if the address often changes, use a pointer;
if the value often changes, use a reference.
Note, \CC made the reference address immutable based on a \emph{belief} that immutability is a fundamental aspect of a reference's pointer, resulting in a semantic asymmetry between the pointer and reference.
\CFA adopts a uniform policy between pointers and references where mutability is a settable property at the point of declaration.
The following example shows how pointers and references are treated uniformly in \CFA.
\begin{cfa}[numbers=left,numberblanklines=false]
int x = 1, y = 2, z = 3;
int * p1 = &x, ** p2 = &p1, *** p3 = &p2,	$\C{// pointers to x}$
	@&@ r1 = x, @&&@ r2 = r1, @&&&@ r3 = r2;	$\C{// references to x}$
int * p4 = &z, & r4 = z;
*p1 = 3; **p2 = 3; ***p3 = 3;	$\C{// different ways to change x to 3}$
r1 = 3; r2 = 3; r3 = 3;	$\C{// change x: implicit dereference *r1, **r2, ***r3}$
**p3 = &y; *p3 = &p4;	$\C{// change p1, p2}$
// cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
@&@r3 = @&@y; @&&@r3 = @&&@r4;	$\C{// change r1, r2}$
\end{cfa}
Like pointers, references can be cascaded, \ie a reference to a reference, \eg @&& r2@.\footnote{
\CC uses \lstinline{&&} for rvalue references, a feature for move semantics and handling the \lstinline{const} Hell problem.}
Usage of a reference variable automatically performs the same number of dereferences as the number of references in its declaration, \eg @r3@ becomes @***r3@.
Finally, reassigning a reference's address needs a mechanism to stop the auto-dereferencing, which is accomplished by using a single reference to cancel all the auto-dereferencing, \eg @&r3 = &y@ resets @r3@'s address to point to @y@.
\CFA's reference type (including multi-de/references) is powerful enough to describe the lvalue rules in C by types only.
As a result, the \CFA type checker now works on just types without using the notion of lvalue in an expression.
(\CFA internals still use lvalue for code generation purposes.)
The current reference typing rules in \CFA are summarized as follows:
\begin{enumerate}
\item
For a variable $x$ with declared type $T$, the variable-expression $x$ has type reference to $T$, even if $T$ itself is a reference type.
\item
For an expression $e$ with type $T\ \&_1...\&_n$, \ie $T$ followed by $n$ references, where $T$ is not a reference type, the expression $\&e$ (address of $e$) has type $T *$ followed by $n - 1$ references.
\item
For an expression $e$ with type $T *\&_1...\&_n$, \ie $T *$ followed by $n$ references, the expression $* e$ (dereference $e$) has type $T$ followed by $n + 1$ references.
This is the reverse of the previous rule, such that address-of and dereference operators are perfect inverses.
\item
When matching parameter and argument types in a function call context, the number of references on the argument type is stripped off to match the number of references on the parameter type.\footnote{
\CFA handles the \lstinline{const} Hell problem by allowing rvalue expressions to be converted to reference values by implicitly creating a temporary variable, with some restrictions.}
In an assignment context, the left-hand-side operand-type is always reduced to a single reference.
\end{enumerate}
Under this ruleset, a type parameter is never bound to a reference type in a function-call context.
\begin{cfa}
forall( T ) void f( T & );
int & x;
f( x );   // implicit dereference
\end{cfa}
The call applies an implicit dereference once to @x@ so the call is typed @f( int & )@ with @T = int@, rather than with @T = int &@.
As for a pointer type, a reference type may have qualifiers, where @const@ is most interesting.
\begin{cfa}
int x = 3;	$\C{// mutable}$
const int cx = 5;	$\C{// immutable}$
int * const cp = &x,	$\C{// immutable pointer}$
	& const cr = cx;
const int * const ccp = &cx,	$\C{// immutable value and pointer}$
	& const ccr = cx;
// pointer
*cp = 7; cp = &x;	$\C{// error, assignment of read-only variable}$
*ccp = 7;	$\C{// error, assignment of read-only location}$
ccp = &cx;	$\C{// error, assignment of read-only variable}$
// reference
cr = 7; cr = &x;	$\C{// error, assignment of read-only variable}$
*ccr = 7;	$\C{// error, assignment of read-only location}$
ccr = &cx;	$\C{// error, assignment of read-only variable}$
\end{cfa}
Interestingly, C does not give a warning/error if a @const@ pointer is not initialized, while \CC does.
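For comparison, a \CC reference behaves like an implicitly immutable pointer with automatic dereferencing; the following illustrative \CC sketch (with hypothetical function names, not from the \CFA sources) shows the asymmetry that \CFA removes.

```cpp
// A C++ pointer's address is mutable, so it can be rebound at any time.
int demo_pointer() {
    int x = 1, y = 2;
    int *p = &x;
    *p = 3;            // explicit dereference changes x
    p = &y;            // rebinding the pointer is allowed
    *p = 4;            // changes y
    return x + y;      // 3 + 4
}

// A C++ reference's address is immutable: every use implicitly dereferences,
// and there is no way to rebind it after initialization.
int demo_reference() {
    int x = 1, y = 2;
    int &r = x;        // bound once to x
    r = 3;             // implicit dereference: changes x
    r = y;             // assigns y's VALUE to x; it does NOT rebind r
    return x;          // 2
}
```

Here the pointer offers both operations (rebind and dereference) at the cost of explicit syntax, while the reference offers only value manipulation with implicit syntax, matching the \CC design choice discussed above.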
Hence, type @& const@ is similar to a \CC reference, but \CFA does not preclude initialization with a non-variable address.
For example, in systems programming, there are cases where an immutable address is initialized to a specific memory location.
\begin{cfa}
int & const mem_map = *0xe45bbc67@p@;	$\C{// hardware mapped registers ('p' for pointer)}$
\end{cfa}
Finally, qualification is generalized across all pointer/reference declarations.
\begin{cfa}
const * const * const * const ccccp = ...
const & const & const & const ccccr = ...
\end{cfa}

In the initial \CFA reference design, the goal was to make the reference type a \emph{real} data type \vs a restricted \CC reference, which is mostly used for choosing the argument-passing method, by-value or by-reference.
However, there is an inherent ambiguity for auto-dereferencing: every argument expression involving a reference variable can potentially mean passing the reference's value or address.
Without any restrictions, this ambiguity limits the behaviour of reference types in \CFA polymorphic functions, where a type @T@ can bind to a reference or non-reference type.
This ambiguity prevents the type system from treating reference types the same way as other types in many cases, even if type variables could be bound to reference types.
The reason is that \CFA uses a common \emph{object} trait (constructor, destructor and assignment operators) to handle passing dynamic concrete type arguments into polymorphic functions, and reference types are handled differently in these contexts so they do not satisfy this common interface.
Moreover, there is also some discrepancy in how reference types are treated in initialization and assignment expressions.
For example, in line 3 of the previous example code:
\begin{cfa}
int @&@ r1 = x, @&&@ r2 = r1, @&&&@ r3 = r2;	$\C{// references to x}$
\end{cfa}
each initialization expression is implicitly dereferenced to match the types, \eg @&x@, because an address is always required and a variable normally returns its value;
\CC does the same implicit dereference when initializing its reference variables.
For lines 6 and 9 of the previous example code:
\begin{cfa}
r1 = 3; r2 = 3; r3 = 3;	$\C{// change x: implicit dereference *r1, **r2, ***r3}$
@&@r3 = @&@y; @&&@r3 = @&&@r4;	$\C{// change r1, r2}$
\end{cfa}
there are no actual assignment operators defined for reference types that can be overloaded;
instead, all reference assignments are handled by semantic actions in the type system.
In fact, the reassignment of reference variables is set up internally to use the assignment operators for pointer types.
Finally, there is an annoying (although purely syntactic) issue with setting a mutable reference to a specific address like null, @int & r1 = *0p@, which looks like dereferencing a null pointer.
Here, the expression is rewritten as @int & r1 = &(*0p)@, like the variable dereference of @x@ above.
However, the implicit @&@ needs to be cancelled for an address, which is done with the @*@, \ie @&*@ cancel each other, giving @0p@.
Therefore, the dereferencing operation does not actually happen and the expression is translated into directly initializing the reference variable with the address.
Note, the same explicit dereference is used in \CC to set a reference variable to null.
\begin{c++}
int & ip = @*@(int *)nullptr;
\end{c++}
which is used in certain systems-programming situations.

When generic types were introduced to \CFA~\cite{Moss19}, some thought was given to allowing reference types as type arguments.
\begin{cfa}
forall( T ) struct vector { T t; };
vector( int @&@ ) vec;	$\C{// vector of references to ints}$
\end{cfa}
While it is possible to write a reference type as the argument to a generic type, it is disallowed in assertion checking if the generic type requires the object trait for the type argument (a fairly common use case).
Even if the object trait can be made optional, the current type system often misbehaves by adding undesirable auto-dereference on the referenced-to value rather than the reference variable itself, as intended.
Some tweaks are necessary to accommodate reference types in polymorphic contexts and it is unclear what can or cannot be achieved.
Currently, there are contexts where a \CFA programmer must use pointer types, giving up the benefits of auto-dereference operations and better syntax from reference types.

\section{Tuple Types}

The addition of tuple types to \CFA can be traced back to the original design by David Till in \mbox{K-W C}~\cite{Till89,Buhr94a}, a predecessor project of \CFA.
The primary purpose of tuples is to eliminate output parameters or the creation of an aggregate type to return multiple values from a function, called a multiple-value-returning (MVR) function.
The following example shows the two techniques for a function returning three values.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
\begin{cfa}
int foo( int &p2, int &p3 );  // in/out parameters
int x, y = 3, z = 4;
x = foo( y, z );  // return 3 values
\end{cfa}
&
\begin{cfa}
struct Ret { int x, y, z; };
Ret foo( int p2, int p3 );  // multiple return values
Ret ret = { .y = 3, .z = 4 };
ret = foo( ret.y, ret.z );  // return 3 values
\end{cfa}
\end{tabular}
\end{cquote}
where K-W C allows direct return of multiple values.
\begin{cfa} @[int, int, int]@ foo( int p2, int p3 ); @[x, y, z]@ = foo( y, z ); // return 3 values into a tuple \end{cfa} Along with simplifying returning multiple values, tuples can be extended to simplify a number of other common context that normally require multiple statements and/or additional declarations, all of which reduces coding time and errors. \begin{cfa} [x, y, z] = 3; $\C[2in]{// x = 3; y = 3; z = 3, where types are different}$ [x, y] = [y, x]; $\C{// int tmp = x; x = y; y = tmp;}$ void bar( int, int, int ); @bar@( foo( 3, 4 ) ); $\C{// int t0, t1, t2; [t0, t1, t2] = foo( 3, 4 ); bar( t0, t1, t2 );}$ x = foo( 3, 4 )@.1@; $\C{// int t0, t1, t2; [t0, t1, t2] = foo( 3, 4 ); x = t1;}\CRT$ \end{cfa} For the call to @bar@, the three results from @foo@ are \newterm{flattened} into individual arguments. Flattening is how tuples interact with parameter and subscript lists, and with other tuples, \eg: \begin{cfa} [ [ x, y ], z, [a, b, c] ] = [2, [3, 4], foo( 3, 4) ] $\C{// structured}$ [ x, y, z, a, b, c] = [2, 3, 4, foo( 3, 4) ] $\C{// flattened, where foo results are t0, t1, t2}$ \end{cfa} Note, the \CFA type-system supports complex composition of tuples and C type conversions using a costing scheme giving lower cost to widening conversions that do not truncate a value. \begin{cfa} [ int, int ] foo$\(_1\)$( int ); $\C{// overloaded foo functions}$ [ double ] foo$\(_2\)$( int ); void bar( int, double, double ); bar( foo( 3 ), foo( 3 ) ); \end{cfa} The type resolver only has the tuple return types to resolve the call to @bar@ as the @foo@ parameters are identical, which involves unifying the flattened @foo@ return values with @bar@'s parameter list. However, no combination of @foo@s is an exact match with @bar@'s parameters; thus, the resolver applies C conversions to obtain a best match. 
The resulting minimal cost expression is @bar( foo@$_1$@( 3 ), foo@$_2$@( 3 ) )@, where the two possible conversions are (@int@, {\color{red}@int@}, @double@) to (@int@, {\color{red}@double@}, @double@) with a safe (widening) conversion from @int@ to @double@ versus ({\color{red}@double@}, {\color{red}@int@}, {\color{red}@int@}) to ({\color{red}@int@}, {\color{red}@double@}, {\color{red}@double@}) with one unsafe (narrowing) conversion from @double@ to @int@ and two safe conversions from @int@ to @double@.
The programming language Go provides a similar but simpler tuple mechanism, as it does not have overloaded functions.

The K-W C tuples are merely syntactic sugar, as there is no mechanism to define a variable with tuple type.
For the tuple-returning implementation, an implicit @struct@ type is created for the returned tuple and the values are extracted by field access operations.
For the tuple-assignment implementation, the left-hand tuple expression is expanded into assignments of each component, creating temporary variables to avoid unexpected side effects.
For example, a structure is returned from @foo@ and its fields are individually assigned to the left-hand variables, @x@, @y@, @z@.

In the second implementation of \CFA tuples by Rodolfo Gabriel Esteves~\cite{Esteves04}, a different strategy is taken to handle MVR functions.
The return values are converted to output parameters passed in by pointers.
When the return values of an MVR function are directly used in an assignment expression, the addresses of the left-hand operands can be directly passed into the function;
composition of MVR functions is handled by creating temporaries for the returns.
For example, given a function returning two values:
\begin{cfa}
[int, int] gives_two() {
	int r1, r2;
	...
	return [ r1, r2 ];
}
int x, y;
[x, y] = gives_two();
\end{cfa}
The K-W C implementation translates the program to:
\begin{cfa}
struct _tuple2 { int _0; int _1; };
struct _tuple2 gives_two();
int x, y;
struct _tuple2 _tmp = gives_two();
x = _tmp._0; y = _tmp._1;
\end{cfa}
while Rodolfo's implementation translates it to:
\begin{cfa}
void gives_two( int * r1, int * r2 ) {
	...
	*r1 = ...; *r2 = ...;
	return;
}
int x, y;
gives_two( &x, &y );
\end{cfa}
and inside the body of the function @gives_two@, the return statement is rewritten as assignments into the passed-in argument addresses.
This implementation looks more concise, and in the case of returning values having nontrivial types (\eg aggregates), it saves unnecessary copying.
For example,
\begin{cfa}
[ int, int ] gives_two();
int x, y;
[ x, y ] = gives_two();
\end{cfa}
becomes
\begin{cfa}
void gives_two( int &, int & );
int x, y;
gives_two( x, y );
\end{cfa}
eliminating any copying in or out of the call.
The downside is indirection within @gives_two@ to access values, unless values get hoisted into registers for some period of time, which is common.

Interestingly, in the third implementation of \CFA tuples by Robert Schluntz~\cite[\S~3]{Schluntz17}, the tuple type reverts to being structure-based, where it remains in the current version of the cfa-cc translator.
The reason for the reversion was to make tuples first-class types in \CFA, \ie allow tuple variables.
This extension was possible because, in parallel with Schluntz's work, generic types were being added independently by Moss~\cite{Moss19}, and the tuple variables leveraged the same implementation techniques as the generic variables.
However, after experience gained building the \CFA runtime system, making tuple types first-class seems to add little benefit.
The main reason is that tuple usage is largely unstructured,
\begin{cfa}
[int, int] foo( int, int );	$\C[2in]{// unstructured function}$
typedef [int, int] Pair;	$\C{// tuple type}$
Pair bar( Pair );	$\C{// structured function}$
int x = 3, y = 4;
[x, y] = foo( x, y );	$\C{// unstructured call}$
Pair ret = [3, 4];	$\C{// tuple variable}$
ret = bar( ret );	$\C{// structured call}\CRT$
\end{cfa}
Basically, creating the tuple-type @Pair@ is largely equivalent to creating a @struct@ type, and creating more types and names defeats the simplicity that tuples are trying to achieve.
Furthermore, since operator overloading in \CFA is implemented by treating operators as overloadable functions, tuple types are very rarely used in a structured way.
When a tuple-type expression appears in a function call (except assignment expressions, which are handled differently by mass- or multiple-assignment expansions), it is always flattened, and the tuple structure of a function parameter is not considered part of the function signature.
For example,
\begin{cfa}
void f( int, int );
void f( [int, int] );
f( 3, 4 );   // ambiguous call
\end{cfa}
the two prototypes for @f@ have the same signature (a function taking two @int@s and returning nothing), and are therefore invalid overloads.
Note, the ambiguity error occurs at the call rather than at the second declaration of @f@, because it is possible to have multiple equivalent prototype declarations of a function.
Furthermore, ordinary polymorphic type-parameters are not allowed to have tuple types.
\begin{cfa}
forall( T ) T foo( T );
int x, y, z;
[x, y, z] = foo( [x, y, z] );   // substitute tuple type for T
\end{cfa}
Without this restriction, the expression resolution algorithm can create too many argument-parameter matching options.
For example, with multiple type parameters,
\begin{cfa}
forall( T, U ) void f( T, U );
f( [1, 2, 3, 4] );
\end{cfa}
the call to @f@ can be interpreted as @T = [1]@ and @U = [2, 3, 4]@, or @T = [1, 2]@ and @U = [3, 4]@, and so on.
The restriction ensures type checking remains tractable and does not take too long to compute.
Therefore, tuple types are never present in any fixed-argument function calls.

Finally, a type-safe variadic argument signature was added by Robert Schluntz~\cite[\S~4.1.2]{Schluntz17} using @forall@ and a new tuple parameter-type, denoted by the keyword @ttype@ in Schluntz's implementation, but changed to the ellipsis syntax similar to \CC's template parameter pack.
For C variadics, the number and types of the arguments must be conveyed in some way, \eg @printf@ uses a format string indicating the number and types of the arguments.
\VRef[Figure]{f:CVariadicMaxFunction} shows an $N$ argument @maxd@ function using the C untyped @va_list@ interface.
In the example, the first argument is the number of following arguments, and the following arguments are assumed to be @double@;
looping is used to traverse the argument pack from left to right.
The @va_list@ interface walks up the stack (by address) looking at the arguments pushed by the caller.
(Magic knowledge is needed for arguments pushed using registers.)
\begin{figure}
\begin{cfa}
double maxd( int @count@, ... ) {
	double max = 0;
	va_list args;
	va_start( args, count );
	for ( int i = 0; i < count; i += 1 ) {
		double num = va_arg( args, double );
		if ( num > max ) max = num;
	}
	va_end( args );
	return max;
}
printf( "%g\n", maxd( @4@, 25.0, 27.3, 26.9, 25.7 ) );
\end{cfa}
\caption{C Variadic Maximum Function}
\label{f:CVariadicMaxFunction}
\end{figure}
There are two common patterns for using variadic functions in \CFA.
\begin{enumerate}[leftmargin=*]
\item
Argument forwarding to another function, \eg:
\begin{cfa}
forall( T *, TT ...
	| { @void ?{}( T &, TT );@ } )   // constructor assertion
T * new( TT tp ) { return ((T *)malloc())@{@ tp @}@; }   // call constructor on storage
\end{cfa}
Note, the assertion on @T@ requires it to have a constructor @?{}@.
The function @new@ calls @malloc@ to obtain storage and then invokes @T@'s constructor, passing the tuple pack by flattening it over the constructor's arguments, \eg:
\begin{cfa}
struct S { int i, j; };
void ?{}( S & s, int i, int j ) { s.[ i, j ] = [ i, j ]; }   // constructor
S * sp = new( 3, 4 );   // flatten [3, 4] into call ?{}( 3, 4 );
\end{cfa}
\item
Structural recursion for processing the argument-pack values one at a time, \eg:
\begin{cfa}
forall( T | { int ?>?( T, T ); } )
T max( T v1, T v2 ) { return v1 > v2 ? v1 : v2; }
$\vspace{-10pt}$
forall( T, TT ... | { T max( T, T ); T max( TT ); } )
T max( T arg, TT args ) { return max( arg, max( args ) ); }
\end{cfa}
The first non-recursive @max@ function is the polymorphic base-case for the recursion, \ie, find the maximum of two identically typed values with a greater-than (@>@) operator.
The second recursive @max@ function takes two parameters, a @T@ and a @TT@ tuple pack, handling all argument lengths greater than two.
The recursive function computes the maximum of its first argument and the maximum value of the rest of the tuple pack.
The call of @max@ with one argument is the recursive call, where the tuple pack is converted into two arguments by taking the first value (lisp @car@) from the tuple pack as the first argument (flattening) and the remaining pack becomes the second argument (lisp @cdr@).
The recursion stops when the argument pack is empty.
For example, @max( 2, 3, 4 )@ matches with the recursive function, which performs @return max( 2, max( [3, 4] ) )@, and one more step yields @return max( 2, max( 3, 4 ) )@, where the tuple pack is empty and the two-argument base case applies.
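For comparison, the same structural-recursion pattern exists in \CC variadic templates, though each call chain is expanded at compile time rather than compiled once as in \CFA; an illustrative sketch (the name @vmax@ is hypothetical):

```cpp
// C++ analog of the structural recursion: a two-argument base case plus a
// variadic case that peels off the first argument (car) and recurses on the
// remaining pack (cdr); overload resolution prefers the base case when
// exactly two arguments remain.
template<typename T>
T vmax(T v1, T v2) { return v1 > v2 ? v1 : v2; }

template<typename T, typename... Ts>
T vmax(T arg, Ts... args) { return vmax(arg, vmax(args...)); }
```

For example, @vmax( 2, 3, 4 )@ expands to @vmax( 2, vmax( 3, 4 ) )@, mirroring the \CFA reduction of @max@.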
\end{enumerate}

As an aside, polymorphic functions are precise with respect to their parameter types, \eg @max@ states all argument values must be the same type, which logically makes sense.
However, this precision precludes normal C conversions among the base types, \eg the mixed-mode call @max( 2h, 2l, 3.0f, 3.0ld )@ fails because the types are not the same.
Unfortunately, this failure violates programmer intuition because there are specialized two-argument non-polymorphic versions of @max@ that work, \eg @max( 3, 3.5 )@.
Allowing C conversions for polymorphic types would require a significant change to the type resolver.

Currently in \CFA, variadic polymorphic functions are the only place tuple types are used.
And because \CFA compiles polymorphic functions rather than using template expansion, many wrapper functions are generated to implement both user-defined generic-types and polymorphism with variadics.
Fortunately, the only permitted operations on polymorphic function parameters are given by the list of assertion (trait) functions.
Nevertheless, this small set of functions eventually needs to be called with flattened tuple arguments.
Unfortunately, packing the variadic arguments into a rigid @struct@ type and generating all the required wrapper functions is significant work and largely wasted because most are never called.
Interested readers can refer to pages 77--80 of Robert Schluntz's thesis to see how verbose the translator output is to implement a simple variadic call with 3 arguments.
As the number of arguments increases, \eg a call with 5 arguments, the translator generates concrete @struct@ types for a 4-tuple and a 3-tuple along with all the polymorphic type data for them.
An alternative approach is to put the variadic arguments into an array, along with an offset array to retrieve each individual argument.
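This array-based alternative can be sketched concretely; the layout below is a hypothetical illustration and omits the generated type information that \CFA would add to guarantee type safety.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch of the offset-array idea: heterogeneous variadic
// arguments are packed into a byte buffer, and a parallel offset array
// records where each argument starts, so a callee can peel off the first
// argument and recurse on the remainder.
int demo() {
    char data[sizeof(int) + sizeof(double)];
    size_t offset[2] = { 0, sizeof(int) };

    int i = 3; double d = 3.5;                       // the argument pack (int, double)
    std::memcpy(data + offset[0], &i, sizeof i);     // pack first argument
    std::memcpy(data + offset[1], &d, sizeof d);     // pack second argument

    // Callee side: extract the first element, then advance past it.
    int first;
    std::memcpy(&first, data + offset[0], sizeof first);
    char *rest_data = data + offset[1];              // remainder of the pack
    double second;
    std::memcpy(&second, rest_data, sizeof second);  // next element of the pack
    return first == 3 && second == 3.5 ? 1 : 0;
}
```

The @memcpy@ calls also sidestep alignment issues, since elements inside the packed buffer are not necessarily aligned for their types.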
This method is similar to how the C @va_list@ object is used (and how \CFA accesses polymorphic fields in a generic type), but the \CFA variadics generate the required type information to guarantee type safety.
For example, given the following heterogeneous, variadic, typed @print@ and usage:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
forall( T, TT ... | { void print( T ); void print( TT ); } )
void print( T arg , TT rest ) {
	print( arg );
	print( rest );
}
\end{cfa}
&
\begin{cfa}
void print( int i ) { printf( "%d ", i ); }
void print( double d ) { printf( "%g ", d ); }
...  // other types
int i = 3;
double d = 3.5;
print( i, d );
\end{cfa}
\end{tabular}
\end{cquote}
it would look like the following using the offset-array implementation approach:
\begin{cfa}
void print( T arg, char * _data_rest, size_t * _offset_rest ) {
	print( arg );
	print( *(typeof( rest.0 ) *)_data_rest,	$\C{// first element of rest}$
		_data_rest + _offset_rest[0],	$\C{// remainder of data}$
		_offset_rest + 1 );	$\C{// remainder of offset array}$
}
\end{cfa}
where the fixed-arg polymorphism for @T@ can be handled by the standard @void *@-based \CFA polymorphic calling conventions, and the type information can all be deduced at the call site.
Note, the variadic @print@ supports heterogeneous types because the polymorphic @T@ is not returned (unlike variadic @max@), so there is no cascade of type relationships.

Turning tuples into first-class values in \CFA does have a few benefits, namely allowing pointers to tuples and arrays of tuples to exist.
However, it seems unlikely that these types have realistic use cases that cannot be achieved without them.
And having a pointer-to-tuple type potentially forbids the simple offset-array implementation of variadic polymorphism.
For example, in the case where a type assertion requests the pointer type @TT *@ in the above example, it forces the tuple type to be a @struct@, thereby incurring a high cost.
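For contrast, \CC's library tuple is fully first-class, so tuple variables, pointers to tuples, and arrays of tuples all exist, precisely because @std::tuple@ commits to a concrete @struct@-like layout; an illustrative sketch:

```cpp
#include <tuple>

// Illustrative contrast: C++'s std::tuple is a first-class (struct-like)
// value, so tuple variables, pointers to tuples, and arrays of tuples all
// work, at the cost of committing to a concrete memory layout.
int demo() {
    std::tuple<int, int> t{3, 4};                    // tuple variable
    std::tuple<int, int> *tp = &t;                   // pointer to tuple
    std::tuple<int, int> arr[2] = {{1, 2}, {3, 4}};  // array of tuples
    return std::get<0>(*tp) + std::get<1>(t) + std::get<0>(arr[1]);  // 3 + 4 + 3
}
```

This is exactly the trade-off described above: first-class tuples gain addressability but forbid the lighter unstructured implementation.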
My conclusion is that tuples should not be structured (first-class), rather they should be unstructured.
This agrees with Rodolfo's original description:
\begin{quote}
As such, their [tuples] use does not enforce a particular memory layout, and in particular, does not guarantee that the components of a tuple occupy a contiguous region of memory.~\cite[pp.~74--75]{Esteves04}
\end{quote}
allowing the simplified implementations for MVR and variadic functions.

Finally, a missing feature for tuples is using an MVR function in an initialization context.
Currently, this feature is \emph{only} possible when declaring a tuple variable.
\begin{cfa}
[int, int] ret = gives_two();	$\C{// no constructor call (unstructured)}$
Pair ret = gives_two();	$\C{// constructor call (structured)}$
\end{cfa}
However, this forces the programmer to use a tuple variable and possibly a tuple type to support a constructor, when they actually want separate variables with separate constructors.
And as stated previously, tuple variables (structured tuples) are rare in general \CFA programming so far.
To address this issue, while retaining the ability to leverage constructors, the following new tuple-like declaration syntax is proposed.
\begin{cfa}
[ int x, int y ] = gives_two();
\end{cfa}
where the semantics is:
\begin{cfa}
T t0, t1;
[ t0, t1 ] = gives_two();
T x = t0;	// constructor call
T y = t1;	// constructor call
\end{cfa}
and the implementation performs as much copy elision as possible.

\section{\lstinline{inline} Substructure}

C allows an anonymous aggregate type (@struct@ or @union@) to be embedded (nested) within another one, \eg a tagged union.
\begin{cfa}
struct S {
	unsigned int tag;
	union {	$\C{// anonymous nested aggregate}$
		int x; double y; char z;
	};
} s;
\end{cfa}
The @union@ field-names are hoisted into the @struct@, so there is direct access, \eg @s.x@;
hence, field names must be unique.
For a nested anonymous @struct@, both field names and values are hoisted.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{35pt}}l@{}}
original & rewritten \\
\begin{cfa}
struct S {
	struct { int i, j; };
	struct { int k, l; };
};
\end{cfa}
&
\begin{cfa}
struct S {
	int i, j;
	int k, l;
};
\end{cfa}
\end{tabular}
\end{cquote}
As an aside, C nested \emph{named} aggregates behave in a (mysterious) way because the nesting is allowed but there is no ability to use qualification to access an inner type, like the \CC type operator `@::@'.
In fact, all named nested aggregates are hoisted to global scope, regardless of the nesting depth.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{35pt}}l@{}}
original & rewritten \\
\begin{cfa}
struct S {
	struct T { int i, j; };
	struct U { int k, l; };
};
\end{cfa}
&
\begin{cfa}
struct T { int i, j; };
struct U { int k, l; };
struct S { };
\end{cfa}
\end{tabular}
\end{cquote}
Hence, the possible accesses are:
\begin{cfa}
struct S s;	// s cannot access any fields
struct T t;	t.i; t.j;
struct U u;	u.k; u.l;
\end{cfa}
and the hoisted type names can clash with global type names.
For good reasons, \CC chose to change this semantics~\cite[C.1.2.3.3]{C++}:
\begin{description}[leftmargin=*,topsep=3pt,itemsep=0pt,parsep=0pt]
\item[Change:] A struct is a scope in C++, not in C.
\item[Rationale:] Class scope is crucial to C++, and a struct is a class.
\item[Effect on original feature:] Change to semantics of well-defined feature.
\item[Difficulty of converting:] Semantic transformation.
\item[How widely used:] C programs use @struct@ extremely frequently, but the change is only noticeable when @struct@, enumeration, or enumerator names are referred to outside the @struct@.
The latter is probably rare.
\end{description}
However, there is no syntax to access from a variable through a type to a field.
\begin{cfa}
struct S s;
@s::T@.i;  @s::U@.k;
\end{cfa}
As an aside, \CFA also provides a backwards compatible \CC nested-type.
\begin{cfa}
struct S {
	@auto@ struct T { int i, j; };
	@auto@ struct U { int k, l; };
};
\end{cfa}
The keyword @auto@ denotes a local (scoped) declaration, and here, it implies a local (scoped) type, using dot as the type qualifier, \eg @S.T t@.

% https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html
A polymorphic extension to nested aggregates appears in the Plan-9 C dialect, used in Bell Labs' Plan-9 research operating system.
The feature is called \newterm{unnamed substructures}~\cite[\S~3.3]{Thompson90new}, and continues to be supported by @gcc@ and @clang@ using the extension @-fplan9-extensions@.
The goal is to provide the same effect as the nested aggregate with the aggregate type defined elsewhere, which requires it to be named.
\begin{cfa}
union U {	$\C{// unnested named}$
	int x; double y; char z;
} u;
struct W {
	int i; double j; char k;
} w;
struct S {
	@struct W;@	$\C{// Plan-9 substructure}$
	unsigned int tag;
	@union U;@	$\C{// Plan-9 substructure}$
} s;
\end{cfa}
Note, the position of the substructure is normally unimportant.
Like the anonymous nested types, the aggregate field names are hoisted into @struct S@, so there is direct access, \eg @s.x@ and @s.i@.
However, like the implicit C hoisting of nested structures, the field names must be unique and the type names are now at a different scope level, unlike type nesting in \CC.
In addition, a pointer to a structure is automatically converted to a pointer to an anonymous field for assignments and function calls, providing containment inheritance with implicit subtyping, \ie @U@ $\subset$ @S@ and @W@ $\subset$ @S@.
For example:
\begin{cfa}
void f( union U * u );
void g( struct W * );
union U * up;
struct W * wp;
struct S * sp;
up = sp;	$\C{// assign pointer to U in S}$
wp = sp;	$\C{// assign pointer to W in S}$
f( &s );	$\C{// pass pointer to U in S}$
g( &s );	$\C{// pass pointer to W in S}$
\end{cfa}
\CFA extends the Plan-9 substructure by allowing polymorphism for values and pointers.
The extended substructure is denoted using @inline@, allowing backwards compatibility to existing Plan-9 features.
\begin{cfa}
struct S {
	@inline@ W;	$\C{// extended Plan-9 substructure}$
	unsigned int tag;
	@inline@ U;	$\C{// extended Plan-9 substructure}$
} s;
\end{cfa}
Note, like \CC, \CFA allows optional prefixing of type names with their kind, \eg @struct@, @union@, and @enum@, unless there is ambiguity with variable names in the same scope.
The following shows both value and pointer polymorphism.
\begin{cfa}
void f( U, U * );	$\C{// value, pointer}$
void g( W, W * );	$\C{// value, pointer}$
U u, * up;
S s, * sp;
W w, * wp;
u = s; up = sp;	$\C{// value, pointer}$
w = s; wp = sp;	$\C{// value, pointer}$
f( s, &s );	$\C{// value, pointer}$
g( s, &s );	$\C{// value, pointer}$
\end{cfa}
In general, non-standard C features (@gcc@) do not need any special treatment, as they are directly passed through to the C compiler.
However, the Plan-9 semantics allow implicit conversions from the outer type to the inner type, which means the \CFA type resolver must take this information into account.
Therefore, the \CFA translator must implement the Plan-9 features and insert the necessary type conversions into the translated code output.
In the current version of \CFA, this is the only kind of implicit type conversion other than the standard C conversions.
Since variable overloading is possible in \CFA, \CFA's implementation of Plan-9 polymorphism allows duplicate field names.
When an outer field and an embedded field have the same name and type, the inner field is shadowed and cannot be accessed directly by name.
While such definitions are allowed, duplicate field names are generally not good practice and should be avoided if possible.
Plan-9 fields can be nested, and a @struct@ definition can contain multiple Plan-9 embedded fields.
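The containment-inheritance effect resembles \CC derived-to-base conversion; the following illustrative \CC sketch (with hypothetical names, and a @struct@ standing in for the union, since \CC cannot inherit from a union) shows the analogous implicit pointer conversions:

```cpp
// C++ sketch of containment inheritance: S embeds W and U, and a pointer to S
// converts implicitly to a pointer to either embedded part (with a pointer
// adjustment where needed), similar to the conversions the CFA translator
// inserts for Plan-9 substructures.
struct W { int i; double j; char k; };
struct U { int x; };        // struct stand-in for the union U

struct S : W, U {           // roughly: inline W; inline U;
    unsigned int tag;
};

int f(U *u) { return u->x; }    // accepts the U part of an S
int g(W *w) { return w->i; }    // accepts the W part of an S

int demo() {
    S s;
    s.i = 1; s.x = 2; s.tag = 0;    // embedded field names directly accessible
    W *wp = &s;                     // implicit conversion S * to W *
    U *up = &s;                     // implicit conversion S * to U *
    return f(up) * 10 + g(wp);      // 2 * 10 + 1
}
```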
In particular, the \newterm{diamond pattern}~\cite[\S~6.1]{Stroustrup89}\cite[\S~4]{Cargill91} can occur and result in a nested field being embedded twice.
\begin{cfa}
struct A { int x; };
struct B { inline A; };
struct C { inline A; };
struct D { inline B; inline C; };
D d;
\end{cfa}
In the above example, the expression @d.x@ becomes ambiguous, since it can refer to the indirectly embedded field either from @B@ or from @C@.
It is still possible to disambiguate the expression by first casting the outer struct to one of the directly embedded types, such as @((B)d).x@.
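The same ambiguity arises with \CC non-virtual multiple inheritance; an illustrative rendering of the diamond:

```cpp
// C++ rendering of the diamond pattern: A is embedded twice in D (once via B,
// once via C), so d.x is ambiguous and must be qualified by a direct base,
// analogous to the ((B)d).x cast in CFA.
struct A { int x; };
struct B : A { };
struct C : A { };
struct D : B, C { };

int demo() {
    D d;
    // d.x = 1;     // error: ambiguous, A::x is reachable via B and via C
    d.B::x = 1;     // disambiguate through the direct base B
    d.C::x = 2;     // the second, distinct copy of A::x
    return d.B::x * 10 + d.C::x;    // the two copies hold different values
}
```

As in the Plan-9 case, the two embedded copies of @A@ are distinct objects, so disambiguation selects which copy is accessed rather than merging them.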