Timestamp:
Apr 3, 2017, 7:04:30 PM (8 years ago)
Author:
Rob Schluntz <rschlunt@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
fbd7ad6
Parents:
ae6cc8b
Message:

incorporate Peter's feedback, handle many TODOs

File:
1 edited

  • doc/rob_thesis/tuples.tex

    rae6cc8b r7493339  
    44
    55\section{Introduction}
    6 % TODO: named return values are not currently implemented in CFA - tie in with named tuples? (future work)
    7 % TODO: no passing input parameters by assignment, instead will have reference types => this is not a very C-like model and greatly complicates syntax for likely little gain (and would cause confusion with already supported return-by-rerefence)
    8 % TODO: tuples are allowed in expressions, exact meaning is defined by operator overloading (e.g. can add tuples). An important caveat to note is that it is currently impossible to allow adding two triples but prevent adding a pair with a quadruple (single flattening/structuring conversions are implicit, only total number of components matters). May be able to solve this with more nuanced conversion rules (future work)
     6% TODO: no passing input parameters by assignment, instead will have reference types => this is not a very C-like model and greatly complicates syntax for likely little gain (and would cause confusion with already supported return-by-reference)
    97% TODO: benefits (conclusion) by Till: reduced number of variables and statements; no specified order of execution for multiple assignment (more optimization freedom); can store parameter lists in variable; MRV routines (natural code); more convenient assignment statements; simple and efficient access of record fields; named return values more legible and efficient in use of storage
    108
     
    7371const char * str = "hello world";
    7472char ch;                            // pre-allocate return value
    75 int freq = most_frequent(str, &ch); // pass return value as parameter
     73int freq = most_frequent(str, &ch); // pass return value as out parameter
    7674printf("%s -- %d %c\n", str, freq, ch);
    7775\end{cfacode}
    78 Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values.
    79 This complicates the call site with a sequence of variable declarations leading up to the call.
     76Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values, which complicates the call site with a sequence of variable declarations leading up to the call.
    8077Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the routine signature whether the callee expects such a parameter to be initialized before the call.
    8178Furthermore, while many C routines that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, many others are not null-safe.
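For illustration, consider a plausible prototype for the routine used above (the exact declaration is not shown earlier, so this signature is an assumption inferred from the call); nothing in the signature indicates whether @ch@ must point to initialized storage or whether @NULL@ is an acceptable argument.
\begin{cfacode}
int most_frequent(const char * str, char * ch);  // out parameter? must ch be non-NULL?
\end{cfacode}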
     
    109106}
    110107\end{cfacode}
    111 This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type.
     108This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type, which precludes the bug seen with out parameters.
    112109
    113110The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site.
     
    136133In this case, there is only one option for a function named @most_frequent@ that takes a string as input.
    137134This function returns two values, one @int@ and one @char@.
    138 There are four options for a function named @process@, but only two which accept two arguments, and of those the best match is (3), which is also an exact match.
     135There are four options for a function named @process@, but only two that accept two arguments, and of those the best match is (3), which is also an exact match.
    139136This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively.
    140137
     
    148145The previous expression has 3 \emph{components}.
    149146Each component in a tuple expression can be any \CFA expression, including another tuple expression.
    150 % TODO: Tuple expressions can appear anywhere that any other expression can appear (...?)
    151147The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization.
    152148It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used.
    153149Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}.
    154 % TODO: does this statement still apply, and if so to what extent?
    155 %   Tuples are a compile-time phenomenon and have little to no run-time presence.
    156150
    157151\subsection{Tuple Variables}
     
    166160These variables can be used in any of the contexts where a tuple expression is allowed, such as in the @printf@ function call.
    167161As in the @process@ example, the components of the tuple value are passed as separate parameters to @printf@, allowing very simple printing of tuple expressions.
    168 If the individual components are required, they can be extracted with a simple assignment, as in previous examples.
     162One way to access the individual components is with a simple assignment, as in previous examples.
    169163\begin{cfacode}
    170164int freq;
     
    254248\label{s:TupleAssignment}
    255249An assignment where the left side of the assignment operator has a tuple type is called tuple assignment.
    256 There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called Multiple and Mass Assignment, respectively.
     250There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{Multiple} and \emph{Mass} Assignment, respectively.
    257251\begin{cfacode}
    258252int x;
     
    272266A mass assignment assigns the value $R$ to each $L_i$.
    273267For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression.
    274 This differs from C cascading assignment (e.g. @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
     268These semantics differ from C cascading assignment (e.g. @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
    275269For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@.
    276270On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well.
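The difference is visible in a minimal sketch, assuming @x@ is an @int@ and @y@ is a @double@ as declared above.
\begin{cfacode}
[y, x] = 3.14;  // mass assignment: y == 3.14, x == 3
y = x = 3.14;   // C cascading assignment: x == 3, and therefore y == 3.0
\end{cfacode}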
     
    288282These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted.
    289283These semantics are a change from the original tuple design in \KWC \cite{Till89}, wherein tuple assignment was a statement that allowed cascading assignments as a special case.
    290 This decision wa made in an attempt to fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position.
     284The \KWC semantics fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position. % TODO: remove??
    291285While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility.
    292286Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user.
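For illustration, cascading tuple assignment under these semantics behaves as sketched below (the variables are illustrative).
\begin{cfacode}
int a, b, c, d;
[a, b] = [c, d] = [10, 20];  // the inner multiple assignment produces the tuple [10, 20],
                             // which is then assigned to [a, b]
\end{cfacode}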
     
    315309void ?{}(S *, S);      // (4)
    316310
    317 [S, S] x = [3, 6.28];  // uses (2), (3)
    318 [S, S] y;              // uses (1), (1)
    319 [S, S] z = x.0;        // uses (4), (4)
     311[S, S] x = [3, 6.28];  // uses (2), (3), specialized constructors
     312[S, S] y;              // uses (1), (1), default constructor
     313[S, S] z = x.0;        // uses (4), (4), copy constructor
    320314\end{cfacode}
    321315In this example, @x@ is initialized by the multiple constructor calls @?{}(&x.0, 3)@ and @?{}(&x.1, 6.28)@, while @y@ is initialized by two default constructor calls @?{}(&y.0)@ and @?{}(&y.1)@.
     
    339333S s = t;
    340334\end{cfacode}
    341 The initialization of @s@ with @t@ works by default, because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.
     335The initialization of @s@ with @t@ works by default because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.
    342336
    343337\section{Member-Access Tuple Expression}
     
    354348Then the type of @a.[x, y, z]@ is @[T_x, T_y, T_z]@.
    355349
    356 Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (e.g. rearrange components, drop components, duplicate components, etc.).
     350Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (e.g., rearrange components, drop components, duplicate components, etc.).
    357351\begin{cfacode}
    358352[int, int, long, double] x;
     
    384378Since \CFA permits these tuple-access expressions using structures, unions, and tuples, \emph{member tuple expression} or \emph{field tuple expression} is more appropriate.
    385379
    386 It could be possible to extend member-access expressions further.
     380It is possible to extend member-access expressions further.
    387381Currently, a member-access expression whose member is a name requires that the aggregate is a structure or union, while a constant integer member requires the aggregate to be a tuple.
    388382In the interest of orthogonal design, \CFA could apply some meaning to the remaining combinations as well.
     
    403397One benefit of this interpretation is familiarity, since it is extremely reminiscent of tuple-index expressions.
    404398On the other hand, it could be argued that this interpretation is brittle, in that changing the order of members or adding new members to a structure becomes an error-prone operation.
    405 This problem is less of a concern with tuples, since modifying a tuple affects only the code which directly uses that tuple, whereas modifying a structure has far reaching consequences with every instance of the structure.
    406 
    407 As for @z.y@, a natural interpretation would be to extend the meaning of member tuple expressions.
     399This problem is less of a concern with tuples, since modifying a tuple affects only the code that directly uses the tuple, whereas modifying a structure has far reaching consequences for every instance of the structure.
     400
     401As for @z.y@, one interpretation is to extend the meaning of member tuple expressions.
    408402That is, currently the tuple must occur as the member, i.e. to the right of the dot.
    409403Allowing tuples to the left of the dot could distribute the member across the elements of the tuple, in much the same way that member tuple expressions distribute the aggregate across the member tuple.
    410404In this example, @z.y@ expands to @[z.0.y, z.1.y]@, allowing what is effectively a very limited compile-time field-sections map operation, where the argument must be a tuple containing only aggregates having a member named @y@.
    411 It is questionable how useful this would actually be in practice, since generally structures are not designed to have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions, to maximize the amount of overlap between different types.
     405It is questionable how useful this would actually be in practice, since structures often do not have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions to maximize the amount of overlap between different types.
    412406Perhaps more useful would be to allow arrays on the left side of the dot, which would likewise allow mapping a field access across the entire array, producing an array of the contained fields.
    413407The immediate problem with this idea is that C arrays do not carry around their size, which would make it impossible to use this extension for anything other than a simple stack allocated array.
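Returning to the distribution of @z.y@, a minimal sketch of the hypothetical semantics is shown below (the structure types are illustrative, not taken from the earlier example).
\begin{cfacode}
struct P { int y; };
struct Q { double y; };
[P, Q] z;
z.y;  // would expand to [z.0.y, z.1.y], of type [int, double]
\end{cfacode}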
    414408
    415 Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member access expressions versus member tuple expressions.
     409Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member-access expressions versus member-tuple expressions.
    416410\begin{cfacode}
    417411struct { int x, y; };
     
    426420\end{cfacode}
    427421Depending on exactly how the two tuples are combined, different results can be achieved.
    428 As such, a specific ordering would need to be imposed in order for this feature to be useful.
    429 Furthermore, this addition moves a member tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple.
    430 
    431 Ultimately, both of these extensions introduce complexity into the model, with relatively little peceived benefit.
     422As such, a specific ordering would need to be imposed to make this feature useful.
     423Furthermore, this addition moves a member-tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple.
     424
     425A second possibility is for \CFA to have named tuples, as they exist in Swift and D.
     426\begin{cfacode}
     427typedef [int x, int y] Point2D;
     428Point2D p1, p2;
     429p1.x + p1.y + p2.x + p2.y;
     430p1.0 + p1.1 + p2.0 + p2.1;  // equivalent
     431\end{cfacode}
     432In this simpler interpretation, a named tuple type carries with it a list of identifiers, each of which may be empty.
     433This approach fits naturally with the named return-value feature, and would likely go a long way towards implementing it.
     434
     435Ultimately, the first two extensions introduce complexity into the model, with relatively little perceived benefit, and so were dropped from consideration.
     436Named tuples are a potentially useful addition to the language, provided they can be parsed with a reasonable syntax.
     437
    432438
    433439\section{Casting}
     
    442448(int)f();  // choose (2)
    443449\end{cfacode}
    444 Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples.
     450Since casting is a fundamental operation in \CFA, casts need to be given a meaningful interpretation in the context of tuples.
    445451Taking a look at standard C provides some guidance with respect to the way casts should work with tuples.
    446452\begin{cfacode}[numbers=left]
     
    448454void g();
    449455
    450 (void)f();
    451 (int)g();
     456(void)f();  // valid, ignore results
     457(int)g();   // invalid, void cannot be converted to int
    452458
    453459struct A { int x; };
    454 (struct A)f();
     460(struct A)f();  // invalid
    455461\end{cfacode}
    456462In C, line 4 is a valid cast, which calls @f@ and discards its result.
    457463On the other hand, line 5 is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
    458 Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite{C11}.
    459 For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts which have the same or fewer number of components may be valid.
     464Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite[p.~91]{C11}.
     465For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid.
    460466
    461467Formally, a cast to a tuple type is valid when $n \leq m$, where $n$ is the number of components in the target type $T$ and $m$ is the number of components in the source type $S$, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$.
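A few illustrative casts under this rule (the declaration of @f3@ is an assumption introduced for this example):
\begin{cfacode}
[int, int, int] f3();
([int, double])f3();         // valid: 2 <= 3, and each component cast is a valid C cast
([int, int, int])f3();       // valid: same number of components
([int, int, int, int])f3();  // invalid: 4 > 3, the extra component cannot materialize from nothing
\end{cfacode}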
     
    509515\end{cfacode}
    510516Note that due to the implicit tuple conversions, this function is not restricted to the addition of two triples.
    511 A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type which can bind to @T@, with a pairwise @?+?@ over @T@.
    512 For example, these expressions will also succeed and produce the same value.
    513 \begin{cfacode}
    514 ([x.0, x.1]) + ([x.2, 10, 20, 30]);
    515 x.0 + ([x.1, x.2, 10, 20, 30]);
     517For example, these expressions also succeed and produce the same value.
     518A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type that can bind to @T@, with a pairwise @?+?@ over @T@.
     519\begin{cfacode}
     520([x.0, x.1]) + ([x.2, 10, 20, 30]);  // x + ([10, 20, 30])
     521x.0 + ([x.1, x.2, 10, 20, 30]);      // x + ([10, 20, 30])
    516522\end{cfacode}
    517523This presents a potential problem if structure is important, as these three expressions look like they should have different meanings.
    518 Further, these calls can be made ambiguous by adding seemingly different functions.
     524Furthermore, these calls can be made ambiguous by adding seemingly different functions.
    519525\begin{cfacode}
    520526forall(otype T | { T ?+?(T, T); })
     
    524530\end{cfacode}
    525531It is also important to note that these calls could be disambiguated if the function return types were different, as they likely would be for a reasonable implementation of @?+?@, since the return type is used in overload resolution.
    526 Still, this is a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate.
    527 It's possible that this could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions.
     532Still, these semantics are a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate.
     533These issues could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions.
    528534Care would be needed in this case to ensure that exact matches do not incur such a cost.
    529535\begin{cfacode}
     
    536542\end{cfacode}
    537543
    538 Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e. no implicit conversions are applied to assertion arguments).
     544Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e., no implicit conversions are applied to assertion arguments).
    539545This decision presents a conflict with the flexibility of tuples.
    540546\subsection{Assertion Inference}
     
    617623In the call to @f@, the second and third argument components are structured into a tuple argument.
    618624
    619 Expressions which may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion.
     625Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion.
    620626Each unique expression is assigned an identifier and is guaranteed to be executed exactly once.
    621627\begin{cfacode}
     
    624630g(h());
    625631\end{cfacode}
    626 Interally, this is converted to
     632Internally, this is converted to pseudo-\CFA
    627633\begin{cfacode}
    628634void g(int, double);
    629635[int, double] h();
    630 let unq<0> = f() : g(unq<0>.0, unq<0>.1);  // notation?
    631 \end{cfacode}
     636lazy [int, double] unq<0> = h();
     637g(unq<0>.0, unq<0>.1);
     638\end{cfacode}
     639That is, the function @h@ is evaluated lazily and its result is stored for subsequent accesses.
    632640Ultimately, unique expressions are converted into two variables and an expression.
    633641\begin{cfacode}
     
    638646[int, double] _unq0;
    639647g(
    640   (_unq0_finished_ ? _unq0 : (_unq0 = f(), _unq0_finished_ = 1, _unq0)).0,
    641   (_unq0_finished_ ? _unq0 : (_unq0 = f(), _unq0_finished_ = 1, _unq0)).1,
     648  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
     649  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1,
    642650);
    643651\end{cfacode}
     
    646654Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression.
    647655
    648 Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure.
    649 This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions, and could analyze function bodies when they are available to recursively detect impurity, to eliminate some unique expressions.
    650 It's possible that unique expressions could be exposed to the user through a language feature, but it's not immediately obvious that there is a benefit to doing so.
     656Currently, the \CFA translator has a very broad, imprecise definition of impurity (side-effects), where any function call is assumed to be impure.
     657This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions; the translator could also analyze function bodies, when they are available, to recursively detect impurity and eliminate some unique expressions.
     658It is possible that lazy evaluation could be exposed to the user through a lazy keyword with little additional effort.
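A sketch of what such a feature might look like, mirroring the pseudo-\CFA notation used above (hypothetical syntax, not currently implemented):
\begin{cfacode}
lazy [int, double] u = h();  // h is not called here
g(u.0, u.1);                 // h is called exactly once, at the first use of u
\end{cfacode}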
    651659
    652660Tuple member expressions are recursively expanded into a list of member access expressions.
     
    655663x.[0, 1.[0, 2]];
    656664\end{cfacode}
    657 Which becomes
     665which becomes
    658666\begin{cfacode}
    659667[x.0, [x.1.0, x.1.2]];
    660668\end{cfacode}
    661 Tuple member expressions also take advantage of unique expressions in the case of possible impurity.
     669Tuple-member expressions also take advantage of unique expressions in the case of possible impurity.
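For example, reusing the declaration of @h@ from above, a member expression applied to the result of a possibly impure call is wrapped in a unique expression before being expanded (a sketch).
\begin{cfacode}
[int, double] h();
h().[1, 0];  // becomes [unq<0>.1, unq<0>.0], where unq<0> stores the single evaluation of h()
\end{cfacode}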
    662670
    663671Finally, the various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
     
    711719});
    712720\end{cfacode}
    713 A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, e.g. in a unique expression.
    713721A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, e.g., in a unique expression.
    714722$N$ LHS variables are generated and constructed using the address of the tuple components, and a single RHS variable is generated to store the value of the RHS without any loss of precision.
    715723A nested statement expression is generated that performs the individual assignments and constructs the return value using the results of the individual assignments.
     
    785793The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
    786794There are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.
    787 
    788 \section{Variadic Functions}
    789 % TODO: should this maybe be its own chapter?
    790 C provides variadic functions through the manipulation of @va_list@ objects.
    791 A variadic function is one which contains at least one parameter, followed by @...@ as the last token in the parameter list.
    792 In particular, some form of \emph{argument descriptor} is needed to inform the function of the number of arguments and their types.
    793 Two common argument descriptors are format strings and counter parameters.
    794 It's important to note that both of these mechanisms are inherently redundant, because they require the user to specify information that the compiler knows explicitly.
    795 This required repetition is error prone, because it's easy for the user to add or remove arguments without updating the argument descriptor.
    796 In addition, C requires the programmer to hard code all of the possible expected types.
    797 As a result, it is cumbersome to write a function that is open to extension.
    798 For example, a simple function which sums $N$ @int@s,
    799 \begin{cfacode}
    800 int sum(int N, ...) {
    801   va_list args;
    802   va_start(args, N);
    803   int ret = 0;
    804   while(N) {
    805     ret += va_arg(args, int);  // have to specify type
    806     N--;
    807   }
    808   va_end(args);
    809   return ret;
    810 }
    811 sum(3, 10, 20, 30);  // need to keep counter in sync
    812 \end{cfacode}
    813 The @va_list@ type is a special C data type that abstracts variadic argument manipulation.
    814 The @va_start@ macro initializes a @va_list@, given the last named parameter.
    815 Each use of the @va_arg@ macro allows access to the next variadic argument, given a type.
    816 Since the function signature does not provide any information on what types can be passed to a variadic function, the compiler does not perform any error checks on a variadic call.
    817 As such, it is possible to pass any value to the @sum@ function, including pointers, floating-point numbers, and structures.
    818 In the case where the provided type is not compatible with the argument's actual type after default argument promotions, or if too many arguments are accessed, the behaviour is undefined \cite{C11}.
    819 Furthermore, there is no way to perform the necessary error checks in the @sum@ function at run-time, since type information is not carried into the function body.
    820 Since they rely on programmer convention rather than compile-time checks, variadic functions are generally unsafe.
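For example, each of the following calls to the @sum@ function above compiles without complaint, yet has undefined behaviour (the argument values are illustrative).
\begin{cfacode}
sum(3, 10, 3.14, 30);  // a double argument is read as an int by va_arg
sum(4, 10, 20, 30);    // the counter claims 4 arguments, but only 3 are passed
\end{cfacode}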
    821 
    822 In practice, compilers can provide warnings to help mitigate some of the problems.
    823 For example, GCC provides the @format@ attribute to specify that a function uses a format string, which allows the compiler to perform some checks related to the standard format specifiers.
    824 Unfortunately, this does not permit extensions to the format string syntax, so a programmer cannot extend the attribute to warn for mismatches with custom types.
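For example, the attribute below declares that the first parameter of the (illustrative) routine @my_log@ is a @printf@-style format string and that checking starts at the second argument.
\begin{cfacode}
void my_log(const char * fmt, ...) __attribute__((format(printf, 1, 2)));

my_log("%d items", 10);     // checked against the format string
my_log("%d items", "ten");  // GCC warning: format expects an int argument
\end{cfacode}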
    825 
    826 Needless to say, C's variadic functions are a deficient language feature.
    827 Two options were examined to provide better, type-safe variadic functions in \CFA.
    828 \subsection{Whole Tuple Matching}
    829 Option 1 is to change the argument matching algorithm, so that type parameters can match whole tuples, rather than just their components.
    830 This option could be implemented with two phases of argument matching when a function contains type parameters and the argument list contains tuple arguments.
    831 If flattening and structuring fail to produce a match, a second attempt at matching the function and argument combination is made where tuple arguments are not expanded and structure must match exactly, modulo non-tuple implicit conversions.
    832 For example:
    833 \begin{cfacode}
    834   forall(otype T, otype U | { T g(U); })
    835   void f(T, U);
    836 
    837   [int, int] g([int, int, int, int]);
    838 
    839   f([1, 2], [3, 4, 5, 6]);
    840 \end{cfacode}
    841 With flattening and structuring, the call is first transformed into @f(1, 2, 3, 4, 5, 6)@.
    842 Since the first argument of type @T@ does not have a tuple type, unification decides that @T=int@ and @1@ is matched as the first parameter.
    843 Likewise, @U@ does not have a tuple type, so @U=int@ and @2@ is accepted as the second parameter.
    844 There are now no remaining formal parameters, but there are remaining arguments and the function is not variadic, so the match fails.
    845 
    846 With the addition of an exact matching attempt, @T=[int,int]@ and @U=[int,int,int,int]@ and so the arguments type check.
    847 Likewise, when inferring assertion @g@, an exact match is found.
    848 
    849 This approach is strict with respect to argument structure by nature, which makes it syntactically awkward to use in ways that the existing tuple design is not.
    850 For example, consider a @new@ function which allocates memory using @malloc@ and constructs the result, using arbitrary arguments.
    851 \begin{cfacode}
    852 struct Array;
    853 void ?{}(Array *, int, int, int);
    854 
    855 forall(dtype T, otype Params | sized(T) | { void ?{}(T *, Params); })
    856 T * new(Params p) {
    857   return malloc(){ p };
    858 }
    859 Array * x = new([1, 2, 3]);
    860 \end{cfacode}
    861 The call to @new@ is not particularly appealing, since it requires the use of square brackets at the call-site, which is not required in any other function call.
    862 This shifts the burden from the compiler to the programmer, which is almost always wrong, and creates an odd inconsistency within the language.
    863 Similarly, in order to pass 0 variadic arguments, an explicit empty tuple must be passed into the argument list; otherwise, the exact matching rule would not have an argument to bind against.
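A sketch of the zero-argument case under this option (the @Unit@ type is an assumption, and a suitable default constructor is assumed to be in scope):
\begin{cfacode}
struct Unit;
Unit * u = new([]);  // an explicit empty tuple gives the exact matching rule an argument to bind against
\end{cfacode}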
    864 
    865 It should also be noted that the addition of an exact matching rule only affects the outcome for polymorphic type binding when tuples are involved.
    866 For non-tuple arguments, exact matching and flattening \& structuring are equivalent.
    867 For tuple arguments to a function without polymorphic formal parameters, flattening and structuring work whenever an exact match would have worked, since the tuple is flattened and implicitly restructured to its original structure.
    868 Thus there is nothing to be gained from permitting the exact matching rule to take effect when a function does not contain polymorphism and none of the arguments are tuples.
    869 
    870 Overall, this option takes a step in the right direction, but is contrary to the flexibility of the existing tuple design.
    871 
    872 \subsection{A New Typeclass}
    873 A second option is the addition of another kind of type parameter, @ttype@.
    874 Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
    875 In a given parameter list, there should be at most one @ttype@ parameter, and it must occur last; otherwise, by the previous rule, the call can never resolve.
    876 % TODO: a similar rule exists in C/C++ for "..."
    877 This idea essentially matches normal variadic semantics, and is strongly reminiscent of \CCeleven variadic templates.
    878 As such, @ttype@ variables will also be referred to as argument packs.
    879 This is the option that has been added to \CFA.
    880 
    881 Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is through recursion.
    882 Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful.
    883 Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.
    884 
    885 For example, a simple translation of the C sum function using @ttype@ would look like
    886 \begin{cfacode}
    887 int sum(){ return 0; }        // (0)
    888 forall(ttype Params | { int sum(Params); })
    889 int sum(int x, Params rest) { // (1)
    890   return x+sum(rest);
    891 }
    892 sum(10, 20, 30);
    893 \end{cfacode}
    894 Since (0) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
    895 In order to call (1), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list, binding @Params@ to @[20, 30]@.
    896 To finish the resolution of @sum@, an assertion parameter that matches @int sum(int, int)@ is required.
    897 As in the previous iteration, (0) is not a valid candidate, so (1) is examined with @Params@ bound to @[int]@, requiring the assertion @int sum(int)@.
    898 Next, (0) fails, and to satisfy (1) @Params@ is bound to @[]@, requiring an assertion @int sum()@.
    899 Finally, (0) matches and (1) fails, which terminates the recursion.
    900 Effectively, this traces as @sum(10, 20, 30)@ $\rightarrow$ @10+sum(20, 30)@ $\rightarrow$ @10+(20+sum(30))@ $\rightarrow$ @10+(20+(30+sum()))@ $\rightarrow$ @10+(20+(30+0))@.
    901 
    902 A point of note is that this version does not require any form of argument descriptor, since the \CFA type system keeps track of all of these details.
    903 It might be reasonable to take the @sum@ function a step further to enforce a minimum number of arguments, which could be done simply
    904 \begin{cfacode}
    905 int sum(int x, int y){
    906   return x+y;
    907 }
    908 forall(ttype Params | { int sum(int, Params); })
    909 int sum(int x, int y, Params rest) {
    910   return sum(x+y, rest);
    911 }
    912 sum(10, 20, 30);
    913 \end{cfacode}
    914 
    915 One more iteration permits the summation of any summable type, as long as all arguments are the same type.
    916 \begin{cfacode}
    917 trait summable(otype T) {
    918   T ?+?(T, T);
    919 };
    920 forall(otype R | summable(R))
    921 R sum(R x, R y){
    922   return x+y;
    923 }
    924 forall(otype R, ttype Params
    925   | summable(R)
    926   | { R sum(R, Params); })
    927 R sum(R x, R y, Params rest) {
    928   return sum(x+y, rest);
    929 }
    930 sum(3, 10, 20, 30);
    931 \end{cfacode}
    932 Unlike C, it is not necessary to hard code the expected type.
    933 This is naturally open to extension, in that any user-defined type with a @?+?@ operator is automatically able to be used with the @sum@ function.
    934 That is to say, the programmer who writes @sum@ does not need full program knowledge of every possible data type, unlike what is necessary to write an equivalent function using the standard C mechanisms.
    935 
    936 Going one last step, it is possible to achieve full generality in \CFA, allowing the summation of arbitrary lists of summable types.
    937 \begin{cfacode}
    938 trait summable(otype T1, otype T2, otype R) {
    939   R ?+?(T1, T2);
    940 };
    941 forall(otype T1, otype T2, otype R | summable(T1, T2, R))
    942 R sum(T1 x, T2 y) {
    943   return x+y;
    944 }
    945 forall(otype T1, otype T2, otype T3, ttype Params, otype R
    946   | summable(T1, T2, T3)
    947   | { R sum(T3, Params); })
    948 R sum(T1 x, T2 y, Params rest ) {
    949   return sum(x+y, rest);
    950 }
    951 sum(3, 10.5, 20, 30.3);
    952 \end{cfacode}
    953 The \CFA translator requires adding explicit @double ?+?(int, double)@ and @double ?+?(double, int)@ functions for this call to work, since implicit conversions are not supported for assertions.
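For reference, the two functions mentioned above could be written as follows; the casts ensure the built-in @double@ addition is selected rather than the operator being defined.
\begin{cfacode}
double ?+?(int x, double y) { return (double)x + y; }
double ?+?(double x, int y) { return x + (double)y; }
\end{cfacode}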
    954 
    955 C variadic syntax and @ttype@ polymorphism probably should not be mixed, since it is not clear where to draw the line to decide which arguments belong where.
    956 Furthermore, it might be desirable to disallow polymorphic functions to use C variadic syntax to encourage a Cforall style.
    957 Aside from calling C variadic functions, it is not obvious that there is anything that can be done with C variadics that could not also be done with @ttype@ parameters.
    958 
    959 Variadic templates in \CC require an ellipsis token to express that a parameter is a parameter pack and to expand a parameter pack.
    960 \CFA does not need an ellipsis in either case, since the type class @ttype@ is only used for variadics.
    961 An alternative design could have used an ellipsis combined with an existing type class.
    962 This approach was not taken because the largest benefit of the ellipsis token in \CC is the ability to expand a parameter pack within an expression, e.g. in fold expressions, which requires compile-time knowledge of the structure of the parameter pack, which is not available in \CFA.
    963 \begin{cppcode}
    964 template<typename... Args>
    965 void f(Args &... args) {
    966   g(&args...);  // expand to addresses of pack elements
    967 }
    968 \end{cppcode}
    969 As such, the addition of an ellipsis token would be purely an aesthetic change in \CFA today.
    970 
    971 It is possible to write a type-safe variadic print routine, which can replace @printf@
    972 \begin{cfacode}
    973 struct S { int x, y; };
    974 forall(otype T, ttype Params |
    975   { void print(T); void print(Params); })
    976 void print(T arg, Params rest) {
    977   print(arg);
    978   print(rest);
    979 }
    980 void print(char * x) { printf("%s", x); }
    981 void print(int x) { printf("%d", x);  }
    982 void print(S s) { print("{ ", s.x, ",", s.y, " }"); }
    983 print("s = ", (S){ 1, 2 }, "\n");
    984 \end{cfacode}
    985 This example routine showcases a variadic-template-like decomposition of the provided argument list.
    986 The individual @print@ routines allow printing a single element of a type.
    987 The polymorphic @print@ allows printing any list of types, as long as each individual type has a @print@ function.
    988 The individual print functions can be used to build up more complicated @print@ routines, such as for @S@, which is something that cannot be done with @printf@ in C.
    989 
    990 It is also possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions.
    991 For example, it is possible to write @new@ (i.e., a type-safe @malloc@ combined with a constructor call) as a library function.
    993 \begin{cfacode}
    994 struct Array;
    995 void ?{}(Array *, int, int, int);
    996 
    997 forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
    998 T * new(Params p) {
    999   return malloc(){ p }; // construct result of malloc
    1000 }
    1001 Array * x = new(1, 2, 3);
    1002 \end{cfacode}
    1003 The @new@ function provides the combination of type-safe @malloc@ with a constructor call, so that it becomes impossible to forget to construct dynamically allocated objects.
    1004 This provides the type-safety of @new@ in \CC, without the need to specify the allocated type, thanks to return-type inference.
    1005 
    1006 In the call to @new@, @Array@ is selected to match @T@, and @Params@ is expanded to match @[int, int, int, int]@. To satisfy the assertions, a constructor with an interface compatible with @void ?{}(Array *, int, int, int)@ must exist in the current scope.
    1007 
    1008 \subsection{Implementation}
    1009 
    1010 The definition of @new@
    1011 \begin{cfacode}
    1012 forall(dtype T | sized(T)) T * malloc();
    1013 
    1014 forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
    1015 T * new(Params p) {
    1016   return malloc(){ p }; // construct result of malloc
    1017 }
    1018 \end{cfacode}
    1019 generates the following
    1020 \begin{cfacode}
    1021 void *malloc(long unsigned int _sizeof_T, long unsigned int _alignof_T);
    1022 
    1023 void *new(
    1024   void (*_adapter_)(void (*)(), void *, void *),
    1025   long unsigned int _sizeof_T,
    1026   long unsigned int _alignof_T,
    1027   long unsigned int _sizeof_Params,
    1028   long unsigned int _alignof_Params,
    1029   void (* _ctor_T)(void *, void *),
    1030   void *p
    1031 ){
    1032   void *_retval_new;
    1033   void *_tmp_cp_ret0;
    1034   void *_tmp_ctor_expr0;
    1035   _retval_new=
    1036     (_adapter_(_ctor_T,
    1037       (_tmp_ctor_expr0=(_tmp_cp_ret0=malloc(_sizeof_2tT, _alignof_2tT),
    1038         _tmp_cp_ret0)),
    1039       p),
    1040     _tmp_ctor_expr0); // ?{}
    1041   *(void **)&_tmp_cp_ret0; // ^?{}
    1042   return _retval_new;
    1043 }
    1044 \end{cfacode}
    1045 The constructor for @T@ is called indirectly through the adapter function on the result of @malloc@ and the parameter pack.
    1046 The variable that was allocated and constructed is then returned from @new@.
    1047 
    1048 A call to @new@
    1049 \begin{cfacode}
    1050 struct S { int x, y; };
    1051 void ?{}(S *, int, int);
    1052 
    1053 S * s = new(3, 4);
    1054 \end{cfacode}
    1055 generates the following
    1056 \begin{cfacode}
    1057 struct _tuple2_ {  // _tuple2_(T0, T1)
    1058   void *field_0;
    1059   void *field_1;
    1060 };
    1061 struct _conc__tuple2_0 {  // _tuple2_(int, int)
    1062   int field_0;
    1063   int field_1;
    1064 };
    1065 struct _conc__tuple2_0 _tmp_cp1;  // tuple argument to new
    1066 struct S *_tmp_cp_ret1;           // return value from new
    1067 void _thunk0(  // ?{}(S *, [int, int])
    1068   struct S *_p0,
    1069   struct _conc__tuple2_0 _p1
    1070 ){
    1071   _ctor_S(_p0, _p1.field_0, _p1.field_1);  // restructure tuple parameter
    1072 }
    1073 void _adapter(void (*_adaptee)(), void *_p0, void *_p1){
    1074   // apply adaptee to arguments after casting to actual types
    1075   ((void (*)(struct S *, struct _conc__tuple2_0))_adaptee)(
    1076     _p0,
    1077     *(struct _conc__tuple2_0 *)_p1
    1078   );
    1079 }
    1080 struct S *s = (struct S *)(_tmp_cp_ret1=
    1081   new(
    1082     _adapter,
    1083     sizeof(struct S),
    1084     __alignof__(struct S),
    1085     sizeof(struct _conc__tuple2_0),
    1086     __alignof__(struct _conc__tuple2_0),
    1087     (void (*)(void *, void *))&_thunk0,
    1088     (({ // copy construct tuple argument to new
    1089       int *__multassign_L0 = (int *)&_tmp_cp1.field_0;
    1090       int *__multassign_L1 = (int *)&_tmp_cp1.field_1;
    1091       int __multassign_R0 = 3;
    1092       int __multassign_R1 = 4;
    1093       ((*__multassign_L0=__multassign_R0 /* ?{} */) ,
    1094        (*__multassign_L1=__multassign_R1 /* ?{} */));
    1095     }), &_tmp_cp1)
    1096   ), _tmp_cp_ret1);
    1097 *(struct S **)&_tmp_cp_ret1; // ^?{}  // destroy return value from new
    1098 ({  // destroy argument temporary
    1099   int *__massassign_L0 = (int *)&_tmp_cp1.field_0;
    1100   int *__massassign_L1 = (int *)&_tmp_cp1.field_1;
    1101   ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */));
    1102 });
    1103 \end{cfacode}
    1104 Of note, @_thunk0@ is generated to translate calls to @?{}(S *, [int, int])@ into calls to @?{}(S *, int, int)@.
    1105 The call to @new@ constructs a tuple argument using the supplied arguments.
    1106 
    1107 The @print@ function
    1108 \begin{cfacode}
    1109 forall(otype T, ttype Params |
    1110   { void print(T); void print(Params); })
    1111 void print(T arg, Params rest) {
    1112   print(arg);
    1113   print(rest);
    1114 }
    1115 \end{cfacode}
    1116 generates
    1117 \begin{cfacode}
    1118 void print_variadic(
    1119   void (*_adapterF_7tParams__P)(void (*)(), void *),
    1120   void (*_adapterF_2tT__P)(void (*)(), void *),
    1121   void (*_adapterF_P2tT2tT__MP)(void (*)(), void *, void *),
    1122   void (*_adapterF2tT_P2tT2tT_P_MP)(void (*)(), void *, void *, void *),
    1123   long unsigned int _sizeof_T,
    1124   long unsigned int _alignof_T,
    1125   long unsigned int _sizeof_Params,
    1126   long unsigned int _alignof_Params,
    1127   void *(*_assign_TT)(void *, void *),
    1128   void (*_ctor_T)(void *),
    1129   void (*_ctor_TT)(void *, void *),
    1130   void (*_dtor_T)(void *),
    1131   void (*print_T)(void *),
    1132   void (*print_Params)(void *),
    1133   void *arg,
    1134   void *rest
    1135 ){
    1136   void *_tmp_cp0 = __builtin_alloca(_sizeof_T);
    1137   _adapterF_2tT__P(  // print(arg)
    1138     ((void (*)())print_T),
    1139     (_adapterF_P2tT2tT__MP( // copy construct argument
    1140       ((void (*)())_ctor_TT),
    1141       _tmp_cp0,
    1142       arg
    1143     ), _tmp_cp0)
    1144   );
    1145   _dtor_T(_tmp_cp0);  // destroy argument temporary
    1146   _adapterF_7tParams__P(  // print(rest)
    1147     ((void (*)())print_Params),
    1148     rest
    1149   );
    1150 }
    1151 \end{cfacode}
    1152 The @print_T@ routine is called indirectly through an adapter function with a copy constructed argument, followed by an indirect call to @print_Params@.
    1153 
    1154 A call to print
    1155 \begin{cfacode}
    1156 void print(const char * x) { printf("%s", x); }
    1157 void print(int x) { printf("%d", x);  }
    1158 
    1159 print("x = ", 123, ".\n");
    1160 \end{cfacode}
    1161 generates the following
    1162 \begin{cfacode}
    1163 void print_string(const char *x){
    1164   int _tmp_cp_ret0;
    1165   (_tmp_cp_ret0=printf("%s", x)) , _tmp_cp_ret0;
    1166   *(int *)&_tmp_cp_ret0; // ^?{}
    1167 }
    1168 void print_int(int x){
    1169   int _tmp_cp_ret1;
    1170   (_tmp_cp_ret1=printf("%d", x)) , _tmp_cp_ret1;
    1171   *(int *)&_tmp_cp_ret1; // ^?{}
    1172 }
    1173 
    1174 struct _tuple2_ {  // _tuple2_(T0, T1)
    1175   void *field_0;
    1176   void *field_1;
    1177 };
    1178 struct _conc__tuple2_0 {  // _tuple2_(int, const char *)
    1179   int field_0;
    1180   const char *field_1;
    1181 };
    1182 struct _conc__tuple2_0 _tmp_cp6;  // _tuple2_(int, const char *)
    1183 const char *_thunk0(const char **_p0, const char *_p1){
    1184         // const char * ?=?(const char **, const char *)
    1185   return *_p0=_p1;
    1186 }
    1187 void _thunk1(const char **_p0){ // void ?{}(const char **)
    1188   *_p0; // ?{}
    1189 }
    1190 void _thunk2(const char **_p0, const char *_p1){
    1191         // void ?{}(const char **, const char *)
    1192   *_p0=_p1; // ?{}
    1193 }
    1194 void _thunk3(const char **_p0){ // void ^?{}(const char **)
    1195   *_p0; // ^?{}
    1196 }
    1197 void _thunk4(struct _conc__tuple2_0 _p0){ // void print([int, const char *])
    1198   struct _tuple1_ { // _tuple1_(T0)
    1199     void *field_0;
    1200   };
    1201   struct _conc__tuple1_1 { // _tuple1_(const char *)
    1202     const char *field_0;
    1203   };
    1204   void _thunk5(struct _conc__tuple1_1 _pp0){ // void print([const char *])
    1205     print_string(_pp0.field_0);  // print(rest.0)
    1206   }
    1207   void _adapter_i_pii_(void (*_adaptee)(), void *_ret, void *_p0, void *_p1){
    1208     *(int *)_ret=((int (*)(int *, int))_adaptee)(_p0, *(int *)_p1);
    1209   }
    1210   void _adapter_pii_(void (*_adaptee)(), void *_p0, void *_p1){
    1211     ((void (*)(int *, int ))_adaptee)(_p0, *(int *)_p1);
    1212   }
    1213   void _adapter_i_(void (*_adaptee)(), void *_p0){
    1214     ((void (*)(int))_adaptee)(*(int *)_p0);
    1215   }
    1216   void _adapter_tuple1_5_(void (*_adaptee)(), void *_p0){
    1217     ((void (*)(struct _conc__tuple1_1 ))_adaptee)(*(struct _conc__tuple1_1 *)_p0);
    1218   }
    1219   print_variadic(
    1220     _adapter_tuple1_5,
    1221     _adapter_i_,
    1222     _adapter_pii_,
    1223     _adapter_i_pii_,
    1224     sizeof(int),
    1225     __alignof__(int),
    1226     sizeof(struct _conc__tuple1_1),
    1227     __alignof__(struct _conc__tuple1_1),
    1228     (void *(*)(void *, void *))_assign_i,     // int ?=?(int *, int)
    1229     (void (*)(void *))_ctor_i,                // void ?{}(int *)
    1230     (void (*)(void *, void *))_ctor_ii,       // void ?{}(int *, int)
    1231     (void (*)(void *))_dtor_ii,               // void ^?{}(int *)
    1232     (void (*)(void *))print_int,              // void print(int)
    1233     (void (*)(void *))&_thunk5,               // void print([const char *])
    1234     &_p0.field_0,                             // rest.0
    1235     &(struct _conc__tuple1_1 ){ _p0.field_1 } // [rest.1]
    1236   );
    1237 }
    1238 struct _tuple1_ {  // _tuple1_(T0)
    1239   void *field_0;
    1240 };
    1241 struct _conc__tuple1_6 {  // _tuple_1(const char *)
    1242   const char *field_0;
    1243 };
    1244 const char *_temp0;
    1245 _temp0="x = ";
    1246 void _adapter_pstring_pstring_string(
    1247   void (*_adaptee)(),
    1248   void *_ret,
    1249   void *_p0,
    1250   void *_p1
    1251 ){
    1252   *(const char **)_ret=
    1253     ((const char *(*)(const char **, const char *))_adaptee)(
    1254       _p0,
    1255       *(const char **)_p1
    1256     );
    1257 }
    1258 void _adapter_pstring_string(void (*_adaptee)(), void *_p0, void *_p1){
    1259   ((void (*)(const char **, const char *))_adaptee)(_p0, *(const char **)_p1);
    1260 }
    1261 void _adapter_string_(void (*_adaptee)(), void *_p0){
    1262   ((void (*)(const char *))_adaptee)(*(const char **)_p0);
    1263 }
    1264 void _adapter_tuple2_0_(void (*_adaptee)(), void *_p0){
    1265   ((void (*)(struct _conc__tuple2_0 ))_adaptee)(*(struct _conc__tuple2_0 *)_p0);
    1266 }
    1267 print_variadic(
    1268   _adapter_tuple2_0_,
    1269   _adapter_string_,
    1270   _adapter_pstring_string_,
    1271   _adapter_pstring_pstring_string_,
    1272   sizeof(const char *),
    1273   __alignof__(const char *),
    1274   sizeof(struct _conc__tuple2_0 ),
    1275   __alignof__(struct _conc__tuple2_0 ),
    1276   (void *(*)(void *, void *))&_thunk0, // const char * ?=?(const char **, const char *)
    1277   (void (*)(void *))&_thunk1,          // void ?{}(const char **)
    1278   (void (*)(void *, void *))&_thunk2,  // void ?{}(const char **, const char *)
    1279   (void (*)(void *))&_thunk3,          // void ^?{}(const char **)
    1280   (void (*)(void *))print_string,      // void print(const char *)
    1281   (void (*)(void *))&_thunk4,          // void print([int, const char *])
    1282   &_temp0,                             // "x = "
    1283   (({  // copy construct tuple argument to print
    1284     int *__multassign_L0 = (int *)&_tmp_cp6.field_0;
    1285     const char **__multassign_L1 = (const char **)&_tmp_cp6.field_1;
    1286     int __multassign_R0 = 123;
    1287     const char *__multassign_R1 = ".\n";
    1288     ((*__multassign_L0=__multassign_R0 /* ?{} */),
    1289      (*__multassign_L1=__multassign_R1 /* ?{} */));
    1290   }), &_tmp_cp6)                        // [123, ".\n"]
    1291 );
    1292 ({  // destroy argument temporary
    1293   int *__massassign_L0 = (int *)&_tmp_cp6.field_0;
    1294   const char **__massassign_L1 = (const char **)&_tmp_cp6.field_1;
    1295   ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */));
    1296 });
    1297 \end{cfacode}
    1298 The type @_tuple2_@ is generated to allow passing the @rest@ argument to @print_variadic@.
    1299 Thunks 0 through 3 provide wrappers for the @otype@ parameters for @const char *@, while @_thunk4@ translates a call to @print([int, const char *])@ into a call to @print_variadic(int, [const char *])@.
    1300 This all builds to a call to @print_variadic@, with the appropriate copy construction of the tuple argument.
    1301 
    1302 \section{Future Work}