# Changeset 7493339 for doc/rob_thesis/tuples.tex

Timestamp:
Apr 3, 2017, 7:04:30 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Parents:
ae6cc8b
Message:

incorporate Peter's feedback, handle many TODOs

File:
1 edited

## doc/rob_thesis/tuples.tex

\section{Introduction}
% TODO: named return values are not currently implemented in CFA - tie in with named tuples? (future work)
% TODO: no passing input parameters by assignment, instead will have reference types => this is not a very C-like model and greatly complicates syntax for likely little gain (and would cause confusion with already supported return-by-reference)
% TODO: tuples are allowed in expressions, exact meaning is defined by operator overloading (e.g. can add tuples). An important caveat to note is that it is currently impossible to allow adding two triples but prevent adding a pair with a quadruple (single flattening/structuring conversions are implicit, only total number of components matters). May be able to solve this with more nuanced conversion rules (future work)
% TODO: benefits (conclusion) by Till: reduced number of variables and statements; no specified order of execution for multiple assignment (more optimization freedom); can store parameter lists in variable; MRV routines (natural code); more convenient assignment statements; simple and efficient access of record fields; named return values more legible and efficient in use of storage
\begin{cfacode}
const char * str = "hello world";
char ch;                            // pre-allocate return value
int freq = most_frequent(str, &ch); // pass return value as out parameter
printf("%s -- %d %c\n", str, freq, ch);
\end{cfacode}
Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values, which complicates the call site with a sequence of variable declarations leading up to the call.
Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the routine signature whether the callee expects such a parameter to be initialized before the call.
Furthermore, while many C routines that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, there are many C routines that are not null-safe.
}
\end{cfacode}
This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type, which precludes the bug seen with out parameters.
The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site.
In this case, there is only one option for a function named @most_frequent@ that takes a string as input.
This function returns two values, one @int@ and one @char@.
There are four options for a function named @process@, but only two that accept two arguments, and of those the best match is (3), which is also an exact match.
This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively.
The previous expression has 3 \emph{components}.
Each component in a tuple expression can be any \CFA expression, including another tuple expression.
% TODO: Tuple expressions can appear anywhere that any other expression can appear (...?)
The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization.
It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used.
Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}.
% TODO: does this statement still apply, and if so to what extent?
%   Tuples are a compile-time phenomenon and have little to no run-time presence.
\subsection{Tuple Variables}
These variables can be used in any of the contexts where a tuple expression is allowed, such as in the @printf@ function call.
As in the @process@ example, the components of the tuple value are passed as separate parameters to @printf@, allowing very simple printing of tuple expressions.
One way to access the individual components is with a simple assignment, as in previous examples.
\begin{cfacode}
int freq;
\label{s:TupleAssignment}
An assignment where the left side of the assignment operator has a tuple type is called tuple assignment.
There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{Multiple} and \emph{Mass} Assignment, respectively.
\begin{cfacode}
int x;
A mass assignment assigns the value $R$ to each $L_i$.
For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression.
These semantics differ from C cascading assignment (e.g. @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@.
On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well.
These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted.
These semantics are a change from the original tuple design in \KWC \cite{Till89}, wherein tuple assignment was a statement that allows cascading assignments as a special case.
The \KWC semantics fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position.
% TODO: remove??
While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility.
Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user.
void ?{}(S *, S);      // (4)
[S, S] x = [3, 6.28];  // uses (2), (3), specialized constructors
[S, S] y;              // uses (1), (1), default constructor
[S, S] z = x.0;        // uses (4), (4), copy constructor
\end{cfacode}
In this example, @x@ is initialized by the multiple constructor calls @?{}(&x.0, 3)@ and @?{}(&x.1, 6.28)@, while @y@ is initialized by two default constructor calls @?{}(&y.0)@ and @?{}(&y.1)@.
S s = t;
\end{cfacode}
The initialization of @s@ with @t@ works by default because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.
\section{Member-Access Tuple Expression}
Then the type of @a.[x, y, z]@ is @[T_x, T_y, T_z]@.
Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (e.g., rearrange components, drop components, duplicate components, etc.).
\begin{cfacode}
[int, int, long, double] x;
Since \CFA permits these tuple-access expressions using structures, unions, and tuples, \emph{member tuple expression} or \emph{field tuple expression} is more appropriate.
It is possible to extend member-access expressions further.
Currently, a member-access expression whose member is a name requires that the aggregate is a structure or union, while a constant integer member requires the aggregate to be a tuple.
In the interest of orthogonal design, \CFA could apply some meaning to the remaining combinations as well.
One benefit of this interpretation is familiarity, since it is extremely reminiscent of tuple-index expressions.
On the other hand, it could be argued that this interpretation is brittle, in that changing the order of members or adding new members to a structure becomes a breaking operation.
This problem is less of a concern with tuples, since modifying a tuple affects only the code that directly uses the tuple, whereas modifying a structure has far-reaching consequences for every instance of the structure.
As for @z.y@, one interpretation is to extend the meaning of member tuple expressions.
That is, currently the tuple must occur as the member, i.e. to the right of the dot.
Allowing tuples to the left of the dot could distribute the member across the elements of the tuple, in much the same way that member tuple expressions distribute the aggregate across the member tuple.
In this example, @z.y@ expands to @[z.0.y, z.1.y]@, allowing what is effectively a very limited compile-time field-sections map operation, where the argument must be a tuple containing only aggregates having a member named @y@.
It is questionable how useful this would actually be in practice, since structures often do not have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions to maximize the amount of overlap between different types.
Perhaps more useful would be to allow arrays on the left side of the dot, which would likewise allow mapping a field access across the entire array, producing an array of the contained fields.
The immediate problem with this idea is that C arrays do not carry around their size, which would make it impossible to use this extension for anything other than a simple stack-allocated array.
Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member-access expressions versus member-tuple expressions.
\begin{cfacode}
struct { int x, y; };
\end{cfacode}
Depending on exactly how the two tuples are combined, different results can be achieved.
As such, a specific ordering would need to be imposed to make this feature useful.
Furthermore, this addition moves a member-tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple.
A second possibility is for \CFA to have named tuples, as they exist in Swift and D.
\begin{cfacode}
typedef [int x, int y] Point2D;
Point2D p1, p2;
p1.x + p1.y + p2.x + p2.y;
p1.0 + p1.1 + p2.0 + p2.1;  // equivalent
\end{cfacode}
In this simpler interpretation, a named tuple type carries with it a list of possibly empty identifiers.
This approach fits naturally with the named return-value feature, and would likely go a long way towards implementing it.
Ultimately, the first two extensions introduce complexity into the model, with relatively little perceived benefit, and so were dropped from consideration.
Named tuples are a potentially useful addition to the language, provided they can be parsed with a reasonable syntax.
\section{Casting}
(int)f();  // choose (2)
\end{cfacode}
Since casting is a fundamental operation in \CFA, casts need to be given a meaningful interpretation in the context of tuples.
Taking a look at standard C provides some guidance with respect to the way casts should work with tuples.
\begin{cfacode}[numbers=left]
void g();
(void)f();  // valid, ignore results
(int)g();   // invalid, void cannot be converted to int
struct A { int x; };
(struct A)f();  // invalid
\end{cfacode}
In C, line 4 is a valid cast, which calls @f@ and discards its result.
On the other hand, line 5 is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite[p.~91]{C11}.
For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid.
Formally, a cast to tuple type is valid when $T_n \leq S_m$, where $T_n$ is the number of components in the target type and $S_m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$.
\end{cfacode}
Note that due to the implicit tuple conversions, this function is not restricted to the addition of two triples.
A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type that can bind to @T@, with a pairwise @?+?@ over @T@.
For example, these expressions also succeed and produce the same value.
\begin{cfacode}
([x.0, x.1]) + ([x.2, 10, 20, 30]);  // x + ([10, 20, 30])
x.0 + ([x.1, x.2, 10, 20, 30]);      // x + ([10, 20, 30])
\end{cfacode}
This presents a potential problem if structure is important, as these three expressions look like they should have different meanings.
Furthermore, these calls can be made ambiguous by adding seemingly different functions.
\begin{cfacode}
forall(otype T | { T ?+?(T, T); })
\end{cfacode}
It is also important to note that these calls could be disambiguated if the function return types were different, as they likely would be for a reasonable implementation of @?+?@, since the return type is used in overload resolution.
Still, these semantics are a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate.
These issues could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions.
Care would be needed in this case to ensure that exact matches do not incur such a cost.
\begin{cfacode}
\end{cfacode}
Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e., no implicit conversions are applied to assertion arguments).
This decision presents a conflict with the flexibility of tuples.
\subsection{Assertion Inference}
In the call to @f@, the second and third argument components are structured into a tuple argument.
Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion.
Each unique expression is assigned an identifier and is guaranteed to be executed exactly once.
\begin{cfacode}
g(h());
\end{cfacode}
Internally, this is converted to pseudo-\CFA
\begin{cfacode}
void g(int, double);
[int, double] h();
lazy [int, double] unq<0> = h();
g(unq<0>.0, unq<0>.1);
\end{cfacode}
That is, the function @h@ is evaluated lazily and its result is stored for subsequent accesses.
Ultimately, unique expressions are converted into two variables and an expression.
\begin{cfacode}
[int, double] _unq0;
g(
  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0,
  (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1,
);
\end{cfacode}
Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression.
Currently, the \CFA translator has a very broad, imprecise definition of impurity (side-effects), where any function call is assumed to be impure.
This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions, and could analyze function bodies, when they are available, to recursively detect impurity, to eliminate some unique expressions.
It is possible that lazy evaluation could be exposed to the user through a lazy keyword with little additional effort.
Tuple member expressions are recursively expanded into a list of member-access expressions.
\begin{cfacode}
x.[0, 1.[0, 2]];
\end{cfacode}
which becomes
\begin{cfacode}
[x.0, [x.1.0, x.1.2]];
\end{cfacode}
Tuple-member expressions also take advantage of unique expressions in the case of possible impurity.
Finally, the various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
});
\end{cfacode}
A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, e.g., in a unique expression.
$N$ LHS variables are generated and constructed using the address of the tuple components, and a single RHS variable is generated to store the value of the RHS without any loss of precision.
A nested statement expression is generated that performs the individual assignments and constructs the return value using the results of the individual assignments.
The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
There are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.
\section{Variadic Functions}
% TODO: should this maybe be its own chapter?
C provides variadic functions through the manipulation of @va_list@ objects.
A variadic function is one that contains at least one parameter, followed by @...@ as the last token in the parameter list.
In particular, some form of \emph{argument descriptor} is needed to inform the function of the number of arguments and their types.
Two common argument descriptors are format strings and counter parameters.
It is important to note that both of these mechanisms are inherently redundant, because they require the user to specify information that the compiler knows explicitly.
This required repetition is error prone, because it is easy for the user to add or remove arguments without updating the argument descriptor.
In addition, C requires the programmer to hard code all of the possible expected types.
As a result, it is cumbersome to write a function that is open to extension.
For example, a simple function that sums $N$ @int@s:
\begin{cfacode}
int sum(int N, ...) {
  va_list args;
  va_start(args, N);
  int ret = 0;
  while(N) {
    ret += va_arg(args, int);  // have to specify type
    N--;
  }
  va_end(args);
  return ret;
}
sum(3, 10, 20, 30);  // need to keep counter in sync
\end{cfacode}
The @va_list@ type is a special C data type that abstracts variadic argument manipulation.
The @va_start@ macro initializes a @va_list@, given the last named parameter.
Each use of the @va_arg@ macro allows access to the next variadic argument, given a type.
Since the function signature does not provide any information on what types can be passed to a variadic function, the compiler does not perform any error checks on a variadic call.
As such, it is possible to pass any value to the @sum@ function, including pointers, floating-point numbers, and structures.
In the case where the provided type is not compatible with the argument's actual type after default argument promotions, or if too many arguments are accessed, the behaviour is undefined \cite{C11}.
Furthermore, there is no way to perform the necessary error checks in the @sum@ function at run-time, since type information is not carried into the function body.
Since they rely on programmer convention rather than compile-time checks, variadic functions are generally unsafe.
In practice, compilers can provide warnings to help mitigate some of the problems.
For example, GCC provides the @format@ attribute to specify that a function uses a format string, which allows the compiler to perform some checks related to the standard format specifiers.
Unfortunately, this does not permit extensions to the format string syntax, so a programmer cannot extend the attribute to warn for mismatches with custom types.
Needless to say, C's variadic functions are a deficient language feature.
Two options were examined to provide better, type-safe variadic functions in \CFA.
\subsection{Whole Tuple Matching}
Option 1 is to change the argument matching algorithm, so that type parameters can match whole tuples, rather than just their components.
This option could be implemented with two phases of argument matching when a function contains type parameters and the argument list contains tuple arguments.
If flattening and structuring fail to produce a match, a second attempt at matching the function and argument combination is made where tuple arguments are not expanded and structure must match exactly, modulo non-tuple implicit conversions.
For example:
\begin{cfacode}
forall(otype T, otype U | { T g(U); })
void f(T, U);
[int, int] g([int, int, int, int]);
f([1, 2], [3, 4, 5, 6]);
\end{cfacode}
With flattening and structuring, the call is first transformed into @f(1, 2, 3, 4, 5, 6)@.
Since the first argument of type @T@ does not have a tuple type, unification decides that @T=int@ and @1@ is matched as the first parameter.
Likewise, @U@ does not have a tuple type, so @U=int@ and @2@ is accepted as the second parameter.
There are now no remaining formal parameters, but there are remaining arguments and the function is not variadic, so the match fails.
With the addition of an exact matching attempt, @T=[int,int]@ and @U=[int,int,int,int]@ and so the arguments type check.
Likewise, when inferring assertion @g@, an exact match is found.
This approach is strict with respect to argument structure by nature, which makes it syntactically awkward to use in ways that the existing tuple design is not.
For example, consider a @new@ function that allocates memory using @malloc@ and constructs the result, using arbitrary arguments.
\begin{cfacode}
struct Array;
void ?{}(Array *, int, int, int);
forall(dtype T, otype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p };
}
Array(int) * x = new([1, 2, 3]);
\end{cfacode}
The call to @new@ is not particularly appealing, since it requires the use of square brackets at the call-site, which is not required in any other function call.
This shifts the burden from the compiler to the programmer, which is almost always wrong, and creates an odd inconsistency within the language.
Similarly, in order to pass 0 variadic arguments, an explicit empty tuple must be passed into the argument list, otherwise the exact matching rule would not have an argument to bind against.
It should be otherwise noted that the addition of an exact matching rule only affects the outcome for polymorphic type binding when tuples are involved.
For non-tuple arguments, exact matching and flattening \& structuring are equivalent.
For tuple arguments to a function without polymorphic formal parameters, flattening and structuring work whenever an exact match would have worked, since the tuple is flattened and implicitly restructured to its original structure.
Thus there is nothing to be gained from permitting the exact matching rule to take effect when a function does not contain polymorphism and none of the arguments are tuples.
Overall, this option takes a step in the right direction, but is contrary to the flexibility of the existing tuple design.
\subsection{A New Typeclass}
A second option is the addition of another kind of type parameter, @ttype@.
Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
In a given parameter list, there should be at most one @ttype@ parameter, and it must occur last, since otherwise the call can never resolve, given the previous rule.
% TODO: a similar rule exists in C/C++ for "..."
This idea essentially matches normal variadic semantics, with a strong feeling of similarity to \CCeleven variadic templates.
As such, @ttype@ variables are also referred to as argument packs.
This is the option that has been added to \CFA.
Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is through recursion.
Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful.
Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.
For example, a simple translation of the C sum function using @ttype@ would look like
\begin{cfacode}
int sum(){ return 0; }        // (0)
forall(ttype Params | { int sum(Params); })
int sum(int x, Params rest) { // (1)
  return x+sum(rest);
}
sum(10, 20, 30);
\end{cfacode}
Since (0) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
In order to call (1), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list, so @Params@ is bound to @[20, 30]@.
To finish the resolution of @sum@, an assertion parameter that matches @int sum(int, int)@ is required.
Like in the previous iteration, (0) is not a valid candidate, so (1) is examined with @Params@ bound to @[int]@, requiring the assertion @int sum(int)@.
Next, (0) fails, and to satisfy (1) @Params@ is bound to @[]@, requiring an assertion @int sum()@.
Finally, (0) matches and (1) fails, which terminates the recursion.
Effectively, this traces as @sum(10, 20, 30)@ $\rightarrow$ @10+sum(20, 30)@ $\rightarrow$ @10+(20+sum(30))@ $\rightarrow$ @10+(20+(30+sum()))@ $\rightarrow$ @10+(20+(30+0))@.
A point of note is that this version does not require any form of argument descriptor, since the \CFA type system keeps track of all of these details.
It might be reasonable to take the @sum@ function a step further to enforce a minimum number of arguments, which can be done simply
\begin{cfacode}
int sum(int x, int y){ return x+y; }
forall(ttype Params | { int sum(int, Params); })
int sum(int x, int y, Params rest) {
  return sum(x+y, rest);
}
sum(10, 20, 30);
\end{cfacode}
One more iteration permits the summation of any summable type, as long as all arguments are the same type.
\begin{cfacode}
trait summable(otype T) {
  T ?+?(T, T);
};
forall(otype R | summable(R))
R sum(R x, R y){ return x+y; }
forall(otype R, ttype Params | summable(R) | { R sum(R, Params); })
R sum(R x, R y, Params rest) {
  return sum(x+y, rest);
}
sum(3, 10, 20, 30);
\end{cfacode}
Unlike C, it is not necessary to hard code the expected type.
This is naturally open to extension, in that any user-defined type with a @?+?@ operator is automatically able to be used with the @sum@ function.
That is to say, the programmer who writes @sum@ does not need full program knowledge of every possible data type, unlike what is necessary to write an equivalent function using the standard C mechanisms.

Going one last step, it is possible to achieve full generality in \CFA, allowing the summation of arbitrary lists of summable types.
\begin{cfacode}
trait summable(otype T1, otype T2, otype R) {
  R ?+?(T1, T2);
};
forall(otype T1, otype T2, otype R | summable(T1, T2, R))
R sum(T1 x, T2 y) {
  return x+y;
}
forall(otype T1, otype T2, otype T3, ttype Params, otype R | summable(T1, T2, T3) | { R sum(T3, Params); })
R sum(T1 x, T2 y, Params rest ) {
  return sum(x+y, rest);
}
sum(3, 10.5, 20, 30.3);
\end{cfacode}
The \CFA translator requires adding explicit @double ?+?(int, double)@ and @double ?+?(double, int)@ functions for this call to work, since implicit conversions are not supported for assertions.

C variadic syntax and @ttype@ polymorphism probably should not be mixed, since it is not clear where to draw the line to decide which arguments belong where. Furthermore, it might be desirable to disallow polymorphic functions from using C variadic syntax, to encourage a Cforall style. Aside from calling C variadic functions, it is not obvious that there is anything that can be done with C variadics that could not also be done with @ttype@ parameters.

Variadic templates in \CC require an ellipsis token to express that a parameter is a parameter pack and to expand a parameter pack. \CFA does not need an ellipsis in either case, since the type class @ttype@ is only used for variadics. An alternative design could have used an ellipsis combined with an existing type class. This approach was not taken because the largest benefit of the ellipsis token in \CC is the ability to expand a parameter pack within an expression, e.g. in fold expressions; this requires compile-time knowledge of the structure of the parameter pack, which is not available in \CFA.
\begin{cppcode}
template<typename... Args>
void f(Args &... args) {
  g(&args...);  // expand to addresses of pack elements
}
\end{cppcode}
As such, the addition of an ellipsis token would be purely an aesthetic change in \CFA today.

It is possible to write a type-safe variadic print routine, which can replace @printf@
\begin{cfacode}
struct S { int x, y; };
forall(otype T, ttype Params | { void print(T); void print(Params); })
void print(T arg, Params rest) {
  print(arg);
  print(rest);
}
void print(char * x) { printf("%s", x); }
void print(int x) { printf("%d", x);  }
void print(S s) { print("{ ", s.x, ",", s.y, " }"); }
print("s = ", (S){ 1, 2 }, "\n");
\end{cfacode}
This example routine showcases a variadic-template-like decomposition of the provided argument list. The individual @print@ routines allow printing a single element of a type. The polymorphic @print@ allows printing any list of types, as long as each individual type has a @print@ function. The individual print functions can be used to build up more complicated @print@ routines, such as the one for @S@, which is something that cannot be done with @printf@ in C.

It is also possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions. For example, it is possible to write @new@ as a library function, combining type-safe @malloc@ with constructor calls.
\begin{cfacode}
struct Array;
void ?{}(Array *, int, int, int);

forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p }; // construct result of malloc
}

Array * x = new(1, 2, 3);
\end{cfacode}
The @new@ function provides the combination of type-safe @malloc@ with a constructor call, so that it becomes impossible to forget to construct dynamically allocated objects. This provides the type-safety of @new@ in \CC, without the need to specify the allocated type, thanks to return-type inference. In the call to @new@, @Array@ is selected to match @T@, and @Params@ is expanded to match @[int, int, int]@.
To satisfy the assertions, a constructor with an interface compatible with @void ?{}(Array *, int, int, int)@ must exist in the current scope.

\subsection{Implementation}
The definition of @new@
\begin{cfacode}
forall(dtype T | sized(T)) T * malloc();

forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p }; // construct result of malloc
}
\end{cfacode}
generates the following
\begin{cfacode}
void *malloc(long unsigned int _sizeof_T, long unsigned int _alignof_T);

void *new(
  void (*_adapter_)(void (*)(), void *, void *),
  long unsigned int _sizeof_T,
  long unsigned int _alignof_T,
  long unsigned int _sizeof_Params,
  long unsigned int _alignof_Params,
  void (* _ctor_T)(void *, void *),
  void *p
){
  void *_retval_new;
  void *_tmp_cp_ret0;
  void *_tmp_ctor_expr0;
  _retval_new=
    (_adapter_(_ctor_T,
      (_tmp_ctor_expr0=(_tmp_cp_ret0=malloc(_sizeof_T, _alignof_T),
        _tmp_cp_ret0)),
      p), _tmp_ctor_expr0); // ?{}
  *(void **)&_tmp_cp_ret0; // ^?{}
  return _retval_new;
}
\end{cfacode}
The constructor for @T@ is called indirectly through the adapter function on the result of @malloc@ and the parameter pack. The variable that was allocated and constructed is then returned from @new@.
A call to @new@
\begin{cfacode}
struct S { int x, y; };
void ?{}(S *, int, int);
S * s = new(3, 4);
\end{cfacode}
generates the following
\begin{cfacode}
struct _tuple2_ {  // _tuple2_(T0, T1)
  void *field_0;
  void *field_1;
};
struct _conc__tuple2_0 {  // _tuple2_(int, int)
  int field_0;
  int field_1;
};
struct _conc__tuple2_0 _tmp_cp1;  // tuple argument to new
struct S *_tmp_cp_ret1;           // return value from new
void _thunk0(  // ?{}(S *, [int, int])
  struct S *_p0,
  struct _conc__tuple2_0 _p1
){
  _ctor_S(_p0, _p1.field_0, _p1.field_1);  // restructure tuple parameter
}
void _adapter(void (*_adaptee)(), void *_p0, void *_p1){
  // apply adaptee to arguments after casting to actual types
  ((void (*)(struct S *, struct _conc__tuple2_0))_adaptee)(
    _p0,
    *(struct _conc__tuple2_0 *)_p1
  );
}
struct S *s = (struct S *)(_tmp_cp_ret1=
  new(
    _adapter,
    sizeof(struct S),
    __alignof__(struct S),
    sizeof(struct _conc__tuple2_0),
    __alignof__(struct _conc__tuple2_0),
    (void (*)(void *, void *))&_thunk0,
    (({ // copy construct tuple argument to new
      int *__multassign_L0 = (int *)&_tmp_cp1.field_0;
      int *__multassign_L1 = (int *)&_tmp_cp1.field_1;
      int __multassign_R0 = 3;
      int __multassign_R1 = 4;
      ((*__multassign_L0=__multassign_R0 /* ?{} */) ,
       (*__multassign_L1=__multassign_R1 /* ?{} */));
    }), &_tmp_cp1)
  ), _tmp_cp_ret1);
*(struct S **)&_tmp_cp_ret1; // ^?{}  // destroy return value from new
({  // destroy argument temporary
  int *__massassign_L0 = (int *)&_tmp_cp1.field_0;
  int *__massassign_L1 = (int *)&_tmp_cp1.field_1;
  ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */));
});
\end{cfacode}
Of note, @_thunk0@ is generated to translate calls to @?{}(S *, [int, int])@ into calls to @?{}(S *, int, int)@. The call to @new@ constructs a tuple argument using the supplied arguments.
The @print@ function
\begin{cfacode}
forall(otype T, ttype Params | { void print(T); void print(Params); })
void print(T arg, Params rest) {
  print(arg);
  print(rest);
}
\end{cfacode}
generates
\begin{cfacode}
void print_variadic(
  void (*_adapterF_7tParams__P)(void (*)(), void *),
  void (*_adapterF_2tT__P)(void (*)(), void *),
  void (*_adapterF_P2tT2tT__MP)(void (*)(), void *, void *),
  void (*_adapterF2tT_P2tT2tT_P_MP)(void (*)(), void *, void *, void *),
  long unsigned int _sizeof_T,
  long unsigned int _alignof_T,
  long unsigned int _sizeof_Params,
  long unsigned int _alignof_Params,
  void *(*_assign_TT)(void *, void *),
  void (*_ctor_T)(void *),
  void (*_ctor_TT)(void *, void *),
  void (*_dtor_T)(void *),
  void (*print_T)(void *),
  void (*print_Params)(void *),
  void *arg,
  void *rest
){
  void *_tmp_cp0 = __builtin_alloca(_sizeof_T);
  _adapterF_2tT__P(  // print(arg)
    ((void (*)())print_T),
    (_adapterF_P2tT2tT__MP( // copy construct argument
      ((void (*)())_ctor_TT),
      _tmp_cp0,
      arg
    ), _tmp_cp0)
  );
  _dtor_T(_tmp_cp0);  // destroy argument temporary
  _adapterF_7tParams__P(  // print(rest)
    ((void (*)())print_Params),
    rest
  );
}
\end{cfacode}
The @print_T@ routine is called indirectly through an adapter function with a copy constructed argument, followed by an indirect call to @print_Params@.
A call to @print@
\begin{cfacode}
void print(const char * x) { printf("%s", x); }
void print(int x) { printf("%d", x);  }
print("x = ", 123, ".\n");
\end{cfacode}
generates the following
\begin{cfacode}
void print_string(const char *x){
  int _tmp_cp_ret0;
  (_tmp_cp_ret0=printf("%s", x)) , _tmp_cp_ret0;
  *(int *)&_tmp_cp_ret0; // ^?{}
}
void print_int(int x){
  int _tmp_cp_ret1;
  (_tmp_cp_ret1=printf("%d", x)) , _tmp_cp_ret1;
  *(int *)&_tmp_cp_ret1; // ^?{}
}
struct _tuple2_ {  // _tuple2_(T0, T1)
  void *field_0;
  void *field_1;
};
struct _conc__tuple2_0 {  // _tuple2_(int, const char *)
  int field_0;
  const char *field_1;
};
struct _conc__tuple2_0 _tmp_cp6;  // _tuple2_(int, const char *)
const char *_thunk0(const char **_p0, const char *_p1){
  // const char * ?=?(const char **, const char *)
  return *_p0=_p1;
}
void _thunk1(const char **_p0){
  // void ?{}(const char **)
  *_p0; // ?{}
}
void _thunk2(const char **_p0, const char *_p1){
  // void ?{}(const char **, const char *)
  *_p0=_p1; // ?{}
}
void _thunk3(const char **_p0){
  // void ^?{}(const char **)
  *_p0; // ^?{}
}
void _thunk4(struct _conc__tuple2_0 _p0){
  // void print([int, const char *])
  struct _tuple1_ { // _tuple1_(T0)
    void *field_0;
  };
  struct _conc__tuple1_1 { // _tuple1_(const char *)
    const char *field_0;
  };
  void _thunk5(struct _conc__tuple1_1 _pp0){
    // void print([const char *])
    print_string(_pp0.field_0);  // print(rest.0)
  }
  void _adapter_i_pii_(void (*_adaptee)(), void *_ret, void *_p0, void *_p1){
    *(int *)_ret=((int (*)(int *, int))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_pii_(void (*_adaptee)(), void *_p0, void *_p1){
    ((void (*)(int *, int ))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_i_(void (*_adaptee)(), void *_p0){
    ((void (*)(int))_adaptee)(*(int *)_p0);
  }
  void _adapter_tuple1_5_(void (*_adaptee)(), void *_p0){
    ((void (*)(struct _conc__tuple1_1 ))_adaptee)(*(struct _conc__tuple1_1 *)_p0);
  }
  print_variadic(
    _adapter_tuple1_5_,
    _adapter_i_,
    _adapter_pii_,
    _adapter_i_pii_,
    sizeof(int),
    __alignof__(int),
    sizeof(struct _conc__tuple1_1),
    __alignof__(struct _conc__tuple1_1),
    (void *(*)(void *, void *))_assign_i,     // int ?=?(int *, int)
    (void (*)(void *))_ctor_i,                // void ?{}(int *)
    (void (*)(void *, void *))_ctor_ii,       // void ?{}(int *, int)
    (void (*)(void *))_dtor_ii,               // void ^?{}(int *)
    (void (*)(void *))print_int,              // void print(int)
    (void (*)(void *))&_thunk5,               // void print([const char *])
    &_p0.field_0,                             // rest.0
    &(struct _conc__tuple1_1 ){ _p0.field_1 } // [rest.1]
  );
}
struct _tuple1_ {  // _tuple1_(T0)
  void *field_0;
};
struct _conc__tuple1_6 {  // _tuple1_(const char *)
  const char *field_0;
};
const char *_temp0;
_temp0="x = ";
void _adapter_pstring_pstring_string_(
  void (*_adaptee)(),
  void *_ret,
  void *_p0,
  void *_p1
){
  *(const char **)_ret=
    ((const char *(*)(const char **, const char *))_adaptee)(
      _p0,
      *(const char **)_p1
    );
}
void _adapter_pstring_string_(void (*_adaptee)(), void *_p0, void *_p1){
  ((void (*)(const char **, const char *))_adaptee)(_p0, *(const char **)_p1);
}
void _adapter_string_(void (*_adaptee)(), void *_p0){
  ((void (*)(const char *))_adaptee)(*(const char **)_p0);
}
void _adapter_tuple2_0_(void (*_adaptee)(), void *_p0){
  ((void (*)(struct _conc__tuple2_0 ))_adaptee)(*(struct _conc__tuple2_0 *)_p0);
}
print_variadic(
  _adapter_tuple2_0_,
  _adapter_string_,
  _adapter_pstring_string_,
  _adapter_pstring_pstring_string_,
  sizeof(const char *),
  __alignof__(const char *),
  sizeof(struct _conc__tuple2_0 ),
  __alignof__(struct _conc__tuple2_0 ),
  (void *(*)(void *, void *))&_thunk0, // const char * ?=?(const char **, const char *)
  (void (*)(void *))&_thunk1,          // void ?{}(const char **)
  (void (*)(void *, void *))&_thunk2,  // void ?{}(const char **, const char *)
  (void (*)(void *))&_thunk3,          // void ^?{}(const char **)
  (void (*)(void *))print_string,      // void print(const char *)
  (void (*)(void *))&_thunk4,          // void print([int, const char *])
  &_temp0,                             // "x = "
  (({  // copy construct tuple argument to print
    int *__multassign_L0 = (int *)&_tmp_cp6.field_0;
    const char **__multassign_L1 = (const char **)&_tmp_cp6.field_1;
    int __multassign_R0 = 123;
    const char *__multassign_R1 = ".\n";
    ((*__multassign_L0=__multassign_R0 /* ?{} */),
     (*__multassign_L1=__multassign_R1 /* ?{} */));
  }), &_tmp_cp6)                        // [123, ".\n"]
);
({  // destroy argument temporary
  int *__massassign_L0 = (int *)&_tmp_cp6.field_0;
  const char **__massassign_L1 = (const char **)&_tmp_cp6.field_1;
  ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */));
});
\end{cfacode}
The type @_tuple2_@ is generated to allow passing the @rest@ argument to @print_variadic@. Thunks 0 through 3 provide wrappers for the implicit @otype@ routines (assignment, constructors, and destructor) of @const char *@, while @_thunk4@ translates a call to @print([int, const char *])@ into a call to @print_variadic(int, [const char *])@. All of this builds up to a call to @print_variadic@, with the appropriate copy construction of the tuple argument.

\section{Future Work}