# Changeset f674479

Timestamp:
Apr 11, 2017, 4:20:51 PM (4 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
da1c772
Parents:
a0fc78a (diff), 32bcef7 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg2:software/cfa/cfa-cc

Files:
16 edited

• doc/generic_types/generic_types.tex

 ra0fc78a
\end{lstlisting}
Within the block, the nested version of @<@ performs @>@ and this local version overrides the built-in @<@ so it is passed to @qsort@.
Hence, programmers can easily form local environments, adding and modifying appropriate functions, to maximize reuse of other existing functions and types.
Finally, \CFA allows variable overloading:
• doc/rob_thesis/cfa-format.tex

 ra0fc78a
morecomment=[n]{/+}{+/},
morecomment=[n][\color{blue}]{/++}{+/},
% Options
sensitive=true
}
\lstdefinelanguage{rust}{
% Keywords
morekeywords=[1]{
abstract, alignof, as, become, box, break, const, continue, crate, do,
else, enum, extern, false, final, fn, for, if, impl, in, let, loop,
macro, match, mod, move, mut, offsetof, override, priv, proc, pub,
pure, ref, return, Self, self, sizeof, static, struct, super, trait,
true, type, typeof, unsafe, unsized, use, virtual, where, while, yield
},
% Strings
morestring=[b]{"},
% Comments
comment=[l]{//},
morecomment=[s]{/*}{*/},
% Options
sensitive=true
\lstset{
language = D,
style=defaultStyle,
#1
}
}{}
\lstnewenvironment{rustcode}[1][]{
\lstset{
language = rust,
style=defaultStyle,
#1

• doc/rob_thesis/ctordtor.tex

 ra0fc78a
%======================================================================
% TODO: discuss move semantics; they haven't been implemented, but could be. Currently looking at alternative models. (future work)
% TODO now: as an experiment, implement Andrei Alexandrescu's ScopeGuard http://www.drdobbs.com/cpp/generic-change-the-way-you-write-excepti/184403758?pgno=2
% doesn't seem possible to do this without allowing ttype on generic structs?
% If a Cforall constructor is in scope, C style initialization is
% disabled by default.
% * initialization rule: if any constructor is in scope for type T, try
%   to find a matching constructor for the call. If there are no
%   constructors in scope for type T, then attempt to fall back on
%   C-style initialization.
% + if this rule was not in place, it would be easy to accidentally
%   use C-style initialization in certain cases, which could lead to
%   subtle errors [2]
% - this means we need special syntax if we want to allow users to force
%   a C-style initialization (to give users more control)
% - two different declarations in the same scope can be implicitly
%   initialized differently. That is, there may be two objects of type
%   T that are initialized differently because there is a constructor
%   definition between them. This is not technically specific to
%   constructors.
% C-style initializers can be accessed with @= syntax
% + provides a way to get around the requirement of using a constructor
%   (for advanced programmers only)
% - can break invariants in the type => unsafe
% * provides a way of asserting that a variable is an instance of a
%   C struct (i.e. a POD struct), and so will not be implicitly
%   destructed (this can be useful at times, maybe mitigates the need
%   for move semantics?) [3]
% + can modernize a code base one step at a time
% Cforall constructors can be used in expressions to initialize any
% piece of memory.
% + malloc() { ... } calls the appropriate constructor on the newly
%   allocated space; the argument is moved into the constructor call
%   without taking its address [4]
% - with the above form, there is still no way to ensure that
%   dynamically allocated objects are constructed. To resolve this,
%   we might want a stronger "new" function which always calls the
%   constructor, although how we accomplish that is currently still
%   unresolved (compiler magic vs. better variadic functions?)
% + This can be used as a placement syntax [5]
% - can call the constructor on an object more than once, which could
%   cause resource leaks and reinitialize const fields (can try to
%   detect and prevent this in some cases)
%   * compiler always tries to implicitly insert a ctor/dtor pair for
%     non-@= objects.
%     * For POD objects, this will resolve to an autogenerated or
%       intrinsic function.
%     * Intrinsic functions are not automatically called. Autogenerated
%       are, because they may call a non-autogenerated function.
%     * destructors are automatically inserted at appropriate branches
%       (e.g. return, break, continue, goto) and at the end of the block
%       in which they are declared.
%   * For @= objects, the compiler never tries to interfere and insert
%     constructor and destructor calls for that object. Copy constructor
%     calls do not count, because the object is not the target of the copy
%     constructor.
% A constructor is declared with the name ?{}
% + combines the look of C initializers with the precedent of ?() being
%   the name for the function call operator
% + it is possible to easily search for all constructors in a project
%   and immediately know that a function is a constructor by seeing the
%   name "?{}"
% A destructor is declared with the name ^?{}
% + name mirrors a constructor's name, with an extra symbol to
%   distinguish it
% - the symbol '~' cannot be used due to parsing conflicts with the
%   unary '~' (bitwise negation) operator - this conflict exists because
%   we want to allow users to write ^x{}; to destruct x, rather than
%   ^?{}(&x);
% The first argument of a constructor must be a pointer. The constructed
% type is the base type of the pointer. E.g. void ?{}(T *) is a default
% constructor for a T.
% + can name the argument whatever you like, so not constrained by
%   language keyword "this" or "self", etc.
% - have to explicitly qualify all object members to initialize them
%   (e.g. this->x = 0, rather than just x = 0)
% Destructors can take arguments other than just the destructed pointer
% * open research problem: not sure how useful this is
% Pointer constructors
% + can construct separately compiled objects (opaque types) [6]
% + orthogonal design, follows directly from the definition of the first
%   argument of a constructor
% - may require copy constructor or move constructor (or equivalent)
%   for correct implementation, which may not be obvious to everyone
% + required feature for the prelude to specify as much behavior as possible
%   (similar to pointer assignment operators in this respect)
% Designations can only be used for C-style initialization
% * designation for constructors is equivalent to designation for any
%   general function call. Since a function prototype can be redeclared
%   many times, with arguments named differently each time (or not at
%   all!), this is considered to be an undesirable feature. We could
%   construct some set of rules to allow this behaviour, but it is
%   probably more trouble than it's worth, and no matter what we choose,
%   it is not likely to be obvious to most people.
% Constructing an anonymous member [7]
% + same as with calling any other function on an anonymous member
%   (implicit conversion by the compiler)
% - may be some cases where this is ambiguous => clarify with a cast
%   (try to design APIs to avoid sharing function signatures between
%   composed types to avoid this)
% Default Constructors and Destructors are called implicitly
% + cannot forget to construct or destruct an object
% - requires special syntax to specify that an object is not to be
%   constructed (@=)
% * an object will not be implicitly constructed OR destructed if
%   explicitly initialized like a C object (@= syntax)
% * an object will be destructed if there are no constructors in scope
%   (even though it is initialized like a C object) [8]
% An object which changes from POD type to non POD type will not change
% the semantics of a type containing it by composition
% * That is, constructors will not be regenerated at the point where
%   an object changes from POD type to non POD type, because this could
%   cause a cascade of constructors being regenerated for many other
%   types. Further, there is precedent for this behaviour in other
%   facets of Cforall's design, such as how nested functions interact.
% * This behaviour can be simplified in a language without declaration
%   before use, because a type can be classified as POD or non POD
%   (rather than potentially changing between the two at some point)
%   at the global scope (which is likely the most common case)
% * [9]
% Changes to polymorphic type classes
% * dtype and ftype remain the same
% * forall(otype T) is currently essentially the same as
%   forall(dtype T | { @size(T); void ?=?(T *, T); }).
%   The big addition is that you can declare an object of type T, rather
%   than just a pointer to an object of type T since you know the size,
%   and you can assign into a T.
%   * this definition is changed to add default constructor and
%     destructor declarations, to remain consistent with what type meant
%     before the introduction of constructors and destructors.
%     * that is, forall(type T) is now essentially the same as
%       forall(dtype T | { @size(T); void ?=?(T *, T);
%                          void ?{}(T *); void ^?{}(T *); })
%     + this is required to make generic types work correctly in
%       polymorphic functions
%     ? since declaring a constructor invalidates the autogenerated
%       routines, it is possible for a type to have constructors, but
%       not default constructors. That is, it might be the case that
%       you want to write a polymorphic function for a type which has
%       a size, but non-default constructors? Some options:
%       * declaring a constructor as a part of the assertions list for
%         a type declaration invalidates the default, so
%         forall(otype T | { void ?{}(T *, int); })
%         really means
%         forall(dtype T | { @size(T); void ?=?(T *, T);
%                            void ?{}(T *, int); void ^?{}(T *); })
%       * force users to fully declare the assertions list like the
%         above in this case (this seems very undesirable)
%       * add another type class with the current desugaring of type
%         (just size and assignment)
%       * provide some way of subtracting from an existing assertions
%         list (this might be useful to have in general)
% Implementation issues:
% Changes to prelude/autogen or built in defaults?
% * pointer ctors/dtors [prelude]
%   * other pointer type routines are declared in the prelude, and this
%     doesn't seem like it should be any different
% * basic type ctors/dtors [prelude]
%   * other basic type routines are declared in the prelude, and this
%     doesn't seem like it should be any different
% ? aggregate types [undecided, but leaning towards autogenerate]
%   * prelude
%     * routines specific to aggregate types cannot be predeclared in
%       the prelude because we don't know the name of every
%       aggregate type in the entire program
%   * autogenerate
%     + default assignment operator is already autogenerated for
%       aggregate types
%       * this seems to lead us in the direction of autogenerating,
%         because we may have a struct which contains other objects
%         that require construction [10]. If we choose not to
%         autogenerate in this case, then objects which are part of
%         other objects by composition will not be constructed unless
%         a constructor for the outer type is explicitly defined
%       * in this case, we would always autogenerate the appropriate
%         constructor(s) for an aggregate type, but just like with
%         basic types, pointer types, and enum types, the constructor
%         call can be elided when it is not necessary.
%     + constructors will have to be explicitly autogenerated
%       in the case where they are required for a polymorphic function,
%       when no user defined constructor is in scope, which may make it
%       easiest to always autogenerate all appropriate constructors
%     - n+2 constructors would have to be generated for a POD type
%       * one constructor for each number of valid arguments [0, n],
%         plus the copy constructor
%         * this is taking a simplified approach: in C, it is possible
%           to omit the enclosing braces in a declaration, which would
%           lead to a combinatorial explosion of generated constructors.
%           In the interest of keeping things tractable, Cforall may be
%           incompatible with C in this case. [11]
%       * for non-POD types, only autogenerate the default and copy
%         constructors
%       * alternative: generate only the default constructor and
%         special case initialization for any other constructor when
%         only the autogenerated one exists
%         - this is not very sensible, as by the previous point, these
%           constructors may be needed for polymorphic functions
%           anyway.
%     - must somehow distinguish in resolver between autogenerated and
%       user defined constructors (autogenerated should never be chosen
%       when a user defined option exists [check first parameter], even
%       if full signature differs) (this may also have applications
%       to other autogenerated routines?)
%     - this scheme does not naturally support designation (i.e. general
%       function calls do not support designation), thus these cases
%       will have to be treated specially in either case
%   * defaults
%     * i.e. hardcode a new set of rules for some "appropriate" default
%       behaviour for
%     + when resolving an initialization expression, explicitly check to
%       see if any constructors are in scope. If yes, attempt to resolve
%       to a constructor, and produce an error message if a match is not
%       found. If there are no constructors in scope, resolve to
%       initializing each field individually (C-style)
%     + does not attempt to autogenerate constructors for POD types,
%       which can be seen as a space optimization for the program
%       binary
%     - as stated previously, a polymorphic routine may require these
%       autogenerated constructors, so this doesn't seem like a big win,
%       because this leads to more complicated logic and tracking of
%       which constructors have already been generated
%     - even though a constructor is not explicitly declared or used
%       polymorphically, we might still need one for all uses of a
%       struct (e.g. in the case of composition).
%   * the biggest tradeoff in autogenerating vs. defaulting appears to
%     be in where and how the special code to check if constructors are
%     present is handled. It appears that there are more reasons to
%     autogenerate than not.
% --- examples
% [1] As an example of using constructors polymorphically, consider a
% slight modification on the foldl example I put on the mailing list a
% few months ago:
% context iterable(type collection, type element, type iterator) {
%   void ?{}(iterator *, collection); // used to be makeIterator, but can
%                             // idiomatically use constructor
%   int hasNext(iterator);
%   iterator ++?(iterator *);
%   lvalue element *?(iterator);
% };
% forall(type collection, type element, type result, type iterator
%   | iterable(collection, element, iterator))
% result foldl(collection c, result acc,
%     result (*reduce)(result, element)) {
%   iterator it = { c };
%   while (hasNext(it)) {
%     acc = reduce(acc, *it);
%     ++it;
%   }
%   return acc;
% }
% Now foldl makes use of the knowledge that the iterator type has a
% single argument constructor which takes the collection to iterate
% over. This pattern allows polymorphic code to look more natural
% (constructors are generally preferred to named initializer/creation
% routines, e.g. "makeIterator")
% [2] An example of some potentially dangerous code that we don't want
% to let easily slip through the cracks - if this is really what you
% want, then use @= syntax for the second declaration to quiet the
% compiler.
% struct A { int x, y, z; }
% ?{}(A *, int);
% ?{}(A *, int, int, int);
% A a1 = { 1 };         // uses ?{}(A *, int);
% A a2 = { 2, 3 };      // C-style initialization -> no invariants!
% A a3 = { 4, 5, 6 };   // uses ?{}(A *, int, int, int);
% [3] Since @= syntax creates a C object (essentially a POD, as far as
% the compiler is concerned), the object will not be destructed
% implicitly when it leaves scope, nor will it be copy constructed when
% it is returned. In this case, a memcpy should be equivalent to a move.
% // Box.h
% struct Box;
% void ?{}(Box **, int);
% void ^?{}(Box **);
% Box * make_fortytwo();
% // Box.cfa
% Box * make_fortytwo() {
%   Box *b @= {};
%   (&b){ 42 }; // construct explicitly
%   return b; // no destruction, essentially a move?
% }
% [4] Cforall's typesafe malloc can be composed with constructor
% expressions. It is possible for a user to define their own functions
% similar to malloc and achieve the same effects (e.g. Aaron's example
% of an arena allocator)
% // CFA malloc
% forall(type T)
% T * malloc() { return (T *)malloc(sizeof(T)); }
% struct A { int x, y, z; };
% void ?{}(A *, int);
% int foo(){
%   ...
%   // desugars to:
%   // A * a = ?{}(malloc(), 123);
%   A * a = malloc() { 123 };
%   ...
% }
% [5] Aaron's example of combining function calls with constructor
% syntax to perform an operation similar to C++'s std::vector::emplace
% (i.e. to construct a new element in place, without the need to
% copy)
% forall(type T)
% struct vector {
%   T * elem;
%   int len;
%   ...
% };
% ...
% forall(type T)
% T * vector_new(vector(T) * v) {
%   // reallocate if needed
%   return &v->elem[len++];
% }
% int main() {
%   vector(int) * v = ...
%   vector_new(v){ 42 };  // add element to the end of vector
% }
% [6] Pointer Constructors. It could be useful to use the existing
% constructor syntax even more uniformly for ADTs. With this, ADTs can
% be initialized in the same manner as any other object in a polymorphic
% function.
% // vector.h
% forall(type T) struct vector;
% forall(type T) void ?{}(vector(T) **);
% // adds an element to the end
% forall(type T) vector(T) * ?+?(vector(T) *, T);
% // vector.cfa
% // don't want to expose the implementation to the user and/or don't
% // want to recompile the entire program if the struct definition
% // changes
% forall(type T) struct vector {
%   T * elem;
%   int len;
%   int capacity;
% };
% forall(type T) void resize(vector(T) ** v) { ... }
% forall(type T) void ?{}(vector(T) ** v) {
%   vector(T) * vect = *v = malloc();
%   vect->capacity = 10;
%   vect->len = 0;
%   vect->elem = malloc(vect->capacity);
% }
% forall(type T) vector(T) * ?+?(vector(T) *v, T elem) {
%   if (v->len == v->capacity) resize(&v);
%   v->elem[v->len++] = elem;
% }
% // main.cfa
% #include "adt.h"
% forall(type T | { T ?+?(T, int); }
% T sumRange(int lower, int upper) {
%   T x;    // default construct
%   for (int i = lower; i <= upper; i++) {
%     x = x + i;
%   }
%   return x;
% }
% int main() {
%   vector(int) * numbers = sumRange(1, 10);
%   // numbers is now a vector containing [1..10]
%   int sum = sumRange(1, 10);
%   // sum is now an int containing the value 55
% }
% [7] The current proposal is to use the plan 9 model of inheritance.
% Under this model, all of the members of an unnamed struct instance
% become members of the containing struct. In addition, an object
% can be passed as an argument to a function expecting one of its
% base structs.
% struct Point {
%   double x;
%   double y;
% };
% struct ColoredPoint {
%   Point;        // anonymous member (no identifier)
%                 // => a ColoredPoint has an x and y of type double
%   int color;
% };
% ColoredPoint cp = ...;
% cp.x = 10.3;    // x from Point is accessed directly
% cp.color = 0x33aaff; // color is accessed normally
% foo(cp);        // cp can be used directly as a Point
% void ?{}(Point *p, double x, double y) {
%   p->x = x;
%   p->y = y;
% }
% void ?{}(ColoredPoint *cp, double x, double y, int color) {
%   (&cp){ x, y };  // unambiguous, no ?{}(ColoredPoint*,double,double)
%   cp->color = color;
% }
% struct Size {
%   double width;
%   double height;
% };
% void ?{}(Size *s, double w, double h) {
%   s->width = w;
%   s->height = h;
% }
% struct Foo {
%   Point;
%   Size;
% }
% ?{}(Foo &f, double x, double y, double w, double h) {
%   // (&f,x,y) is ambiguous => is it ?{}(Point*,double,double) or
%   // ?{}(Size*,double,double)? Solve with a cast:
%   ((Point*)&f){ x, y };
%   ((Size*)&f){ w, h };
% }
% [8] Destructors will be called on objects that were not constructed.
% struct A { ... };
% ^?{}(A *);
% {
%   A x;
%   A y @= {};
% } // x is destructed, even though it wasn't constructed
%   // y is not destructed, because it is explicitly a C object
% [9] A type's constructor is generated at declaration time using
% current information about an object's members. This is analogous to
% the treatment of other operators. For example, an object's assignment
% operator will not change to call the override of a member's assignment
% operator unless the object's assignment is also explicitly overridden.
% This problem can potentially be treated differently in Do, since each
% compilation unit is passed over at least twice (once to gather
% symbol information, once to generate code - this is necessary to
% achieve the "No declarations" goal)
% struct A { ... };
% struct B { A x; };
% ...
% void ?{}(A *);  // from this point on, A objects will be constructed
% B b1;           // b1 and b1.x are both NOT constructed, because B
%                 // objects are not constructed
% void ?{}(B *);  // from this point on, B objects will be constructed
% B b2;           // b2 and b2.x are both constructed
% struct C { A x; };
% // implicit definition of ?{}(C*), because C is not a POD type since
% // it contains a non-POD type by composition
% C c;            // c and c.x are both constructed
% [10] Requiring construction by composition
% struct A {
%   ...
% };
% // declared ctor disables default c-style initialization of
% // A objects; A is no longer a POD type
% void ?{}(A *);
% struct B {
%   A x;
% };
% // B objects can not be C-style initialized, because A objects
% // must be constructed => B objects are transitively not POD types
% B b; // b.x must be constructed, but B is not constructible
%      // => must autogenerate ?{}(B *) after struct B definition,
%      // which calls ?{}(&b.x)
% [11] Explosion in the number of generated constructors, due to strange
% C semantics.
% struct A { int x, y; };
% struct B { A u, v, w; };
% A a = { 0, 0 };
% // in C, you are allowed to do this
% B b1 = { 1, 2, 3, 4, 5, 6 };
% B b2 = { 1, 2, 3 };
% B b3 = { a, a, a };
% B b4 = { a, 5, 4, a };
% B b5 = { 1, 2, a, 3 };
% // we want to disallow b1, b2, b4, and b5 in Cforall.
% // In particular, we will autogenerate these constructors:
% void ?{}(A *);             // default/0 parameters
% void ?{}(A *, int);        // 1 parameter
% void ?{}(A *, int, int);   // 2 parameters
% void ?{}(A *, const A *);  // copy constructor
% void ?{}(B *);             // default/0 parameters
% void ?{}(B *, A);          // 1 parameter
% void ?{}(B *, A, A);       // 2 parameters
% void ?{}(B *, A, A, A);    // 3 parameters
% void ?{}(B *, const B *);  // copy constructor
% // we will not generate constructors for every valid combination
% // of members in C. For example, we will not generate
% void ?{}(B *, int, int, int, int, int, int);   // b1 would need this
% void ?{}(B *, int, int, int);                  // b2 would need this
% void ?{}(B *, A, int, int, A);                 // b4 would need this
% void ?{}(B *, int, int, A, int);               // b5 would need this
% // and so on
% TODO: talk somewhere about compound literals?
Since \CFA is a true systems language, it does not provide a garbage collector.
As well, \CFA is not an object-oriented programming language, i.e., structures cannot have routine members.
Nevertheless, one important goal is to reduce programming complexity and increase safety.
To that end, \CFA provides support for implicit pre/post-execution of routines for objects, via constructors and destructors.
% TODO: this is old. remove or refactor
% Manual resource management is difficult.
% Part of the difficulty results from not having any guarantees about the current state of an object.
% Objects can be internally composed of pointers that may reference resources which may or may not need to be manually released, and keeping track of that state for each object can be difficult for the end user.
% Constructors and destructors provide a mechanism to bookend the lifetime of an object, allowing the designer of a type to establish invariants for objects of that type.
% Constructors guarantee that object initialization code is run before the object can be used, while destructors provide a mechanism that is guaranteed to be run immediately before an object's lifetime ends.
% Constructors and destructors can help to simplify resource management when used in a disciplined way.
% In particular, when all resources are acquired in a constructor, and all resources are released in a destructor, no resource leaks are possible.
% This pattern is a popular idiom in several languages, such as \CC, known as RAII (Resource Acquisition Is Initialization).
This chapter details the design of constructors and destructors in \CFA, along with their current implementation in the translator.
Generated code samples have been edited for clarity and brevity.
\section{Design Criteria}
Next, @x@ is assigned the value of @y@.
In the last line, @z@ is implicitly initialized to 0 since it is marked @static@.
The key difference between assignment and initialization is that assignment occurs on a live object (i.e., an object that contains data).
It is important to note that this means @x@ could have been used uninitialized prior to being assigned, while @y@ could not be used uninitialized.
Use of uninitialized variables yields undefined behaviour, which is a common source of errors in C programs. % TODO: *citation*
Initialization of a declaration is strictly optional, permitting uninitialized variables to exist.
Furthermore, declaration initialization is limited to expressions, so there is no way to insert arbitrary code before a variable is live, without delaying the declaration.
Many C compilers give good warnings for uninitialized variables most of the time, but they cannot in all cases.
\begin{cfacode}
int f(int *);  // output parameter: never reads, only writes
int g(int *);  // input parameter: never writes, only reads,
               // so requires initialized variable
int x, y;
f(&x);  // okay - only writes to x
g(&y);  // uses y uninitialized
\end{cfacode}
Other languages are able to give errors in the case of uninitialized variable use, but due to backwards compatibility concerns, this is not the case in \CFA.
In C, constructors and destructors are often mimicked by providing routines that create and tear down objects, where the tear down function is typically only necessary if the type modifies the execution environment.
\begin{cfacode}
struct array_int {
};
struct array_int create_array(int sz) {
  return (struct array_int) { calloc(sz, sizeof(int)) };
}
void destroy_rh(struct resource_holder * rh) {
Furthermore, even with this idiom it is easy to make mistakes, such as forgetting to destroy an object or destroying it multiple times.
A constructor provides a way of ensuring that the necessary aspects of object initialization are performed, from setting up invariants to providing compile- and run-time checks for appropriate initialization parameters.
This goal is achieved through a guarantee that a constructor is called implicitly after every object is allocated from a type with associated constructors, as part of an object's definition.
Since a constructor is called on every object of a managed type, it is impossible to forget to initialize such objects, as long as all constructors perform some sensible form of initialization.
In \CFA, a constructor is a function with the name @?{}@.
Like other operators in \CFA, the name represents the syntax used to call the constructor, e.g., @struct S = { ... };@.
Every constructor must have a return type of @void@ and at least one parameter, the first of which is colloquially referred to as the \emph{this} parameter, as in many object-oriented programming languages (however, a programmer can give it an arbitrary name).
The @this@ parameter must have a pointer type, whose base type is the type of object that the function constructs.
In C, if the user creates an @Array@ object, the fields @data@ and @len@ are uninitialized, unless an explicit initializer list is present.
It is the user's responsibility to remember to initialize both of the fields to sensible values, since there are no implicit checks for invalid values or reasonable defaults.
In \CFA, the user can define a constructor to handle initialization of @Array@ objects.
This constructor initializes @x@ so that its @length@ field has the value 10 and its @data@ field holds a pointer to a block of memory large enough to hold 10 @int@s, and it sets the value of each element of the array to 0.
This particular form of constructor is called the \emph{default constructor}, because it is called on an object defined without an initializer.
In other words, a default constructor is a constructor that takes a single argument: the @this@ parameter.
In \CFA, a destructor is a function much like a constructor, except that its name is \lstinline!^?{}! and it takes only one argument.
A destructor for the @Array@ type can be defined as follows.
\begin{cfacode}
}
\end{cfacode}
The destructor is automatically called at deallocation for all objects of type @Array@.
Hence, the memory associated with an @Array@ is automatically freed when the object's lifetime ends.
The exact guarantees made by \CFA with respect to the calling of destructors are discussed in section \ref{sub:implicit_dtor}.
\end{cfacode}
By the previous definition of the default constructor for @Array@, @x@ and @y@ are initialized to valid arrays of length 10 after their respective definitions.
On line 2, @z@ is initialized with the value of @x@, while on line 3, @y@ is assigned the value of @x@.
The key distinction between initialization and assignment is that an object to be initialized does not hold any meaningful values, whereas an object to be assigned might.
In particular, these cases cannot be handled the same way because in the former case @z@ does not currently own an array, while @y@ does.
The first function is called a \emph{copy constructor}, because it constructs its argument by copying the values from another object of the same type.
The second function is the standard copy-assignment operator.
The four functions (default constructor, destructor, copy constructor, and assignment operator) are special in that they safely control the state of most objects.
It is possible to define a constructor that takes any combination of parameters to provide additional initialization options.
For example, a reasonable extension to the array type would be a constructor that allocates the array to a given initial capacity and initializes the elements of the array to a given @fill@ value.
\begin{cfacode}
void ?{}(Array * arr, int capacity, int fill) {
Array x, y = { 20, 0xdeadbeef }, z = y;
\end{cfacode}
In \CFA, constructor calls look just like C initializers, which allows them to be inserted into legacy C code with minimal code changes, and also provides a very simple syntax with which veteran C programmers are familiar.
One downside of reusing C initialization syntax is that it is not possible to determine whether an object is constructed just by looking at its declaration, since that requires knowledge of whether the type is managed at that point.
Destructors are implicitly called in reverse declaration-order so that objects with dependencies are destructed before the objects on which they depend.

\subsection{Calling Syntax} \label{sub:syntax}
There are several ways to construct an object in \CFA.
As previously introduced, every variable is automatically constructed at its definition, which is the most natural way to construct an object.
A * y = malloc();  // copy construct: ?{}(&y, malloc())

?{}(&x);    // explicit construct x, second construction
?{}(y, x);  // explicit construct y from x, second construction
^?{}(&x);   // explicit destroy x, in different order
^?{}(y);    // explicit destroy y

// implicit ^?{}(&x);
\end{cfacode}
Calling a constructor or destructor directly is a flexible feature that allows complete control over the management of storage.
In particular, constructors double as a placement syntax.
\begin{cfacode}
Finally, constructors and destructors support \emph{operator syntax}.
Like other operators in \CFA, the function name mirrors the use-case, in that the first $N$ arguments fill in the place of the question mark.
This syntactic form is similar to the new initialization syntax in \CCeleven, except that it is used in expression contexts rather than declaration contexts.
\begin{cfacode}
struct A { ... };
Destructor operator syntax is actually a statement, and requires parentheses for symmetry with constructor syntax.
One of these three syntactic forms should appeal to either C or \CC programmers using \CFA.

\subsection{Constructor Expressions}
In \CFA, it is possible to use a constructor as an expression.
Like other operators, the function name @?{}@ matches its operator syntax.
For example, @(&x){}@ calls the default constructor on the variable @x@, and produces @&x@ as a result.
A key example for this capability is the use of constructor expressions to initialize the result of a call to the standard C routine @malloc@.
\begin{cfacode}
struct X {
	...
};
void ?{}(X *, double);

X * x = malloc(sizeof(X)){ 1.5 };
\end{cfacode}
In this example, @malloc@ dynamically allocates storage and initializes it using a constructor, all before assigning it into the variable @x@.
If this extension is not present, constructing dynamically allocated objects is much more cumbersome, requiring separate initialization of the pointer and initialization of the pointed-to memory.
\begin{cfacode}
X * x = malloc(sizeof(X));
x{ 1.5 };
\end{cfacode}
Not only is this verbose, but it is also more error-prone, since this form allows maintenance code to easily sneak in between the initialization of @x@ and the initialization of the memory that @x@ points to.
This feature is implemented via a transformation producing the value of the first argument of the constructor, since constructors do not themselves have a return value.
Since this transformation results in two instances of the subexpression, care is taken to allocate a temporary variable to hold the result of the subexpression in the case where the subexpression may contain side effects.
The previous example generates the following code.
\begin{cfacode}
struct X *_tmp_ctor;
struct X *x = ?{}(  // construct result of malloc
	_tmp_ctor=malloc(sizeof(struct X)), // store result of malloc
	1.5
), _tmp_ctor; // produce constructed result of malloc
\end{cfacode}
It should be noted that this technique is not exclusive to @malloc@, and allows a user to write a custom allocator that can be idiomatically used in much the same way as a constructed @malloc@ call.

It is also possible to use operator syntax with destructors.
Unlike constructors, operator syntax with destructors is a statement and thus does not produce a value, since the destructed object is invalidated by the use of a destructor.
For example, \lstinline!^(&x){}! calls the destructor on the variable @x@.

\subsection{Function Generation}
In \CFA, every type is defined to have the core set of four special functions described previously.
Having these functions exist for every type greatly simplifies the semantics of the language, since most operations can simply be defined directly in terms of function calls.
In addition to simplifying the definition of the language, this also simplifies the analysis that the translator must perform.
There are several kinds of user-defined types: structures, unions, and enumerations.
To aid in ease of use, the standard set of four functions is automatically generated for a user-defined type after its definition is completed.
By auto-generating these functions, it is ensured that legacy C code continues to work correctly in every context where \CFA expects these functions to exist, since they are generated for every complete type.

The generated functions for enumerations are the simplest.
Since enumerations in C are essentially just another integral type, the generated functions behave in the same way as the built-in functions for the basic types.
For example, given the enumeration
\begin{cfacode}
}
void ?{}(enum Colour *_dst, enum Colour _src){
	*_dst=_src;  // bitwise copy
}
void ^?{}(enum Colour *_dst){
}
enum Colour ?=?(enum Colour *_dst, enum Colour _src){
	return *_dst=_src; // bitwise copy
}
\end{cfacode}
In the future, \CFA will introduce strongly-typed enumerations, like those in \CC.
The existing generated routines are sufficient to express this restriction, since they are currently set up to take in values of that enumeration type.
Changes related to this feature only need to affect the expression resolution phase, where stricter rules will be applied to prevent implicit conversions from integral types to enumeration types, but should continue to permit conversions from enumeration types to @int@.
In this way, it is still possible to add an @int@ to an enumeration, but the resulting value is an @int@, meaning it cannot be reassigned to an enumeration without a cast.

For structures, the situation is more complicated.
Given a structure @S@ with members @M$_0$@, @M$_1$@, ... @M$_{N-1}$@, each function @f@ in the standard set calls \lstinline{f(s->M$_i$, ...)} for each @$i$@.
That is, a default constructor for @S@ default constructs the members of @S@, the copy constructor copy constructs them, and so on.
For example, given the structure definition
\begin{cfacode}
struct A {
}
\end{cfacode}
It is important to note that the destructors are called in reverse declaration order to prevent conflicts in the event there are dependencies among members.

In addition to the standard set, a set of \emph{field constructors} is also generated for structures.
The field constructors are constructors that consume a prefix of the structure's member-list.
That is, $N$ constructors are built of the form @void ?{}(S *, T$_{\text{M}_0}$)@, @void ?{}(S *, T$_{\text{M}_0}$, T$_{\text{M}_1}$)@, ..., @void ?{}(S *, T$_{\text{M}_0}$, T$_{\text{M}_1}$, ..., T$_{\text{M}_{N-1}}$)@, where members are copy constructed if they have a corresponding positional argument and are default constructed otherwise.
The addition of field constructors allows structures in \CFA to be used naturally in the same ways as used in C (i.e., to initialize any prefix of the structure), e.g., @A a0 = { b }, a1 = { b, c }@.
Extending the previous example, the following constructors are implicitly generated for @A@.
\begin{cfacode}
\end{cfacode}
For unions, the default constructor and destructor do nothing, as it is not obvious which member, if any, should be constructed.
For copy constructor and assignment operations, a bitwise @memcpy@ is applied.
In standard C, a union can also be initialized using a value of the same type as its first member, and so a corresponding field constructor is generated to perform a bitwise @memcpy@ of the object.
An alternative to this design is to always construct and destruct the first member of a union, to match with the C semantics of initializing the first member of the union.
This approach ultimately feels subtle and unsafe.
Another option is, like \CC, to disallow unions from containing members that are themselves managed types.
% This feature works in the \CFA model, since constructors are simply special functions and can be called explicitly, unlike in \CC. % this sentence isn't really true => placement new
In \CCeleven, unions may have managed members, with the caveat that if there are any members with a user-defined operation, then that operation is not implicitly defined, forcing the user to define the operation if necessary.
This restriction could easily be added into \CFA once \emph{deleted} functions are added.

Here, @&s@ and @&s2@ are cast to unqualified pointer types.
This mechanism allows the same constructors and destructors to be used for qualified objects as for unqualified objects.
This applies only to implicitly generated constructor calls.
Hence, explicitly re-initializing qualified objects with a constructor requires an explicit cast.

As discussed in Section \ref{sub:c_background}, compound literals create unnamed objects.
This mechanism can continue to be used seamlessly in \CFA with managed types to create temporary objects.
The object created by a compound literal is constructed using the provided brace-enclosed initializer-list, and is destructed at the end of the scope in which it is used.
For example,
\begin{cfacode}
struct A { int x, y; };
void ?{}(A *, int, int);

{
	int x = (A){ 10, 20 }.x;
}
\end{cfacode}
is equivalent to
\begin{cfacode}
struct A { int x, y; };
void ?{}(A *, int, int);

{
	A _tmp;
	?{}(&_tmp, 10, 20);
	int x = _tmp.x;
	^?{}(&_tmp);
}
\end{cfacode}
Unlike \CC, \CFA provides an escape hatch that allows a user to decide at an object's definition whether it should be managed or not.
A a2 @= { 0 };  // unmanaged
\end{cfacode}
In this example, @a1@ is a managed object, and thus is default constructed and destructed at the start/end of @a1@'s lifetime, while @a2@ is an unmanaged object and is not implicitly constructed or destructed.
Instead, @a2.x@ is initialized to @0@ as if it were a C object, because of the explicit initializer.
In addition to freedom, \ateq provides a simple path to migrating legacy C code to \CFA, in that objects can be moved from C-style initialization to \CFA gradually and individually.
It is worth noting that the use of unmanaged objects can be tricky to get right, since there is no guarantee that the proper invariants are established on an unmanaged object.
It is recommended that most objects be managed by sensible constructors and destructors; unmanaged objects should be reserved for situations where they are absolutely necessary.

When a user declares any constructor or destructor, the corresponding intrinsic/generated function and all field constructors for that type are hidden, so that they are not found during expression resolution until the user-defined function goes out of scope.
Furthermore, if the user declares any constructor, then the intrinsic/generated default constructor is also hidden, precluding default construction.
These semantics closely mirror the rule for implicit declaration of constructors in \CC, wherein the default constructor is implicitly declared if there is no user-declared constructor \cite[p.~186]{ANSI98:C++}.
\begin{cfacode}
struct S { int x, y; };
S s0, s1 = { 0 }, s2 = { 0, 2 }, s3 = s2;  // okay
{
	void ?{}(S * s, int i) { s->x = i*2; } // locally hide autogen constructors
	S s4;  // error
	S s5 = { 3 };  // okay
When defining a constructor or destructor for a struct @S@, any members that are not explicitly constructed or destructed are implicitly constructed or destructed automatically.
If an explicit call is present, then that call is taken in preference to any implicitly generated call.
A consequence of this rule is that it is possible, unlike \CC, to precisely control the order of construction and destruction of sub-objects on a per-constructor basis, whereas in \CC sub-object initialization and destruction is always performed based on the declaration order.
\begin{cfacode}
struct A {
}
\end{cfacode}
Finally, it is illegal for a sub-object to be explicitly constructed for the first time after it is used for the first time.
If the translator cannot be reasonably sure that an object is constructed prior to its first use, but is constructed afterward, an error is emitted.
More specifically, the translator searches the body of a constructor to ensure that every sub-object is initialized.
\begin{cfacode}
void ?{}(A * a, double x) {
}
\end{cfacode}
However, if the translator sees a sub-object used within the body of a constructor, but does not see a constructor call that uses the sub-object as the target of a constructor, then the translator assumes the object is to be implicitly constructed (copy constructed in a copy constructor and default constructed in any other constructor).
\begin{cfacode}
void ?{}(A * a) {
}
// z, y, w implicitly destructed, in this order
\end{cfacode}
If, at any point, the @this@ parameter is passed directly as the target of another constructor, then it is assumed that constructor handles the initialization of all of the object's members and no implicit constructor calls are added.
To override this rule, \ateq can be used to force the translator to trust the programmer's discretion.
This form of \ateq is not yet implemented.

Despite great effort, some forms of C syntax do not work well with constructors in \CFA.
In particular, constructor calls cannot contain designations (see \ref{sub:c_background}), since this is equivalent to allowing designations on the arguments to arbitrary function calls.
In C, function prototypes are permitted to have arbitrary parameter names, including no names at all, which may have no connection to the actual names used at function definition.
Furthermore, a function prototype can be repeated an arbitrary number of times, each time using different names.
\begin{cfacode}
// all legal forward declarations in C
f(b:10, a:20, c:30);  // which parameter is which?
\end{cfacode}
As a result, it was decided that any attempt to resolve designated function calls with C's function prototype rules would be brittle, and thus it is not sensible to allow designations in constructor calls.
% Many other languages do allow named arguments, such as Python and Scala, but they do not allow multiple arbitrarily named forward declarations of a function.

In addition, constructor calls do not support unnamed nesting.
\begin{cfacode}
struct B { int x; };
struct C { int y; };
struct A { B b; C c; };
void ?{}(A *, B);
void ?{}(A *, C);

A a = {
	{ 10 },  // construct B? - invalid
};
\end{cfacode}
In C, nesting initializers means that the programmer intends to initialize sub-objects with the nested initializers.
The reason for this omission is to both simplify the mental model for using constructors, and to make initialization simpler for the expression resolver.
If this were allowed, it would be necessary for the expression resolver to decide whether each argument to the constructor call could initialize to some argument in one of the available constructors, making the problem highly recursive and potentially much more expensive.
That is, in the previous example the line marked as an error could mean construct using @?{}(A *, B)@ or with @?{}(A *, C)@, since the inner initializer @{ 10 }@ could be taken as an intermediate object of type @B@ or @C@.
In practice, however, there could be many objects that can be constructed from a given @int@ (or, indeed, any arbitrary parameter list), and thus a complete solution to this problem would require fully exploring all possibilities.
It should be noted that unmanaged objects can still make use of designations and nested initializers in \CFA.
It is simple to overcome this limitation for managed objects by making use of compound literals, so that the arguments to the constructor call are explicitly typed.

\subsection{Implicit Destructors}
	if (i == 2) return; // destruct x, y
	} // destruct y
}
\end{cfacode}
%% having this feels excessive, but it's here if necessary
% This procedure generates the following code.
% \begin{cfacode}
% void f(int i){
%   struct A x;
%   ?{}(&x);
%   {
%     struct A y;
%     ?{}(&y);
%     {
%       struct A z;
%       ?{}(&z);
%       {
%         if ((i==0)!=0) {
%           ^?{}(&z);
%           ^?{}(&y);
%           ^?{}(&x);
%           return;
%         }
%       }
%       if ((i==1)!=0) {
%           ^?{}(&z);
%           ^?{}(&y);
%           ^?{}(&x);
%           return;
%       }
%       ^?{}(&z);
%     }
%     if ((i==2)!=0) {
%       ^?{}(&y);
%       ^?{}(&x);
%       return;
%     }
%     ^?{}(&y);
%   }
%   ^?{}(&x);
% }
% \end{cfacode}
} // destruct x
\end{cfacode}

The next example illustrates the use of simple continue and break statements and the manner in which they interact with implicit destructors.
\end{cfacode}
Since a destructor call is automatically inserted at the end of the block, nothing special needs to happen to destruct @x@ in the case where control reaches the end of the loop.
In the case where @i@ is @2@, the continue statement runs the loop update expression and attempts to begin the next iteration of the loop.
Since continue is a C statement, which does not understand destructors, it is transformed into a @goto@ statement that branches to the end of the loop, just before the block's destructors, to ensure that @x@ is destructed.
When @i@ is @3@, the break statement moves control to just past the end of the loop.
Unlike the previous case, the destructor for @x@ cannot be reused, so a destructor call for @x@ is inserted just before the break statement.

\CFA also supports labeled break and continue statements, which allow more precise manipulation of control flow.
Labeled break and continue allow the programmer to specify which control structure to target by using a label attached to a control structure.
\begin{cfacode}[emph={L1}, emphstyle=\color{red}]
L1: for (int i = 0; i < 10; i++) {
	A x;
	for (int j = 0; j < 10; j++) {
		A y;
		if (j == 0) {
			continue;    // destruct y
		} else if (j == 1) {
			break;       // destruct y
		}
		if (i == 1) {
			continue L1; // destruct y
		} else if (i == 2) {
\end{cfacode}
The statement @continue L1@ begins the next iteration of the outer for-loop.
Since the semantics of continue require the loop update expression to execute, control branches to the end of the outer for loop, meaning that the block destructor for @x@ can be reused, and it is only necessary to generate the destructor for @y@.
Break, on the other hand, requires jumping out of both loops, so the destructors for both @x@ and @y@ are generated and inserted before the @break L1@ statement.

Finally, an example that demonstrates goto.
}
\end{cfacode}
All break and continue statements are implemented in \CFA in terms of goto statements, so the more constrained forms are precisely governed by these rules.
The next example demonstrates the error case.

\subsection{Implicit Copy Construction} \label{s:implicit_copy_construction}
When a function is called, the arguments supplied to the call are subject to implicit copy construction (and destruction of the generated temporary), and the return value is subject to destruction.
When a value is returned from a function, the copy constructor is called to pass the value back to the call site.
Exempt from these rules are intrinsic and built-in functions.
It should be noted that unmanaged objects are subject to copy constructor calls when passed as arguments to a function or when returned from a function, since they are not the \emph{target} of the copy constructor call.
That is, since the parameter is not marked as an unmanaged object using \ateq, it is copy constructed if it is returned by value or passed as an argument to another function, so to guarantee consistent behaviour, unmanaged objects must be copy constructed when passed as arguments.
This is an important detail to bear in mind when using unmanaged objects, and could produce unexpected results when mixed with objects that are explicitly constructed.
\begin{cfacode}
void ^?{}(A *);

A identity(A x) { // pass by value => need local copy
	return x;       // return by value => make call-site copy
}

A y, z @= {};
identity(y);  // copy construct y into x
identity(z);  // copy construct z into x
\end{cfacode}
Note that @z@ is copy constructed into a temporary variable to be passed as an argument, which is also destructed after the call.
A special syntactic form, such as a variant of \ateq, could be implemented to specify at the call site that an argument should not be copy constructed, to regain some control for the C programmer.
This generates the following
\begin{cfacode}
struct A f(struct A x){
struct A _retval_f;
?{}((&_retval_f), x);
struct A _retval_f;    // return value
?{}((&_retval_f), x);  // copy construct return value
return _retval_f;
}
struct A y;
?{}(&y);
struct A z = { 0 };
struct A _tmp_cp1;     // argument 1
struct A _tmp_cp_ret0; // return value
_tmp_cp_ret0=f((?{}(&_tmp_cp1, y) , _tmp_cp1)), _tmp_cp_ret0;
^?{}(&_tmp_cp_ret0);   // return value
^?{}(&_tmp_cp1);       // argument 1
struct A _tmp_cp2;     // argument 1
struct A _tmp_cp_ret1; // return value
_tmp_cp_ret1=f((?{}(&_tmp_cp2, z), _tmp_cp2)), _tmp_cp_ret1;
^?{}(&_tmp_cp_ret1);   // return value
^?{}(&_tmp_cp2);       // argument 1
?{}(&y);                 // default construct
struct A z = { 0 };      // C default
struct A _tmp_cp1;       // argument 1
struct A _tmp_cp_ret0;   // return value
_tmp_cp_ret0=f(
(?{}(&_tmp_cp1, y) , _tmp_cp1)  // argument is a comma expression
), _tmp_cp_ret0;         // return value for cascading
^?{}(&_tmp_cp_ret0);     // destruct return value
^?{}(&_tmp_cp1);         // destruct argument 1
struct A _tmp_cp2;       // argument 1
struct A _tmp_cp_ret1;   // return value
_tmp_cp_ret1=f(
(?{}(&_tmp_cp2, z), _tmp_cp2)  // argument is a comma expression
), _tmp_cp_ret1;         // return value for cascading
^?{}(&_tmp_cp_ret1);     // destruct return value
^?{}(&_tmp_cp2);         // destruct argument 1
^?{}(&y);
\end{cfacode}
A known issue with this implementation is that the return value of a function is not guaranteed to have the same address for its entire lifetime.
Specifically, since @_retval_f@ is allocated and constructed in @f@ then returned by value, the internal data is bitwise copied into the caller's stack frame.
A special syntactic form, such as a variant of \ateq, can be implemented to specify at the call site that an argument should not be copy constructed, to regain some control for the C programmer. \begin{cfacode} identity(z@);  // do not copy construct argument // - will copy construct/destruct return value A@ identity_nocopy(A @ x) {  // argument not copy constructed or destructed return x;  // not copy constructed // return type marked @ => not destructed } \end{cfacode} It should be noted that reference types will allow specifying that a value does not need to be copied, however reference types do not provide a means of preventing implicit copy construction from uses of the reference, so the problem is still present when passing or returning the reference by value. A known issue with this implementation is that the argument and return value temporaries are not guaranteed to have the same address for their entire lifetimes. In the previous example, since @_retval_f@ is allocated and constructed in @f@, then returned by value, the internal data is bitwise copied into the caller's stack frame. This approach works out most of the time, because typically destructors need to only access the fields of the object and recursively destroy. It is currently the case that constructors and destructors which use the @this@ pointer as a unique identifier to store data externally will not work correctly for return value objects. Thus is it not safe to rely on an object's @this@ pointer to remain constant throughout execution of the program. It is currently the case that constructors and destructors that use the @this@ pointer as a unique identifier to store data externally do not work correctly for return value objects. Thus, it is currently not safe to rely on an object's @this@ pointer to remain constant throughout execution of the program. 
\begin{cfacode} A * external_data[32]; } } A makeA() { A x;  // stores &x in external_data return x; } makeA();  // return temporary has a different address than x // equivalent to: //   A _tmp; //   _tmp = makeA(), _tmp; //   ^?{}(&_tmp); \end{cfacode} In the above example, a global array of pointers is used to keep track of all of the allocated @A@ objects. Due to copying on return, the current object being destructed will not exist in the array if an @A@ object is ever returned by value from a function. This problem could be solved in the translator by mutating the function signatures so that the return value is moved into the parameter list. Due to copying on return, the current object being destructed does not exist in the array if an @A@ object is ever returned by value from a function, such as in @makeA@. This problem could be solved in the translator by changing the function signatures so that the return value is moved into the parameter list. For example, the translator could restructure the code like so \begin{cfacode} \end{cfacode} This transformation provides @f@ with the address of the return variable so that it can be constructed into directly. It is worth pointing out that this kind of signature rewriting already occurs in polymorphic functions which return by value, as discussed in \cite{Bilson03}. It is worth pointing out that this kind of signature rewriting already occurs in polymorphic functions that return by value, as discussed in \cite{Bilson03}. A key difference in this case is that every function would need to be rewritten like this, since types can switch between managed and unmanaged at different scope levels, e.g. \begin{cfacode} struct A { int v; }; A x; // unmanaged A x; // unmanaged, since only trivial constructors are available { void ?{}(A * a) { ... 
} A z; // unmanaged \end{cfacode} Hence there is not enough information to determine at function declaration to determine whether a type is managed or not, and thus it is the case that all signatures have to be rewritten to account for possible copy constructor and destructor calls. Hence there is not enough information to determine at function declaration whether a type is managed or not, and thus it is the case that all signatures have to be rewritten to account for possible copy constructor and destructor calls. Even with this change, it would still be possible to declare backwards compatible function prototypes with an @extern "C"@ block, which allows for the definition of C-compatible functions within \CFA code, however this would require actual changes to the way code inside of an @extern "C"@ function is generated as compared with normal code generation. Furthermore, it isn't possible to overload C functions, so using @extern "C"@ to declare functions is of limited use. It would be possible to regain some control by adding an attribute to structs which specifies whether they can be managed or not (perhaps \emph{manageable} or \emph{unmanageable}), and to emit an error in the case that a constructor or destructor is declared for an unmanageable type. Furthermore, it is not possible to overload C functions, so using @extern "C"@ to declare functions is of limited use. It would be possible to regain some control by adding an attribute to structs that specifies whether they can be managed or not (perhaps \emph{manageable} or \emph{unmanageable}), and to emit an error in the case that a constructor or destructor is declared for an unmanageable type. Ideally, structs should be manageable by default, since otherwise the default case becomes more verbose. This means that in general, function signatures would have to be rewritten, and in a select few cases the signatures would not be rewritten. 
\end{cfacode} An alternative is to instead make the attribute \emph{identifiable}, which states that objects of this type use the @this@ parameter as an identity. This strikes more closely to the visibile problem, in that only types marked as identifiable would need to have the return value moved into the parameter list, and every other type could remain the same. This strikes more closely to the visible problem, in that only types marked as identifiable would need to have the return value moved into the parameter list, and every other type could remain the same. Furthermore, no restrictions would need to be placed on whether objects can be constructed. \begin{cfacode} \end{cfacode} Ultimately, this is the type of transformation that a real compiler would make when generating assembly code. Since a compiler has full control over its calling conventions, it can seamlessly allow passing the return parameter without outwardly changing the signature of a routine. As such, it has been decided that this issue is not currently a priority. Ultimately, both of these are patchwork solutions. Since a real compiler has full control over its calling conventions, it can seamlessly allow passing the return parameter without outwardly changing the signature of a routine. As such, it has been decided that this issue is not currently a priority and will be fixed when a full \CFA compiler is implemented. \section{Implementation} \subsection{Array Initialization} Arrays are a special case in the C type system. Arrays are a special case in the C type-system. C arrays do not carry around their size, making it impossible to write a standalone \CFA function that constructs or destructs an array while maintaining the standard interface for constructors and destructors. Instead, \CFA defines the initialization and destruction of an array recursively. By default, objects within a translation unit are constructed in declaration order, and destructed in the reverse order. 
The default order of construction of objects amongst translation units is unspecified. % TODO: not yet implemented, but g++ provides attribute init_priority, which allows specifying the order of global construction on a per object basis %   https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes % suggestion: implement this in CFA by picking objects with a specified priority and pulling them into their own init functions (could even group them by priority level -> map>) and pull init_priority forward into constructor and destructor attributes with the same priority level It is, however, guaranteed that any global objects in the standard library are initialized prior to the initialization of any object in the user program. This feature is implemented in the \CFA translator by grouping every global constructor call into a function with the GCC attribute \emph{constructor}, which performs most of the heavy lifting. % CITE: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes This feature is implemented in the \CFA translator by grouping every global constructor call into a function with the GCC attribute \emph{constructor}, which performs most of the heavy lifting \cite[6.31.1]{GCCExtensions}. A similar function is generated with the \emph{destructor} attribute, which handles all global destructor calls. At the time of writing, initialization routines in the library are specified with priority \emph{101}, which is the highest priority level that GCC allows, whereas initialization routines in the user's code are implicitly given the default priority level, which ensures they have a lower priority than any code with a specified priority level. This mechanism allows arbitrarily complicated initialization to occur before any user code runs, making it possible for library designers to initialize their modules without requiring the user to call specific startup or teardown routines. 
This mechanism allows arbitrarily complicated initialization to occur before any user code runs, making it possible for library designers to initialize their modules without requiring the user to call specific startup or tear-down routines.
For example, given the following global declarations.
\end{cfacode}
%   https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes
% suggestion: implement this in CFA by picking objects with a specified priority and pulling them into their own init functions (could even group them by priority level -> map>) and pull init_priority forward into constructor and destructor attributes with the same priority level
GCC provides an attribute @init_priority@, which allows specifying the relative priority for initialization of global objects on a per-object basis in \CC.
A similar attribute can be implemented in \CFA by pulling marked objects into global constructor/destructor-attribute functions with the specified priority.
For example,
\begin{cfacode}
struct A { ... };
void ?{}(A *, int);
void ^?{}(A *);
__attribute__((init_priority(200))) A x = { 123 };
\end{cfacode}
would generate
\begin{cfacode}
A x;
__attribute__((constructor(200))) __init_x() {
?{}(&x, 123);  // construct x with priority 200
}
__attribute__((destructor(200))) __destroy_x() {
^?{}(&x);      // destruct x with priority 200
}
\end{cfacode}
\subsection{Static Local Variables}
In standard C, it is possible to mark variables that are local to a function with the @static@ storage class.
Unlike normal local variables, a @static@ local variable is defined to live for the entire duration of the program, so that each call to the function has access to the same variable with the same address and value as it had in the previous call to the function.
% TODO: mention dynamic loading caveat??
Much like global variables, in C @static@ variables must be initialized to a \emph{compile-time constant value} so that a compiler is able to create storage for the variable and initialize it before the program begins running. Unlike normal local variables, a @static@ local variable is defined to live for the entire duration of the program, so that each call to the function has access to the same variable with the same address and value as it had in the previous call to the function. Much like global variables, in C @static@ variables can only be initialized to a \emph{compile-time constant value} so that a compiler is able to create storage for the variable and initialize it at compile-time. Yet again, this rule is too restrictive for a language with constructors and destructors. Construction of @static@ local objects is implemented via an accompanying @static bool@ variable, which records whether the variable has already been constructed. A conditional branch checks the value of the companion @bool@, and if the variable has not yet been constructed then the object is constructed. The object's destructor is scheduled to be run when the program terminates using @atexit@, and the companion @bool@'s value is set so that subsequent invocations of the function will not reconstruct the object. The object's destructor is scheduled to be run when the program terminates using @atexit@ \footnote{When using the dynamic linker, it is possible to dynamically load and unload a shared library. Since glibc 2.2.3 \cite{atexit}, functions registered with @atexit@ within the shared library are called when unloading the shared library. As such, static local objects can be destructed using this mechanism even in shared libraries on Linux systems.}, and the companion @bool@'s value is set so that subsequent invocations of the function do not reconstruct the object. Since the parameter to @atexit@ is a parameter-less function, some additional tweaking is required. 
First, the @static@ variable must be hoisted up to global scope and uniquely renamed to prevent name clashes with other global objects. Finally, the newly generated function is registered with @atexit@, instead of registering the destructor directly. Since @atexit@ calls functions in the reverse order in which they are registered, @static@ local variables are guaranteed to be destructed in the reverse order that they are constructed, which may differ between multiple executions of the same program. Extending the previous example \begin{cfacode} \end{cfacode} \subsection{Constructor Expressions} In \CFA, it is possible to use a constructor as an expression. Like other operators, the function name @?{}@ matches its operator syntax. For example, @(&x){}@ calls the default constructor on the variable @x@, and produces @&x@ as a result. The significance of constructors as expressions rather than as statements is that the result of a constructor expression can be used as part of a larger expression. A key example is the use of constructor expressions to initialize the result of a call to standard C routine @malloc@. \begin{cfacode} struct X { ... }; void ?{}(X *, double); X * x = malloc(sizeof(X)){ 1.5 }; \end{cfacode} In this example, @malloc@ dynamically allocates storage and initializes it using a constructor, all before assigning it into the variable @x@. If this extension is not present, constructing dynamically allocated objects is much more cumbersome, requiring separate initialization of the pointer and initialization of the pointed-to memory. \begin{cfacode} X * x = malloc(sizeof(X)); x{ 1.5 }; \end{cfacode} Not only is this verbose, but it is also more error prone, since this form allows maintenance code to easily sneak in between the initialization of @x@ and the initialization of the memory that @x@ points to. 
This feature is implemented via a transformation producing the value of the first argument of the constructor, since constructors do not themselves have a return value.
Since this transformation results in two instances of the subexpression, care is taken to allocate a temporary variable to hold the result of the subexpression in the case where the subexpression may contain side effects.
The previous example generates the following code.
\begin{cfacode}
struct X *_tmp_cp_ret0;
struct X *_tmp_ctor;
struct X *x = (?{}((_tmp_ctor=((_tmp_cp_ret0=malloc(sizeof(struct X))), _tmp_cp_ret0)), 1.5), _tmp_ctor);
\end{cfacode}
It should be noted that this technique is not exclusive to @malloc@, and allows a user to write a custom allocator that can be idiomatically used in much the same way as a constructed @malloc@ call.
It is also possible to use operator syntax with destructors.
Unlike constructors, operator syntax with destructors is a statement and thus does not produce a value, since the destructed object is invalidated by the use of a destructor.
For example, \lstinline!^(&x){}! calls the destructor on the variable @x@.
\subsection{Polymorphism}
As mentioned in section \ref{sub:polymorphism}, \CFA currently has 3 type-classes that are used to designate polymorphic data types: @otype@, @dtype@, and @ftype@.
In previous versions of \CFA, @otype@ was syntactic sugar for @dtype@ with known size/alignment information and an assignment function.
That is,
\begin{cfacode}
forall(otype T)
void f(T);
\end{cfacode}
was equivalent to
\begin{cfacode}
forall(dtype T | sized(T) | { T ?=?(T *, T); })
void f(T);
\end{cfacode}
This allows easily specifying constraints that are common to all complete object types.
Now that \CFA has constructors and destructors, more of a complete object's behaviour can be specified than was previously possible.
As such, @otype@ has been augmented to include assertions for a default constructor, copy constructor, and destructor.
That is, the previous example is now equivalent to
\begin{cfacode}
forall(dtype T | sized(T) |
{ T ?=?(T *, T);
void ?{}(T *);
void ?{}(T *, T);
void ^?{}(T *);
})
void f(T);
\end{cfacode}
This allows @f@'s body to create and destroy objects of type @T@, and pass objects of type @T@ as arguments to other functions, following the normal \CFA rules.
A point of note here is that objects can be missing default constructors (and eventually other functions through deleted functions), so it is important for \CFA programmers to think carefully about the operations needed by their function, so as not to over-constrain the acceptable parameter types.
• ## doc/rob_thesis/intro.tex

 ra0fc78a \section{\CFA Background} \label{s:background} \CFA is a modern extension to the C programming language. \CFA \footnote{Pronounced C-for-all'', and written \CFA or Cforall.} is a modern non-object-oriented extension to the C programming language. As it is an extension of C, there is already a wealth of existing C code and principles that govern the design of the language. Among the goals set out in the original design of \CFA, four points stand out \cite{Bilson03}. Therefore, these design principles must be kept in mind throughout the design and development of new language features. In order to appeal to existing C programmers, great care must be taken to ensure that new features naturally feel like C. The remainder of this section describes some of the important new features that currently exist in \CFA, to give the reader the necessary context in which the new features presented in this thesis must dovetail. % TODO: harmonize with? The remainder of this section describes some of the important new features that currently exist in \CFA, to give the reader the necessary context in which the new features presented in this thesis must dovetail. \subsection{C Background} A a1 = { 1, .y:7, 6 }; A a2[4] = { [2]:a0, [0]:a1, { .z:3 } }; // equvialent to // equivalent to // A a0 = { 0, 8, 0, 1 }; // A a1 = { 1, 0, 7, 6 }; Designations allow specifying the field to initialize by name, rather than by position. Any field not explicitly initialized is initialized as if it had static storage duration \cite[p.~141]{C11}. A designator specifies the current object for initialization, and as such any undesignated subobjects pick up where the last initialization left off. For example, in the initialization of @a1@, the initializer of @y@ is @7@, and the unnamed initializer @6@ initializes the next subobject, @z@. Later initializers override earlier initializers, so a subobject for which there is more than one initializer is only initailized by its last initializer. 
This can be seen in the initialization of @a0@, where @x@ is designated twice, and thus initialized to @8@. Note that in \CFA, designations use a colon separator, rather than an equals sign as in C. A designator specifies the current object for initialization, and as such any undesignated sub-objects pick up where the last initialization left off. For example, in the initialization of @a1@, the initializer of @y@ is @7@, and the unnamed initializer @6@ initializes the next sub-object, @z@. Later initializers override earlier initializers, so a sub-object for which there is more than one initializer is only initialized by its last initializer. These semantics can be seen in the initialization of @a0@, where @x@ is designated twice, and thus initialized to @8@. Note that in \CFA, designations use a colon separator, rather than an equals sign as in C, because this syntax is one of the few places that conflicts with the new language features. C also provides \emph{compound literal} expressions, which provide a first-class mechanism for creating unnamed objects. There are times when a function should logically return multiple values. Since a function in standard C can only return a single value, a programmer must either take in additional return values by address, or the function's designer must create a wrapper structure t0 package multiple return-values. Since a function in standard C can only return a single value, a programmer must either take in additional return values by address, or the function's designer must create a wrapper structure to package multiple return-values. \begin{cfacode} int f(int * ret) {        // returns a value through parameter ret \end{cfacode} The former solution is awkward because it requires the caller to explicitly allocate memory for $n$ result variables, even if they are only temporary values used as a subexpression, or even not used at all. The latter approach: \begin{cfacode} struct A { ... res3.x ... res3.y ... 
// use result values \end{cfacode} The latter approach requires the caller to either learn the field names of the structure or learn the names of helper routines to access the individual return values. requires the caller to either learn the field names of the structure or learn the names of helper routines to access the individual return values. Both solutions are syntactically unnatural. In \CFA, it is possible to directly declare a function returning mutliple values. This provides important semantic information to the caller, since return values are only for output. \begin{cfacode} [int, int] f() {       // don't need to create a new type In \CFA, it is possible to directly declare a function returning multiple values. This extension provides important semantic information to the caller, since return values are only for output. \begin{cfacode} [int, int] f() {       // no new type return [123, 37]; } \end{cfacode} However, the ability to return multiple values requires a syntax for accepting the results from a function. However, the ability to return multiple values is useless without a syntax for accepting the results from the function. In standard C, return values are most commonly assigned directly into local variables, or are used as the arguments to another function call. \CFA allows both of these contexts to accept multiple return values. g(f());             // selects (2) \end{cfacode} In this example, the only possible call to @f@ that can produce the two @int@s required by @g@ is the second option. A similar reasoning holds for assigning into multiple variables. In this example, the only possible call to @f@ that can produce the two @int@s required for assigning into the variables @x@ and @y@ is the second option. A similar reasoning holds calling the function @g@. In \CFA, overloading also applies to operator names, known as \emph{operator overloading}. bool ?}). Finally, \CFA also permits overloading variable identifiers. 
This feature is not available in \CC. \begin{cfacode} % TODO: pick something better than x? max, zero, one? \begin{cfacode} struct Rational { int numer, denom; }; int x = 3;               // (1) In this example, there are three definitions of the variable @x@. Based on the context, \CFA attempts to choose the variable whose type best matches the expression context. When used judiciously, this feature allows names like @MAX@, @MIN@, and @PI@ to apply across many types. Finally, the values @0@ and @1@ have special status in standard C. } \end{cfacode} Every if statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. Every if- and iteration-statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. Due to these rewrite rules, the values @0@ and @1@ have the types \zero and \one in \CFA, which allow for overloading various operations that connect to @0@ and @1@ \footnote{In the original design of \CFA, @0@ and @1@ were overloadable names \cite[p.~7]{cforall}.}. The types \zero and \one have special built in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal. The types \zero and \one have special built-in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal. \begin{cfacode} // lvalue is similar to returning a reference in C++ This capability allows specifying the same set of assertions in multiple locations, without the repetition and likelihood of mistakes that come with manually writing them out for each function declaration. 
An interesting application of return-type resolution and polymorphism is with type-safe @malloc@. \begin{cfacode} forall(dtype T | sized(T)) T * malloc() { return (T*)malloc(sizeof(T)); // call C malloc } int * x = malloc();     // malloc(sizeof(int)) double * y = malloc();  // malloc(sizeof(double)) struct S { ... }; S * s = malloc();       // malloc(sizeof(S)) \end{cfacode} The built-in trait @sized@ ensures that size and alignment information for @T@ is available in the body of @malloc@ through @sizeof@ and @_Alignof@ expressions respectively. In calls to @malloc@, the type @T@ is bound based on call-site information, allowing \CFA code to allocate memory without the potential for errors introduced by manually specifying the size of the allocated block. \section{Invariants} % TODO: discuss software engineering benefits of ctor/dtors: {pre/post} conditions, invariants % an important invariant is the state of the environment (memory, resources) % some objects pass their contract to the object user An \emph{invariant} is a logical assertion that true for some duration of a program's execution. An \emph{invariant} is a logical assertion that is true for some duration of a program's execution. Invariants help a programmer to reason about code correctness and prove properties of programs. In object-oriented programming languages, type invariants are typically established in a constructor and maintained throughout the object's lifetime. This is typically achieved through a combination of access control modifiers and a restricted interface. These assertions are typically achieved through a combination of access control modifiers and a restricted interface. Typically, data which requires the maintenance of an invariant is hidden from external sources using the \emph{private} modifier, which restricts reads and writes to a select set of trusted routines, including member functions. 
It is these trusted routines that perform all modifications to internal data in a way that is consistent with the invariant, by ensuring that the invariant holds true at the end of the routine call. In C, the @assert@ macro is often used to ensure invariants are true. Using @assert@, the programmer can check a condition and abort execution if the condition is not true. This is a powerful tool that forces the programmer to deal with logical inconsistencies as they occur. This powerful tool forces the programmer to deal with logical inconsistencies as they occur. For production, assertions can be removed by simply defining the preprocessor macro @NDEBUG@, making it simple to ensure that assertions are 0-cost for a performance intensive application. \begin{cfacode} \end{dcode} The D compiler is able to assume that assertions and invariants hold true and perform optimizations based on those assumptions. An important invariant is the state of the execution environment, including the heap, the open file table, the state of global variables, etc. Since resources are finite, it is important to ensure that objects clean up properly when they are finished, restoring the execution environment to a stable state so that new objects can reuse resources. Note, these invariants are internal to the type's correct behaviour. Types also have external invariants with the state of the execution environment, including the heap, the open-file table, the state of global variables, etc. Since resources are finite and shared (concurrency), it is important to ensure that objects clean up properly when they are finished, restoring the execution environment to a stable state so that new objects can reuse resources. \section{Resource Management} The program stack grows and shrinks automatically with each function call, as needed for local variables. 
However, whenever a program needs a variable to outlive the block it is created in, the storage must be allocated dynamically with @malloc@ and later released with @free@.
This pattern is extended to more complex objects, such as files and sockets, which also outlive the block where they are created, but at their core is resource management.
Once allocated storage escapes a block, the responsibility for deallocating the storage is not specified in a function's type, that is, that the return value is owned by the caller.
This pattern is extended to more complex objects, such as files and sockets, which can also outlive the block where they are created, and thus require their own resource management.
Once allocated storage escapes\footnote{In garbage collected languages, such as Java, escape analysis \cite{Choi:1999:EAJ:320385.320386} is used to determine when dynamically allocated objects are strictly contained within a function, which allows the optimizer to allocate them on the stack.} a block, the responsibility for deallocating the storage is not specified in a function's type, that is, the return value is owned by the caller.
This implicit convention is provided only through documentation about the expectations of functions.
In other languages, a hybrid situation exists where resources escape the allocation block, but ownership is precisely controlled by the language.
This pattern requires a strict interface and protocol for a data structure, where the protocol consists of a pre-initialization and a post-termination call, and all intervening access is done via interface routines.
This pattern requires a strict interface and protocol for a data structure, consisting of a pre-initialization and a post-termination call, and all intervening access is done via interface routines.
This kind of encapsulation is popular in object-oriented programming languages, and like the stack, it takes care of a significant portion of resource management cases.
For example, \CC directly supports this pattern through class types and an idiom known as RAII \footnote{Resource Acquisition is Initialization} by means of constructors and destructors.
On the other hand, destructors provide a simple mechanism for tearing down an object and resetting the environment in which the object lived.
RAII ensures that if all resources are acquired in a constructor and released in a destructor, there are no resource leaks, even in exceptional circumstances.
A type with at least one non-trivial constructor or destructor is henceforth referred to as a \emph{managed type}.
In the context of \CFA, a non-trivial constructor is either a user-defined constructor or an auto-generated constructor that calls a non-trivial constructor.
For the remaining resource ownership cases, programmers must follow a brittle, explicit protocol for freeing resources or an implicit protocol implemented via the programming language.
In garbage collected languages, such as Java, resources are largely managed by the garbage collector.
There are many kinds of resources that the garbage collector does not understand, such as sockets, open files, and database connections.
In particular, Java supports \emph{finalizers}, which are similar to destructors.
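Before examining Java's finalizers further, the RAII pattern just described can be made concrete with a minimal \CC sketch (illustrative only, not code from the thesis; the @File@ wrapper is invented for this example): the constructor acquires the resource and the destructor releases it, even when an exception unwinds the stack.

```cpp
#include <cassert>
#include <cstdio>
#include <stdexcept>

// Minimal RAII wrapper: the constructor acquires the file handle,
// the destructor releases it, on both normal and exceptional exit.
class File {
    std::FILE* f;
public:
    explicit File(const char* name, const char* mode)
        : f(std::fopen(name, mode)) {
        if (!f) throw std::runtime_error("open failed");
    }
    ~File() { if (f) std::fclose(f); }  // release on every scope exit
    File(const File&) = delete;         // managed type: no implicit copies
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f; }
};
```

Because the destructor runs at every scope exit, a local @File@ cannot leak its handle; this is the guarantee that \CFA imports by adding constructors and destructors.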
Sadly, finalizers are only guaranteed to be called before an object is reclaimed by the garbage collector \cite[p.~373]{Java8}, which may not happen if memory use is not contentious.
Due to operating-system resource limits, this is unacceptable for many long running programs.
Instead, the paradigm in Java requires programmers to manually keep track of all resources \emph{except} memory, leading many novices and experts alike to forget to close files, etc.
Complicating the picture, uncaught exceptions can cause control flow to change dramatically, leaking a resource that appears on first glance to be released.
\begin{javacode}
void write(String filename, String msg) throws Exception {
}
\end{javacode}
Any line in this program can throw an exception, which leads to a profusion of finally blocks around many function bodies, since it is not always clear when an exception may be thrown.
\begin{javacode}
public void write(String filename, String msg) throws Exception {
\end{javacode}
In Java 7, a new \emph{try-with-resources} construct was added to alleviate most of the pain of working with resources, but ultimately it still places the burden squarely on the user rather than on the library designer.
Furthermore, for complete safety this pattern requires nested objects to be declared separately, otherwise resources that can throw an exception on close can leak nested resources \cite{TryWithResources}.
\begin{javacode}
public void write(String filename, String msg) throws Exception {
try (  // try-with-resources
FileOutputStream out = new FileOutputStream(filename);
FileOutputStream log = new FileOutputStream("log.txt");
}
\end{javacode}
On the other hand, the Java compiler generates more code if more resources are declared, meaning that users must be more familiar with each type and library designers must provide better documentation.
Variables declared as part of a try-with-resources statement must conform to the @AutoCloseable@ interface, and the compiler implicitly calls @close@ on each of the variables at the end of the block.
Depending on when the exception is raised, both @out@ and @log@ are null, @log@ is null, or both are non-null; therefore, the cleanup for these variables at the end is appropriately guarded and conditionally executed to prevent null-pointer exceptions.
While Rust \cite{Rust} does not enforce the use of a garbage collector, it does provide a manual memory management environment, with a strict ownership model that automatically frees allocated memory and prevents common memory management errors.
In particular, a variable has ownership over its associated value, which is freed automatically when the owner goes out of scope.
Furthermore, values are \emph{moved} by default on assignment, rather than copied, which invalidates the previous variable binding.
\begin{rustcode}
struct S { x: i32 }
let s = S { x: 123 };
let z = s;           // move, invalidate s
println!("{}", s.x); // error, s has been moved
\end{rustcode}
Types can be made copyable by implementing the @Copy@ trait.
Rust allows multiple unowned views into an object through references, also known as borrows, provided that a reference does not outlive its referent.
A mutable reference is allowed only if it is the only reference to its referent, preventing data race errors and iterator invalidation errors.
\begin{rustcode}
let mut x = 10;
{
let y = &x;
let z = &x;
println!("{} {}", y, z); // prints 10 10
}
{
let y = &mut x;
// let z1 = &x;     // not allowed, have mutable reference
// let z2 = &mut x; // not allowed, have mutable reference
*y = 5;
println!("{}", y); // prints 5
}
println!("{}", x); // prints 5
\end{rustcode}
Since references are not owned, they do not release resources when they go out of scope.
There is no runtime cost imposed on these restrictions, since they are enforced at compile-time.
Rust provides RAII through the @Drop@ trait, which allows arbitrary code to execute when an object goes out of scope, so that Rust programs can automatically clean up auxiliary resources much like a \CC program.
\begin{rustcode}
struct S { name: &'static str }
impl Drop for S {  // RAII for S
fn drop(&mut self) { println!("dropped {}", self.name); }
}
{
let x = S { name: "x" };
let y = S { name: "y" };
} // prints "dropped y" "dropped x"
\end{rustcode}
% D has constructors and destructors that are worth a mention (under classes) https://dlang.org/spec/spec.html
The programming language D also manages resources with constructors and destructors \cite{D}.
In D, @struct@s are stack allocated and managed via scoping like in \CC, whereas @class@es are managed automatically by the garbage collector.
Like Java, using the garbage collector means that destructors are called indeterminately, requiring the use of finally statements to ensure dynamically allocated resources that are not managed by the garbage collector, such as open files, are cleaned up.
Since D supports RAII, it is possible to use the same techniques as in \CC to ensure that resources are released in a timely manner.
Finally, D provides a scope guard statement, which allows an arbitrary statement to be executed at normal scope exit with \emph{success}, at exceptional scope exit with \emph{failure}, or at normal and exceptional scope exit with \emph{exit}. % https://dlang.org/spec/statement.html#ScopeGuardStatement
It has been shown that the \emph{exit} form of the scope guard statement can be implemented in a library in \CC \cite{ExceptSafe}.
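The cited result, that the \emph{exit} form of the scope guard can be provided as a \CC library, can be sketched with the well-known ScopeGuard idiom (an illustrative reconstruction; the names @ScopeGuard@ and @make_guard@ are chosen here and not taken from the cited implementation):

```cpp
#include <cassert>
#include <utility>

// Runs the stored callable at scope exit, on both the normal and the
// exceptional path -- the "exit" form of D's scope guard statement.
template<typename F>
class ScopeGuard {
    F cleanup;
    bool active;
public:
    explicit ScopeGuard(F f) : cleanup(std::move(f)), active(true) {}
    ScopeGuard(ScopeGuard&& other)
        : cleanup(std::move(other.cleanup)), active(other.active) {
        other.active = false;  // ownership of the cleanup action moves
    }
    ~ScopeGuard() { if (active) cleanup(); }
    void dismiss() { active = false; }  // cancel the cleanup action
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

template<typename F>
ScopeGuard<F> make_guard(F f) { return ScopeGuard<F>(std::move(f)); }
```

A caller writes something like @auto g = make_guard([&]{ /* release */ });@ and the statement runs at both normal and exceptional scope exit; conditionally calling @dismiss@ is one way the \emph{success} and \emph{failure} variants can be layered on top.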
To provide managed types in \CFA, new kinds of constructors and destructors are added to \CFA and discussed in Chapter 2.
\section{Tuples}
\label{s:Tuples}
In mathematics, tuples are finite-length sequences which, unlike sets, are ordered and allow duplicate elements.
In programming languages, tuples provide fixed-sized heterogeneous lists of elements.
Many programming languages have tuple constructs, such as SETL, \KWC, ML, and Scala.
Adding tuples to \CFA has previously been explored by Esteves \cite{Esteves04}.
The design of tuples in \KWC took much of its inspiration from SETL \cite{SETL}.
SETL is a high-level mathematical programming language, with tuples being one of the primary data types.
Tuples in SETL allow a number of operations, including subscripting, dynamic expansion, and multiple assignment.
\begin{cppcode}
tuple<int, int, int> triple(10, 20, 30);
get<1>(triple); // access component 1 => 20
tuple f();
Tuples are simple data structures with few specific operations.
In particular, it is possible to access a component of a tuple using @std::get@.
Another interesting feature is @std::tie@, which creates a tuple of references, allowing assignment of the results of a tuple-returning function into separate local variables, without requiring a temporary variable.
Tuples also support lexicographic comparisons, making it simple to write aggregate comparators using @std::tie@.
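For example, a multi-field comparator falls out of @std::tie@'s lexicographic tuple comparison (a standard \CC idiom, shown here for illustration; the @Person@ type is invented for this sketch):

```cpp
#include <cassert>
#include <string>
#include <tuple>

struct Person { std::string last, first; int age; };

// std::tie builds a tuple of references; tuple operator< compares
// lexicographically, giving a multi-field ordering in one expression.
bool operator<(const Person& a, const Person& b) {
    return std::tie(a.last, a.first, a.age)
         < std::tie(b.last, b.first, b.age);
}
```

The comparison falls through equal fields and decides on the first differing one, exactly the behaviour a hand-written chained comparator would need several branches to express.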
There is a proposal for \CCseventeen called \emph{structured bindings} \cite{StructuredBindings} that introduces new syntax to eliminate the need to pre-declare variables and use @std::tie@ for binding the results from a function call.
\begin{cppcode}
tuple f();
Structured bindings allow unpacking any struct with all public non-static data members into fresh local variables.
The use of @&@ allows declaring new variables as references, which is something that cannot be done with @std::tie@, since \CC references do not support rebinding.
This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism.
Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables.
Like \CC, D provides tuples through a library variadic template struct.
In D, it is possible to name the fields of a tuple type, which creates a distinct type.
% http://dlang.org/phobos/std_typecons.html
\begin{dcode}
Tuple!(float, "x", float, "y") point2D;
Tuple!(float, float) float2;  // different type from point2D
point2D[0]; // access first element
The @expand@ method produces the components of the tuple as a list of separate values, making it possible to call a function that takes $N$ arguments using a tuple with $N$ components.
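\CC gained an analogous facility in \CCseventeen: @std::apply@ unpacks a tuple's components into a function's argument list, much like D's @expand@ (illustrative sketch; the @add3@ example function is invented here):

```cpp
#include <cassert>
#include <tuple>

// C++17 analogue of D's expand: std::apply spreads a tuple's
// components across the parameter list of a callable.
int add3(int a, int b, int c) { return a + b + c; }

int demo() {
    std::tuple<int, int, int> t{1, 2, 3};
    return std::apply(add3, t);  // calls add3(1, 2, 3)
}
```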
Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML \cite{sml}.
A function in SML always accepts exactly one argument.
There are two ways to mimic multiple argument functions: the first through currying and the second by accepting tuple arguments.
Tuples are a foundational tool in SML, allowing the creation of arbitrarily complex structured data types.
Scala, like \CC, provides tuple types through the standard library \cite{Scala}.
Scala provides tuples of size 1 through 22 inclusive through generic data structures.
Tuples support named access and subscript access, among a few other operations.
\end{scalacode}
In Scala, tuples are primarily used as simple data structures for carrying around multiple values or for returning multiple values from a function.
The 22-element restriction is an odd and arbitrary choice, but in practice it does not cause problems since large tuples are uncommon.
Subscript access is provided through the @productElement@ method, which returns a value of the top-type @Any@, since it is impossible to receive a more precise type from a general subscripting method due to type erasure.
The disparity between named access beginning at @_1@ and subscript access starting at @0@ is likewise an oddity, but subscript access is typically avoided since it discards type information.
\Csharp also has tuples, but has similarly strange limitations, allowing tuples of size up to 7 components.
% https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx
The officially supported workaround for this shortcoming is to nest tuples in the 8th component.
\Csharp allows accessing a component of a tuple by using the field @Item$N$@ for components 1 through 7, and @Rest@ for the nested tuple.
In Python \cite{Python}, tuples are immutable sequences that provide packing and unpacking operations.
While the tuple itself is immutable, and thus does not allow the assignment of components, there is nothing preventing a component from being internally mutable.
The components of a tuple can be accessed by unpacking into multiple variables, indexing, or via field name, like D.
Tuples support multiple assignment through a combination of packing and unpacking, in addition to the common sequence operations.
Swift \cite{Swift}, like D, provides named tuples, with components accessed by name, index, or via extractors.
Tuples are primarily used for returning multiple values from a function.
In Swift, @Void@ is an alias for the empty tuple, and there are no single element tuples.
Tuples comparable to those described above are added to \CFA and discussed in Chapter 3.
\section{Variadic Functions}
printf("%d %g %c %s", 10, 3.5, 'X', "a string");
\end{cfacode}
Through the use of a format string, C programmers can communicate argument type information to @printf@, allowing them to print any of the standard C data types.
Still, @printf@ is extremely limited, since the format codes are specified by the C standard, meaning users cannot define their own format codes to extend @printf@ for new data types or new formatting rules.
A parameter pack matches 0 or more elements, which can be types or expressions depending on the context.
Like other templates, variadic template functions rely on an implicit set of constraints on a type, in this example a @print@ routine.
That is, it is possible to use the @f@ routine on any type provided there is a corresponding @print@ routine, making variadic templates fully open to extension, unlike variadic functions in C.
Recent \CC standards (\CCfourteen, \CCseventeen) expand on the basic premise by allowing variadic template variables and providing convenient expansion syntax to remove the need for recursion in some cases, amongst other things.
Unfortunately, Java's use of nominal inheritance means that types must explicitly inherit from classes or interfaces in order to be considered a subclass.
The combination of these two issues greatly restricts the usefulness of variadic functions in Java.
Type-safe variadic functions are added to \CFA and discussed in Chapter 4.
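The @f@/@print@ pattern described above can be sketched in \CC as follows (an illustrative reconstruction of the recursive variadic-template idiom; the string-returning signatures are a choice made here for easy checking, not taken from the text):

```cpp
#include <cassert>
#include <string>

// One print overload per printable type -- the implicit set of
// constraints that makes the variadic template open to extension.
std::string print(int x)                { return std::to_string(x); }
std::string print(char c)               { return std::string(1, c); }
std::string print(const std::string& s) { return s; }

// Recursive variadic template: handle the head, recurse on the tail.
std::string f() { return ""; }  // base case: no arguments left
template<typename T, typename... Rest>
std::string f(const T& head, const Rest&... rest) {
    return print(head) + f(rest...);
}
```

Adding support for a new type requires only a new @print@ overload, with no change to @f@; this is the extensibility that C's @printf@ format codes cannot provide.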
• ## doc/rob_thesis/thesis-frontpgs.tex

 ra0fc78a \begin{center}\textbf{Abstract}\end{center} % \CFA is a modern extension to the C programming language. % Some of the features of \CFA include parametric polymorphism, overloading, and . TODO \CFA is a modern, non-object-oriented extension of the C programming language. This thesis introduces two fundamental language features: tuples and constructors/destructors, as well as improved variadic functions. While these features exist in prior programming languages, the contribution of this work is engineering these features into a highly complex type system. \cleardoublepage
• ## doc/rob_thesis/thesis.tex

ra0fc78a
\documentclass[letterpaper,12pt,titlepage,oneside,final]{book}
\usepackage[T1]{fontenc}                                % allow Latin1 (extended ASCII) characters
\usepackage{textcomp}
% \usepackage[utf8]{inputenc}
% \usepackage[latin1]{inputenc}
\usepackage{fullpage,times,comment}
% \usepackage{epic,eepic}
\interfootnotelinepenalty=10000
% For PDF, suitable for double-sided printing, change the PrintVersion variable below
% to "true" and use this \documentclass line instead of the one above:
%\documentclass[letterpaper,12pt,titlepage,openright,twoside,final]{book}
% Some LaTeX commands I define for my own nomenclature.
\input{tuples}
\input{variadic}
\input{conclusions}
\addcontentsline{toc}{chapter}{\textbf{References}}
\bibliography{cfa,thesis}
% Tip 5: You can create multiple .bib files to organize your references.
% Just list them all in the \bibliography command, separated by commas (no spaces).
• ## doc/rob_thesis/tuples.tex

 ra0fc78a \chapter{Tuples} %====================================================================== \section{Introduction} % TODO: named return values are not currently implemented in CFA - tie in with named tuples? (future work) % TODO: no passing input parameters by assignment, instead will have reference types => this is not a very C-like model and greatly complicates syntax for likely little gain (and would cause confusion with already supported return-by-rerefence) % TODO: tuples are allowed in expressions, exact meaning is defined by operator overloading (e.g. can add tuples). An important caveat to note is that it is currently impossible to allow adding two triples but prevent adding a pair with a quadruple (single flattening/structuring conversions are implicit, only total number of components matters). May be able to solve this with more nuanced conversion rules (future work) % TODO: benefits (conclusion) by Till: reduced number of variables and statements; no specified order of execution for multiple assignment (more optimzation freedom); can store parameter lists in variable; MRV routines (natural code); more convenient assignment statements; simple and efficient access of record fields; named return values more legible and efficient in use of storage \section{Multiple-Return-Value Functions} This restriction results in code which emulates functions with multiple return values by \emph{aggregation} or by \emph{aliasing}. In the former situation, the function designer creates a record type that combines all of the return values into a single type. For example, consider a function returning the most frequently occuring letter in a string, and its frequency. % TODO: consider simplifying the example! %   Two things I like about this example: %   * it uses different types to illustrate why an array is insufficient (this is not necessary, but is nice) %   * it's complicated enough to show the uninitialized pitfall that exists in the aliasing example. 
For example, consider a function returning the most frequently occurring letter in a string, and its frequency.
This example is complex enough to illustrate that an array is insufficient, since arrays are homogeneous, and demonstrates a potential pitfall that exists with aliasing.
\begin{cfacode}
struct mf_ret {
const char * str = "hello world";
char ch;                            // pre-allocate return value
int freq = most_frequent(str, &ch); // pass return value as out parameter
printf("%s -- %d %c\n", str, freq, ch);
\end{cfacode}
Notably, using this approach, the caller is directly responsible for allocating storage for the additional temporary return values, which complicates the call site with a sequence of variable declarations leading up to the call.
Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the routine signature whether the callee expects such a parameter to be initialized before the call.
Furthermore, while many C routines that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, there are many C routines that are not null-safe.
The expression resolution phase of the \CFA translator ensures that the correct form is used depending on the values being returned and the return type of the current function.
A multiple-returning function with return type @T@ can return any expression that is implicitly convertible to @T@.
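For contrast with the out-parameter version above, the same function shape can be written in \CC with a tuple return, which removes the uninitialized-parameter pitfall (an illustrative \CC analogue sketched here, not thesis code):

```cpp
#include <cassert>
#include <tuple>

// Returns both results at once, so the caller needs no out parameter
// and cannot forget to initialize one before the call.
std::tuple<int, char> most_frequent(const char* str) {
    int count[256] = {0};
    for (const char* p = str; *p; ++p)
        ++count[static_cast<unsigned char>(*p)];
    int best = 0;
    char ch = '\0';
    for (int c = 0; c < 256; ++c)
        if (count[c] > best) { best = count[c]; ch = static_cast<char>(c); }
    return {best, ch};
}
```

A caller unpacks both results at once, e.g. @std::tie(freq, ch) = most_frequent(str)@, mirroring the multiple-return call style that \CFA provides directly.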
Using the running example, the @most_frequent@ function can be written using multiple return values as follows:
\begin{cfacode}
[int, char] most_frequent(const char * str) {
}
\end{cfacode}
This approach provides the benefits of compile-time checking for appropriate return statements as in aggregation, but without the required verbosity of declaring a new named type, which precludes the bug seen with out parameters.
The addition of multiple-return-value functions necessitates a syntax for accepting multiple values at the call-site.
In this case, there is only one option for a function named @most_frequent@ that takes a string as input.
This function returns two values, one @int@ and one @char@.
There are four options for a function named @process@, but only two that accept two arguments, and of those the best match is (3), which is also an exact match.
This expression first calls @most_frequent("hello world")@, which produces the values @3@ and @'l'@, which are fed directly to the first and second parameters of (3), respectively.
The previous expression has 3 \emph{components}.
Each component in a tuple expression can be any \CFA expression, including another tuple expression.
% TODO: Tuple expressions can appear anywhere that any other expression can appear (...?)
The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization.
It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used.
Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}.
\subsection{Tuple Variables}
These variables can be used in any of the contexts where a tuple expression is allowed, such as in the @printf@ function call.
As in the @process@ example, the components of the tuple value are passed as separate parameters to @printf@, allowing very simple printing of tuple expressions.
One way to access the individual components is with a simple assignment, as in previous examples.
\begin{cfacode}
int freq;
\label{s:TupleAssignment}
An assignment where the left side of the assignment operator has a tuple type is called tuple assignment.
There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{Multiple} and \emph{Mass} Assignment, respectively.
\begin{cfacode}
int x;
A mass assignment assigns the value $R$ to each $L_i$.
For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression.
These semantics differ from C cascading assignment (e.g.
@a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@.
On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well.
These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted.
These semantics are a change from the original tuple design in \KWC \cite{Till89}, wherein tuple assignment was a statement that allows cascading assignments as a special case.
Restricting tuple assignment to statements was an attempt to fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position.
While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility.
Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user.
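The conversion difference described above can be checked directly in \CC (the mass assignment @[y, x] = 3.14@ is simulated here by two independent assignments, which is exactly its defined semantics; the helper names are invented for this sketch):

```cpp
#include <cassert>
#include <utility>

// C cascading assignment: the value funnels through the int, so the
// fractional part is lost before it ever reaches the double.
std::pair<double, int> cascading() {
    int x; double y;
    y = x = 3.14;          // x == 3, then y == 3.0
    return {y, x};
}

// CFA mass assignment [y, x] = 3.14 converts the source expression
// independently for each target, preserving precision where possible.
std::pair<double, int> mass() {
    int x; double y;
    y = 3.14; x = 3.14;    // y == 3.14, x == 3
    return {y, x};
}
```

@cascading()@ yields @(3.0, 3)@ because the value funnels through the @int@, while @mass()@ yields @(3.14, 3)@ because each target converts @3.14@ on its own.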
void ?{}(S *, S);      // (4)
[S, S] x = [3, 6.28];  // uses (2), (3), specialized constructors
[S, S] y;              // uses (1), (1), default constructor
[S, S] z = x.0;        // uses (4), (4), copy constructor
\end{cfacode}
In this example, @x@ is initialized by the multiple constructor calls @?{}(&x.0, 3)@ and @?{}(&x.1, 6.28)@, while @y@ is initialized by two default constructor calls @?{}(&y.0)@ and @?{}(&y.1)@.
@z@ is initialized by mass copy constructor calls @?{}(&z.0, x.0)@ and @?{}(&z.1, x.0)@.
Finally, @x@, @y@, and @z@ are destructed, i.e. the calls @^?{}(&x.0)@, @^?{}(&x.1)@, @^?{}(&y.0)@, @^?{}(&y.1)@, @^?{}(&z.0)@, and @^?{}(&z.1)@.
S s = t;
\end{cfacode}
The initialization of @s@ with @t@ works by default because @t@ is flattened into its components, which satisfies the generated field constructor @?{}(S *, int, double)@ to initialize the first two values.
\section{Member-Access Tuple Expression}
Then the type of @a.[x, y, z]@ is @[T_x, T_y, T_z]@.
Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (e.g., rearrange components, drop components, duplicate components, etc.).
\begin{cfacode}
[int, int, long, double] x;
Since \CFA permits these tuple-access expressions using structures, unions, and tuples, \emph{member tuple expression} or \emph{field tuple expression} is more appropriate.
It is possible to extend member-access expressions further.
Currently, a member-access expression whose member is a name requires that the aggregate is a structure or union, while a constant integer member requires the aggregate to be a tuple.
In the interest of orthogonal design, \CFA could apply some meaning to the remaining combinations as well.
z.y;  // ???
\end{cfacode}
One possibility is for @s.1@ to select the second member of @s@.
Under this interpretation, it becomes possible to not only access members of a struct by name, but also by position.
Likewise, it seems natural to open this mechanism to enumerations as well, wherein the left side would be a type, rather than an expression.
One benefit of this interpretation is familiarity, since it is extremely reminiscent of tuple-index expressions.
On the other hand, it could be argued that this interpretation is brittle, in that changing the order of members or adding new members to a structure silently changes the meaning of existing positional accesses.
As for @z.y@, a natural interpretation would be to extend the meaning of member tuple expressions. This problem is less of a concern with tuples, since modifying a tuple affects only the code that directly uses the tuple, whereas modifying a structure has far reaching consequences for every instance of the structure. As for @z.y@, one interpretation is to extend the meaning of member tuple expressions. That is, currently the tuple must occur as the member, i.e. to the right of the dot. Allowing tuples to the left of the dot could distribute the member across the elements of the tuple, in much the same way that member tuple expressions distribute the aggregate across the member tuple. In this example, @z.y@ expands to @[z.0.y, z.1.y]@, allowing what is effectively a very limited compile-time field-sections map operation, where the argument must be a tuple containing only aggregates having a member named @y@. It is questionable how useful this would actually be in practice, since generally structures are not designed to have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions, to maximize the amount of overlap between different types. It is questionable how useful this would actually be in practice, since structures often do not have names in common with other structures, and further this could cause maintainability issues in that it encourages programmers to adopt very simple naming conventions to maximize the amount of overlap between different types. Perhaps more useful would be to allow arrays on the left side of the dot, which would likewise allow mapping a field access across the entire array, producing an array of the contained fields. The immediate problem with this idea is that C arrays do not carry around their size, which would make it impossible to use this extension for anything other than a simple stack allocated array. 
Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member access expressions versus member tuple expressions. Supposing this feature works as described, it would be necessary to specify an ordering for the expansion of member-access expressions versus member-tuple expressions. \begin{cfacode} struct { int x, y; }; \end{cfacode} Depending on exactly how the two tuples are combined, different results can be achieved. As such, a specific ordering would need to be imposed in order for this feature to be useful. Furthermore, this addition moves a member tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple. Ultimately, both of these extensions introduce complexity into the model, with relatively little peceived benefit. As such, a specific ordering would need to be imposed to make this feature useful. Furthermore, this addition moves a member-tuple expression's meaning from being clear statically to needing resolver support, since the member name needs to be distributed appropriately over each member of the tuple, which could itself be a tuple. A second possibility is for \CFA to have named tuples, as they exist in Swift and D. \begin{cfacode} typedef [int x, int y] Point2D; Point2D p1, p2; p1.x + p1.y + p2.x + p2.y; p1.0 + p1.1 + p2.0 + p2.1;  // equivalent \end{cfacode} In this simpler interpretation, a tuple type carries with it a list of possibly empty identifiers. This approach fits naturally with the named return-value feature, and would likely go a long way towards implementing it. Ultimately, the first two extensions introduce complexity into the model, with relatively little perceived benefit, and so were dropped from consideration. Named tuples are a potentially useful addition to the language, provided they can be parsed with a reasonable syntax. 
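The named-tuple trade-off can be sketched in C++ (this is an analogue, not \CFA): @std::tuple@ gives the positional access of @p1.0@, while a plain struct gives the named access of @p1.x@. The @Point2D@ names below are illustrative only.

```cpp
#include <tuple>

// Positional access, analogous to p1.0 + p1.1 in the tuple-index interpretation.
using Point2D = std::tuple<int, int>;

// Named access, analogous to the proposed named tuple [int x, int y].
struct NamedPoint2D { int x, y; };

int sum_positional(const Point2D & p) {
    return std::get<0>(p) + std::get<1>(p);  // access components by position
}

int sum_named(const NamedPoint2D & p) {
    return p.x + p.y;  // access components by name
}
```

Both functions compute the same result; the named form survives reordering of the components, which is the maintainability argument made above.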
\section{Casting}
In C, the cast operator is used to explicitly convert between types.
In \CFA, the cast operator has a secondary use, which is type ascription.
In \CFA, the cast operator has a secondary use, which is type ascription, since it forces the expression resolution algorithm to choose the lowest cost conversion to the target type.
That is, a cast can be used to select the type of an expression when it is ambiguous, as in the call to an overloaded function.
\begin{cfacode}
(int)f();  // choose (2)
\end{cfacode}
Since casting is a fundamental operation in \CFA, casts should be given a meaningful interpretation in the context of tuples.
Since casting is a fundamental operation in \CFA, casts need to be given a meaningful interpretation in the context of tuples.
Taking a look at standard C provides some guidance with respect to the way casts should work with tuples.
\begin{cfacode}[numbers=left]
void g();

(void)f();
(int)g();

(void)f();  // valid, ignore results
(int)g();   // invalid, void cannot be converted to int

struct A { int x; };
(struct A)f();
(struct A)f();  // invalid
\end{cfacode}
In C, line 4 is a valid cast, which calls @f@ and discards its result.
On the other hand, line 5 is invalid, because @g@ does not produce a result, so requesting an @int@ to materialize from nothing is nonsensical.
Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite{C11}.
For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts which have the same or fewer number of components may be valid.
Finally, line 8 is also invalid, because in C casts only provide conversion between scalar types \cite[p.~91]{C11}.
For consistency, this implies that any case wherein the number of components increases as a result of the cast is invalid, while casts that have the same or fewer number of components may be valid.
Formally, a cast to tuple type is valid when $T_n \leq S_m$, where $T_n$ is the number of components in the target type and $S_m$ is the number of components in the source type, and for each $i$ in $[0, n)$, $S_i$ can be cast to $T_i$. \end{cfacode} Note that due to the implicit tuple conversions, this function is not restricted to the addition of two triples. A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type which can bind to @T@, with a pairwise @?+?@ over @T@. For example, these expressions will also succeed and produce the same value. \begin{cfacode} ([x.0, x.1]) + ([x.2, 10, 20, 30]); x.0 + ([x.1, x.2, 10, 20, 30]); A call to this plus operator type checks as long as a total of 6 non-tuple arguments are passed after flattening, and all of the arguments have a common type that can bind to @T@, with a pairwise @?+?@ over @T@. For example, these expressions also succeed and produce the same value. \begin{cfacode} ([x.0, x.1]) + ([x.2, 10, 20, 30]);  // x + ([10, 20, 30]) x.0 + ([x.1, x.2, 10, 20, 30]);      // x + ([10, 20, 30]) \end{cfacode} This presents a potential problem if structure is important, as these three expressions look like they should have different meanings. Further, these calls can be made ambiguous by adding seemingly different functions. Furthermore, these calls can be made ambiguous by introducing seemingly different functions. \begin{cfacode} forall(otype T | { T ?+?(T, T); }) \end{cfacode} It is also important to note that these calls could be disambiguated if the function return types were different, as they likely would be for a reasonable implementation of @?+?@, since the return type is used in overload resolution. Still, this is a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate. 
It's possible that this could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions. Still, these semantics are a deficiency of the current argument matching algorithm, and depending on the function, differing return values may not always be appropriate. These issues could be rectified by applying an appropriate cost to the structuring and flattening conversions, which are currently 0-cost conversions. Care would be needed in this case to ensure that exact matches do not incur such a cost. \begin{cfacode} \end{cfacode} Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e. no implicit conversions are applied to assertion arguments). Until this point, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (i.e., no implicit conversions are applied to assertion arguments). This decision presents a conflict with the flexibility of tuples. \subsection{Assertion Inference} In the call to @f@, the second and third argument components are structured into a tuple argument. Expressions which may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. Expressions that may contain side effects are made into \emph{unique expressions} before being expanded by the flattening conversion. Each unique expression is assigned an identifier and is guaranteed to be executed exactly once. \begin{cfacode} g(h()); \end{cfacode} Interally, this is converted to Internally, this is converted to pseudo-\CFA \begin{cfacode} void g(int, double); [int, double] h(); let unq<0> = f() : g(unq<0>.0, unq<0>.1);  // notation? 
\end{cfacode} lazy [int, double] unq0 = h(); // deferred execution g(unq0.0, unq0.1);             // execute h() once \end{cfacode} That is, the function @h@ is evaluated lazily and its result is stored for subsequent accesses. Ultimately, unique expressions are converted into two variables and an expression. \begin{cfacode} [int, double] _unq0; g( (_unq0_finished_ ? _unq0 : (_unq0 = f(), _unq0_finished_ = 1, _unq0)).0, (_unq0_finished_ ? _unq0 : (_unq0 = f(), _unq0_finished_ = 1, _unq0)).1, (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).0, (_unq0_finished_ ? _unq0 : (_unq0 = h(), _unq0_finished_ = 1, _unq0)).1, ); \end{cfacode} Every subsequent evaluation of the unique expression then results in an access to the stored result of the actual expression. Currently, the \CFA translator has a very broad, imprecise definition of impurity, where any function call is assumed to be impure. This notion could be made more precise for certain intrinsic, autogenerated, and builtin functions, and could analyze function bodies when they are available to recursively detect impurity, to eliminate some unique expressions. It's possible that unique expressions could be exposed to the user through a language feature, but it's not immediately obvious that there is a benefit to doing so. Currently, the \CFA translator has a very broad, imprecise definition of impurity (side-effects), where every function call is assumed to be impure. This notion could be made more precise for certain intrinsic, auto-generated, and built-in functions, and could analyze function bodies, when they are available, to recursively detect impurity, to eliminate some unique expressions. It is possible that lazy evaluation could be exposed to the user through a lazy keyword with little additional effort. Tuple member expressions are recursively expanded into a list of member access expressions. 
x.[0, 1.[0, 2]];
\end{cfacode}
Which becomes
which becomes
\begin{cfacode}
[x.0, [x.1.0, x.1.2]];
\end{cfacode}
Tuple member expressions also take advantage of unique expressions in the case of possible impurity.
Tuple-member expressions also take advantage of unique expressions in the case of possible impurity.

Finally, the various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
});
\end{cfacode}
A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, e.g. in a unique expression.
A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, e.g., in a unique expression.
$N$ LHS variables are generated and constructed using the address of the tuple components, and a single RHS variable is generated to store the value of the RHS without any loss of precision.
A nested statement expression is generated that performs the individual assignments and constructs the return value using the results of the individual assignments.
The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
There are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this is not a new restriction.

\section{Variadic Functions}
% TODO: should this maybe be its own chapter?
C provides variadic functions through the manipulation of @va_list@ objects.
A variadic function is one which contains at least one parameter, followed by @...@ as the last token in the parameter list.
In particular, some form of \emph{argument descriptor} is needed to inform the function of the number of arguments and their types.
Two common argument descriptors are format strings and counter parameters.
It is important to note that both of these mechanisms are inherently redundant, because they require the user to specify information that the compiler knows explicitly.
This required repetition is error prone, because it is easy for the user to add or remove arguments without updating the argument descriptor.
In addition, C requires the programmer to hard code all of the possible expected types.
As a result, it is cumbersome to write a function that is open to extension.
For example, a simple function which sums $N$ @int@s,
\begin{cfacode}
int sum(int N, ...) {
  va_list args;
  va_start(args, N);
  int ret = 0;
  while(N) {
    ret += va_arg(args, int);  // have to specify type
    N--;
  }
  va_end(args);
  return ret;
}
sum(3, 10, 20, 30);  // need to keep counter in sync
\end{cfacode}
The @va_list@ type is a special C data type that abstracts variadic argument manipulation.
The @va_start@ macro initializes a @va_list@, given the last named parameter.
Each use of the @va_arg@ macro allows access to the next variadic argument, given a type.
Since the function signature does not provide any information on what types can be passed to a variadic function, the compiler does not perform any error checks on a variadic call.
As such, it is possible to pass any value to the @sum@ function, including pointers, floating-point numbers, and structures.
In the case where the provided type is not compatible with the argument's actual type after default argument promotions, or if too many arguments are accessed, the behaviour is undefined \cite{C11}.
Furthermore, there is no way to perform the necessary error checks in the @sum@ function at run-time, since type information is not carried into the function body.
Since they rely on programmer convention rather than compile-time checks, variadic functions are generally unsafe. In practice, compilers can provide warnings to help mitigate some of the problems. For example, GCC provides the @format@ attribute to specify that a function uses a format string, which allows the compiler to perform some checks related to the standard format specifiers. Unfortunately, this does not permit extensions to the format string syntax, so a programmer cannot extend the attribute to warn for mismatches with custom types. Needless to say, C's variadic functions are a deficient language feature. Two options were examined to provide better, type-safe variadic functions in \CFA. \subsection{Whole Tuple Matching} Option 1 is to change the argument matching algorithm, so that type parameters can match whole tuples, rather than just their components. This option could be implemented with two phases of argument matching when a function contains type parameters and the argument list contains tuple arguments. If flattening and structuring fail to produce a match, a second attempt at matching the function and argument combination is made where tuple arguments are not expanded and structure must match exactly, modulo non-tuple implicit conversions. For example: \begin{cfacode} forall(otype T, otype U | { T g(U); }) void f(T, U); [int, int] g([int, int, int, int]); f([1, 2], [3, 4, 5, 6]); \end{cfacode} With flattening and structuring, the call is first transformed into @f(1, 2, 3, 4, 5, 6)@. Since the first argument of type @T@ does not have a tuple type, unification decides that @T=int@ and @1@ is matched as the first parameter. Likewise, @U@ does not have a tuple type, so @U=int@ and @2@ is accepted as the second parameter. There are now no remaining formal parameters, but there are remaining arguments and the function is not variadic, so the match fails. 
With the addition of an exact matching attempt, @T=[int,int]@ and @U=[int,int,int,int]@ and so the arguments type check. Likewise, when inferring assertion @g@, an exact match is found. This approach is strict with respect to argument structure by nature, which makes it syntactically awkward to use in ways that the existing tuple design is not. For example, consider a @new@ function which allocates memory using @malloc@ and constructs the result, using arbitrary arguments. \begin{cfacode} struct Array; void ?{}(Array *, int, int, int); forall(dtype T, otype Params | sized(T) | { void ?{}(T *, Params); }) T * new(Params p) { return malloc(){ p }; } Array(int) * x = new([1, 2, 3]); \end{cfacode} The call to @new@ is not particularly appealing, since it requires the use of square brackets at the call-site, which is not required in any other function call. This shifts the burden from the compiler to the programmer, which is almost always wrong, and creates an odd inconsistency within the language. Similarly, in order to pass 0 variadic arguments, an explicit empty tuple must be passed into the argument list, otherwise the exact matching rule would not have an argument to bind against. It should be otherwise noted that the addition of an exact matching rule only affects the outcome for polymorphic type binding when tuples are involved. For non-tuple arguments, exact matching and flattening \& structuring are equivalent. For tuple arguments to a function without polymorphic formal parameters, flattening and structuring work whenever an exact match would have worked, since the tuple is flattened and implicitly restructured to its original structure. Thus there is nothing to be gained from permitting the exact matching rule to take effect when a function does not contain polymorphism and none of the arguments are tuples. Overall, this option takes a step in the right direction, but is contrary to the flexibility of the existing tuple design. 
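The call-site burden of exact whole-tuple matching resembles what happens in C++ when a function insists on receiving its arguments as an explicit @std::tuple@: the caller must package the arguments, much like the square brackets in @new([1, 2, 3])@. The @construct_sum@ name is hypothetical.

```cpp
#include <tuple>

// Params binds to a whole tuple, never to loose arguments.
template<typename Params>
int construct_sum(Params p) {
    // unpack the tuple and sum its components (C++17 fold expression)
    return std::apply([](auto... xs) { return (0 + ... + xs); }, p);
}

// The call site cannot write construct_sum(1, 2, 3); it must package the
// arguments explicitly: construct_sum(std::make_tuple(1, 2, 3)).
```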
\subsection{A New Typeclass}
A second option is the addition of another kind of type parameter, @ttype@.
Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
In a given parameter list, there should be at most one @ttype@ parameter that must occur last, otherwise the call can never resolve, given the previous rule.
% TODO: a similar rule exists in C/C++ for "..."
This idea essentially matches normal variadic semantics, with a strong feeling of similarity to \CCeleven variadic templates.
As such, @ttype@ variables will also be referred to as argument packs.
This is the option that has been added to \CFA.

Like variadic templates, the main way to manipulate @ttype@ polymorphic functions is through recursion.
Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful.
Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.

For example, a simple translation of the C sum function using @ttype@ would look like
\begin{cfacode}
int sum(){ return 0; }        // (0)
forall(ttype Params | { int sum(Params); })
int sum(int x, Params rest) { // (1)
  return x+sum(rest);
}
sum(10, 20, 30);
\end{cfacode}
Since (0) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
In order to call (1), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list and @Params@ is bound to @[20, 30]@.
In order to finish the resolution of @sum@, an assertion parameter which matches @int sum(int, int)@ is required.
Like in the previous iteration, (0) is not a valid candidate, so (1) is examined with @Params@ bound to @[int]@, requiring the assertion @int sum(int)@.
Next, (0) fails, and to satisfy (1) @Params@ is bound to @[]@, requiring an assertion @int sum()@.
Finally, (0) matches and (1) fails, which terminates the recursion. Effectively, this traces as @sum(10, 20, 30)@ $\rightarrow$ @10+sum(20, 30)@ $\rightarrow$ @10+(20+sum(30))@ $\rightarrow$ @10+(20+(30+sum()))@ $\rightarrow$ @10+(20+(30+0))@. A point of note is that this version does not require any form of argument descriptor, since the \CFA type system keeps track of all of these details. It might be reasonable to take the @sum@ function a step further to enforce a minimum number of arguments, which could be done simply \begin{cfacode} int sum(int x, int y){ return x+y; } forall(ttype Params | { int sum(int, Params); }) int sum(int x, int y, Params rest) { return sum(x+y, rest); } sum(10, 20, 30); \end{cfacode} One more iteration permits the summation of any summable type, as long as all arguments are the same type. \begin{cfacode} trait summable(otype T) { T ?+?(T, T); }; forall(otype R | summable(R)) R sum(R x, R y){ return x+y; } forall(otype R, ttype Params | summable(R) | { R sum(R, Params); }) R sum(R x, R y, Params rest) { return sum(x+y, rest); } sum(3, 10, 20, 30); \end{cfacode} Unlike C, it is not necessary to hard code the expected type. This is naturally open to extension, in that any user-defined type with a @?+?@ operator is automatically able to be used with the @sum@ function. That is to say, the programmer who writes @sum@ does not need full program knowledge of every possible data type, unlike what is necessary to write an equivalent function using the standard C mechanisms. Going one last step, it is possible to achieve full generality in \CFA, allowing the summation of arbitrary lists of summable types. 
\begin{cfacode}
trait summable(otype T1, otype T2, otype R) {
  R ?+?(T1, T2);
};
forall(otype T1, otype T2, otype R | summable(T1, T2, R))
R sum(T1 x, T2 y) {
  return x+y;
}
forall(otype T1, otype T2, otype T3, ttype Params, otype R
  | summable(T1, T2, T3)
  | { R sum(T3, Params); })
R sum(T1 x, T2 y, Params rest ) {
  return sum(x+y, rest);
}
sum(3, 10.5, 20, 30.3);
\end{cfacode}
The \CFA translator requires adding explicit @double ?+?(int, double)@ and @double ?+?(double, int)@ functions for this call to work, since implicit conversions are not supported for assertions.

C variadic syntax and @ttype@ polymorphism probably should not be mixed, since it is not clear where to draw the line to decide which arguments belong where.
Furthermore, it might be desirable to disallow polymorphic functions to use C variadic syntax to encourage a Cforall style.
Aside from calling C variadic functions, it is not obvious that there is anything that can be done with C variadics that could not also be done with @ttype@ parameters.

Variadic templates in \CC require an ellipsis token to express that a parameter is a parameter pack and to expand a parameter pack.
\CFA does not need an ellipsis in either case, since the type class @ttype@ is only used for variadics.
An alternative design could have used an ellipsis combined with an existing type class.
This approach was not taken because the largest benefit of the ellipsis token in \CC is the ability to expand a parameter pack within an expression, e.g. in fold expressions, which requires compile-time knowledge of the structure of the parameter pack, which is not available in \CFA.
\begin{cppcode}
template<typename... Args>
void f(Args &... args) {
  g(&args...);  // expand to addresses of pack elements
}
\end{cppcode}
As such, the addition of an ellipsis token would be purely an aesthetic change in \CFA today.
It is possible to write a type-safe variadic print routine, which can replace @printf@
\begin{cfacode}
struct S { int x, y; };
forall(otype T, ttype Params |
  { void print(T); void print(Params); })
void print(T arg, Params rest) {
  print(arg);
  print(rest);
}
void print(char * x) { printf("%s", x); }
void print(int x) { printf("%d", x);  }
void print(S s) { print("{ ", s.x, ",", s.y, " }"); }
print("s = ", (S){ 1, 2 }, "\n");
\end{cfacode}
This example routine showcases a variadic-template-like decomposition of the provided argument list.
The individual @print@ routines allow printing a single element of a type.
The polymorphic @print@ allows printing any list of types, as long as each individual type has a @print@ function.
The individual print functions can be used to build up more complicated @print@ routines, such as for @S@, which is something that cannot be done with @printf@ in C.

It is also possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions.
For example, it is possible to write @new@ as a library function.
\begin{cfacode}
struct Array;
void ?{}(Array *, int, int, int);

forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); })
T * new(Params p) {
  return malloc(){ p }; // construct result of malloc
}
Array * x = new(1, 2, 3);
\end{cfacode}
The @new@ function provides the combination of type-safe @malloc@ with a constructor call, so that it becomes impossible to forget to construct dynamically allocated objects.
This provides the type-safety of @new@ in \CC, without the need to specify the allocated type, thanks to return-type inference.

In the call to @new@, @Array@ is selected to match @T@, and @Params@ is expanded to match @[int, int, int]@. To satisfy the assertions, a constructor with an interface compatible with @void ?{}(Array *, int, int, int)@ must exist in the current scope.
\subsection{Implementation} The definition of @new@ \begin{cfacode} forall(dtype T | sized(T)) T * malloc(); forall(dtype T, ttype Params | sized(T) | { void ?{}(T *, Params); }) T * new(Params p) { return malloc(){ p }; // construct result of malloc } \end{cfacode} Generates the following \begin{cfacode} void *malloc(long unsigned int _sizeof_T, long unsigned int _alignof_T); void *new( void (*_adapter_)(void (*)(), void *, void *), long unsigned int _sizeof_T, long unsigned int _alignof_T, long unsigned int _sizeof_Params, long unsigned int _alignof_Params, void (* _ctor_T)(void *, void *), void *p ){ void *_retval_new; void *_tmp_cp_ret0; void *_tmp_ctor_expr0; _retval_new= (_adapter_(_ctor_T, (_tmp_ctor_expr0=(_tmp_cp_ret0=malloc(_sizeof_2tT, _alignof_2tT), _tmp_cp_ret0)), p), _tmp_ctor_expr0); // ?{} *(void **)&_tmp_cp_ret0; // ^?{} return _retval_new; } \end{cfacode} The constructor for @T@ is called indirectly through the adapter function on the result of @malloc@ and the parameter pack. The variable that was allocated and constructed is then returned from @new@. 
A call to @new@ \begin{cfacode} struct S { int x, y; }; void ?{}(S *, int, int); S * s = new(3, 4); \end{cfacode} Generates the following \begin{cfacode} struct _tuple2_ {  // _tuple2_(T0, T1) void *field_0; void *field_1; }; struct _conc__tuple2_0 {  // _tuple2_(int, int) int field_0; int field_1; }; struct _conc__tuple2_0 _tmp_cp1;  // tuple argument to new struct S *_tmp_cp_ret1;           // return value from new void _thunk0(  // ?{}(S *, [int, int]) struct S *_p0, struct _conc__tuple2_0 _p1 ){ _ctor_S(_p0, _p1.field_0, _p1.field_1);  // restructure tuple parameter } void _adapter(void (*_adaptee)(), void *_p0, void *_p1){ // apply adaptee to arguments after casting to actual types ((void (*)(struct S *, struct _conc__tuple2_0))_adaptee)( _p0, *(struct _conc__tuple2_0 *)_p1 ); } struct S *s = (struct S *)(_tmp_cp_ret1= new( _adapter, sizeof(struct S), __alignof__(struct S), sizeof(struct _conc__tuple2_0), __alignof__(struct _conc__tuple2_0), (void (*)(void *, void *))&_thunk0, (({ // copy construct tuple argument to new int *__multassign_L0 = (int *)&_tmp_cp1.field_0; int *__multassign_L1 = (int *)&_tmp_cp1.field_1; int __multassign_R0 = 3; int __multassign_R1 = 4; ((*__multassign_L0=__multassign_R0 /* ?{} */) , (*__multassign_L1=__multassign_R1 /* ?{} */)); }), &_tmp_cp1) ), _tmp_cp_ret1); *(struct S **)&_tmp_cp_ret1; // ^?{}  // destroy return value from new ({  // destroy argument temporary int *__massassign_L0 = (int *)&_tmp_cp1.field_0; int *__massassign_L1 = (int *)&_tmp_cp1.field_1; ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */)); }); \end{cfacode} Of note, @_thunk0@ is generated to translate calls to @?{}(S *, [int, int])@ into calls to @?{}(S *, int, int)@. The call to @new@ constructs a tuple argument using the supplied arguments. 
The @print@ function \begin{cfacode} forall(otype T, ttype Params | { void print(T); void print(Params); }) void print(T arg, Params rest) { print(arg); print(rest); } \end{cfacode} Generates \begin{cfacode} void print_variadic( void (*_adapterF_7tParams__P)(void (*)(), void *), void (*_adapterF_2tT__P)(void (*)(), void *), void (*_adapterF_P2tT2tT__MP)(void (*)(), void *, void *), void (*_adapterF2tT_P2tT2tT_P_MP)(void (*)(), void *, void *, void *), long unsigned int _sizeof_T, long unsigned int _alignof_T, long unsigned int _sizeof_Params, long unsigned int _alignof_Params, void *(*_assign_TT)(void *, void *), void (*_ctor_T)(void *), void (*_ctor_TT)(void *, void *), void (*_dtor_T)(void *), void (*print_T)(void *), void (*print_Params)(void *), void *arg, void *rest ){ void *_tmp_cp0 = __builtin_alloca(_sizeof_T); _adapterF_2tT__P(  // print(arg) ((void (*)())print_T), (_adapterF_P2tT2tT__MP( // copy construct argument ((void (*)())_ctor_TT), _tmp_cp0, arg ), _tmp_cp0) ); _dtor_T(_tmp_cp0);  // destroy argument temporary _adapterF_7tParams__P(  // print(rest) ((void (*)())print_Params), rest ); } \end{cfacode} The @print_T@ routine is called indirectly through an adapter function with a copy constructed argument, followed by an indirect call to @print_Params@. 
A call to @print@
\begin{cfacode}
void print(const char * x) { printf("%s", x); }
void print(int x) { printf("%d", x); }
print("x = ", 123, ".\n");
\end{cfacode}
generates the following
\begin{cfacode}
void print_string(const char *x){
  int _tmp_cp_ret0;
  (_tmp_cp_ret0=printf("%s", x)) , _tmp_cp_ret0;
  *(int *)&_tmp_cp_ret0; // ^?{}
}
void print_int(int x){
  int _tmp_cp_ret1;
  (_tmp_cp_ret1=printf("%d", x)) , _tmp_cp_ret1;
  *(int *)&_tmp_cp_ret1; // ^?{}
}
struct _tuple2_ {  // _tuple2_(T0, T1)
  void *field_0;
  void *field_1;
};
struct _conc__tuple2_0 {  // _tuple2_(int, const char *)
  int field_0;
  const char *field_1;
};
struct _conc__tuple2_0 _tmp_cp6;  // _tuple2_(int, const char *)
const char *_thunk0(const char **_p0, const char *_p1){ // const char * ?=?(const char **, const char *)
  return *_p0=_p1;
}
void _thunk1(const char **_p0){ // void ?{}(const char **)
  *_p0; // ?{}
}
void _thunk2(const char **_p0, const char *_p1){ // void ?{}(const char **, const char *)
  *_p0=_p1; // ?{}
}
void _thunk3(const char **_p0){ // void ^?{}(const char **)
  *_p0; // ^?{}
}
void _thunk4(struct _conc__tuple2_0 _p0){ // void print([int, const char *])
  struct _tuple1_ { // _tuple1_(T0)
    void *field_0;
  };
  struct _conc__tuple1_1 { // _tuple1_(const char *)
    const char *field_0;
  };
  void _thunk5(struct _conc__tuple1_1 _pp0){ // void print([const char *])
    print_string(_pp0.field_0);  // print(rest.0)
  }
  void _adapter_i_pii_(void (*_adaptee)(), void *_ret, void *_p0, void *_p1){
    *(int *)_ret=((int (*)(int *, int))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_pii_(void (*_adaptee)(), void *_p0, void *_p1){
    ((void (*)(int *, int ))_adaptee)(_p0, *(int *)_p1);
  }
  void _adapter_i_(void (*_adaptee)(), void *_p0){
    ((void (*)(int))_adaptee)(*(int *)_p0);
  }
  void _adapter_tuple1_5_(void (*_adaptee)(), void *_p0){
    ((void (*)(struct _conc__tuple1_1 ))_adaptee)(*(struct _conc__tuple1_1 *)_p0);
  }
  print_variadic(
    _adapter_tuple1_5_,
    _adapter_i_,
    _adapter_pii_,
    _adapter_i_pii_,
    sizeof(int),
    __alignof__(int),
    sizeof(struct _conc__tuple1_1),
    __alignof__(struct _conc__tuple1_1),
    (void *(*)(void *, void *))_assign_i,     // int ?=?(int *, int)
    (void (*)(void *))_ctor_i,                // void ?{}(int *)
    (void (*)(void *, void *))_ctor_ii,       // void ?{}(int *, int)
    (void (*)(void *))_dtor_ii,               // void ^?{}(int *)
    (void (*)(void *))print_int,              // void print(int)
    (void (*)(void *))&_thunk5,               // void print([const char *])
    &_p0.field_0,                             // rest.0
    &(struct _conc__tuple1_1 ){ _p0.field_1 } // [rest.1]
  );
}
struct _tuple1_ {  // _tuple1_(T0)
  void *field_0;
};
struct _conc__tuple1_6 {  // _tuple1_(const char *)
  const char *field_0;
};
const char *_temp0;
_temp0="x = ";
void _adapter_pstring_pstring_string(
  void (*_adaptee)(), void *_ret, void *_p0, void *_p1
){
  *(const char **)_ret=
    ((const char *(*)(const char **, const char *))_adaptee)(
      _p0, *(const char **)_p1
    );
}
void _adapter_pstring_string(void (*_adaptee)(), void *_p0, void *_p1){
  ((void (*)(const char **, const char *))_adaptee)(_p0, *(const char **)_p1);
}
void _adapter_string_(void (*_adaptee)(), void *_p0){
  ((void (*)(const char *))_adaptee)(*(const char **)_p0);
}
void _adapter_tuple2_0_(void (*_adaptee)(), void *_p0){
  ((void (*)(struct _conc__tuple2_0 ))_adaptee)(*(struct _conc__tuple2_0 *)_p0);
}
print_variadic(
  _adapter_tuple2_0_,
  _adapter_string_,
  _adapter_pstring_string,
  _adapter_pstring_pstring_string,
  sizeof(const char *),
  __alignof__(const char *),
  sizeof(struct _conc__tuple2_0 ),
  __alignof__(struct _conc__tuple2_0 ),
  (void *(*)(void *, void *))&_thunk0, // const char * ?=?(const char **, const char *)
  (void (*)(void *))&_thunk1,          // void ?{}(const char **)
  (void (*)(void *, void *))&_thunk2,  // void ?{}(const char **, const char *)
  (void (*)(void *))&_thunk3,          // void ^?{}(const char **)
  (void (*)(void *))print_string,      // void print(const char *)
  (void (*)(void *))&_thunk4,          // void print([int, const char *])
  &_temp0,                             // "x = "
  (({  // copy construct tuple argument to print
    int *__multassign_L0 = (int *)&_tmp_cp6.field_0;
    const char **__multassign_L1 = (const char **)&_tmp_cp6.field_1;
    int __multassign_R0 = 123;
    const char *__multassign_R1 = ".\n";
    ((*__multassign_L0=__multassign_R0 /* ?{} */),
     (*__multassign_L1=__multassign_R1 /* ?{} */));
  }), &_tmp_cp6)                        // [123, ".\n"]
);
({  // destroy argument temporary
  int *__massassign_L0 = (int *)&_tmp_cp6.field_0;
  const char **__massassign_L1 = (const char **)&_tmp_cp6.field_1;
  ((*__massassign_L0 /* ^?{} */) , (*__massassign_L1 /* ^?{} */));
});
\end{cfacode}
The type @_tuple2_@ is generated to allow passing the @rest@ argument to @print_variadic@. Thunks 0 through 3 wrap the @otype@ operations for @const char *@, while @_thunk4@ translates a call to @print([int, const char *])@ into a call to @print_variadic(int, [const char *])@. This all builds up to the top-level call to @print_variadic@, with the appropriate copy construction of the tuple argument.

\section{Future Work}
• ## src/ControlStruct/LabelGenerator.cc

ra0fc78a
#include "SynTree/Label.h"
#include "SynTree/Attribute.h"
#include "SynTree/Statement.h"

namespace ControlStruct {
	}

	Label LabelGenerator::newLabel( std::string suffix ) {
	Label LabelGenerator::newLabel( std::string suffix, Statement * stmt ) {
		std::ostringstream os;
		os << "__L" << current++ << "__" << suffix;
		if ( stmt && ! stmt->get_labels().empty() ) {
			os << "_" << stmt->get_labels().front() << "__";
		}
		std::string ret = os.str();
		Label l( ret );
• ## src/ControlStruct/LabelGenerator.h

ra0fc78a
// file "LICENCE" distributed with Cforall.
//
// LabelGenerator.h --
// LabelGenerator.h --
//
// Author           : Rodolfo G. Esteves
  public:
	static LabelGenerator *getGenerator();
	Label newLabel(std::string suffix = "");
	Label newLabel(std::string suffix, Statement * stmt = nullptr);
	void reset() { current = 0; }
	void rewind() { current--; }
• ## src/ControlStruct/MLEMutator.cc

ra0fc78a
	bool labeledBlock = !(cmpndStmt->get_labels().empty());
	if ( labeledBlock ) {
		Label brkLabel = generator->newLabel("blockBreak");
		Label brkLabel = generator->newLabel("blockBreak", cmpndStmt);
		enclosingControlStructures.push_back( Entry( cmpndStmt, brkLabel ) );
	} // if

	// whether brkLabel and contLabel are used with branch statements and will recursively do the same to nested
	// loops
	Label brkLabel = generator->newLabel("loopBreak");
	Label contLabel = generator->newLabel("loopContinue");
	Label brkLabel = generator->newLabel("loopBreak", loopStmt);
	Label contLabel = generator->newLabel("loopContinue", loopStmt);
	enclosingControlStructures.push_back( Entry( loopStmt, brkLabel, contLabel ) );
	loopStmt->set_body ( loopStmt->get_body()->acceptMutator( *this ) );

	assert( ! enclosingControlStructures.empty() );
	Entry &e = enclosingControlStructures.back();
	// sanity check that the enclosing loops have been popped correctly

	bool labeledBlock = !(ifStmt->get_labels().empty());
	if ( labeledBlock ) {
		Label brkLabel = generator->newLabel("blockBreak");
		Label brkLabel = generator->newLabel("blockBreak", ifStmt);
		enclosingControlStructures.push_back( Entry( ifStmt, brkLabel ) );
	} // if

	Parent::mutate( ifStmt );
	if ( labeledBlock ) {
		if ( ! enclosingControlStructures.back().useBreakExit().empty() ) {

Statement *MLEMutator::handleSwitchStmt( SwitchClass *switchStmt ) {
	// generate a label for breaking out of a labeled switch
	Label brkLabel = generator->newLabel("switchBreak");
	Label brkLabel = generator->newLabel("switchBreak", switchStmt);
	enclosingControlStructures.push_back( Entry(switchStmt, brkLabel) );
	mutateAll( switchStmt->get_statements(), *this );

	std::list< Entry >::reverse_iterator targetEntry;
	if ( branchStmt->get_type() == BranchStmt::Goto ) {
		return branchStmt;
	} else if ( branchStmt->get_type() == BranchStmt::Continue) {
		// continue target must be a loop
		if ( branchStmt->get_target() == "" ) {
			targetEntry = std::find_if( enclosingControlStructures.rbegin(), enclosingControlStructures.rend(), [](Entry &e) { return isLoop( e.get_controlStructure() ); } );
		} else {
			// labelled continue - lookup label in table to find attached control structure
			targetEntry = std::find( enclosingControlStructures.rbegin(), enclosingControlStructures.rend(), (*targetTable)[branchStmt->get_target()] );
		} // if
		if ( targetEntry == enclosingControlStructures.rend() || ! isLoop( targetEntry->get_controlStructure() ) ) {
			throw SemanticError( "'continue' target must be an enclosing loop: " + originalTarget );
		} // if
	} else if ( branchStmt->get_type() == BranchStmt::Break ) {
		if ( enclosingControlStructures.empty() ) throw SemanticError( "'break' outside a loop, switch, or labelled block" );
		targetEntry = enclosingControlStructures.rbegin();
	} else {
		assert( false );
	} // if

	if ( branchStmt->get_target() != "" && targetTable->find( branchStmt->get_target() ) == targetTable->end() ) {
		throw SemanticError("The label defined in the exit loop statement does not exist: " + originalTarget );  // shouldn't happen (since that's already checked)
	} // if

	switch ( branchStmt->get_type() ) {
		case BranchStmt::Goto:
			return branchStmt;
		case BranchStmt::Continue:
		case BranchStmt::Break: {
			bool isContinue = branchStmt->get_type() == BranchStmt::Continue;
			// unlabeled break/continue
			if ( branchStmt->get_target() == "" ) {
				if ( isContinue ) {
					// continue target is the innermost enclosing loop
					targetEntry = std::find_if( enclosingControlStructures.rbegin(), enclosingControlStructures.rend(), [](Entry &e) { return isLoop( e.get_controlStructure() ); } );
				} else {
					// break target is the innermost enclosing control structure
					if ( enclosingControlStructures.empty() ) throw SemanticError( "'break' outside a loop, switch, or labelled block" );
					targetEntry = enclosingControlStructures.rbegin();
				} // if
			} else {
				// labeled break/continue - lookup label in table to find attached control structure
				targetEntry = std::find( enclosingControlStructures.rbegin(), enclosingControlStructures.rend(), (*targetTable)[branchStmt->get_target()] );
			} // if
			// ensure that selected target is valid
			if ( targetEntry == enclosingControlStructures.rend() || (isContinue && ! isLoop( targetEntry->get_controlStructure() ) ) ) {
				throw SemanticError( toString( (isContinue ? "'continue'" : "'break'"), " target must be an enclosing ", (isContinue ? "loop: " : "control structure: "), originalTarget ) );
			} // if
			break;
		}
		default:
			assert( false );
	} // switch
	// branch error checks, get the appropriate label name and create a goto
	} // switch

	if ( branchStmt->get_target() == "" && branchStmt->get_type() != BranchStmt::Continue ) {
		// unlabelled break/continue - can keep branch as break/continue for extra semantic information, but add
		// exitLabel as its destination so that label passes can easily determine where the break/continue goes to
		branchStmt->set_target( exitLabel );
		return branchStmt;
	} else {
		// labelled break/continue - can't easily emulate this with break and continue, so transform into a goto
		delete branchStmt;
		return new BranchStmt( std::list