1 | %====================================================================== |
---|
2 | \chapter{Introduction} |
---|
3 | %====================================================================== |
---|
4 | |
---|
5 | \section{\protect\CFA Background} |
---|
6 | \label{s:background} |
---|
7 | \CFA \footnote{Pronounced ``C-for-all'', and written \CFA or Cforall.} is a modern non-object-oriented extension to the C programming language. |
---|
As an extension of C, \CFA inherits a wealth of existing C code, along with established principles that govern the design of the language.
---|
9 | Among the goals set out in the original design of \CFA, four points stand out \cite{Bilson03}. |
---|
10 | \begin{enumerate} |
---|
11 | \item The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler. |
---|
12 | \item Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler. |
---|
13 | \item \CFA code must be at least as portable as standard C code. |
---|
14 | \item Extensions introduced by \CFA must be translated in the most efficient way possible. |
---|
15 | \end{enumerate} |
---|
16 | Therefore, these design principles must be kept in mind throughout the design and development of new language features. |
---|
17 | In order to appeal to existing C programmers, great care must be taken to ensure that new features naturally feel like C. |
---|
18 | These goals ensure existing C code-bases can be converted to \CFA incrementally with minimal effort, and C programmers can productively generate \CFA code without training beyond the features being used. |
---|
Unfortunately, \CC is actively diverging from C, so incremental additions require significant effort and training, and \CC carries multiple legacy design-choices that cannot be updated.
---|
20 | |
---|
21 | The current implementation of \CFA is a source-to-source translator from \CFA to GNU C \cite{GCCExtensions}. |
---|
22 | |
---|
The remainder of this section describes some of the important features that currently exist in \CFA, to give the reader the context into which the new features presented in this thesis must dovetail.
---|
24 | |
---|
25 | \subsection{C Background} |
---|
26 | \label{sub:c_background} |
---|
27 | In the context of this work, the term \emph{object} refers to a region of data storage in the execution environment, the contents of which can represent values \cite[p.~6]{C11}. |
---|
28 | |
---|
29 | One of the lesser-known features of standard C is \emph{designations}. |
---|
30 | Designations are similar to named parameters in languages such as Python and Scala, except that they only apply to aggregate initializers. |
---|
31 | Note that in \CFA, designations use a colon separator, rather than an equals sign as in C, because this syntax is one of the few places that conflicts with the new language features. |
---|
32 | \begin{cfacode} |
---|
33 | struct A { |
---|
34 | int w, x, y, z; |
---|
35 | }; |
---|
A a0 = { .x:4, .z:1, .x:8 };
---|
37 | A a1 = { 1, .y:7, 6 }; |
---|
38 | A a2[4] = { [2]:a0, [0]:a1, { .z:3 } }; |
---|
39 | // equivalent to |
---|
40 | // A a0 = { 0, 8, 0, 1 }; |
---|
41 | // A a1 = { 1, 0, 7, 6 }; |
---|
42 | // A a2[4] = { a1, { 0, 0, 0, 3 }, a0, { 0, 0, 0, 0 } }; |
---|
43 | \end{cfacode} |
---|
44 | Designations allow specifying the field to initialize by name, rather than by position. |
---|
45 | Any field not explicitly initialized is initialized as if it had static storage duration \cite[p.~141]{C11}. |
---|
46 | A designator specifies the current object for initialization, and as such any undesignated sub-objects pick up where the last initialization left off. |
---|
47 | For example, in the initialization of @a1@, the initializer of @y@ is @7@, and the unnamed initializer @6@ initializes the next sub-object, @z@. |
---|
48 | Later initializers override earlier initializers, so a sub-object for which there is more than one initializer is only initialized by its last initializer. |
---|
49 | These semantics can be seen in the initialization of @a0@, where @x@ is designated twice, and thus initialized to @8@. |
---|
50 | |
---|
51 | C also provides \emph{compound literal} expressions, which provide a first-class mechanism for creating unnamed objects. |
---|
52 | \begin{cfacode} |
---|
53 | struct A { int x, y; }; |
---|
54 | int f(A, int); |
---|
55 | int g(int *); |
---|
56 | |
---|
57 | f((A){ 3, 4 }, (int){ 5 } = 10); |
---|
58 | g((int[]){ 1, 2, 3 }); |
---|
59 | g(&(int){ 0 }); |
---|
60 | \end{cfacode} |
---|
61 | Compound literals create an unnamed object, and result in an lvalue, so it is legal to assign a value into a compound literal or to take its address \cite[p.~86]{C11}. |
---|
62 | Syntactically, compound literals look like a cast operator followed by a brace-enclosed initializer, but semantically are different from a C cast, which only applies basic conversions and coercions and is never an lvalue. |
---|
63 | |
---|
64 | The \CFA translator makes use of several GNU C extensions, including \emph{nested functions} and \emph{attributes}. |
---|
Nested functions make it possible to access, from within a nested function's body, data that is lexically in scope in the containing function.
---|
66 | \begin{cfacode} |
---|
void f() {
---|
68 | int x = 0; |
---|
69 | void g() { |
---|
70 | x++; |
---|
71 | } |
---|
72 | g(); // changes x |
---|
73 | } |
---|
74 | \end{cfacode} |
---|
Nested functions come with the usual C caveat that they must not escape the containing function, since they are only valid as long as the containing function's stack frame is active.
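For example, returning a pointer to a nested function produces a dangling function once the containing function returns (a hypothetical sketch; GNU C accepts the code, but calling the escaped function is undefined behaviour):
\begin{cfacode}
void (*make_counter())(void) {
	int count = 0;
	void inc() { count++; }   // uses make_counter's stack frame
	return inc;               // inc escapes its containing function
}

void (*counter)(void) = make_counter();
counter();  // undefined: make_counter's frame no longer exists
\end{cfacode}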
---|
76 | |
---|
77 | Attributes make it possible to inform the compiler of certain properties of the code. |
---|
For example, a function can be marked as deprecated, so that legacy APIs can be identified and slowly removed, or as \emph{hot}, so that the compiler knows the function is called frequently and should be aggressively optimized.
---|
79 | \begin{cfacode} |
---|
80 | __attribute__((deprecated("foo is deprecated, use bar instead"))) |
---|
81 | void foo(); |
---|
82 | __attribute__((hot)) void bar(); // heavily optimized |
---|
83 | |
---|
84 | foo(); // warning |
---|
85 | bar(); |
---|
86 | \end{cfacode} |
---|
87 | |
---|
88 | \subsection{Overloading} |
---|
89 | \label{sub:overloading} |
---|
90 | Overloading is the ability to specify multiple entities with the same name. |
---|
91 | The most common form of overloading is function overloading, wherein multiple functions can be defined with the same name, but with different signatures. |
---|
C provides a small amount of built-in overloading, \eg @+@ is overloaded for the basic types.
---|
93 | Like in \CC, \CFA allows user-defined overloading based both on the number of parameters and on the types of parameters. |
---|
94 | \begin{cfacode} |
---|
95 | void f(void); // (1) |
---|
96 | void f(int); // (2) |
---|
97 | void f(char); // (3) |
---|
98 | |
---|
99 | f('A'); // selects (3) |
---|
100 | \end{cfacode} |
---|
101 | In this case, there are three @f@ procedures, where @f@ takes either 0 or 1 arguments, and if an argument is provided then it may be of type @int@ or of type @char@. |
---|
102 | Exactly which procedure is executed depends on the number and types of arguments passed. |
---|
If there is no exact match available, \CFA attempts to find a suitable match using C's built-in conversions.
---|
104 | The \CFA expression resolution algorithm uses a cost function to determine the interpretation that uses the fewest conversions and polymorphic type bindings. |
---|
105 | \begin{cfacode} |
---|
106 | void g(long long); |
---|
107 | |
---|
108 | g(12345); |
---|
109 | \end{cfacode} |
---|
110 | In the above example, there is only one instance of @g@, which expects a single parameter of type @long long@. |
---|
111 | Here, the argument provided has type @int@, but since all possible values of type @int@ can be represented by a value of type @long long@, there is a safe conversion from @int@ to @long long@, and so \CFA calls the provided @g@ routine. |
---|
112 | |
---|
Overloading solves the problem present in C where there can only be one function with a given name, requiring multiple names for functions that perform the same operation on different types.
This can be seen in the example of the absolute value functions in C:
---|
115 | \begin{cfacode} |
---|
116 | // stdlib.h |
---|
117 | int abs(int); |
---|
118 | long int labs(long int); |
---|
119 | long long int llabs(long long int); |
---|
120 | \end{cfacode} |
---|
121 | In \CFA, the functions @labs@ and @llabs@ are replaced by appropriate overloads of @abs@. |
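For example, a sketch of the overloaded prototypes and a use of them:
\begin{cfacode}
int abs(int);
long int abs(long int);
long long int abs(long long int);

long int x = -4;
long int y = abs(x);  // selects abs(long int)
\end{cfacode}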
---|
122 | |
---|
123 | In addition to this form of overloading, \CFA also allows overloading based on the number and types of \emph{return} values. |
---|
124 | This extension is a feature that is not available in \CC, but is available in other programming languages such as Ada \cite{Ada95}. |
---|
125 | \begin{cfacode} |
---|
126 | int g(); // (1) |
---|
127 | double g(); // (2) |
---|
128 | |
---|
129 | int x = g(); // selects (1) |
---|
130 | \end{cfacode} |
---|
131 | Here, the only difference between the signatures of the different versions of @g@ is in the return values. |
---|
132 | The result context is used to select an appropriate routine definition. |
---|
133 | In this case, the result of @g@ is assigned into a variable of type @int@, so \CFA prefers the routine that returns a single @int@, because it is an exact match. |
---|
134 | |
---|
Return-type overloading solves similar problems to parameter-list overloading, in that multiple functions that perform similar operations can have the same name, but produce different values.
---|
136 | One use case for this feature is to provide two versions of the @bsearch@ routine: |
---|
137 | \begin{cfacode} |
---|
138 | forall(otype T | { int ?<?( T, T ); }) |
---|
139 | T * bsearch(T key, const T * arr, size_t dimension) { |
---|
140 | int comp(const void * t1, const void * t2) { |
---|
141 | return *(T *)t1 < *(T *)t2 ? -1 : *(T *)t2 < *(T *)t1 ? 1 : 0; |
---|
142 | } |
---|
143 | return (T *)bsearch(&key, arr, dimension, sizeof(T), comp); |
---|
144 | } |
---|
145 | forall(otype T | { int ?<?( T, T ); }) |
---|
146 | unsigned int bsearch(T key, const T * arr, size_t dimension) { |
---|
147 | T *result = bsearch(key, arr, dimension); |
---|
148 | // pointer subtraction includes sizeof(T) |
---|
149 | return result ? result - arr : dimension; |
---|
150 | } |
---|
151 | double key = 5.0; |
---|
152 | double vals[10] = { /* 10 floating-point values */ }; |
---|
153 | |
---|
double * val = bsearch( key, vals, 10 ); // selection based on return type
int posn = bsearch( key, vals, 10 );
---|
156 | \end{cfacode} |
---|
The first version provides a thin wrapper around the C @bsearch@ routine, converting untyped @void *@ to the polymorphic type @T *@, allowing the \CFA compiler to catch errors when the types of @key@, @arr@, and the call-site target do not agree.
---|
158 | The second version provides an alternate return of the index in the array of the selected element, rather than its address. |
---|
159 | |
---|
160 | There are times when a function should logically return multiple values. |
---|
Since a function in standard C can only return a single value, the function's designer must either accept additional return values by address or create a wrapper structure to package multiple return-values.
---|
162 | For example, the first approach: |
---|
163 | \begin{cfacode} |
---|
164 | int f(int * ret) { // returns a value through parameter ret |
---|
165 | *ret = 37; |
---|
166 | return 123; |
---|
167 | } |
---|
168 | |
---|
int res1, res2; // allocate return values
res1 = f(&res2); // explicitly pass storage
---|
171 | \end{cfacode} |
---|
172 | is awkward because it requires the caller to explicitly allocate memory for $n$ result variables, even if they are only temporary values used as a subexpression, or even not used at all. |
---|
173 | The second approach: |
---|
174 | \begin{cfacode} |
---|
175 | struct A { |
---|
176 | int x, y; |
---|
177 | }; |
---|
178 | struct A g() { // returns values through a structure |
---|
179 | return (struct A) { 123, 37 }; |
---|
180 | } |
---|
181 | struct A res3 = g(); |
---|
182 | ... res3.x ... res3.y ... // use result values |
---|
183 | \end{cfacode} |
---|
184 | is awkward because the caller has to either learn the field names of the structure or learn the names of helper routines to access the individual return values. |
---|
185 | Both approaches are syntactically unnatural. |
---|
186 | |
---|
187 | In \CFA, it is possible to directly declare a function returning multiple values. |
---|
188 | This extension provides important semantic information to the caller, since return values are only for output. |
---|
189 | \begin{cfacode} |
---|
190 | [int, int] f() { // no new type |
---|
191 | return [123, 37]; |
---|
192 | } |
---|
193 | \end{cfacode} |
---|
194 | However, the ability to return multiple values is useless without a syntax for accepting the results from the function. |
---|
195 | |
---|
196 | In standard C, return values are most commonly assigned directly into local variables, or are used as the arguments to another function call. |
---|
197 | \CFA allows both of these contexts to accept multiple return values. |
---|
198 | \begin{cfacode} |
---|
199 | int res1, res2; |
---|
200 | [res1, res2] = f(); // assign return values into local variables |
---|
201 | |
---|
202 | void g(int, int); |
---|
203 | g(f()); // pass both return values of f to g |
---|
204 | \end{cfacode} |
---|
205 | As seen in the example, it is possible to assign the results from a return value directly into local variables. |
---|
206 | These local variables can be referenced naturally, without requiring any unpacking as in structured return values. |
---|
207 | Perhaps more interesting is the fact that multiple return values can be passed to multiple parameters seamlessly, as in the call @g(f())@. |
---|
208 | In this call, the return values from @f@ are linked to the parameters of @g@ so that each of the return values is passed directly to the corresponding parameter of @g@, without any explicit storing, unpacking, or additional naming. |
---|
209 | |
---|
210 | An extra quirk introduced by multiple return values is in the resolution of function calls. |
---|
211 | \begin{cfacode} |
---|
212 | int f(); // (1) |
---|
213 | [int, int] f(); // (2) |
---|
214 | |
---|
215 | void g(int, int); |
---|
216 | |
---|
217 | int x, y; |
---|
218 | [x, y] = f(); // selects (2) |
---|
219 | g(f()); // selects (2) |
---|
220 | \end{cfacode} |
---|
221 | In this example, the only possible call to @f@ that can produce the two @int@s required for assigning into the variables @x@ and @y@ is the second option. |
---|
A similar reasoning holds for the call to @g@.
---|
223 | |
---|
224 | This duality between aggregation and aliasing can be seen in the C standard library in the @div@ and @remquo@ functions, which return the quotient and remainder for a division of integer and floating-point values, respectively. |
---|
225 | \begin{cfacode} |
---|
226 | typedef struct { int quo, rem; } div_t; // from stdlib.h |
---|
227 | div_t div( int num, int den ); |
---|
228 | double remquo( double num, double den, int * quo ); |
---|
229 | div_t qr = div( 13, 5 ); // return quotient/remainder aggregate |
---|
230 | int q; |
---|
231 | double r = remquo( 13.5, 5.2, &q ); // return remainder, alias quotient |
---|
232 | \end{cfacode} |
---|
233 | @div@ aggregates the quotient/remainder in a structure, while @remquo@ aliases a parameter to an argument. |
---|
234 | Alternatively, a programming language can directly support returning multiple values, \eg in \CFA: |
---|
\begin{cfacode}
---|
236 | [int, int] div(int num, int den); // return two integers |
---|
237 | [double, double] div( double num, double den ); // return two doubles |
---|
238 | int q, r; // overloaded variable names |
---|
239 | double q, r; |
---|
240 | [q, r] = div(13, 5); // select appropriate div and q, r |
---|
241 | [q, r] = div(13.5, 5.2); |
---|
\end{cfacode}
---|
243 | |
---|
244 | In \CFA, overloading also applies to operator names, known as \emph{operator overloading}. |
---|
245 | Similar to function overloading, a single operator is given multiple meanings by defining new versions of the operator with different signatures. |
---|
In \CC, this can be done as follows.
---|
247 | \begin{cppcode} |
---|
248 | struct A { int i; }; |
---|
249 | A operator+(A x, A y); |
---|
250 | bool operator<(A x, A y); |
---|
251 | \end{cppcode} |
---|
252 | |
---|
253 | In \CFA, the same example can be written as follows. |
---|
254 | \begin{cfacode} |
---|
255 | struct A { int i; }; |
---|
256 | A ?+?(A x, A y); // '?'s represent operands |
---|
257 | int ?<?(A x, A y); |
---|
258 | \end{cfacode} |
---|
259 | Notably, the only difference is syntax. |
---|
260 | Most of the operators supported by \CC for operator overloading are also supported in \CFA. |
---|
Notable exceptions are the logical operators (\eg @||@), the sequence operator (\ie @,@), and the member-access operators (\eg @.@ and \lstinline{->}).
---|
262 | |
---|
263 | Finally, \CFA also permits overloading variable identifiers. |
---|
264 | This feature is not available in \CC. |
---|
265 | \begin{cfacode} |
---|
266 | struct Rational { int numer, denom; }; |
---|
267 | int x = 3; // (1) |
---|
268 | double x = 1.27; // (2) |
---|
269 | Rational x = { 4, 11 }; // (3) |
---|
270 | |
---|
271 | void g(double); |
---|
272 | |
---|
273 | x += 1; // chooses (1) |
---|
274 | g(x); // chooses (2) |
---|
275 | Rational y = x; // chooses (3) |
---|
276 | \end{cfacode} |
---|
277 | In this example, there are three definitions of the variable @x@. |
---|
278 | Based on the context, \CFA attempts to choose the variable whose type best matches the expression context. |
---|
279 | When used judiciously, this feature allows names like @MAX@, @MIN@, and @PI@ to apply across many types. |
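For example, a constant can be declared once per type, with the expression context selecting the appropriate value (a sketch; the constant values are illustrative only):
\begin{cfacode}
const int MAX = 2147483647;                  // (1)
const double MAX = 1.7976931348623157e308;   // (2)

int i = MAX;     // selects (1) from the initialization context
double d = MAX;  // selects (2)
\end{cfacode}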
---|
280 | |
---|
281 | Finally, the values @0@ and @1@ have special status in standard C. |
---|
282 | In particular, the value @0@ is both an integer and a pointer literal, and thus its meaning depends on the context. |
---|
283 | In addition, several operations can be redefined in terms of other operations and the values @0@ and @1@. |
---|
284 | For example, |
---|
285 | \begin{cfacode} |
---|
286 | int x; |
---|
287 | if (x) { // if (x != 0) |
---|
288 | x++; // x += 1; |
---|
289 | } |
---|
290 | \end{cfacode} |
---|
291 | Every if- and iteration-statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. |
---|
Due to these rewrite rules, the values @0@ and @1@ have the types \zero and \one in \CFA, which allow for overloading various operations that interact with @0@ and @1@ \footnote{In the original design of \CFA, @0@ and @1@ were overloadable names \cite[p.~7]{cforall}.}.
---|
293 | The types \zero and \one have special built-in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal. |
---|
294 | \begin{cfacode} |
---|
295 | // lvalue is similar to returning a reference in C++ |
---|
296 | lvalue Rational ?+=?(Rational *a, Rational b); |
---|
297 | Rational ?=?(Rational * dst, zero_t) { |
---|
298 | return *dst = (Rational){ 0, 1 }; |
---|
299 | } |
---|
300 | |
---|
301 | Rational sum(Rational *arr, int n) { |
---|
302 | Rational r; |
---|
303 | r = 0; // use rational-zero_t assignment |
---|
304 | for (; n > 0; n--) { |
---|
305 | r += arr[n-1]; |
---|
306 | } |
---|
307 | return r; |
---|
308 | } |
---|
309 | \end{cfacode} |
---|
310 | This function takes an array of @Rational@ objects and produces the @Rational@ representing the sum of the array. |
---|
311 | Note the use of an overloaded assignment operator to set an object of type @Rational@ to an appropriate @0@ value. |
---|
312 | |
---|
313 | \subsection{Polymorphism} |
---|
314 | \label{sub:polymorphism} |
---|
315 | In its most basic form, polymorphism grants the ability to write a single block of code that accepts different types. |
---|
316 | In particular, \CFA supports the notion of parametric polymorphism. |
---|
317 | Parametric polymorphism allows a function to be written generically, for all values of all types, without regard to the specifics of a particular type. |
---|
318 | For example, in \CC, the simple identity function for all types can be written as: |
---|
319 | \begin{cppcode} |
---|
320 | template<typename T> |
---|
321 | T identity(T x) { return x; } |
---|
322 | \end{cppcode} |
---|
323 | \CC uses the template mechanism to support parametric polymorphism. In \CFA, an equivalent function can be written as: |
---|
324 | \begin{cfacode} |
---|
325 | forall(otype T) |
---|
326 | T identity(T x) { return x; } |
---|
327 | \end{cfacode} |
---|
328 | Once again, the only visible difference in this example is syntactic. |
---|
329 | Fundamental differences can be seen by examining more interesting examples. |
---|
330 | In \CC, a generic sum function is written as follows: |
---|
331 | \begin{cppcode} |
---|
332 | template<typename T> |
---|
333 | T sum(T *arr, int n) { |
---|
T t{}; // value-initialize: 0 for built-in types, default construct otherwise
---|
335 | for (; n > 0; n--) t += arr[n-1]; |
---|
336 | return t; |
---|
337 | } |
---|
338 | \end{cppcode} |
---|
339 | Here, the code assumes the existence of a default constructor, assignment operator, and an addition operator over the provided type @T@. |
---|
340 | If any of these required operators are not available, the \CC compiler produces an error message stating which operators could not be found. |
---|
341 | |
---|
342 | A similar sum function can be written in \CFA as follows: |
---|
343 | \begin{cfacode} |
---|
forall(otype T | { T ?=?(T *, zero_t); T ?+=?(T *, T); })
---|
345 | T sum(T *arr, int n) { |
---|
346 | T t = 0; |
---|
for (; n > 0; n--) t += arr[n-1];
---|
348 | return t; |
---|
349 | } |
---|
350 | \end{cfacode} |
---|
351 | The first thing to note here is that immediately following the declaration of @otype T@ is a list of \emph{type assertions} that specify restrictions on acceptable choices of @T@. |
---|
352 | In particular, the assertions above specify that there must be an assignment from \zero to @T@ and an addition assignment operator from @T@ to @T@. |
---|
353 | The existence of an assignment operator from @T@ to @T@ and the ability to create an object of type @T@ are assumed implicitly by declaring @T@ with the @otype@ type-class. |
---|
354 | In addition to @otype@, there are currently two other type-classes. |
---|
355 | |
---|
356 | @dtype@, short for \emph{data type}, serves as the top type for object types; any object type, complete or incomplete, can be bound to a @dtype@ type variable. |
---|
To contrast, @otype@, short for \emph{object type}, is a @dtype@ with known size, alignment, and an assignment operator, and thus binds only to complete object types.
---|
358 | With this extra information, complete objects can be used in polymorphic code in the same way they are used in monomorphic code, providing familiarity and ease of use. |
---|
359 | The third type-class is @ftype@, short for \emph{function type}, matching only function types. |
---|
The three type parameter kinds are summarized in \autoref{table:types}.
---|
361 | |
---|
362 | \begin{table}[h!] |
---|
363 | \begin{center} |
---|
364 | \begin{tabular}{|c||c|c|c||c|c|c|} |
---|
365 | \hline |
---|
366 | name & object type & incomplete type & function type & can assign & can create & has size \\ \hline |
---|
367 | @otype@ & X & & & X & X & X \\ \hline |
---|
368 | @dtype@ & X & X & & & & \\ \hline |
---|
369 | @ftype@ & & & X & & & \\ \hline |
---|
370 | \end{tabular} |
---|
371 | \end{center} |
---|
372 | \caption{\label{table:types} The different kinds of type parameters in \protect\CFA} |
---|
373 | \end{table} |
---|
374 | |
---|
375 | A major difference between the approaches of \CC and \CFA to polymorphism is that the set of assumed properties for a type is \emph{explicit} in \CFA. |
---|
376 | One of the major limiting factors of \CC's approach is that templates cannot be separately compiled. |
---|
377 | In contrast, the explicit nature of assertions allows \CFA's polymorphic functions to be separately compiled, as the function prototype states all necessary requirements separate from the implementation. |
---|
378 | For example, the prototype for the previous sum function is |
---|
379 | \begin{cfacode} |
---|
forall(otype T | { T ?=?(T *, zero_t); T ?+=?(T *, T); })
---|
381 | T sum(T *arr, int n); |
---|
382 | \end{cfacode} |
---|
383 | With this prototype, a caller in another translation unit knows all of the constraints on @T@, and thus knows all of the operations that need to be made available to @sum@. |
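For instance, a client translation unit that sees only this prototype can call @sum@ with any argument type that satisfies the assertions (a hypothetical sketch):
\begin{cfacode}
// client code: only the prototype of sum is visible
int vals[4] = { 1, 2, 3, 4 };
int total = sum(vals, 4);  // T bound to int, which satisfies both assertions
\end{cfacode}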
---|
384 | |
---|
385 | In \CFA, a set of assertions can be factored into a \emph{trait}. |
---|
386 | \begin{cfacode} |
---|
387 | trait Addable(otype T) { |
---|
388 | T ?+?(T, T); |
---|
389 | T ++?(T); |
---|
390 | T ?++(T); |
---|
391 | } |
---|
392 | forall(otype T | Addable(T)) void f(T); |
---|
393 | forall(otype T | Addable(T) | { T --?(T); }) T g(T); |
---|
394 | forall(otype T, U | Addable(T) | { T ?/?(T, U); }) U h(T, U); |
---|
395 | \end{cfacode} |
---|
396 | This capability allows specifying the same set of assertions in multiple locations, without the repetition and likelihood of mistakes that come with manually writing them out for each function declaration. |
---|
397 | |
---|
398 | An interesting application of return-type resolution and polymorphism is a polymorphic version of @malloc@. |
---|
399 | \begin{cfacode} |
---|
400 | forall(dtype T | sized(T)) |
---|
401 | T * malloc() { |
---|
402 | return (T*)malloc(sizeof(T)); // call C malloc |
---|
403 | } |
---|
404 | int * x = malloc(); // malloc(sizeof(int)) |
---|
405 | double * y = malloc(); // malloc(sizeof(double)) |
---|
406 | |
---|
407 | struct S { ... }; |
---|
408 | S * s = malloc(); // malloc(sizeof(S)) |
---|
409 | \end{cfacode} |
---|
410 | The built-in trait @sized@ ensures that size and alignment information for @T@ is available in the body of @malloc@ through @sizeof@ and @_Alignof@ expressions respectively. |
---|
411 | In calls to @malloc@, the type @T@ is bound based on call-site information, allowing \CFA code to allocate memory without the potential for errors introduced by manually specifying the size of the allocated block. |
---|
412 | |
---|
413 | \subsection{Planned Features} |
---|
414 | |
---|
One of the planned features of \CFA is \emph{reference types}.
At a high level, the current proposal is to add references as a way to clean up pointer syntax.
---|
417 | With references, it will be possible to store any address, as with a pointer, with the key difference being that references are automatically dereferenced. |
---|
418 | \begin{cfacode} |
---|
419 | int x = 0; |
---|
420 | int * p = &x; // needs & |
---|
421 | int & ref = x; // no & |
---|
422 | |
---|
423 | printf("%d %d\n", *p, ref); // pointer needs *, ref does not |
---|
424 | \end{cfacode} |
---|
425 | |
---|
426 | It is possible to add new functions or shadow existing functions for the duration of a scope, using normal C scoping rules. |
---|
427 | One application of this feature is to reverse the order of @qsort@. |
---|
428 | \begin{cfacode} |
---|
429 | forall(otype T | { int ?<?( T, T ); }) |
---|
430 | void qsort(const T * arr, size_t size) { |
---|
431 | int comp(const void * t1, const void * t2) { |
---|
432 | return *(T *)t1 < *(T *)t2 ? -1 : *(T *)t2 < *(T *)t1 ? 1 : 0; |
---|
433 | } |
---|
qsort(arr, size, sizeof(T), comp);
}
---|
437 | double vals[10] = { ... }; |
---|
438 | qsort(vals, 10); // ascending order |
---|
439 | { |
---|
440 | int ?<?(double x, double y) { // locally override behaviour |
---|
441 | return x > y; |
---|
442 | } |
---|
443 | qsort(vals, 10); // descending sort |
---|
444 | } |
---|
445 | \end{cfacode} |
---|
Currently, there is no way to \emph{remove} a function from consideration for the duration of a scope.
---|
447 | For example, it may be desirable to eliminate assignment from a scope, to reduce accidental mutation. |
---|
448 | To address this desire, \emph{deleted functions} are a planned feature for \CFA. |
---|
449 | \begin{cfacode} |
---|
450 | forall(otype T) void f(T *); |
---|
451 | |
---|
452 | int x = 0; |
---|
453 | f(&x); // might modify x |
---|
454 | { |
---|
455 | int ?=?(int *, int) = delete; |
---|
456 | f(&x); // error, no assignment for int |
---|
457 | } |
---|
458 | \end{cfacode} |
---|
459 | Now, if the deleted function is chosen as the best match, the expression resolver emits an error. |
---|
460 | |
---|
461 | \section{Invariants} |
---|
462 | An \emph{invariant} is a logical assertion that is true for some duration of a program's execution. |
---|
463 | Invariants help a programmer to reason about code correctness and prove properties of programs. |
---|
464 | |
---|
465 | \begin{sloppypar} |
---|
466 | In object-oriented programming languages, type invariants are typically established in a constructor and maintained throughout the object's lifetime. |
---|
These assertions are typically achieved through a combination of access-control modifiers and a restricted interface.
Data that requires the maintenance of an invariant is hidden from external sources using the \emph{private} modifier, which restricts reads and writes to a select set of trusted routines, including member functions.
---|
469 | It is these trusted routines that perform all modifications to internal data in a way that is consistent with the invariant, by ensuring that the invariant holds true at the end of the routine call. |
---|
470 | \end{sloppypar} |
---|
471 | |
---|
472 | In C, the @assert@ macro is often used to ensure invariants are true. |
---|
473 | Using @assert@, the programmer can check a condition and abort execution if the condition is not true. |
---|
474 | This powerful tool forces the programmer to deal with logical inconsistencies as they occur. |
---|
For production, assertions can be removed by simply defining the preprocessor macro @NDEBUG@, making it simple to ensure that assertions are zero-cost for a performance-intensive application.
---|
476 | \begin{cfacode} |
---|
477 | struct Rational { |
---|
478 | int n, d; |
---|
479 | }; |
---|
480 | struct Rational create_rational(int n, int d) { |
---|
481 | assert(d != 0); // precondition |
---|
482 | if (d < 0) { |
---|
483 | n *= -1; |
---|
484 | d *= -1; |
---|
485 | } |
---|
486 | assert(d > 0); // postcondition |
---|
487 | // rational invariant: d > 0 |
---|
488 | return (struct Rational) { n, d }; |
---|
489 | } |
---|
490 | struct Rational rat_abs(struct Rational r) { |
---|
491 | assert(r.d > 0); // check invariant, since no access control |
---|
492 | r.n = abs(r.n); |
---|
493 | assert(r.d > 0); // ensure function preserves invariant on return value |
---|
494 | return r; |
---|
495 | } |
---|
496 | \end{cfacode} |
---|
497 | |
---|
498 | Some languages, such as D, provide language-level support for specifying program invariants. |
---|
499 | In addition to providing a C-like @assert@ expression, D allows specifying type invariants that are automatically checked at the end of a constructor, beginning of a destructor, and at the beginning and end of every public member function. |
---|
500 | \begin{dcode} |
---|
501 | import std.math; |
---|
502 | struct Rational { |
---|
503 | invariant { |
---|
504 | assert(d > 0, "d <= 0"); |
---|
505 | } |
---|
506 | int n, d; |
---|
507 | this(int n, int d) { // constructor |
---|
508 | assert(d != 0); |
---|
509 | this.n = n; |
---|
510 | this.d = d; |
---|
511 | // implicitly check invariant |
---|
512 | } |
---|
513 | Rational abs() { |
---|
514 | // implicitly check invariant |
---|
515 | return Rational(std.math.abs(n), d); |
---|
516 | // implicitly check invariant |
---|
517 | } |
---|
518 | } |
---|
519 | \end{dcode} |
---|
520 | The D compiler is able to assume that assertions and invariants hold true and perform optimizations based on those assumptions. |
---|
521 | Note, these invariants are internal to the type's correct behaviour. |
---|
522 | |
---|
523 | Types also have external invariants with the state of the execution environment, including the heap, the open-file table, the state of global variables, etc. |
---|
524 | Since resources are finite and shared (concurrency), it is important to ensure that objects clean up properly when they are finished, restoring the execution environment to a stable state so that new objects can reuse resources. |
---|
525 | |
---|
526 | \section{Resource Management} |
---|
527 | \label{s:ResMgmt} |
---|
528 | |
---|
529 | Resource management is a problem that pervades every programming language. |
---|
530 | |
---|
531 | In standard C, resource management is largely a manual effort on the part of the programmer, with a notable exception to this rule being the program stack. |
---|
532 | The program stack grows and shrinks automatically with each function call, as needed for local variables. |
---|
533 | However, whenever a program needs a variable to outlive the block it is created in, the storage must be allocated dynamically with @malloc@ and later released with @free@. |
---|
534 | This pattern is extended to more complex objects, such as files and sockets, which can also outlive the block where they are created, and thus require their own resource management. |
---|
Once allocated storage escapes\footnote{In garbage collected languages, such as Java, escape analysis \cite{Choi:1999:EAJ:320385.320386} is used to determine when dynamically allocated objects are strictly contained within a function, which allows the optimizer to allocate them on the stack.} a block, the responsibility for deallocating the storage is not specified in a function's type, \eg that the return value is owned by the caller.
---|
536 | This implicit convention is provided only through documentation about the expectations of functions. |
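For example, a function that returns dynamically allocated storage relies on a comment, rather than its type, to communicate that the caller must eventually free the result (a sketch in plain C; the function name is hypothetical):
\begin{cfacode}
// caller must free the result (stated only in this comment)
char * copy_string(const char * src) {
	char * s = malloc(strlen(src) + 1);  // storage outlives this block
	strcpy(s, src);
	return s;
}

char * msg = copy_string("hello");
// ... use msg ...
free(msg);  // nothing in copy_string's type forces this call
\end{cfacode}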
---|
537 | |
---|
538 | In other languages, a hybrid situation exists where resources escape the allocation block, but ownership is precisely controlled by the language. |
---|
539 | This pattern requires a strict interface and protocol for a data structure, consisting of a pre-initialization and a post-termination call, and all intervening access is done via interface routines. |
---|
540 | This kind of encapsulation is popular in object-oriented programming languages, and like the stack, it takes care of a significant portion of resource-management cases. |
---|
541 | |
---|
542 | For example, \CC directly supports this pattern through class types and an idiom known as RAII \footnote{Resource Acquisition is Initialization} by means of constructors and destructors. |
---|
543 | Constructors and destructors are special routines that are automatically inserted into the appropriate locations to bookend the lifetime of an object. |
---|
544 | Constructors allow the designer of a type to establish invariants for objects of that type, since it is guaranteed that every object must be initialized through a constructor. |
---|
545 | In particular, constructors allow a programmer to ensure that all objects are initially set to a valid state. |
---|
546 | On the other hand, destructors provide a simple mechanism for tearing down an object and resetting the environment in which the object lived. |
---|
547 | RAII ensures that if all resources are acquired in a constructor and released in a destructor, there are no resource leaks, even in exceptional circumstances. |
---|
548 | A type with at least one non-trivial constructor or destructor is henceforth referred to as a \emph{managed type}. |
---|
549 | In the context of \CFA, a non-trivial constructor is either a user defined constructor or an auto-generated constructor that calls a non-trivial constructor. |
---|
550 | |
---|
551 | For the remaining resource ownership cases, a programmer must follow a brittle, explicit protocol for freeing resources or an implicit protocol enforced by the programming language. |
---|
552 | |
---|
553 | In garbage collected languages, such as Java, resources are largely managed by the garbage collector. |
---|
554 | Still, garbage collectors typically focus only on memory management. |
---|
555 | There are many kinds of resources that the garbage collector does not understand, such as sockets, open files, and database connections. |
---|
556 | In particular, Java supports \emph{finalizers}, which are similar to destructors. |
---|
Unfortunately, finalizers are only guaranteed to be called before an object is reclaimed by the garbage collector \cite[p.~373]{Java8}, which may never happen if memory is not under pressure.
---|
558 | Due to operating-system resource-limits, this is unacceptable for many long running programs. |
---|
559 | Instead, the paradigm in Java requires programmers to manually keep track of all resources \emph{except} memory, leading many novices and experts alike to forget to close files, etc. |
---|
560 | Complicating the picture, uncaught exceptions can cause control flow to change dramatically, leaking a resource that appears on first glance to be released. |
---|
561 | \begin{javacode} |
---|
562 | void write(String filename, String msg) throws Exception { |
---|
563 | FileOutputStream out = new FileOutputStream(filename); |
---|
FileOutputStream log = new FileOutputStream("log.txt");
---|
565 | out.write(msg.getBytes()); |
---|
566 | log.write(msg.getBytes()); |
---|
567 | log.close(); |
---|
568 | out.close(); |
---|
569 | } |
---|
570 | \end{javacode} |
---|
571 | Any line in this program can throw an exception, which leads to a profusion of finally blocks around many function bodies, since it is not always clear when an exception may be thrown. |
---|
572 | \begin{javacode} |
---|
573 | public void write(String filename, String msg) throws Exception { |
---|
574 | FileOutputStream out = new FileOutputStream(filename); |
---|
575 | try { |
---|
576 | FileOutputStream log = new FileOutputStream("log.txt"); |
---|
577 | try { |
---|
578 | out.write(msg.getBytes()); |
---|
579 | log.write(msg.getBytes()); |
---|
580 | } finally { |
---|
581 | log.close(); |
---|
582 | } |
---|
583 | } finally { |
---|
584 | out.close(); |
---|
585 | } |
---|
586 | } |
---|
587 | \end{javacode} |
---|
588 | In Java 7, a new \emph{try-with-resources} construct was added to alleviate most of the pain of working with resources, but ultimately it still places the burden squarely on the user rather than on the library designer. |
---|
589 | Furthermore, for complete safety this pattern requires nested objects to be declared separately, otherwise resources that can throw an exception on close can leak nested resources \footnote{Since close is only guaranteed to be called on objects declared in the try-list and not objects passed as constructor parameters, the @B@ object may not be closed in @new A(new B())@ if @A@'s close raises an exception.} \cite{TryWithResources}. |
---|
590 | \begin{javacode} |
---|
591 | public void write(String filename, String msg) throws Exception { |
---|
592 | try ( // try-with-resources |
---|
593 | FileOutputStream out = new FileOutputStream(filename); |
---|
594 | FileOutputStream log = new FileOutputStream("log.txt"); |
---|
595 | ) { |
---|
596 | out.write(msg.getBytes()); |
---|
597 | log.write(msg.getBytes()); |
---|
598 | } // automatically closes out and log in every exceptional situation |
---|
599 | } |
---|
600 | \end{javacode} |
---|
Variables declared as part of a try-with-resources statement must conform to the @AutoCloseable@ interface, and the compiler implicitly calls @close@ on each of the variables at the end of the block.
---|
Depending on when the exception is raised, both @out@ and @log@ are null, only @log@ is null, or both are non-null; therefore, the cleanup for these variables at the end is automatically guarded and conditionally executed to prevent null-pointer exceptions.
---|
603 | |
---|
604 | While Rust \cite{Rust} does not enforce the use of a garbage collector, it does provide a manual memory management environment, with a strict ownership model that automatically frees allocated memory and prevents common memory management errors. |
---|
605 | In particular, a variable has ownership over its associated value, which is freed automatically when the owner goes out of scope. |
---|
606 | Furthermore, values are \emph{moved} by default on assignment, rather than copied, which invalidates the previous variable binding. |
---|
607 | \begin{rustcode} |
---|
608 | struct S { |
---|
609 | x: i32 |
---|
610 | } |
---|
611 | let s = S { x: 123 }; |
---|
612 | let z = s; // move, invalidate s |
---|
613 | println!("{}", s.x); // error, s has been moved |
---|
614 | \end{rustcode} |
---|
615 | Types can be made copyable by implementing the @Copy@ trait. |
---|
616 | |
---|
617 | Rust allows multiple unowned views into an object through references, also known as borrows, provided that a reference does not outlive its referent. |
---|
618 | A mutable reference is allowed only if it is the only reference to its referent, preventing data race errors and iterator invalidation errors. |
---|
619 | \begin{rustcode} |
---|
620 | let mut x = 10; |
---|
621 | { |
---|
622 | let y = &x; |
---|
623 | let z = &x; |
---|
624 | println!("{} {}", y, z); // prints 10 10 |
---|
625 | } |
---|
626 | { |
---|
627 | let y = &mut x; |
---|
628 | // let z1 = &x; // not allowed, have mutable reference |
---|
629 | // let z2 = &mut x; // not allowed, have mutable reference |
---|
630 | *y = 5; |
---|
631 | println!("{}", y); // prints 5 |
---|
632 | } |
---|
633 | println!("{}", x); // prints 5 |
---|
634 | \end{rustcode} |
---|
635 | Since references are not owned, they do not release resources when they go out of scope. |
---|
636 | There is no runtime cost imposed on these restrictions, since they are enforced at compile-time. |
---|
637 | |
---|
638 | Rust provides RAII through the @Drop@ trait, allowing arbitrary code to execute when the object goes out of scope, providing automatic clean up of auxiliary resources, much like a \CC program. |
---|
639 | \begin{rustcode} |
---|
640 | struct S { |
---|
641 | name: &'static str |
---|
642 | } |
---|
643 | |
---|
644 | impl Drop for S { // RAII for S |
---|
645 | fn drop(&mut self) { // destructor |
---|
646 | println!("dropped {}", self.name); |
---|
647 | } |
---|
648 | } |
---|
649 | |
---|
650 | { |
---|
651 | let x = S { name: "x" }; |
---|
652 | let y = S { name: "y" }; |
---|
653 | } // prints "dropped y" "dropped x" |
---|
654 | \end{rustcode} |
---|
655 | |
---|
656 | % D has constructors and destructors that are worth a mention (under classes) https://dlang.org/spec/spec.html |
---|
657 | % also https://dlang.org/spec/struct.html#struct-constructor |
---|
658 | % these are declared in the struct, so they're closer to C++ than to CFA, at least syntactically. Also do not allow for default constructors |
---|
659 | % D has a GC, which already makes the situation quite different from C/C++ |
---|
660 | The programming language D also manages resources with constructors and destructors \cite{D}. |
---|
661 | In D, @struct@s are stack allocatable and managed via scoping like in \CC, whereas @class@es are managed automatically by the garbage collector. |
---|
As in Java, using the garbage collector means that destructors are called indeterminately, requiring the use of finally statements to ensure that resources not managed by the garbage collector, such as open files, are cleaned up.
---|
663 | Since D supports RAII, it is possible to use the same techniques as in \CC to ensure that resources are released in a timely manner. |
---|
664 | Finally, D provides a scope guard statement, which allows an arbitrary statement to be executed at normal scope exit with \emph{success}, at exceptional scope exit with \emph{failure}, or at normal and exceptional scope exit with \emph{exit}. % https://dlang.org/spec/statement.html#ScopeGuardStatement |
---|
665 | It has been shown that the \emph{exit} form of the scope guard statement can be implemented in a library in \CC \cite{ExceptSafe}. |
---|
666 | |
---|
667 | To provide managed types in \CFA, new kinds of constructors and destructors are added to \CFA and discussed in Chapter 2. |
---|
668 | |
---|
669 | \section{Tuples} |
---|
670 | \label{s:Tuples} |
---|
671 | In mathematics, tuples are finite-length sequences which, unlike sets, are ordered and allow duplicate elements. |
---|
672 | In programming languages, tuples provide fixed-sized heterogeneous lists of elements. |
---|
673 | Many programming languages have tuple constructs, such as SETL, \KWC, ML, and Scala. |
---|
674 | |
---|
675 | \KWC, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, rather than as a full-blown data type \cite{Till89}. |
---|
676 | In particular, Till noted that C already contains a tuple context in the form of function parameter lists. |
---|
677 | The main contributions of that work were in the form of adding tuple contexts to assignment in the form of multiple assignment and mass assignment (discussed in detail in section \ref{s:TupleAssignment}), function return values (see section \ref{s:MRV_Functions}), and record field access (see section \ref{s:MemberAccessTuple}). |
---|
678 | Adding tuples to \CFA has previously been explored by Esteves \cite{Esteves04}. |
---|
679 | |
---|
680 | The design of tuples in \KWC took much of its inspiration from SETL \cite{SETL}. |
---|
681 | SETL is a high-level mathematical programming language, with tuples being one of the primary data types. |
---|
682 | Tuples in SETL allow a number of operations, including subscripting, dynamic expansion, and multiple assignment. |
---|
683 | |
---|
684 | \CCeleven introduced @std::tuple@ as a library variadic template struct. |
---|
685 | Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values. |
---|
686 | \begin{cppcode} |
---|
687 | tuple<int, int, int> triple(10, 20, 30); |
---|
688 | get<1>(triple); // access component 1 => 20 |
---|
689 | |
---|
690 | tuple<int, double> f(); |
---|
691 | int i; |
---|
692 | double d; |
---|
693 | tie(i, d) = f(); // assign fields of return value into local variables |
---|
694 | |
---|
695 | tuple<int, int, int> greater(11, 0, 0); |
---|
696 | triple < greater; // true |
---|
697 | \end{cppcode} |
---|
698 | Tuples are simple data structures with few specific operations. |
---|
699 | In particular, it is possible to access a component of a tuple using @std::get<N>@. |
---|
700 | Another interesting feature is @std::tie@, which creates a tuple of references, allowing assignment of the results of a tuple-returning function into separate local variables, without requiring a temporary variable. |
---|
701 | Tuples also support lexicographic comparisons, making it simple to write aggregate comparators using @std::tie@. |
---|
702 | |
---|
There is a proposal for \CCseventeen called \emph{structured bindings} \cite{StructuredBindings} that introduces new syntax to eliminate the need to pre-declare variables and use @std::tie@ for binding the results from a function call.
---|
704 | \begin{cppcode} |
---|
705 | tuple<int, double> f(); |
---|
706 | auto [i, d] = f(); // unpacks into new variables i, d |
---|
707 | |
---|
708 | tuple<int, int, int> triple(10, 20, 30); |
---|
709 | auto & [t1, t2, t3] = triple; |
---|
710 | t2 = 0; // changes middle element of triple |
---|
711 | |
---|
712 | struct S { int x; double y; }; |
---|
713 | S s = { 10, 22.5 }; |
---|
714 | auto [x, y] = s; // unpack s |
---|
715 | \end{cppcode} |
---|
716 | Structured bindings allow unpacking any structure with all public non-static data members into fresh local variables. |
---|
717 | The use of @&@ allows declaring new variables as references, which is something that cannot be done with @std::tie@, since \CC references do not support rebinding. |
---|
718 | This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism. |
---|
Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables.
---|
720 | |
---|
721 | Like \CC, D provides tuples through a library variadic-template structure. |
---|
722 | In D, it is possible to name the fields of a tuple type, which creates a distinct type. |
---|
723 | % http://dlang.org/phobos/std_typecons.html |
---|
724 | \begin{dcode} |
---|
725 | Tuple!(float, "x", float, "y") point2D; |
---|
726 | Tuple!(float, float) float2; // different type from point2D |
---|
727 | |
---|
728 | point2D[0]; // access first element |
---|
729 | point2D.x; // access first element |
---|
730 | |
---|
731 | float f(float x, float y) { |
---|
732 | return x+y; |
---|
733 | } |
---|
734 | |
---|
735 | f(point2D.expand); |
---|
736 | \end{dcode} |
---|
737 | Tuples are 0-indexed and can be subscripted using an integer or field name, if applicable. |
---|
738 | The @expand@ method produces the components of the tuple as a list of separate values, making it possible to call a function that takes $N$ arguments using a tuple with $N$ components. |
---|
739 | |
---|
740 | Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML \cite{sml}. |
---|
741 | A function in SML always accepts exactly one argument. |
---|
742 | There are two ways to mimic multiple argument functions: the first through currying and the second by accepting tuple arguments. |
---|
743 | \begin{smlcode} |
---|
744 | fun fact (n : int) = |
---|
745 | if (n = 0) then 1 |
---|
746 | else n*fact(n-1) |
---|
747 | |
---|
748 | fun binco (n: int, k: int) = |
---|
749 | real (fact n) / real (fact k * fact (n-k)) |
---|
750 | \end{smlcode} |
---|
751 | Here, the function @binco@ appears to take 2 arguments, but it actually takes a single argument which is implicitly decomposed via pattern matching. |
---|
752 | Tuples are a foundational tool in SML, allowing the creation of arbitrarily-complex structured data-types. |
---|
753 | |
---|
754 | Scala, like \CC, provides tuple types through the standard library \cite{Scala}. |
---|
755 | Scala provides tuples of size 1 through 22 inclusive through generic data structures. |
---|
756 | Tuples support named access and subscript access, among a few other operations. |
---|
757 | \begin{scalacode} |
---|
758 | val a = new Tuple3(0, "Text", 2.1) // explicit creation |
---|
759 | val b = (6, 'a', 1.1f) // syntactic sugar: Tuple3[Int, Char, Float] |
---|
val (i, _, d) = a // extractor syntax, ignore middle element
---|
761 | |
---|
762 | println(a._2) // named access => print "Text" |
---|
763 | println(b.productElement(0)) // subscript access => print 6 |
---|
764 | \end{scalacode} |
---|
765 | In Scala, tuples are primarily used as simple data structures for carrying around multiple values or for returning multiple values from a function. |
---|
766 | The 22-element restriction is an odd and arbitrary choice, but in practice it does not cause problems since large tuples are uncommon. |
---|
767 | Subscript access is provided through the @productElement@ method, which returns a value of the top-type @Any@, since it is impossible to receive a more precise type from a general subscripting method due to type erasure. |
---|
768 | The disparity between named access beginning at @_1@ and subscript access starting at @0@ is likewise an oddity, but subscript access is typically avoided since it discards type information. |
---|
769 | Due to the language's pattern matching facilities, it is possible to extract the values from a tuple into named variables, which is a more idiomatic way of accessing the components of a tuple. |
---|
770 | |
---|
771 | |
---|
772 | \Csharp also has tuples, but has similarly strange limitations, allowing tuples of size up to 7 components. % https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx |
---|
773 | The officially supported workaround for this shortcoming is to nest tuples in the 8th component. |
---|
774 | \Csharp allows accessing a component of a tuple by using the field @Item$N$@ for components 1 through 7, and @Rest@ for the nested tuple. |
---|
775 | |
---|
776 | In Python \cite{Python}, tuples are immutable sequences that provide packing and unpacking operations. |
---|
777 | While the tuple itself is immutable, and thus does not allow the assignment of components, there is nothing preventing a component from being internally mutable. |
---|
778 | The components of a tuple can be accessed by unpacking into multiple variables, indexing, or via field name, like D. |
---|
779 | Tuples support multiple assignment through a combination of packing and unpacking, in addition to the common sequence operations. |
---|
780 | |
---|
781 | Swift \cite{Swift}, like D, provides named tuples, with components accessed by name, index, or via extractors. |
---|
782 | Tuples are primarily used for returning multiple values from a function. |
---|
783 | In Swift, @Void@ is an alias for the empty tuple, and there are no single element tuples. |
---|
784 | |
---|
785 | Tuples comparable to those described above are added to \CFA and discussed in Chapter 3. |
---|
786 | |
---|
787 | \section{Variadic Functions} |
---|
788 | \label{sec:variadic_functions} |
---|
789 | In statically-typed programming languages, functions are typically defined to receive a fixed number of arguments of specified types. |
---|
790 | Variadic argument functions provide the ability to define a function that can receive a theoretically unbounded number of arguments. |
---|
791 | |
---|
792 | C provides a simple implementation of variadic functions. |
---|
793 | A function whose parameter list ends with @, ...@ is a variadic function. |
---|
794 | Among the most common variadic functions is @printf@. |
---|
795 | \begin{cfacode} |
---|
796 | int printf(const char * fmt, ...); |
---|
797 | printf("%d %g %c %s", 10, 3.5, 'X', "a string"); |
---|
798 | \end{cfacode} |
---|
Through the use of a format string, C programmers can communicate argument-type information to @printf@, allowing them to print any of the standard C data types.
---|
800 | Still, @printf@ is extremely limited, since the format codes are specified by the C standard, meaning users cannot define their own format codes to extend @printf@ for new data types or new formatting rules. |
---|
801 | |
---|
802 | \begin{sloppypar} |
---|
803 | C provides manipulation of variadic arguments through the @va_list@ data type, which abstracts details of the manipulation of variadic arguments. |
---|
804 | Since the variadic arguments are untyped, it is up to the function to interpret any data that is passed in. |
---|
805 | Additionally, the interface to manipulate @va_list@ objects is essentially limited to advancing to the next argument, without any built-in facility to determine when the last argument is read. |
---|
806 | This limitation requires the use of an \emph{argument descriptor} to pass information to the function about the structure of the argument list, including the number of arguments and their types. |
---|
807 | The format string in @printf@ is one such example of an argument descriptor. |
---|
808 | \begin{cfacode} |
---|
809 | int f(const char * fmt, ...) { |
---|
810 | va_list args; |
---|
811 | va_start(args, fmt); // initialize va_list |
---|
812 | for (const char * c = fmt; *c != '\0'; ++c) { |
---|
813 | if (*c == '%') { |
---|
814 | ++c; |
---|
815 | switch (*c) { |
---|
816 | case 'd': { |
---|
817 | int i = va_arg(args, int); // have to specify type |
---|
818 | // ... |
---|
819 | break; |
---|
820 | } |
---|
821 | case 'g': { |
---|
822 | double d = va_arg(args, double); |
---|
823 | // ... |
---|
824 | break; |
---|
825 | } |
---|
826 | ... |
---|
827 | } |
---|
828 | } |
---|
829 | } |
---|
830 | va_end(args); |
---|
831 | return ...; |
---|
832 | } |
---|
833 | \end{cfacode} |
---|
834 | Every case must be handled explicitly, since the @va_arg@ macro requires a type argument to determine how the next set of bytes is to be interpreted. |
---|
835 | Furthermore, if the user makes a mistake, compile-time checking is typically restricted to standard format codes and their corresponding types. |
---|
836 | In general, this means that C's variadic functions are not type-safe, making them difficult to use properly. |
---|
837 | \end{sloppypar} |
---|
838 | |
---|
839 | % When arguments are passed to a variadic function, they undergo \emph{default argument promotions}. |
---|
840 | % Specifically, this means that |
---|
841 | |
---|
842 | \CCeleven added support for \emph{variadic templates}, which add much needed type-safety to C's variadic landscape. |
---|
843 | It is possible to use variadic templates to define variadic functions and variadic data types. |
---|
844 | \begin{cppcode} |
---|
845 | void print(int); |
---|
846 | void print(char); |
---|
847 | void print(double); |
---|
848 | ... |
---|
849 | |
---|
850 | void f() {} // base case |
---|
851 | |
---|
852 | template<typename T, typename... Args> |
---|
853 | void f(const T & arg, const Args &... rest) { |
---|
854 | print(arg); // print the current element |
---|
855 | f(rest...); // handle remaining arguments recursively |
---|
856 | } |
---|
857 | \end{cppcode} |
---|
858 | Variadic templates work largely through recursion on the \emph{parameter pack}, which is the argument with @...@ following its type. |
---|
859 | A parameter pack matches 0 or more elements, which can be types or expressions depending on the context. |
---|
860 | Like other templates, variadic template functions rely on an implicit set of constraints on a type, in this example a @print@ routine. |
---|
861 | That is, it is possible to use the @f@ routine on any type provided there is a corresponding @print@ routine, making variadic templates fully open to extension, unlike variadic functions in C. |
---|
862 | |
---|
863 | Recent \CC standards (\CCfourteen, \CCseventeen) expand on the basic premise by allowing variadic template variables and providing convenient expansion syntax to remove the need for recursion in some cases, amongst other things. |
---|
864 | |
---|
865 | % D has variadic templates that deserve a mention http://dlang.org/ctarguments.html |
---|
866 | |
---|
867 | In Java, a variadic function appears similar to a C variadic function in syntax. |
---|
868 | \begin{javacode} |
---|
869 | int sum(int... args) { |
---|
870 | int s = 0; |
---|
871 | for (int x : args) { |
---|
872 | s += x; |
---|
873 | } |
---|
874 | return s; |
---|
875 | } |
---|
876 | |
---|
877 | void print(Object... objs) { |
---|
878 | for (Object obj : objs) { |
---|
879 | System.out.print(obj); |
---|
880 | } |
---|
881 | } |
---|
882 | |
---|
883 | print("The sum from 1 to 10 is ", sum(1,2,3,4,5,6,7,8,9,10), ".\n"); |
---|
884 | \end{javacode} |
---|
885 | The key difference is that Java variadic functions are type-safe, because they specify the type of the argument immediately prior to the ellipsis. |
---|
886 | In Java, variadic arguments are syntactic sugar for arrays, allowing access to length, subscripting operations, and for-each iteration on the variadic arguments, among other things. |
---|
887 | Since the argument type is specified explicitly, the top-type @Object@ can be used to accept arguments of any type, but to do anything interesting on the argument requires a down-cast to a more specific type, landing Java in a similar situation to C in that writing a function open to extension is difficult. |
---|
888 | |
---|
889 | The other option is to restrict the number of types that can be passed to the function by using a more specific type. |
---|
890 | Unfortunately, Java's use of nominal inheritance means that types must explicitly inherit from classes or interfaces in order to be considered a subclass. |
---|
891 | The combination of these two issues greatly restricts the usefulness of variadic functions in Java. |
---|
892 | |
---|
893 | Type-safe variadic functions are added to \CFA and discussed in Chapter 4. |
---|
894 | |
---|
895 | \section{Contributions} |
---|
896 | \label{s:contributions} |
---|
897 | |
---|
898 | No prior work on constructors or destructors had been done for \CFA. |
---|
899 | I did both the design and implementation work. |
---|
While the overall design is based on constructors and destructors in object-oriented \CC, it had to be re-engineered into non-object-oriented \CFA.
---|
901 | I also had to make changes to the \CFA expression-resolver to integrate constructors and destructors into the type system. |
---|
902 | |
---|
903 | Prior work on the design of tuples for \CFA was done by Till, and some initial implementation work by Esteves. |
---|
904 | I largely took the Till design but added tuple indexing, which exists in a number of programming languages with tuples, simplified the implicit tuple conversions, and integrated with the \CFA polymorphism and assertion satisfaction model. |
---|
905 | I did a new implementation of tuples, and extensively |
---|
906 | augmented initial work by Bilson to incorporate tuples into the \CFA expression-resolver and type-unifier. |
---|
907 | |
---|
908 | No prior work on variadic functions had been done for \CFA. |
---|
909 | I did both the design and implementation work. |
---|
While the overall design is based on variadic templates in \CC, my design is novel in the way it is incorporated into the \CFA polymorphism model, and is engineered into \CFA so it dovetails with tuples.
---|