1 | %====================================================================== |
---|
2 | \chapter{Introduction} |
---|
3 | %====================================================================== |
---|
4 | |
---|
5 | \section{\CFA Background} |
---|
6 | \label{s:background} |
---|
7 | \CFA \footnote{Pronounced ``C-for-all'', and written \CFA or Cforall.} is a modern non-object-oriented extension to the C programming language. |
---|
8 | As it is an extension of C, there is already a wealth of existing C code and principles that govern the design of the language. |
---|
9 | Among the goals set out in the original design of \CFA, four points stand out \cite{Bilson03}. |
---|
10 | \begin{enumerate} |
---|
11 | \item The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler. |
---|
12 | \item Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler. |
---|
13 | \item \CFA code must be at least as portable as standard C code. |
---|
14 | \item Extensions introduced by \CFA must be translated in the most efficient way possible. |
---|
15 | \end{enumerate} |
---|
16 | Therefore, these design principles must be kept in mind throughout the design and development of new language features. |
---|
17 | In order to appeal to existing C programmers, great care must be taken to ensure that new features naturally feel like C. |
---|
18 | The remainder of this section describes some of the important new features that currently exist in \CFA, to give the reader the necessary context in which the new features presented in this thesis must dovetail. |
---|
19 | |
---|
20 | \subsection{C Background} |
---|
21 | \label{sub:c_background} |
---|
22 | One of the lesser-known features of standard C is \emph{designations}. |
---|
23 | Designations are similar to named parameters in languages such as Python and Scala, except that they only apply to aggregate initializers. |
---|
24 | \begin{cfacode} |
---|
25 | struct A { |
---|
26 | int w, x, y, z; |
---|
27 | }; |
---|
A a0 = { .x:4, .z:1, .x:8 };
---|
29 | A a1 = { 1, .y:7, 6 }; |
---|
30 | A a2[4] = { [2]:a0, [0]:a1, { .z:3 } }; |
---|
31 | // equivalent to |
---|
32 | // A a0 = { 0, 8, 0, 1 }; |
---|
33 | // A a1 = { 1, 0, 7, 6 }; |
---|
34 | // A a2[4] = { a1, { 0, 0, 0, 3 }, a0, { 0, 0, 0, 0 } }; |
---|
35 | \end{cfacode} |
---|
36 | Designations allow specifying the field to initialize by name, rather than by position. |
---|
37 | Any field not explicitly initialized is initialized as if it had static storage duration \cite[p.~141]{C11}. |
---|
38 | A designator specifies the current object for initialization, and as such any undesignated sub-objects pick up where the last initialization left off. |
---|
39 | For example, in the initialization of @a1@, the initializer of @y@ is @7@, and the unnamed initializer @6@ initializes the next sub-object, @z@. |
---|
40 | Later initializers override earlier initializers, so a sub-object for which there is more than one initializer is only initialized by its last initializer. |
---|
41 | These semantics can be seen in the initialization of @a0@, where @x@ is designated twice, and thus initialized to @8@. |
---|
42 | Note that in \CFA, designations use a colon separator, rather than an equals sign as in C, because this syntax is one of the few places that conflicts with the new language features. |
---|
43 | |
---|
44 | C also provides \emph{compound literal} expressions, which provide a first-class mechanism for creating unnamed objects. |
---|
45 | \begin{cfacode} |
---|
46 | struct A { int x, y; }; |
---|
47 | int f(A, int); |
---|
48 | int g(int *); |
---|
49 | |
---|
50 | f((A){ 3, 4 }, (int){ 5 } = 10); |
---|
51 | g((int[]){ 1, 2, 3 }); |
---|
52 | g(&(int){ 0 }); |
---|
53 | \end{cfacode} |
---|
54 | Compound literals create an unnamed object, and result in an lvalue, so it is legal to assign a value into a compound literal or to take its address \cite[p.~86]{C11}. |
---|
55 | Syntactically, compound literals look like a cast operator followed by a brace-enclosed initializer, but semantically are different from a C cast, which only applies basic conversions and is never an lvalue. |
---|
56 | |
---|
57 | \subsection{Overloading} |
---|
58 | \label{sub:overloading} |
---|
59 | Overloading is the ability to specify multiple entities with the same name. |
---|
60 | The most common form of overloading is function overloading, wherein multiple functions can be defined with the same name, but with different signatures. |
---|
61 | Like in \CC, \CFA allows overloading based both on the number of parameters and on the types of parameters. |
---|
62 | \begin{cfacode} |
---|
63 | void f(void); // (1) |
---|
64 | void f(int); // (2) |
---|
65 | void f(char); // (3) |
---|
66 | |
---|
67 | f('A'); // selects (3) |
---|
68 | \end{cfacode} |
---|
69 | In this case, there are three @f@ procedures, where @f@ takes either 0 or 1 arguments, and if an argument is provided then it may be of type @int@ or of type @char@. |
---|
70 | Exactly which procedure is executed depends on the number and types of arguments passed. |
---|
If there is no exact match available, \CFA attempts to find a suitable match by applying C's built-in conversions.
---|
72 | \begin{cfacode} |
---|
73 | void g(long long); |
---|
74 | |
---|
75 | g(12345); |
---|
76 | \end{cfacode} |
---|
77 | In the above example, there is only one instance of @g@, which expects a single parameter of type @long long@. |
---|
78 | Here, the argument provided has type @int@, but since all possible values of type @int@ can be represented by a value of type @long long@, there is a safe conversion from @int@ to @long long@, and so \CFA calls the provided @g@ routine. |
---|
79 | |
---|
80 | In addition to this form of overloading, \CFA also allows overloading based on the number and types of \emph{return} values. |
---|
81 | This extension is a feature that is not available in \CC, but is available in other programming languages such as Ada \cite{Ada95}. |
---|
82 | \begin{cfacode} |
---|
83 | int g(); // (1) |
---|
84 | double g(); // (2) |
---|
85 | |
---|
86 | int x = g(); // selects (1) |
---|
87 | \end{cfacode} |
---|
88 | Here, the only difference between the signatures of the different versions of @g@ is in the return values. |
---|
89 | The result context is used to select an appropriate routine definition. |
---|
90 | In this case, the result of @g@ is assigned into a variable of type @int@, so \CFA prefers the routine that returns a single @int@, because it is an exact match. |
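When the surrounding context does not supply a target type, for example in a bare expression statement, an explicit cast can be used to indicate the desired return type; the following is a small sketch reusing the two @g@ routines above.
\begin{cfacode}
double d = g();    // selects (2), since double is an exact match for the result context
int i = (int)g();  // the cast supplies the result context, selecting (1)
\end{cfacode}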
---|
91 | |
---|
92 | There are times when a function should logically return multiple values. |
---|
Since a function in standard C can only return a single value, a programmer must either receive additional return values through parameters passed by address, or the function's designer must create a wrapper structure that packages the multiple return values.
---|
94 | \begin{cfacode} |
---|
95 | int f(int * ret) { // returns a value through parameter ret |
---|
96 | *ret = 37; |
---|
97 | return 123; |
---|
98 | } |
---|
99 | |
---|
int res1, res2; // allocate storage for both return values
---|
res1 = f(&res2); // explicitly pass storage for the second result
---|
102 | \end{cfacode} |
---|
The former solution is awkward because it requires the caller to explicitly allocate memory for $n$ result variables, even if they are only temporary values used in a subexpression or not used at all.
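For instance, even when only the primary result is of interest, the caller must still declare storage for the secondary result; a small sketch reusing @f@ from above:
\begin{cfacode}
int dummy;               // storage exists only to satisfy the interface
int r = f(&dummy) + 10;  // the secondary result is never used
\end{cfacode}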
---|
104 | The latter approach: |
---|
105 | \begin{cfacode} |
---|
106 | struct A { |
---|
107 | int x, y; |
---|
108 | }; |
---|
109 | struct A g() { // returns values through a structure |
---|
110 | return (struct A) { 123, 37 }; |
---|
111 | } |
---|
112 | struct A res3 = g(); |
---|
113 | ... res3.x ... res3.y ... // use result values |
---|
114 | \end{cfacode} |
---|
115 | requires the caller to either learn the field names of the structure or learn the names of helper routines to access the individual return values. |
---|
116 | Both solutions are syntactically unnatural. |
---|
117 | |
---|
118 | In \CFA, it is possible to directly declare a function returning multiple values. |
---|
119 | This extension provides important semantic information to the caller, since return values are only for output. |
---|
120 | \begin{cfacode} |
---|
121 | [int, int] f() { // no new type |
---|
122 | return [123, 37]; |
---|
123 | } |
---|
124 | \end{cfacode} |
---|
125 | However, the ability to return multiple values is useless without a syntax for accepting the results from the function. |
---|
126 | |
---|
127 | In standard C, return values are most commonly assigned directly into local variables, or are used as the arguments to another function call. |
---|
128 | \CFA allows both of these contexts to accept multiple return values. |
---|
129 | \begin{cfacode} |
---|
130 | int res1, res2; |
---|
131 | [res1, res2] = f(); // assign return values into local variables |
---|
132 | |
---|
133 | void g(int, int); |
---|
134 | g(f()); // pass both return values of f to g |
---|
135 | \end{cfacode} |
---|
136 | As seen in the example, it is possible to assign the results from a return value directly into local variables. |
---|
137 | These local variables can be referenced naturally, without requiring any unpacking as in structured return values. |
---|
138 | Perhaps more interesting is the fact that multiple return values can be passed to multiple parameters seamlessly, as in the call @g(f())@. |
---|
139 | In this call, the return values from @f@ are linked to the parameters of @g@ so that each of the return values is passed directly to the corresponding parameter of @g@, without any explicit storing, unpacking, or additional naming. |
---|
140 | |
---|
141 | An extra quirk introduced by multiple return values is in the resolution of function calls. |
---|
142 | \begin{cfacode} |
---|
143 | int f(); // (1) |
---|
144 | [int, int] f(); // (2) |
---|
145 | |
---|
146 | void g(int, int); |
---|
147 | |
---|
148 | int x, y; |
---|
149 | [x, y] = f(); // selects (2) |
---|
150 | g(f()); // selects (2) |
---|
151 | \end{cfacode} |
---|
152 | In this example, the only possible call to @f@ that can produce the two @int@s required for assigning into the variables @x@ and @y@ is the second option. |
---|
A similar line of reasoning holds for the call to @g@.
---|
154 | |
---|
155 | In \CFA, overloading also applies to operator names, known as \emph{operator overloading}. |
---|
156 | Similar to function overloading, a single operator is given multiple meanings by defining new versions of the operator with different signatures. |
---|
157 | In \CC, this can be done as follows |
---|
158 | \begin{cppcode} |
---|
159 | struct A { int i; }; |
---|
160 | int operator+(A x, A y); |
---|
161 | bool operator<(A x, A y); |
---|
162 | \end{cppcode} |
---|
163 | |
---|
164 | In \CFA, the same example can be written as follows. |
---|
165 | \begin{cfacode} |
---|
166 | struct A { int i; }; |
---|
167 | int ?+?(A x, A y); |
---|
168 | bool ?<?(A x, A y); |
---|
169 | \end{cfacode} |
---|
170 | Notably, the only difference is syntax. |
---|
171 | Most of the operators supported by \CC for operator overloading are also supported in \CFA. |
---|
Notable exceptions are the logical operators (e.g., @||@), the sequence operator (i.e., @,@), and the member-access operators (e.g., @.@ and \lstinline{->}).
---|
173 | |
---|
174 | Finally, \CFA also permits overloading variable identifiers. |
---|
175 | This feature is not available in \CC. |
---|
176 | \begin{cfacode} |
---|
177 | struct Rational { int numer, denom; }; |
---|
178 | int x = 3; // (1) |
---|
179 | double x = 1.27; // (2) |
---|
180 | Rational x = { 4, 11 }; // (3) |
---|
181 | |
---|
182 | void g(double); |
---|
183 | |
---|
184 | x += 1; // chooses (1) |
---|
185 | g(x); // chooses (2) |
---|
186 | Rational y = x; // chooses (3) |
---|
187 | \end{cfacode} |
---|
188 | In this example, there are three definitions of the variable @x@. |
---|
For each use, \CFA chooses the variable whose type best matches the expression context.
---|
190 | When used judiciously, this feature allows names like @MAX@, @MIN@, and @PI@ to apply across many types. |
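For example, a sketch of overloaded limit constants, assuming the standard C limit macros from @limits.h@ and @float.h@:
\begin{cfacode}
#include <limits.h>
#include <float.h>

const int MAX = INT_MAX;     // (1)
const double MAX = DBL_MAX;  // (2)

int i = MAX;     // selects (1)
double d = MAX;  // selects (2)
\end{cfacode}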
---|
191 | |
---|
In addition, the values @0@ and @1@ have special status in standard C.
---|
193 | In particular, the value @0@ is both an integer and a pointer literal, and thus its meaning depends on the context. |
---|
194 | In addition, several operations can be redefined in terms of other operations and the values @0@ and @1@. |
---|
195 | For example, |
---|
196 | \begin{cfacode} |
---|
197 | int x; |
---|
198 | if (x) { // if (x != 0) |
---|
199 | x++; // x += 1; |
---|
200 | } |
---|
201 | \end{cfacode} |
---|
202 | Every if- and iteration-statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. |
---|
Due to these rewrite rules, the values @0@ and @1@ have the types \zero and \one in \CFA, which allows overloading the various operations that implicitly use @0@ and @1@\footnote{In the original design of \CFA, @0@ and @1@ were overloadable names \cite[p.~7]{cforall}.}.
---|
204 | The types \zero and \one have special built-in implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work as normal. |
---|
205 | \begin{cfacode} |
---|
206 | // lvalue is similar to returning a reference in C++ |
---|
207 | lvalue Rational ?+=?(Rational *a, Rational b); |
---|
208 | Rational ?=?(Rational * dst, zero_t) { |
---|
209 | return *dst = (Rational){ 0, 1 }; |
---|
210 | } |
---|
211 | |
---|
212 | Rational sum(Rational *arr, int n) { |
---|
213 | Rational r; |
---|
214 | r = 0; // use rational-zero_t assignment |
---|
215 | for (; n > 0; n--) { |
---|
216 | r += arr[n-1]; |
---|
217 | } |
---|
218 | return r; |
---|
219 | } |
---|
220 | \end{cfacode} |
---|
221 | This function takes an array of @Rational@ objects and produces the @Rational@ representing the sum of the array. |
---|
222 | Note the use of an overloaded assignment operator to set an object of type @Rational@ to an appropriate @0@ value. |
---|
223 | |
---|
224 | \subsection{Polymorphism} |
---|
225 | \label{sub:polymorphism} |
---|
226 | In its most basic form, polymorphism grants the ability to write a single block of code that accepts different types. |
---|
227 | In particular, \CFA supports the notion of parametric polymorphism. |
---|
228 | Parametric polymorphism allows a function to be written generically, for all values of all types, without regard to the specifics of a particular type. |
---|
229 | For example, in \CC, the simple identity function for all types can be written as |
---|
230 | \begin{cppcode} |
---|
231 | template<typename T> |
---|
232 | T identity(T x) { return x; } |
---|
233 | \end{cppcode} |
---|
234 | \CC uses the template mechanism to support parametric polymorphism. In \CFA, an equivalent function can be written as |
---|
235 | \begin{cfacode} |
---|
236 | forall(otype T) |
---|
237 | T identity(T x) { return x; } |
---|
238 | \end{cfacode} |
---|
239 | Once again, the only visible difference in this example is syntactic. |
---|
240 | Fundamental differences can be seen by examining more interesting examples. |
---|
241 | In \CC, a generic sum function is written as follows |
---|
242 | \begin{cppcode} |
---|
243 | template<typename T> |
---|
244 | T sum(T *arr, int n) { |
---|
245 | T t; |
---|
246 | for (; n > 0; n--) t += arr[n-1]; |
---|
247 | return t; |
---|
248 | } |
---|
249 | \end{cppcode} |
---|
Here, the code assumes the existence of a default constructor, a copy constructor for the return value, and an addition-assignment operator for the provided type @T@.
---|
251 | If any of these required operators are not available, the \CC compiler produces an error message stating which operators could not be found. |
---|
252 | |
---|
253 | A similar sum function can be written in \CFA as follows |
---|
254 | \begin{cfacode} |
---|
forall(otype T | { T ?=?(T *, zero_t); T ?+=?(T *, T); })
---|
256 | T sum(T *arr, int n) { |
---|
257 | T t = 0; |
---|
for (; n > 0; n--) t += arr[n-1];
---|
259 | return t; |
---|
260 | } |
---|
261 | \end{cfacode} |
---|
262 | The first thing to note here is that immediately following the declaration of @otype T@ is a list of \emph{type assertions} that specify restrictions on acceptable choices of @T@. |
---|
In particular, the assertions above specify that there must be an assignment from \zero to @T@ and an addition-assignment operator from @T@ to @T@.
---|
264 | The existence of an assignment operator from @T@ to @T@ and the ability to create an object of type @T@ are assumed implicitly by declaring @T@ with the @otype@ type-class. |
---|
265 | In addition to @otype@, there are currently two other type-classes. |
---|
The three type-parameter kinds are summarized in \autoref{table:types}.
---|
267 | |
---|
268 | \begin{table}[h!] |
---|
269 | \begin{center} |
---|
270 | \begin{tabular}{|c||c|c|c||c|c|c|} |
---|
271 | \hline |
---|
272 | name & object type & incomplete type & function type & can assign value & can create & has size \\ \hline |
---|
273 | @otype@ & X & & & X & X & X \\ \hline |
---|
274 | @dtype@ & X & X & & & & \\ \hline |
---|
275 | @ftype@ & & & X & & & \\ \hline |
---|
276 | \end{tabular} |
---|
277 | \end{center} |
---|
278 | \caption{\label{table:types} The different kinds of type parameters in \CFA} |
---|
279 | \end{table} |
---|
280 | |
---|
281 | A major difference between the approaches of \CC and \CFA to polymorphism is that the set of assumed properties for a type is \emph{explicit} in \CFA. |
---|
282 | One of the major limiting factors of \CC's approach is that templates cannot be separately compiled. |
---|
283 | In contrast, the explicit nature of assertions allows \CFA's polymorphic functions to be separately compiled. |
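As a rough sketch of why this is possible (not the exact \CFA translation scheme), the polymorphic @sum@ can be compiled once into a single routine that manipulates @T@ indirectly through its size and receives its assertions as explicit function-pointer parameters:
\begin{cfacode}
// hypothetical translation sketch of the polymorphic sum routine
void sum_T(void * ret, char * arr, int n, unsigned long size_T,
           void (*assign_zero)(void *),           // T ?=?(T *, zero_t)
           void (*plus_assign)(void *, void *)) { // T ?+=?(T *, T)
	assign_zero(ret);
	for (; n > 0; n--) plus_assign(ret, arr + (n - 1) * size_T);
}
\end{cfacode}
A call site that knows the concrete type passes the appropriate size and routines, so no per-type copy of @sum@ needs to be generated.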
---|
284 | |
---|
285 | In \CFA, a set of assertions can be factored into a \emph{trait}. |
---|
286 | \begin{cfacode} |
---|
287 | trait Addable(otype T) { |
---|
288 | T ?+?(T, T); |
---|
289 | T ++?(T); |
---|
290 | T ?++(T); |
---|
291 | } |
---|
292 | forall(otype T | Addable(T)) void f(T); |
---|
293 | forall(otype T | Addable(T) | { T --?(T); }) T g(T); |
---|
294 | forall(otype T, U | Addable(T) | { T ?/?(T, U); }) U h(T, U); |
---|
295 | \end{cfacode} |
---|
296 | This capability allows specifying the same set of assertions in multiple locations, without the repetition and likelihood of mistakes that come with manually writing them out for each function declaration. |
---|
297 | |
---|
298 | An interesting application of return-type resolution and polymorphism is with type-safe @malloc@. |
---|
299 | \begin{cfacode} |
---|
300 | forall(dtype T | sized(T)) |
---|
301 | T * malloc() { |
---|
302 | return (T*)malloc(sizeof(T)); // call C malloc |
---|
303 | } |
---|
304 | int * x = malloc(); // malloc(sizeof(int)) |
---|
305 | double * y = malloc(); // malloc(sizeof(double)) |
---|
306 | |
---|
307 | struct S { ... }; |
---|
308 | S * s = malloc(); // malloc(sizeof(S)) |
---|
309 | \end{cfacode} |
---|
310 | The built-in trait @sized@ ensures that size and alignment information for @T@ is available in the body of @malloc@ through @sizeof@ and @_Alignof@ expressions respectively. |
---|
311 | In calls to @malloc@, the type @T@ is bound based on call-site information, allowing \CFA code to allocate memory without the potential for errors introduced by manually specifying the size of the allocated block. |
---|
312 | |
---|
313 | \section{Invariants} |
---|
314 | An \emph{invariant} is a logical assertion that is true for some duration of a program's execution. |
---|
315 | Invariants help a programmer to reason about code correctness and prove properties of programs. |
---|
316 | |
---|
317 | In object-oriented programming languages, type invariants are typically established in a constructor and maintained throughout the object's lifetime. |
---|
318 | These assertions are typically achieved through a combination of access control modifiers and a restricted interface. |
---|
319 | Typically, data which requires the maintenance of an invariant is hidden from external sources using the \emph{private} modifier, which restricts reads and writes to a select set of trusted routines, including member functions. |
---|
320 | It is these trusted routines that perform all modifications to internal data in a way that is consistent with the invariant, by ensuring that the invariant holds true at the end of the routine call. |
---|
321 | |
---|
322 | In C, the @assert@ macro is often used to ensure invariants are true. |
---|
323 | Using @assert@, the programmer can check a condition and abort execution if the condition is not true. |
---|
324 | This powerful tool forces the programmer to deal with logical inconsistencies as they occur. |
---|
For production builds, assertions can be removed by simply defining the preprocessor macro @NDEBUG@, making it easy to ensure that assertions are zero-cost for a performance-intensive application.
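For instance, a minimal sketch of disabling the checks in one translation unit:
\begin{cfacode}
#define NDEBUG       // must be defined before including the header
#include <assert.h>
// assert(expr) now expands to a no-op, so the checks cost nothing
\end{cfacode}
The following example uses @assert@ to check a precondition, a postcondition, and a type invariant for a simple rational-number type.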
---|
326 | \begin{cfacode} |
---|
327 | struct Rational { |
---|
328 | int n, d; |
---|
329 | }; |
---|
330 | struct Rational create_rational(int n, int d) { |
---|
331 | assert(d != 0); // precondition |
---|
332 | if (d < 0) { |
---|
333 | n *= -1; |
---|
334 | d *= -1; |
---|
335 | } |
---|
336 | assert(d > 0); // postcondition |
---|
337 | // rational invariant: d > 0 |
---|
338 | return (struct Rational) { n, d }; |
---|
339 | } |
---|
340 | struct Rational rat_abs(struct Rational r) { |
---|
341 | assert(r.d > 0); // check invariant, since no access control |
---|
342 | r.n = abs(r.n); |
---|
343 | assert(r.d > 0); // ensure function preserves invariant on return value |
---|
344 | return r; |
---|
345 | } |
---|
346 | \end{cfacode} |
---|
347 | |
---|
348 | Some languages, such as D, provide language-level support for specifying program invariants. |
---|
349 | In addition to providing a C-like @assert@ expression, D allows specifying type invariants that are automatically checked at the end of a constructor, beginning of a destructor, and at the beginning and end of every public member function. |
---|
350 | \begin{dcode} |
---|
351 | import std.math; |
---|
352 | struct Rational { |
---|
353 | invariant { |
---|
354 | assert(d > 0, "d <= 0"); |
---|
355 | } |
---|
356 | int n, d; |
---|
357 | this(int n, int d) { // constructor |
---|
358 | assert(d != 0); |
---|
359 | this.n = n; |
---|
360 | this.d = d; |
---|
361 | // implicitly check invariant |
---|
362 | } |
---|
363 | Rational abs() { |
---|
364 | // implicitly check invariant |
---|
365 | return Rational(std.math.abs(n), d); |
---|
366 | // implicitly check invariant |
---|
367 | } |
---|
368 | } |
---|
369 | \end{dcode} |
---|
370 | The D compiler is able to assume that assertions and invariants hold true and perform optimizations based on those assumptions. |
---|
371 | Note, these invariants are internal to the type's correct behaviour. |
---|
372 | |
---|
Types also have external invariants relating them to the state of the execution environment, including the heap, the open-file table, the state of global variables, etc.
---|
Since resources are finite and shared among concurrent computations, it is important to ensure that objects clean up properly when they are finished, restoring the execution environment to a stable state so that new objects can reuse resources.
---|
375 | |
---|
376 | \section{Resource Management} |
---|
377 | \label{s:ResMgmt} |
---|
378 | |
---|
379 | Resource management is a problem that pervades every programming language. |
---|
380 | |
---|
381 | In standard C, resource management is largely a manual effort on the part of the programmer, with a notable exception to this rule being the program stack. |
---|
382 | The program stack grows and shrinks automatically with each function call, as needed for local variables. |
---|
383 | However, whenever a program needs a variable to outlive the block it is created in, the storage must be allocated dynamically with @malloc@ and later released with @free@. |
---|
384 | This pattern is extended to more complex objects, such as files and sockets, which can also outlive the block where they are created, and thus require their own resource management. |
---|
Once allocated storage escapes\footnote{In garbage collected languages, such as Java, escape analysis \cite{Choi:1999:EAJ:320385.320386} is used to determine when dynamically allocated objects are strictly contained within a function, which allows the optimizer to allocate them on the stack.} a block, the responsibility for deallocating the storage is not specified in a function's type, for example, whether the return value is owned by the caller.
---|
386 | This implicit convention is provided only through documentation about the expectations of functions. |
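For example, consider a hypothetical C routine that transfers ownership of heap storage to its caller; nothing in the signature states that the caller must eventually call @free@ on the result:
\begin{cfacode}
#include <stdlib.h>
#include <string.h>

// the caller owns the returned storage, but only the comment says so
char * copy_string(const char * s) {
	char * ret = malloc(strlen(s) + 1);
	if (ret != NULL) strcpy(ret, s);
	return ret;
}
\end{cfacode}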
---|
387 | |
---|
388 | In other languages, a hybrid situation exists where resources escape the allocation block, but ownership is precisely controlled by the language. |
---|
389 | This pattern requires a strict interface and protocol for a data structure, consisting of a pre-initialization and a post-termination call, and all intervening access is done via interface routines. |
---|
390 | This kind of encapsulation is popular in object-oriented programming languages, and like the stack, it takes care of a significant portion of resource management cases. |
---|
391 | |
---|
For example, \CC directly supports this pattern through class types and an idiom known as RAII\footnote{Resource Acquisition Is Initialization} by means of constructors and destructors.
---|
393 | Constructors and destructors are special routines that are automatically inserted into the appropriate locations to bookend the lifetime of an object. |
---|
394 | Constructors allow the designer of a type to establish invariants for objects of that type, since it is guaranteed that every object must be initialized through a constructor. |
---|
395 | In particular, constructors allow a programmer to ensure that all objects are initially set to a valid state. |
---|
396 | On the other hand, destructors provide a simple mechanism for tearing down an object and resetting the environment in which the object lived. |
---|
397 | RAII ensures that if all resources are acquired in a constructor and released in a destructor, there are no resource leaks, even in exceptional circumstances. |
---|
398 | A type with at least one non-trivial constructor or destructor is henceforth referred to as a \emph{managed type}. |
---|
399 | In the context of \CFA, a non-trivial constructor is either a user defined constructor or an auto-generated constructor that calls a non-trivial constructor. |
---|
400 | |
---|
For the remaining resource-ownership cases, the programmer must follow a brittle, explicit protocol for freeing resources, or an implicit protocol implemented by the programming language.
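As a sketch of the explicit protocol in C, every exit path must release exactly the resources acquired so far, which becomes increasingly error-prone as a routine grows; the routine and resource names here are hypothetical:
\begin{cfacode}
#include <stdio.h>
#include <stdlib.h>

int process(const char * fname) {
	FILE * f = fopen(fname, "r");
	if (f == NULL) return -1;
	char * buf = malloc(4096);
	if (buf == NULL) { fclose(f); return -1; }  // must remember to close f
	// ... use f and buf ...
	free(buf);                                  // release on the successful path too
	fclose(f);
	return 0;
}
\end{cfacode}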
---|
402 | |
---|
403 | In garbage collected languages, such as Java, resources are largely managed by the garbage collector. |
---|
Still, garbage collectors typically focus only on memory management.
---|
405 | There are many kinds of resources that the garbage collector does not understand, such as sockets, open files, and database connections. |
---|
406 | In particular, Java supports \emph{finalizers}, which are similar to destructors. |
---|
Sadly, finalizers are only guaranteed to be called before an object is reclaimed by the garbage collector \cite[p.~373]{Java8}, which may never happen if memory pressure is low.
---|
408 | Due to operating-system resource-limits, this is unacceptable for many long running programs. |
---|
409 | Instead, the paradigm in Java requires programmers to manually keep track of all resources \emph{except} memory, leading many novices and experts alike to forget to close files, etc. |
---|
410 | Complicating the picture, uncaught exceptions can cause control flow to change dramatically, leaking a resource that appears on first glance to be released. |
---|
411 | \begin{javacode} |
---|
412 | void write(String filename, String msg) throws Exception { |
---|
413 | FileOutputStream out = new FileOutputStream(filename); |
---|
FileOutputStream log = new FileOutputStream("log.txt");
---|
415 | out.write(msg.getBytes()); |
---|
416 | log.write(msg.getBytes()); |
---|
417 | log.close(); |
---|
418 | out.close(); |
---|
419 | } |
---|
420 | \end{javacode} |
---|
421 | Any line in this program can throw an exception, which leads to a profusion of finally blocks around many function bodies, since it is not always clear when an exception may be thrown. |
---|
422 | \begin{javacode} |
---|
423 | public void write(String filename, String msg) throws Exception { |
---|
424 | FileOutputStream out = new FileOutputStream(filename); |
---|
425 | try { |
---|
426 | FileOutputStream log = new FileOutputStream("log.txt"); |
---|
427 | try { |
---|
428 | out.write(msg.getBytes()); |
---|
429 | log.write(msg.getBytes()); |
---|
430 | } finally { |
---|
431 | log.close(); |
---|
432 | } |
---|
433 | } finally { |
---|
434 | out.close(); |
---|
435 | } |
---|
436 | } |
---|
437 | \end{javacode} |
---|
438 | In Java 7, a new \emph{try-with-resources} construct was added to alleviate most of the pain of working with resources, but ultimately it still places the burden squarely on the user rather than on the library designer. |
---|
439 | Furthermore, for complete safety this pattern requires nested objects to be declared separately, otherwise resources that can throw an exception on close can leak nested resources \cite{TryWithResources}. |
---|
440 | \begin{javacode} |
---|
441 | public void write(String filename, String msg) throws Exception { |
---|
442 | try ( // try-with-resources |
---|
443 | FileOutputStream out = new FileOutputStream(filename); |
---|
444 | FileOutputStream log = new FileOutputStream("log.txt"); |
---|
445 | ) { |
---|
446 | out.write(msg.getBytes()); |
---|
447 | log.write(msg.getBytes()); |
---|
448 | } // automatically closes out and log in every exceptional situation |
---|
449 | } |
---|
450 | \end{javacode} |
---|
Variables declared as part of a try-with-resources statement must conform to the @AutoCloseable@ interface, and the compiler implicitly calls @close@ on each of the variables at the end of the block.
---|
Depending on when an exception is raised, both @out@ and @log@ are null, only @log@ is null, or both are non-null; therefore, the cleanup for these variables at the end of the block is appropriately guarded and conditionally executed to prevent null-pointer exceptions.
---|
453 | |
---|
While Rust \cite{Rust} does not rely on a garbage collector, it also does not leave memory management entirely to the programmer; instead, it provides a strict ownership model that automatically frees allocated memory and prevents common memory-management errors.
---|
455 | In particular, a variable has ownership over its associated value, which is freed automatically when the owner goes out of scope. |
---|
456 | Furthermore, values are \emph{moved} by default on assignment, rather than copied, which invalidates the previous variable binding. |
---|
457 | \begin{rustcode} |
---|
458 | struct S { |
---|
459 | x: i32 |
---|
460 | } |
---|
461 | let s = S { x: 123 }; |
---|
462 | let z = s; // move, invalidate s |
---|
463 | println!("{}", s.x); // error, s has been moved |
---|
464 | \end{rustcode} |
---|
465 | Types can be made copyable by implementing the @Copy@ trait. |
---|
466 | |
---|
467 | Rust allows multiple unowned views into an object through references, also known as borrows, provided that a reference does not outlive its referent. |
---|
468 | A mutable reference is allowed only if it is the only reference to its referent, preventing data race errors and iterator invalidation errors. |
---|
469 | \begin{rustcode} |
---|
470 | let mut x = 10; |
---|
471 | { |
---|
472 | let y = &x; |
---|
473 | let z = &x; |
---|
474 | println!("{} {}", y, z); // prints 10 10 |
---|
475 | } |
---|
476 | { |
---|
477 | let y = &mut x; |
---|
478 | // let z1 = &x; // not allowed, have mutable reference |
---|
479 | // let z2 = &mut x; // not allowed, have mutable reference |
---|
480 | *y = 5; |
---|
481 | println!("{}", y); // prints 5 |
---|
482 | } |
---|
483 | println!("{}", x); // prints 5 |
---|
484 | \end{rustcode} |
---|
485 | Since references are not owned, they do not release resources when they go out of scope. |
---|
These restrictions impose no runtime cost, since they are enforced at compile time.
---|
487 | |
---|
Rust provides RAII through the @Drop@ trait, which runs arbitrary code when an object goes out of scope, allowing Rust programs to automatically clean up auxiliary resources much like a \CC program.
---|
489 | \begin{rustcode} |
---|
490 | struct S { |
---|
491 | name: &'static str |
---|
492 | } |
---|
493 | |
---|
494 | impl Drop for S { // RAII for S |
---|
495 | fn drop(&mut self) { |
---|
496 | println!("dropped {}", self.name); |
---|
497 | } |
---|
498 | } |
---|
499 | |
---|
500 | { |
---|
501 | let x = S { name: "x" }; |
---|
502 | let y = S { name: "y" }; |
---|
503 | } // prints "dropped y" "dropped x" |
---|
504 | \end{rustcode} |
---|
505 | |
---|
506 | % D has constructors and destructors that are worth a mention (under classes) https://dlang.org/spec/spec.html |
---|
507 | % also https://dlang.org/spec/struct.html#struct-constructor |
---|
508 | % these are declared in the struct, so they're closer to C++ than to CFA, at least syntactically. Also do not allow for default constructors |
---|
509 | % D has a GC, which already makes the situation quite different from C/C++ |
---|
510 | The programming language, D, also manages resources with constructors and destructors \cite{D}. |
---|
511 | In D, @struct@s are stack allocated and managed via scoping like in \CC, whereas @class@es are managed automatically by the garbage collector. |
---|
As in Java, garbage collection means that destructors are called at an indeterminate time, requiring the use of finally statements to ensure that resources not managed by the garbage collector, such as open files, are cleaned up.
---|
513 | Since D supports RAII, it is possible to use the same techniques as in \CC to ensure that resources are released in a timely manner. |
---|
514 | Finally, D provides a scope guard statement, which allows an arbitrary statement to be executed at normal scope exit with \emph{success}, at exceptional scope exit with \emph{failure}, or at normal and exceptional scope exit with \emph{exit}. % https://dlang.org/spec/statement.html#ScopeGuardStatement |
---|
515 | It has been shown that the \emph{exit} form of the scope guard statement can be implemented in a library in \CC \cite{ExceptSafe}. |
---|
516 | |
---|
517 | To provide managed types in \CFA, new kinds of constructors and destructors are added to \CFA and discussed in Chapter 2. |
---|
518 | |
---|
519 | \section{Tuples} |
---|
520 | \label{s:Tuples} |
---|
521 | In mathematics, tuples are finite-length sequences which, unlike sets, are ordered and allow duplicate elements. |
---|
522 | In programming languages, tuples provide fixed-sized heterogeneous lists of elements. |
---|
523 | Many programming languages have tuple constructs, such as SETL, \KWC, ML, and Scala. |
---|
524 | |
---|
525 | \KWC, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, rather than as a full-blown data type \cite{Till89}. |
---|
526 | In particular, Till noted that C already contains a tuple context in the form of function parameter lists. |
---|
The main contributions of that work were the addition of tuple contexts to assignment, in the form of multiple and mass assignment (discussed in detail in section \ref{s:TupleAssignment}), to function return values (see section \ref{s:MRV_Functions}), and to record field access (see section \ref{s:MemberAccessTuple}).
---|
528 | Adding tuples to \CFA has previously been explored by Esteves \cite{Esteves04}. |
---|
529 | |
---|
530 | The design of tuples in \KWC took much of its inspiration from SETL \cite{SETL}. |
---|
531 | SETL is a high-level mathematical programming language, with tuples being one of the primary data types. |
---|
532 | Tuples in SETL allow a number of operations, including subscripting, dynamic expansion, and multiple assignment. |
---|
533 | |
---|
534 | \CCeleven introduced @std::tuple@ as a library variadic template struct. |
---|
535 | Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values. |
---|
536 | \begin{cppcode} |
---|
537 | tuple<int, int, int> triple(10, 20, 30); |
---|
538 | get<1>(triple); // access component 1 => 20 |
---|
539 | |
---|
540 | tuple<int, double> f(); |
---|
541 | int i; |
---|
542 | double d; |
---|
543 | tie(i, d) = f(); // assign fields of return value into local variables |
---|
544 | |
---|
545 | tuple<int, int, int> greater(11, 0, 0); |
---|
546 | triple < greater; // true |
---|
547 | \end{cppcode} |
---|
548 | Tuples are simple data structures with few specific operations. |
---|
549 | In particular, it is possible to access a component of a tuple using @std::get<N>@. |
---|
550 | Another interesting feature is @std::tie@, which creates a tuple of references, allowing assignment of the results of a tuple-returning function into separate local variables, without requiring a temporary variable. |
---|
551 | Tuples also support lexicographic comparisons, making it simple to write aggregate comparators using @std::tie@. |
---|
552 | |
---|
There is a proposal for \CCseventeen called \emph{structured bindings} \cite{StructuredBindings}, which introduces new syntax to eliminate the need to pre-declare variables and use @std::tie@ for binding the results of a function call.
---|
554 | \begin{cppcode} |
---|
555 | tuple<int, double> f(); |
---|
556 | auto [i, d] = f(); // unpacks into new variables i, d |
---|
557 | |
---|
558 | tuple<int, int, int> triple(10, 20, 30); |
---|
559 | auto & [t1, t2, t3] = triple; |
---|
560 | t2 = 0; // changes triple |
---|
561 | |
---|
562 | struct S { int x; double y; }; |
---|
563 | S s = { 10, 22.5 }; |
---|
564 | auto [x, y] = s; // unpack s |
---|
565 | \end{cppcode} |
---|
566 | Structured bindings allow unpacking any struct with all public non-static data members into fresh local variables. |
---|
567 | The use of @&@ allows declaring new variables as references, which is something that cannot be done with @std::tie@, since \CC references do not support rebinding. |
---|
568 | This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism. |
---|
Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables.
---|
570 | |
---|
571 | Like \CC, D provides tuples through a library variadic template struct. |
---|
572 | In D, it is possible to name the fields of a tuple type, which creates a distinct type. |
---|
573 | % http://dlang.org/phobos/std_typecons.html |
---|
574 | \begin{dcode} |
---|
575 | Tuple!(float, "x", float, "y") point2D; |
---|
576 | Tuple!(float, float) float2; // different type from point2D |
---|
577 | |
---|
578 | point2D[0]; // access first element |
---|
579 | point2D.x; // access first element |
---|
580 | |
---|
581 | float f(float x, float y) { |
---|
582 | return x+y; |
---|
583 | } |
---|
584 | |
---|
585 | f(point2D.expand); |
---|
586 | \end{dcode} |
---|
587 | Tuples are 0-indexed and can be subscripted using an integer or field name, if applicable. |
---|
588 | The @expand@ method produces the components of the tuple as a list of separate values, making it possible to call a function that takes $N$ arguments using a tuple with $N$ components. |
---|
589 | |
---|
590 | Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML \cite{sml}. |
---|
591 | A function in SML always accepts exactly one argument. |
---|
592 | There are two ways to mimic multiple argument functions: the first through currying and the second by accepting tuple arguments. |
---|
593 | \begin{smlcode} |
---|
594 | fun fact (n : int) = |
---|
595 | if (n = 0) then 1 |
---|
596 | else n*fact(n-1) |
---|
597 | |
---|
598 | fun binco (n: int, k: int) = |
---|
599 | real (fact n) / real (fact k * fact (n-k)) |
---|
600 | \end{smlcode} |
---|
601 | Here, the function @binco@ appears to take 2 arguments, but it actually takes a single argument which is implicitly decomposed via pattern matching. |
---|
602 | Tuples are a foundational tool in SML, allowing the creation of arbitrarily complex structured data types. |
---|
603 | |
---|
604 | Scala, like \CC, provides tuple types through the standard library \cite{Scala}. |
---|
605 | Scala provides tuples of size 1 through 22 inclusive through generic data structures. |
---|
606 | Tuples support named access and subscript access, among a few other operations. |
---|
607 | \begin{scalacode} |
---|
608 | val a = new Tuple3[Int, String, Double](0, "Text", 2.1) // explicit creation |
---|
609 | val b = (6, 'a', 1.1f) // syntactic sugar for Tuple3[Int, Char, Float] |
---|
val (i, _, d) = a // extractor syntax, ignore middle element
---|
611 | |
---|
612 | println(a._2) // named access => print "Text" |
---|
613 | println(b.productElement(0)) // subscript access => print 6 |
---|
614 | \end{scalacode} |
---|
615 | In Scala, tuples are primarily used as simple data structures for carrying around multiple values or for returning multiple values from a function. |
---|
616 | The 22-element restriction is an odd and arbitrary choice, but in practice it does not cause problems since large tuples are uncommon. |
---|
617 | Subscript access is provided through the @productElement@ method, which returns a value of the top-type @Any@, since it is impossible to receive a more precise type from a general subscripting method due to type erasure. |
---|
618 | The disparity between named access beginning at @_1@ and subscript access starting at @0@ is likewise an oddity, but subscript access is typically avoided since it discards type information. |
---|
619 | Due to the language's pattern matching facilities, it is possible to extract the values from a tuple into named variables, which is a more idiomatic way of accessing the components of a tuple. |
---|
620 | |
---|
621 | |
---|
\Csharp also has tuples, but with a similarly arbitrary limitation, allowing tuples of at most 7 components. % https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx
---|
623 | The officially supported workaround for this shortcoming is to nest tuples in the 8th component. |
---|
624 | \Csharp allows accessing a component of a tuple by using the field @Item$N$@ for components 1 through 7, and @Rest@ for the nested tuple. |
---|
625 | |
---|
626 | In Python \cite{Python}, tuples are immutable sequences that provide packing and unpacking operations. |
---|
627 | While the tuple itself is immutable, and thus does not allow the assignment of components, there is nothing preventing a component from being internally mutable. |
---|
The components of a tuple can be accessed by unpacking into multiple variables or by indexing; named tuples (@collections.namedtuple@) additionally allow access by field name, like D.
---|
629 | Tuples support multiple assignment through a combination of packing and unpacking, in addition to the common sequence operations. |
---|
630 | |
---|
631 | Swift \cite{Swift}, like D, provides named tuples, with components accessed by name, index, or via extractors. |
---|
632 | Tuples are primarily used for returning multiple values from a function. |
---|
633 | In Swift, @Void@ is an alias for the empty tuple, and there are no single element tuples. |
---|
634 | |
---|
635 | Tuples comparable to those described above are added to \CFA and discussed in Chapter 3. |
---|
636 | |
---|
637 | \section{Variadic Functions} |
---|
638 | \label{sec:variadic_functions} |
---|
639 | In statically-typed programming languages, functions are typically defined to receive a fixed number of arguments of specified types. |
---|
640 | Variadic argument functions provide the ability to define a function that can receive a theoretically unbounded number of arguments. |
---|
641 | |
---|
642 | C provides a simple implementation of variadic functions. |
---|
643 | A function whose parameter list ends with @, ...@ is a variadic function. |
---|
644 | Among the most common variadic functions is @printf@. |
---|
645 | \begin{cfacode} |
---|
646 | int printf(const char * fmt, ...); |
---|
647 | printf("%d %g %c %s", 10, 3.5, 'X', "a string"); |
---|
648 | \end{cfacode} |
---|
Through the use of a format string, C programmers can communicate argument-type information to @printf@, allowing it to print any of the standard C data types.
---|
650 | Still, @printf@ is extremely limited, since the format codes are specified by the C standard, meaning users cannot define their own format codes to extend @printf@ for new data types or new formatting rules. |
---|
651 | |
---|
C provides access to variadic arguments through the @va_list@ data type, which abstracts the details of traversing the argument list.
---|
653 | Since the variadic arguments are untyped, it is up to the function to interpret any data that is passed in. |
---|
654 | Additionally, the interface to manipulate @va_list@ objects is essentially limited to advancing to the next argument, without any built-in facility to determine when the last argument is read. |
---|
655 | This requires the use of an \emph{argument descriptor} to pass information to the function about the structure of the argument list, including the number of arguments and their types. |
---|
656 | The format string in @printf@ is one such example of an argument descriptor. |
---|
657 | \begin{cfacode} |
---|
658 | int f(const char * fmt, ...) { |
---|
659 | va_list args; |
---|
660 | va_start(args, fmt); // initialize va_list |
---|
661 | for (const char * c = fmt; *c != '\0'; ++c) { |
---|
662 | if (*c == '%') { |
---|
663 | ++c; |
---|
664 | switch (*c) { |
---|
665 | case 'd': { |
---|
666 | int i = va_arg(args, int); // have to specify type |
---|
667 | // ... |
---|
668 | break; |
---|
669 | } |
---|
670 | case 'g': { |
---|
671 | double d = va_arg(args, double); |
---|
672 | // ... |
---|
673 | break; |
---|
674 | } |
---|
675 | ... |
---|
676 | } |
---|
677 | } |
---|
678 | } |
---|
679 | va_end(args); |
---|
680 | return ...; |
---|
681 | } |
---|
682 | \end{cfacode} |
---|
683 | Every case must be handled explicitly, since the @va_arg@ macro requires a type argument to determine how the next set of bytes is to be interpreted. |
---|
684 | Furthermore, if the user makes a mistake, compile-time checking is typically restricted to standard format codes and their corresponding types. |
---|
685 | In general, this means that C's variadic functions are not type-safe, making them difficult to use properly. |
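For example, with the @f@ routine above, nothing prevents a caller from passing arguments that do not match the descriptor, and the mismatch is discovered, if at all, only at run time:
\begin{cfacode}
f("%d", 3.5);  // descriptor promises an int but a double is passed:
               // no compile-time error, undefined behaviour at run time
\end{cfacode}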
---|
686 | |
---|
687 | % When arguments are passed to a variadic function, they undergo \emph{default argument promotions}. |
---|
688 | % Specifically, this means that |
---|
689 | |
---|
690 | \CCeleven added support for \emph{variadic templates}, which add much needed type-safety to C's variadic landscape. |
---|
691 | It is possible to use variadic templates to define variadic functions and variadic data types. |
---|
692 | \begin{cppcode} |
---|
693 | void print(int); |
---|
694 | void print(char); |
---|
695 | void print(double); |
---|
696 | ... |
---|
697 | |
---|
698 | void f() {} // base case |
---|
699 | |
---|
700 | template<typename T, typename... Args> |
---|
701 | void f(const T & arg, const Args &... rest) { |
---|
702 | print(arg); // print the current element |
---|
703 | f(rest...); // handle remaining arguments recursively |
---|
704 | } |
---|
705 | \end{cppcode} |
---|
706 | Variadic templates work largely through recursion on the \emph{parameter pack}, which is the argument with @...@ following its type. |
---|
707 | A parameter pack matches 0 or more elements, which can be types or expressions depending on the context. |
---|
708 | Like other templates, variadic template functions rely on an implicit set of constraints on a type, in this example a @print@ routine. |
---|
709 | That is, it is possible to use the @f@ routine on any type provided there is a corresponding @print@ routine, making variadic templates fully open to extension, unlike variadic functions in C. |
---|
710 | |
---|
711 | Recent \CC standards (\CCfourteen, \CCseventeen) expand on the basic premise by allowing variadic template variables and providing convenient expansion syntax to remove the need for recursion in some cases, amongst other things. |
---|
712 | |
---|
713 | % D has variadic templates that deserve a mention http://dlang.org/ctarguments.html |
---|
714 | |
---|
715 | In Java, a variadic function appears similar to a C variadic function in syntax. |
---|
716 | \begin{javacode} |
---|
717 | int sum(int... args) { |
---|
718 | int s = 0; |
---|
719 | for (int x : args) { |
---|
720 | s += x; |
---|
721 | } |
---|
722 | return s; |
---|
723 | } |
---|
724 | |
---|
725 | void print(Object... objs) { |
---|
726 | for (Object obj : objs) { |
---|
727 | System.out.print(obj); |
---|
728 | } |
---|
729 | } |
---|
730 | |
---|
731 | print("The sum from 1 to 10 is ", sum(1,2,3,4,5,6,7,8,9,10), ".\n"); |
---|
732 | \end{javacode} |
---|
733 | The key difference is that Java variadic functions are type-safe, because they specify the type of the argument immediately prior to the ellipsis. |
---|
734 | In Java, variadic arguments are syntactic sugar for arrays, allowing access to length, subscripting operations, and for-each iteration on the variadic arguments, among other things. |
---|
Since the argument type is specified explicitly, the top-type @Object@ can be used to accept arguments of any type, but doing anything interesting with such an argument requires a down-cast to a more specific type, leaving Java in a situation similar to C, where writing a variadic function that is open to extension is difficult.
---|
736 | |
---|
737 | The other option is to restrict the number of types that can be passed to the function by using a more specific type. |
---|
738 | Unfortunately, Java's use of nominal inheritance means that types must explicitly inherit from classes or interfaces in order to be considered a subclass. |
---|
739 | The combination of these two issues greatly restricts the usefulness of variadic functions in Java. |
---|
740 | |
---|
741 | Type-safe variadic functions are added to \CFA and discussed in Chapter 4. |
---|