\chapter{\CFA{} Existing Features}
\label{c:existing}

\CFA is an open-source project extending ISO C with
modern safety and productivity features, while still ensuring backwards
compatibility with C and its programmers.  \CFA is designed to have an
orthogonal feature-set based closely on the C programming paradigm
(non-object-oriented) and these features can be added incrementally to an
existing C code-base, allowing programmers to learn \CFA on an as-needed basis.

Only those \CFA features pertaining to this thesis are discussed.
A familiarity with
C or C-like languages is assumed.

\section{Overloading and \lstinline{extern}}
\CFA has extensive overloading, allowing multiple definitions of the same name
to coexist~\cite{Moss18}.
\begin{cfa}
char i; int i; double i;
int f(); double f();
void g( int ); void g( double );
\end{cfa}
This feature requires name mangling so the assembly symbols are unique for
different overloads. For compatibility with names in C, there is also a syntax
to disable name mangling. These unmangled names cannot be overloaded but act as
the interface between C and \CFA code.  The syntax for disabling/enabling
mangling is:
\begin{cfa}
// name mangling on by default
int i; // _X1ii_1
extern "C" {  // disables name mangling
        int j; // j
        extern "Cforall" {  // enables name mangling
                int k; // _X1ki_1
        }
        // revert to no name mangling
}
// revert to name mangling
\end{cfa}
Both forms of @extern@ affect all the declarations within their nested lexical
scope and transition back to the previous mangling state when the lexical scope
ends.
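
For example, a single overload can be exposed to C through an unmangled
wrapper.  The following is a minimal sketch, where the names @square@ and
@c_square@ are illustrative rather than part of any existing interface.
\begin{cfa}
double square( double x ) { return x * x; }  // mangled \CFA name
extern "C" {
        double c_square( double x ) {  // unmangled, callable from C code
                return square( x );  // forwards to the overloadable \CFA function
        }
}
\end{cfa}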

\section{Reference Type}
\CFA adds a reference type to C as an auto-dereferencing pointer.
References work very similarly to pointers.
Reference types are written the same way as a pointer type but each
asterisk (@*@) is replaced with an ampersand (@&@);
this includes cv-qualifiers and multiple levels of reference.

Generally, references act like pointers with an implicit dereferencing
operation added to each use of the variable.
These automatic dereferences may be disabled with the address-of operator
(@&@).

% Check to see if these are generating errors.
\begin{minipage}{0.5\textwidth}
With references:
\begin{cfa}
int i, j;
int & ri = i;
int && rri = ri;
rri = 3;
&ri = &j;
ri = 5;
\end{cfa}
\end{minipage}
\begin{minipage}{0.5\textwidth}
With pointers:
\begin{cfa}
int i, j;
int * pi = &i;
int ** ppi = &pi;
**ppi = 3;
pi = &j;
*pi = 5;
\end{cfa}
\end{minipage}

References are intended to be used when the indirection of a pointer is
required, but the address is not as important as the value and dereferencing
is the common usage.
Mutable references may be assigned to by converting them to a pointer
with a @&@ and then assigning a pointer to them, as in @&ri = &j;@ above.
% ???
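
The most common use is a pass-by-reference parameter, where each use of the
parameter inside the function implicitly dereferences it and the caller does
not take an address explicitly.  The following is a minimal sketch; the
function name is illustrative only.
\begin{cfa}
void set_to_zero( int & value ) {
        value = 0;  // implicit dereference on each use of the reference
}
int x = 42;
set_to_zero( x );  // no explicit & needed at the call site
\end{cfa}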

\section{Operators}

\CFA implements operator overloading by providing special names, where
operator expressions are translated into function calls using these names.
An operator name is created by taking the operator symbols and joining them with
@?@s to show where the arguments go.
For example,
infix multiplication is @?*?@, while prefix dereference is @*?@.
This syntax makes it easy to tell the difference between prefix operations
(such as @++?@) and post-fix operations (@?++@).

As an example, here are the addition and equality operators for a point type.
\begin{cfa}
struct point { int x, y; };  // simple aggregate assumed for the example
point ?+?(point a, point b) { return point{a.x + b.x, a.y + b.y}; }
int ?==?(point a, point b) { return a.x == b.x && a.y == b.y; }
{
        assert(point{1, 2} + point{3, 4} == point{4, 6});
}
\end{cfa}
Note that this syntax effectively works as a textual transformation;
the compiler converts all operators into functions and then resolves them
normally. This means any combination of types may be used,
although nonsensical ones (like @double ?==?(point, int);@) are discouraged.
This feature is also used for all builtin operators,
although those are implicitly provided by the language.
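
The prefix/post-fix distinction can be seen with increment operators.  The
following is a hedged sketch for the point type above; the exact signatures
used by the builtin operators may differ.
\begin{cfa}
point & ++?(point & p) { p.x += 1; p.y += 1; return p; }  // prefix: increment, then return new value
point ?++(point & p) { point old = p; ++p; return old; }  // post-fix: return original value
\end{cfa}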

%\subsection{Constructors and Destructors}
In \CFA, constructors and destructors are operators, which means they are
functions with special operator names, rather than being tied to type names as
in \Cpp. Both constructors and destructors can be implicitly called by the
compiler, however the operator names allow explicit calls.
% Placement new means that this is actually equivalent to C++.

The special name for a constructor is @?{}@, which comes from the
initialization syntax in C, \eg @Example e = { ... }@.
\CFA generates a constructor call each time a variable is declared,
passing the initialization arguments to the constructor.
\begin{cfa}
struct Example { ... };
void ?{}(Example & this) { ... }
{
        Example a;
        Example b = {};
}
void ?{}(Example & this, char first, int num) { ... }
{
        Example c = {'a', 2};
}
\end{cfa}
Both @a@ and @b@ are initialized with the first constructor,
@b@ because of the explicit call and @a@ implicitly.
@c@ is initialized with the second constructor.
Currently, there is no general way to skip initialization.
% I don't use @= anywhere in the thesis.

% I don't like the \^{} symbol but $^\wedge$ isn't better.
Similarly, destructors use the special name @^?{}@ (the @^@ has no special
meaning).
\begin{cfa}
void ^?{}(Example & this) { ... }
{
        Example d;
        ^?{}(d);

        Example e;
} // Implicit call of ^?{}(e);
\end{cfa}

Whenever a type is defined, \CFA creates a default zero-argument
constructor, a copy constructor, a series of argument-per-field constructors
and a destructor. All user constructors are defined after this.
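
As an illustration, the following sketch shows the implicitly generated
operations for a simple aggregate; the type @Pair@ is hypothetical.
\begin{cfa}
struct Pair { int x, y; };
{
        Pair p = { 1, 2 };  // generated field-per-argument constructor
        Pair q = p;  // generated copy constructor
        Pair r;  // generated zero-argument constructor
}  // generated destructor runs for r, q and p
\end{cfa}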

\section{Polymorphism}
\CFA uses parametric polymorphism to create functions and types that are
defined over multiple types. \CFA polymorphic declarations serve the same role
as \Cpp templates or Java generics. The ``parametric'' means the polymorphism is
accomplished by passing argument operations to associate \emph{parameters} at
the call site, and these parameters are used in the function to differentiate
among the types the function operates on.

Polymorphic declarations start with a universal @forall@ clause that goes
before the standard (monomorphic) declaration. These declarations have the same
syntax except they may use the universal type names introduced by the @forall@
clause.  For example, the following is a polymorphic identity function that
works on any type @T@:
\begin{cfa}
forall( T ) T identity( T val ) { return val; }
int forty_two = identity( 42 );
char capital_a = identity( 'A' );
\end{cfa}
Each use of a polymorphic declaration resolves its polymorphic parameters
(in this case, just @T@) to concrete types (@int@ in the first use and @char@
in the second).

To allow a polymorphic function to be separately compiled, the type @T@ must be
constrained by the operations used on @T@ in the function body. The @forall@
clause is augmented with a list of polymorphic variables (local type names)
and assertions (constraints), which represent the required operations on those
types used in a function, \eg:
\begin{cfa}
forall( T | { void do_once(T); } )
void do_twice(T value) {
        do_once(value);
        do_once(value);
}
\end{cfa}

A polymorphic function can be used in the same way as a normal function.  The
polymorphic variables are filled in with concrete types and the assertions are
checked. An assertion is checked by verifying that each assertion operation
(with all the variables replaced with the concrete types from the arguments) is
defined at a call site.
\begin{cfa}
void do_once(int i) { ... }
int i;
do_twice(i);
\end{cfa}
Any object with a type fulfilling the assertion may be passed as an argument to
a @do_twice@ call.
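For instance, a user-defined type works as well, provided a matching
@do_once@ is visible; this sketch uses a hypothetical @Greeting@ type.
\begin{cfa}
struct Greeting { const char * text; };
void do_once(Greeting g) { printf("%s\n", g.text); }
Greeting hello = { "hello" };
do_twice(hello);  // T is resolved to Greeting using this do_once
\end{cfa}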

Note, a function named @do_once@ is not required in the scope of @do_twice@ to
compile it, unlike \Cpp template expansion. Furthermore, call-site inferencing
allows local replacement of the specific parametric functions needed for a
call.
\begin{cfa}
void do_once(double y) { ... }
int quadruple(int x) {
        void do_once(int & y) { y = y * 2; }
        do_twice(x);
        return x;
}
\end{cfa}
Specifically, the compiler deduces that @do_twice@'s @T@ is an integer from the
argument @x@. It then looks for the most specific definition matching the
assertion, which is the nested integral @do_once@ defined within the
function. The matched assertion function is then passed as a function pointer
to @do_twice@ and called within it.
The global definition of @do_once@ is ignored; however, if @quadruple@ took a
@double@ argument, then the global definition would be used instead as it
would then be a better match.
\todo{cite Aaron's thesis (maybe)}

To avoid typing long lists of assertions, constraints can be collected into
a convenient package called a @trait@, which can then be used in an assertion
instead of the individual constraints.
\begin{cfa}
trait done_once(T) {
        void do_once(T);
}
\end{cfa}
Then the @forall@ list in the previous example is replaced with the trait.
\begin{cfa}
forall(dtype T | done_once(T))
\end{cfa}
In general, a trait can contain an arbitrary number of assertions, both
functions and variables, and is usually used to create a shorthand for, and
give descriptive names to, common groupings of assertions describing a certain
functionality, like @sumable@, @listable@, \etc.
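
For example, a summation trait might group the operations needed to add values
of a type together; this is a hedged sketch and not the exact trait from the
\CFA standard library.
\begin{cfa}
trait sumable(T) {
        void ?{}(T &, zero_t);  // constructible from the zero literal
        T ?+?(T, T);  // addition
}
forall(T | sumable(T))
T sum(size_t count, T values[]) {
        T total = 0;  // uses the zero_t constructor
        for (size_t i = 0; i < count; i += 1) total = total + values[i];
        return total;
}
\end{cfa}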

Polymorphic structures and unions are defined by qualifying an aggregate type
with @forall@. The type variables work the same except they are used in field
declarations instead of parameters, returns, and local variable declarations.
\begin{cfa}
forall(dtype T)
struct node {
        node(T) * next;
        T * data;
};
node(int) inode;
\end{cfa}
The generic type @node(T)@ is an example of a polymorphic type usage.  Like \Cpp
template usage, a polymorphic type usage must specify a type parameter.
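
Polymorphic functions can take such generic types as parameters, with the type
argument fixed at the call site; a small sketch, with illustrative names:
\begin{cfa}
forall(dtype T)
T * front(node(T) & n) { return n.data; }

int x = 7;
node(int) n;
n.next = 0p;  // 0p is the \CFA null-pointer constant
n.data = &x;
int * p = front(n);  // T is bound to int by the node(int) argument
\end{cfa}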

There are many other polymorphism features in \CFA but these are the ones used
by the exception system.

\section{Control Flow}
\CFA has a number of advanced control-flow features: @generator@, @coroutine@, @monitor@, @mutex@ parameters, and @thread@.
The two features that interact with
the exception system are @coroutine@ and @thread@; they and their supporting
constructs are described here.

\subsection{Coroutine}
A coroutine is a type with associated functions, where the functions are not
required to finish execution when control is handed back to the caller. Instead
they may suspend execution at any time and be resumed later at the point of
last suspension. (Generators are stackless and coroutines are stackful.) These
types are not concurrent but share some similarities along with common
underpinnings, so they are combined with the \CFA threading library. Further
discussion in this section only refers to the coroutine because generators are
similar.

In \CFA, a coroutine is created using the @coroutine@ keyword, which is an
aggregate type like @struct@, except the structure is implicitly modified by
the compiler to satisfy the @is_coroutine@ trait; hence, a coroutine is
restricted by the type system to types that provide this special trait.  The
coroutine structure acts as the interface between callers and the coroutine,
and its fields are used to pass information in and out of coroutine interface
functions.

Here is a simple example where a single field is used to pass (communicate) the
next number in a sequence.
\begin{cfa}
coroutine CountUp {
        unsigned int next;
};
CountUp countup;
\end{cfa}
Each coroutine has a @main@ function, which takes a reference to a coroutine
object and returns @void@.
%[numbers=left] Why numbers on this one?
\begin{cfa}
void main(CountUp & this) {
        for (unsigned int next = 0 ; true ; ++next) {
                this.next = next;
                suspend;$\label{suspend}$
        }
}
\end{cfa}
In this function, or functions called by this function (helper functions), the
@suspend@ statement is used to return execution to the coroutine's caller
without terminating the coroutine's function.

A coroutine is resumed by calling the @resume@ function, \eg @resume(countup)@.
The first resume calls the @main@ function at the top. Thereafter, resume calls
continue a coroutine in the last suspended function after the @suspend@
statement. In this case there is only one and, hence, the difference between
subsequent calls is the state of variables inside the function and the
coroutine object.
The return value of @resume@ is a reference to the coroutine, to make it
convenient to access fields of the coroutine in the same expression.
Here is a simple example in a helper function:
\begin{cfa}
unsigned int get_next(CountUp & this) {
        return resume(this).next;
}
\end{cfa}
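With the helper, a caller can drive the coroutine like an ordinary sequence
generator; a minimal usage sketch:
\begin{cfa}
CountUp counter;
for (unsigned int i = 0; i < 3; i += 1) {
        printf("%u\n", get_next(counter));  // prints 0, 1, 2
}
\end{cfa}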

When the main function returns, the coroutine halts and can no longer be
resumed.

\subsection{Monitor and Mutex Parameter}
Concurrency does not guarantee ordering; without ordering, results are
non-deterministic. To claw back ordering, \CFA uses monitors and @mutex@
(mutual exclusion) parameters. A monitor is another kind of aggregate, where
the compiler implicitly inserts a lock and instances are compatible with
@mutex@ parameters.

A function that requires deterministic (ordered) execution acquires mutual
exclusion on a monitor object by qualifying an object reference parameter with
@mutex@.
\begin{cfa}
void example(MonitorA & mutex argA, MonitorB & mutex argB);
\end{cfa}
When the function is called, it implicitly acquires the monitor lock for all of
the mutex parameters without deadlock.  This semantics means all functions with
the same mutex type(s) are part of a critical section for objects of that type
and only one runs at a time.
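
A small sketch of a monitor type and mutex functions follows; the names are
illustrative.
\begin{cfa}
monitor Counter {
        int count;
};
void increment(Counter & mutex this) {
        this.count += 1;  // runs with the monitor lock held
}
int get(Counter & mutex this) {
        return this.count;  // mutually exclusive with increment on the same object
}
\end{cfa}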

\subsection{Thread}
Functions, generators, and coroutines are sequential so there is only a single
(but potentially sophisticated) execution path in a program. Threads introduce
multiple execution paths that continue independently.

For threads to work safely with objects requires mutual exclusion, using
monitors and mutex parameters. For threads to work safely with other threads
also requires mutual exclusion, in the form of a communication rendezvous,
which also supports internal synchronization as for mutex objects. For
exceptions, only two basic thread operations are important: fork and join.

Threads are created like coroutines with an associated @main@ function:
\begin{cfa}
thread StringWorker {
        const char * input;
        int result;
};
void main(StringWorker & this) {
        const char * localCopy = this.input;
        int result = 0;
        // ... do some work, perhaps hashing the string ...
        this.result = result;
}
{
        StringWorker stringworker; // fork thread running in "main"
} // Implicit call to join(stringworker), waits for completion.
\end{cfa}
The thread main is where a new thread starts execution after a fork operation
and then the thread continues executing until it is finished. If another thread
joins with an executing thread, it waits until the executing main completes
execution. In other words, everything a thread does is between a fork and join.

From the outside, this behaviour is accomplished through creation and
destruction of a thread object.  Implicitly, fork happens after a thread
object's constructor is run and join happens before the destructor runs. Join
can also be specified explicitly using the @join@ function to wait for a
thread's completion independently from its deallocation (\ie destructor
call). If @join@ is called explicitly, the destructor does not implicitly join.
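
The following sketch shows an explicit join, reusing @StringWorker@ from
above; the timing comments are illustrative.
\begin{cfa}
{
        StringWorker worker;  // fork: the thread main starts running
        // ... the creating thread continues doing other work concurrently ...
        join(worker);  // wait here until the thread main finishes
        int r = worker.result;  // safe to read after the join
}  // destructor runs without a second implicit join
\end{cfa}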