\chapter{\CFA Existing Features}

\CFA (C-for-all)~\cite{Cforall} is an open-source project extending ISO C with
modern safety and productivity features, while still ensuring backwards
compatibility with C and its programmers.  \CFA is designed to have an
orthogonal feature-set based closely on the C programming paradigm
(non-object-oriented), and these features can be added incrementally to an
existing C code-base, allowing programmers to learn \CFA on an as-needed basis.

Only those \CFA features pertinent to this thesis are discussed.  Many of the
\CFA syntactic and semantic features used in the thesis should be fairly
obvious to the reader.

\section{Overloading and \lstinline{extern}}
\CFA has extensive overloading, allowing the same name to have multiple
definitions~\cite{Moss18}.
\begin{cfa}
char i; int i; double i;                        $\C[3.75in]{// variable overload}$
int f(); double f();                            $\C{// return overload}$
void g( int ); void g( double );        $\C{// parameter overload}\CRT$
\end{cfa}
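Uses of an overloaded name are resolved from the context of each call or
initialization; for example (a brief usage sketch):
\begin{cfa}
double d = f();         // double f() chosen from the initialization's target type
g( 42 );                // g( int ) chosen from the integer argument
g( 3.5 );               // g( double ) chosen from the floating-point argument
\end{cfa}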
This feature requires name mangling so the assembly symbols are unique for
different overloads. For compatibility with names in C, there is also a syntax
to disable name mangling. These unmangled names cannot be overloaded but act as
the interface between C and \CFA code.  The syntax for disabling/enabling
mangling is:
\begin{cfa}
// name mangling
int i; // _X1ii_1
@extern "C"@ {  // no name mangling
	int j; // j
	@extern "Cforall"@ {  // name mangling
		int k; // _X1ki_1
	}
	// no name mangling
}
// name mangling
\end{cfa}
Both forms of @extern@ affect all the declarations within their nested lexical
scope and transition back to the previous mangling state when the lexical scope
ends.
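For example, a \CFA function intended to be called from separately compiled C
code can be defined inside an @extern "C"@ block so its symbol is unmangled
(a small sketch; the function name @c_callable@ is illustrative):
\begin{cfa}
extern "C" {
	void c_callable( int x ) {  // exported as the unmangled symbol "c_callable"
		// the body may still use overloaded Cforall declarations
	}
}
\end{cfa}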

\section{Reference Type}
\CFA adds a rebindable reference type to C, which is more expressive than the
\Cpp reference.  Multi-level references are allowed and act like
auto-dereferenced pointers using the ampersand (@&@) instead of the pointer
asterisk (@*@). \CFA references may also be mutable or non-mutable. If mutable,
a reference variable may be rebound by assigning to it through the address-of
operator (@&@), which converts the reference to a pointer.
\begin{cfa}
int i, j;
int @&@ ri = i, @&&@ rri = ri;
rri = 3;  // auto-dereference assign to i
@&@ri = @&@j; // rebindable
ri = 5;   // assign to j
\end{cfa}
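Continuing the example, applying the address-of operator to a reference in an
ordinary expression yields a pointer to the currently referenced object (a
small sketch):
\begin{cfa}
int * p = &ri;  // pointer to j, the object ri currently refers to
*p = 7;         // equivalent to ri = 7
\end{cfa}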

\section{Constructors and Destructors}

Both constructors and destructors are operators, which means they are
functions with special operator names rather than type names as in \Cpp. The
special operator names may be used to call the functions explicitly (not
allowed in \Cpp for constructors).

In general, operator names in \CFA are constructed by bracketing an operator
token with @?@, which indicates the position of the arguments. For example,
infix multiplication is @?*?@ while prefix dereference is @*?@. This syntax
makes it easy to tell the difference between prefix operations (such as @++?@)
and postfix operations (@?++@).

The special name for a constructor is @?{}@, which comes from the
initialization syntax in C. The special name for a destructor is @^?{}@, where
the @^@ has no special meaning.
% I don't like the \^{} symbol but $^\wedge$ isn't better.
\begin{cfa}
struct T { ... };
void ?@{}@(@T &@ this, ...) { ... }   // constructor
void ^?@{}@(@T &@ this, ...) { ... }  // destructor
{
	T s = @{@ ... @}@;  // same constructor/initialization braces
} // destructor call automatically generated
\end{cfa}
The first parameter is a reference parameter to the type for the
constructor/destructor. Destructors may have multiple parameters.  The compiler
implicitly matches an overloaded constructor @void ?{}(T &, ...);@ to an
object declaration with associated initialization, and generates a construction
call after the object is allocated. When an object goes out of scope, the
matching overloaded destructor @void ^?{}(T &);@ is called.  Without explicit
definition, \CFA creates a default and copy constructor, destructor and
assignment (like \Cpp). It is possible to define constructors/destructors for
basic and existing types (unlike \Cpp).
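For example, the operator names support explicit calls and allow a constructor
and destructor to be attached to a basic type (a small sketch; the @int@
overloads and chosen values are illustrative):
\begin{cfa}
void ?{}( int & this ) { this = 42; }   // default constructor for int
void ^?{}( int & this ) { this = 0; }   // matching destructor
int answer;        // implicit constructor call: answer == 42
?{}( answer );     // explicit constructor call
^?{}( answer );    // explicit destructor call: answer == 0
\end{cfa}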

\section{Polymorphism}
\CFA uses parametric polymorphism to create functions and types that are
defined over multiple types. \CFA polymorphic declarations serve the same role
as \Cpp templates or Java generics. The term ``parametric'' means the
polymorphism is accomplished by passing argument operations to associated
\emph{parameters} at the call site, and these parameters are used in the
function to differentiate among the types the function operates on.

Polymorphic declarations start with a universal @forall@ clause that goes
before the standard (monomorphic) declaration. These declarations have the same
syntax except they may use the universal type names introduced by the @forall@
clause.  For example, the following is a polymorphic identity function that
works on any type @T@:
\begin{cfa}
@forall( T )@ @T@ identity( @T@ val ) { return val; }
int forty_two = identity( 42 ); // T bound to int, forty_two == 42
\end{cfa}

To allow a polymorphic function to be separately compiled, the type @T@ must be
constrained by the operations used on @T@ in the function body. The @forall@
clause is augmented with a list of polymorphic variables (local type names)
and assertions (constraints), which represent the required operations on those
types used in a function, \eg:
\begin{cfa}
forall( T @| { void do_once(T); }@) // assertion
void do_twice(T value) {
	do_once(value);
	do_once(value);
}
void do_once(@int@ i) { ... }  // provide assertion
@int@ i;
do_twice(i); // implicitly pass assertion do_once to do_twice
\end{cfa}
Any object with a type fulfilling the assertion may be passed as an argument to
a @do_twice@ call.
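For instance, supplying the assertion for another type makes that type usable
with @do_twice@ as well (a small sketch):
\begin{cfa}
void do_once(double d) {}  // fulfill the assertion for double
double d = 3.14;
do_twice(d); // T bound to double, do_once(double) passed implicitly
\end{cfa}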

A polymorphic function can be used in the same way as a normal function.  The
polymorphic variables are filled in with concrete types and the assertions are
checked. An assertion is checked by verifying each assertion operation (with
all the variables replaced with the concrete types from the arguments) is
defined at a call site.

Note, a function named @do_once@ is not required in the scope of @do_twice@ to
compile it, unlike \Cpp template expansion. Furthermore, call-site inferencing
allows local replacement of the most specific parametric function needed for a
call.
\begin{cfa}
void do_once(double y) { ... } // global
int quadruple(int x) {
	void do_once(int y) { y = y * 2; } // local
	do_twice(x); // using local "do_once"
	return x;
}
\end{cfa}
Specifically, the compiler deduces that @do_twice@'s @T@ is an integer from the
argument @x@. It then looks for the most specific definition matching the
assertion, which is the nested integer @do_once@ defined within the
function. The matched assertion function is then passed as a function pointer
to @do_twice@ and called within it.

To avoid typing long lists of assertions, constraints can be collected into a
convenient package called a @trait@, which can then be used in an assertion
instead of the individual constraints.
\begin{cfa}
trait done_once(T) {
	void do_once(T);
}
\end{cfa}
and the @forall@ list in the previous example is replaced with the trait.
\begin{cfa}
forall(T | @done_once(T)@)
\end{cfa}
In general, a trait can contain an arbitrary number of assertions, both
functions and variables, and is usually used to create a shorthand for, and
give descriptive names to, common groupings of assertions describing a certain
functionality, like @sumable@, @listable@, \etc.
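For example, a small summation trait in the style of the library's @sumable@
might group the operations needed to total an array (a sketch; the exact
library trait differs and the names @summable@ and @sum@ are illustrative):
\begin{cfa}
trait summable(T) {
	void ?{}(T &, zero_t);  // construct from the 0 literal, for the initial total
	T ?+?(T, T);            // addition
}
forall(T | summable(T))
T sum(int n, T values[]) {
	T total = 0;  // uses the zero_t constructor
	for (int i = 0; i < n; i += 1) total = total + values[i];
	return total;
}
\end{cfa}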

Polymorphic structures and unions are defined by qualifying the aggregate type
with @forall@. The type variables work the same except they are used in field
declarations instead of parameters, returns, and local variable declarations.
\begin{cfa}
forall(dtype @T@)
struct node {
	node(@T@) * next;  // generic linked node
	@T@ * data;
};
node(@int@) inode;
\end{cfa}
The generic type @node(T)@ is an example of a polymorphic-type usage.  Like \Cpp
template usage, a polymorphic-type usage must specify a type parameter.
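The fields of an instantiation are typed accordingly, for example (a brief
sketch; the variables @x@ and @inode2@ are illustrative):
\begin{cfa}
int x = 42;
node(int) inode2;
inode.data = &x;      // data has type int * in this instantiation
inode.next = &inode2; // next has type node(int) *
\end{cfa}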

There are many other polymorphism features in \CFA but these are the ones used
by the exception system.

\section{Control Flow}
\CFA has a number of advanced control-flow features: @generator@, @coroutine@, @monitor@, @mutex@ parameters, and @thread@.
The two features that interact with
the exception system are @coroutine@ and @thread@; they and their supporting
constructs are described here.

\subsection{Coroutine}
A coroutine is a type with associated functions, where the functions are not
required to finish execution when control is handed back to the caller. Instead
they may suspend execution at any time and be resumed later at the point of
last suspension. (Generators are stackless and coroutines are stackful.) These
types are not concurrent but share some similarities along with common
underpinnings, so they are combined with the \CFA threading library. Further
discussion in this section only refers to the coroutine because generators are
similar.

In \CFA, a coroutine is created using the @coroutine@ keyword, which is an
aggregate type like @struct@, except the structure is implicitly modified by
the compiler to satisfy the @is_coroutine@ trait; hence, a coroutine is
restricted by the type system to types that provide this special trait.  The
coroutine structure acts as the interface between callers and the coroutine,
and its fields are used to pass information in and out of coroutine interface
functions.

Here is a simple example where a single field is used to pass (communicate) the
next number in a sequence.
\begin{cfa}
coroutine CountUp {
	unsigned int next; // communication variable
};
CountUp countup;
\end{cfa}
Each coroutine has a @main@ function, which takes a reference to a coroutine
object and returns @void@.
\begin{cfa}[numbers=left]
void main(@CountUp & this@) { // argument matches trait is_coroutine
	unsigned int up = 0;  // retained between calls
	while (true) {
		this.next = up; // make "up" available outside function
		@suspend;@$\label{suspend}$
		up += 1;
	}
}
\end{cfa}
In this function, or functions called by this function (helper functions), the
@suspend@ statement is used to return execution to the coroutine's caller
without terminating the coroutine's function.

A coroutine is resumed by calling the @resume@ function, \eg @resume(countup)@.
The first resume calls the @main@ function at the top. Thereafter, resume calls
continue a coroutine in the last suspended function after the @suspend@
statement, in this case @main@ line~\ref{suspend}.  The @resume@ function takes
a reference to the coroutine structure and returns the same reference. The
return value allows easy access to communication variables defined in the
coroutine object. For example, the @next@ value for coroutine object @countup@
is both generated and collected in the single expression:
@resume(countup).next@.
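Hence, a caller can drive the coroutine as follows (a minimal sketch):
\begin{cfa}
unsigned int a = resume(countup).next; // 0: first resume starts main
unsigned int b = resume(countup).next; // 1: continues after the suspend
unsigned int c = resume(countup).next; // 2
\end{cfa}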

\subsection{Monitor and Mutex Parameter}
Concurrency does not guarantee ordering; without ordering results are
non-deterministic. To claw back ordering, \CFA uses monitors and @mutex@
(mutual exclusion) parameters. A monitor is another kind of aggregate, where
the compiler implicitly inserts a lock and instances are compatible with
@mutex@ parameters.

A function that requires deterministic (ordered) execution acquires mutual
exclusion on a monitor object by qualifying an object reference parameter with
@mutex@.
\begin{cfa}
void example(MonitorA & @mutex@ argA, MonitorB & @mutex@ argB);
\end{cfa}
When the function is called, it implicitly acquires the monitor lock for all of
the mutex parameters without deadlock.  This semantics means all functions with
the same mutex type(s) are part of a critical section for objects of that type
and only one runs at a time.
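For example, a counter whose updates must be mutually exclusive can be written
as follows (a minimal sketch; @Counter@ and @increment@ are illustrative names):
\begin{cfa}
monitor Counter {
	int count;
};
void increment(Counter & mutex this) { // acquires this's implicit lock
	this.count += 1;
} // lock released; only one mutex function per Counter object runs at a time
\end{cfa}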

\subsection{Thread}
Functions, generators, and coroutines are sequential so there is only a single
(but potentially sophisticated) execution path in a program. Threads introduce
multiple execution paths that continue independently.

For threads to work safely with objects requires mutual exclusion using
monitors and mutex parameters. For threads to work safely with other threads
also requires mutual exclusion in the form of a communication rendezvous, which
also supports internal synchronization as for mutex objects. For exceptions,
only two basic thread operations are important: fork and join.

Threads are created like coroutines with an associated @main@ function:
\begin{cfa}
thread StringWorker {
	const char * input;
	int result;
};
void main(StringWorker & this) {
	const char * localCopy = this.input;
	// ... do some work, perhaps hashing the string, into a local result ...
	this.result = result;
}
{
	StringWorker stringworker; // fork thread running in "main"
} // implicitly join with thread $\(\Rightarrow\)$ wait for completion
\end{cfa}
The thread main is where a new thread starts execution after a fork operation,
and the thread continues executing until it is finished. If another thread
joins with an executing thread, it waits until the executing main completes
execution. In other words, everything a thread does is between a fork and join.

From the outside, this behaviour is accomplished through creation and
destruction of a thread object.  Implicitly, fork happens after a thread
object's constructor is run and join happens before the destructor runs. Join
can also be specified explicitly using the @join@ function to wait for a
thread's completion independently from its deallocation (\ie destructor
call). If @join@ is called explicitly, the destructor does not implicitly join.
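For example, an explicit join lets the creator retrieve the worker's result
before the object is destroyed (a minimal sketch building on @StringWorker@;
the variable names are illustrative):
\begin{cfa}
{
	StringWorker worker;    // fork: main(worker) starts after construction
	// ... the creating thread continues concurrently ...
	join(worker);           // wait for main(worker) to finish
	int r = worker.result;  // safe to read after the join
} // destructor runs; no second implicit join
\end{cfa}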