\chapter{\CFA Existing Features}

\CFA (C-for-all)~\cite{Cforall} is an open-source project extending ISO C with
modern safety and productivity features, while still ensuring backwards
compatibility with C and its programmers.  \CFA is designed to have an
orthogonal feature-set based closely on the C programming paradigm
(non-object-oriented), and these features can be added incrementally to an
existing C code-base, allowing programmers to learn \CFA on an as-needed basis.

Only those \CFA features pertinent to this thesis are discussed.  Many of the
\CFA syntactic and semantic features used in the thesis should be fairly
obvious to the reader.

\section{Overloading and \lstinline{extern}}
\CFA has extensive overloading, allowing multiple declarations of the same name
to coexist, distinguished by their types~\cite{Moss18}.
\begin{cfa}
char i; int i; double i;			$\C[3.75in]{// variable overload}$
int f(); double f();				$\C{// return overload}$
void g( int ); void g( double );	$\C{// parameter overload}\CRT$
\end{cfa}
This feature requires name mangling so the assembly symbols are unique for
different overloads. For compatibility with names in C, there is also a syntax
to disable name mangling. These unmangled names cannot be overloaded but act as
the interface between C and \CFA code.  The syntax for disabling/enabling
mangling is:
\begin{cfa}
// name mangling
int i; // _X1ii_1
@extern "C"@ {  // no name mangling
	int j; // j
	@extern "Cforall"@ {  // name mangling
		int k; // _X1ki_1
	}
	// no name mangling
}
// name mangling
\end{cfa}
Both forms of @extern@ affect all the declarations within their nested lexical
scope and transition back to the previous mangling state when the lexical scope
ends.
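
As a sketch of how this interface is used, a C library routine can be declared
inside @extern "C"@ so \CFA code can link against its unmangled symbol, while a
mangled \CFA overload wraps it (the @sqrt@ wrapper here is an illustrative
example, not part of the \CFA library):
\begin{cfa}
extern "C" {
	double sqrt( double );  // C library routine, unmangled symbol "sqrt"
}
float sqrt( float f ) {  // mangled CFA overload
	return (float)sqrt( (double)f );  // calls the C routine
}
\end{cfa}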

\section{Reference Type}
\CFA adds a rebindable reference type to C that is more expressive than the \Cpp
reference.  Multi-level references are allowed and act like auto-dereferenced
pointers, using the ampersand (@&@) instead of the pointer asterisk (@*@). \CFA
references may also be mutable or non-mutable. If mutable, a reference variable
may be rebound by applying the address-of operator (@&@) to it, which converts
the reference to a pointer, and assigning a new address.
\begin{cfa}
int i, j;
int @&@ ri = i, @&&@ rri = ri;
rri = 3;  // auto-dereference assign to i
@&@ri = @&@j; // rebindable
ri = 5;   // assign to j
\end{cfa}

\section{Constructors and Destructors}

Both constructors and destructors are operators, meaning they are functions
with special operator names rather than functions named after the type, as in
\Cpp. The special operator names may be used to call these functions explicitly
(not allowed in \Cpp for constructors).

In general, operator names in \CFA are constructed by bracketing an operator
token with @?@, which indicates the position of the arguments. For example, infix
multiplication is @?*?@ while prefix dereference is @*?@. This syntax makes it
easy to tell the difference between prefix operations (such as @++?@) and
postfix operations (@?++@).

The special name for a constructor is @?{}@, which comes from the
initialization syntax in C. The special name for a destructor is @^{}@, where
the @^@ has no special meaning.
% I don't like the \^{} symbol but $^\wedge$ isn't better.
\begin{cfa}
struct T { ... };
void ?@{}@(@T &@ this, ...) { ... }  // constructor
void ?@^{}@(@T &@ this, ...) { ... } // destructor
{
	T s = @{@ ... @}@;  // same constructor/initialization braces
} // destructor call automatically generated
\end{cfa}
The first parameter is a reference to the type being constructed/destructed.
Constructors may have additional parameters, while a destructor takes only the
single object parameter. The compiler implicitly matches an overloaded
constructor @void ?{}(T &, ...);@ to an object declaration with associated
initialization, and generates a construction call after the object is
allocated. When an object goes out of scope, the matching overloaded destructor
@void ^?{}(T &);@ is called.  Without explicit definition, \CFA creates a
default and copy constructor, destructor and assignment (like \Cpp). It is
possible to define constructors/destructors for basic and existing types
(unlike \Cpp).
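
For example, because constructors can be defined for basic types, a default
constructor for @int@ could zero-initialize otherwise-uninitialized integers in
its scope; the following is a hypothetical sketch, not a recommended practice:
\begin{cfa}
void ?{}( int & this ) { this = 0; }  // default constructor for int
void ^?{}( int & this ) {}            // destructor, nothing to release
int x;  // implicitly calls ?{}( x ), so x == 0
\end{cfa}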

\section{Polymorphism}
\CFA uses parametric polymorphism to create functions and types that are
defined over multiple types. \CFA polymorphic declarations serve the same role
as \Cpp templates or Java generics. The term ``parametric'' means the
polymorphism is accomplished by passing argument operations to associated
\emph{parameters} at the call site, and these parameters are used in the
function to differentiate among the types the function operates on.

Polymorphic declarations start with a universal @forall@ clause that goes
before the standard (monomorphic) declaration. These declarations have the same
syntax except they may use the universal type names introduced by the @forall@
clause.  For example, the following is a polymorphic identity function that
works on any type @T@:
\begin{cfa}
@forall( T )@ @T@ identity( @T@ val ) { return val; }
int forty_two = identity( 42 ); // T bound to int, forty_two == 42
\end{cfa}

To allow a polymorphic function to be separately compiled, the type @T@ must be
constrained by the operations used on @T@ in the function body. Hence, the
@forall@ clause is augmented with a list of polymorphic variables (local type
names) and assertions (constraints), which represent the required operations on
those types used in a function, \eg:
\begin{cfa}
forall( T @| { void do_once(T); }@) // assertion
void do_twice(T value) {
	do_once(value);
	do_once(value);
}
void do_once(@int@ i) { ... }  // provide assertion
@int@ i;
do_twice(i); // implicitly pass assertion do_once to do_twice
\end{cfa}
Any object with a type fulfilling the assertion may be passed as an argument to
a @do_twice@ call.

A polymorphic function can be used in the same way as a normal function.  The
polymorphic variables are filled in with concrete types and the assertions are
checked. An assertion is checked by verifying that each assertion operation
(with all the polymorphic variables replaced with the concrete types from the
arguments) is defined at the call site.

Note, unlike \Cpp template expansion, a function named @do_once@ is not
required in the scope of @do_twice@ to compile it. Furthermore, call-site
inferencing selects the most specific function matching the assertion for each
call.
\begin{cfa}
void do_once(double y) { ... } // global
int quadruple(int x) {
	void do_once(int y) { y = y * 2; } // local
	do_twice(x); // using local "do_once"
	return x;
}
\end{cfa}
Specifically, the compiler deduces that @do_twice@'s @T@ is @int@ from the
argument @x@. It then looks for the most specific definition matching the
assertion, which is the nested integer @do_once@ defined within the
function. The matched assertion function is then passed as a function pointer
to @do_twice@ and called within it.

To avoid typing long lists of assertions, constraints can be collected into
convenient packages called @trait@s, which can then be used in an assertion
instead of the individual constraints.
\begin{cfa}
trait done_once(T) {
	void do_once(T);
}
\end{cfa}
and the @forall@ list in the previous example is replaced with the trait.
\begin{cfa}
forall(T | @done_once(T)@)
\end{cfa}
In general, a trait can contain an arbitrary number of assertions, both
functions and variables, and is usually used to create a shorthand for, and
give a descriptive name to, a common grouping of assertions describing a
certain functionality, like @sumable@, @listable@, \etc.
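
For instance, a @sumable@ trait might bundle the operations needed to total an
array of values; the following is a sketch, where the exact operation names are
illustrative:
\begin{cfa}
trait sumable( T ) {
	void ?{}( T &, zero_t );  // constructible from the 0 literal
	T ?+?( T, T );            // addition
}
forall( T | sumable( T ) )
T sum( size_t size, T a[] ) {
	T total = 0;  // uses the zero_t constructor
	for ( size_t i = 0; i < size; i += 1 )
		total = total + a[i];  // uses ?+?
	return total;
}
\end{cfa}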

Polymorphic structures and unions are defined by qualifying the aggregate type
with @forall@. The type variables work the same except they are used in field
declarations instead of parameters, returns, and local variable declarations.
\begin{cfa}
forall(@T@)
struct node {
	node(@T@) * next;  // generic linked node
	@T@ * data;
};
node(@int@) inode;
\end{cfa}
The generic type @node(T)@ is an example of a polymorphic-type usage.  Like \Cpp
template usage, a polymorphic-type usage must specify a type parameter.

There are many other polymorphism features in \CFA but these are the ones used
by the exception system.

\section{Control Flow}
\CFA has a number of advanced control-flow features: @generator@, @coroutine@, @monitor@, @mutex@ parameters, and @thread@.
The two features that interact with
the exception system are @coroutine@ and @thread@; they and their supporting
constructs are described here.

\subsection{Coroutine}
A coroutine is a type with associated functions, where the functions are not
required to finish execution when control is handed back to the caller. Instead
they may suspend execution at any time and be resumed later at the point of
last suspension. (Generators are stackless and coroutines are stackful.) These
types are not concurrent but share some similarities along with common
underpinnings, so they are combined with the \CFA threading library. Further
discussion in this section only refers to the coroutine because generators are
similar.

In \CFA, a coroutine is created using the @coroutine@ keyword, which is an
aggregate type like @struct@, except the structure is implicitly modified by
the compiler to satisfy the @is_coroutine@ trait; hence, a coroutine is
restricted by the type system to types that provide this special trait.  The
coroutine structure acts as the interface between callers and the coroutine,
and its fields are used to pass information in and out of coroutine interface
functions.

Here is a simple example where a single field is used to pass (communicate) the
next number in a sequence.
\begin{cfa}
coroutine CountUp {
	unsigned int next; // communication variable
};
CountUp countup;
\end{cfa}
Each coroutine has a @main@ function, which takes a reference to a coroutine
object and returns @void@.
\begin{cfa}[numbers=left]
void main(@CountUp & this@) { // argument matches trait is_coroutine
	unsigned int up = 0;  // retained between calls
	while (true) {
		this.next = up; // make "up" available outside function
		@suspend;@$\label{suspend}$
		up += 1;
	}
}
\end{cfa}
In this function, or functions called by this function (helper functions), the
@suspend@ statement is used to return execution to the coroutine's caller
without terminating the coroutine's function.

A coroutine is resumed by calling the @resume@ function, \eg @resume(countup)@.
The first resume calls the @main@ function at the top. Thereafter, resume calls
continue a coroutine in the last suspended function after the @suspend@
statement, in this case @main@ line~\ref{suspend}.  The @resume@ function takes
a reference to the coroutine structure and returns the same reference. The
return value allows easy access to communication variables defined in the
coroutine object. For example, the @next@ value for coroutine object @countup@
is both generated and collected in the single expression:
@resume(countup).next@.
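
For example, a driver loop can collect the first few values of the sequence
(using C's @printf@ for output):
\begin{cfa}
for ( unsigned int i = 0; i < 3; i += 1 ) {
	printf( "%u\n", resume( countup ).next );  // prints 0, 1, 2
}
\end{cfa}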

\subsection{Monitor and Mutex Parameter}
Concurrency does not guarantee ordering; without ordering results are
non-deterministic. To claw back ordering, \CFA uses monitors and @mutex@
(mutual exclusion) parameters. A monitor is another kind of aggregate, where
the compiler implicitly inserts a lock and instances are compatible with
@mutex@ parameters.

A function that requires deterministic (ordered) execution acquires mutual
exclusion on a monitor object by qualifying an object reference parameter with
@mutex@.
\begin{cfa}
void example(MonitorA & @mutex@ argA, MonitorB & @mutex@ argB);
\end{cfa}
When the function is called, it implicitly acquires the monitor lock for all of
the mutex parameters without deadlock.  This semantics means all functions with
the same mutex type(s) are part of a critical section for objects of that type
and only one runs at a time.
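
As an illustrative sketch, a counter monitor serializes its updates because
every call to @increment@ acquires the monitor's implicit lock:
\begin{cfa}
monitor Counter {
	int count;
};
void increment( Counter & @mutex@ this ) {  // acquires this's lock on entry
	this.count += 1;  // critical section: one thread at a time
}  // lock released on return
\end{cfa}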

\subsection{Thread}
Functions, generators, and coroutines are sequential, so there is only a single
(but potentially sophisticated) execution path in a program. Threads introduce
multiple execution paths that continue independently.

For threads to work safely with objects requires mutual exclusion, provided by
monitors and mutex parameters. For threads to work safely with other threads
also requires mutual exclusion, in the form of a communication rendezvous,
which additionally supports internal synchronization, as for mutex objects.
For exceptions, only two basic thread operations are important: fork and join.

Threads are created like coroutines with an associated @main@ function:
\begin{cfa}
thread StringWorker {
	const char * input;
	int result;
};
void main(StringWorker & this) {
	const char * localCopy = this.input;
	int result = 0;
	// ... do some work, perhaps hashing the string ...
	this.result = result;
}
{
	StringWorker stringworker; // fork thread running in "main"
} // implicitly join with thread $\(\Rightarrow\)$ wait for completion
\end{cfa}
The thread main is where a new thread starts execution after a fork operation
and then the thread continues executing until it is finished. If another thread
joins with an executing thread, it waits until the executing main completes
execution. In other words, everything a thread does is between a fork and join.

From the outside, this behaviour is accomplished through creation and
destruction of a thread object.  Implicitly, fork happens after a thread
object's constructor is run and join happens before the destructor runs. Join
can also be specified explicitly using the @join@ function to wait for a
thread's completion independently from its deallocation (\ie destructor
call). If @join@ is called explicitly, the destructor does not implicitly join.
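
A sketch of the explicit form, reusing the fields from the @StringWorker@
example above:
\begin{cfa}
{
	StringWorker stringworker;  // fork: thread starts running its main
	// ... caller does other work concurrently ...
	join( stringworker );         // wait for the thread's main to finish
	int r = stringworker.result;  // safe to read after the join
}  // destructor runs without joining again
\end{cfa}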
