  • doc/papers/concurrency/Paper.tex

    r16948499 r0161ddf  
    153153                auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__,
    154154                coroutine, disable, dtype, enable, exception, __extension__, fallthrough, fallthru, finally,
    155                 __float80, float80, __float128, float128, forall, ftype, _Generic, _Imaginary, __imag, __imag__,
     155                __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
    156156                inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
    157                 otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,
     157                otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread,
    158158                _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
    159159                virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
     
    231231}
    232232
     233\newbox\myboxA
     234\newbox\myboxB
     235\newbox\myboxC
     236\newbox\myboxD
     237
    233238\title{\texorpdfstring{Advanced Control-flow and Concurrency in \protect\CFA}{Advanced Control-flow in Cforall}}
    234239
     
    247252This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime.
    248253These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library features like C pthreads.
    249 \CFA introduces modern language-level control-flow mechanisms, like coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
     254\CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
    250255Library extensions for executors, futures, and actors are built on these basic mechanisms.
    251 The runtime provides significant programmer simplification and safety by eliminating spurious wakeup and reducing monitor barging.
     256The runtime provides significant programmer simplification and safety by eliminating spurious wakeup and making monitor barging optional.
    252257The runtime also ensures multiple monitors can be safely acquired \emph{simultaneously} (deadlock free), and this feature is fully integrated with all monitor synchronization mechanisms.
    253 All language features integrate with the \CFA polymorphic type-system and exception handling, while respecting the expectations and style of C programmers.
     258All control-flow features integrate with the \CFA polymorphic type-system and exception handling, while respecting the expectations and style of C programmers.
    254259Experimental results show comparable performance of the new features with similar mechanisms in other concurrent programming-languages.
    255260}%
    256261
    257 \keywords{coroutines, concurrency, parallelism, threads, monitors, runtime, C, \CFA (Cforall)}
     262\keywords{generator, coroutine, concurrency, parallelism, thread, monitor, runtime, C, \CFA (Cforall)}
    258263
    259264
     
    285290As a result, languages like Java, Scala~\cite{Scala}, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
    286291From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,BoostThreads}, including putting green threads back into Java~\cite{Quasar}.
    287 The main argument for user-level threading is that they are lighter weight than kernel threads (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing smaller work-units to facilitate load balancing by the runtime~\cite{Verch12}.
     292The main argument for user-level threads is that they are lighter weight than kernel threads (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium-sized work-units to facilitate load balancing by the runtime~\cite{Verch12}.
    288293As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{vonBehren03}.
    289294Finally, performant user-threading implementations (both time and space) are largely competitive with direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety.
     
    300305
    301306Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
    302 Two concurrency violations of this philosophy are \emph{spurious wakeup} and \emph{barging}, i.e., random wakeup~\cite[\S~8]{Buhr05a} and signalling-as-hints~\cite[\S~8]{Buhr05a}, where one begats the other.
    303 If you believe spurious wakeup is a foundational concurrency property, than unblocking (signalling) a thread is always a hint.
    304 If you \emph{do not} believe spurious wakeup is foundational, than signalling-as-hints is a performance decision.
    305 Most importantly, removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism.
    306 Clawing back performance where the local non-determinism is unimportant, should be an option not the default.
     307Two concurrency violations of this philosophy are \emph{spurious wakeup} and \emph{barging}, i.e., random wakeup~\cite[\S~8]{Buhr05a} and signals-as-hints~\cite[\S~8]{Buhr05a}, where the latter is a consequence of the former: once spurious wakeup exists, a signal can only be a hint.
     308However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}; it is a performance design choice.
     309Similarly, signals-as-hints is also a performance decision.
     310We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches programmer expectation.
     311(Experience by the authors teaching concurrency is that students are highly confused by these semantics.)
     312Clawing back performance, when local non-determinism is unimportant, should be an option, not the default.
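For example, the idiomatic C pthreads fragment below (a standard illustration, not code from the \CFA runtime) shows the cost of these semantics: because any wait may wake spuriously and a signal is only a hint, the waiter must re-check its condition in a loop.
\begin{cfa}
pthread_mutex_lock( &m );
while ( ! ready )                                       // must loop, not if
        pthread_cond_wait( &c, &m );            // may wake without a matching signal
// ready now guaranteed true, mutex held
pthread_mutex_unlock( &m );
\end{cfa}
Without spurious wakeup and signals-as-hints, the @while@ can soundly be an @if@, removing this local non-determinism from the waiter's reasoning.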
    307313
    308314\begin{comment}
     
    327333
    328334\CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default.
    329 We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in or proposed for \Celeven, \CC and other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
     335We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in or proposed for \Celeven, \CC, and other concurrent, imperative programming-languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
    330336The main contributions of this work are:
    331337\begin{itemize}
    332338\item
    333 expressive language-level coroutines and user-level threading, which respect the expectations of C programmers.
     339language-level generators, coroutines, and user-level threading, which respect the expectations of C programmers.
    334340\item
    335 monitor synchronization without barging.
    336 \item
    337 safely acquiring multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating this capability with all monitor synchronization mechanisms.
     341monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
    338342\item
    339343providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features.
     
    343347a runtime system with no spurious wakeup.
    344348\item
    345 experimental results showing comparable performance of the new features with similar mechanisms in other concurrent programming-languages.
     349a dynamic partitioning mechanism to segregate the execution environment for specialized requirements.
     350\item
     351a non-blocking I/O library.
     352\item
     353experimental results showing comparable performance of the new features with similar mechanisms in other programming languages.
    346354\end{itemize}
    347355
     356
     357\section{Stateful Function}
     358
     359The generator/coroutine provides a stateful function, which is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}.
     360A stateful function allows execution to be temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine.
     361Hence, a stateful function does not necessarily end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
     362This capability is accomplished by retaining a data/execution \emph{closure} between invocations.
     363If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, e.g., suspending outside the generator is prohibited.
     364If the closure is variable sized, we call it a \emph{coroutine} (or \emph{stackful}), and as the name implies, it is often implemented with a separate stack with no programming restrictions.
     365Hence, refactoring a generator may require changing it to a stackful coroutine.
     366A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, i.e., resume/suspend operations are remembered through the closure not the stack.
     367As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
     368A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
     369Additionally, any storage management for the closure (especially in unmanaged languages, i.e., no garbage collection) must also be factored into design and performance.
     370Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
     371Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.
     372
     373
     374\begin{figure}
     375\centering
     376\begin{lrbox}{\myboxA}
     377\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     378typedef struct {
     379        int fn1, fn;
     380} Fib;
     381#define FibCtor { 1, 0 }
     382int fib( Fib * f ) {
     383
     384
     385
     386        int fn = f->fn; f->fn = f->fn1;
     387                f->fn1 = f->fn + fn;
     388        return fn;
     389
     390}
     391int main() {
     392        Fib f1 = FibCtor, f2 = FibCtor;
     393        for ( int i = 0; i < 10; i += 1 )
     394                printf( "%d %d\n",
     395                           fib( &f1 ), fib( &f2 ) );
     396}
     397\end{cfa}
     398\end{lrbox}
     399
     400\begin{lrbox}{\myboxB}
     401\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     402`generator` Fib {
     403        int fn1, fn;
     404};
     405
     406void `main(Fib & fib)` with(fib) {
     407
     408        [fn1, fn] = [1, 0];
     409        for () {
     410                `suspend;`
     411                [fn1, fn] = [fn, fn + fn1];
     412
     413        }
     414}
     415int main() {
     416        Fib f1, f2;
     417        for ( 10 )
     418                sout | `resume( f1 )`.fn
     419                         | `resume( f2 )`.fn;
     420}
     421\end{cfa}
     422\end{lrbox}
     423
     424\begin{lrbox}{\myboxC}
     425\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     426typedef struct {
     427        int fn1, fn;  void * `next`;
     428} Fib;
     429#define FibCtor { 1, 0, NULL }
     430Fib * comain( Fib * f ) {
     431        if ( f->next ) goto *f->next;
     432        f->next = &&s1;
     433        for ( ;; ) {
     434                return f;
     435          s1:; int fn = f->fn + f->fn1;
     436                        f->fn1 = f->fn; f->fn = fn;
     437        }
     438}
     439int main() {
     440        Fib f1 = FibCtor, f2 = FibCtor;
     441        for ( int i = 0; i < 10; i += 1 )
     442                printf("%d %d\n",comain(&f1)->fn,
     443                                 comain(&f2)->fn);
     444}
     445\end{cfa}
     446\end{lrbox}
     447
     448\subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}
     449\hspace{3pt}
     450\vrule
     451\hspace{3pt}
     452\subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     453\hspace{3pt}
     454\vrule
     455\hspace{3pt}
     456\subfloat[C generator implementation]{\label{f:CFibonacciSim}\usebox\myboxC}
     457\caption{Fibonacci (output) Asymmetric Generator}
     458\label{f:FibonacciAsymmetricGenerator}
     459
     460\bigskip
     461
     462\begin{lrbox}{\myboxA}
     463\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     464`generator Fmt` {
     465        char ch;
     466        int g, b;
     467};
     468void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor
     469void ^?{}( Fmt & f ) with(f) { $\C[1.75in]{// destructor}$
     470        if ( g != 0 || b != 0 ) sout | nl; }
     471void `main( Fmt & f )` with(f) {
     472        for () { $\C{// until destructor call}$
     473                for ( g = 0; g < 5; g += 1 ) { $\C{// groups}$
     474                        for ( b = 0; b < 4; b += 1 ) { $\C{// blocks}$
     475                                `suspend;` $\C{// wait for character}$
     476                                while ( ch == '\n' ) `suspend;` // ignore newline
     477                                sout | ch; // print character
     478                        } sout | " ";  // block spacer
     479                } sout | nl; // group newline
     480        }
     481}
     482int main() {
     483        Fmt fmt; $\C{// fmt constructor called}$
     484        for () {
     485                sin | fmt.ch; $\C{// read into generator}$
     486          if ( eof( sin ) ) break;
     487                `resume( fmt );`
     488        }
     489
     490} $\C{// fmt destructor called}\CRT$
     491\end{cfa}
     492\end{lrbox}
     493
     494\begin{lrbox}{\myboxB}
     495\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     496typedef struct {
     497        void * next;
     498        char ch;
     499        int g, b;
     500} Fmt;
     501void comain( Fmt * f ) {
     502        if ( f->next ) goto *f->next;
     503        f->next = &&s1;
     504        for ( ;; ) {
     505                for ( f->g = 0; f->g < 5; f->g += 1 ) {
     506                        for ( f->b = 0; f->b < 4; f->b += 1 ) {
     507                                return;
     508                          s1:;  while ( f->ch == '\n' ) return;
     509                                printf( "%c", f->ch );
     510                        } printf( " " );
     511                } printf( "\n" );
     512        }
     513}
     514int main() {
     515        Fmt fmt = { NULL };  comain( &fmt ); // prime
     516        for ( ;; ) {
     517                scanf( "%c", &fmt.ch );
     518          if ( feof( stdin ) ) break;
     519                comain( &fmt );
     520        }
     521        if ( fmt.g != 0 || fmt.b != 0 ) printf( "\n" );
     522}
     523\end{cfa}
     524\end{lrbox}
     525
     526\subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}
     527\hspace{3pt}
     528\vrule
     529\hspace{3pt}
     530\subfloat[C generator simulation]{\label{f:CFormatSim}\usebox\myboxB}
     531\hspace{3pt}
     532\caption{Formatter (input) Asymmetric Generator}
     533\label{f:FormatterAsymmetricGenerator}
     534\end{figure}
     535
     536
     537\subsection{Generator}
     538
     539Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
     540The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
     541This goal is accomplished through a series of different kinds of generators and their implementations.
     542
     543Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
     544This kind of generator is an \emph{output generator}, producing a new result on each resumption.
     545To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
     546An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining state in global variables is insufficient;
     547hence, state is retained in a closure between calls.
     548Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators.
     549The C version only has the middle execution state because the top execution state becomes declaration initialization.
     550Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a special \CFA @generator@ type.
     551This generator type is then connected to a function named @main@ that takes as its only parameter a reference to the generator type, called a \emph{generator main}.
     552The generator main contains @suspend@ statements that suspend execution without ending the generator, in contrast to @return@.
     553For the Fibonacci generator-main,\footnote{
     554The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics.
     555Multiple aggregates may be opened.}
     556the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@.
     557Any local variables in @main@ \emph{are not retained} between calls;
     558hence, local variables are only for temporary computations \emph{between} suspends.
     559All retained state \emph{must} appear in the generator's type.
     560As well, generator code containing a @suspend@ cannot be refactored into a helper function called by the generator, because @suspend@ is implemented via @return@, so a return from the helper function goes back to the current generator, not the resumer.
     561The generator is started by calling function @resume@ with a generator instance, which begins execution at the top of the generator main, and subsequent @resume@ calls restart the generator at its point of last suspension.
     562Resuming an ended (returned) generator is undefined.
     563Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance.
     564Figure~\ref{f:CFibonacciSim} shows that the C implementation of the \CFA generator needs only one additional field, @next@, to handle retention of execution state.
     565The computed @goto@ at the start of the generator main, which branches to the location following the previous suspend, adds very little cost to the resume call.
     566Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.
     567\begin{cfa}
     568int ?()( Fib & fib ) with( fib ) { return `resume( fib )`.fn; }   // function-call interface
     569int ?()( Fib & fib, int N ) with( fib ) { for ( N - 1 ) `fib()`; return `fib()`; }   // use simple interface
     570double ?()( Fib & fib ) with( fib ) { return (int)`fib()` / 3.14159; } // cast prevents recursive call
     571sout | (int)f1() | (double)f1() | f2( 2 );   // simple interface, cast selects call based on return type, step 2 values
     572\end{cfa}
     573Now, the generator can be a separately-compiled opaque-type only accessed through its interface functions.
     574For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type, and hence only has a single interface, but an implicit closure.
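The opaque-type pattern might look as follows, a sketch in which the file split and the interface name @next@ are illustrative rather than taken from the \CFA distribution.
\begin{cfa}
// fib.hfa -- interface: closure layout and prototypes visible to clients
`generator` Fib { int fn1, fn; };
int next( Fib & fib );
// fib.cfa -- separately-compiled generator main and interface function
void `main( Fib & fib )` with( fib ) {
        [fn1, fn] = [1, 0];
        for () { `suspend;` [fn1, fn] = [fn, fn + fn1]; }
}
int next( Fib & fib ) { return `resume( fib )`.fn; }
\end{cfa}
Because the closure size is fixed by the generator type in the interface, clients can allocate instances while the generator main remains separately compiled.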
     575
     576Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden and possible point of error.
     577(This restriction is removed by the coroutine, see Section~\ref{s:Coroutine}.)
     578Our concern is that static analysis to discriminate local state from temporary variables is complex and beyond the current scope of the \CFA project.
     579As well, supporting variable-size local-state, like variable-length arrays, requires dynamic allocation of the local state, which significantly increases the cost of generator creation/destruction, and is a show-stopper for embedded programming.
     580But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated.
     581Our current experience is that most generator problems have simple data state, including local state, but complex execution state.
     582As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators.
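The hoisting rule is mechanical but easy to violate.
The following sketch (constructed for illustration) contrasts a loop variable correctly hoisted into the closure with a local that is silently lost across @suspend@, per the retention semantics stated above.
\begin{cfa}
`generator` Count { int i; };   // i hoisted: retained across suspends
void main( Count & c ) with( c ) {
        for ( i = 0; i < 3; i += 1 ) `suspend;`  // correct
}
`generator` Bad { };
void main( Bad & b ) {
        for ( int i = 0; i < 3; i += 1 ) `suspend;`  // wrong: local i not retained across suspend
}
\end{cfa}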
     583
     584Figure~\ref{f:CFAFormatGen} shows an asymmetric \newterm{input generator}, @Fmt@, for restructuring text into groups of characters of fixed-size blocks, \ie the input on the left is reformatted into the output on the right, where newlines are ignored.
     585\begin{center}
     586\tt
     587\begin{tabular}{@{}l|l@{}}
     588\multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
     589\begin{tabular}[t]{@{}ll@{}}
     590abcdefghijklmnopqrstuvwxyz \\
     591abcdefghijklmnopqrstuvwxyz
     592\end{tabular}
     593&
     594\begin{tabular}[t]{@{}lllll@{}}
     595abcd    & efgh  & ijkl  & mnop  & qrst  \\
     596uvwx    & yzab  & cdef  & ghij  & klmn  \\
     597opqr    & stuv  & wxyz  &               &
     598\end{tabular}
     599\end{tabular}
     600\end{center}
     601The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
     602The destructor provides a final newline, if the formatted text ends with a partial line.
     603Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
     604For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator.
     605
     606Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} asymmetric generator, a device-driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
     607Device drivers follow the pattern of simple data state but complex execution state, \ie a finite state-machine (FSM) parsing a protocol.
     608For example, the following protocol:
     609\begin{center}
     610\ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots
     611\end{center}
     612is a network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
     613Control characters may appear in a message if preceded by an ESC.
     614When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register.
     615The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer.
     616Hence, the device driver is an input/output generator.
     617
     618Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent.
     619As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variables that must be retained between calls and are hoisted into the generator type.
     620Manually detecting and hoisting local-state variables is easy when the number is small.
     621Finally, the execution state is large, with one @resume@ and eight @suspend@s.
     622Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states of the FSM are transcribed directly into the programming language rather than using a table-driven approach.
     623Because FSMs can be complex and occur frequently in important domains, direct support of the generator is crucial in a systems programming-language.
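As a usage sketch, an interrupt service routine might feed each arriving byte to the generator and act on the returned status; the framing (routine names @serial_isr@, @deliver@, @log_error@) is hypothetical, while @Driver@, @next@, and the status codes come from Figure~\ref{f:DeviceDriverGen}.
\begin{cfa}
char buffer[64 + 1];                            // message buffer (MaxMsg plus terminator)
Driver driver = { buffer };                     // generator instance, FSM at start state
void serial_isr( char hw_byte ) {               // hypothetical interrupt entry point
        Status s = next( driver, hw_byte );     // resume FSM with one byte
        if ( s == MSG ) deliver( buffer );      // hypothetical upcall: message complete
        else if ( s != CONT ) log_error( s );   // hypothetical error path
}
\end{cfa}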
     624
     625\begin{figure}
     626\centering
     627\newbox\myboxA
     628\begin{lrbox}{\myboxA}
     629\begin{python}[aboveskip=0pt,belowskip=0pt]
     630def Fib():
     631    fn1, fn = 0, 1
     632    while True:
     633        `yield fn1`
     634        fn1, fn = fn, fn1 + fn
     635f1 = Fib()
     636f2 = Fib()
     637for i in range( 10 ):
     638        print( next( f1 ), next( f2 ) )
     639
     640
     641
     642
     643
     644
     645\end{python}
     646\end{lrbox}
     647
     648\newbox\myboxB
     649\begin{lrbox}{\myboxB}
     650\begin{python}[aboveskip=0pt,belowskip=0pt]
     651def Fmt():
     652        try:
     653                while True:
     654                        for g in range( 5 ):
     655                                for b in range( 4 ):
     656                                        print( `(yield)`, end='' )
     657                                print( '  ', end='' )
     658                        print()
     659        except GeneratorExit:
     660                if g != 0 or b != 0:
     661                        print()
     662fmt = Fmt()
     663`next( fmt )`                    # prime, next prewritten
     664for i in range( 41 ):
     665        `fmt.send( 'a' );`      # send to yield
     666\end{python}
     667\end{lrbox}
     668\subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
     669\hspace{3pt}
     670\vrule
     671\hspace{3pt}
     672\subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
     673\caption{Python Generator}
     674\label{f:PythonGenerator}
     675
     676\bigskip
     677
     678\begin{tabular}{@{}l|l@{}}
     679\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     680enum Status { CONT, MSG, ESTX,
     681                                ELNTH, ECRC };
     682`generator` Driver {
     683        Status status;
     684        unsigned char byte, * msg; // communication
     685        unsigned int lnth, sum;      // local state
     686        unsigned short int crc;
     687};
     688void ?{}( Driver & d, char * m ) { d.msg = m; }
     689Status next( Driver & d, char b ) with( d ) {
     690        byte = b; `resume( d );` return status;
     691}
     692void main( Driver & d ) with( d ) {
     693        enum { STX = '\002', ESC = '\033',
     694                        ETX = '\003', MaxMsg = 64 };
     695  msg: for () { // parse message
     696                status = CONT;
     697                lnth = 0; sum = 0;
     698                while ( byte != STX ) `suspend;`
     699          emsg: for () {
     700                        `suspend;` // process byte
     701\end{cfa}
     702&
     703\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     704                        choose ( byte ) { // switch with implicit break
     705                          case STX:
     706                                status = ESTX; `suspend;` continue msg;
     707                          case ETX:
     708                                break emsg;
     709                          case ESC:
     710                                `suspend;`
     711                        }
     712                        if ( lnth >= MaxMsg ) { // buffer full ?
     713                                status = ELNTH; `suspend;` continue msg; }
     714                        msg[lnth++] = byte;
     715                        sum += byte;
     716                }
     717                msg[lnth] = '\0'; // terminate string
     718                `suspend;`
     719                crc = byte << 8;
     720                `suspend;`
     721                status = (crc | byte) == sum ? MSG : ECRC;
     722                `suspend;`
     723        }
     724}
     725\end{cfa}
     726\end{tabular}
     727\caption{Device-driver generator for communication protocol}
     728\label{f:DeviceDriverGen}
     729\end{figure}
     730
     731Figure~\ref{f:CFAPingPongGen} shows a symmetric generator, where the generator resumes another generator, forming a resume/resume cycle.
     732(The trivial cycle is a generator resuming itself.)
     733This control flow is similar to recursion for functions, but without stack growth.
     734The steps for symmetric control-flow are creating, executing, and terminating the cycle.
     735Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope.
     736(This issue occurs for any cyclic data-structure.)
     737% The example creates all the generators and then assigns the partners that form the cycle.
     738% Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.
     739Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
     740Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
     741The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
     742Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the cost is the same as a function return.
     743Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.
     744
     745Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump.
     746This jump starts at the top of the next generator main to re-execute the normal calling convention to make space on the stack for its local variables.
     747However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but subsequently jump forward, not backward.
     748This semantics is basically a tail-call optimization, which compilers already perform.
     749Hence, assembly code is manually inserted in the example to undo the generator's entry code before the direct jump.
     750This assembly code is fragile as it depends on what entry code is generated, specifically if there are local variables and the level of optimization.
     751To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time.
     752Nevertheless, it is possible to hand-generate any symmetric generator for proof of concept and performance testing.
     753A compiler could also eliminate other artifacts in the generator simulation to further increase performance.
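A portable alternative to the hand-inserted assembly is a trampoline: each generator returns its partner to a small driver loop, which performs the transfer as an ordinary call, trading one extra return/call per switch for portability.
The following sketch (for illustration only, not the \CFA implementation) restructures Figure~\ref{f:CPingPongSim} accordingly.
\begin{cfa}
#include <stdio.h>
typedef struct PingPong {
        const char * name;
        struct PingPong * partner;
        int N, i;
        void * next;
} PingPong;
PingPong * comain( PingPong * pp ) {            // returns next generator to run
        if ( pp->next ) goto *pp->next;
        pp->next = &&cycle;
        for ( ; pp->i < pp->N; pp->i += 1 ) {
                printf( "%s %d\n", pp->name, pp->i );
                return pp->partner;             // bounce through the driver loop
          cycle: ;
        }
        return NULL;                            // cycle terminates
}
int main() {
        PingPong ping = { "ping", NULL, 5, 0, NULL }, pong = { "pong", NULL, 5, 0, NULL };
        ping.partner = &pong;  pong.partner = &ping;
        for ( PingPong * cur = &ping; cur; ) cur = comain( cur );  // driver (trampoline) loop
}
\end{cfa}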
     754
     755\begin{figure}
     756\centering
     757\begin{lrbox}{\myboxA}
     758\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     759`generator PingPong` {
     760        const char * name;
     761        PingPong & partner; // rebindable reference
     762        int N, i;
     763};
     764void ?{}(PingPong & pp, const char * nm, int N) with(pp) {
     765        name = nm;  &partner = 0p;  pp.N = N;  i = 0; }
     766void `main( PingPong & pp )` with(pp) {
     767        for ( ; i < N; i += 1 ) {
     768                sout | name | i;
     769                `resume( partner );`
     770        }
     771}
     772int main() {
     773        enum { N = 5 };
     774        PingPong ping = {"ping", N}, pong = {"pong", N};
     775        &ping.partner = &pong;  &pong.partner = &ping;
     776        `resume( ping );`
     777}
     778\end{cfa}
     779\end{lrbox}
     780
     781\begin{lrbox}{\myboxB}
     782\begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt]
     783typedef struct PingPong {
     784        const char * name;
     785        struct PingPong * partner;
     786        int N, i;
     787        void * next;
     788} PingPong;
     789#define PPCtor(name, N) { name, NULL, N, 0, NULL }
     790void comain( PingPong * pp ) {
     791        if ( pp->next ) goto *pp->next;
     792        pp->next = &&cycle;
     793        for ( ; pp->i < pp->N; pp->i += 1 ) {
     794                printf( "%s %d\n", pp->name, pp->i );
     795                asm( "mov  %0,%%rdi" : : "m" (pp->partner) );
     796                asm( "mov  %rdi,%rax" );
     797                asm( "popq %rbx" );
     798                asm( "jmp  comain" );
     799          cycle: ;
     800        }
     801}
     802\end{cfa}
     803\end{lrbox}
     804
     805\subfloat[\CFA symmetric generator]{\label{f:CFAPingPongGen}\usebox\myboxA}
     806\hspace{3pt}
     807\vrule
     808\hspace{3pt}
     809\subfloat[C generator simulation]{\label{f:CPingPongSim}\usebox\myboxB}
     810\hspace{3pt}
     811\caption{Ping-Pong Symmetric Generator}
     812\label{f:PingPongSymmetricGenerator}
     813\end{figure}
     814
     815Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines).
     816Our work provides the same high-performance asymmetric-generators as \CCtwenty, and extends their work with symmetric generators.
     817An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound-statement that is executed after the current generator has reset its stack but before calling the next generator, specified with the following \CFA syntax.
     818\begin{cfa}
     819... suspend`{ ... }`;
     820... resume( C )`{ ... }` ...
     821\end{cfa}
     822Since the current generator's stack is released before calling the compound statement, the compound statement can only reference variables in the generator's type.
     823This feature is useful when a generator is used in a concurrent context to ensure it is stopped before releasing a lock in the compound statement, which might immediately allow another thread to resume the generator.
     824Hence, this mechanism provides a general and safe handoff of the generator among competing threads.
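A hypothetical sketch of this handoff (the generator @Shared@ and lock @L@ are illustrative, not APIs from this paper): the generator is fully stopped before the lock release allows a competing thread to resume it.
\begin{cfa}
// assume: some lock L with lock()/unlock(), shared by all threads
`generator` Shared { int value; };              // hypothetical shared generator
void main( Shared & g ) with( g ) {
        for () {
                value += 1;                     // work done while holding lock L
                suspend{ unlock( L ); }         // fully stopped before L is released
        }
}
// each thread: lock( L ); resume( shared );    // no unlock: the suspend releases L
\end{cfa}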
     825
     826
     827\subsection{Coroutine}
     828\label{s:Coroutine}
     829
     830Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
     831A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
     832This generality results in higher cost for creation, due to dynamic stack allocation; for execution, due to context switching among stacks; and for termination, due to possible stack unwinding and dynamic stack deallocation.
     833How coroutines differ from generators is shown through a series of examples.
     834
     835First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main.
     836\begin{description}
     837\item[Fibonacci]
     838Move the declaration of @fn1@ to the start of coroutine main.
     839\begin{cfa}[xleftmargin=0pt]
     840void main( Fib & fib ) with(fib) {
     841        `int fn1;`
     842\end{cfa}
     843\item[Formatter]
     844Move the declaration of @g@ and @b@ to the for loops in the coroutine main.
     845\begin{cfa}[xleftmargin=0pt]
     846for ( `g`; 5 ) {
     847        for ( `b`; 4 ) {
     848\end{cfa}
     849\item[Device Driver]
     850Move the declaration of @lnth@ and @sum@ to their points of initialization.
     851\begin{cfa}[xleftmargin=0pt]
     852        status = CONT;
     853        `unsigned int lnth = 0, sum = 0;`
     854        ...
     855        `unsigned short int crc = byte << 8;`
     856\end{cfa}
     857\item[PingPong]
     858Move the declaration of @i@ to the for loop in the coroutine main.
     859\begin{cfa}[xleftmargin=0pt]
     860void main( PingPong & pp ) with(pp) {
     861        for ( `i`; N ) {
     862\end{cfa}
     863\end{description}
     864It is also possible to refactor code containing local-state and @suspend@ statements into a helper routine, like the computation of the CRC for the device driver.
     865\begin{cfa}
     866unsigned int Crc() {
     867        `suspend;`
     868        unsigned short int crc = byte << 8;
     869        `suspend;`
     870        status = (crc | byte) == sum ? MSG : ECRC;
     871        return crc;
     872}
     873\end{cfa}
     874A call to this function is placed at the end of the driver's coroutine-main.
     875For complex finite-state machines, refactoring is part of normal program abstraction, especially when code is used in multiple places.
     876Again, this complexity is usually associated with execution state rather than data state.
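As a sketch, the end of the refactored driver coroutine-main might read as follows, where only the names from Figure~\ref{f:DeviceDriverGen} are reused and the elided parsing is unchanged.
\begin{cfa}
void main( Driver & d ) with( d ) {
  msg: for () {                                 // parse message
                // ... collect message bytes as in Figure~\ref{f:DeviceDriverGen} ...
                msg[lnth] = '\0';               // terminate string
                Crc();                          // helper suspends twice and sets status
                `suspend;`                      // report final status
        }
}
\end{cfa}
Here @Crc@ is assumed nested in (or otherwise given access to) the driver's state, \eg via the \CFA @with@ scope.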
     877
    348878\begin{comment}
    349 This paper provides a minimal concurrency \newterm{Application Program Interface} (API) that is simple, efficient and can be used to build other concurrency features.
    350 While the simplest concurrency system is a thread and a lock, this low-level approach is hard to master.
    351 An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
    352 Indeed, for highly-productive concurrent-programming, high-level approaches are much more popular~\cite{Hochstein05}.
    353 Examples of high-level approaches are jobs (thread pool)~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.
    354 
    355 The following terminology is used.
    356 A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
    357 Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure access to shared data and safe communication.
    358 \newterm{Locking}, and by extension \newterm{locks}, are defined as a mechanism to prevent progress of threads to provide safety.
    359 \newterm{Parallelism} is running multiple threads simultaneously.
    360 Parallelism implies \emph{actual} simultaneous execution, where concurrency only requires \emph{apparent} simultaneous execution.
    361 As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime.
    362 Hence, there are two problems to be solved: concurrency and parallelism.
    363 While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
    364 Concurrency tools handle mutual exclusion and synchronization, while parallelism tools handle performance, cost, and resource utilization.
    365 
    366 The proposed concurrency API is implemented in a dialect of C, called \CFA (pronounced C-for-all).
    367 The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantics, and type checking, and the corresponding high-performance runtime-library to implement the concurrent features.
    368 \end{comment}
    369 
    370 
    371 \begin{comment}
    372 \section{\CFA Overview}
    373 
    374 The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    375 Extended versions and explanation of the following code examples are available at the \CFA website~\cite{Cforall} or in Moss~\etal~\cite{Moss18}.
    376 
    377 \CFA is a non-object-oriented extension of ISO-C, and hence, supports all C paradigms.
    378 Like C, the building blocks of \CFA are structures and routines.
    379 Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
    380 While \CFA is not object oriented, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C has a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
    381 While some object-oriented features appear in \CFA, they are independent capabilities, allowing \CFA to adopt them while maintaining a procedural paradigm.
    382 
    383 
    384 \subsection{References}
    385 
    386 \CFA provides multi-level rebindable references, as an alternative to pointers, which significantly reduces syntactic noise.
    387 \begin{cfa}
    388 int x = 1, y = 2, z = 3;
    389 int * p1 = &x, ** p2 = &p1,  *** p3 = &p2,      $\C{// pointers to x}$
    390     `&` r1 = x,   `&&` r2 = r1,   `&&&` r3 = r2;        $\C{// references to x}$
    391 int * p4 = &z, `&` r4 = z;
    392 
    393 *p1 = 3; **p2 = 3; ***p3 = 3;       // change x
    394 r1 =  3;     r2 = 3;      r3 = 3;        // change x: implicit dereferences *r1, **r2, ***r3
    395 **p3 = &y; *p3 = &p4;                // change p1, p2
    396 `&`r3 = &y; `&&`r3 = &`&`r4;             // change r1, r2: cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
    397 \end{cfa}
    398 A reference is a handle to an object, like a pointer, but is automatically dereferenced the specified number of levels.
    399 Referencing (address-of @&@) a reference variable cancels one of the implicit dereferences, until there are no more implicit references, after which normal expression behaviour applies.
    400 
    401 
    402 \subsection{\texorpdfstring{\protect\lstinline{with} Statement}{with Statement}}
    403 \label{s:WithStatement}
    404 
    405 Heterogeneous data is aggregated into a structure/union.
    406 To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers.
    407 \begin{cquote}
    408 \vspace*{-\baselineskip}%???
    409 \lstDeleteShortInline@%
    410 \begin{cfa}
    411 struct S { char c; int i; double d; };
    412 struct T { double m, n; };
    413 // multiple aggregate parameters
    414 \end{cfa}
    415 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    416 \begin{cfa}
    417 void f( S & s, T & t ) {
    418         `s.`c; `s.`i; `s.`d;
    419         `t.`m; `t.`n;
    420 }
    421 \end{cfa}
    422 &
    423 \begin{cfa}
    424 void f( S & s, T & t ) `with ( s, t )` {
    425         c; i; d;                // no qualification
    426         m; n;
    427 }
    428 \end{cfa}
    429 \end{tabular}
    430 \lstMakeShortInline@%
    431 \end{cquote}
    432 Object-oriented programming languages only provide implicit qualification for the receiver.
    433 
    434 In detail, the @with@-statement syntax is:
    435 \begin{cfa}
    436 $\emph{with-statement}$:
    437         'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$
    438 \end{cfa}
    439 and may appear as the body of a routine or nested within a routine body.
    440 Each expression in the expression-list provides a type and object.
    441 The type must be an aggregate type.
    442 (Enumerations are already opened.)
    443 The object is the implicit qualifier for the open structure-fields.
    444 All expressions in the expression list are opened in parallel within the compound statement, which is different from Pascal, which nests the openings from left to right.
    445 
    446 
    447 \subsection{Overloading}
    448 
    449 \CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem.
    450 Both variables and routines may be overloaded, where selection is based on number and types of returns and arguments (as in Ada~\cite{Ada}).
    451 \newpage
    452 \vspace*{-2\baselineskip}%???
    453 \begin{cquote}
    454 \begin{cfa}
    455 // selection based on type
    456 \end{cfa}
    457 \lstDeleteShortInline@%
    458 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    459 \begin{cfa}
    460 const short int `MIN` = -32768;
    461 const int `MIN` = -2147483648;
    462 const long int `MIN` = -9223372036854775808L;
    463 \end{cfa}
    464 &
    465 \begin{cfa}
    466 short int si = `MIN`;
    467 int i = `MIN`;
    468 long int li = `MIN`;
    469 \end{cfa}
    470 \end{tabular}
    471 \begin{cfa}
    472 // selection based on type and number of parameters
    473 \end{cfa}
    474 \begin{tabular}{@{}l@{\hspace{2.7\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    475 \begin{cfa}
    476 void `f`( void );
    477 void `f`( char );
    478 void `f`( int, double );
    479 \end{cfa}
    480 &
    481 \begin{cfa}
    482 `f`();
    483 `f`( 'a' );
    484 `f`( 3, 5.2 );
    485 \end{cfa}
    486 \end{tabular}
    487 \begin{cfa}
    488 // selection based on type and number of returns
    489 \end{cfa}
    490 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    491 \begin{cfa}
    492 char `f`( int );
    493 double `f`( int );
    494 [char, double] `f`( int );
    495 \end{cfa}
    496 &
    497 \begin{cfa}
    498 char c = `f`( 3 );
    499 double d = `f`( 3 );
    500 [d, c] = `f`( 3 );
    501 \end{cfa}
    502 \end{tabular}
    503 \lstMakeShortInline@%
    504 \end{cquote}
    505 Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
    506 Therefore, overloading eliminates long prefixes and other naming conventions to prevent name clashes.
    507 As seen in Section~\ref{s:Concurrency}, routine @main@ is heavily overloaded.
    508 As another example, variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
    509 \begin{cfa}
    510 struct S { int `i`; int j; double m; } s;
    511 struct T { int `i`; int k; int m; } t;
    512 with ( s, t ) {
    513         j + k;                                                                  $\C{// unambiguous, s.j + t.k}$
    514         m = 5.0;                                                                $\C{// unambiguous, s.m = 5.0}$
    515         m = 1;                                                                  $\C{// unambiguous, t.m = 1}$
    516         int a = m;                                                              $\C{// unambiguous, a = t.m }$
    517         double b = m;                                                   $\C{// unambiguous, b = s.m}$
    518         int c = `s.i` + `t.i`;                                  $\C{// unambiguous, qualification}$
    519         (double)m;                                                              $\C{// unambiguous, cast s.m}$
    520 }
    521 \end{cfa}
    522 For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
    523 
    524 
    525 \subsection{Operators}
    526 
    527 Overloading also extends to operators.
    528 Operator-overloading syntax creates a routine name with an operator symbol and question marks for the operands:
    529 \begin{cquote}
    530 \lstDeleteShortInline@%
    531 \begin{tabular}{@{}ll@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
    532 \begin{cfa}
    533 int ++?(int op);
    534 int ?++(int op);
    535 int `?+?`(int op1, int op2);
    536 int ?<=?(int op1, int op2);
    537 int ?=? (int & op1, int op2);
    538 int ?+=?(int & op1, int op2);
    539 \end{cfa}
    540 &
    541 \begin{cfa}
    542 // unary prefix increment
    543 // unary postfix increment
    544 // binary plus
    545 // binary less than
    546 // binary assignment
    547 // binary plus-assignment
    548 \end{cfa}
    549 &
    550 \begin{cfa}
    551 struct S { int i, j; };
    552 S `?+?`( S op1, S op2) { // add two structures
    553         return (S){op1.i + op2.i, op1.j + op2.j};
    554 }
    555 S s1 = {1, 2}, s2 = {2, 3}, s3;
    556 s3 = s1 `+` s2;         // compute sum: s3 == {2, 5}
    557 \end{cfa}
    558 \end{tabular}
    559 \lstMakeShortInline@%
    560 \end{cquote}
    561 
    562 
    563 \subsection{Constructors / Destructors}
    564 
    565 Object lifetime is a challenge in non-managed programming languages.
    566 \CFA responds with \CC-like constructors and destructors, using a different operator-overloading syntax.
    567 \begin{cfa}
    568 struct VLA { int len, * data; };                        $\C{// variable length array of integers}$
    569 void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  $\C{// default constructor}$
    570 void ?{}( VLA & vla, int size, char fill ) with ( vla ) { len = size;  data = alloc( len, fill ); } // initialization
    571 void ?{}( VLA & vla, VLA other ) { vla.len = other.len;  vla.data = other.data; } $\C{// copy, shallow}$
    572 void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$
    573 {
    574         VLA  x,            y = { 20, 0x01 },     z = y; $\C{// z points to y}$
    575         // $\LstCommentStyle{\color{red}\ \ \ x\{\};\ \ \ \ \ \ \ \ \ y\{ 20, 0x01 \};\ \ \ \ \ \ \ \ \ \ z\{ z, y \};\ \ \ \ \ \ \ implicit calls}$
    576         ^x{};                                                                   $\C{// deallocate x}$
    577         x{};                                                                    $\C{// reallocate x}$
    578         z{ 5, 0xff };                                                   $\C{// reallocate z, not pointing to y}$
    579         ^y{};                                                                   $\C{// deallocate y}$
    580         y{ x };                                                                 $\C{// reallocate y, points to x}$
    581         x{};                                                                    $\C{// reallocate x, not pointing to y}$
    582 }       //  $\LstCommentStyle{\color{red}\^{}z\{\};\ \ \^{}y\{\};\ \ \^{}x\{\};\ \ \ implicit calls}$
    583 \end{cfa}
    584 Like \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation.
    585 The object and all their fields are constructed/destructed.
    586 \CFA also provides @new@ and @delete@ as library routines, which behave like @malloc@ and @free@, in addition to constructing and destructing objects:
    587 \begin{cfa}
    588 {
    589         ... struct S s = {10}; ...                              $\C{// allocation, call constructor}$
    590 }                                                                                       $\C{// deallocation, call destructor}$
    591 struct S * s = new();                                           $\C{// allocation, call constructor}$
    592 ...
    593 delete( s );                                                            $\C{// deallocation, call destructor}$
    594 \end{cfa}
    595 \CFA concurrency uses object lifetime as a means of mutual exclusion and/or synchronization.
    596 
    597 
    598 \subsection{Parametric Polymorphism}
    599 \label{s:ParametricPolymorphism}
    600 
    601 The signature feature of \CFA is parametric-polymorphic routines~\cite{Cforall} with routines generalized using a @forall@ clause (giving the language its name), which allow separately compiled routines to support generic usage over multiple types.
    602 For example, the following sum routine works for any type that supports construction from 0 and addition:
    603 \begin{cfa}
    604 forall( otype T | { void `?{}`( T *, zero_t ); T `?+?`( T, T ); } ) // constraint type, 0 and +
    605 T sum( T a[$\,$], size_t size ) {
    606         `T` total = { `0` };                                    $\C{// initialize by 0 constructor}$
    607         for ( size_t i = 0; i < size; i += 1 )
    608                 total = total `+` a[i];                         $\C{// select appropriate +}$
    609         return total;
    610 }
    611 S sa[5];
    612 int i = sum( sa, 5 );                                           $\C{// use S's 0 construction and +}$
    613 \end{cfa}
    614 Type variables can be @otype@ or @dtype@.
    615 @otype@ refers to a \emph{complete type}, \ie, a type with size, alignment, default constructor, copy constructor, destructor, and assignment operator.
    616 @dtype@ refers to an \emph{incomplete type}, \eg, void or a forward-declared type.
    617 The builtin types @zero_t@ and @one_t@ overload constant 0 and 1 for a new types, where both 0 and 1 have special meaning in C.
    618 
    619 \CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each routine declaration:
    620 \begin{cfa}
    621 trait `sumable`( otype T ) {
    622         void `?{}`( T &, zero_t );                              $\C{// 0 literal constructor}$
    623         T `?+?`( T, T );                                                $\C{// assortment of additions}$
    624         T ?+=?( T &, T );
    625         T ++?( T & );
    626         T ?++( T & );
    627 };
    628 forall( otype T `| sumable( T )` )                      $\C{// use trait}$
    629 T sum( T a[$\,$], size_t size );
    630 \end{cfa}
    631 
    632 Using the return type for overload discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@:
    633 \begin{cfa}
    634 forall( dtype T | sized(T) ) T * alloc( void ) { return (T *)malloc( sizeof(T) ); }
    635 int * ip = alloc();                                                     $\C{// select type and size from left-hand side}$
    636 double * dp = alloc();
    637 struct S {...} * sp = alloc();
    638 \end{cfa}
    639 where the return type supplies the type/size of the allocation, which is impossible in most type systems.
    640 \end{comment}
    641 
    642 
    643 \section{Coroutines: Stepping Stone}
    644 \label{coroutine}
    645 
    646 Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
    647 Hence, unlike a normal routine, a coroutine may not terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension.
    648 This capability is accomplished via the coroutine's stack, where suspend/resume context switch among stacks.
    649 Because threading design-challenges are present in coroutines, their design effort is relevant, and this effort can be easily exposed to programmers giving them a useful new programming paradigm because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
    650 Therefore, the two fundamental features of the core \CFA coroutine-API are independent call-stacks and @suspend@/@resume@ operations.
    651 
    652 For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers
    653 \begin{displaymath}
    654 \mathsf{fib}(n) = \left \{
    655 \begin{array}{ll}
    656 0                                       & n = 0         \\
    657 1                                       & n = 1         \\
    658 \mathsf{fib}(n-1) + \mathsf{fib}(n-2)   & n \ge 2       \\
    659 \end{array}
    660 \right.
    661 \end{displaymath}
    662 where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
663 Figure~\ref{f:GlobalVariables} illustrates the following problems: unencapsulated global variables are necessary to retain state between calls, only one Fibonacci generator can exist, and execution state must be explicitly retained via state variables.
    664 Figure~\ref{f:ExternalState} addresses these issues: unencapsulated program global variables become encapsulated structure variables, unique global variables are replaced by multiple Fibonacci objects, and explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$.
     879Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, \eg @next@.
     880Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
 881The coroutine main's stack holds the state for the next generation, @fn1@ and @fn2@, and the code represents the three states in the Fibonacci formula via three suspend points, which context switch back to the caller's @resume@.
     882The interface routine @next@, takes a Fibonacci instance and context switches to it using @resume@;
     883on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
     884The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
     885when the coroutine main returns, its stack is deallocated.
     886Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
     887Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
     888Coroutine generators are called \newterm{output coroutines} because values are only returned.
    665889
    666890\begin{figure}
     
    689913\begin{lrbox}{\myboxA}
    690914\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    691 #define FIB_INIT { 0, 1 }
     915#define FibCtor { 0, 1 }
    692916typedef struct { int fn1, fn; } Fib;
    693917int fib( Fib * f ) {
     
    702926
    703927int main() {
    704         Fib f1 = FIB_INIT, f2 = FIB_INIT;
     928        Fib f1 = FibCtor, f2 = FibCtor;
    705929        for ( int i = 0; i < 10; i += 1 ) {
    706930                printf( "%d %d\n",
     
    719943        [fn1, fn] = [0, 1];
    720944        for () {
    721                 `suspend();`
     945                `suspend;`
    722946                [fn1, fn] = [fn, fn1 + fn];
    723947        }
    724948}
    725949int ?()( Fib & fib ) with( fib ) {
    726         `resume( fib );`  return fn1;
     950        return `resume( fib )`.fn1;
    727951}
    728952int main() {
     
    772996\caption{Fibonacci Generator}
    773997\label{f:C-fibonacci}
    774 
    775 % \bigskip
    776 %
    777 % \newbox\myboxA
    778 % \begin{lrbox}{\myboxA}
    779 % \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    780 % `coroutine` Fib { int fn; };
    781 % void main( Fib & fib ) with( fib ) {
    782 %       fn = 0;  int fn1 = fn; `suspend()`;
    783 %       fn = 1;  int fn2 = fn1;  fn1 = fn; `suspend()`;
    784 %       for () {
    785 %               fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `suspend()`; }
    786 % }
    787 % int next( Fib & fib ) with( fib ) { `resume( fib );` return fn; }
    788 % int main() {
    789 %       Fib f1, f2;
    790 %       for ( 10 )
    791 %               sout | next( f1 ) | next( f2 );
    792 % }
    793 % \end{cfa}
    794 % \end{lrbox}
    795 % \newbox\myboxB
    796 % \begin{lrbox}{\myboxB}
    797 % \begin{python}[aboveskip=0pt,belowskip=0pt]
    798 %
    799 % def Fibonacci():
    800 %       fn = 0; fn1 = fn; `yield fn`  # suspend
    801 %       fn = 1; fn2 = fn1; fn1 = fn; `yield fn`
    802 %       while True:
    803 %               fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `yield fn`
    804 %
    805 %
    806 % f1 = Fibonacci()
    807 % f2 = Fibonacci()
    808 % for i in range( 10 ):
    809 %       print( `next( f1 )`, `next( f2 )` ) # resume
    810 %
    811 % \end{python}
    812 % \end{lrbox}
    813 % \subfloat[\CFA]{\label{f:Coroutine3States}\usebox\myboxA}
    814 % \qquad
    815 % \subfloat[Python]{\label{f:Coroutine1State}\usebox\myboxB}
    816 % \caption{Fibonacci input coroutine, 3 states, internal variables}
    817 % \label{f:cfa-fibonacci}
    818998\end{figure}
    819999
    820 Using a coroutine, it is possible to express the Fibonacci formula directly without any of the C problems.
    821 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, \eg @next@.
    822 Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
823 The coroutine main's stack holds the state for the next generation, @fn1@ and @fn2@, and the code represents the three states in the Fibonacci formula via three suspend points, which context switch back to the caller's @resume@.
    824 The interface routine @next@, takes a Fibonacci instance and context switches to it using @resume@;
    825 on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
    826 The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
    827 when the coroutine main returns, its stack is deallocated.
    828 Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
    829 Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
    830 Coroutine generators are called \newterm{output coroutines} because values are only returned.
    831 
    832 Figure~\ref{f:CFAFmt} shows an \newterm{input coroutine}, @Format@, for restructuring text into groups of characters of fixed-size blocks.
833 For example, the input on the left is reformatted into the output on the right.
    834 \begin{quote}
    835 \tt
    836 \begin{tabular}{@{}l|l@{}}
    837 \multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
    838 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
    839 &
    840 \begin{tabular}[t]{@{}lllll@{}}
    841 abcd    & efgh  & ijkl  & mnop  & qrst  \\
    842 uvwx    & yzab  & cdef  & ghij  & klmn  \\
    843 opqr    & stuv  & wxyz  &               &
    844 \end{tabular}
    845 \end{tabular}
    846 \end{quote}
    847 The example takes advantage of resuming a coroutine in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
848 The destructor provides a final newline, if the formatted text ends with a partial line.
    849 Figure~\ref{f:CFmt} shows the C equivalent formatter, where the loops of the coroutine are flattened (linearized) and rechecked on each call because execution location is not retained between calls.
    850 (Linearized code is the bane of device drivers.)
    851 
    852 \begin{figure}
    853 \centering
     1000\bigskip
     1001
    8541002\newbox\myboxA
    8551003\begin{lrbox}{\myboxA}
    8561004\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    857 `coroutine` Fmt {
    858         char ch;   // communication variables
    859         int g, b;   // needed in destructor
    860 };
    861 void main( Fmt & fmt ) with( fmt ) {
     1005`coroutine` Fib { int fn; };
     1006void main( Fib & fib ) with( fib ) {
     1007        fn = 0;  int fn1 = fn; `suspend`;
     1008        fn = 1;  int fn2 = fn1;  fn1 = fn; `suspend`;
    8621009        for () {
    863                 for ( g = 0; g < 5; g += 1 ) { // groups
    864                         for ( b = 0; b < 4; b += 1 ) { // blocks
    865                                 `suspend();`
    866                                 sout | ch; } // print character
    867                         sout | "  "; } // block separator
    868                 sout | nl; }  // group separator
    869 }
    870 void ?{}( Fmt & fmt ) { `resume( fmt );` } // prime
    871 void ^?{}( Fmt & fmt ) with( fmt ) { // destructor
    872         if ( g != 0 || b != 0 ) // special case
    873                 sout | nl; }
    874 void send( Fmt & fmt, char c ) { fmt.ch = c; `resume( fmt )`; }
     1010                fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `suspend`; }
     1011}
     1012int next( Fib & fib ) with( fib ) { `resume( fib );` return fn; }
    8751013int main() {
    876         Fmt fmt;
    877         sout | nlOff;   // turn off auto newline
    878         for ( 41 )
    879                 send( fmt, 'a' );
     1014        Fib f1, f2;
     1015        for ( 10 )
     1016                sout | next( f1 ) | next( f2 );
    8801017}
    8811018\end{cfa}
    8821019\end{lrbox}
    883 
    8841020\newbox\myboxB
    8851021\begin{lrbox}{\myboxB}
    8861022\begin{python}[aboveskip=0pt,belowskip=0pt]
    8871023
    888 
    889 
    890 def Fmt():
    891         try:
    892                 while True:
    893                         for g in range( 5 ):
    894                                 for b in range( 4 ):
    895 
    896                                         print( `(yield)`, end='' )
    897                                 print( '  ', end='' )
    898                         print()
    899 
    900 
    901         except GeneratorExit:
    902                 if g != 0 | b != 0:
    903                         print()
    904 
    905 
    906 fmt = Fmt()
    907 `next( fmt )`                    # prime
    908 for i in range( 41 ):
    909         `fmt.send( 'a' );`      # send to yield
     1024def Fibonacci():
     1025        fn = 0; fn1 = fn; `yield fn`  # suspend
     1026        fn = 1; fn2 = fn1; fn1 = fn; `yield fn`
     1027        while True:
     1028                fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `yield fn`
     1029
     1030
     1031f1 = Fibonacci()
     1032f2 = Fibonacci()
     1033for i in range( 10 ):
     1034        print( `next( f1 )`, `next( f2 )` ) # resume
    9101035
    9111036\end{python}
    9121037\end{lrbox}
    913 \subfloat[\CFA]{\label{f:CFAFmt}\usebox\myboxA}
     1038\subfloat[\CFA]{\label{f:Coroutine3States}\usebox\myboxA}
    9141039\qquad
    915 \subfloat[Python]{\label{f:CFmt}\usebox\myboxB}
    916 \caption{Output formatting text}
    917 \label{f:fmt-line}
     1040\subfloat[Python]{\label{f:Coroutine1State}\usebox\myboxB}
     1041\caption{Fibonacci input coroutine, 3 states, internal variables}
     1042\label{f:cfa-fibonacci}
    9181043\end{figure}
    919 
    920 The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming routine for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal routines.
    921 However, @resume@ and @suspend@ context switch among existing stack-frames, rather than create new ones so there is no stack growth.
    922 \newterm{Symmetric (full) coroutine}s have a coroutine call to a resuming routine for another coroutine, and its coroutine main calls another resuming routine, which eventually forms a resuming-call cycle.
    923 (The trivial cycle is a coroutine resuming itself.)
    924 This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
     1044\end{comment}
    9251045
    9261046\begin{figure}
     
    9301050\begin{cfa}
    9311051`coroutine` Prod {
    932         Cons & c;
     1052        Cons & c;                       // communication
    9331053        int N, money, receipt;
    9341054};
     
    9511071}
    9521072void start( Prod & prod, int N, Cons &c ) {
    953         &prod.c = &c; // reassignable reference
     1073        &prod.c = &c;
    9541074        prod.[N, receipt] = [N, 0];
    9551075        `resume( prod );`
     
    9641084\begin{cfa}
    9651085`coroutine` Cons {
    966         Prod & p;
     1086        Prod & p;                       // communication
    9671087        int p1, p2, status;
    9681088        bool done;
     
    9721092        cons.[status, done ] = [0, false];
    9731093}
    974 void ^?{}( Cons & cons ) {}
    9751094void main( Cons & cons ) with( cons ) {
    9761095        // 1st resume starts here
     
    9941113        `resume( cons );`
    9951114}
     1115
    9961116\end{cfa}
    9971117\end{tabular}
    9981118\caption{Producer / consumer: resume-resume cycle, bi-directional communication}
    9991119\label{f:ProdCons}
     1120
     1121\medskip
     1122
     1123\begin{center}
     1124\input{FullProdConsStack.pstex_t}
     1125\end{center}
     1126\vspace*{-10pt}
     1127\caption{Producer / consumer runtime stacks}
     1128\label{f:ProdConsRuntimeStacks}
     1129
     1130\medskip
     1131
     1132\begin{center}
     1133\input{FullCoroutinePhases.pstex_t}
     1134\end{center}
     1135\vspace*{-10pt}
     1136\caption{Ping / Pong coroutine steps}
     1137\label{f:PingPongFullCoroutineSteps}
    10001138\end{figure}
    10011139
    1002 Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
    1003 Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
    1004 The @start@ routine communicates both the number of elements to be produced and the consumer into the producer's coroutine-structure.
    1005 Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
    1006 @prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
     1140Figure~\ref{f:ProdCons} shows the ping-pong example in Figure~\ref{f:CFAPingPongGen} extended into a producer/consumer symmetric-coroutine performing bidirectional communication.
 1141This example is illustrative because both the producer and consumer have two interface functions with @resume@s that suspend execution in these interface (helper) functions.
     1142The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced.
     1143The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
     1144@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
    10071145
    10081146The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
    1009 For the first resume, @cons@'s stack is initialized, creating local variables retained between subsequent activations of the coroutine.
     1147On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
10101148The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments @status@, and calls back to the producer via @payment@; on return from @payment@, it prints the receipt from the producer and increments @money@ (inflation).
    10111149The call from the consumer to @payment@ introduces the cycle between producer and consumer.
    10121150When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
    10131151The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
    1014 
    10151152@delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
    10161153The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
     
    10181155The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
    10191156The loop then repeats calling @payment@, where each call resumes the producer coroutine.
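The two interface functions driving the cycle can be sketched as follows, consistent with the prose description and the communication fields in Figure~\ref{f:ProdCons}; the bodies elided in the figure may differ in detail:
\begin{cfa}
int delivery( Cons & cons, int v1, int v2 ) with( cons ) {
	[p1, p2] = [v1, v2];	$\C{// copy into consumer's communication variables}$
	resume( cons );		$\C{// restart consumer at its last suspend point}$
	return status;		$\C{// status updated by the consumer}$
}
int payment( Prod & prod, int m ) with( prod ) {
	money = m;			$\C{// copy into producer's communication variable}$
	resume( prod );		$\C{// restart producer, which continues in delivery}$
	return receipt;		$\C{// receipt updated by the producer}$
}
\end{cfa}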
    1020 
    1021 After iterating $N$ times, the producer calls @stop@.
    1022 The @done@ flag is set to stop the consumer's execution and a resume is executed.
    1023 The context switch restarts @cons@ in @payment@ and it returns with the last receipt.
    1024 The consumer terminates its loops because @done@ is true, its @main@ terminates, so @cons@ transitions from a coroutine back to an object, and @prod@ reactivates after the resume in @stop@.
    1025 @stop@ returns and @prod@'s coroutine main terminates.
    1026 The program main restarts after the resume in @start@.
    1027 @start@ returns and the program main terminates.
    1028 
    1029 One \emph{killer} application for a coroutine is device drivers, which at one time caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
1032 Many device drivers are finite state-machines parsing a protocol, e.g.:
    1031 \begin{tabbing}
    1032 \ldots STX \= \ldots message \ldots \= ESC \= ETX \= \ldots message \ldots  \= ETX \= 2-byte crc \= \ldots      \kill
    1033 \ldots STX \> \ldots message \ldots \> ESC \> ETX \> \ldots message \ldots  \> ETX \> 2-byte crc \> \ldots
    1034 \end{tabbing}
    1035 where a network message begins with the control character STX and ends with an ETX, followed by a 2-byte cyclic-redundancy check.
    1036 Control characters may appear in a message if preceded by an ESC.
1037 Because FSMs can be complex and occur frequently in important domains, direct support for coroutines is crucial in a systems programming language.
    1038 
    1039 \begin{figure}
    1040 \begin{cfa}
    1041 enum Status { CONT, MSG, ESTX, ELNTH, ECRC };
    1042 `coroutine` Driver {
    1043         Status status;
    1044         char * msg, byte;
    1045 };
    1046 void ?{}( Driver & d, char * m ) { d.msg = m; }         $\C[3.0in]{// constructor}$
    1047 Status next( Driver & d, char b ) with( d ) {           $\C{// 'with' opens scope}$
    1048         byte = b; `resume( d );` return status;
    1049 }
    1050 void main( Driver & d ) with( d ) {
    1051         enum { STX = '\002', ESC = '\033', ETX = '\003', MaxMsg = 64 };
    1052         unsigned short int crc;                                                 $\C{// error checking}$
    1053   msg: for () {                                                                         $\C{// parse message}$
    1054                 status = CONT;
    1055                 unsigned int lnth = 0, sum = 0;
    1056                 while ( byte != STX ) `suspend();`
    1057           emsg: for () {
    1058                         `suspend();`                                                    $\C{// process byte}$
    1059                         choose ( byte ) {                                               $\C{// switch with default break}$
    1060                           case STX:
    1061                                 status = ESTX; `suspend();` continue msg;
    1062                           case ETX:
    1063                                 break emsg;
    1064                           case ESC:
    1065                                 suspend();
    1066                         } // choose
    1067                         if ( lnth >= MaxMsg ) {                                 $\C{// buffer full ?}$
    1068                                 status = ELNTH; `suspend();` continue msg; }
    1069                         msg[lnth++] = byte;
    1070                         sum += byte;
    1071                 } // for
    1072                 msg[lnth] = '\0';                                                       $\C{// terminate string}\CRT$
    1073                 `suspend();`
    1074                 crc = (unsigned char)byte << 8; // prevent sign extension for signed char
    1075                 `suspend();`
    1076                 status = (crc | (unsigned char)byte) == sum ? MSG : ECRC;
    1077                 `suspend();`
    1078         } // for
    1079 }
    1080 \end{cfa}
    1081 \caption{Device driver for simple communication protocol}
    1082 \end{figure}
 1157Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main and the coroutine mains for @prod@ and @cons@ during cycling.
     1158
 1159Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
 1160Furthermore, each deallocated coroutine must guarantee all destructors are run for objects allocated in the coroutine type \emph{and} on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
 1161When a coroutine's main ends, its stack is already unwound, so any stack-allocated objects with destructors have been finalized.
 1162The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator.
     1163However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent.
     1164Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem).
     1165Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines.
     1166
     1167Our solution for coroutine termination works well for the most common asymmetric and symmetric coroutine usage-patterns.
     1168For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer.
     1169All previous generators converted to coroutines have this property.
     1170For symmetric coroutines, it is common for the cycle creator to persist for the life-time of the cycle.
     1171Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter.
     1172Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
     1173For other scenarios, it is always possible to devise a solution with additional programming effort.
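The bookkeeping behind the starter rule can be sketched in C; the descriptor and field names are illustrative, not the actual \CFA runtime:
\begin{cfa}
typedef struct coroutine_desc {
	struct coroutine_desc * starter;	$\C{// first resumer, remembered for termination}$
	struct coroutine_desc * last;	$\C{// most recent resumer, target of suspend}$
} coroutine_desc;
void resume_impl( coroutine_desc * from, coroutine_desc * to ) {
	if ( to->starter == NULL ) to->starter = from;	$\C{// first resume remembers starter}$
	to->last = from;	$\C{// suspend context switches back here}$
	// ... context switch from 'from' to 'to'
}
// when a coroutine main ends, the runtime context switches to its starter
\end{cfa}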
     1174
     1175The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
     1176Assume generator @PingPong@ is converted to a coroutine.
     1177Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version.
     1178The program main creates (declares) coroutine instances @ping@ and @pong@.
 1179Next, the program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter.
     1180Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times.
     1181By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first.
 1182If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter, the program main, in function @start@.
 1183If @ping@ ends first, it resumes its starter, the program main, in function @start@.
     1184Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
     1185Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
     1186For generators, deallocation runs the destructors for all objects in the generator type.
     1187For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine.
 1188Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it, so the cancellation exception can propagate to the root of the coroutine's stack, destroying all local variables on the stack.
 1189So the \CFA semantics for generators and coroutines ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
 1190Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.
     1191
     1192Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
     1193A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
     1194These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
     1195However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
     1196To prevent losing this information, a self-resume does not overwrite the last resumer.
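In the descriptor sketch above, this exception to the rule amounts to guarding the update of the last resumer (again illustrative):
\begin{cfa}
if ( from != to ) to->last = from;	$\C{// self-resume keeps last external resumer}$
\end{cfa}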
    10831197
    10841198
     
    10861200
    10871201A significant implementation challenge for coroutines (and threads, see Section~\ref{threads}) is adding extra fields and executing code after/before the coroutine constructor/destructor and coroutine main to create/initialize/de-initialize/destroy extra fields and the stack.
    1088 There are several solutions to this problem and the chosen option forced the \CFA coroutine design.
    1089 
    1090 Object-oriented inheritance provides extra fields and code in a restricted context, but it requires programmers to explicitly perform the inheritance:
     1202There are several solutions to this problem and the chosen option directed the \CFA coroutine design.
     1203
     1204For object-oriented languages, inheritance can be used to provide extra fields and code, but it requires explicitly writing the inheritance:
    10911205\begin{cfa}[morekeywords={class,inherits}]
    10921206class mycoroutine inherits baseCoroutine { ... }
    10931207\end{cfa}
    1094 and the programming language (and possibly its tool set, \eg debugger) may need to understand @baseCoroutine@ because of the stack.
 1208In addition, the programming language and possibly its tool set, \eg the debugger or @valgrind@, need to understand @baseCoroutine@ because of the special stack property of coroutines.
    10951209Furthermore, the execution of constructors/destructors is in the wrong order for certain operations.
10961210 For example, for threads, if the thread is implicitly started, it must start \emph{after} all constructors, because the thread relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived.
     
    11031217}
    11041218\end{cfa}
    1105 which also requires an explicit declaration that must be the last one to ensure correct initialization order.
    1106 However, there is nothing preventing wrong placement or multiple declarations.
    1107 
    1108 For coroutines as for threads, many implementations are based on routine pointers or routine objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
     1219which also requires an explicit declaration that must be last to ensure correct initialization order.
     1220However, there is nothing preventing wrong placement or multiple declarations of @baseCoroutine@.
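The fragility can be sketched as follows, assuming the composed-member approach with illustrative names:
\begin{cfa}
struct mycoroutine {
	int field;			$\C{// user declarations first}$
	baseCoroutine base;	$\C{// must be declared last, but nothing enforces this placement}$
};
\end{cfa}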
     1221
     1222For coroutines, as for threads, many implementations are based on function pointers or function objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
    11091223For example, Boost implements coroutines in terms of four functor object-types:
    11101224\begin{cfa}
     
    11141228symmetric_coroutine<>::yield_type
    11151229\end{cfa}
    1116 Similarly, the canonical threading paradigm is often based on routine pointers, \eg @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
    1117 However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type.
     1230
     1231Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar).
 1232The coroutine handle is the @coroutine@ instance containing programmer-specified global/communication variables.
     1233The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate.\footnote{
     1234We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.
     1235Once allocated, a VLS is fixed sized.}
     1236The coroutine stack can appear in a number of locations and forms, fixed or variable sized.
     1237Hence, the coroutine's stack could be a VLS on the allocating stack, provided the allocating stack is large enough.
 1238For stack allocation, allocation/deallocation only requires a few arithmetic operations to compute the size and adjust the stack pointer, modulo any constructor costs.
     1239For heap allocation, allocation/deallocation is an expensive heap operation (where the heap can be a shared resource), modulo any constructor costs.
     1240For heap stack allocation, it is also possible to use a split (segmented) stack calling-convention, available with gcc and clang, so the stack is variable sized.
     1241Currently, \CFA supports stack/heap allocated descriptors but only heap allocated stacks;
     1242split-stack allocation is under development.
 1244In \CFA debug-mode, a fixed-sized stack is terminated with a non-accessible guard page, which catches most stack overflows.
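One plausible layout, sketched with illustrative names, inlines the descriptor in the handle and heap-allocates a fixed-size stack:
\begin{cfa}
struct coroutine_desc {		$\C{// implicit runtime declarations}$
	void * stack_base;		$\C{// heap-allocated, fixed-size stack}$
	size_t stack_size;
	// ... context-switch state, starter / last resumer
};
struct Fib {				$\C{// coroutine handle}$
	int fn;					$\C{// programmer communication variable}$
	struct coroutine_desc desc;	$\C{// descriptor inlined in handle}$
};
\end{cfa}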
     1244
     1245\begin{figure}
     1246\centering
     1247\input{corlayout.pstex_t}
     1248\caption{Coroutine memory layout}
     1249\label{f:CoroutineMemoryLayout}
     1250\end{figure}
     1251
     1252Similarly, the canonical threading paradigm is often based on function pointers, \eg @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
     1253However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type, as in the pthreads approach.
    11181254\begin{cfa}
    11191255void mycor( coroutine_t cid, void * arg ) {
     
    11271263}
    11281264\end{cfa}
    1129 Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.
     1265Since the custom type is simple to write in \CFA and solves several issues, added support for function/lambda-based coroutines adds very little.
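A minimal sketch of such a wrapper, with hypothetical names, replaces the untyped @void *@ argument with typed communication variables:
\begin{cfa}
typedef struct {
	coroutine_t cid;	$\C{// abstract, non-copyable handle}$
	int fn;				$\C{// typed communication variable replaces void * arg}$
} FibWrap;
\end{cfa}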
    11301266
    11311267Note, the type @coroutine_t@ must be an abstract handle to the coroutine, because the coroutine descriptor and its stack are non-copyable.