Timestamp:
May 18, 2018, 2:09:21 PM (7 years ago)
Author:
Aaron Moss <a3moss@…>
Branches:
new-env, with_gc
Children:
2472a19
Parents:
f6f0cca3 (diff), c7d8100c (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge remote-tracking branch 'origin/master' into with_gc

Location:
doc/papers/concurrency
Files:
2 edited

  • doc/papers/concurrency/Makefile

    rf6f0cca3 rff29f08
     	mkdir -p ${Build}
     
    -${BASE}.out.ps:
    -	ln -fs build/Paper.out.ps .
    +${BASE}.out.ps: ${Build}
    +	ln -fs ${Build}/Paper.out.ps .
     
     WileyNJD-AMA.bst:
     	ln -fs ../AMA/AMA-stix/ama/WileyNJD-AMA.bst .
     
    -%.tex : %.fig
    +%.tex : %.fig ${Build}
     	fig2dev -L eepic $< > ${Build}/$@
     
    -%.ps : %.fig
    +%.ps : %.fig ${Build}
     	fig2dev -L ps $< > ${Build}/$@
     
    -%.pstex : %.fig
    +%.pstex : %.fig ${Build}
     	fig2dev -L pstex $< > ${Build}/$@
     	fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
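A hedged reading of this hunk (not part of the changeset itself): each `fig2dev` pattern rule gains `${Build}` as a prerequisite, so the rule that creates the output directory runs before the redirection writes into it. A minimal standalone sketch of the pattern:

```make
Build = build

${Build}:
	mkdir -p ${Build}

# After the change, the pattern rule lists ${Build} as a prerequisite,
# guaranteeing the directory exists before fig2dev writes into it.
# (A directory's timestamp changes whenever files are added to it, so
# GNU Make users often prefer an order-only prerequisite instead:
#   %.tex : %.fig | ${Build}
# which avoids spurious rebuilds.)
%.tex : %.fig ${Build}
	fig2dev -L eepic $< > ${Build}/$@
```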
  • doc/papers/concurrency/Paper.tex

    rf6f0cca3 rff29f08
     \captionsetup{justification=raggedright,singlelinecheck=false}
     \usepackage{siunitx}
    -\sisetup{ binary-units=true }
    +\sisetup{binary-units=true}
     
     \hypersetup{breaklinks=true}
     
     \renewcommand{\linenumberfont}{\scriptsize\sffamily}
     
    -\renewcommand{\textfraction}{0.0}	% the entire page maybe devoted to floats with no text on the page at all
    +\renewcommand{\textfraction}{0.0}			% the entire page may be devoted to floats with no text on the page at all
     
     \lefthyphenmin=3						% hyphen only after 4 characters
     
     %\DeclareTextCommandDefault{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.1ex}}}
     \renewcommand{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.075ex}}}
    +%\def\myCHarFont{\fontencoding{T1}\selectfont}%
    +% \def\{{\ttfamily\upshape\myCHarFont \char`\}}}%
    +
    +\renewcommand*{\thefootnote}{\Alph{footnote}} % hack because fnsymbol does not work
    +%\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
     
     \makeatletter
     
     \setlength{\gcolumnposn}{3.5in}
     \setlength{\columnposn}{\gcolumnposn}
    +
     \newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
     \newcommand{\CRT}{\global\columnposn=\gcolumnposn}
     
     literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     	{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
    -	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textgreater}}2,
    +	{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
    +	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
     moredelim=**[is][\color{red}]{`}{`},
     }% lstset
     
     \lstMakeShortInline@%
     
    +\let\OLDthebibliography\thebibliography
    +\renewcommand\thebibliography[1]{
    +  \OLDthebibliography{#1}
    +  \setlength{\parskip}{0pt}
    +  \setlength{\itemsep}{4pt plus 0.3ex}
    +}
     
     \title{\texorpdfstring{Concurrency in \protect\CFA}{Concurrency in Cforall}}
     
     \CFA is a modern, polymorphic, \emph{non-object-oriented} extension of the C programming language.
     This paper discusses the design of the concurrency and parallelism features in \CFA, and the concurrent runtime-system.
    -These features are created from scratch as ISO C lacks concurrency, relying largely on pthreads library.
    +These features are created from scratch as ISO C lacks concurrency, relying largely on the pthreads library.
     Coroutines and lightweight (user) threads are introduced into the language.
     In addition, monitors are added as a high-level mechanism for mutual exclusion and synchronization.
     
     \maketitle
     
    -% ======================================================================
    -% ======================================================================
    +
     \section{Introduction}
    -% ======================================================================
    -% ======================================================================
     
     This paper provides a minimal concurrency \newterm{Abstract Program Interface} (API) that is simple, efficient and can be used to build other concurrency features.
     
     An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
     Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{Hochstein05}.
    -Examples of high-level approaches are task based~\cite{TBB}, message passing~\cite{Erlang,MPI}, and implicit threading~\cite{OpenMP}.
    -
    -This paper uses the following terminology.
    +Examples of high-level approaches are task (work) based~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.
    +
    +The following terminology is used.
     A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
     Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure safe communication and access to shared data.
     % Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced.
    -\newterm{Locking}, and by extension locks, are defined as a mechanism to prevent progress of threads to provide safety.
    +\newterm{Locking}, and by extension \newterm{locks}, are defined as a mechanism to prevent progress of threads to provide safety.
     \newterm{Parallelism} is running multiple threads simultaneously.
     Parallelism implies \emph{actual} simultaneous execution, where concurrency only requires \emph{apparent} simultaneous execution.
    -As such, parallelism only affects performance, which is observed through differences in space and/or time.
    -
    -Hence, there are two problems to be solved in the design of concurrency for a programming language: concurrency and parallelism.
    +As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime.
    +
    +Hence, there are two problems to be solved: concurrency and parallelism.
     While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
     Concurrency tools handle synchronization and mutual exclusion, while parallelism tools handle performance, cost and resource utilization.
     
     The proposed concurrency API is implemented in a dialect of C, called \CFA.
    -The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic, and type checking, and the corresponding high-perforamnce runtime-library to implement the concurrency features.
    -
    -% ======================================================================
    -% ======================================================================
    +The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic, and type checking, and the corresponding high-performance runtime-library to implement the concurrency features.
    +
    +
     \section{\CFA Overview}
    -% ======================================================================
    -% ======================================================================
     
     The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    -Most of the following code examples can be found on the \CFA website~\cite{Cforall}.
    -
    -\CFA is an extension of ISO-C, and therefore, supports all of the same paradigms as C.
    +Extended versions and explanation of the following code examples are available at the \CFA website~\cite{Cforall} or in Moss~\etal~\cite{Moss18}.
    +
    +\CFA is an extension of ISO-C, and hence, supports all C paradigms.
     %It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily.
    -Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code.
    -The vast majority of the code produced by the \CFA translator respects memory layouts and calling conventions laid out by C.
    -Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and inheritance, it does have some notion of objects\footnote{C defines the term objects as : ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}}, most importantly construction and destruction of objects.
    +Like C, the basics of \CFA revolve around structures and functions.
    +Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
    +While \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
    +While some \CFA features are common in object-oriented programming-languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm.
     
     
     \subsection{References}
     
    -Like \CC, \CFA introduces rebind-able references providing multiple dereferencing as an alternative to pointers.
    -In regards to concurrency, the semantic difference between pointers and references are not particularly relevant, but since this document uses mostly references, here is a quick overview of the semantics:
    -\begin{cfa}
    -int x, y, z;
    -int * p1 = &x, ** p2 = &p1, *** p3 = &p2,	$\C{// pointers to x}$
    -	& r1 = x,   && r2 = r1, &&& r3 = r2;	$\C{// references to x}$
    -
    -*p1 = 3; **p2 = 3; ***p3 = 3;				$\C{// change x}$
    -  r1 = 3;    r2 = 3;      r3 = 3;			$\C{// change x}$
    -**p3 = &y; *p3 = &z;						$\C{// change p1, p2}$
    -&&r3 = &y; &r3 = &z;						$\C{// change p1, p2}$
    -int & ar[3] = {x, y, z};					$\C{// initialize array of references}$
    -
    -typeof( ar[1]) p;							$\C{// is int, referenced object type}$
    -typeof(&ar[1]) q;							$\C{// is int \&, reference type}$
    -sizeof( ar[1]) == sizeof(int);				$\C{// is true, referenced object size}$
    -sizeof(&ar[1]) == sizeof(int *);			$\C{// is true, reference size}$
    -\end{cfa}
    -The important take away from this code example is that a reference offers a handle to an object, much like a pointer, but which is automatically dereferenced for convenience.
    -
    -% ======================================================================
    +\CFA provides multi-level rebindable references, as an alternative to pointers, which significantly reduces syntactic noise.
    +\begin{cfa}
    +int x = 1, y = 2, z = 3;
    +int * p1 = &x, ** p2 = &p1,  *** p3 = &p2,	$\C{// pointers to x}$
    +	`&` r1 = x,  `&&` r2 = r1,  `&&&` r3 = r2;	$\C{// references to x}$
    +int * p4 = &z, `&` r4 = z;
    +
    +*p1 = 3; **p2 = 3; ***p3 = 3;       // change x
    +r1 =  3;     r2 = 3;      r3 = 3;        // change x: implicit dereferences *r1, **r2, ***r3
    +**p3 = &y; *p3 = &p4;                // change p1, p2
    +`&`r3 = &y; `&&`r3 = &`&`r4;             // change r1, r2: cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
    +\end{cfa}
    +A reference is a handle to an object, like a pointer, but is automatically dereferenced the specified number of levels.
    +Referencing (address-of @&@) a reference variable cancels one of the implicit dereferences, until there are no more implicit references, after which normal expression behaviour applies.
    +
    +
    +\subsection{\texorpdfstring{\protect\lstinline{with} Statement}{with Statement}}
    +\label{s:WithStatement}
    +
    +Heterogeneous data is aggregated into a structure/union.
    +To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers.
    +\begin{cquote}
    +\vspace*{-\baselineskip}%???
    +\lstDeleteShortInline@%
    +\begin{cfa}
    +struct S { char c; int i; double d; };
    +struct T { double m, n; };
    +// multiple aggregate parameters
    +\end{cfa}
    +\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    +\begin{cfa}
    +void f( S & s, T & t ) {
    +	`s.`c; `s.`i; `s.`d;
    +	`t.`m; `t.`n;
    +}
    +\end{cfa}
    +&
    +\begin{cfa}
    +void f( S & s, T & t ) `with ( s, t )` {
    +	c; i; d;                // no qualification
    +	m; n;
    +}
    +\end{cfa}
    +\end{tabular}
    +\lstMakeShortInline@%
    +\end{cquote}
    +Object-oriented programming languages only provide implicit qualification for the receiver.
    +
    +In detail, the @with@ statement has the form:
    +\begin{cfa}
    +$\emph{with-statement}$:
    +	'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$
    +\end{cfa}
    +and may appear as the body of a function or nested within a function body.
    +Each expression in the expression-list provides a type and object.
    +The type must be an aggregate type.
    +(Enumerations are already opened.)
    +The object is the implicit qualifier for the open structure-fields.
    +All expressions in the expression list are open in parallel within the compound statement, which is different from Pascal, which nests the openings from left to right.
    +
    +
     \subsection{Overloading}
     
    -Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments.
    -As well, \CFA uses the return type as part of the selection criteria, as in Ada~\cite{Ada}.
    -For routines with multiple parameters and returns, the selection is complex.
    +\CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem.
    +Both variables and functions may be overloaded, where selection is based on types, and number of returns (as in Ada~\cite{Ada}) and arguments.
    +\begin{cquote}
    +\vspace*{-\baselineskip}%???
    +\lstDeleteShortInline@%
    +\begin{cfa}
    +// selection based on type
    +\end{cfa}
    +\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    +\begin{cfa}
    +const short int `MIN` = -32768;
    +const int `MIN` = -2147483648;
    +const long int `MIN` = -9223372036854775808L;
    +\end{cfa}
    +&
    +\begin{cfa}
    +short int si = `MIN`;
    +int i = `MIN`;
    +long int li = `MIN`;
    +\end{cfa}
    +\end{tabular}
     \begin{cfa}
     // selection based on type and number of parameters
    -void f(void);			$\C{// (1)}$
    -void f(char);			$\C{// (2)}$
    -void f(int, double);	$\C{// (3)}$
    -f();					$\C{// select (1)}$
    -f('a');					$\C{// select (2)}$
    -f(3, 5.2);				$\C{// select (3)}$
    -
    -// selection based on  type and number of returns
    -char   f(int);			$\C{// (1)}$
    -double f(int);			$\C{// (2)}$
    -char   c = f(3);		$\C{// select (1)}$
    -double d = f(4);		$\C{// select (2)}$
    -\end{cfa}
    -This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects.
    -Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes.
    -As seen in section \ref{basics}, routine @main@ is an example that benefits from overloading.
    -
    -% ======================================================================
    +\end{cfa}
    +\begin{tabular}{@{}l@{\hspace{2.7\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    +\begin{cfa}
    +void `f`( void );
    +void `f`( char );
    +void `f`( int, double );
    +\end{cfa}
    +&
    +\begin{cfa}
    +`f`();
    +`f`( 'a' );
    +`f`( 3, 5.2 );
    +\end{cfa}
    +\end{tabular}
    +\begin{cfa}
    +// selection based on type and number of returns
    +\end{cfa}
    +\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    +\begin{cfa}
    +char `f`( int );
    +double `f`( int );
    +[char, double] `f`( int );
    +\end{cfa}
    +&
    +\begin{cfa}
    +char c = `f`( 3 );
    +double d = `f`( 3 );
    +[d, c] = `f`( 3 );
    +\end{cfa}
    +\end{tabular}
    +\lstMakeShortInline@%
    +\end{cquote}
    +Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
    +Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions to prevent name clashes.
    +As seen in Section~\ref{basics}, function @main@ is heavily overloaded.
    +
    +Variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
    +\begin{cfa}
    +struct S { int `i`; int j; double m; } s;
    +struct T { int `i`; int k; int m; } t;
    +with ( s, t ) {
    +	j + k;							$\C{// unambiguous, s.j + t.k}$
    +	m = 5.0;						$\C{// unambiguous, s.m = 5.0}$
    +	m = 1;							$\C{// unambiguous, t.m = 1}$
    +	int a = m;						$\C{// unambiguous, a = t.m }$
    +	double b = m;					$\C{// unambiguous, b = s.m}$
    +	int c = `s.i` + `t.i`;			$\C{// unambiguous, qualification}$
    +	(double)m;						$\C{// unambiguous, cast s.m}$
    +}
    +\end{cfa}
    +For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
    +
    +
     \subsection{Operators}
    +
     Overloading also extends to operators.
    -The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation appear, \eg:
    -\begin{cfa}
    -int ++? (int op);			$\C{// unary prefix increment}$
    -int ?++ (int op);			$\C{// unary postfix increment}$
    -int ?+? (int op1, int op2);		$\C{// binary plus}$
    -int ?<=?(int op1, int op2);		$\C{// binary less than}$
    -int ?=? (int & op1, int op2);		$\C{// binary assignment}$
    -int ?+=?(int & op1, int op2);		$\C{// binary plus-assignment}$
    -
    -struct S {int i, j;};
    -S ?+?(S op1, S op2) {				$\C{// add two structures}$
    +Operator-overloading syntax names a routine with the operator symbol and question marks for the operands:
    +\begin{cquote}
    +\lstDeleteShortInline@%
    +\begin{tabular}{@{}ll@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
    +\begin{cfa}
    +int ++? (int op);
    +int ?++ (int op);
    +int `?+?` (int op1, int op2);
    +int ?<=?(int op1, int op2);
    +int ?=? (int & op1, int op2);
    +int ?+=?(int & op1, int op2);
    +\end{cfa}
    +&
    +\begin{cfa}
    +// unary prefix increment
    +// unary postfix increment
    +// binary plus
    +// binary less than
    +// binary assignment
    +// binary plus-assignment
    +\end{cfa}
    +&
    +\begin{cfa}
    +struct S { int i, j; };
    +S `?+?`( S op1, S op2) { // add two structures
     	return (S){op1.i + op2.i, op1.j + op2.j};
     }
     S s1 = {1, 2}, s2 = {2, 3}, s3;
    -s3 = s1 + s2;					$\C{// compute sum: s3 == {2, 5}}$
    -\end{cfa}
    -While concurrency does not use operator overloading directly, this feature is more important as an introduction for the syntax of constructors.
    -
    -% ======================================================================
    -\subsection{Constructors/Destructors}
    -Object lifetime is often a challenge in concurrency. \CFA uses the approach of giving concurrent meaning to object lifetime as a means of synchronization and/or mutual exclusion.
    -Since \CFA relies heavily on the lifetime of objects, constructors and destructors is a core feature required for concurrency and parallelism. \CFA uses the following syntax for constructors and destructors:
    -\begin{cfa}
    -struct S {
    -	size_t size;
    -	int * ia;
    -};
    -void ?{}(S & s, int asize) {	$\C{// constructor operator}$
    -	s.size = asize;				$\C{// initialize fields}$
    -	s.ia = calloc(size, sizeof(S));
    -}
    -void ^?{}(S & s) {				$\C{// destructor operator}$
    -	free(ia);					$\C{// de-initialization fields}$
    -}
    -int main() {
    -	S x = {10}, y = {100};		$\C{// implicit calls: ?\{\}(x, 10), ?\{\}(y, 100)}$
    -	...							$\C{// use x and y}$
    -	^x{};  ^y{};				$\C{// explicit calls to de-initialize}$
    -	x{20};  y{200};				$\C{// explicit calls to reinitialize}$
    -	...							$\C{// reuse x and y}$
    -}								$\C{// implicit calls: \^?\{\}(y), \^?\{\}(x)}$
    -\end{cfa}
    -The language guarantees that every object and all their fields are constructed.
    -Like \CC, construction of an object is automatically done on allocation and destruction of the object is done on deallocation.
    -Allocation and deallocation can occur on the stack or on the heap.
    -\begin{cfa}
    -{
    -	struct S s = {10};	$\C{// allocation, call constructor}$
    -	...
    -}						$\C{// deallocation, call destructor}$
    -struct S * s = new();	$\C{// allocation, call constructor}$
    -...
    -delete(s);				$\C{// deallocation, call destructor}$
    -\end{cfa}
    -Note that like \CC, \CFA introduces @new@ and @delete@, which behave like @malloc@ and @free@ in addition to constructing and destructing objects, after calling @malloc@ and before calling @free@, respectively.
    -
    -% ======================================================================
    +s3 = s1 `+` s2;		// compute sum: s3 == {3, 5}
    +\end{cfa}
    +\end{tabular}
    +\lstMakeShortInline@%
    +\end{cquote}
    +While concurrency does not use operator overloading directly, it provides an introduction for the syntax of constructors.
    +
    +
     \subsection{Parametric Polymorphism}
     \label{s:ParametricPolymorphism}
    -Routines in \CFA can also be reused for multiple types.
    -This capability is done using the @forall@ clauses, which allow separately compiled routines to support generic usage over multiple types.
    +
    +The signature feature of \CFA is parametric-polymorphic functions~\cite{} with functions generalized using a @forall@ clause (giving the language its name), which allow separately compiled routines to support generic usage over multiple types.
     For example, the following sum function works for any type that supports construction from 0 and addition:
     \begin{cfa}
    -// constraint type, 0 and +
    -forall(otype T | { void ?{}(T *, zero_t); T ?+?(T, T); })
    -T sum(T a[ ], size_t size) {
    -	T total = 0;				$\C{// construct T from 0}$
    -	for(size_t i = 0; i < size; i++)
    -		total = total + a[i];	$\C{// select appropriate +}$
    +forall( otype T | { void `?{}`( T *, zero_t ); T `?+?`( T, T ); } ) // constraint type, 0 and +
    +T sum( T a[$\,$], size_t size ) {
    +	`T` total = { `0` };					$\C{// initialize by 0 constructor}$
    +	for ( size_t i = 0; i < size; i += 1 )
    +		total = total `+` a[i];				$\C{// select appropriate +}$
     	return total;
     }
    -
     S sa[5];
    -int i = sum(sa, 5);				$\C{// use S's 0 construction and +}$
    -\end{cfa}
    -
    -Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits.
    -Traits are named collection of constraints that can be used both instead and in addition to regular constraints:
    -\begin{cfa}
    -trait summable( otype T ) {
    -	void ?{}(T *, zero_t);		$\C{// constructor from 0 literal}$
    -	T ?+?(T, T);				$\C{// assortment of additions}$
    -	T ?+=?(T *, T);
    -	T ++?(T *);
    -	T ?++(T *);
    +int i = sum( sa, 5 );						$\C{// use S's 0 construction and +}$
    +\end{cfa}
    +
    +\CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each function declaration:
    +\begin{cfa}
    +trait `sumable`( otype T ) {
    +	void `?{}`( T &, zero_t );				$\C{// 0 literal constructor}$
    +	T `?+?`( T, T );						$\C{// assortment of additions}$
    +	T ?+=?( T &, T );
    +	T ++?( T & );
    +	T ?++( T & );
     };
    -forall( otype T | summable(T) )	$\C{// use trait}$
    -T sum(T a[], size_t size);
    -\end{cfa}
    -
    -Note that the type use for assertions can be either an @otype@ or a @dtype@.
    -Types declared as @otype@ refer to ``complete'' objects, \ie objects with a size, a default constructor, a copy constructor, a destructor and an assignment operator.
    -Using @dtype@, on the other hand, has none of these assumptions but is extremely restrictive, it only guarantees the object is addressable.
    -
    -% ======================================================================
    -\subsection{with Clause/Statement}
    -Since \CFA lacks the concept of a receiver, certain functions end up needing to repeat variable names often.
    -To remove this inconvenience, \CFA provides the @with@ statement, which opens an aggregate scope making its fields directly accessible (like Pascal).
    -\begin{cfa}
    -struct S { int i, j; };
    -int mem(S & this) with (this)		$\C{// with clause}$
    -	i = 1;						$\C{// this->i}$
    -	j = 2;						$\C{// this->j}$
    -}
    -int foo() {
    -	struct S1 { ... } s1;
    -	struct S2 { ... } s2;
    -	with (s1)						$\C{// with statement}$
    -	{
    -		// access fields of s1 without qualification
    -		with (s2)					$\C{// nesting}$
    -		{
    -			// access fields of s1 and s2 without qualification
    -		}
    -	}
    -	with (s1, s2)					$\C{// scopes open in parallel}$
    -	{
    -		// access fields of s1 and s2 without qualification
    -	}
    -}
    -\end{cfa}
    -
    -For more information on \CFA see \cite{cforall-ug,Schluntz17,www-cfa}.
    -
    -% ======================================================================
    -% ======================================================================
    +forall( otype T `| sumable( T )` )			$\C{// use trait}$
    +T sum( T a[$\,$], size_t size );
    +\end{cfa}
    +
    +Assertions can be @otype@ or @dtype@.
    +@otype@ refers to a ``complete'' object, \ie an object has a size, default constructor, copy constructor, destructor and an assignment operator.
    +@dtype@ only guarantees an object has a size and alignment.
    +
    +Using the return type for discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@:
    +\begin{cfa}
    +forall( dtype T | sized(T) ) T * alloc( void ) { return (T *)malloc( sizeof(T) ); }
    +int * ip = alloc();							$\C{// select type and size from left-hand side}$
    +double * dp = alloc();
    +struct S {...} * sp = alloc();
    +\end{cfa}
    +where the return type supplies the type/size of the allocation, which is impossible in most type systems.
     520
     521
     522\subsection{Constructors / Destructors}
     523
     524Object lifetime is a challenge in non-managed programming languages.
     525\CFA responds with \CC-like constructors and destructors:
     526\begin{cfa}
     527struct VLA { int len, * data; };                        $\C{// variable length array of integers}$
     528void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  $\C{// default constructor}$
     529void ?{}( VLA & vla, int size, char fill ) with ( vla ) { len = size;  data = alloc( len, fill ); } // initialization
     530void ?{}( VLA & vla, VLA other ) { vla.len = other.len;  vla.data = other.data; } $\C{// copy, shallow}$
     531void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$
     532{
     533        VLA  x,            y = { 20, 0x01 },     z = y; $\C{// z points to y}$
     534        //    x{};         y{ 20, 0x01 };          z{ y };
     535        ^x{};                                                                   $\C{// deallocate x}$
     536        x{};                                                                    $\C{// reallocate x}$
     537        z{ 5, 0xff };                                                   $\C{// reallocate z, not pointing to y}$
     538        ^y{};                                                                   $\C{// deallocate y}$
     539        y{ x };                                                                 $\C{// reallocate y, points to x}$
     540        x{};                                                                    $\C{// reallocate x, not pointing to y}$
     541        //  ^z{};  ^y{};  ^x{};
     542}
     543\end{cfa}
     544As in \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation.
     545An object and all its fields are constructed/destructed.
     546\CFA also provides @new@ and @delete@, which behave like @malloc@ and @free@, in addition to constructing and destructing objects:
     547\begin{cfa}
     548{       struct S s = {10};                                              $\C{// allocation, call constructor}$
     549        ...
     550}                                                                                       $\C{// deallocation, call destructor}$
     551struct S * s = new();                                           $\C{// allocation, call constructor}$
     552...
     553delete( s );                                                            $\C{// deallocation, call destructor}$
     554\end{cfa}
     555\CFA concurrency uses object lifetime as a means of synchronization and/or mutual exclusion.
     556
     557
    467558\section{Concurrency Basics}\label{basics}
    468 % ======================================================================
    469 % ======================================================================
    470 
    471 At its core, concurrency is based on having multiple call-stacks and scheduling among threads of execution executing on these stacks.
    472 Multiple call stacks (or contexts) and a single thread of execution does \emph{not} imply concurrency.
    473 Execution with a single thread and multiple stacks where the thread is deterministically self-scheduling across the stacks is called \newterm{coroutining};
    474 execution with a single thread and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks is called concurrency~\cite[\S~3]{Buhr05a}.
    475 Therefore, a minimal concurrency system can be achieved using coroutines (see Section \ref{coroutine}), which instead of context-switching among each other, always defer to an oracle for where to context-switch next.
    476 
    477 While coroutines can execute on the caller's stack-frame, stack-full coroutines allow full generality and are sufficient as the basis for concurrency.
    478 The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a., non-preemptive scheduling).
    479 The oracle/scheduler can either be a stack-less or stack-full entity and correspondingly require one or two context-switches to run a different coroutine.
    480 In any case, a subset of concurrency related challenges start to appear.
    481 For the complete set of concurrency challenges to occur, the only feature missing is preemption.
    482 
    483 A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context switches occur.
    484 Mutual exclusion and synchronization are ways of limiting non-determinism in a concurrent system.
    485 Now it is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel.
     559
     560At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks.
     561Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
     562In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie given fixed inputs, the execution path to the outputs is fixed and predictable.
     563A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python} but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
     564a \newterm{stackfull} coroutine executes on its own stack, allowing full generality.
     565Only stackfull coroutines are a stepping-stone to concurrency.
     566
     567The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a scheduling oracle, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
     568Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next.
     569The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.
     570
     571Because the scheduler is special, it can either be a stackless or stackfull coroutine.
     572For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
     573For stackfull, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
      574A stackfull scheduler is often used for simplicity and security, even though there is a slightly higher runtime cost.
     575
     576Regardless of the approach used, a subset of concurrency related challenges start to appear.
     577For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
     578While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty where context switches occur.
     579Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors.
      580The reason is that only the runtime has complete knowledge about the resources and how best to utilize them.
     581However, the introduction of unrestricted non-determinism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} to restrict non-determinism for correctness;
     582otherwise, it is impossible to write meaningful programs.
    486583Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
    487584
     
    489586\subsection{\protect\CFA's Thread Building Blocks}
    490587
    491 One of the important features that are missing in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
     588An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
    492589As such, library support for threading is far from widespread.
    493590At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
    494 On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write efficient concurrent programs to take advantage of parallelism.
     591On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
    495592As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
    496 And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
     593Furthermore, because C is a system-level language, programmers expect to choose precisely which features they need and which cost they are willing to pay.
     594Hence, concurrent programs should be written using high-level mechanisms, and only step down to lower-level mechanisms when performance bottlenecks are encountered.
    497595
    498596
    499597\subsection{Coroutines: A Stepping Stone}\label{coroutine}
    500598
    501 While the focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
    502 \newterm{Coroutine}s are generalized routines with points where execution is suspended and resumed at a later time.
    503 Suspend/resume is a context switche and coroutines have other context-management operations.
    504 Many design challenges of threads are partially present in designing coroutines, which makes the design effort relevant.
    505 The core \textbf{api} of coroutines has two features: independent call-stacks and @suspend@/@resume@.
    506 
    507 A coroutine handles the class of problems that need to retain state between calls (\eg plugin, device driver, finite-state machine).
    508 For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers:
     599While the focus of this discussion is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
      600Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
      601Hence, unlike a normal routine, a coroutine may not terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension.
      602This capability is accomplished via the coroutine's stack, where the suspend/resume operations context switch among stacks.
      603Because many threading design-challenges are also present in coroutines, the coroutine design-effort is relevant; furthermore, this effort is easily exposed to programmers as a useful new programming paradigm, because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
      604Therefore, the core \CFA coroutine-API has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.
     605
     606For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers, where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
    509607\begin{displaymath}
    510 f(n) = \left \{
     608\mathsf{fib}(n) = \left \{
    511609\begin{array}{ll}
    512 0                               & n = 0         \\
    513 1                               & n = 1         \\
    514 f(n-1) + f(n-2) & n \ge 2       \\
     6100                                       & n = 0         \\
     6111                                       & n = 1         \\
     612\mathsf{fib}(n-1) + \mathsf{fib}(n-2)   & n \ge 2       \\
    515613\end{array}
    516614\right.
    517615\end{displaymath}
    518 Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
    519 
    520616Figure~\ref{f:GlobalVariables} illustrates the following problems:
    521 unencapsulated global variables necessary to retain state between calls;
    522 only one fibonacci generator can run at a time;
    523 execution state must be explicitly retained.
     617unique unencapsulated global variables necessary to retain state between calls;
     618only one Fibonacci generator;
     619execution state must be explicitly retained via explicit state variables.
    524620Figure~\ref{f:ExternalState} addresses these issues:
    525621unencapsulated program global variables become encapsulated structure variables;
    526 multiple fibonacci generators can run at a time by declaring multiple fibonacci objects;
    527 explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $f(n-2)$.
     622unique global variables are replaced by multiple Fibonacci objects;
     623explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$.
    528624
    529625\begin{figure}
     
    585681\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
    586682`coroutine` Fib { int fn; };
    587 void main( Fib & f ) with( f ) {
     683void main( Fib & fib ) with( fib ) {
    588684        int f1, f2;
    589685        fn = 0;  f1 = fn;  `suspend()`;
     
    609705\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
    610706`coroutine` Fib { int ret; };
    611 void main( Fib & f ) with( f ) {
      707void main( Fib & fib ) with( fib ) {
    612708        int fn, f1 = 1, f2 = 0;
    613709        for ( ;; ) {
     
    636732\end{figure}
    637733
    638 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, which provides communication for multiple interface functions, and the \newterm{coroutine main}, which runs on the coroutine stack.
    639 \begin{cfa}
    640 `coroutine C { char c; int i; _Bool s; };`      $\C{// used for communication}$
    641 void ?{}( C & c ) { s = false; }                        $\C{// constructor}$
    642 void main( C & cor ) with( cor ) {                      $\C{// actual coroutine}$
    643         while ( ! s ) // process c
    644         if ( v == ... ) s = false;
    645 }
    646 // interface functions
    647 char cont( C & cor, char ch ) { c = ch; resume( cor ); return c; }
    648 _Bool stop( C & cor, int v ) { s = true; i = v; resume( cor ); return s; }
    649 \end{cfa}
    650 
    651 encapsulates the Fibonacci state in the  shows is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation.
    652 This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used.
    653 Indeed, this version is as easy to use as the @fibonacci_state@ solution, while the implementation is very similar to the @fibonacci_func@ example.
    654 
    655 Figure~\ref{f:fmt-line} shows the @Format@ coroutine for restructuring text into groups of character blocks of fixed size.
    656 The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
     734Using a coroutine, it is possible to express the Fibonacci formula directly without any of the C problems.
     735Figure~\ref{f:Coroutine3States} creates a @coroutine@ type:
     736\begin{cfa}
     737`coroutine` Fib { int fn; };
     738\end{cfa}
     739which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, @next@.
      740Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, which are passed to the (overloaded) coroutine main.
      741The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has three suspend points, representing the three states in the Fibonacci formula, which context switch back to the caller's resume.
     742The interface function, @next@, takes a Fibonacci instance and context switches to it using @resume@;
     743on return, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
     744The first @resume@ is special because it cocalls the coroutine at its coroutine main and allocates the stack;
     745when the coroutine main returns, its stack is deallocated.
     746Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
     747Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
     748Coroutine generators are called \newterm{output coroutines} because values are returned by the coroutine.
     749
     750Figure~\ref{f:CFAFmt} shows an \newterm{input coroutine}, @Format@, for restructuring text into groups of character blocks of fixed size.
      751For example, the input on the left is reformatted into the output on the right.
     752\begin{quote}
     753\tt
     754\begin{tabular}{@{}l|l@{}}
     755\multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
     756abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
     757&
     758\begin{tabular}[t]{@{}lllll@{}}
     759abcd    & efgh  & ijkl  & mnop  & qrst  \\
     760uvwx    & yzab  & cdef  & ghij  & klmn  \\
     761opqr    & stuv  & wxyz  &               &
     762\end{tabular}
     763\end{tabular}
     764\end{quote}
      765The example takes advantage of resuming the coroutine in its constructor to prime the coroutine's loops, so the first character sent for formatting appears inside the nested loops.
      766The destructor provides a newline, if the formatted text ends with a partial line.
      767Figure~\ref{f:CFmt} shows the C equivalent formatter, where the loops of the coroutine are flattened (linearized) and their state is rechecked on each call because the execution location is not retained between calls.
    657768
    658769\begin{figure}
    659 \begin{cfa}[xleftmargin=4\parindentlnth]
     770\centering
     771\newbox\myboxA
     772\begin{lrbox}{\myboxA}
     773\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
    660774`coroutine` Format {
    661         char ch;                                                                $\C{// used for communication}$
    662         int g, b;                                                               $\C{// global because used in destructor}$
     775        char ch;   // used for communication
     776        int g, b;  // global because used in destructor
    663777};
    664 void ?{}( Format & fmt ) { `resume( fmt );` } $\C{// prime (start) coroutine}$
    665 void ^?{}( Format & fmt ) with( fmt ) { if ( g != 0 || b != 0 ) sout | endl; }
    666778void main( Format & fmt ) with( fmt ) {
    667         for ( ;; ) {                                                    $\C{// for as many characters}$
    668                 for ( g = 0; g < 5; g += 1 ) {          $\C{// groups of 5 blocks}$
    669                         for ( b = 0; b < 4; b += 1 ) {  $\C{// blocks of 4 characters}$
     779        for ( ;; ) {   
     780                for ( g = 0; g < 5; g += 1 ) {  // group
     781                        for ( b = 0; b < 4; b += 1 ) { // block
    670782                                `suspend();`
    671                                 sout | ch;                                      $\C{// print character}$
     783                                sout | ch;              // separator
    672784                        }
    673                         sout | "  ";                                    $\C{// print block separator}$
     785                        sout | "  ";               // separator
    674786                }
    675                 sout | endl;                                            $\C{// print group separator}$
     787                sout | endl;
    676788        }
    677789}
    678 void prt( Format & fmt, char ch ) {
    679         fmt.ch = ch;
     790void ?{}( Format & fmt ) { `resume( fmt );` }
     791void ^?{}( Format & fmt ) with( fmt ) {
     792        if ( g != 0 || b != 0 ) sout | endl;
     793}
     794void format( Format & fmt ) {
    680795        `resume( fmt );`
    681796}
    682797int main() {
    683798        Format fmt;
     799        eof: for ( ;; ) {
     800                sin | fmt.ch;
     801          if ( eof( sin ) ) break eof;
     802                format( fmt );
     803        }
     804}
     805\end{lstlisting}
     806\end{lrbox}
     807
     808\newbox\myboxB
     809\begin{lrbox}{\myboxB}
     810\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
     811struct Format {
    684812        char ch;
    685         for ( ;; ) {                                                    $\C{// read until end of file}$
    686                 sin | ch;                                                       $\C{// read one character}$
    687           if ( eof( sin ) ) break;                              $\C{// eof ?}$
    688                 prt( fmt, ch );                                         $\C{// push character for formatting}$
     813        int g, b;
     814};
     815void format( struct Format * fmt ) {
     816        if ( fmt->ch != -1 ) { // not EOF
     817                printf( "%c", fmt->ch );
     818                fmt->b += 1;
     819                if ( fmt->b == 4 ) {  // block
     820                        printf( "  " );      // separator
     821                        fmt->b = 0;
     822                        fmt->g += 1;
     823                }
     824                if ( fmt->g == 5 ) {  // group
     825                        printf( "\n" );      // separator
     826                        fmt->g = 0;
     827                }
     828        } else {
     829                if ( fmt->g != 0 || fmt->b != 0 ) printf( "\n" );
    689830        }
    690831}
    691 \end{cfa}
     832int main() {
     833        struct Format fmt = { 0, 0, 0 };
     834        for ( ;; ) {
     835                scanf( "%c", &fmt.ch );
     836          if ( feof( stdin ) ) break;
     837                format( &fmt );
     838        }
     839        fmt.ch = -1;
     840        format( &fmt );
     841}
     842\end{lstlisting}
     843\end{lrbox}
     844\subfloat[\CFA Coroutine]{\label{f:CFAFmt}\usebox\myboxA}
     845\qquad
     846\subfloat[C Linearized]{\label{f:CFmt}\usebox\myboxB}
    692847\caption{Formatting text into lines of 5 blocks of 4 characters.}
    693848\label{f:fmt-line}
    694849\end{figure}
    695850
     851The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming function for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal functions.
      852However, there is no stack growth because @resume@/@suspend@ context switch to existing stack frames rather than creating new ones.
     853\newterm{Symmetric (full) coroutine}s have a coroutine call a resuming function for another coroutine, which eventually forms a cycle.
     854(The trivial cycle is a coroutine resuming itself.)
     855This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
     856
    696857\begin{figure}
    697858\centering
    698 \lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}
     859\lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $
    699860\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
    700861\begin{cfa}
     
    728889        Prod prod;
    729890        Cons cons = { prod };
    730         srandom( getpid() );
    731891        start( prod, 5, cons );
    732892}
     
    765925        `resume( cons );`
    766926}
    767 
    768927\end{cfa}
    769928\end{tabular}
     
    771930\label{f:ProdCons}
    772931\end{figure}
     932
     933Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
     934Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
     935The @start@ function communicates both the number of elements to be produced and the consumer into the producer's coroutine structure.
     936Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
      937@prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
     938
     939The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
     940For the first resume, @cons@'s stack is initialized, creating local variables retained between subsequent activations of the coroutine.
      941The consumer iterates until the @done@ flag is set, printing the values, incrementing the status, and calling back to the producer's @payment@ member; on return, it prints the receipt from the producer and increments the money for the next payment.
     942The call from the consumer to the producer's @payment@ member introduces the cycle between producer and consumer.
     943When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
     944The context switch restarts the producer at the point where it was last context switched and it continues in member @delivery@ after the resume.
     945
     946The @delivery@ member returns the status value in @prod@'s @main@ member, where the status is printed.
     947The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
     948The context switch to the consumer continues in @payment@.
     949The consumer increments and returns the receipt to the call in @cons@'s @main@ member.
     950The loop then repeats calling @payment@, where each call resumes the producer coroutine.
     951
     952After iterating $N$ times, the producer calls @stop@.
     953The @done@ flag is set to stop the consumer's execution and a resume is executed.
     954The context switch restarts @cons@ in @payment@ and it returns with the last receipt.
     955The consumer terminates its loops because @done@ is true, its @main@ terminates, so @cons@ transitions from a coroutine back to an object, and @prod@ reactivates after the resume in @stop@.
     956The @stop@ member returns and @prod@'s @main@ member terminates.
     957The program main restarts after the resume in @start@.
     958The @start@ member returns and the program main terminates.
    773959
    774960
     
    34153601\bibliography{pl,local}
    34163602
     3603
    34173604\end{document}
    34183605