Index: doc/papers/concurrency/Paper.tex
===================================================================
--- doc/papers/concurrency/Paper.tex	(revision 3d60c08ac51fee59e5e460399d8f60bfa15e20c6)
+++ doc/papers/concurrency/Paper.tex	(revision 48b9b36d78295b9976f9e29cde21b9f3922eab9c)
@@ -73,4 +73,7 @@
 % \def\{{\ttfamily\upshape\myCHarFont \char`\}}}%
 
+\renewcommand*{\thefootnote}{\Alph{footnote}} % hack because fnsymbol does not work
+%\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
+
 \makeatletter
 % parindent is relative, i.e., toggled on/off in environments like itemize, so store the value for
@@ -87,4 +90,5 @@
 \setlength{\gcolumnposn}{3.5in}
 \setlength{\columnposn}{\gcolumnposn}
+
 \newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
 \newcommand{\CRT}{\global\columnposn=\gcolumnposn}
@@ -172,5 +176,6 @@
 literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
 	{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
-	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textgreater}}2,
+	{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
+	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
 moredelim=**[is][\color{red}]{`}{`},
 }% lstset
@@ -214,4 +219,10 @@
 \lstMakeShortInline@%
 
+\let\OLDthebibliography\thebibliography
+\renewcommand\thebibliography[1]{
+  \OLDthebibliography{#1}
+  \setlength{\parskip}{0pt}
+  \setlength{\itemsep}{4pt plus 0.3ex}
+}
 
 \title{\texorpdfstring{Concurrency in \protect\CFA}{Concurrency in Cforall}}
@@ -230,5 +241,5 @@
 \CFA is a modern, polymorphic, \emph{non-object-oriented} extension of the C programming language.
 This paper discusses the design of the concurrency and parallelism features in \CFA, and the concurrent runtime-system.
-These features are created from scratch as ISO C lacks concurrency, relying largely on pthreads library.
+These features are created from scratch as ISO C lacks concurrency, relying largely on the pthreads library.
 Coroutines and lightweight (user) threads are introduced into the language.
 In addition, monitors are added as a high-level mechanism for mutual exclusion and synchronization.
@@ -255,19 +266,19 @@
 Examples of high-level approaches are task (work) based~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.
 
-This paper uses the following terminology.
+The following terminology is used.
 A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
 Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure safe communication and access to shared data.
 % Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced.
-\newterm{Locking}, and by extension locks, are defined as a mechanism to prevent progress of threads to provide safety.
+\newterm{Locking}, and by extension \newterm{locks}, are defined as a mechanism to prevent progress of threads to provide safety.
 \newterm{Parallelism} is running multiple threads simultaneously.
 Parallelism implies \emph{actual} simultaneous execution, where concurrency only requires \emph{apparent} simultaneous execution.
-As such, parallelism only affects performance, which is observed through differences in space and/or time.
-
-Hence, there are two problems to be solved in the design of concurrency for a programming language: concurrency and parallelism.
+As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime.
+
+Hence, there are two problems to be solved: concurrency and parallelism.
 While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
 Concurrency tools handle synchronization and mutual exclusion, while parallelism tools handle performance, cost and resource utilization.
 
 The proposed concurrency API is implemented in a dialect of C, called \CFA.
-The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic, and type checking, and the corresponding high-perforamnce runtime-library to implement the concurrency features.
+The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic, and type checking, and the corresponding high-performance runtime-library to implement the concurrency features.
 
 
@@ -275,5 +286,5 @@
 
 The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
-Extended versions of the following code examples are available at the \CFA website~\cite{Cforall} or Moss~\etal~\cite{XXX}.
+Extended versions and explanations of the following code examples are available at the \CFA website~\cite{Cforall} or in Moss~\etal~\cite{Moss18}.
 
 \CFA is an extension of ISO-C, and hence, supports all C paradigms.
@@ -282,4 +293,5 @@
 Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
 While \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
+While some \CFA features are common in object-oriented programming-languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm.
 
 
@@ -296,5 +308,5 @@
 r1 =  3;     r2 = 3;      r3 = 3;        // change x: implicit dereferences *r1, **r2, ***r3
 **p3 = &y; *p3 = &p4;                // change p1, p2
-`&`r3 = &y; `&&`r3 = &`&`r4;             // change r1, r2: cancel implicit deferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
+`&`r3 = &y; `&&`r3 = &`&`r4;             // change r1, r2: cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
 \end{cfa}
 A reference is a handle to an object, like a pointer, but is automatically dereferenced the specified number of levels.
@@ -305,5 +317,5 @@
 \label{s:WithStatement}
 
-Heterogeneous data is often aggregated into a structure/union.
+Heterogeneous data is aggregated into a structure/union.
 To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers.
 \begin{cquote}
@@ -315,5 +327,5 @@
 // multiple aggregate parameters
 \end{cfa}
-\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
+\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
 \begin{cfa}
 void f( S & s, T & t ) {
@@ -357,15 +369,15 @@
 // selection based on type
 \end{cfa}
-\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
-\begin{cfa}
-const short int MIN = -32768;
-const int MIN = -2147483648;
-const long int MIN = -9223372036854775808L;
+\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
+\begin{cfa}
+const short int `MIN` = -32768;
+const int `MIN` = -2147483648;
+const long int `MIN` = -9223372036854775808L;
 \end{cfa}
 &
 \begin{cfa}
-short int si = MIN;
-int i = MIN;
-long int li = MIN;
+short int si = `MIN`;
+int i = `MIN`;
+long int li = `MIN`;
 \end{cfa}
 \end{tabular}
@@ -373,15 +385,15 @@
 // selection based on type and number of parameters
 \end{cfa}
-\begin{tabular}{@{}l@{\hspace{1.7\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
-\begin{cfa}
-void f( void );
-void f( char );
-void f( int, double );
+\begin{tabular}{@{}l@{\hspace{2.7\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
+\begin{cfa}
+void `f`( void );
+void `f`( char );
+void `f`( int, double );
 \end{cfa}
 &
 \begin{cfa}
-f();
-f( 'a' );
-f( 3, 5.2 );
+`f`();
+`f`( 'a' );
+`f`( 3, 5.2 );
 \end{cfa}
 \end{tabular}
@@ -389,15 +401,15 @@
 // selection based on type and number of returns
 \end{cfa}
-\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
-\begin{cfa}
-char f( int );
-double f( int );
-[char, double] f( int );
+\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
+\begin{cfa}
+char `f`( int );
+double `f`( int );
+[char, double] `f`( int );
 \end{cfa}
 &
 \begin{cfa}
-char c = f( 3 );
-double d = f( 3 );
-[d, c] = f( 3 );
+char c = `f`( 3 );
+double d = `f`( 3 );
+[d, c] = `f`( 3 );
 \end{cfa}
 \end{tabular}
@@ -405,5 +417,5 @@
 \end{cquote}
 Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
-Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes.
+Therefore, overloading is necessary to avoid long prefixes and other naming conventions that prevent name clashes.
 As seen in Section~\ref{basics}, function @main@ is heavily overloaded.
 
@@ -414,13 +426,13 @@
 with ( s, t ) {
 	j + k;									$\C{// unambiguous, s.j + t.k}$
-	m = 5.0;								$\C{// unambiguous, t.m = 5.0}$
-	m = 1;									$\C{// unambiguous, s.m = 1}$
-	int a = m;								$\C{// unambiguous, a = s.i }$
-	double b = m;							$\C{// unambiguous, b = t.m}$
+	m = 5.0;								$\C{// unambiguous, s.m = 5.0}$
+	m = 1;									$\C{// unambiguous, t.m = 1}$
+	int a = m;								$\C{// unambiguous, a = t.m }$
+	double b = m;							$\C{// unambiguous, b = s.m}$
 	int c = `s.i` + `t.i`;					$\C{// unambiguous, qualification}$
-	(double)m;								$\C{// unambiguous, cast}$
-}
-\end{cfa}
-For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
+	(double)m;								$\C{// unambiguous, cast s.m}$
+}
+\end{cfa}
+For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
 
 
@@ -514,11 +526,11 @@
 \begin{cfa}
 struct VLA { int len, * data; };			$\C{// variable length array of integers}$
-void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  // default constructor
+void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  $\C{// default constructor}$
 void ?{}( VLA & vla, int size, char fill ) with ( vla ) { len = size;  data = alloc( len, fill ); } // initialization
-void ?{}( VLA & vla, VLA other ) { vla.len = other.len;  vla.data = other.data; } // copy, shallow
+void ?{}( VLA & vla, VLA other ) { vla.len = other.len;  vla.data = other.data; } $\C{// copy, shallow}$
 void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$
 {
 	VLA  x,            y = { 20, 0x01 },     z = y;	$\C{// z points to y}$
-	//    ?{}( x );   ?{}( y, 20, 0x01 );   ?{}( z, y ); 
+	//    x{};         y{ 20, 0x01 };          z{ y }; 
 	^x{};									$\C{// deallocate x}$
 	x{};									$\C{// reallocate x}$
@@ -527,5 +539,5 @@
 	y{ x };									$\C{// reallocate y, points to x}$
 	x{};									$\C{// reallocate x, not pointing to y}$
-	// ^?{}(z);  ^?{}(y);  ^?{}(x);
+	//  ^z{};  ^y{};  ^x{};
 }
 \end{cfa}
@@ -546,19 +558,27 @@
 \section{Concurrency Basics}\label{basics}
 
-At its core, concurrency is based on multiple call-stacks and scheduling among threads executing on these stacks.
-Multiple call stacks (or contexts) and a single thread of execution does \emph{not} imply concurrency.
-Execution with a single thread and multiple stacks where the thread is deterministically self-scheduling across the stacks is called \newterm{coroutining};
-execution with a single thread and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks is called concurrency~\cite[\S~3]{Buhr05a}.
-Therefore, a minimal concurrency system can be achieved using coroutines (see Section \ref{coroutine}), which instead of context-switching among each other, always defer to an oracle for where to context-switch next.
-
-While coroutines can execute on the caller's stack-frame, stack-full coroutines allow full generality and are sufficient as the basis for concurrency.
-The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a., non-preemptive scheduling).
-The oracle/scheduler can either be a stack-less or stack-full entity and correspondingly require one or two context-switches to run a different coroutine.
-In any case, a subset of concurrency related challenges start to appear.
-For the complete set of concurrency challenges to occur, the only feature missing is preemption.
-
-A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context switches occur.
-Mutual exclusion and synchronization are ways of limiting non-determinism in a concurrent system.
-Now it is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel.
+At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks.
+Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
+In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie given fixed inputs, the execution path to the outputs is fixed and predictable.
+A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python}, but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
+a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
+Only stackful coroutines are a stepping-stone to concurrency.
+
+The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a scheduling oracle, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
+Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next.
+The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.
+
+Because the scheduler is special, it can be either a stackless or stackful coroutine.
+For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
+For stackful, the current coroutine switches to the scheduler, which performs scheduling, and then switches to the next coroutine, so there are two context switches.
+A stackful scheduler is often used for simplicity and security, even though there is a slightly higher runtime cost.
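+The difference can be sketched as follows, where @schedule@ and @context_switch@ are hypothetical operations used only for illustration:
+\begin{cfa}
+// scheduler without its own stack: one context switch
+next = schedule();					$\C{// runs on the current coroutine's stack}$
+context_switch( next );				$\C{// switch directly to the next coroutine}$
+// scheduler with its own stack: two context switches
+context_switch( scheduler );		$\C{// scheduler picks the next coroutine}$
+context_switch( next );				$\C{// switch to the chosen coroutine}$
+\end{cfa}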
+
+Regardless of the approach used, a subset of concurrency related challenges start to appear.
+For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt; this approach is called \newterm{preemptive scheduling}.
+While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty about where context switches occur.
+Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors.
+The reason is that only the runtime has complete knowledge about resources and how best to utilize them.
+However, the introduction of unrestricted non-determinism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} to restrict non-determinism for correctness;
+otherwise, it is impossible to write meaningful programs.
 Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
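+As a sketch of the problem, consider two threads incrementing a shared counter, with a hypothetical @lock@/@unlock@ API:
+\begin{cfa}
+counter += 1;								$\C{// unsafe: concurrent load/add/store interleave, losing updates}$
+lock( l );  counter += 1;  unlock( l );		$\C{// safe: mutual exclusion serializes the increments}$
+\end{cfa}
+The lock removes just enough non-determinism for correctness, while still allowing the threads to interleave elsewhere.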
 
@@ -566,41 +586,40 @@
 \subsection{\protect\CFA's Thread Building Blocks}
 
-One of the important features that are missing in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
+An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
 As such, library support for threading is far from widespread.
 At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
-On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write efficient concurrent programs to take advantage of parallelism.
+On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
 As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
-And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
+Furthermore, because C is a system-level language, programmers expect to choose precisely which features they need and which cost they are willing to pay.
+Hence, concurrent programs should be written using high-level mechanisms, and only step down to lower-level mechanisms when performance bottlenecks are encountered.
 
 
 \subsection{Coroutines: A Stepping Stone}\label{coroutine}
 
-While the focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
-\newterm{Coroutine}s are generalized routines with points where execution is suspended and resumed at a later time.
-Suspend/resume is a context switche and coroutines have other context-management operations.
-Many design challenges of threads are partially present in designing coroutines, which makes the design effort relevant.
-The core \textbf{api} of coroutines has two features: independent call-stacks and @suspend@/@resume@.
-
-A coroutine handles the class of problems that need to retain state between calls (\eg plugin, device driver, finite-state machine).
-For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers:
+While the focus of this discussion is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
+Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
+Hence, unlike a normal routine, a coroutine does not necessarily terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension.
+This capability is accomplished via the coroutine's stack, where suspend/resume context switch among stacks.
+Because threading design-challenges are present in coroutines, the coroutine design effort is relevant.
+Furthermore, this effort can easily be exposed to programmers as a useful new programming paradigm, because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
+Therefore, the core \CFA coroutine-API has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.
+
+For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers, where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
 \begin{displaymath}
-f(n) = \left \{
+\mathsf{fib}(n) = \left \{
 \begin{array}{ll}
-0				& n = 0		\\
-1				& n = 1		\\
-f(n-1) + f(n-2)	& n \ge 2	\\
+0					& n = 0		\\
+1					& n = 1		\\
+\mathsf{fib}(n-1) + \mathsf{fib}(n-2)	& n \ge 2	\\
 \end{array}
 \right.
 \end{displaymath}
-Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
-
 Figure~\ref{f:GlobalVariables} illustrates the following problems:
-unencapsulated global variables necessary to retain state between calls;
-only one fibonacci generator can run at a time;
-execution state must be explicitly retained.
+unique unencapsulated global variables necessary to retain state between calls;
+only one Fibonacci generator;
+execution state must be retained via explicit state variables.
 Figure~\ref{f:ExternalState} addresses these issues:
 unencapsulated program global variables become encapsulated structure variables;
-multiple fibonacci generators can run at a time by declaring multiple fibonacci objects;
-explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $f(n-2)$.
+unique global variables are replaced by multiple Fibonacci objects;
+explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$.
 
 \begin{figure}
@@ -662,6 +681,5 @@
 \begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
 `coroutine` Fib { int fn; };
-void ?{}( Fibonacci & fib ) with( fib ) { fn = 0; }
-void main( Fib & f ) with( f ) {
+void main( Fib & fib ) with( fib ) {
 	int f1, f2;
 	fn = 0;  f1 = fn;  `suspend()`;
@@ -687,5 +705,5 @@
 \begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
 `coroutine` Fib { int ret; };
-void main( Fib & f ) with( f ) {
+void main( Fib & fib ) with( fib ) {
 	int fn, f1 = 1, f2 = 0;
 	for ( ;; ) {
@@ -714,65 +732,130 @@
 \end{figure}
 
-Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, which provides communication for multiple interface functions, and the \newterm{coroutine main}, which runs on the coroutine stack.
-\begin{cfa}
-`coroutine C { char c; int i; _Bool s; };`	$\C{// used for communication}$
-void ?{}( C & c ) { s = false; }			$\C{// constructor}$
-void main( C & cor ) with( cor ) {			$\C{// actual coroutine}$
-	while ( ! s ) // process c
-	if ( v == ... ) s = false;
-}
-// interface functions
-char cont( C & cor, char ch ) { c = ch; resume( cor ); return c; }
-_Bool stop( C & cor, int v ) { s = true; i = v; resume( cor ); return s; }
-\end{cfa}
-
-encapsulates the Fibonacci state in the  shows is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation.
-This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used.
-Indeed, this version is as easy to use as the @fibonacci_state@ solution, while the implementation is very similar to the @fibonacci_func@ example.
-
-Figure~\ref{f:fmt-line} shows the @Format@ coroutine for restructuring text into groups of character blocks of fixed size.
-The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
+Using a coroutine, it is possible to express the Fibonacci formula directly without any of the C problems.
+Figure~\ref{f:Coroutine3States} creates a @coroutine@ type:
+\begin{cfa}
+`coroutine` Fib { int fn; };
+\end{cfa}
+which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, @next@.
+Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
+The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has three suspend points, representing the three states in the Fibonacci formula, to context switch back to the caller's resume.
+The interface function, @next@, takes a Fibonacci instance and context switches to it using @resume@;
+on return, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
+The first @resume@ is special because it cocalls the coroutine at its coroutine main and allocates the stack;
+when the coroutine main returns, its stack is deallocated.
+Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
+Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
+Coroutine generators are called \newterm{output coroutines} because values are returned by the coroutine.
+
+Figure~\ref{f:CFAFmt} shows an \newterm{input coroutine}, @Format@, for restructuring text into groups of character blocks of fixed size.
+For example, the input on the left is reformatted into the output on the right.
+\begin{quote}
+\tt
+\begin{tabular}{@{}l|l@{}}
+\multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
+&
+\begin{tabular}[t]{@{}lllll@{}}
+abcd	& efgh	& ijkl	& mnop	& qrst	\\
+uvwx	& yzab	& cdef	& ghij	& klmn	\\
+opqr	& stuv	& wxyz	&		&
+\end{tabular}
+\end{tabular}
+\end{quote}
+The example takes advantage of resuming coroutines in the constructor to prime the coroutine loops so the first character sent for formatting appears inside the nested loops.
+The destructor provides a newline if the formatted text ends with a partial line.
+Figure~\ref{f:CFmt} shows the C equivalent formatter, where the loops of the coroutine are flattened (linearized) and rechecked on each call because execution location is not retained between calls.
 
 \begin{figure}
-\begin{cfa}[xleftmargin=4\parindentlnth]
+\centering
+\newbox\myboxA
+\begin{lrbox}{\myboxA}
+\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
 `coroutine` Format {
-	char ch;								$\C{// used for communication}$
-	int g, b;								$\C{// global because used in destructor}$
+	char ch;   // used for communication
+	int g, b;  // global because used in destructor
 };
-void ?{}( Format & fmt ) { `resume( fmt );` } $\C{// prime (start) coroutine}$
-void ^?{}( Format & fmt ) with( fmt ) { if ( g != 0 || b != 0 ) sout | endl; }
 void main( Format & fmt ) with( fmt ) {
-	for ( ;; ) {							$\C{// for as many characters}$
-		for ( g = 0; g < 5; g += 1 ) {		$\C{// groups of 5 blocks}$
-			for ( b = 0; b < 4; b += 1 ) {	$\C{// blocks of 4 characters}$
+	for ( ;; ) {	
+		for ( g = 0; g < 5; g += 1 ) {  // group
+			for ( b = 0; b < 4; b += 1 ) { // block
 				`suspend();`
-				sout | ch;					$\C{// print character}$
+				sout | ch;              // separator
 			}
-			sout | "  ";					$\C{// print block separator}$
+			sout | "  ";               // separator
 		}
-		sout | endl;						$\C{// print group separator}$
+		sout | endl;
 	}
 }
-void prt( Format & fmt, char ch ) {
-	fmt.ch = ch;
+void ?{}( Format & fmt ) { `resume( fmt );` }
+void ^?{}( Format & fmt ) with( fmt ) {
+	if ( g != 0 || b != 0 ) sout | endl;
+}
+void format( Format & fmt ) {
 	`resume( fmt );`
 }
 int main() {
 	Format fmt;
+	eof: for ( ;; ) {
+		sin | fmt.ch;
+	  if ( eof( sin ) ) break eof;
+		format( fmt );
+	}
+}
+\end{lstlisting}
+\end{lrbox}
+
+\newbox\myboxB
+\begin{lrbox}{\myboxB}
+\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
+struct Format {
 	char ch;
-	for ( ;; ) {							$\C{// read until end of file}$
-		sin | ch;							$\C{// read one character}$
-	  if ( eof( sin ) ) break;				$\C{// eof ?}$
-		prt( fmt, ch );						$\C{// push character for formatting}$
+	int g, b;
+};
+void format( struct Format * fmt ) {
+	if ( fmt->ch != -1 ) { // not EOF
+		printf( "%c", fmt->ch );
+		fmt->b += 1;
+		if ( fmt->b == 4 ) {  // block
+			printf( "  " );      // separator
+			fmt->b = 0;
+			fmt->g += 1;
+		}
+		if ( fmt->g == 5 ) {  // group
+			printf( "\n" );      // separator
+			fmt->g = 0;
+		}
+	} else {
+		if ( fmt->g != 0 || fmt->b != 0 ) printf( "\n" );
 	}
 }
-\end{cfa}
+int main() {
+	struct Format fmt = { 0, 0, 0 };
+	for ( ;; ) {
+		scanf( "%c", &fmt.ch );
+	  if ( feof( stdin ) ) break;
+		format( &fmt );
+	}
+	fmt.ch = -1;
+	format( &fmt );
+}
+\end{lstlisting}
+\end{lrbox}
+\subfloat[\CFA Coroutine]{\label{f:CFAFmt}\usebox\myboxA}
+\qquad
+\subfloat[C Linearized]{\label{f:CFmt}\usebox\myboxB}
 \caption{Formatting text into lines of 5 blocks of 4 characters.}
 \label{f:fmt-line}
 \end{figure}
 
+The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming function for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal functions.
+However, there is no stack growth because @resume@/@suspend@ context switch to existing stack frames rather than create new ones.
+\newterm{Symmetric (full) coroutine}s have a coroutine call a resuming function for another coroutine, which eventually forms a cycle.
+(The trivial cycle is a coroutine resuming itself.)
+This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
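+A minimal sketch of such a cycle is two coroutines resuming each other, where the setup closing the cycle is elided:
+\begin{cfa}
+`coroutine` Pong;							$\C{// forward declaration}$
+`coroutine` Ping { Pong * partner; };
+`coroutine` Pong { Ping * partner; };
+void main( Ping & p ) { for ( ;; ) { /* produce */ resume( *p.partner ); } }
+void main( Pong & p ) { for ( ;; ) { /* consume */ resume( *p.partner ); } }
+\end{cfa}
+Each @resume@ restarts its partner where it last suspended, so control flows around the cycle without stack growth.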
+
 \begin{figure}
 \centering
-\lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}
+\lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $
 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
 \begin{cfa}
@@ -806,5 +889,4 @@
 	Prod prod;
 	Cons cons = { prod };
-	srandom( getpid() );
 	start( prod, 5, cons );
 }
@@ -843,5 +925,4 @@
 	`resume( cons );`
 }
-
 \end{cfa}
 \end{tabular}
@@ -849,4 +930,31 @@
 \label{f:ProdCons}
 \end{figure}
+
+Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
+Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
+The @start@ function communicates both the number of elements to be produced and the consumer into the producer's coroutine structure.
+Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
+@prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
+
+The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
+For the first resume, @cons@'s stack is initialized, creating local variables retained between subsequent activations of the coroutine.
+The consumer iterates until the @done@ flag is set: it prints, increments the status, and calls back to the producer's @payment@ member; on return, it prints the receipt from the producer and increments the money for the next payment.
+The call from the consumer to the producer's @payment@ member introduces the cycle between producer and consumer.
+When @payment@ is called, the consumer copies the payment into the producer's communication variable and executes a resume.
+The context switch restarts the producer at the point where it last context switched, so it continues in member @delivery@ after the resume.
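+
+The two members forming the cycle can be sketched as follows (illustrative signatures; the full definitions appear in Figure~\ref{f:ProdCons}):
+\begin{cfa}
+int delivery( Cons & cons, int p1, int p2 ) {
+	cons.p1 = p1;  cons.p2 = p2;			$\C{// transfer into consumer's communication variables}$
+	`resume( cons );`						$\C{// activate consumer, which calls payment}$
+	return cons.status;						$\C{// status set by consumer}$
+}
+int payment( Prod & prod, int money ) {
+	prod.money = money;						$\C{// transfer into producer's communication variable}$
+	`resume( prod );`						$\C{// reactivate producer in delivery}$
+	return prod.receipt;					$\C{// receipt set by producer}$
+}
+\end{cfa}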
+
+The @delivery@ member returns the status value to @prod@'s @main@ member, where the status is printed.
+The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
+The context switch to the consumer continues in @payment@.
+The consumer increments and returns the receipt to the call in @cons@'s @main@ member.
+The loop then repeats calling @payment@, where each call resumes the producer coroutine.
+
+After iterating $N$ times, the producer calls @stop@.
+The @stop@ member sets the @done@ flag to terminate the consumer's execution and executes a resume.
+The context switch restarts @cons@ in @payment@ and it returns with the last receipt.
+The consumer terminates its loop because @done@ is true; its @main@ terminates, so @cons@ transitions from a coroutine back to an object, and @prod@ reactivates after the resume in @stop@.
+The @stop@ member returns and @prod@'s @main@ member terminates.
+The program main restarts after the resume in @start@.
+The @start@ member returns and the program main terminates.
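+
+The shutdown handshake can be sketched as follows (illustrative; the full definition appears in Figure~\ref{f:ProdCons}):
+\begin{cfa}
+void stop( Cons & cons ) {
+	cons.done = true;						$\C{// tell consumer to exit its loop}$
+	`resume( cons );`						$\C{// consumer's main terminates, control returns here}$
+}
+\end{cfa}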
 
 
@@ -3493,4 +3601,5 @@
 \bibliography{pl,local}
 
+
 \end{document}
 
Index: doc/papers/general/Paper.tex
===================================================================
--- doc/papers/general/Paper.tex	(revision 3d60c08ac51fee59e5e460399d8f60bfa15e20c6)
+++ doc/papers/general/Paper.tex	(revision 48b9b36d78295b9976f9e29cde21b9f3922eab9c)
@@ -53,4 +53,6 @@
 %\newcommand{\TODO}[1]{\textbf{TODO}: {\itshape #1}} % TODO included
 \newcommand{\TODO}[1]{} % TODO elided
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
 % Default underscore is too low and wide. Cannot use lstlisting "literate" as replacing underscore
@@ -163,5 +165,6 @@
 literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
 	{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
-	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textgreater}}2,
+	{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
+	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
 moredelim=**[is][\color{red}]{`}{`},
 }% lstset
@@ -203,5 +206,6 @@
 
 The goal of the \CFA project (pronounced ``C-for-all'') is to create an extension of C that provides modern safety and productivity features while still ensuring strong backwards compatibility with C and its programmers.
-Prior projects have attempted similar goals but failed to honour C programming-style; for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers.
+Prior projects have attempted similar goals but failed to honour C programming-style;
+for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers.
 Specifically, \CFA is designed to have an orthogonal feature-set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code-bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and programmers.
 This paper presents a quick tour of \CFA features showing how their design avoids shortcomings of similar features in C and other C-like languages.
@@ -219,4 +223,5 @@
 
 \section{Introduction}
+
 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects.
 This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more.
@@ -2000,5 +2005,5 @@
 Destruction parameters are useful for specifying storage-management actions, such as de-initialize but not deallocate.}.
 \begin{cfa}
-struct VLA { int len, * data; };
+struct VLA { int len, * data; };			$\C{// variable length array of integers}$
 void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  $\C{// default constructor}$
 void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$
