Changeset 80dbf6a for doc/papers


Timestamp:
Feb 6, 2020, 10:30:05 AM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
9fb8f01
Parents:
53a49cc
Message:

updated concurrency paper

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    r53a49cc r80dbf6a  
    6161\newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name
    6262\newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name
    63 \newcommand{\Csharp}{C\raisebox{-0.7ex}{\Large$^\sharp$}\xspace} % C# symbolic name
     63\newcommand{\Csharp}{C\raisebox{-0.7ex}{\large$^\sharp$}\xspace} % C# symbolic name
    6464
    6565%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
    127127\newcommand*{\etc}{%
    128128        \@ifnextchar{.}{\ETC}%
    129         {\ETC.\xspace}%
     129                {\ETC.\xspace}%
    130130}}{}%
    131131\@ifundefined{etal}{
    132132\newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}}
    133133\newcommand*{\etal}{%
    134         \@ifnextchar{.}{\protect\ETAL}%
    135                 {\protect\ETAL.\xspace}%
     134        \@ifnextchar{.}{\ETAL}%
     135                {\ETAL.\xspace}%
    136136}}{}%
    137137\@ifundefined{viz}{
     
    163163                __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
    164164                inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
    165                 otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,
     165                otype, restrict, resume, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread,
    166166                _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
    167167                virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
    168168        moredirectives={defined,include_next},
    169169        % replace/adjust listing characters that look bad in sanserif
    170         literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     170        literate={-}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
    171171                {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
    172172                {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
     
    197197                _Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
    198198                _Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
    199 }
    200 \lstdefinelanguage{Golang}{
    201         morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
    202         morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
    203                 bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
    204         morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
    205         morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
    206         morekeywords=[5]{Println,Printf,Error,},
    207         sensitive=true,
    208         morecomment=[l]{//},
    209         morecomment=[s]{/*}{*/},
    210         morestring=[b]',
    211         morestring=[b]",
    212         morestring=[s]{`}{`},
    213199}
    214200
     
    241227{}
    242228\lstnewenvironment{uC++}[1][]
    243 {\lstset{#1}}
     229{\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
    244230{}
    245231\lstnewenvironment{Go}[1][]
     
    280266\CFA is a polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
    281267This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA.
    282 These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library features like pthreads.
     268These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library approaches like pthreads.
    283269\CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
    284270% Library extension for executors, futures, and actors are built on these basic mechanisms.
     
    293279
    294280\begin{document}
    295 \linenumbers                                            % comment out to turn off line numbering
     281\linenumbers                            % comment out to turn off line numbering
    296282
    297283\maketitle
     
    300286\section{Introduction}
    301287
    302 This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18,Cforall} and its runtime, which is written entirely in \CFA.
    303 \CFA is a modern, polymorphic, non-object-oriented\footnote{
    304 \CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance.
     288\CFA~\cite{Moss18,Cforall} is a modern, polymorphic, non-object-oriented\footnote{
     289\CFA has object-oriented features, such as constructors, destructors, virtuals and simple trait/interface inheritance.
     290% Go interfaces, Rust traits, Swift Protocols, Haskell Type Classes and Java Interfaces.
     291% "Trait inheritance" works for me. "Interface inheritance" might also be a good choice, and distinguish clearly from implementation inheritance.
     292% You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code. Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type".
    305293However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and set of functions (member/method) implemented by an implicit \lstinline@this@ (receiver) parameter.},
    306294backwards-compatible extension of the C programming language.
    307 In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language allowing immediate dissemination.
    308 Within the \CFA framework, new control-flow features are created from scratch because ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping features are concurrency~\cite[\S~7.26]{C11}.
    309 However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}, and \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and mutex/condition locks, which is low-level and error-prone;
    310 no high-level language concurrency features are defined.
    311 Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating little interest in the C11 concurrency approach (possibly because the effort to add concurrency to \CC).
    312 Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}.
    313 
     295In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language\footnote{
      296The TIOBE index~\cite{TIOBE} for December 2019 ranks the top five \emph{popular} programming languages as Java 17\%, C 16\%, Python 10\%, \CC 6\%, and \Csharp 5\%, summing to 54\%, and over the past 30 years, C has always ranked either first or second in popularity.}
     297allowing immediate dissemination.
     298This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA and its runtime, which is written entirely in \CFA.
     299The \CFA control-flow framework extends ISO \Celeven~\cite{C11} with new call/return and concurrent/parallel control-flow.
     300
     301% The call/return extensions retain state between callee and caller versus losing the callee's state on return;
     302% the concurrency extensions allow high-level management of threads.
     303
     304Call/return control-flow with argument/parameter passing appeared in the first programming languages.
     305Over the past 50 years, call/return has been augmented with features like static/dynamic call, exceptions (multi-level return) and generators/coroutines (retain state between calls).
     306While \CFA has mechanisms for dynamic call (algebraic effects) and exceptions\footnote{
     307\CFA exception handling will be presented in a separate paper.
      308The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.}, this work only discusses retaining state between calls via generators/coroutines.
     309\newterm{Coroutining} was introduced by Conway~\cite{Conway63} (1963), discussed by Knuth~\cite[\S~1.4.2]{Knuth73V1}, implemented in Simula67~\cite{Simula67}, formalized by Marlin~\cite{Marlin80}, and is now popular and appears in old and new programming languages: CLU~\cite{CLU}, \Csharp~\cite{Csharp}, Ruby~\cite{Ruby}, Python~\cite{Python}, JavaScript~\cite{JavaScript}, Lua~\cite{Lua}, \CCtwenty~\cite{C++20Coroutine19}.
      310Coroutining is sequential execution requiring direct handoff among coroutines, \ie only the programmer controls execution order.
      311If coroutines transfer to an internal event-engine for scheduling the next coroutine, the program transitions into the realm of concurrency~\cite[\S~3]{Buhr05a}.
      312Coroutines are only a stepping stone towards concurrency, where the commonality is that coroutines and threads retain state between calls.
     313
      314\Celeven/\CCeleven define concurrency~\cite[\S~7.26]{C11}, but it is largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork/join in a function and mutex/condition locks, which is low-level and error-prone.}
     315Interestingly, almost a decade after the \Celeven standard, neither gcc-9, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating no interest in the C11 concurrency approach (possibly because of the recent effort to add concurrency to \CC).
     316While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC.
    314317In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages.
    315318As multi-core hardware became available in the 1980/90s, both user and kernel threading were examined.
    316319Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}.
    317320Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads.
    318 As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
    319 From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}.
    320 The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work units to facilitate load balancing by the runtime~\cite{Verch12}.
      321As a result, many current language implementations adopt the 1:1 kernel-threading model, like Java (Scala), Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, C\#~\cite{Csharp} and Rust~\cite{Rust}, with a variety of presentation mechanisms.
     322From 2000 onwards, several language implementations have championed the M:N user-threading model, like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book}, including putting green threads back into Java~\cite{Quasar}, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}.
      323The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium-sized work to facilitate load balancing by the runtime~\cite{Verch12}.
    321324As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}.
    322325Finally, performant user-threading implementations (both time and space) meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety.
    323326
    324 A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \ie some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
     327A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \eg some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
    325328The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler.
    326329One solution is low-level qualifiers and functions (\eg @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs.
    327 A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety.
    328 This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{
    329 \CFA exception handling will be presented in a separate paper.
    330 The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}
    331 } and coroutines.
    332 Finally, language solutions allow matching constructs with language paradigm, \ie imperative and functional languages often have different presentations of the same concept to fit their programming model.
    333 
    334 Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
    335 Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging}\footnote{
    336 The notion of competitive succession instead of direct handoff, \ie a lock owner releases the lock and an arriving thread acquires it ahead of preexisting waiter threads.
      330A safer solution is high-level language constructs so the \emph{compiler} knows the concurrency boundaries (where mutual exclusion and synchronization are acquired/released) and provides implicit safety at and across these boundaries.
     331While the optimization problem is best known with respect to concurrency, it applies to other complex control-flow, like exceptions and coroutines.
     332As well, language solutions allow matching the language paradigm with the approach, \eg matching the functional paradigm with data-flow programming or the imperative paradigm with thread programming.
     333
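As a concrete illustration of the low-level approach, the following C sketch (illustrative only, not one of the paper's examples) publishes data between two threads using \Celeven atomics, where every safety obligation is explicit and rests on the programmer.
\begin{cfa}
#include <stdatomic.h>
_Atomic int ready = 0; $\C{// explicit safe flag with ordered store/load}$
int data = 0; $\C{// plain shared data}$
void publish( void ) {
	data = 42; $\C{// write payload}$
	atomic_store( &ready, 1 ); $\C{// payload visible before flag is set}$
}
int consume( void ) {
	while ( ! atomic_load( &ready ) ); $\C{// spin until published}$
	return data; $\C{// safely reads 42}$
}
\end{cfa}
A high-level construct, \eg a monitor, establishes the same ordering implicitly at its boundaries.
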
     334Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety (unsafe code) for performance when necessary.
     335Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~9]{Buhr05a}) and \emph{barging}\footnote{
     336Barging is competitive succession instead of direct handoff, \ie after a lock is released both arriving and preexisting waiter threads compete to acquire the lock.
      337Hence, an arriving thread can temporally \emph{barge} ahead of threads already waiting for an event, which can repeat indefinitely, leading to starvation of waiter threads.
    337338} (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, \ie once there is spurious wakeup, signals-as-hints follow.
    338 However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}, it is a performance design choice.
    339 Similarly, signals-as-hints are often a performance decision.
    340 We argue removing spurious wakeup and signals-as-hints make concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation.
    341 (Author experience teaching concurrency is that students are highly confused by these semantics.)
    342 Clawing back performance, when local non-determinism is unimportant, should be an option not the default.
    343 
    344 \begin{comment}
    345 Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and are not systems languages, for those with managed memory.
    346 As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten.
    347 While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing complex and interdependent features-set (\eg objects, inheritance, templates, etc.) mean idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC.
    348 Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base.
    349 \CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems.
    350 \end{comment}
    351 
    352 \CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default.
    353 We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
     339(Author experience teaching concurrency is that students are confused by these semantics.)
     340However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~9]{Buhr05a};
     341it is a performance design choice.
      342We argue removing spurious wakeup and signals-as-hints makes concurrent programming simpler and safer, as there is less local non-determinism to manage.
      343If barging acquisition is allowed, its specialized performance advantage should be available as an option, not the default.
     344
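To illustrate, the following pthreads sketch (illustrative only) shows the defensive idiom these semantics force on programmers: because a wakeup may be spurious and a signalled thread may be barged ahead of, the waiting predicate must be re-checked in a loop.
\begin{cfa}
#include <pthread.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t c = PTHREAD_COND_INITIALIZER;
int ready = 0;
void wait_ready( void ) {
	pthread_mutex_lock( &m );
	while ( ! ready ) $\C{// loop, not if: wakeup may be spurious}$
		pthread_cond_wait( &c, &m ); $\C{// atomically release m and block}$
	pthread_mutex_unlock( &m );
}
\end{cfa}
Without spurious wakeup and signals-as-hints, a single @if@ test of the predicate is correct.
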
     345\CFA embraces language extensions for advanced control-flow, user-level threading, and safety as the default.
     346We present comparative examples to support our argument that the \CFA control-flow extensions are as expressive and safe as those in other concurrent imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
    354347The main contributions of this work are:
    355 \begin{itemize}[topsep=3pt,itemsep=1pt]
     348\begin{itemize}[topsep=3pt,itemsep=0pt]
    356349\item
    357 language-level generators, coroutines and user-level threading, which respect the expectations of C programmers.
     350a set of fundamental execution properties that dictate which language-level control-flow features need to be supported,
     351
    358352\item
    359 monitor synchronization without barging, and the ability to safely acquiring multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
     353integration of these language-level control-flow features, while respecting the style and expectations of C programmers,
     354
    360355\item
    361 providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features.
      356monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms,
     357
     358\item
     359providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features,
     360
    362361% \item
    363362% library extensions for executors, futures, and actors built on the basic mechanisms.
     363
    364364\item
    365 a runtime system with no spurious wakeup.
      365a runtime system without spurious wakeup and with no performance loss,
     366
    366367\item
    367 a dynamic partitioning mechanism to segregate the execution environment for specialized requirements.
     368a dynamic partitioning mechanism to segregate groups of executing user and kernel threads performing specialized work (\eg web-server or compute engine) or requiring different scheduling (\eg NUMA or real-time).
     369
    368370% \item
    369371% a non-blocking I/O library
     372
    370373\item
    371 experimental results showing comparable performance of the new features with similar mechanisms in other programming languages.
     374experimental results showing comparable performance of the \CFA features with similar mechanisms in other languages.
    372375\end{itemize}
    373376
    374 Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, which produces constructs @generator@ and @coroutine@.
    375 Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, which produces the @thread@ construct.
     377Section~\ref{s:FundamentalExecutionProperties} presents the compositional hierarchy of execution properties directing the design of control-flow features in \CFA.
      378Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, producing the constructs @generator@ and @coroutine@.
      379Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, producing the @thread@ construct.
    376380Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms to restrict nondeterminism when controlling shared access to resources (mutual exclusion) and timing relationships among threads (synchronization).
    377381Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs.
    378382Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure (cluster) threads and virtual processors (kernel threads).
    379 Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
     383Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0.
     384
     385
     386\section{Fundamental Execution Properties}
     387\label{s:FundamentalExecutionProperties}
     388
     389The features in a programming language should be composed from a set of fundamental properties rather than an ad hoc collection chosen by the designers.
     390To this end, the control-flow features created for \CFA are based on the fundamental properties of any language with function-stack control-flow (see also \uC~\cite[pp.~140-142]{uC++}).
     391The fundamental properties are execution state, thread, and mutual-exclusion/synchronization (MES).
     392These independent properties can be used alone, in pairs, or in triplets to compose different language features, forming a compositional hierarchy where the most advanced feature has all the properties (state/thread/MES).
      393While it is possible for a language to only support the most advanced feature~\cite{Hermes90}, doing so unnecessarily complicates, and makes inefficient, solutions to certain classes of problems.
     394As is shown, each of the (non-rejected) composed features solves a particular set of problems, and hence, has a defensible position in a programming language.
      395If a compositional feature is missing, a programmer has too few/many fundamental properties, resulting in a complex and/or inefficient solution.
     396
     397In detail, the fundamental properties are:
     398\begin{description}[leftmargin=\parindent,topsep=3pt,parsep=0pt]
     399\item[\newterm{execution state}:]
      400is the state information needed by a control-flow feature to initialize, manage computation data and execution location(s), and de-initialize.
      401State is retained in fixed-sized aggregate structures and dynamic-sized stack(s), often allocated in the heap(s) managed by the runtime system.
      402The lifetime of the state varies with the control-flow feature, where longer lifetime and dynamic size provide greater power but also increase usage complexity and cost.
      403Control-flow transfers among execution states occur in multiple ways, such as function call, context switch, asynchronous await, etc.
     404Because the programming language determines what constitutes an execution state, implicitly manages this state, and defines movement mechanisms among states, execution state is an elementary property of the semantics of a programming language.
     405% An execution-state is related to the notion of a process continuation \cite{Hieb90}.
     406
     407\item[\newterm{threading}:]
     408is execution of code that occurs independently of other execution, \ie the execution resulting from a thread is sequential.
     409Multiple threads provide \emph{concurrent execution};
     410concurrent execution becomes parallel when run on multiple processing units (hyper-threading, cores, sockets).
     411There must be language mechanisms to create, block/unblock, and join with a thread.
     412
     413\item[\newterm{MES}:]
      414is the set of concurrency mechanisms to perform an action without interruption and to establish timing relationships among multiple threads.
     415These two properties are independent, \ie mutual exclusion cannot provide synchronization and vice versa without introducing additional threads~\cite[\S~4]{Buhr05a}.
      416Limiting MES, \eg no access to shared data, results in contrived solutions and inefficiency on multi-core von Neumann computers, where shared memory is a foundational aspect of their design.
     417\end{description}
      418These properties are fundamental because they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features or concurrency, nor provide MES using atomic hardware mechanisms.
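
For example, the following C sketch (illustrative only) shows the independence of the two MES properties: a mutex alone provides mutual exclusion, but waiting for an event degrades to polling.
\begin{cfa}
#include <pthread.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int ready = 0;
void wait_event( void ) { $\C{// mutual exclusion only}$
	for ( ;; ) {
		pthread_mutex_lock( &m );
		if ( ready ) { pthread_mutex_unlock( &m ); break; }
		pthread_mutex_unlock( &m ); $\C{// cannot block: wasteful polling}$
	}
}
\end{cfa}
Efficient waiting requires a separate synchronization mechanism, \eg a condition variable, which in turn relies on mutual exclusion to protect its predicate.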
     419
     420
     421\subsection{Execution Properties}
     422
      423Table~\ref{t:ExecutionPropertyComposition} shows how the three fundamental execution properties (state, thread, and mutual exclusion) compose a hierarchy of control-flow features needed in a programming language.
     424(When doing case analysis, not all combinations are meaningful.)
     425Note, basic von Neumann execution requires at least one thread and an execution state providing some form of call stack.
     426For table entries missing these minimal components, the property is borrowed from the invoker (caller).
     427
     428Case 1 is a function that borrows storage for its state (stack frame/activation) and a thread from its invoker and retains this state across \emph{callees}, \ie function local-variables are retained on the stack across calls.
     429Case 2 is case 1 with access to shared state so callers are restricted during update (mutual exclusion) and scheduling for other threads (synchronization).
     430Case 3 is a stateful function supporting resume/suspend along with call/return to retain state across \emph{callers}, but has some restrictions because the function's state is stackless.
     431Note, stackless functions still borrow the caller's stack and thread, where the stack is used to preserve state across its callees.
     432Case 4 is cases 2 and 3 with protection to shared state for stackless functions.
     433Cases 5 and 6 are the same as 3 and 4 but only the thread is borrowed as the function state is stackful, so resume/suspend is a context switch from the caller's to the function's stack.
     434Cases 7 and 8 are rejected because a function that is given a new thread must have its own stack where the thread begins and stack frames are stored for calls, \ie there is no stack to borrow.
     435Cases 9 and 10 are rejected because a thread with a fixed state (no stack) cannot accept calls, make calls, block, or be preempted, all of which require an unknown amount of additional dynamic state.
     436Hence, once started, this kind of thread must execute to completion, \ie computation only, which severely restricts runtime management.
     437Cases 11 and 12 have a stackful thread with and without safe access to shared state.
     438Execution properties increase the cost of creation and execution along with complexity of usage.
     439
     440\begin{table}
     441\caption{Execution property composition}
     442\centering
     443\label{t:ExecutionPropertyComposition}
     444\renewcommand{\arraystretch}{1.25}
     445%\setlength{\tabcolsep}{5pt}
     446\begin{tabular}{c|c||l|l}
     447\multicolumn{2}{c||}{execution properties} & \multicolumn{2}{c}{mutual exclusion / synchronization} \\
     448\hline
     449stateful                        & thread        & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\
     450\hline   
     451\hline   
     452No                                      & No            & \textbf{1}\ \ \ function                              & \textbf{2}\ \ \ @monitor@ function    \\
     453\hline   
     454Yes (stackless)         & No            & \textbf{3}\ \ \ @generator@                   & \textbf{4}\ \ \ @monitor@ @generator@ \\
     455\hline   
     456Yes (stackful)          & No            & \textbf{5}\ \ \ @coroutine@                   & \textbf{6}\ \ \ @monitor@ @coroutine@ \\
     457\hline   
     458No                                      & Yes           & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\
     459\hline   
     460Yes (stackless)         & Yes           & \textbf{9}\ \ \ {\color{red}rejected} & \textbf{10}\ \ \ {\color{red}rejected} \\
     461\hline   
     462Yes (stackful)          & Yes           & \textbf{11}\ \ \ @thread@                             & \textbf{12}\ \ @monitor@ @thread@             \\
     463\end{tabular}
     464\end{table}
     465
     466Given the execution-properties taxonomy, programmers can now answer three basic questions: is state necessary across calls and how much, is a separate thread necessary, is access to shared state necessary.
      467The answers define the optimal language feature needed for implementing a programming problem.
      468The next sections discuss how \CFA fills in the table with language features, while other programming languages may only provide a subset of the table.
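
As a preview, the following declarations (illustrative only; the syntax is detailed in the following sections) show how the \CFA type kinds map onto the table cases.
\begin{cfa}
generator G { int i; }; $\C{// case 3: stackless, no thread}$
coroutine C { int i; }; $\C{// case 5: stackful, no thread}$
monitor M { int shared; }; $\C{// case 2: MES added to a type}$
thread T { int i; }; $\C{// case 11: stackful with own thread}$
\end{cfa}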
     469
     470
     471\subsection{Design Requirements}
     472
     473The following design requirements largely stem from building \CFA on top of C.
     474\begin{itemize}[topsep=3pt,parsep=0pt]
     475\item
     476All communication must be statically type checkable for early detection of errors and efficient code generation.
     477This requirement is consistent with the fact that C is a statically-typed programming-language.
     478
     479\item
      480Direct interaction among language features must be possible, allowing any feature to be selected without restricting comm\-unication.
     481For example, many concurrent languages do not provide direct communication (calls) among threads, \ie threads only communicate indirectly through monitors, channels, messages, and/or futures.
      482Indirect communication increases the number of objects, consuming more resources, and requires additional synchronization and possibly data transfer.
     483
     484\item
     485All communication is performed using function calls, \ie data is transmitted from argument to parameter and results are returned from function calls.
     486Alternative forms of communication, such as call-backs, message passing, channels, or communication ports, step outside of C's normal form of communication.
     487
     488\item
     489All stateful features must follow the same declaration scopes and lifetimes as other language data.
     490For C that means at program startup, during block and function activation, and on demand using dynamic allocation.
     491
     492\item
     493MES must be available implicitly in language constructs as well as explicitly for specialized requirements, because requiring programmers to build MES using low-level locks often leads to incorrect programs.
     494Furthermore, reducing synchronization scope by encapsulating it within language constructs further reduces errors in concurrent programs.
     495
     496\item
     497Both synchronous and asynchronous communication are needed.
     498However, we believe the best way to provide asynchrony, such as call-buffering/chaining and/or returning futures~\cite{multilisp}, is building it from expressive synchronous features.
     499
     500\item
      501Synchronization must be able to control the service order of requests, including prioritizing selection from different kinds of outstanding requests, and postponing a request for an unspecified time while continuing to accept new requests.
      502Otherwise, certain concurrency problems are difficult, \eg web server and disk scheduling, and the amount of concurrency is inhibited~\cite{Gentleman81}.
     503\end{itemize}
     504We have satisfied these requirements in \CFA while maintaining backwards compatibility with the huge body of legacy C programs.
     505% In contrast, other new programming languages must still access C programs (\eg operating-system service routines), but do so through fragile C interfaces.
     506
     507
     508\subsection{Asynchronous Await / Call}
     509
     510Asynchronous await/call is a caller mechanism for structuring programs and/or increasing concurrency, where the caller (client) postpones an action into the future, which is subsequently executed by a callee (server).
     511The caller detects the action's completion through a \newterm{future}/\newterm{promise}.
     512The benefit is asynchronous caller execution with respect to the callee until future resolution.
     513For single-threaded languages like JavaScript, an asynchronous call passes a callee action, which is queued in the event-engine, and continues execution with a promise.
     514When the caller needs the promise to be fulfilled, it executes @await@.
      515A promise-completion call-back can be part of the callee action, or the caller is rescheduled;
      516in either case, the call-back is executed after the promise is fulfilled.
      517While asynchronous calls generate new callee (server) events, we contend this mechanism is insufficient for advanced control-flow mechanisms like generators or coroutines (which are discussed next).
      518Specifically, control between caller and callee occurs indirectly through the event-engine, precluding direct handoff and cycling among events, and requires complex resolution of a control promise and data.
      519Note, @async-await@ is just syntactic sugar over the event engine, so it does not solve these deficiencies.
     520For multi-threaded languages like Java, the asynchronous call queues a callee action with an executor (server), which subsequently executes the work by a thread in the executor thread-pool.
      521The problem is when concurrent work-units need to interact and/or block, as this affects the executor, \eg stops threads.
     522While it is possible to extend this approach to support the necessary mechanisms, \eg message passing in Actors, we show monitors and threads provide an equally competitive approach that does not deviate from normal call communication and can be used to build asynchronous call, as is done in Java.
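
To make the comparison concrete, the following pthreads sketch (illustrative only, with hypothetical names) builds a one-shot future from the synchronous primitives this argument relies on; the caller executes asynchronously until it awaits the result.
\begin{cfa}
#include <pthread.h>
typedef struct { $\C{// assume m/c/fulfilled are statically initialized}$
	pthread_mutex_t m;  pthread_cond_t c;
	int fulfilled, value;
} future_t;
void fulfil( future_t * f, int v ) { $\C{// callee (server) side}$
	pthread_mutex_lock( &f->m );
	f->value = v;  f->fulfilled = 1;
	pthread_cond_signal( &f->c );
	pthread_mutex_unlock( &f->m );
}
int await( future_t * f ) { $\C{// caller (client) side}$
	pthread_mutex_lock( &f->m );
	while ( ! f->fulfilled ) pthread_cond_wait( &f->c, &f->m ); $\C{// spurious-wakeup loop}$
	int v = f->value;
	pthread_mutex_unlock( &f->m );
	return v;
}
\end{cfa}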
    380523
    381524
     
    383526\label{s:StatefulFunction}
    384527
    385 The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine.
    386 Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
    387 This capability is accomplished by retaining a data/execution \emph{closure} between invocations.
    388 If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited.
    389 If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions.
    390 Hence, refactoring a stackless coroutine may require changing it to stackful.
    391 A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack.
    392 As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
    393 A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
    394 Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance.
    395 Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
    396 Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.
     528A \emph{stateful function} has the ability to remember state between calls, where state can be either data or execution, \eg plugin, device driver, finite-state machine (FSM).
     529A simple technique to retain data state between calls is @static@ declarations within a function, which is often implemented by hoisting the declarations to the global scope but hiding the names within the function using name mangling.
      530However, each call starts the function at the top, making it difficult to determine the last point of execution in an algorithm, and requiring multiple flag variables and testing to reestablish the continuation point.
     531Hence, the next step of generalizing function state is implicitly remembering the return point between calls and reentering the function at this point rather than the top, called \emph{generators}\,/\,\emph{iterators} or \emph{stackless coroutines}.
     532For example, a Fibonacci generator retains data and execution state allowing it to remember prior values needed to generate the next value and the location in the algorithm to compute that value.
     533The next step of generalization is instantiating the function to allow multiple named instances, \eg multiple Fibonacci generators, where each instance has its own state, and hence, can generate an independent sequence of values.
     534Note, a subset of generator state is a function \emph{closure}, \ie the technique of capturing lexical references when returning a nested function.
     535A further generalization is adding a stack to a generator's state, called a \emph{coroutine}, so it can suspend outside of itself, \eg call helper functions to arbitrary depth before suspending back to its resumer without unwinding these calls.
     536For example, a coroutine iterator for a binary tree can stop the traversal at the visit point (pre, infix, post traversal), return the node value to the caller, and then continue the recursive traversal from the current node on the next call.
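
For example, a C sketch (illustrative only) of the @static@ approach for Fibonacci retains the data state between calls, but always restarts at the top and permits only one implicit instance.
\begin{cfa}
int fib( void ) { $\C{// single implicit instance}$
	static int fn1 = 1, fn = 0; $\C{// data state retained between calls}$
	int ret = fn;
	int next = fn + fn1;  fn1 = fn;  fn = next; $\C{// advance sequence}$
	return ret; $\C{// next call restarts at the top}$
}
\end{cfa}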
     537
     538There are two styles of activating a stateful function, \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
      539These styles \emph{do not} cause incremental stack growth, \eg a million resume/suspend or resume/resume cycles do not remember each cycle, only the last resumer of each cycle.
     540Selecting between stackless/stackful semantics and asymmetric/symmetric style is a tradeoff between programming requirements, performance, and design, where stackless is faster and smaller (modified call/return between closures), stackful is more general but slower and larger (context switching between distinct stacks), and asymmetric is simpler control-flow than symmetric.
     541Additionally, storage management for the closure/stack (especially in unmanaged languages, \ie no garbage collection) must be factored into design and performance.
     542Note, creation cost (closure/stack) is amortized across usage, so activation cost (resume/suspend) is usually the dominant factor.
     543
     544% The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine.
     545% Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
     546% If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited.
     547% If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions.
     548% Hence, refactoring a stackless coroutine may require changing it to stackful.
     549% A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack.
     550% As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
     551% A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
     552% Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance.
     553% Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
     554% nppNote, creation cost is amortized across usage, so activation cost is usually the dominant factor.
     555
     556For example, Python presents asymmetric generators as a function object, \uC presents symmetric coroutines as a \lstinline[language=C++]|class|-like object, and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
     557\begin{center}
     558\begin{tabular}{@{}l|l|l@{}}
     559\multicolumn{1}{@{}c|}{Python asymmetric generator} & \multicolumn{1}{c|}{\uC symmetric coroutine} & \multicolumn{1}{c@{}}{Pthreads thread} \\
     560\hline
     561\begin{python}
     562`def Gen():` $\LstCommentStyle{\color{red}// function}$
     563        ... yield val ...
     564gen = Gen()
     565for i in range( 10 ):
     566        print( next( gen ) )
     567\end{python}
     568&
     569\begin{uC++}
     570`_Coroutine Cycle {` $\LstCommentStyle{\color{red}// class}$
     571        Cycle * p;
     572        void main() { p->cycle(); }
     573        void cycle() { resume(); }  `};`
     574Cycle c1, c2; c1.p=&c2; c2.p=&c1; c1.cycle();
     575\end{uC++}
     576&
     577\begin{cfa}
     578void * rtn( void * arg ) { ... }
     579int i = 3, rc;
     580pthread_t t; $\C{// thread id}$
     581$\LstCommentStyle{\color{red}// function pointer}$
      582rc = pthread_create( &t, NULL, `rtn`, (void *)i );
     583\end{cfa}
     584\end{tabular}
     585\end{center}
     586\CFA's preferred presentation model for generators/coroutines/threads is a hybrid of functions and classes, giving an object-oriented flavour.
     587Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type via the type's name.
     588The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed for flexibility reasons.
     589Each custom type is discussed in detail in the following sections.
     590
     591
     592\subsection{Generator}
     593
     594Stackless generators (Table~\ref{t:ExecutionPropertyComposition} case 3) have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
     595The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
     596A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.\footnote{
     597The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|?++|, \lstinline|++?|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|.
     598Operator \lstinline+|+ is overloaded for printing, like bit-shift \lstinline|<<| in \CC.
     599The \CFA \lstinline|with| clause opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics;
     600multiple aggregates may be opened.
     601\CFA has rebindable references \lstinline|int i, & ip = i, j; `&ip = &j;`| and non-rebindable references \lstinline|int i, & `const` ip = i, j; `&ip = &j;` // disallowed|.
     602}%
    397603
    398604\begin{figure}
     
    408614
    409615
     616
     617
    410618        int fn = f->fn; f->fn = f->fn1;
    411619                f->fn1 = f->fn + fn;
    412620        return fn;
    413 
    414621}
    415622int main() {
     
    430637void `main(Fib & fib)` with(fib) {
    431638
     639
    432640        [fn1, fn] = [1, 0];
    433641        for () {
     
    449657\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    450658typedef struct {
    451         int fn1, fn;  void * `next`;
     659        int `restart`, fn1, fn;
    452660} Fib;
    453 #define FibCtor { 1, 0, NULL }
     661#define FibCtor { `0`, 1, 0 }
    454662Fib * comain( Fib * f ) {
    455         if ( f->next ) goto *f->next;
    456         f->next = &&s1;
     663        `static void * states[] = {&&s0, &&s1};`
     664        `goto *states[f->restart];`
     665  s0: f->`restart` = 1;
    457666        for ( ;; ) {
    458667                return f;
    459668          s1:; int fn = f->fn + f->fn1;
    460                         f->fn1 = f->fn; f->fn = fn;
     669                f->fn1 = f->fn; f->fn = fn;
    461670        }
    462671}
     
    470679\end{lrbox}
    471680
    472 \subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}
     681\subfloat[C]{\label{f:CFibonacci}\usebox\myboxA}
    473682\hspace{3pt}
    474683\vrule
    475684\hspace{3pt}
    476 \subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     685\subfloat[\CFA]{\label{f:CFAFibonacciGen}\usebox\myboxB}
    477686\hspace{3pt}
    478687\vrule
    479688\hspace{3pt}
    480 \subfloat[C generator implementation]{\label{f:CFibonacciSim}\usebox\myboxC}
      689\subfloat[C code generated for the \CFA version]{\label{f:CFibonacciSim}\usebox\myboxC}
    481690\caption{Fibonacci (output) asymmetric generator}
    482691\label{f:FibonacciAsymmetricGenerator}
     
    491700};
    492701void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor
    493 void ^?{}( Fmt & f ) with(f) { $\C[1.75in]{// destructor}$
     702void ^?{}( Fmt & f ) with(f) { $\C[2.25in]{// destructor}$
    494703        if ( g != 0 || b != 0 ) sout | nl; }
    495704void `main( Fmt & f )` with(f) {
     
    497706                for ( ; g < 5; g += 1 ) { $\C{// groups}$
    498707                        for ( ; b < 4; b += 1 ) { $\C{// blocks}$
    499                                 `suspend;` $\C{// wait for character}$
    500                                 while ( ch == '\n' ) `suspend;` // ignore
    501                                 sout | ch;                                              // newline
    502                         } sout | " ";  // block spacer
    503                 } sout | nl; // group newline
     708                                do { `suspend;` $\C{// wait for character}$
      709                                } while ( ch == '\n' ); // ignore newline
     710                                sout | ch;                      $\C{// print character}$
     711                        } sout | " ";  $\C{// block separator}$
     712                } sout | nl; $\C{// group separator}$
    504713        }
    505714}
     
    519728\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    520729typedef struct {
    521         void * next;
     730        int `restart`, g, b;
    522731        char ch;
    523         int g, b;
    524732} Fmt;
    525733void comain( Fmt * f ) {
    526         if ( f->next ) goto *f->next;
    527         f->next = &&s1;
     734        `static void * states[] = {&&s0, &&s1};`
     735        `goto *states[f->restart];`
     736  s0: f->`restart` = 1;
    528737        for ( ;; ) {
    529738                for ( f->g = 0; f->g < 5; f->g += 1 ) {
    530739                        for ( f->b = 0; f->b < 4; f->b += 1 ) {
    531                                 return;
    532                           s1:;  while ( f->ch == '\n' ) return;
     740                                do { return;  s1: ;
     741                                } while ( f->ch == '\n' );
    533742                                printf( "%c", f->ch );
    534743                        } printf( " " );
     
    537746}
    538747int main() {
    539         Fmt fmt = { NULL };  comain( &fmt ); // prime
     748        Fmt fmt = { `0` };  comain( &fmt ); // prime
    540749        for ( ;; ) {
    541750                scanf( "%c", &fmt.ch );
     
    548757\end{lrbox}
    549758
    550 \subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}
    551 \hspace{3pt}
     759\subfloat[\CFA]{\label{f:CFAFormatGen}\usebox\myboxA}
     760\hspace{35pt}
    552761\vrule
    553762\hspace{3pt}
    554 \subfloat[C generator simulation]{\label{f:CFormatSim}\usebox\myboxB}
      763\subfloat[C code generated for the \CFA version]{\label{f:CFormatGenImpl}\usebox\myboxB}
    555764\hspace{3pt}
    556765\caption{Formatter (input) asymmetric generator}
     
    558767\end{figure}
    559768
    560 Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
    561 For example, Python presents generators as a function object:
    562 \begin{python}
    563 def Gen():
    564         ... `yield val` ...
    565 gen = Gen()
    566 for i in range( 10 ):
    567         print( next( gen ) )
    568 \end{python}
    569 Boost presents coroutines in terms of four functor object-types:
    570 \begin{cfa}
    571 asymmetric_coroutine<>::pull_type
    572 asymmetric_coroutine<>::push_type
    573 symmetric_coroutine<>::call_type
    574 symmetric_coroutine<>::yield_type
    575 \end{cfa}
    576 and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}, \eg pthreads:
    577 \begin{cfa}
    578 void * rtn( void * arg ) { ... }
    579 int i = 3, rc;
    580 pthread_t t; $\C{// thread id}$
    581 `rc = pthread_create( &t, rtn, (void *)i );` $\C{// create and initialized task, type-unsafe input parameter}$
    582 \end{cfa}
    583 % void mycor( pthread_t cid, void * arg ) {
    584 %       int * value = (int *)arg;                               $\C{// type unsafe, pointer-size only}$
    585 %       // thread body
    586 % }
    587 % int main() {
    588 %       int input = 0, output;
    589 %       coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$
    590 %       coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$
    591 % }
    592 \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of objects and functions, with an object-oriented flavour.
    593 Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type.
    594 The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed.
    595 
    596 
    597 \subsection{Generator}
    598 
    599 Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
    600 The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
    601 A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.
    602 
    603 Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
     769Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written (left to right) in C and \CFA, along with the underlying C implementation for the \CFA version.
    604770This generator is an \emph{output generator}, producing a new result on each resumption.
    605771To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
     
    609775The C version only has the middle execution state because the top execution state is declaration initialization.
    610776Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type.
    611 This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{
    612 The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution.
    613 Hence, overloading this name for other starting points (generator/coroutine/thread) is a logical extension.}
    614 called a \emph{generator main},which takes as its only parameter a reference to the generator type.
     777Each generator type must have a function named \lstinline|main|,
     778% \footnote{
     779% The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution.
     780% Leveraging starting semantics to this name for generator/coroutine/thread is a logical extension.}
     781called a \emph{generator main} (leveraging the starting semantics for program @main@ in C), which is connected to the generator type via its single reference parameter.
    615782The generator main contains @suspend@ statements that suspend execution without ending the generator, in contrast to @return@.
    616 For the Fibonacci generator-main,\footnote{
    617 The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics.
    618 Multiple aggregates may be opened.}
     783For the Fibonacci generator-main,
    619784the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@.
    620785Any local variables in @main@ \emph{are not retained} between calls;
     
    625790Resuming an ended (returned) generator is undefined.
    626791Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance.
    627 Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA generator only needs one additional field, @next@, to handle retention of execution state.
    628 The computed @goto@ at the start of the generator main, which branches after the previous suspend, adds very little cost to the resume call.
    629 Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.\footnote{
    630 The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|++?|, \lstinline|?++|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|.
    631 }%
     792Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA asymmetric generator.
     793Only one execution-state field, @restart@, is needed to subscript the suspension points in the generator.
     794At the start of the generator main, the @static@ declaration, @states@, is initialized to the $N$ suspend points in the generator (where prefix operator @&&@ takes the address of a label~\cite{gccValueLabels}).
     795Next, the computed @goto@ selects the last suspend point and branches to it.
     796Setting @restart@ and branching via the computed @goto@ add very little cost to the suspend/resume calls.
     797
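For illustration, the suspend/resume dispatch distills to the following minimal C sketch (the closure fields and usage comment are illustrative, following the pattern in Figure~\ref{f:CFibonacciSim}):
\begin{cfa}
typedef struct Fib { int restart, fn1, fn; } Fib; // manual closure, restart initialized to 0
void comain( Fib * f ) {
	static void * states[] = { &&s0, &&s1 }; // gcc label addresses of the suspend points
	goto *states[f->restart]; // computed goto branches to the last suspend point
  s0: f->restart = 1; // subsequent resumes restart at s1
	f->fn1 = 1;  f->fn = 0;
	for ( ;; ) {
		return; // suspend back to the resumer, fn holds the next value
	  s1: ; // resume restarts here
		int fn = f->fn1 + f->fn;  f->fn1 = f->fn;  f->fn = fn;
	}
}
// usage: Fib f = { 0 };  comain( &f );  printf( "%d\n", f.fn );
\end{cfa}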
     798An advantage of the \CFA explicit generator type is the ability to provide multiple type-safe interface functions taking and returning arbitrary types.
    632799\begin{cfa}
    633800int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$
    634 int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// use function-call interface to skip N values}$
    635 double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}\CRT$
    636 sout | (int)f1() | (double)f1() | f2( 2 ); // alternative interface, cast selects call based on return type, step 2 values
     801int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// add parameter to skip N values}$
     802double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}$
     803Fib f;  int i;  double d;
     804i = f();  i = f( 2 );  d = f();                                         $\C{// alternative interfaces}\CRT$
    637805\end{cfa}
    638806Now, the generator can be a separately compiled opaque type, accessed only through its interface functions.
    639807For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type, and hence only has a single interface, but an implicit closure.
    640808
    641 Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden.
    642 (This restriction is removed by the coroutine in Section~\ref{s:Coroutine}.)
    643 This requirement follows from the generality of variable-size local-state, \eg local state with a variable-length array requires dynamic allocation because the array size is unknown at compile time.
     809\begin{figure}
     810%\centering
     811\newbox\myboxA
     812\begin{lrbox}{\myboxA}
     813\begin{python}[aboveskip=0pt,belowskip=0pt]
     814def Fib():
     815        fn1, fn = 0, 1
     816        while True:
     817                `yield fn1`
     818                fn1, fn = fn, fn1 + fn
     819f1 = Fib()
     820f2 = Fib()
     821for i in range( 10 ):
     822        print( next( f1 ), next( f2 ) )
     823
     824
     825
     826
     827
     828
     829
     830
     831
     832
     833\end{python}
     834\end{lrbox}
     835
     836\newbox\myboxB
     837\begin{lrbox}{\myboxB}
     838\begin{python}[aboveskip=0pt,belowskip=0pt]
     839def Fmt():
     840        try:
     841                while True:                                             $\C[2.5in]{\# until destructor call}$
     842                        for g in range( 5 ):            $\C{\# groups}$
     843                                for b in range( 4 ):    $\C{\# blocks}$
     844                                        while True:
     845                                                ch = (yield)    $\C{\# receive from send}$
     846                                                if '\n' not in ch: $\C{\# ignore newline}$
     847                                                        break
     848                                        print( ch, end='' )     $\C{\# print character}$
     849                                print( '  ', end='' )   $\C{\# block separator}$
     850                        print()                                         $\C{\# group separator}$
     851        except GeneratorExit:                           $\C{\# destructor}$
     852                if g != 0 or b != 0:                            $\C{\# special case}$
     853                        print()
     854fmt = Fmt()
     855`next( fmt )`                                                   $\C{\# prime, next prewritten}$
     856for i in range( 41 ):
     857        `fmt.send( 'a' );`                                      $\C{\# send to yield}$
     858\end{python}
     859\end{lrbox}
     860
     861\hspace{30pt}
     862\subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
     863\hspace{3pt}
     864\vrule
     865\hspace{3pt}
     866\subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
     867\caption{Python generator}
     868\label{f:PythonGenerator}
     869\end{figure}
     870
     871Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden (removed by the coroutine in Section~\ref{s:Coroutine}).
     872This manual requirement follows from the generality of allowing variable-size local-state, \eg local state with a variable-length array requires dynamic allocation as the array size is unknown at compile time.
    644873However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming.
    645874But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated.
    646 With respect to safety, we believe static analysis can discriminate local state from temporary variables in a generator, \ie variable usage spanning @suspend@, and generate a compile-time error.
    647 Finally, our current experience is that most generator problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.
     875With respect to safety, we believe static analysis can discriminate persistent generator state from temporary generator-main state and raise a compile-time error for temporary usage spanning suspend points.
     876Our experience using generators is that the problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.
    648877As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators.
    649878
     
    667896The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
    668897The destructor provides a newline, if formatted text ends with a full line.
    669 Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
    670 For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator.
    671 
    672 Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} asymmetric generator, a device-driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
    673 Device drives follow the pattern of simple data state but complex execution state, \ie finite state-machine (FSM) parsing a protocol.
    674 For example, the following protocol:
     898Figure~\ref{f:CFormatGenImpl} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
     899For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the format generator.
     900
     901% https://dl-acm-org.proxy.lib.uwaterloo.ca/
     902
     903Figure~\ref{f:DeviceDriverGen} shows an important application for an asymmetric generator, a device-driver, because device drivers are a significant source of operating-system errors: 85\% in Windows XP~\cite[p.~78]{Swift05} and 51.6\% in Linux~\cite[p.~1358]{Xiao19}. %\cite{Palix11}
     904Swift \etal~\cite[p.~86]{Swift05} restructure device drivers using the Extension Procedure Call (XPC) within the kernel via functions @nooks_driver_call@ and @nooks_kernel_call@, which have coroutine properties context switching to separate stacks with explicit hand-off calls;
     905however, the calls do not retain execution state, and hence always start from the top.
     906The alternative approach for implementing device drivers is stack ripping.
     907However, Adya \etal~\cite{Adya02} argue against stack ripping in Section 3.2 and suggest a hybrid approach in Section 4 using cooperatively scheduled \emph{fibers}, which is a form of coroutining.
     908
     909As an example, the following protocol:
    675910\begin{center}
    676911\ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots
    677912\end{center}
    678 is a network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
     913is for a simple network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
    679914Control characters may appear in a message if preceded by an ESC.
    680915When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register.
    681 The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer.
    682 Hence, the device driver is an input/output generator.
    683 
    684 Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent.
    685 As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variable that must be retained between calls and are manually hoisted into the generator type.
    686 % Manually, detecting and hoisting local-state variables is easy when the number is small.
    687 In contrast, the execution state is large, with one @resume@ and seven @suspend@s.
    688 Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach.
    689 Because FSMs can be complex and frequently occur in important domains, direct generator support is important in a system programming language.
     916The device driver returns a status code of its current state, and when a complete message is obtained, the operating system reads the message accumulated in the supplied buffer.
     917Hence, the device driver is an input/output generator, where the cost of resuming the device-driver generator is the same as call/return, so performance in an operating-system kernel is excellent.
     918The key benefits of using a generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than via table lookup or stack ripping.
     919The conclusion is that FSMs are complex and occur in important domains, so direct generator support is important in a system programming language.
    690920
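To sketch the operating-system side of this pattern (the names @uart_isr@, @read_hw_reg@, @process@, and @raise_error@ are hypothetical), each interrupt feeds one byte into the generator and acts on the returned status:
\begin{cfa}
char msg_buf[64]; // message buffer supplied to the driver
Driver driver = { msg_buf }; // generator instance, invokes the ?{} constructor
void uart_isr() { // hypothetical interrupt service routine
	driver.byte = read_hw_reg(); // byte from device hardware register
	Status status = resume( driver ).status; // restart FSM at its last suspend point
	if ( status == MSG ) process( msg_buf ); // complete message accumulated in buffer
	else if ( status != CONT ) raise_error( status ); // protocol error
}
\end{cfa}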
    691921\begin{figure}
    692922\centering
    693 \newbox\myboxA
    694 \begin{lrbox}{\myboxA}
    695 \begin{python}[aboveskip=0pt,belowskip=0pt]
    696 def Fib():
    697         fn1, fn = 0, 1
    698         while True:
    699                 `yield fn1`
    700                 fn1, fn = fn, fn1 + fn
    701 f1 = Fib()
    702 f2 = Fib()
    703 for i in range( 10 ):
    704         print( next( f1 ), next( f2 ) )
    705 
    706 
    707 
    708 
    709 
    710 
    711 \end{python}
    712 \end{lrbox}
    713 
    714 \newbox\myboxB
    715 \begin{lrbox}{\myboxB}
    716 \begin{python}[aboveskip=0pt,belowskip=0pt]
    717 def Fmt():
    718         try:
    719                 while True:
    720                         for g in range( 5 ):
    721                                 for b in range( 4 ):
    722                                         print( `(yield)`, end='' )
    723                                 print( '  ', end='' )
    724                         print()
    725         except GeneratorExit:
    726                 if g != 0 | b != 0:
    727                         print()
    728 fmt = Fmt()
    729 `next( fmt )`                    # prime, next prewritten
    730 for i in range( 41 ):
    731         `fmt.send( 'a' );`      # send to yield
    732 \end{python}
    733 \end{lrbox}
    734 \subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
    735 \hspace{3pt}
    736 \vrule
    737 \hspace{3pt}
    738 \subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
    739 \caption{Python generator}
    740 \label{f:PythonGenerator}
    741 
    742 \bigskip
    743 
    744923\begin{tabular}{@{}l|l@{}}
    745924\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     
    748927`generator` Driver {
    749928        Status status;
    750         unsigned char byte, * msg; // communication
    751         unsigned int lnth, sum;      // local state
    752         unsigned short int crc;
     929        char byte, * msg; // communication
     930        int lnth, sum;      // local state
     931        short int crc;
    753932};
    754933void ?{}( Driver & d, char * m ) { d.msg = m; }
     
    798977(The trivial cycle is a generator resuming itself.)
    799978This control flow is similar to recursion for functions but without stack growth.
    800 The steps for symmetric control-flow are creating, executing, and terminating the cycle.
     979Figure~\ref{f:PingPongFullCoroutineSteps} shows the three steps for symmetric control-flow: creating, executing, and terminating the cycle.
    801980Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope.
    802981(This issue occurs for any cyclic data structure.)
    803 % The example creates all the generators and then assigns the partners that form the cycle.
    804 % Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.
    805 Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
     982The example creates the generators, @ping@/@pong@, and then assigns the partners that form the cycle.
     983% (Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.)
     984Once the cycle is formed, the program main resumes one of the generators, @ping@, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
    806985Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
     986Note, the creator and starter may be different, \eg if the creator calls another function that starts the cycle.
    807987The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
    808 Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the  cost is the same as a function return.
    809 Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.
    810 
    811 Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump.
    812 This jump-starts at the top of the next generator main to re-execute the normal calling convention to make space on the stack for its local variables.
    813 However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but subsequently jump forward.
    814 This semantics is basically a tail-call optimization, which compilers already perform.
    815 The example shows the assembly code to undo the generator's entry code before the direct jump.
    816 This assembly code depends on what entry code is generated, specifically if there are local variables and the level of optimization.
    817 To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time.
    818 Nevertheless, it is possible to hand generate any symmetric generators for proof of concept and performance testing.
    819 A compiler could also eliminate other artifacts in the generator simulation to further increase performance, \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it fork @clang@.
     988Also, since local variables are not retained in the generator function, there are no objects with destructors to be called, so the cost is the same as a function return.
     989Destructor cost occurs when the generator instance is deallocated by the creator.
    820990
    821991\begin{figure}
     
    824994\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    825995`generator PingPong` {
     996        int N, i;                               // local state
    826997        const char * name;
    827         int N;
    828         int i;                          // local state
    829998        PingPong & partner; // rebindable reference
    830999};
    8311000
    8321001void `main( PingPong & pp )` with(pp) {
     1002
     1003
    8331004        for ( ; i < N; i += 1 ) {
    8341005                sout | name | i;
     
    8481019\begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt]
    8491020typedef struct PingPong {
     1021        int restart, N, i;
    8501022        const char * name;
    851         int N, i;
    8521023        struct PingPong * partner;
    853         void * next;
    8541024} PingPong;
    855 #define PPCtor(name, N) {name,N,0,NULL,NULL}
     1025#define PPCtor(name, N) {0, N, 0, name, NULL}
    8561026void comain( PingPong * pp ) {
    857         if ( pp->next ) goto *pp->next;
    858         pp->next = &&cycle;
     1027        static void * states[] = {&&s0, &&s1};
     1028        goto *states[pp->restart];
     1029  s0: pp->restart = 1;
    8591030        for ( ; pp->i < pp->N; pp->i += 1 ) {
    8601031                printf( "%s %d\n", pp->name, pp->i );
    8611032                asm( "mov  %0,%%rdi" : "=m" (pp->partner) );
    8621033                asm( "mov  %rdi,%rax" );
    863                 asm( "popq %rbx" );
     1034                asm( "add  $16, %rsp" );
     1035                asm( "popq %rbp" );
    8641036                asm( "jmp  comain" );
    865           cycle: ;
     1037          s1: ;
    8661038        }
    8671039}
     
    8791051\end{figure}
    8801052
    881 Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines).
     1053\begin{figure}
     1054\centering
     1055\input{FullCoroutinePhases.pstex_t}
     1056\vspace*{-10pt}
     1057\caption{Symmetric coroutine steps: Ping / Pong}
     1058\label{f:PingPongFullCoroutineSteps}
     1059\end{figure}
     1060
     1061Figure~\ref{f:CPingPongSim} shows the C implementation of the \CFA symmetric generator, where there is still only one additional field, @restart@, but @resume@ is more complex because it does a forward rather than backward jump.
     1062Before the jump, the parameter for the next call, @partner@, is placed into the register used for the first parameter, @rdi@, and the remaining registers are reset for a return.
     1063The @jmp comain@ restarts the function but with a different parameter, so the new call's behaviour depends on the state of the generator type, \ie it branches to the restart location with different data state.
     1064While the semantics of the forward call is a tail-call optimization, which compilers already perform, the generator state is different on each call, rather than a common state for a tail-recursive function (\ie the parameter to the function never changes during the forward calls).
     1065However, this assembler code depends on what entry code is generated, specifically if there are local variables and the level of optimization.
     1066Hence, internal compiler support is necessary for any forward call (or backwards return), \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it eventually fork @clang@.
     1067For this reason, \CFA does not support general symmetric generators at this time, but it is possible to hand generate any symmetric generator (as in Figure~\ref{f:CPingPongSim}) for proof of concept and performance testing.
     1068
     1069Finally, part of this generator work was inspired by the recent \CCtwenty coroutine proposal~\cite{C++20Coroutine19}, which uses the general term coroutine to mean generator.
    8821070Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators.
    8831071An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax:
     
    8941082\label{s:Coroutine}
    8951083
    896 Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
     1084Stackful coroutines (Table~\ref{t:ExecutionPropertyComposition} case 5) extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
    8971085A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
    898 Coroutine generality results in higher cost for creation, due to dynamic stack allocation, execution, due to context switching among stacks, and terminating, due to possible stack unwinding and dynamic stack deallocation.
     1086Coroutine generality results in higher cost for creation, due to dynamic stack allocation, for execution, due to context switching among stacks, and for terminating, due to possible stack unwinding and dynamic stack deallocation.
    8991087A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators.
    9001088
    9011089First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main.
    902 \begin{description}
    903 \item[Fibonacci]
    904 Move the declaration of @fn1@ to the start of coroutine main.
     1090\begin{center}
     1091\begin{tabular}{@{}l|l|l|l@{}}
     1092\multicolumn{1}{c|}{Fibonacci} & \multicolumn{1}{c|}{Formatter} & \multicolumn{1}{c|}{Device Driver} & \multicolumn{1}{c}{PingPong} \\
     1093\hline
    9051094\begin{cfa}[xleftmargin=0pt]
    906 void main( Fib & fib ) with(fib) {
     1095void main( Fib & fib ) ...
    9071096        `int fn1;`
    908 \end{cfa}
    909 \item[Formatter]
    910 Move the declaration of @g@ and @b@ to the for loops in the coroutine main.
     1097
     1098
     1099\end{cfa}
     1100&
    9111101\begin{cfa}[xleftmargin=0pt]
    9121102for ( `g`; 5 ) {
    9131103        for ( `b`; 4 ) {
    914 \end{cfa}
    915 \item[Device Driver]
    916 Move the declaration of @lnth@ and @sum@ to their points of initialization.
     1104
     1105
     1106\end{cfa}
     1107&
    9171108\begin{cfa}[xleftmargin=0pt]
    918         status = CONT;
    919         `unsigned int lnth = 0, sum = 0;`
    920         ...
    921         `unsigned short int crc = byte << 8;`
    922 \end{cfa}
    923 \item[PingPong]
    924 Move the declaration of @i@ to the for loop in the coroutine main.
     1109status = CONT;
     1110`int lnth = 0, sum = 0;`
     1111...
     1112`short int crc = byte << 8;`
     1113\end{cfa}
     1114&
    9251115\begin{cfa}[xleftmargin=0pt]
    926 void main( PingPong & pp ) with(pp) {
     1116void main( PingPong & pp ) ...
    9271117        for ( `i`; N ) {
    928 \end{cfa}
    929 \end{description}
     1118
     1119
     1120\end{cfa}
     1121\end{tabular}
     1122\end{center}
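For example, under this conversion the Fibonacci generator becomes the following coroutine sketch (interface function @next@ is illustrative):
\begin{cfa}
coroutine Fib { int fn; }; // only the communication variable remains
void main( Fib & fib ) with( fib ) {
	int fn1 = 1;  fn = 0; // local state now retained by the coroutine stack
	for ( ;; ) {
		suspend; // deliver fn to the resumer
		[fn1, fn] = [fn1 + fn, fn1]; // advance the sequence
	}
}
int next( Fib & fib ) { return resume( fib ).fn; }
\end{cfa}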
    9301123It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver.
    9311124\begin{cfa}
    932 unsigned int Crc() {
     1125int Crc() {
    9331126        `suspend;`
    934         unsigned short int crc = byte << 8;
     1127        short int crc = byte << 8;
    9351128        `suspend;`
    9361129        status = (crc | byte) == sum ? MSG : ECRC;
     
    9431136
    9441137\begin{comment}
    945 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @next@.
     1138Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @restart@.
    9461139Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
    9471140The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@.
    948 The interface function @next@, takes a Fibonacci instance and context switches to it using @resume@;
     1141The interface function @restart@, takes a Fibonacci instance and context switches to it using @resume@;
    9491142on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
    9501143The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
     
    11121305\begin{figure}
    11131306\centering
    1114 \lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $
    11151307\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
    11161308\begin{cfa}
    11171309`coroutine` Prod {
    1118         Cons & c;                       // communication
     1310        Cons & c;                       $\C[1.5in]{// communication}$
    11191311        int N, money, receipt;
    11201312};
    11211313void main( Prod & prod ) with( prod ) {
    1122         // 1st resume starts here
    1123         for ( i; N ) {
     1314        for ( i; N ) {          $\C{// 1st resume}\CRT$
    11241315                int p1 = random( 100 ), p2 = random( 100 );
    1125                 sout | p1 | " " | p2;
    11261316                int status = delivery( c, p1, p2 );
    1127                 sout | " $" | money | nl | status;
    11281317                receipt += 1;
    11291318        }
    11301319        stop( c );
    1131         sout | "prod stops";
    11321320}
    11331321int payment( Prod & prod, int money ) {
     
    11501338\begin{cfa}
    11511339`coroutine` Cons {
    1152         Prod & p;                       // communication
     1340        Prod & p;                       $\C[1.5in]{// communication}$
    11531341        int p1, p2, status;
    11541342        bool done;
    11551343};
    11561344void ?{}( Cons & cons, Prod & p ) {
    1157         &cons.p = &p; // reassignable reference
     1345        &cons.p = &p;           $\C{// reassignable reference}$
    11581346        cons.[status, done ] = [0, false];
    11591347}
    11601348void main( Cons & cons ) with( cons ) {
    1161         // 1st resume starts here
    1162         int money = 1, receipt;
     1349        int money = 1, receipt; $\C{// 1st resume}\CRT$
    11631350        for ( ; ! done; ) {
    1164                 sout | p1 | " " | p2 | nl | " $" | money;
    11651351                status += 1;
    11661352                receipt = payment( p, money );
    1167                 sout | " #" | receipt;
    11681353                money += 1;
    11691354        }
    1170         sout | "cons stops";
    11711355}
    11721356int delivery( Cons & cons, int p1, int p2 ) {
     
    11891373This example is illustrative because both producer/consumer have two interface functions with @resume@s that suspend execution in these interface (helper) functions.
    11901374The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced.
    1191 The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
    1192 @prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
    1193 
     1375The call to @start@ is the first @resume@ of @prod@, which remembers the program main as the starter, creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
     1376@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer's @delivery@ function to transfer the values, and printing the status returned from the consumer.
    11941377The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
    1195 On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
    1196 The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
    1197 The call from the consumer to @payment@ introduces the cycle between producer and consumer.
    1198 When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
    1199 The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
    1200 @delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
    1201 The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
    1202 The context switch to the consumer continues in @payment@.
    1203 The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
    1204 The loop then repeats calling @payment@, where each call resumes the producer coroutine.
     1378Similarly, on the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
     1379The symmetric coroutine cycle forms when the consumer calls the producer's @payment@ function, which resumes the producer in the consumer's delivery function.
     1380When the producer calls @delivery@ again, it resumes the consumer in the @payment@ function.
     1381Both interface functions then return to their corresponding coroutine-main functions for the next cycle.
    12051382Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling.
     1383As a consequence of a coroutine retaining its last resumer for suspending back, these reverse pointers allow @suspend@ to cycle \emph{backwards} around a symmetric coroutine cycle.
    12061384
    12071385\begin{figure}
     
    12121390\caption{Producer / consumer runtime stacks}
    12131391\label{f:ProdConsRuntimeStacks}
    1214 
    1215 \medskip
    1216 
    1217 \begin{center}
    1218 \input{FullCoroutinePhases.pstex_t}
    1219 \end{center}
    1220 \vspace*{-10pt}
    1221 \caption{Ping / Pong coroutine steps}
    1222 \label{f:PingPongFullCoroutineSteps}
    12231392\end{figure}
    12241393
    12251394Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
    1226 Furthermore, each deallocated coroutine must guarantee all destructors are run for object allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
    1227 When a coroutine's main ends, its stack is already unwound so any stack allocated objects with destructors have been finalized.
     1395Furthermore, each deallocated coroutine must execute all destructors for objects allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
     1396In the example, termination begins with the producer's loop stopping after $N$ iterations and calling the consumer's @stop@ function, which sets the @done@ flag and resumes the consumer in function @payment@, terminating both that call and the consumer's loop in its coroutine main.
     1397% (Not shown is having @prod@ raise a nonlocal @stop@ exception at @cons@ after it finishes generating values and suspend back to @cons@, which catches the @stop@ exception to terminate its loop.)
     1398When the consumer's main ends, its stack is already unwound, so any stack-allocated objects with destructors are finalized.
     1399The question now is where does control continue?
     1400
    12281401The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator.
    12291402However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent.
    12301403Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem).
    12311404Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines.
    1232 
    1233 Our solution is to context switch back to the first resumer (starter) once the coroutine ends.
     1405Our solution is to retain a coroutine's starter (first resumer), and context switch back to the starter when the coroutine ends.
     1406Hence, the consumer restarts its first resumer, @prod@, in @stop@, and when the producer ends, it restarts its first resumer, program main, in @start@ (see dashed lines from the end of the coroutine mains in Figure~\ref{f:ProdConsRuntimeStacks}).
    12341407This semantics works well for the most common asymmetric and symmetric coroutine usage patterns.
    1235 For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer.
    1236 All previous generators converted to coroutines have this property.
    1237 For symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle.
    1238 Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter.
    1239 Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
     1408For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer;
     1409for symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle.
    12401410For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination.
    12411411
    1242 The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
    1243 Assume generator @PingPong@ is converted to a coroutine.
    1244 Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version.
    1245 The program main creates (declares) coroutine instances @ping@ and @pong@.
    1246 Next, program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter.
    1247 Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times.
    1248 By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first.
    1249 If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter the program main in function @start@.
    1250 If @ping@ ends first, it resumes its starter the program main in function @start@.
    1251 Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
    1252 Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
    1253 For generators, deallocation runs the destructors for all objects in the generator type.
    1254 For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine.
    1255 Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it so the cancellation exception can propagate to the root of the coroutine's stack destroying all local variable on the stack.
    1256 So the \CFA semantics for the generator and coroutine, ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
    1257 Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.
    1258 
    1259 Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
    1260 A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
    1261 These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
    1262 However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
    1263 To prevent losing this information, a self-resume does not overwrite the last resumer.
     1412Note, the producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
     1413Assume generator @PingPong@ in Figure~\ref{f:PingPongSymmetricGenerator} is converted to a coroutine.
     1414Unlike generators, coroutines have a starter structure with multiple levels, where the program main starts @ping@ and @ping@ starts @pong@.
     1415By adjusting $N$ for either @ping@/@pong@, it is possible to have either finish first.
     1416If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter, the program main, on return;
     1417if @ping@ ends first, it resumes its starter, the program main, on return.
     1418Regardless of the cycle complexity, the starter structure always leads back to the program main, but the path can be entered at an arbitrary point.
     1419Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, running any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack.
     1420Hence, the \CFA termination semantics for generators and coroutines ensure correct deallocation, regardless of the coroutine's state (terminated or active), like any other aggregate object.
    12641421
    12651422
     
    12921449Users wanting to extend custom types or build their own can only do so in ways offered by the language.
    12931450Furthermore, implementing custom types without language support demonstrates the expressive power of a programming language.
    1294 \CFA blends the two approaches, providing custom type for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with builtin and library.
     1451\CFA blends the two approaches, providing custom types for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with builtin and library (@java.util.concurrent@) monitors.
    12951452
    12961453Part of the mechanism to generalize custom types is the \CFA trait~\cite[\S~2.3]{Moss18}, \eg the definition for custom-type @coroutine@ is anything satisfying the trait @is_coroutine@, and this trait both enforces and restricts the coroutine-interface functions.
     
    13021459forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & );
    13031460\end{cfa}
    1304 Note, copying generators/coroutines/threads is not meaningful.
    1305 For example, both the resumer and suspender descriptors can have bidirectional pointers;
    1306 copying these coroutines does not update the internal pointers so behaviour of both copies would be difficult to understand.
    1307 Furthermore, two coroutines cannot logically execute on the same stack.
    1308 A deep coroutine copy, which copies the stack, is also meaningless in an unmanaged language (no garbage collection), like C, because the stack may contain pointers to object within it that require updating for the copy.
     1461Note, copying generators/coroutines/threads is undefined because multiple objects cannot execute on a shared stack, and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy.
    13091462The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference (pointer).
    13101463The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the coroutine descriptor from its handle.
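For example, a minimal coroutine satisfying the trait is sketched below (type @Counter@ and interface @next@ are illustrative):
\begin{cfa}
coroutine Counter { int n; }; // custom type satisfying is_coroutine
void main( Counter & c ) with( c ) { // required statically typed starting point
	for ( n = 0;; n += 1 ) suspend; // runs on the coroutine stack
}
int next( Counter & c ) { return resume( c ).n; } // type-safe interface function
\end{cfa}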
     
    13501503The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization.
    13511504
    1352 Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar).
     1505Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a thread is similar).
    13531506The coroutine handle is the @coroutine@ instance containing the programmer-specified type-global/communication variables used across the interface functions.
    13541507The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate.
    13551508The coroutine stack can appear in a number of locations and be fixed or variable sized.
    1356 Hence, the coroutine's stack could be a VLS\footnote{
    1357 We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.
     1509Hence, the coroutine's stack could be a variable-length structure (VLS)\footnote{
     1510We are examining VLSs, where fields can be variable-sized structures or arrays.
    13581511Once allocated, a VLS is fixed sized.}
    13591512on the allocating stack, provided the allocating stack is large enough.
    13601513For a VLS, stack allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup).
    1361 For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs.
    1362 With heap stack allocation, it is also possible to use a split (segmented) stack calling convention, available with gcc and clang, so the stack is variable sized.
     1514For stack allocation in the heap, allocation/deallocation is an expensive heap operation, where the heap can be a shared resource, modulo any stack constructor costs.
     1515It is also possible to use a split (segmented) stack calling convention, available with gcc and clang, allowing a variable-sized stack via a set of connected blocks in the heap.
    13631516Currently, \CFA supports stack/heap allocated descriptors but only fixed-sized heap allocated stacks.
    13641517In \CFA debug-mode, the fixed-sized stack is terminated with a write-only page, which catches most stack overflows.
    13651518Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students.
    1366 Split-stack allocation is under development but requires recompilation of legacy code, which may be impossible.
     1519Split-stack allocation is under development but requires recompilation of legacy code, which is not always possible.
    13671520
    13681521\begin{figure}
     
    13781531
    13791532Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack.
    1380 A single thread with multiple call stacks, \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
    1381 In coroutining, coroutines self-schedule the thread across stacks so execution is deterministic.
     1533A single thread with multiple stacks, \ie coroutining, does \emph{not} imply concurrency~\cite[\S~3]{Buhr05a}.
     1534Coroutining self-schedules the thread across stacks, so execution is deterministic.
    13821535(It is \emph{impossible} to generate a concurrency error when coroutining.)
    1383 However, coroutines are a stepping stone towards concurrency.
    1384 
    1385 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a}.
     1536
     1537The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
    13861538Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}.
    1387 The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}.
    1388 Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions often based on a timer interrupt, called \newterm{preemptive scheduling}.
    1389 While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches.
     1539The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill} because context-switching points to the scheduler (blocking) are known, but the next unblocking point is unknown due to the scheduler.
     1540Adding \newterm{preemption} introduces \newterm{non-cooperative} or \newterm{preemptive} scheduling, where context-switching points to the scheduler are unknown as they can occur randomly between any two instructions, often based on a timer interrupt.
    13901541Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors.
    13911542The reason is that the scheduler/runtime have complete knowledge about resources and how to best utilize them.
    1392 However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}, which restrict nondeterminism for correctness;
     1543However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}~\cite[\S~4]{Buhr05a}, which restrict nondeterminism for correctness;
    13931544otherwise, it is impossible to write meaningful concurrent programs.
    13941545Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow.
    13951546
    1396 A scheduler can either be a stackless or stackful.
     1547A scheduler can also be stackless or stackful.
    13971548For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
    13981549For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
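The difference is sketched in the following pseudo-code (all names are hypothetical):
\begin{cfa}
void yield_stackless( Coroutine & curr ) {
	Coroutine & next = pick_next(); // scheduling decision made on curr's stack
	context_switch( curr, next ); // one switch: curr directly to next
}
void yield_stackful( Coroutine & curr ) {
	context_switch( curr, sched ); // first switch: curr to the scheduler stack
	// scheduler calls pick_next() and performs the second switch to next
}
\end{cfa}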
     
    14031554\label{s:threads}
    14041555
    1405 Threading needs the ability to start a thread and wait for its completion.
     1556Threading (Table~\ref{t:ExecutionPropertyComposition} case 11) needs the ability to start a thread and wait for its completion.
    14061557A common API for this ability is @fork@ and @join@.
    1407 \begin{cquote}
    1408 \begin{tabular}{@{}lll@{}}
    1409 \multicolumn{1}{c}{\textbf{Java}} & \multicolumn{1}{c}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\
    1410 \begin{cfa}
    1411 class MyTask extends Thread {...}
    1412 mytask t = new MyTask(...);
     1558\vspace{4pt}
     1559\par\noindent
     1560\begin{tabular}{@{}l|l|l@{}}
     1561\multicolumn{1}{c|}{\textbf{Java}} & \multicolumn{1}{c|}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\
     1562\hline
     1563\begin{cfa}
     1564class MyThread extends Thread {...}
     1565MyThread t = new MyThread(...);
    14131566`t.start();` // start
    14141567// concurrency
     
    14171570&
    14181571\begin{cfa}
    1419 class MyTask { ... } // functor
    1420 MyTask mytask;
    1421 `thread t( mytask, ... );` // start
     1572class MyThread { ... } // functor
     1573MyThread mythread;
     1574`thread t( mythread, ... );` // start
    14221575// concurrency
    14231576`t.join();` // wait
     
    14321585\end{cfa}
    14331586\end{tabular}
    1434 \end{cquote}
     1587\vspace{1pt}
     1588\par\noindent
    14351589\CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics (allocation/deallocation), where threads implicitly @fork@ after construction and @join@ before destruction.
    14361590\begin{cfa}
    1437 thread MyTask {};
    1438 void main( MyTask & this ) { ... }
     1591thread MyThread {};
     1592void main( MyThread & this ) { ... }
    14391593int main() {
    1440         MyTask team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$
     1594        MyThread team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$
    14411595        // concurrency
    14421596} $\C{// deallocate stack-based threads, implicit joins before destruction}$
     
    14461600Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation.
    14471601\begin{cfa}
    1448 MyTask * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$
     1602MyThread * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$
    14491603int main() {
    1450         MyTask * team = factory( 10 );
     1604        MyThread * team = factory( 10 );
    14511605        // concurrency
    14521606        `delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$
     
    14941648
     14951649Threads in \CFA are user level, run by runtime kernel threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism.
    1496 Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the task-interface functions.
     1650Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the thread-interface functions.
    14971651\begin{cquote}
    14981652\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
     
    15251679\label{s:MutualExclusionSynchronization}
    15261680
    1527 Unrestricted nondeterminism is meaningless as there is no way to know when the result is completed without synchronization.
     1681Unrestricted nondeterminism is meaningless as there is no way to know when a result is completed and safe to access.
    15281682To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
    1529 Some concurrent systems eliminate mutable shared-state by switching to stateless communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP,Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
      1683The shared data protected by mutual exclusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple (only 1 thread) or complex (only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71}).
      1684Without synchronization control in a critical section, an arriving thread can barge ahead of preexisting waiter threads, resulting in short/long-term starvation, staleness/freshness problems, and/or incorrect transfer of data.
      1685Preventing or detecting barging is a challenge with low-level locks, but is made easier through higher-level constructs.
     1686This challenge is often split into two different approaches: barging \emph{avoidance} and \emph{prevention}.
      1687Approaches that unconditionally release a lock for competing threads to acquire must use barging avoidance, with flag/counter variable(s) forcing barging threads to wait;
     1688approaches that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.
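To make barging avoidance concrete, the following is a minimal sketch only (the monitor @Q@, flag @inUse@, and condition @waiters@ are hypothetical), written in \CFA syntax but assuming signals-as-hints semantics where barging is possible; the flag and recheck loop force any barger to wait its turn.
\begin{cfa}
monitor Q { bool inUse; condition waiters; };
void acquire( Q & mutex q ) with( q ) {
	while ( inUse ) wait( waiters ); // recheck: a barger may have seized the resource
	inUse = true; // flag forces subsequent bargers to wait
}
void release( Q & mutex q ) with( q ) {
	inUse = false;
	signal( waiters ); // hint only: the unblocked thread rechecks the flag
}
\end{cfa}
Baton-passing prevention instead keeps the resource logically held across the signal, so no flag or recheck loop is needed.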
     1689
     1690At the lowest level, concurrent control is provided by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg spin locks, semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
     1691However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
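For example, the following minimal sketch (the type and function names are illustrative) builds a spin lock directly from a C11 test-and-set atomic, which \CFA inherits as a superset of C.
\begin{cfa}
#include <stdatomic.h>
struct Spinlock { atomic_flag taken; }; // initialize with ATOMIC_FLAG_INIT
void lock( Spinlock & l ) {
	while ( atomic_flag_test_and_set( &l.taken ) ); // spin until prior value is clear
}
void unlock( Spinlock & l ) {
	atomic_flag_clear( &l.taken ); // next test-and-set now succeeds
}
\end{cfa}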
     1692A significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
     1693Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
     1694Some concurrent systems eliminate mutable shared-state by switching to non-shared communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP,Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
    15301695However, these approaches introduce a new communication mechanism for concurrency different from the standard communication using function call/return.
    15311696Hence, a programmer must learn and manipulate two sets of design/programming patterns.
    15321697While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
    1533 In contrast, approaches based on stateful models more closely resemble the standard call/return programming model, resulting in a single programming paradigm.
    1534 
    1535 At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
    1536 However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
    1537 A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
    1538 While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it is rejected as the core paradigm for concurrency in \CFA.
    1539 
    1540 One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
    1541 First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
    1542 In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to simulate monitors.
    1543 For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed.
    1544 
    1545 
    1546 \subsection{Mutual Exclusion}
    1547 
    1548 A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a \newterm{critical section}~\cite{Dijkstra65}, which is enforced by \newterm{simple mutual-exclusion}.
    1549 The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session use the resource simultaneously and different sessions are segregated, which is enforced by \newterm{complex mutual-exclusion} providing the correct kind and number of threads using a group critical-section.
    1550 The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers share a session but writers have a unique session.
    1551 
    1552 However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
    1553 Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
    1554 Ease of use comes by either guaranteeing some problems cannot occur, \eg deadlock free, or by offering a more explicit coupling between shared data and critical section.
    1555 For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations, \eg reading/writing, for numerical types.
    1556 However, a significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
    1557 Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
    1558 
    1559 
    1560 \subsection{Synchronization}
    1561 
    1562 Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
    1563 Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use;
    1564 higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, \eg receive-specific versus receive-any thread in message passing or offering specialized solutions, \eg barrier lock.
    1565 Often synchronization is used to order access to a critical section, \eg ensuring a waiting writer thread enters the critical section before a calling reader thread.
    1566 If the calling reader is scheduled before the waiting writer, the reader has barged.
    1567 Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value preventing the previous value from ever being read (lost computation).
    1568 Preventing or detecting barging is an involved challenge with low-level locks, which is made easier through higher-level constructs.
    1569 This challenge is often split into two different approaches: barging avoidance and prevention.
    1570 Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait;
    1571 algorithms that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.
     1698In contrast, approaches based on shared-state models more closely resemble the standard call/return programming model, resulting in a single programming paradigm.
     1699Finally, a newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
      1700While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set are still too restrictive~\cite{Cascaval08,Boehm09} to be the main concurrency paradigm for system languages.
    15721701
    15731702
     
    15751704\label{s:Monitor}
    15761705
    1577 A \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state.
    1578 More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
      1706One of the most natural, elegant, and efficient high-level mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor} (Table~\ref{t:ExecutionPropertyComposition} case 2).
     1707First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
     1708In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.
     1709For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed.
     1710
     1711Specifically, a \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state.
     1712More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope by call/return, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
    15791713Restricting acquire/release points eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
    15801714\CFA uses a custom @monitor@ type and leverages declaration semantics (deallocation) to protect active or waiting threads in a monitor.
    15811715
    15821716The following is a \CFA monitor implementation of an atomic counter.
    1583 \begin{cfa}[morekeywords=nomutex]
     1717\begin{cfa}
    15841718`monitor` Aint { int cnt; }; $\C[4.25in]{// atomic integer counter}$
    1585 int ++?( Aint & `mutex`$\(_{opt}\)$ this ) with( this ) { return ++cnt; } $\C{// increment}$
    1586 int ?=?( Aint & `mutex`$\(_{opt}\)$ lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions with int}\CRT$
    1587 int ?=?( int & lhs, Aint & `mutex`$\(_{opt}\)$ rhs ) with( rhs ) { lhs = cnt; }
    1588 \end{cfa}
    1589 % The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized.
    1590 % (While a constructor may publish its address into a global variable, doing so generates a race-condition.)
    1591 The prefix increment operation, @++?@, is normally @mutex@, indicating mutual exclusion is necessary during function execution, to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type.
    1592 The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@;
    1593 these operations only need @mutex@, if reading/writing the implementation type is not atomic.
    1594 The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@.
     1719int ++?( Aint & `mutex` this ) with( this ) { return ++cnt; } $\C{// increment}$
     1720int ?=?( Aint & `mutex` lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions with int, mutex optional}\CRT$
     1721int ?=?( int & lhs, Aint & `mutex` rhs ) with( rhs ) { lhs = cnt; }
     1722\end{cfa}
     1723The operators use the parameter-only declaration type-qualifier @mutex@ to mark which parameters require locking during function execution to protect from race conditions.
     1724The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@.
      1725(These operations only need @mutex@ if reading/writing the implementation type is not atomic.)
     1726The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics.
    15951727\begin{cfa}
    15961728int i = 0, j = 0, k = 5;
     
    16001732i = x; j = y; k = z;
    16011733\end{cfa}
     1734Note, like other concurrent programming languages, \CFA has specializations for the basic types using atomic instructions for performance and a general trait similar to the \CC template @std::atomic@.
    16021735
    16031736\CFA monitors have \newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other interface functions.
     1737\newpage
    16041738\begin{cfa}
    16051739monitor M { ... } m;
     
    16101744\end{cfa}
    16111745\CFA monitors also ensure the monitor lock is released regardless of how an acquiring function ends (normal or exceptional), and returning a shared variable is safe via copying before the lock is released.
    1612 Similar safety is offered by \emph{explicit} mechanisms like \CC RAII;
    1613 monitor \emph{implicit} safety ensures no programmer usage errors.
      1746Similar safety is offered by \emph{explicit} opt-in disciplines like \CC RAII, whereas the monitor's \emph{implicit} language-enforced guarantee precludes programmer usage errors.
    16141747Furthermore, RAII mechanisms cannot handle complex synchronization within a monitor, where the monitor lock may not be released on function exit because it is passed to an unblocking thread;
    16151748RAII is purely a mutual-exclusion mechanism (see Section~\ref{s:Scheduling}).
     
    16371770\end{cquote}
    16381771The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference (pointer).
    1639 % Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data.
    1640 % Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity.
     16411772Similarly, the function definitions ensure there is a mechanism to get (read) the monitor descriptor from its handle, and a special destructor to prevent deallocation while a thread is using the shared data.
    16421773The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics.
     
    16501781For example, a monitor may be passed through multiple helper functions before it is necessary to acquire the monitor's mutual exclusion.
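A minimal sketch of this deferral (the names @Mon@, @helper@, and @update@ are hypothetical):
\begin{cfa}
monitor Mon { int cnt; };
void update( Mon & mutex m ) with( m ) { cnt += 1; } // acquires the monitor
void helper( Mon & m ) { // no mutex qualifier: monitor passed without locking
	update( m ); // mutual exclusion acquired only here
}
\end{cfa}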
    16511782
    1652 The benefit of mandatory monitor qualifiers is self-documentation, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
    1653 Instead, the semantics has one qualifier as the default and the other required.
    1654 For example, make the safe @mutex@ qualifier the default because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
    1655 Alternatively, make the unsafe \lstinline[morekeywords=nomutex]@nomutex@ qualifier the default because it is the \emph{normal} parameter semantics while @mutex@ parameters are rare.
    1656 Providing a default qualifier implies knowing whether a parameter is a monitor.
    1657 Since \CFA relies heavily on traits as an abstraction mechanism, types can coincidentally match the monitor trait but not be a monitor, similar to inheritance where a shape and playing card can both be drawable.
    1658 For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@.
      1783\CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@, because @mutex@ parameters are rare and the absence of a keyword denotes the \emph{normal} parameter semantics.
      1784Hence, @mutex@ parameters serve as documentation, at both the function and its prototype, for programmer and compiler alike, without redundant keywords.
      1785Furthermore, \CFA relies heavily on traits as an abstraction mechanism, so the @mutex@ qualifier prevents coincidental matching of the monitor trait by a type that is not a monitor, similar to coincidental inheritance where a shape and a playing card are both drawable.
    16591786
    16601787The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@.
     
    16701797Function @f3@ has a multiple object matrix, and @f4@ a multiple object data structure.
     16711798As shown shortly, multiple-object acquisition is possible, but the number of objects must be statically known.
    1672 Therefore, \CFA only acquires one monitor per parameter with at most one level of indirection, excluding pointers as it is impossible to statically determine the size.
      1799Therefore, \CFA only acquires one monitor per parameter with exactly one level of indirection, and excludes pointer types to unknown-sized arrays.
    16731800
    16741801For object-oriented monitors, \eg Java, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
     
    16771804While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.}
    16781805called \newterm{bulk acquire}.
    1679 \CFA guarantees acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors is safe from deadlock.
     1806\CFA guarantees bulk acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors in a bulk acquire is safe from deadlock.
    16801807Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking.
    16811808A \CFA programmer only has to manage when to acquire mutual exclusion;
     
    16971824void transfer( BankAccount & `mutex` my,
    16981825        BankAccount & `mutex` your, int me2you ) {
    1699 
     1826        // bulk acquire
    17001827        deposit( my, -me2you ); // debit
    17011828        deposit( your, me2you ); // credit
     
    17271854void transfer( BankAccount & my,
    17281855                        BankAccount & your, int me2you ) {
    1729         `scoped_lock lock( my.m, your.m );`
     1856        `scoped_lock lock( my.m, your.m );` // bulk acquire
    17301857        deposit( my, -me2you ); // debit
    17311858        deposit( your, me2you ); // credit
     
    17551882\end{figure}
    17561883
    1757 Users can still force the acquiring order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@.
     1884Users can still force the acquiring order by using or not using @mutex@.
    17581885\begin{cfa}
    17591886void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$
    1760 void bar( M & mutex m1, M & /* nomutex */ m2 ) { $\C{// acquire m1}$
     1887void bar( M & mutex m1, M & m2 ) { $\C{// only acquire m1}$
    17611888        ... foo( m1, m2 ); ... $\C{// acquire m2}$
    17621889}
    1763 void baz( M & /* nomutex */ m1, M & mutex m2 ) { $\C{// acquire m2}$
     1890void baz( M & m1, M & mutex m2 ) { $\C{// only acquire m2}$
    17641891        ... foo( m1, m2 ); ... $\C{// acquire m1}$
    17651892}
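// Note (illustrative): mixing these orders is the classic lock-ordering problem;
// if one thread calls bar( m1, m2 ) while another calls baz( m1, m2 ), the nested
// acquisitions occur in opposite orders (m1 then m2 versus m2 then m1), so the
// two threads can deadlock.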
     
    18041931% There are many aspects of scheduling in a concurrency system, all related to resource utilization by waiting threads, \ie which thread gets the resource next.
    18051932% Different forms of scheduling include access to processors by threads (see Section~\ref{s:RuntimeStructureCluster}), another is access to a shared resource by a lock or monitor.
    1806 This section discusses monitor scheduling for waiting threads eligible for entry, \ie which thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling threads on virtual processors.)
    1807 While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer may be full/empty so produce/consumer threads must block.
    1808 Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
    1809 Monitors eliminate busy waiting by providing synchronization to schedule threads needing access to the shared data, where threads block versus spinning.
     1933This section discusses scheduling for waiting threads eligible for monitor entry, \ie which user thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling kernel threads on virtual processors.)
      1934While monitor mutual-exclusion provides safe access to its shared data, the data may indicate a thread cannot proceed, \eg a bounded buffer may be full/\-empty so producer/consumer threads must block.
     1935Leaving the monitor and retrying (busy waiting) is impractical for high-level programming.
     1936
     1937Monitors eliminate busy waiting by providing synchronization within the monitor critical-section to schedule threads needing access to the shared data, where threads block versus spin.
    18101938Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling.
    1811 \newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
    1812 Finally, \CFA monitors do not allow calling threads to barge ahead of signalled threads, which simplifies synchronization among threads in the monitor and increases correctness.
    1813 If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring additional flags and multiple unblock/block cycles.
    1814 In fact, signals-as-hints is completely opposite from that proposed by Hoare in the seminal paper on monitors~\cite[p.~550]{Hoare74}.
     1939\newterm{Internal} (largely) schedules threads located \emph{inside} the monitor and is accomplished using condition variables with signal and wait.
     1940\newterm{External} (largely) schedules threads located \emph{outside} the monitor and is accomplished with the @waitfor@ statement.
      1941Note, internal scheduling has a small amount of external scheduling and vice versa, so the naming denotes where the majority of the blocked threads reside (inside or outside) for scheduling.
     1942For complex scheduling, the approaches can be combined, so there can be an equal number of threads waiting inside and outside.
     1943
     1944\CFA monitors do not allow calling threads to barge ahead of signalled threads (via barging prevention), which simplifies synchronization among threads in the monitor and increases correctness.
     1945A direct consequence of this semantics is that unblocked waiting threads are not required to recheck the waiting condition, \ie waits are not in a starvation-prone busy-loop as required by the signals-as-hints style with barging.
     1946Preventing barging comes directly from Hoare's semantics in the seminal paper on monitors~\cite[p.~550]{Hoare74}.
    18151947% \begin{cquote}
    18161948% However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
    18171949% It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
    18181950% \end{cquote}
    1819 Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging.
    1820 Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, which can cause thread starvation due to barging.
    1821 
    1822 Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}).
    1823 External calling threads block on the calling queue, if the monitor is occupied, otherwise they enter in FIFO order.
    1824 Internal threads block on condition queues via @wait@ and reenter from the condition in FIFO order.
    1825 Alternatively, internal threads block on urgent from the @signal_block@ or @waitfor@, and reenter implicitly when the monitor becomes empty, \ie, the thread in the monitor exits or waits.
    1826 
    1827 There are three signalling mechanisms to unblock waiting threads to enter the monitor.
    1828 Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion, so either the signaller or signallee can proceed.
    1829 For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line).
    1830 Multiple signals move multiple signallees to urgent until the condition is empty.
    1831 When the signaller exits or waits, a thread blocked on urgent is processed before calling threads to prevent barging.
      1951Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self-barging.
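The difference is visible at each @wait@; using the buffer test from Figure~\ref{f:GenericBoundedBuffer} as an illustration:
\begin{cfa}
while ( count == 10 ) wait( empty ); // signals-as-hints: must recheck after unblocking
if ( count == 10 ) wait( empty ); // no barging/spurious wakeup: a single test suffices
\end{cfa}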
     1952
     1953Monitor mutual-exclusion means signalling cannot have the signaller and signalled thread in the monitor simultaneously, so only the signaller or signallee can proceed.
     1954Figure~\ref{f:MonitorScheduling} shows internal/external scheduling for the bounded-buffer examples in Figure~\ref{f:GenericBoundedBuffer}.
     1955For internal scheduling in Figure~\ref{f:BBInt}, the @signal@ moves the signallee (front thread of the specified condition queue) to urgent and the signaller continues (solid line).
     1956Multiple signals move multiple signallees to urgent until the condition queue is empty.
     1957When the signaller exits or waits, a thread is implicitly unblocked from urgent (if available) before unblocking a calling thread to prevent barging.
    18321958(Java conceptually moves the signalled thread to the calling queue, and hence, allows barging.)
    1833 The alternative unblock is in the opposite order using @signal_block@, where the signaller is moved to urgent and the signallee continues (dashed line), and is implicitly unblocked from urgent when the signallee exits or waits.
    1834 
    1835 For external scheduling, the condition queues are not used;
    1836 instead threads are unblocked directly from the calling queue using @waitfor@ based on function names requesting mutual exclusion.
    1837 (The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.)
    1838 The @waitfor@ has the same semantics as @signal_block@, where the signalled thread executes before the signallee, which waits on urgent.
    1839 Executing multiple @waitfor@s from different signalled functions causes the calling threads to move to urgent.
    1840 External scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited.
    1841 Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling.
    1842 If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread.
    1843 We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next.
    1844 Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling.
      1959Signal is used when the signaller provides the cooperation needed by the signallee (\eg creating an empty slot in a buffer for a producer); the signaller immediately exits the monitor to run concurrently (consuming the buffer element) and passes control of the monitor to the signalled thread, which can immediately take advantage of the state change.
     1960Specifically, the @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.
     1961Signalling is unconditional because signalling an empty condition queue does nothing.
     1962It is common to declare condition queues as monitor fields to prevent shared access, hence no locking is required for access as the queues are protected by the monitor lock.
     1963In \CFA, a condition queue can be created/stored independently.
    18451964
    18461965\begin{figure}
     
    18601979\end{figure}
    18611980
    1862 Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@.
    1863 The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.
    1864 The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
    1865 Signalling is unconditional, because signalling an empty condition variable does nothing.
    1866 It is common to declare condition variables as monitor fields to prevent shared access, hence no locking is required for access as the conditions are protected by the monitor lock.
    1867 In \CFA, a condition variable can be created/stored independently.
    1868 % To still prevent expensive locking on access, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, resulting in a low-cost boolean test to detect sharing from other monitors.
    1869 
    1870 % Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means:
    1871 % \begin{enumerate}
    1872 % \item
    1873 % The signalling thread returns immediately and the signalled thread continues.
    1874 % \item
    1875 % The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait).
    1876 % \item
    1877 % The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.
    1878 % \end{enumerate}
    1879 % The first approach is too restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}).
    1880 % \CFA supports the next two semantics as both are useful.
    1881 
    18821981\begin{figure}
    18831982\centering
     
    18911990                T elements[10];
    18921991        };
    1893         void ?{}( Buffer(T) & buffer ) with(buffer) {
     1992        void ?{}( Buffer(T) & buf ) with(buf) {
    18941993                front = back = count = 0;
    18951994        }
    1896         void insert( Buffer(T) & mutex buffer, T elem )
    1897                                 with(buffer) {
    1898                 if ( count == 10 ) `wait( empty )`;
    1899                 // insert elem into buffer
     1995
     1996        void insert(Buffer(T) & mutex buf, T elm) with(buf){
     1997                if ( count == 10 ) `wait( empty )`; // full ?
     1998                // insert elm into buf
    19001999                `signal( full )`;
    19012000        }
    1902         T remove( Buffer(T) & mutex buffer ) with(buffer) {
    1903                 if ( count == 0 ) `wait( full )`;
    1904                 // remove elem from buffer
     2001        T remove( Buffer(T) & mutex buf ) with(buf) {
     2002                if ( count == 0 ) `wait( full )`; // empty ?
     2003                // remove elm from buf
    19052004                `signal( empty )`;
    1906                 return elem;
     2005                return elm;
    19072006        }
    19082007}
    19092008\end{cfa}
    19102009\end{lrbox}
    1911 
    1912 % \newbox\myboxB
    1913 % \begin{lrbox}{\myboxB}
    1914 % \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1915 % forall( otype T ) { // distribute forall
    1916 %       monitor Buffer {
    1917 %
    1918 %               int front, back, count;
    1919 %               T elements[10];
    1920 %       };
    1921 %       void ?{}( Buffer(T) & buffer ) with(buffer) {
    1922 %               [front, back, count] = 0;
    1923 %       }
    1924 %       T remove( Buffer(T) & mutex buffer ); // forward
    1925 %       void insert( Buffer(T) & mutex buffer, T elem )
    1926 %                               with(buffer) {
    1927 %               if ( count == 10 ) `waitfor( remove, buffer )`;
    1928 %               // insert elem into buffer
    1929 %
    1930 %       }
    1931 %       T remove( Buffer(T) & mutex buffer ) with(buffer) {
    1932 %               if ( count == 0 ) `waitfor( insert, buffer )`;
    1933 %               // remove elem from buffer
    1934 %
    1935 %               return elem;
    1936 %       }
    1937 % }
    1938 % \end{cfa}
    1939 % \end{lrbox}
    19402010
    19412011\newbox\myboxB
    19422012\begin{lrbox}{\myboxB}
    19432013\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2014forall( otype T ) { // distribute forall
     2015        monitor Buffer {
     2016
     2017                int front, back, count;
     2018                T elements[10];
     2019        };
     2020        void ?{}( Buffer(T) & buf ) with(buf) {
     2021                front = back = count = 0;
     2022        }
     2023        T remove( Buffer(T) & mutex buf ); // forward
     2024        void insert(Buffer(T) & mutex buf, T elm) with(buf){
     2025                if ( count == 10 ) `waitfor( remove : buf )`;
     2026                // insert elm into buf
     2027
     2028        }
     2029        T remove( Buffer(T) & mutex buf ) with(buf) {
     2030                if ( count == 0 ) `waitfor( insert : buf )`;
     2031                // remove elm from buf
     2032
     2033                return elm;
     2034        }
     2035}
     2036\end{cfa}
     2037\end{lrbox}
     2038
     2039\subfloat[Internal scheduling]{\label{f:BBInt}\usebox\myboxA}
     2040\hspace{1pt}
     2041\vrule
     2042\hspace{3pt}
     2043\subfloat[External scheduling]{\label{f:BBExt}\usebox\myboxB}
     2044
     2045\caption{Generic bounded buffer}
     2046\label{f:GenericBoundedBuffer}
     2047\end{figure}
     2048
      2049The @signal_block@ provides the opposite unblocking order: the signaller is moved to urgent and the signallee continues (dashed line), and a thread is implicitly unblocked from urgent when the signallee exits or waits.
      2050Signal block is used when the signallee provides the cooperation needed by the signaller (\eg when the buffer is eliminated and a producer hands an item directly to a consumer, as in Figure~\ref{f:DatingSignalBlock}); the signaller must wait until the signallee unblocks, provides the cooperation, exits the monitor to run concurrently, and passes control of the monitor back to the signaller, which can immediately take advantage of the state change.
     2051Using @signal@ or @signal_block@ can be a dynamic decision based on whether the thread providing the cooperation arrives before or after the thread needing the cooperation.
     2052
     2053External scheduling in Figure~\ref{f:BBExt} simplifies internal scheduling by eliminating condition queues and @signal@/@wait@ (cases where it cannot are discussed shortly), and has existed in the programming language Ada for almost 40 years with variants in other languages~\cite{SR,ConcurrentC++,uC++}.
     2054While prior languages use external scheduling solely for thread interaction, \CFA generalizes it to both monitors and threads.
      2055External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on condition queues in the monitor.
     2056Scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion.
     2057Specifically, a thread calling the monitor is unblocked directly from the calling queue based on function names that can fulfill the cooperation required by the signaller.
     2058(The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.)
     2059Hence, the @waitfor@ has the same semantics as @signal_block@, where the signallee thread from the calling queue executes before the signaller, which waits on urgent.
     2060Now when a producer/consumer detects a full/empty buffer, the necessary cooperation for continuation is specified by indicating the next function call that can occur.
      2061For example, a producer detecting a full buffer must have cooperation from a consumer to remove an item, so function @remove@ is accepted, which prevents producers from entering the monitor; after a consumer calls @remove@, the producer waiting on urgent is \emph{implicitly} unblocked because it can now continue its insert operation.
      2062Hence, this mechanism is expressed in terms of control flow (the next call), versus in terms of data (channels), as in the Go/Rust @select@.
     2063While both mechanisms have strengths and weaknesses, \CFA uses the control-flow mechanism to be consistent with other language features.
     2064
      2065Figure~\ref{f:ReadersWriterLock} shows internal/external scheduling for a readers/writer lock with no barging, where threads are serviced in FIFO order to eliminate staleness/freshness problems among the reader/writer threads.
     2066For internal scheduling in Figure~\ref{f:RWInt}, the readers and writers wait on the same condition queue in FIFO order, making it impossible to tell if a waiting thread is a reader or writer.
      2067To claw back the kind of thread, a \CFA condition can store user data in the node for a blocking thread at the @wait@, \ie whether the thread is a @READER@ or @WRITER@.
      2068An unblocked reader thread checks if the thread at the front of the queue is a reader and unblocks it, \ie the readers daisy-chain signal the next group of readers, demarcated by the next writer or the end of the queue.
     2069For external scheduling in Figure~\ref{f:RWExt}, a waiting reader checks if a writer is using the resource, and if so, restricts further calls until the writer exits by calling @EndWrite@.
     2070The writer does a similar action for each reader or writer using the resource.
     2071Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.
     2072
     2073\begin{figure}
     2074\centering
     2075\newbox\myboxA
     2076\begin{lrbox}{\myboxA}
     2077\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2078enum RW { READER, WRITER };
    19442079monitor ReadersWriter {
    1945         int rcnt, wcnt; // readers/writer using resource
     2080        int rcnt, wcnt; // readers/writer using resource
     2081        `condition RWers;`
    19462082};
    19472083void ?{}( ReadersWriter & rw ) with(rw) {
     
    19502086void EndRead( ReadersWriter & mutex rw ) with(rw) {
    19512087        rcnt -= 1;
     2088        if ( rcnt == 0 ) `signal( RWers )`;
    19522089}
    19532090void EndWrite( ReadersWriter & mutex rw ) with(rw) {
    19542091        wcnt = 0;
     2092        `signal( RWers );`
    19552093}
    19562094void StartRead( ReadersWriter & mutex rw ) with(rw) {
    1957         if ( wcnt > 0 ) `waitfor( EndWrite, rw );`
      2095        if ( wcnt != 0 || ! empty( RWers ) )
     2096                `wait( RWers, READER )`;
    19582097        rcnt += 1;
     2098        if ( ! empty(RWers) && `front(RWers) == READER` )
     2099                `signal( RWers )`;  // daisy-chain signalling
    19592100}
    19602101void StartWrite( ReadersWriter & mutex rw ) with(rw) {
    1961         if ( wcnt > 0 ) `waitfor( EndWrite, rw );`
    1962         else while ( rcnt > 0 ) `waitfor( EndRead, rw );`
     2102        if ( wcnt != 0 || rcnt != 0 ) `wait( RWers, WRITER )`;
     2103
    19632104        wcnt = 1;
    19642105}
    1965 
    19662106\end{cfa}
    19672107\end{lrbox}
    19682108
    1969 \subfloat[Generic bounded buffer, internal scheduling]{\label{f:BBInt}\usebox\myboxA}
    1970 \hspace{3pt}
     2109\newbox\myboxB
     2110\begin{lrbox}{\myboxB}
     2111\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2112
     2113monitor ReadersWriter {
     2114        int rcnt, wcnt; // readers/writer using resource
     2115
     2116};
     2117void ?{}( ReadersWriter & rw ) with(rw) {
     2118        rcnt = wcnt = 0;
     2119}
     2120void EndRead( ReadersWriter & mutex rw ) with(rw) {
     2121        rcnt -= 1;
     2122
     2123}
     2124void EndWrite( ReadersWriter & mutex rw ) with(rw) {
     2125        wcnt = 0;
     2126
     2127}
     2128void StartRead( ReadersWriter & mutex rw ) with(rw) {
     2129        if ( wcnt > 0 ) `waitfor( EndWrite : rw );`
     2130
     2131        rcnt += 1;
     2132
     2133
     2134}
     2135void StartWrite( ReadersWriter & mutex rw ) with(rw) {
     2136        if ( wcnt > 0 ) `waitfor( EndWrite : rw );`
     2137        else while ( rcnt > 0 ) `waitfor( EndRead : rw );`
     2138        wcnt = 1;
     2139}
     2140\end{cfa}
     2141\end{lrbox}
     2142
     2143\subfloat[Internal scheduling]{\label{f:RWInt}\usebox\myboxA}
     2144\hspace{1pt}
    19712145\vrule
    19722146\hspace{3pt}
    1973 \subfloat[Readers / writer lock, external scheduling]{\label{f:RWExt}\usebox\myboxB}
    1974 
    1975 \caption{Internal / external scheduling}
    1976 \label{f:InternalExternalScheduling}
     2147\subfloat[External scheduling]{\label{f:RWExt}\usebox\myboxB}
     2148
     2149\caption{Readers / writer lock}
     2150\label{f:ReadersWriterLock}
    19772151\end{figure}
    19782152
    1979 Figure~\ref{f:BBInt} can be transformed into external scheduling by removing the condition variables and signals/waits, and adding the following lines at the locations of the current @wait@s in @insert@/@remove@, respectively.
    1980 \begin{cfa}[aboveskip=2pt,belowskip=1pt]
    1981 if ( count == 10 ) `waitfor( remove, buffer )`;       |      if ( count == 0 ) `waitfor( insert, buffer )`;
    1982 \end{cfa}
    1983 Here, the producers/consumers detects a full/\-empty buffer and prevents more producers/consumers from entering the monitor until there is a free/empty slot in the buffer.
    1984 External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion.
    1985 If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
    1986 Threads calling excluded functions block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor.
    1987 Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@.
    1988 The writer does a similar action for each reader or writer using the resource.
    1989 Note, no new calls to @StarRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.
    1990 External scheduling allows waiting for events from other threads while restricting unrelated events, that would otherwise have to wait on conditions in the monitor.
    1991 The mechnaism can be done in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels.
    1992 While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features.
    1993 % Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor functions (see Section~\ref{s:Multi-MonitorScheduling}).
    1994 
    1995 Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling.
    1996 The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
    1997 A thread blocks until an appropriate partner arrives.
    1998 The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
    1999 For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
    2000 For signal-block scheduling, the implicit urgent-queue replaces the explict @exchange@-condition and @signal_block@ puts the finding thread on the urgent condition and unblocks the matcher.
    2001 The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable;
    2002 as well, an arriving thread may not find a partner and must wait, which requires a condition variable, and condition variables imply internal scheduling.
    2003 Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number.
    2004 Putting loops around the @wait@s does not correct the problem;
    2005 the simple solution must be restructured to account for barging.
     2153Finally, external scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited.
      2154Internal scheduling performing multiple signalling results in unblocking from urgent in the reverse order from signalling.
     2155It is rare for the unblocking order to be important as an unblocked thread can be time-sliced immediately after leaving the monitor.
     2156If the unblocking order is important, multiple signalling can be restructured into daisy-chain signalling, where each thread signals the next thread.
     2157Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling.
     2158(Advanced @waitfor@ features are discussed in Section~\ref{s:ExtendedWaitfor}.)
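A minimal sketch of daisy-chaining (the monitor @Chain@, condition @c@, and shared counter are assumed): once an initial @signal@ starts the chain, each unblocked thread takes its turn and signals the next waiter, preserving FIFO order independent of how urgent is managed.
\begin{cfa}
monitor Chain { condition c; int turn; };
void pass( Chain & mutex m ) with( m ) {
	wait( c ); // block in FIFO order on condition c
	turn += 1; // use the shared state in order
	signal( c ); // unblock the next waiter, continuing the chain
}
\end{cfa}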
    20062159
    20072160\begin{figure}
     
    20172170};
    20182171int girl( DS & mutex ds, int phNo, int ccode ) {
    2019         if ( is_empty( Boys[ccode] ) ) {
     2172        if ( empty( Boys[ccode] ) ) {
    20202173                wait( Girls[ccode] );
    20212174                GirlPhNo = phNo;
     
    20442197};
    20452198int girl( DS & mutex ds, int phNo, int ccode ) {
    2046         if ( is_empty( Boys[ccode] ) ) { // no compatible
     2199        if ( empty( Boys[ccode] ) ) { // no compatible
    20472200                wait( Girls[ccode] ); // wait for boy
    20482201                GirlPhNo = phNo; // make phone number available
     
    20642217\qquad
    20652218\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
    2066 \caption{Dating service}
    2067 \label{f:DatingService}
      2219\caption{Dating service monitor}
     2220\label{f:DatingServiceMonitor}
    20682221\end{figure}
    20692222
    2070 In summation, for internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread;
    2071 the signaller enters the monitor and changes state, detects a waiting threads that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
    2072 The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor.
    2073 Blocking signal is the reverse, where the waiter is providing the cooperation for the signalling thread;
    2074 the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place it on the urgent queue and unblock the waiter.
    2075 The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state.
     2223Figure~\ref{f:DatingServiceMonitor} shows a dating service demonstrating non-blocking and blocking signalling.
     2224The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
     2225A thread blocks until an appropriate partner arrives.
     2226The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
     2227For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
     2228For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent stack and unblocks the matcher.
     2229
     2230The dating service is an important example of a monitor that cannot be written using external scheduling.
      2231First, scheduling requires knowledge of calling parameters to make matching decisions, and parameters of calling threads are unavailable within the monitor.
     2232For example, a girl thread within the monitor cannot examine the @ccode@ of boy threads waiting on the calling queue to determine if there is a matching partner.
      2233Second, a scheduling decision may be delayed when there is no immediate match, which requires a condition queue for waiting, and condition queues imply internal scheduling.
     2234For example, if a girl thread could determine there is no calling boy with the same @ccode@, it must wait until a matching boy arrives.
     2235Finally, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number.
      2236This situation shows that rechecking the waiting condition and waiting again (signals-as-hints) fails, requiring significant restructuring to account for barging.
    20762237
    20772238Both internal and external scheduling extend to multiple monitors in a natural way.
    20782239\begin{cquote}
    2079 \begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     2240\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
    20802241\begin{cfa}
    20812242monitor M { `condition e`; ... };
     
    20882249&
    20892250\begin{cfa}
    2090 void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
     2251void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); // overload rtn
    20912252void rtn$\(_2\)$( M & mutex m1 );
    20922253void bar( M & mutex m1, M & mutex m2 ) {
    2093         ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
    2094         ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
     2254        ... waitfor( `rtn`${\color{red}\(_1\)}$ ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\) : m1, m2 )}$
     2255        ... waitfor( `rtn${\color{red}\(_2\)}$ : m1` ); ...
    20952256}
    20962257\end{cfa}
     
     20992260For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@.
    21002261To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
    2101 Wait cannot statically verifies the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@.
    2102 While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric.
     2262Wait cannot statically verify the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@.
     2263While \CC supports bulk locking, @wait@ only accepts a single lock for a condition queue, so bulk locking with condition queues is asymmetric.
    21032264Finally, a signaller,
    21042265\begin{cfa}
     
    21092270must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging.
    21102271
    2111 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@.
    2112 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
     2272Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn : m1, m2 )@.
     2273To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn : m1 )@.
    21132274@waitfor@ does statically verify the monitor types passed are the same as the acquired mutex-parameters of the given function or function pointer, hence the function (pointer) prototype must be accessible.
    21142275% When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted.
     
    21182279void rtn( M & mutex m );
    21192280`int` rtn( M & mutex m );
    2120 waitfor( (`int` (*)( M & mutex ))rtn, m );
    2121 \end{cfa}
    2122 
    2123 The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
     2281waitfor( (`int` (*)( M & mutex ))rtn : m );
     2282\end{cfa}
     2283
     2284The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock (see Section~\ref{s:MutexAcquisition}).
     2285\newpage
    21242286\begin{cfa}
    21252287void foo( M & mutex m1, M & mutex m2 ) {
    2126         ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired )}$
    2127 void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2 )}$
     2288        ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired}$
     2289void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2}$
    21282290        ... signal( `e` ); ...
    21292291\end{cfa}
    21302292The @wait@ only releases @m1@ so the signalling thread cannot acquire @m1@ and @m2@ to enter @bar@ and @signal@ the condition.
    2131 While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitors, not being perfectly composable.
    2132 
     2293While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitor locking is not perfectly composable.
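One way to avoid this particular deadlock is for the waiter to use the default semantics and release all its monitors; a minimal sketch reusing the example above:
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( `e` ); ...                            // default: release both m1 and m2 while blocked
void bar( M & mutex m1, M & mutex m2 ) {        // can now acquire m1 and m2
	... signal( `e` ); ...
\end{cfa}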
    21332294
    21342295
    21352296\subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}}
     2297\label{s:ExtendedWaitfor}
    21362298
    21372299Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes.
     
    21442306Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking.
     21452307If both @timeout@ and @else@ clauses are present, the @else@ must be conditional, or the @timeout@ is never triggered.
    2146 There is also a traditional future wait queue (not shown) (\eg Microsoft (@WaitForMultipleObjects@)), to wait for a specified number of future elements in the queue.
      2308There is also a traditional future wait-queue (not shown), \eg Microsoft @WaitForMultipleObjects@, to wait for a specified number of future elements in the queue.
     2309Finally, there is a shorthand for specifying multiple functions using the same set of monitors: @waitfor( f, g, h : m1, m2, m3 )@.
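For example, a hedged sketch of the clause forms on a monitor @m@ (the guards, @work@ function, and @delay@ duration are hypothetical):
\begin{cfa}
when ( count != 0 ) waitfor( work : m ) { ... } // guarded accept; names hypothetical
or timeout( delay ) { ... }                     // no accepted call within delay
or when ( done ) else { ... }                   // conditional else, so timeout can trigger
\end{cfa}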
    21472310
    21482311\begin{figure}
     
    21712334The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.
    21722335
    2173 An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructred from a monitor to a thread with the following @main@.
     2336An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructured from a monitor to a thread with the following @main@.
    21742337\begin{cfa}
    21752338void main( Buffer(T) & buffer ) with(buffer) {
    21762339        for () {
    2177                 `waitfor( ^?{}, buffer )` break;
    2178                 or when ( count != 20 ) waitfor( insert, buffer ) { ... }
    2179                 or when ( count != 0 ) waitfor( remove, buffer ) { ... }
     2340                `waitfor( ^?{} : buffer )` break;
     2341                or when ( count != 20 ) waitfor( insert : buffer ) { ... }
     2342                or when ( count != 0 ) waitfor( remove : buffer ) { ... }
    21802343        }
    21812344        // clean up
     
    22692432To support this efficient semantics (and prevent barging), the implementation maintains a list of monitors acquired for each blocked thread.
    22702433When a signaller exits or waits in a monitor function/statement, the front waiter on urgent is unblocked if all its monitors are released.
    2271 Implementing a fast subset check for the necessary released monitors is important.
     2434Implementing a fast subset check for the necessary released monitors is important and discussed in the following sections.
    22722435% The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    22732436
    22742437
    2275 \subsection{Loose Object Definitions}
    2276 \label{s:LooseObjectDefinitions}
    2277 
    2278 In an object-oriented programming language, a class includes an exhaustive list of operations.
    2279 A new class can add members via static inheritance but the subclass still has an exhaustive list of operations.
    2280 (Dynamic member adding, \eg JavaScript~\cite{JavaScript}, is not considered.)
    2281 In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@.
    2282 
    2283 However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult.
    2284 \begin{cfa}
    2285         monitor M { ... }; // common type, included in .h file
    2286 translation unit 1
    2287         void `f`( M & mutex m );
    2288         void g( M & mutex m ) { waitfor( `f`, m ); }
    2289 translation unit 2
    2290         void `f`( M & mutex m ); $\C{// replacing f and g for type M in this translation unit}$
    2291         void `g`( M & mutex m );
    2292         void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } $\C{// extending type M in this translation unit}$
    2293 \end{cfa}
    2294 The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information.
     2438\subsection{\texorpdfstring{\protect\lstinline@waitfor@ Implementation}{waitfor Implementation}}
     2439\label{s:waitforImplementation}
     2440
     2441In a statically-typed object-oriented programming language, a class has an exhaustive list of members, even when members are added via static inheritance (see Figure~\ref{f:uCinheritance}).
      2442Knowing all members at compilation (even separate compilation) allows them to be uniquely numbered, so the accept-statement implementation can use a fast/compact bit mask with an $O(1)$ compare.
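A hedged sketch of this bit-mask check (member numbering and names are hypothetical):
\begin{cfa}
typedef unsigned long long MemberMask;          // one bit per mutex member; names hypothetical
enum { F, G, H };                               // static member numbers
bool accepted( MemberMask mask, unsigned int member ) {
	return ( mask >> member ) & 1;          // O(1) compare
}
// an accept of f and g is encoded as mask = 1ULL << F | 1ULL << G
\end{cfa}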
     2443
     2444\begin{figure}
     2445\centering
     2446\begin{lrbox}{\myboxA}
     2447\begin{uC++}[aboveskip=0pt,belowskip=0pt]
     2448$\emph{translation unit 1}$
     2449_Monitor B { // common type in .h file
     2450        _Mutex virtual void `f`( ... );
     2451        _Mutex virtual void `g`( ... );
     2452        _Mutex virtual void w1( ... ) { ... _Accept(`f`, `g`); ... }
     2453};
     2454$\emph{translation unit 2}$
     2455// include B
     2456_Monitor D : public B { // inherit
     2457        _Mutex void `h`( ... ); // add
     2458        _Mutex void w2( ... ) { ... _Accept(`f`, `h`); ... }
     2459};
     2460\end{uC++}
     2461\end{lrbox}
     2462
     2463\begin{lrbox}{\myboxB}
     2464\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2465$\emph{translation unit 1}$
     2466monitor M { ... }; // common type in .h file
     2467void `f`( M & mutex m, ... );
     2468void `g`( M & mutex m, ... );
     2469void w1( M & mutex m, ... ) { ... waitfor(`f`, `g` : m); ... }
     2470
     2471$\emph{translation unit 2}$
     2472// include M
     2473extern void `f`( M & mutex m, ... ); // import f but not g
     2474void `h`( M & mutex m ); // add
     2475void w2( M & mutex m, ... ) { ... waitfor(`f`, `h` : m); ... }
     2476
     2477\end{cfa}
     2478\end{lrbox}
     2479
     2480\subfloat[\uC]{\label{f:uCinheritance}\usebox\myboxA}
     2481\hspace{3pt}
     2482\vrule
     2483\hspace{3pt}
     2484\subfloat[\CFA]{\label{f:CFinheritance}\usebox\myboxB}
     2485\caption{Member / Function visibility}
     2486\label{f:MemberFunctionVisibility}
     2487\end{figure}
     2488
      2489However, the @waitfor@ statement in translation unit 2 (see Figure~\ref{f:CFinheritance}) cannot see function @g@ in translation unit 1, precluding a unique numbering for a bit mask, because the monitor type only carries the protected shared-data.
     2490(A possible way to construct a dense mapping is at link or load-time.)
    22952491Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array.
    2296 Then, the same implementation approach used for the urgent stack is used for the calling queue.
    2297 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers.
    2298 (A possible way to construct a dense mapping is at link or load-time.)
     2492Then, the same implementation approach used for the urgent stack (see Section~\ref{s:Scheduling}) is used for the calling queue.
      2493Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifies the associated mutex locks can be transferred.
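A hedged sketch of this matching step (runtime structures assumed):
\begin{cfa}
bool matches( void * called, void * accepted[], int naccepted ) { // structures assumed
	for ( int i = 0; i < naccepted; i += 1 )        // usually short linear search
		if ( accepted[i] == called ) return true;   // compare function pointers
	return false;   // on a match, monitor sets are verified before the lock transfer
}
\end{cfa}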
    22992494
    23002495
     
    23112506The solution is for the programmer to disambiguate:
    23122507\begin{cfa}
    2313 waitfor( f, `m2` ); $\C{// wait for call to f with argument m2}$
     2508waitfor( f : `m2` ); $\C{// wait for call to f with argument m2}$
    23142509\end{cfa}
    23152510Both locks are acquired by function @g@, so when function @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
     
    23182513monitor M { ... };
    23192514void f( M & mutex m1, M & mutex m2 );
    2320 void g( M & mutex m1, M & mutex m2 ) { waitfor( f, `m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$
     2515void g( M & mutex m1, M & mutex m2 ) { waitfor( f : `m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$
    23212516\end{cfa}
    23222517Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting function.
    2323 Also, the order of the monitors in a @waitfor@ statement is unimportant.
    2324 
    2325 Figure~\ref{f:UnmatchedMutexSets} shows an example where, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.
    2326 For both examples, the set of monitors is disjoint so unblocking is impossible.
     2518% Also, the order of the monitors in a @waitfor@ statement must match the order of the mutex parameters.
     2519
      2520Figure~\ref{f:UnmatchedMutexSets} shows that, for internal and external scheduling with multiple monitors, the monitor set of the signalling or accepting thread must match exactly, \ie partial matching results in waiting.
     2521In both cases, the set of monitors is disjoint so unblocking is impossible.
    23272522
    23282523\begin{figure}
     
    23532548}
    23542549void g( M1 & mutex m1, M2 & mutex m2 ) {
    2355         waitfor( f, m1, m2 );
     2550        waitfor( f : m1, m2 );
    23562551}
    23572552g( `m11`, m2 ); // block on accept
     
    23682563\end{figure}
    23692564
    2370 
    2371 \subsection{\texorpdfstring{\protect\lstinline@mutex@ Threads}{mutex Threads}}
    2372 
    2373 Threads in \CFA can also be monitors to allow \emph{direct communication} among threads, \ie threads can have mutex functions that are called by other threads.
    2374 Hence, all monitor features are available when using threads.
    2375 Figure~\ref{f:DirectCommunication} shows a comparison of direct call communication in \CFA with direct channel communication in Go.
    2376 (Ada provides a similar mechanism to the \CFA direct communication.)
    2377 The program main in both programs communicates directly with the other thread versus indirect communication where two threads interact through a passive monitor.
    2378 Both direct and indirection thread communication are valuable tools in structuring concurrent programs.
    2379 
    23802565\begin{figure}
    23812566\centering
     
    23842569
    23852570struct Msg { int i, j; };
    2386 thread GoRtn { int i;  float f;  Msg m; };
     2571monitor thread GoRtn { int i;  float f;  Msg m; };
    23872572void mem1( GoRtn & mutex gortn, int i ) { gortn.i = i; }
    23882573void mem2( GoRtn & mutex gortn, float f ) { gortn.f = f; }
     
    23942579        for () {
    23952580
    2396                 `waitfor( mem1, gortn )` sout | i;  // wait for calls
    2397                 or `waitfor( mem2, gortn )` sout | f;
    2398                 or `waitfor( mem3, gortn )` sout | m.i | m.j;
    2399                 or `waitfor( ^?{}, gortn )` break;
     2581                `waitfor( mem1 : gortn )` sout | i;  // wait for calls
     2582                or `waitfor( mem2 : gortn )` sout | f;
     2583                or `waitfor( mem3 : gortn )` sout | m.i | m.j;
     2584                or `waitfor( ^?{} : gortn )` break; // low priority
    24002585
    24012586        }
     
    24512636\hspace{3pt}
    24522637\subfloat[Go]{\label{f:Gochannel}\usebox\myboxB}
    2453 \caption{Direct communication}
    2454 \label{f:DirectCommunication}
     2638\caption{Direct versus indirect communication}
     2639\label{f:DirectCommunicationComparison}
     2640
     2641\medskip
     2642
     2643\begin{cfa}
     2644monitor thread DatingService {
     2645        condition Girls[CompCodes], Boys[CompCodes];
     2646        int girlPhoneNo, boyPhoneNo, ccode;
     2647};
     2648int girl( DatingService & mutex ds, int phoneno, int code ) with( ds ) {
     2649        girlPhoneNo = phoneno;  ccode = code;
     2650        `wait( Girls[ccode] );`                                                         $\C{// wait for boy}$
     2651        girlPhoneNo = phoneno;  return boyPhoneNo;
     2652}
     2653int boy( DatingService & mutex ds, int phoneno, int code ) with( ds ) {
     2654        boyPhoneNo = phoneno;  ccode = code;
     2655        `wait( Boys[ccode] );`                                                          $\C{// wait for girl}$
     2656        boyPhoneNo = phoneno;  return girlPhoneNo;
     2657}
     2658void main( DatingService & ds ) with( ds ) {                    $\C{// thread starts, ds defaults to mutex}$
     2659        for () {
     2660                waitfor( ^?{} ) break;                                                  $\C{// high priority}$
     2661                or waitfor( girl )                                                              $\C{// girl called, compatible boy ? restart boy then girl}$
     2662                        if ( ! is_empty( Boys[ccode] ) ) { `signal_block( Boys[ccode] );  signal_block( Girls[ccode] );` }
      2663                or waitfor( boy )                                                               $\C{// boy called, compatible girl ? restart girl then boy}$
     2664                        if ( ! is_empty( Girls[ccode] ) ) { `signal_block( Girls[ccode] );  signal_block( Boys[ccode] );` }
     2665        }
     2666}
     2667\end{cfa}
     2668\caption{Direct communication dating service}
     2669\label{f:DirectCommunicationDatingService}
    24552670\end{figure}
    24562671
     
    24672682void main( Ping & pi ) {
    24682683        for ( 10 ) {
    2469                 `waitfor( ping, pi );`
     2684                `waitfor( ping : pi );`
    24702685                `pong( po );`
    24712686        }
     
    24802695        for ( 10 ) {
    24812696                `ping( pi );`
    2482                 `waitfor( pong, po );`
     2697                `waitfor( pong : po );`
    24832698        }
    24842699}
     
    24952710
    24962711
    2497 \subsection{Execution Properties}
    2498 
    2499 Table~\ref{t:ObjectPropertyComposition} shows how the \CFA high-level constructs cover 3 fundamental execution properties: thread, stateful function, and mutual exclusion.
    2500 Case 1 is a basic object, with none of the new execution properties.
    2501 Case 2 allows @mutex@ calls to Case 1 to protect shared data.
    2502 Case 3 allows stateful functions to suspend/resume but restricts operations because the state is stackless.
    2503 Case 4 allows @mutex@ calls to Case 3 to protect shared data.
    2504 Cases 5 and 6 are the same as 3 and 4 without restriction because the state is stackful.
    2505 Cases 7 and 8 are rejected because a thread cannot execute without a stackful state in a preemptive environment when context switching from the signal handler.
    2506 Cases 9 and 10 have a stackful thread without and with @mutex@ calls.
    2507 For situations where threads do not require direct communication, case 9 provides faster creation/destruction by eliminating @mutex@ setup.
    2508 
    2509 \begin{table}
    2510 \caption{Object property composition}
    2511 \centering
    2512 \label{t:ObjectPropertyComposition}
    2513 \renewcommand{\arraystretch}{1.25}
    2514 %\setlength{\tabcolsep}{5pt}
    2515 \begin{tabular}{c|c||l|l}
    2516 \multicolumn{2}{c||}{object properties} & \multicolumn{2}{c}{mutual exclusion} \\
    2517 \hline
    2518 thread  & stateful                              & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\
    2519 \hline
    2520 \hline
    2521 No              & No                                    & \textbf{1}\ \ \ aggregate type                & \textbf{2}\ \ \ @monitor@ aggregate type \\
    2522 \hline
    2523 No              & Yes (stackless)               & \textbf{3}\ \ \ @generator@                   & \textbf{4}\ \ \ @monitor@ @generator@ \\
    2524 \hline
    2525 No              & Yes (stackful)                & \textbf{5}\ \ \ @coroutine@                   & \textbf{6}\ \ \ @monitor@ @coroutine@ \\
    2526 \hline
    2527 Yes             & No / Yes (stackless)  & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\
    2528 \hline
    2529 Yes             & Yes (stackful)                & \textbf{9}\ \ \ @thread@                              & \textbf{10}\ \ @monitor@ @thread@ \\
    2530 \end{tabular}
    2531 \end{table}
     2712\subsection{\texorpdfstring{\protect\lstinline@monitor@ Generators / Coroutines / Threads}{monitor Generators / Coroutines / Threads}}
     2713
     2714\CFA generators, coroutines, and threads can also be monitors (Table~\ref{t:ExecutionPropertyComposition} cases 4, 6, 12) allowing safe \emph{direct communication} with threads, \ie the custom types can have mutex functions that are called by other threads.
     2715All monitor features are available within these mutex functions.
     2716For example, if the formatter generator (or coroutine equivalent) in Figure~\ref{f:CFAFormatGen} is extended with the monitor property and this interface function is used to communicate with the formatter:
     2717\begin{cfa}
     2718void fmt( Fmt & mutex fmt, char ch ) { fmt.ch = ch; resume( fmt ) }
     2719\end{cfa}
     2720multiple threads can safely pass characters for formatting.
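For example, a minimal sketch of such sharing (the worker thread is hypothetical):
\begin{cfa}
Fmt f;                                          // shared formatter
thread Worker {};                               // Worker is hypothetical
void main( Worker & ) {
	for ( int i = 0; i < 26; i += 1 )
		fmt( f, 'a' + i );              // mutex calls serialize access to f
}
\end{cfa}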
     2721
     2722Figure~\ref{f:DirectCommunicationComparison} shows a comparison of direct call-communication in \CFA versus indirect channel-communication in Go.
     2723(Ada has a similar mechanism to \CFA direct communication.)
     2724The program thread in \CFA @main@ uses the call/return paradigm to directly communicate with the @GoRtn main@, whereas Go switches to the channel paradigm to indirectly communicate with the goroutine.
     2725Communication by multiple threads is safe for the @gortn@ thread via mutex calls in \CFA or channel assignment in Go.
     2726
     2727Figure~\ref{f:DirectCommunicationDatingService} shows the dating-service problem in Figure~\ref{f:DatingServiceMonitor} extended from indirect monitor communication to direct thread communication.
      2728When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted members into the thread main so it does as much work as possible.
      2729Notice, the dating server postpones requests for an unspecified time while continuing to accept new requests.
      2730For complex servers (\eg web servers), there can be hundreds of lines of code in the thread main, and safe interaction with clients can be complex.
    25322731
    25332732
     
    25352734
    25362735For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
    2537 Some of these low-level mechanism are used in the \CFA runtime, but we strongly advocate using high-level mechanisms whenever possible.
      2736Some of these low-level mechanisms are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible.
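For example, a hedged sketch of a spin lock built from two of the listed primitives, assuming @testSet@ atomically sets the flag and returns its previous value:
\begin{cfa}
struct SpinLock { volatile int taken; };        // assumed signatures for testSet/fetchAssign
void lock( SpinLock & l ) { while ( testSet( l.taken ) ) /* spin */; }
void unlock( SpinLock & l ) { fetchAssign( l.taken, 0 ); }
\end{cfa}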
    25382737
    25392738
     
    25782777\begin{cfa}
    25792778struct Adder {
    2580     int * row, cols;
     2779        int * row, cols;
    25812780};
    25822781int operator()() {
     
    26372836\label{s:RuntimeStructureCluster}
    26382837
    2639 A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads).
     2838A \newterm{cluster} is a collection of user and kernel threads, where the kernel threads run the user threads from the cluster's ready queue, and the operating system runs the kernel threads on the processors from its ready queue.
     2839The term \newterm{virtual processor} is introduced as a synonym for kernel thread to disambiguate between user and kernel thread.
     2840From the language perspective, a virtual processor is an actual processor (core).
     2841
    26402842The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
    26412843The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
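A hypothetical sketch of adding parallelism on a cluster (constructor forms assumed):
\begin{cfa}
cluster clus;                                   // new cluster with its own ready queue; forms assumed
processor p1 = { clus }, p2 = { clus };         // two virtual processors running clus's user threads
\end{cfa}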
     
    26562858Programs may use more virtual processors than hardware processors.
    26572859On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
    2658 (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
     2860(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux,affinityWindows}, which is used when caching issues occur or for heterogeneous hardware processors.) %, affinityFreebsd, affinityNetbsd, affinityMacosx
    26592861The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
    2660 balancing the workload with processors is difficult because it requires future knowledge, \ie what will the applicaton workload do next.
     2862balancing the workload with processors is difficult because it requires future knowledge, \ie what will the application workload do next.
    26612863Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
    26622864Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads.
     
    26932895Nondeterministic preemption provides fairness from long-running threads, and forces concurrent programmers to write more robust programs, rather than relying on code between cooperative scheduling to be atomic.
    26942896This atomic reliance can fail on multi-core machines, because execution across cores is nondeterministic.
    2695 A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg Microsoft runtime does not support interrupts and on Linux systems, interrupts are complex (see below).
      2897A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg the Windows runtime does not support interrupts, and on Linux systems, interrupts are complex (see below).
    26962898Preemption is normally handled by setting a countdown timer on each virtual processor.
    2697 When the timer expires, an interrupt is delivered, and the interrupt handler resets the countdown timer, and if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
      2899When the timer expires, an interrupt is delivered and its signal handler resets the countdown timer; if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
    26982900Multiple signal handlers may be pending.
    26992901When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered.
    27002902The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler;
    27012903therefore, the same signal mask is required for all virtual processors in a cluster.
    2702 Because preemption frequency is usually long (1 millisecond) performance cost is negligible.
    2703 
    2704 Linux switched a decade ago from specific to arbitrary process signal-delivery for applications with multiple kernel threads.
    2705 \begin{cquote}
    2706 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
    2707 If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which it will deliver the signal.
    2708 SIGNAL(7) - Linux Programmer's Manual
    2709 \end{cquote}
      2904Because the preemption interval is usually long (1 millisecond), the performance cost is negligible.
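A hedged sketch of arming such a countdown timer with POSIX interval timers (the handler body is elided):
\begin{cfa}
#include <sys/time.h>
#include <signal.h>
void preempt( int sig ) { /* user-level context switch or roll forward */ }
void start_preemption() {                       // sketch only
	signal( SIGALRM, preempt );             // install handler
	struct itimerval itv = { { 0, 1000 }, { 0, 1000 } }; // 1 ms interval / initial value
	setitimer( ITIMER_REAL, &itv, 0 );      // deliver SIGALRM on expiry
}
\end{cfa}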
     2905
     2906Linux switched a decade ago from specific to arbitrary virtual-processor signal-delivery for applications with multiple kernel threads.
     2907In the new semantics, a virtual-processor directed signal may be delivered to any virtual processor created by the application that does not have the signal blocked.
    27102908Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads).
    27112909To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events.
     
    27252923\label{s:Performance}
    27262924
    2727 To verify the implementation of the \CFA runtime, a series of microbenchmarks are performed comparing \CFA with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
      2925To test the performance of the \CFA runtime, a series of microbenchmarks is used to compare \CFA with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0.
     27282926For comparison, the package must be multi-processor (M:N), which excludes libdill/libmil~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
    2729 The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and \CFA/\uC are compiled with gcc 6.5.
     2927The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and pthreads/\CFA/\uC are compiled with gcc 9.2.1.
    27302928
    27312929All benchmarks are run using the following harness. (The Java harness is augmented to circumvent JIT issues.)
    27322930\begin{cfa}
    2733 unsigned int N = 10_000_000;
    2734 #define BENCH( `run` ) Time before = getTimeNsec();  `run;`  Duration result = (getTimeNsec() - before) / N;
    2735 \end{cfa}
    2736 The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
    2737 Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
    2738 the total time is divided by @N@ to obtain the average time for a benchmark.
    2739 Each benchmark experiment is run 31 times.
     2931#define BENCH( `run` ) uint64_t start = cputime_ns();  `run;`  double result = (double)(cputime_ns() - start) / N;
     2932\end{cfa}
     2933where CPU time in nanoseconds is from the appropriate language clock.
     2934Each benchmark is performed @N@ times, where @N@ is selected so the benchmark runs in the range of 2--20 seconds for the specific programming language.
     2935The total time is divided by @N@ to obtain the average time for a benchmark.
     2936Each benchmark experiment is run 13 times and the average appears in the table.
    27402937All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallBenchMarks}.
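For the C-based tests, a hedged sketch of the assumed clock helper:
\begin{cfa}
#include <time.h>
#include <stdint.h>
uint64_t cputime_ns() {                         // assumed helper: process CPU time in nanoseconds
	struct timespec ts;
	clock_gettime( CLOCK_PROCESS_CPUTIME_ID, &ts );
	return (uint64_t)ts.tv_sec * 1_000_000_000 + ts.tv_nsec;
}
\end{cfa}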
    2741 % tar --exclude=.deps --exclude=Makefile --exclude=Makefile.in --exclude=c.c --exclude=cxx.cpp --exclude=fetch_add.c -cvhf benchmark.tar benchmark
    2742 
    2743 \paragraph{Object Creation}
    2744 
    2745 Object creation is measured by creating/deleting the specific kind of concurrent object.
    2746 Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}.
    2747 The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine to force stack creation, the creation cost is artificially low.
    2748 
    2749 \begin{multicols}{2}
    2750 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    2751 \begin{cfa}
    2752 @thread@ MyThread {};
    2753 void @main@( MyThread & ) {}
    2754 int main() {
    2755         BENCH( for ( N ) { @MyThread m;@ } )
    2756         sout | result`ns;
    2757 }
    2758 \end{cfa}
    2759 \captionof{figure}{\CFA object-creation benchmark}
    2760 \label{f:creation}
    2761 
    2762 \columnbreak
    2763 
    2764 \vspace*{-16pt}
    2765 \captionof{table}{Object creation comparison (nanoseconds)}
    2766 \label{tab:creation}
    2767 
    2768 \begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}}
    2769 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2770 \CFA Coroutine Lazy             & 13.2          & 13.1          & 0.44          \\
    2771 \CFA Coroutine Eager    & 531.3         & 536.0         & 26.54         \\
    2772 \CFA Thread                             & 2074.9        & 2066.5        & 170.76        \\
    2773 \uC Coroutine                   & 89.6          & 90.5          & 1.83          \\
    2774 \uC Thread                              & 528.2         & 528.5         & 4.94          \\
    2775 Goroutine                               & 4068.0        & 4113.1        & 414.55        \\
    2776 Java Thread                             & 103848.5      & 104295.4      & 2637.57       \\
    2777 Pthreads                                & 33112.6       & 33127.1       & 165.90
    2778 \end{tabular}
    2779 \end{multicols}
    2780 
    2781 
    2782 \paragraph{Context-Switching}
     2938% tar --exclude-ignore=exclude -cvhf benchmark.tar benchmark
     2939
     2940\paragraph{Context Switching}
    27832941
    27842942In procedural programming, the cost of a function call is important as modularization (refactoring) increases.
    2785 (In many cases, a compiler inlines function calls to eliminate this cost.)
    2786 Similarly, when modularization extends to coroutines/tasks, the time for a context switch becomes a relevant factor.
      2943(In many cases, a compiler inlines function calls to increase the size and number of basic blocks for optimization.)
     2944Similarly, when modularization extends to coroutines/threads, the time for a context switch becomes a relevant factor.
    27872945The coroutine test is from resumer to suspender and from suspender to resumer, which is two context switches.
     2946%For async-await systems, the test is scheduling and fulfilling @N@ empty promises, where all promises are allocated before versus interleaved with fulfillment to avoid garbage collection.
     2947For async-await systems, the test measures the cost of the @await@ expression entering the event engine by awaiting @N@ promises, where each created promise is resolved by an immediate event in the engine (using Node.js @setImmediate@).
    27882948The thread test is using yield to enter and return from the runtime kernel, which is two context switches.
    27892949The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.
    2790 Figure~\ref{f:ctx-switch} only shows the \CFA code for coroutines/threads (other systems are similar) with all results in Table~\ref{tab:ctx-switch}.
     2950Figure~\ref{f:ctx-switch} shows the \CFA code for a coroutine/thread with results in Table~\ref{t:ctx-switch}.
     2951
     2952% From: Gregor Richards <gregor.richards@uwaterloo.ca>
     2953% To: "Peter A. Buhr" <pabuhr@plg2.cs.uwaterloo.ca>
     2954% Date: Fri, 24 Jan 2020 13:49:18 -0500
     2955%
     2956% I can also verify that the previous version, which just tied a bunch of promises together, *does not* go back to the
     2957% event loop at all in the current version of Node. Presumably they're taking advantage of the fact that the ordering of
     2958% events is intentionally undefined to just jump right to the next 'then' in the chain, bypassing event queueing
     2959% entirely. That's perfectly correct behavior insofar as its difference from the specified behavior isn't observable, but
     2960% it isn't typical or representative of much anything useful, because most programs wouldn't have whole chains of eager
     2961% promises. Also, it's not representative of *anything* you can do with async/await, as there's no way to encode such an
     2962% eager chain that way.
    27912963
    27922964\begin{multicols}{2}
     
    27942966\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    27952967@coroutine@ C {} c;
    2796 void main( C & ) { for ( ;; ) { @suspend;@ } }
     2968void main( C & ) { while () { @suspend;@ } }
    27972969int main() { // coroutine test
    27982970        BENCH( for ( N ) { @resume( c );@ } )
    2799         sout | result`ns;
    2800 }
    2801 int main() { // task test
     2971        sout | result;
     2972}
     2973int main() { // thread test
    28022974        BENCH( for ( N ) { @yield();@ } )
    2803         sout | result`ns;
     2975        sout | result;
    28042976}
    28052977\end{cfa}
     
    28112983\vspace*{-16pt}
    28122984\captionof{table}{Context switch comparison (nanoseconds)}
    2813 \label{tab:ctx-switch}
     2985\label{t:ctx-switch}
    28142986\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    28152987\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2816 C function              & 1.8   & 1.8   & 0.01  \\
    2817 \CFA generator  & 2.4   & 2.2   & 0.25  \\
    2818 \CFA Coroutine  & 36.2  & 36.2  & 0.25  \\
    2819 \CFA Thread             & 93.2  & 93.5  & 2.09  \\
    2820 \uC Coroutine   & 52.0  & 52.1  & 0.51  \\
    2821 \uC Thread              & 96.2  & 96.3  & 0.58  \\
    2822 Goroutine               & 141.0 & 141.3 & 3.39  \\
    2823 Java Thread             & 374.0 & 375.8 & 10.38 \\
    2824 Pthreads Thread & 361.0 & 365.3 & 13.19
     2988C function                      & 1.8           & 1.8           & 0.0   \\
     2989\CFA generator          & 1.8           & 1.8           & 0.1   \\
     2990\CFA coroutine          & 32.5          & 32.9          & 0.8   \\
     2991\CFA thread                     & 93.8          & 93.6          & 2.2   \\
     2992\uC coroutine           & 50.3          & 50.3          & 0.2   \\
     2993\uC thread                      & 97.3          & 97.4          & 1.0   \\
     2994Python generator        & 40.9          & 41.3          & 1.5   \\
     2995Node.js generator       & 32.6          & 32.2          & 1.0   \\
     2996Node.js await           & 1852.2        & 1854.7        & 16.4  \\
     2997Goroutine thread        & 143.0         & 143.3         & 1.1   \\
     2998Rust thread                     & 332.0         & 331.4         & 2.4   \\
     2999Java thread                     & 405.0         & 415.0         & 17.6  \\
     3000Pthreads thread         & 334.3         & 335.2         & 3.9
    28253001\end{tabular}
    28263002\end{multicols}
    28273003
    2828 
    2829 \paragraph{Mutual-Exclusion}
    2830 
    2831 Uncontented mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section.
    2832 For monitors, entering and leaving a monitor function is measured.
    2833 To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
    2834 Figure~\ref{f:mutex} shows the code for \CFA with all results in Table~\ref{tab:mutex}.
     3004\paragraph{Internal Scheduling}
     3005
     3006Internal scheduling is measured using a cycle of two threads signalling and waiting.
     3007Figure~\ref{f:schedint} shows the code for \CFA, with results in Table~\ref{t:schedint}.
     28353008Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
      3009Java scheduling is significantly greater because the benchmark explicitly creates multiple threads in order to prevent the JIT from making the program sequential, \ie removing all locking.
    28363010
    28373011\begin{multicols}{2}
    28383012\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    28393013\begin{cfa}
     3014volatile int go = 0;
     3015@condition c;@
    28403016@monitor@ M {} m1/*, m2, m3, m4*/;
    2841 void __attribute__((noinline))
    2842 do_call( M & @mutex m/*, m2, m3, m4*/@ ) {}
     3017void call( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3018        @signal( c );@
     3019}
     3020void wait( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3021        go = 1; // continue other thread
      3022        for ( N ) { @wait( c );@ }
     3023}
     3024thread T {};
     3025void main( T & ) {
     3026        while ( go == 0 ) { yield(); } // waiter must start first
     3027        BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } )
     3028        sout | result;
     3029}
    28433030int main() {
    2844         BENCH(
    2845                 for( N ) do_call( m1/*, m2, m3, m4*/ );
    2846         )
    2847         sout | result`ns;
    2848 }
    2849 \end{cfa}
    2850 \captionof{figure}{\CFA acquire/release mutex benchmark}
    2851 \label{f:mutex}
     3031        T t;
     3032        wait( m1/*, m2, m3, m4*/ );
     3033}
     3034\end{cfa}
     3035\captionof{figure}{\CFA Internal-scheduling benchmark}
     3036\label{f:schedint}
    28523037
    28533038\columnbreak
    28543039
    28553040\vspace*{-16pt}
    2856 \captionof{table}{Mutex comparison (nanoseconds)}
    2857 \label{tab:mutex}
    2858 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    2859 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2860 test and test-and-test lock             & 19.1  & 18.9  & 0.40  \\
    2861 \CFA @mutex@ function, 1 arg.   & 45.9  & 46.6  & 1.45  \\
    2862 \CFA @mutex@ function, 2 arg.   & 105.0 & 104.7 & 3.08  \\
    2863 \CFA @mutex@ function, 4 arg.   & 165.0 & 167.6 & 5.65  \\
    2864 \uC @monitor@ member rtn.               & 54.0  & 53.7  & 0.82  \\
    2865 Java synchronized method                & 31.0  & 31.1  & 0.50  \\
    2866 Pthreads Mutex Lock                             & 33.6  & 32.6  & 1.14
     3041\captionof{table}{Internal-scheduling comparison (nanoseconds)}
     3042\label{t:schedint}
     3043\bigskip
     3044
     3045\begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}}
     3046\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3047\CFA @signal@, 1 monitor        & 364.4         & 364.2         & 4.4           \\
     3048\CFA @signal@, 2 monitor        & 484.4         & 483.9         & 8.8           \\
     3049\CFA @signal@, 4 monitor        & 709.1         & 707.7         & 15.0          \\
     3050\uC @signal@ monitor            & 328.3         & 327.4         & 2.4           \\
     3051Rust cond. variable                     & 7514.0        & 7437.4        & 397.2         \\
     3052Java @notify@ monitor           & 9623.0        & 9654.6        & 236.2         \\
     3053Pthreads cond. variable         & 5553.7        & 5576.1        & 345.6
    28673054\end{tabular}
    28683055\end{multicols}
     
    28723059
    28733060External scheduling is measured using a cycle of two threads calling and accepting the call using the @waitfor@ statement.
    2874 Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
     3061Figure~\ref{f:schedext} shows the code for \CFA with results in Table~\ref{t:schedext}.
     28753062Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    28763063
     
    28793066\vspace*{-16pt}
    28803067\begin{cfa}
    2881 volatile int go = 0;
    2882 @monitor@ M {} m;
     3068@monitor@ M {} m1/*, m2, m3, m4*/;
     3069void call( M & @mutex p1/*, p2, p3, p4*/@ ) {}
     3070void wait( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3071        for ( N ) { @waitfor( call : p1/*, p2, p3, p4*/ );@ }
     3072}
    28833073thread T {};
    2884 void __attribute__((noinline))
    2885 do_call( M & @mutex@ ) {}
    28863074void main( T & ) {
    2887         while ( go == 0 ) { yield(); }
    2888         while ( go == 1 ) { do_call( m ); }
    2889 }
    2890 int __attribute__((noinline))
    2891 do_wait( M & @mutex@ m ) {
    2892         go = 1; // continue other thread
    2893         BENCH( for ( N ) { @waitfor( do_call, m );@ } )
    2894         go = 0; // stop other thread
    2895         sout | result`ns;
     3075        BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } )
     3076        sout | result;
    28963077}
    28973078int main() {
    28983079        T t;
    2899         do_wait( m );
     3080        wait( m1/*, m2, m3, m4*/ );
    29003081}
    29013082\end{cfa}
    29023083\captionof{figure}{\CFA external-scheduling benchmark}
    2903 \label{f:ext-sched}
     3084\label{f:schedext}
    29043085
    29053086\columnbreak
     
    29073088\vspace*{-16pt}
    29083089\captionof{table}{External-scheduling comparison (nanoseconds)}
    2909 \label{tab:ext-sched}
     3090\label{t:schedext}
    29103091\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    29113092\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2912 \CFA @waitfor@, 1 @monitor@     & 376.4 & 376.8 & 7.63  \\
    2913 \CFA @waitfor@, 2 @monitor@     & 491.4 & 492.0 & 13.31 \\
    2914 \CFA @waitfor@, 4 @monitor@     & 681.0 & 681.7 & 19.10 \\
    2915 \uC @_Accept@                           & 331.1 & 331.4 & 2.66
     3093\CFA @waitfor@, 1 monitor       & 367.1 & 365.3 & 5.0   \\
     3094\CFA @waitfor@, 2 monitor       & 463.0 & 464.6 & 7.1   \\
     3095\CFA @waitfor@, 4 monitor       & 689.6 & 696.2 & 21.5  \\
     3096\uC \lstinline[language=uC++]|_Accept| monitor  & 328.2 & 329.1 & 3.4   \\
     3097Go \lstinline[language=Golang]|select| channel  & 365.0 & 365.5 & 1.2
    29163098\end{tabular}
    29173099\end{multicols}
    29183100
    2919 
    2920 \paragraph{Internal Scheduling}
    2921 
    2922 Internal scheduling is measured using a cycle of two threads signalling and waiting.
    2923 Figure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.
    2924 Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    2925 Java scheduling is significantly greater because the benchmark explicitly creates multiple thread in order to prevent the JIT from making the program sequential, \ie removing all locking.
     3101\paragraph{Mutual-Exclusion}
     3102
      3103Uncontended mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section.
      3104For monitors, entering and leaving a monitor function is measured; otherwise, the language-appropriate mutex lock is measured.
     3105For comparison, a spinning (versus blocking) test-and-test-set lock is presented.
     3106Figure~\ref{f:mutex} shows the code for \CFA with results in Table~\ref{t:mutex}.
     3107Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    29263108
    29273109\begin{multicols}{2}
    29283110\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    29293111\begin{cfa}
    2930 volatile int go = 0;
    2931 @monitor@ M { @condition c;@ } m;
    2932 void __attribute__((noinline))
    2933 do_call( M & @mutex@ a1 ) { @signal( c );@ }
    2934 thread T {};
    2935 void main( T & this ) {
    2936         while ( go == 0 ) { yield(); }
    2937         while ( go == 1 ) { do_call( m ); }
    2938 }
    2939 int  __attribute__((noinline))
    2940 do_wait( M & mutex m ) with(m) {
    2941         go = 1; // continue other thread
    2942         BENCH( for ( N ) { @wait( c );@ } );
    2943         go = 0; // stop other thread
    2944         sout | result`ns;
    2945 }
     3112@monitor@ M {} m1/*, m2, m3, m4*/;
      3113void call( M & @mutex p1/*, p2, p3, p4*/@ ) {}
    29463114int main() {
    2947         T t;
    2948         do_wait( m );
    2949 }
    2950 \end{cfa}
    2951 \captionof{figure}{\CFA Internal-scheduling benchmark}
    2952 \label{f:int-sched}
     3115        BENCH( for( N ) call( m1/*, m2, m3, m4*/ ); )
     3116        sout | result;
     3117}
     3118\end{cfa}
     3119\captionof{figure}{\CFA acquire/release mutex benchmark}
     3120\label{f:mutex}
    29533121
    29543122\columnbreak
    29553123
    29563124\vspace*{-16pt}
    2957 \captionof{table}{Internal-scheduling comparison (nanoseconds)}
    2958 \label{tab:int-sched}
    2959 \bigskip
    2960 
    2961 \begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}}
    2962 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2963 \CFA @signal@, 1 @monitor@      & 372.6         & 374.3         & 14.17         \\
    2964 \CFA @signal@, 2 @monitor@      & 492.7         & 494.1         & 12.99         \\
    2965 \CFA @signal@, 4 @monitor@      & 749.4         & 750.4         & 24.74         \\
    2966 \uC @signal@                            & 320.5         & 321.0         & 3.36          \\
    2967 Java @notify@                           & 10160.5       & 10169.4       & 267.71        \\
    2968 Pthreads Cond. Variable         & 4949.6        & 5065.2        & 363
     3125\captionof{table}{Mutex comparison (nanoseconds)}
     3126\label{t:mutex}
     3127\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
     3128\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3129test-and-test-set lock                  & 19.1  & 18.9  & 0.4   \\
     3130\CFA @mutex@ function, 1 arg.   & 48.3  & 47.8  & 0.9   \\
     3131\CFA @mutex@ function, 2 arg.   & 86.7  & 87.6  & 1.9   \\
     3132\CFA @mutex@ function, 4 arg.   & 173.4 & 169.4 & 5.9   \\
     3133\uC @monitor@ member rtn.               & 54.8  & 54.8  & 0.1   \\
     3134Goroutine mutex lock                    & 34.0  & 34.0  & 0.0   \\
     3135Rust mutex lock                                 & 33.0  & 33.2  & 0.8   \\
     3136Java synchronized method                & 31.0  & 31.0  & 0.0   \\
     3137Pthreads mutex Lock                             & 31.0  & 31.1  & 0.4
    29693138\end{tabular}
    29703139\end{multicols}
    29713140
     3141\paragraph{Creation}
     3142
     3143Creation is measured by creating/deleting a specific kind of control-flow object.
     3144Figure~\ref{f:creation} shows the code for \CFA with results in Table~\ref{t:creation}.
     3145Note, the call stacks of \CFA coroutines are lazily created on the first resume, therefore the cost of creation with and without a stack are presented.
     3146
     3147\begin{multicols}{2}
     3148\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     3149\begin{cfa}
     3150@coroutine@ MyCoroutine {};
     3151void ?{}( MyCoroutine & this ) {
     3152#ifdef EAGER
     3153        resume( this );
     3154#endif
     3155}
     3156void main( MyCoroutine & ) {}
     3157int main() {
     3158        BENCH( for ( N ) { @MyCoroutine c;@ } )
     3159        sout | result;
     3160}
     3161\end{cfa}
     3162\captionof{figure}{\CFA creation benchmark}
     3163\label{f:creation}
     3164
     3165\columnbreak
     3166
     3167\vspace*{-16pt}
     3168\captionof{table}{Creation comparison (nanoseconds)}
     3169\label{t:creation}
     3170
     3171\begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}}
     3172\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3173\CFA generator                  & 0.6           & 0.6           & 0.0           \\
     3174\CFA coroutine lazy             & 13.4          & 13.1          & 0.5           \\
     3175\CFA coroutine eager    & 144.7         & 143.9         & 1.5           \\
     3176\CFA thread                             & 466.4         & 468.0         & 11.3          \\
     3177\uC coroutine                   & 155.6         & 155.7         & 1.7           \\
     3178\uC thread                              & 523.4         & 523.9         & 7.7           \\
     3179Python generator                & 123.2         & 124.3         & 4.1           \\
     3180Node.js generator               & 32.3          & 32.2          & 0.3           \\
     3181Goroutine thread                & 751.0         & 750.5         & 3.1           \\
     3182Rust thread                             & 53801.0       & 53896.8       & 274.9         \\
     3183Java thread                             & 120274.0      & 120722.9      & 2356.7        \\
     3184Pthreads thread                 & 31465.5       & 31419.5       & 140.4
     3185\end{tabular}
     3186\end{multicols}
     3187
     3188
     3189\subsection{Discussion}
     3190
     3191Languages using 1:1 threading based on pthreads can at best meet or exceed (due to language overhead) the pthread results.
     3192Note, pthreads has a fast zero-contention mutex lock checked in user space.
      3193Languages with M:N threading have better performance than 1:1 because there are no operating-system interactions.
     3194Languages with stackful coroutines have higher cost than stackless coroutines because of stack allocation and context switching;
     3195however, stackful \uC and \CFA coroutines have approximately the same performance as stackless Python and Node.js generators.
     3196The \CFA stackless generator is approximately 25 times faster for suspend/resume and 200 times faster for creation than stackless Python and Node.js generators.
     3197
    29723198
    29733199\section{Conclusion}
     
    29753201Advanced control-flow will always be difficult, especially when there is temporal ordering and nondeterminism.
    29763202However, many systems exacerbate the difficulty through their presentation mechanisms.
    2977 This paper shows it is possible to present a hierarchy of control-flow features, generator, coroutine, thread, and monitor, providing an integrated set of high-level, efficient, and maintainable control-flow features.
    2978 Eliminated from \CFA are spurious wakeup and barging, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques.
    2979 \CFA high-level race-free monitors and tasks provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@.
      3203This paper shows it is possible to understand high-level control-flow using three properties: statefulness, thread, and mutual-exclusion/synchronization.
      3204Combining these properties creates a number of high-level, efficient, and maintainable control-flow types: generator, coroutine, and thread, each of which can be a monitor.
     3205Eliminated from \CFA are barging and spurious wakeup, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques.
     3206\CFA high-level race-free monitors and threads provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@.
    29803207Extending these mechanisms to handle high-level deadlock-free bulk acquire across both mutual exclusion and synchronization is a unique contribution.
    29813208The \CFA runtime provides concurrency based on a preemptive M:N user-level threading-system, executing in clusters, which encapsulate scheduling of work on multiple kernel threads providing parallelism.
    29823209The M:N model is judged to be efficient and provide greater flexibility than a 1:1 threading model.
    29833210These concepts and the \CFA runtime-system are written in the \CFA language, extensively leveraging the \CFA type-system, which demonstrates the expressiveness of the \CFA language.
    2984 Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all low-level operations, which translates directly into good performance in well-written concurrent applications.
    2985 C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need.
     3211Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all basic operations, which translates directly into good performance in well-written applications with advanced control-flow.
     3212C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need using only calling communication.
    29863213
    29873214
     
    30033230\label{futur:nbio}
    30043231
    3005 Many modern workloads are not bound by computation but IO operations, a common case being web servers and XaaS~\cite{XaaS} (anything as a service).
     3232Many modern workloads are not bound by computation but IO operations, common cases being web servers and XaaS~\cite{XaaS} (anything as a service).
     30063233These types of workloads require significant engineering to amortize the costs of blocking IO operations.
    30073234At its core, non-blocking I/O is an operating-system level feature queuing IO operations, \eg network operations, and registering for notifications instead of waiting for requests to complete.
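A hedged sketch of this registration pattern using Linux @epoll@ (the event-queue descriptor @epfd@ and socket @sock@ are assumed):
\begin{cfa}
#include <sys/epoll.h>
void register_read( int epfd, int sock ) {      // epfd/sock assumed
	struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = sock } };
	epoll_ctl( epfd, EPOLL_CTL_ADD, sock, &ev ); // queue interest without blocking
}
// later, epoll_wait( epfd, events, max, timeout ) returns the ready descriptors
\end{cfa}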
     
    30313258\section{Acknowledgements}
    30323259
    3033 The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach and Michael Brooks on the features described in this paper.
    3034 Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}). %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
     3260The authors recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach, and Michael Brooks; David Dice for commenting and helping with the Java benchmarks; and Gregor Richards for helping with the Node.js benchmarks.
     3261This research is funded by a grant from Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
    30353262
    30363263{%
    3037 \fontsize{9bp}{12bp}\selectfont%
     3264\fontsize{9bp}{11.5bp}\selectfont%
    30383265\bibliography{pl,local}
    30393266}%