Timestamp:
Feb 26, 2020, 6:13:54 PM (4 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
04e6f93, 5452673, dac55004
Parents:
aeb5d0d (diff), 7dc2e015 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    raeb5d0d r930b504  
    6161\newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name
    6262\newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name
    63 \newcommand{\Csharp}{C\raisebox{-0.7ex}{\Large$^\sharp$}\xspace} % C# symbolic name
     63\newcommand{\Csharp}{C\raisebox{-0.7ex}{\large$^\sharp$}\xspace} % C# symbolic name
    6464
    6565%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
    127127\newcommand*{\etc}{%
    128128        \@ifnextchar{.}{\ETC}%
    129         {\ETC.\xspace}%
     129                {\ETC.\xspace}%
    130130}}{}%
    131131\@ifundefined{etal}{
    132132\newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}}
    133133\newcommand*{\etal}{%
    134         \@ifnextchar{.}{\protect\ETAL}%
    135                 {\protect\ETAL.\xspace}%
     134        \@ifnextchar{.}{\ETAL}%
     135                {\ETAL.\xspace}%
    136136}}{}%
    137137\@ifundefined{viz}{
     
    163163                __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
    164164                inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
    165                 otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,
     165                otype, restrict, resume, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread,
    166166                _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
    167167                virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
    168168        moredirectives={defined,include_next},
    169169        % replace/adjust listing characters that look bad in sanserif
    170         literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     170        literate={-}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
    171171                {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
    172172                {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
     
    197197                _Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
    198198                _Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
    199 }
    200 \lstdefinelanguage{Golang}{
    201         morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
    202         morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
    203                 bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
    204         morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
    205         morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
    206         morekeywords=[5]{Println,Printf,Error,},
    207         sensitive=true,
    208         morecomment=[l]{//},
    209         morecomment=[s]{/*}{*/},
    210         morestring=[b]',
    211         morestring=[b]",
    212         morestring=[s]{`}{`},
    213199}
    214200
     
    241227{}
    242228\lstnewenvironment{uC++}[1][]
    243 {\lstset{#1}}
     229{\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
    244230{}
    245231\lstnewenvironment{Go}[1][]
     
    282268\CFA is a polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
    283269This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA.
    284 These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library features like pthreads.
     270These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library approaches like pthreads.
    285271\CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
    286272% Library extension for executors, futures, and actors are built on these basic mechanisms.
     
    295281
    296282\begin{document}
    297 \linenumbers                                            % comment out to turn off line numbering
     283\linenumbers                            % comment out to turn off line numbering
    298284
    299285\maketitle
     
    302288\section{Introduction}
    303289
    304 This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18,Cforall} and its runtime, which is written entirely in \CFA.
    305 \CFA is a modern, polymorphic, non-object-oriented\footnote{
    306 \CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance.
     290\CFA~\cite{Moss18,Cforall} is a modern, polymorphic, non-object-oriented\footnote{
     291\CFA has object-oriented features, such as constructors, destructors, virtuals and simple trait/interface inheritance.
     292% Go interfaces, Rust traits, Swift Protocols, Haskell Type Classes and Java Interfaces.
     293% "Trait inheritance" works for me. "Interface inheritance" might also be a good choice, and distinguish clearly from implementation inheritance.
     294% You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code. Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type".
    307295However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and a set of functions (member/method) implemented by an implicit \lstinline@this@ (receiver) parameter.},
    308296backwards-compatible extension of the C programming language.
    309 In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language allowing immediate dissemination.
    310 Within the \CFA framework, new control-flow features are created from scratch because ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping features are concurrency~\cite[\S~7.26]{C11}.
    311 However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}, and \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and mutex/condition locks, which is low-level and error-prone;
    312 no high-level language concurrency features are defined.
    313 Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating little interest in the C11 concurrency approach (possibly because the effort to add concurrency to \CC).
    314 Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}.
    315 
     297In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language\footnote{
     298The TIOBE index~\cite{TIOBE} for December 2019 ranks the top five \emph{popular} programming languages as Java 17\%, C 16\%, Python 10\%, \CC 6\%, and \Csharp 5\% (54\% in total), and over the past 30 years, C has always ranked either first or second in popularity.}
     299allowing immediate dissemination.
     300This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA and its runtime, which is written entirely in \CFA.
     301The \CFA control-flow framework extends ISO \Celeven~\cite{C11} with new call/return and concurrent/parallel control-flow.
     302
     303% The call/return extensions retain state between callee and caller versus losing the callee's state on return;
     304% the concurrency extensions allow high-level management of threads.
     305
     306Call/return control-flow with argument/parameter passing appeared in the first programming languages.
     307Over the past 50 years, call/return has been augmented with features like static/dynamic call, exceptions (multi-level return) and generators/coroutines (retain state between calls).
     308While \CFA has mechanisms for dynamic call (algebraic effects) and exceptions\footnote{
     309\CFA exception handling will be presented in a separate paper.
     310The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.}, this work only discusses retaining state between calls via generators/coroutines.
     311\newterm{Coroutining} was introduced by Conway~\cite{Conway63} (1963), discussed by Knuth~\cite[\S~1.4.2]{Knuth73V1}, implemented in Simula67~\cite{Simula67}, formalized by Marlin~\cite{Marlin80}, and is now popular and appears in old and new programming languages: CLU~\cite{CLU}, \Csharp~\cite{Csharp}, Ruby~\cite{Ruby}, Python~\cite{Python}, JavaScript~\cite{JavaScript}, Lua~\cite{Lua}, \CCtwenty~\cite{C++20Coroutine19}.
     312Coroutining is sequential execution requiring direct handoff among coroutines, \ie only the programmer is controlling execution order.
     313If coroutines transfer to an internal event-engine for scheduling the next coroutines, the program transitions into the realm of concurrency~\cite[\S~3]{Buhr05a}.
     314Coroutines are only a stepping stone towards concurrency where the commonality is that coroutines and threads retain state between calls.
     315
     316\Celeven/\CCeleven define concurrency~\cite[\S~7.26]{C11}, but it is largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork/join in a function and mutex/condition locks, which is low-level and error-prone.}
     317Interestingly, almost a decade after the \Celeven standard, neither gcc-9, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating no interest in the C11 concurrency approach (possibly because of the recent effort to add concurrency to \CC).
     318While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC.
    316319In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages.
    317320As multi-core hardware became available in the 1980/90s, both user and kernel threading were examined.
    318321Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}.
    319322Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads.
    320 As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
    321 From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}.
    322 The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work units to facilitate load balancing by the runtime~\cite{Verch12}.
     323As a result, many current language implementations adopt the 1:1 kernel-threading model, like Java (Scala), Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, C\#~\cite{Csharp} and Rust~\cite{Rust}, with a variety of presentation mechanisms.
     324From 2000 onwards, several language implementations have championed the M:N user-threading model, like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book}, including putting green threads back into Java~\cite{Quasar}, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}.
     325The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium-sized work to facilitate load balancing by the runtime~\cite{Verch12}.
    323326As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}.
    324327Finally, performant user-threading implementations (both time and space) meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety.
    325328
    326 A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \ie some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
     329A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \eg some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
    327330The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler.
    328331One solution is low-level qualifiers and functions (\eg @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs.
    329 A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety.
    330 This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{
    331 \CFA exception handling will be presented in a separate paper.
    332 The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}
    333 } and coroutines.
    334 Finally, language solutions allow matching constructs with language paradigm, \ie imperative and functional languages often have different presentations of the same concept to fit their programming model.
    335 
    336 Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
    337 Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging}\footnote{
    338 The notion of competitive succession instead of direct handoff, \ie a lock owner releases the lock and an arriving thread acquires it ahead of preexisting waiter threads.
     332A safer solution is high-level language constructs so the \emph{compiler} knows the concurrency boundaries (where mutual exclusion and synchronization are acquired/released) and provides implicit safety at and across these boundaries.
     333While the optimization problem is best known with respect to concurrency, it applies to other complex control-flow, like exceptions and coroutines.
     334As well, language solutions allow matching the language paradigm with the approach, \eg matching the functional paradigm with data-flow programming or the imperative paradigm with thread programming.
     335
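
As a concrete illustration of the low-level approach, a C11 atomic qualifier lets the programmer write a race-free shared counter without any high-level construct; this is an editorial sketch (the counter/worker names are ours, not from the paper):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int counter;                              // C11 atomic: each update is indivisible

void * worker( void * arg ) {
        for ( int i = 0; i < 100000; i += 1 )
                atomic_fetch_add( &counter, 1 ); // race-free read-modify-write
        return NULL;
}

int main() {
        pthread_t t1, t2;
        pthread_create( &t1, NULL, worker, NULL );
        pthread_create( &t2, NULL, worker, NULL );
        pthread_join( t1, NULL );
        pthread_join( t2, NULL );
        printf( "%d\n", atomic_load( &counter ) ); // always 200000; a plain int may lose updates
}

The burden here falls entirely on the programmer to find every access needing the qualifier, which is the safety gap the high-level constructs below close.
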
     336Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety (unsafe code) for performance when necessary.
     337Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~9]{Buhr05a}) and \emph{barging}\footnote{
     338Barging is competitive succession instead of direct handoff, \ie after a lock is released both arriving and preexisting waiter threads compete to acquire the lock.
     339Hence, an arriving thread can temporally \emph{barge} ahead of threads already waiting for an event, which can repeat indefinitely leading to starvation of waiter threads.
    339340} (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, \ie once there is spurious wakeup, signals-as-hints follow.
    340 However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}, it is a performance design choice.
    341 Similarly, signals-as-hints are often a performance decision.
    342 We argue removing spurious wakeup and signals-as-hints make concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation.
    343 (Author experience teaching concurrency is that students are highly confused by these semantics.)
    344 Clawing back performance, when local non-determinism is unimportant, should be an option not the default.
    345 
    346 \begin{comment}
    347 Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and are not systems languages, for those with managed memory.
    348 As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten.
    349 While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing complex and interdependent features-set (\eg objects, inheritance, templates, etc.) mean idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC.
    350 Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base.
    351 \CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems.
    352 \end{comment}
    353 
    354 \CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default.
    355 We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
     341(Author experience teaching concurrency is that students are confused by these semantics.)
     342However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~9]{Buhr05a};
     343it is a performance design choice.
     344We argue removing spurious wakeup and signals-as-hints makes concurrent programming simpler and safer as there is less local non-determinism to manage.
     345If barging acquisition is allowed, its specialized performance advantage should be available as an option not the default.
     346
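For example, because pthreads permits both violations, a correct waiter must treat a signal as only a hint and re-check its condition in a loop; a minimal sketch (names ours):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t c = PTHREAD_COND_INITIALIZER;
int ready = 0;                                   // the actual event state

void wait_ready( void ) {
        pthread_mutex_lock( &m );
        while ( ! ready )                        // loop, not if: the wakeup may be spurious,
                pthread_cond_wait( &c, &m );     // or a barging thread consumed the event
        ready = 0;                               // consume the event
        pthread_mutex_unlock( &m );
}

void post_ready( void ) {
        pthread_mutex_lock( &m );
        ready = 1;
        pthread_cond_signal( &c );               // only a hint that ready may be true
        pthread_mutex_unlock( &m );
}
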
     347\CFA embraces language extensions for advanced control-flow, user-level threading, and safety as the default.
     348We present comparative examples to support our argument that the \CFA control-flow extensions are as expressive and safe as those in other concurrent imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
    356349The main contributions of this work are:
    357 \begin{itemize}[topsep=3pt,itemsep=1pt]
     350\begin{itemize}[topsep=3pt,itemsep=0pt]
    358351\item
    359 language-level generators, coroutines and user-level threading, which respect the expectations of C programmers.
     352a set of fundamental execution properties that dictate which language-level control-flow features need to be supported,
     353
    360354\item
    361 monitor synchronization without barging, and the ability to safely acquiring multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
     355integration of these language-level control-flow features, while respecting the style and expectations of C programmers,
     356
    362357\item
    363 providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features.
     358monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms,
     359
     360\item
     361providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features,
     362
    364363% \item
    365364% library extensions for executors, futures, and actors built on the basic mechanisms.
     365
    366366\item
    367 a runtime system with no spurious wakeup.
     367a runtime system without spurious wakeup and with no performance loss,
     368
    368369\item
    369 a dynamic partitioning mechanism to segregate the execution environment for specialized requirements.
     370a dynamic partitioning mechanism to segregate groups of executing user and kernel threads performing specialized work (\eg web-server or compute engine) or requiring different scheduling (\eg NUMA or real-time).
     371
    370372% \item
    371373% a non-blocking I/O library
     374
    372375\item
    373 experimental results showing comparable performance of the new features with similar mechanisms in other programming languages.
     376experimental results showing comparable performance of the \CFA features with similar mechanisms in other languages.
    374377\end{itemize}
    375378
    376 Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, which produces constructs @generator@ and @coroutine@.
    377 Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, which produces the @thread@ construct.
     379Section~\ref{s:FundamentalExecutionProperties} presents the compositional hierarchy of execution properties directing the design of control-flow features in \CFA.
     380Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, producing the constructs @generator@ and @coroutine@.
     381Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, producing the @thread@ construct.
    378382Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms to restrict nondeterminism when controlling shared access to resources (mutual exclusion) and timing relationships among threads (synchronization).
    379383Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs.
    380384Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure (cluster) threads and virtual processors (kernel threads).
    381 Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
     385Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0.
     386
     387
     388\section{Fundamental Execution Properties}
     389\label{s:FundamentalExecutionProperties}
     390
     391The features in a programming language should be composed from a set of fundamental properties rather than an ad hoc collection chosen by the designers.
     392To this end, the control-flow features created for \CFA are based on the fundamental properties of any language with function-stack control-flow (see also \uC~\cite[pp.~140-142]{uC++}).
     393The fundamental properties are execution state, thread, and mutual-exclusion/synchronization (MES).
     394These independent properties can be used alone, in pairs, or in triplets to compose different language features, forming a compositional hierarchy where the most advanced feature has all the properties (state/thread/MES).
     395While it is possible for a language to only support the most advanced feature~\cite{Hermes90}, this unnecessarily complicates, and makes inefficient, solutions to certain classes of problems.
     396As is shown, each of the (non-rejected) composed features solves a particular set of problems, and hence, has a defensible position in a programming language.
     397If a compositional feature is missing, a programmer has too few/too many fundamental properties, resulting in a complex and/or inefficient solution.
     398
     399In detail, the fundamental properties are:
     400\begin{description}[leftmargin=\parindent,topsep=3pt,parsep=0pt]
     401\item[\newterm{execution state}:]
     402is the state information needed by a control-flow feature to initialize, manage computation data and execution location(s), and de-initialize.
     403State is retained in fixed-sized aggregate structures and dynamic-sized stack(s), often allocated in the heap(s) managed by the runtime system.
     404The lifetime of the state varies with the control-flow feature, where longer life-time and dynamic size provide greater power but also increase usage complexity and cost.
     405Control-flow transfers among execution states occur in multiple ways, such as function call, context switch, asynchronous await, etc.
     406Because the programming language determines what constitutes an execution state, implicitly manages this state, and defines movement mechanisms among states, execution state is an elementary property of the semantics of a programming language.
     407% An execution-state is related to the notion of a process continuation \cite{Hieb90}.
     408
     409\item[\newterm{threading}:]
     410is execution of code that occurs independently of other execution, \ie the execution resulting from a thread is sequential.
     411Multiple threads provide \emph{concurrent execution};
     412concurrent execution becomes parallel when run on multiple processing units (hyper-threading, cores, sockets).
     413There must be language mechanisms to create, block/unblock, and join with a thread.
     414
     415\item[\newterm{MES}:]
     416comprises the concurrency mechanisms to perform an action without interruption (mutual exclusion) and to establish timing relationships among multiple threads (synchronization).
     417These two properties are independent, \ie mutual exclusion cannot provide synchronization and vice versa without introducing additional threads~\cite[\S~4]{Buhr05a}; a small C sketch after this list illustrates the independence.
     418Limiting MES, \eg no access to shared data, results in contrived solutions and inefficiency on multi-core von Neumann computers where shared memory is a foundational aspect of their design.
     419\end{description}
     420These properties are fundamental because they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features, concurrency, or provide MES using atomic hardware mechanisms.
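
To make the MES independence concrete, here is a small editorial C sketch (all names ours): the lock makes an action indivisible but imposes no order between threads, while the flag imposes an order but makes nothing indivisible.

#include <pthread.h>
#include <stdatomic.h>

// Mutual exclusion: the update cannot be interrupted, but thread order is unspecified.
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;
void update( void ) {
        pthread_mutex_lock( &lock );
        shared += 1;                             // indivisible while the lock is held
        pthread_mutex_unlock( &lock );
}

// Synchronization: consume() is ordered after produce(), but neither is indivisible.
atomic_int produced;
int data;
void produce( void ) { data = 42; atomic_store( &produced, 1 ); }
void consume( void ) {
        while ( ! atomic_load( &produced ) ) {}  // establish the timing relationship
        // data is guaranteed valid here
}
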
     421
     422
     423\subsection{Execution Properties}
     424
     425Table~\ref{t:ExecutionPropertyComposition} shows how the three fundamental execution properties (state, thread, and mutual exclusion) compose a hierarchy of control-flow features needed in a programming language.
     426(When doing case analysis, not all combinations are meaningful.)
     427Note, basic von Neumann execution requires at least one thread and an execution state providing some form of call stack.
     428For table entries missing these minimal components, the property is borrowed from the invoker (caller).
     429
     430Case 1 is a function that borrows storage for its state (stack frame/activation) and a thread from its invoker and retains this state across \emph{callees}, \ie function local-variables are retained on the stack across calls.
     431Case 2 is case 1 with access to shared state so callers are restricted during update (mutual exclusion) and scheduling for other threads (synchronization).
     432Case 3 is a stateful function supporting resume/suspend along with call/return to retain state across \emph{callers}, but has some restrictions because the function's state is stackless.
     433Note, stackless functions still borrow the caller's stack and thread, where the stack is used to preserve state across their callees.
     434Case 4 is cases 2 and 3 with protection to shared state for stackless functions.
     435Cases 5 and 6 are the same as 3 and 4 but only the thread is borrowed as the function state is stackful, so resume/suspend is a context switch from the caller's to the function's stack.
     436Cases 7 and 8 are rejected because a function that is given a new thread must have its own stack where the thread begins and stack frames are stored for calls, \ie there is no stack to borrow.
     437Cases 9 and 10 are rejected because a thread with a fixed state (no stack) cannot accept calls, make calls, block, or be preempted, all of which require an unknown amount of additional dynamic state.
     438Hence, once started, this kind of thread must execute to completion, \ie computation only, which severely restricts runtime management.
     439Cases 11 and 12 have a stackful thread with and without safe access to shared state.
     440Execution properties increase the cost of creation and execution along with complexity of usage.
     441
     442\begin{table}
     443\caption{Execution property composition}
     444\centering
     445\label{t:ExecutionPropertyComposition}
     446\renewcommand{\arraystretch}{1.25}
     447%\setlength{\tabcolsep}{5pt}
     448\begin{tabular}{c|c||l|l}
     449\multicolumn{2}{c||}{execution properties} & \multicolumn{2}{c}{mutual exclusion / synchronization} \\
     450\hline
     451stateful                        & thread        & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\
     452\hline   
     453\hline   
     454No                                      & No            & \textbf{1}\ \ \ function                              & \textbf{2}\ \ \ @monitor@ function    \\
     455\hline   
     456Yes (stackless)         & No            & \textbf{3}\ \ \ @generator@                   & \textbf{4}\ \ \ @monitor@ @generator@ \\
     457\hline   
     458Yes (stackful)          & No            & \textbf{5}\ \ \ @coroutine@                   & \textbf{6}\ \ \ @monitor@ @coroutine@ \\
     459\hline   
     460No                                      & Yes           & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\
     461\hline   
     462Yes (stackless)         & Yes           & \textbf{9}\ \ \ {\color{red}rejected} & \textbf{10}\ \ \ {\color{red}rejected} \\
     463\hline   
     464Yes (stackful)          & Yes           & \textbf{11}\ \ \ @thread@                             & \textbf{12}\ \ @monitor@ @thread@             \\
     465\end{tabular}
     466\end{table}
     467
     468Given the execution-properties taxonomy, programmers can now answer three basic questions: is state necessary across calls, and how much; is a separate thread necessary; and is access to shared state necessary?
     469The answers define the optimal language feature needed for implementing a programming problem.
     470The next sections discuss how \CFA fills in the table with language features, while other programming languages may only provide a subset of the table.
     471
     472
     473\subsection{Design Requirements}
     474
     475The following design requirements largely stem from building \CFA on top of C.
     476\begin{itemize}[topsep=3pt,parsep=0pt]
     477\item
     478All communication must be statically type checkable for early detection of errors and efficient code generation.
     479This requirement is consistent with the fact that C is a statically-typed programming-language.
     480
     481\item
     482Direct interaction among language features must be possible allowing any feature to be selected without restricting comm\-unication.
     483For example, many concurrent languages do not provide direct communication (calls) among threads, \ie threads only communicate indirectly through monitors, channels, messages, and/or futures.
     484Indirect communication increases the number of objects, consuming more resources, and requires additional synchronization and possibly data transfer.
     485
     486\item
     487All communication is performed using function calls, \ie data is transmitted from argument to parameter and results are returned from function calls.
     488Alternative forms of communication, such as call-backs, message passing, channels, or communication ports, step outside of C's normal form of communication.
     489
     490\item
     491All stateful features must follow the same declaration scopes and lifetimes as other language data.
     492For C that means at program startup, during block and function activation, and on demand using dynamic allocation.
     493
     494\item
     495MES must be available implicitly in language constructs as well as explicitly for specialized requirements, because requiring programmers to build MES using low-level locks often leads to incorrect programs.
     496Furthermore, reducing synchronization scope by encapsulating it within language constructs further reduces errors in concurrent programs.
     497
     498\item
     499Both synchronous and asynchronous communication are needed.
     500However, we believe the best way to provide asynchrony, such as call-buffering/chaining and/or returning futures~\cite{multilisp}, is building it from expressive synchronous features.
     501
     502\item
     503Synchronization must be able to control the service order of requests including prioritizing selection from different kinds of outstanding requests, and postponing a request for an unspecified time while continuing to accept new requests.
     504Otherwise, certain concurrency problems are difficult, \eg web server and disk scheduling, and the amount of concurrency is inhibited~\cite{Gentleman81}.
     505\end{itemize}
     506We have satisfied these requirements in \CFA while maintaining backwards compatibility with the huge body of legacy C programs.
     507% In contrast, other new programming languages must still access C programs (\eg operating-system service routines), but do so through fragile C interfaces.
     508
     509
     510\subsection{Asynchronous Await / Call}
     511
     512Asynchronous await/call is a caller mechanism for structuring programs and/or increasing concurrency, where the caller (client) postpones an action into the future, which is subsequently executed by a callee (server).
     513The caller detects the action's completion through a \newterm{future}/\newterm{promise}.
     514The benefit is asynchronous caller execution with respect to the callee until future resolution.
     515For single-threaded languages like JavaScript, an asynchronous call passes a callee action, which is queued in the event-engine, and continues execution with a promise.
     516When the caller needs the promise to be fulfilled, it executes @await@.
     517A promise-completion call-back can be part of the callee action or the caller is rescheduled;
     518in either case, the call-back is executed after the promise is fulfilled.
     519While asynchronous calls generate new callee (server) events, we contend this mechanism is insufficient for advanced control-flow mechanisms like generators or coroutines (which are discussed next).
     520Specifically, control between caller and callee occurs indirectly through the event-engine precluding direct handoff and cycling among events, and requires complex resolution of a control promise and data.
     521Note, @async-await@ is just syntactic-sugar over the event engine so it does not solve these deficiencies.
     522For multi-threaded languages like Java, the asynchronous call queues a callee action with an executor (server), which subsequently executes the work by a thread in the executor thread-pool.
     523The problem is when concurrent work-units need to interact and/or block as this affects the executor, \eg stops threads.
     524While it is possible to extend this approach to support the necessary mechanisms, \eg message passing in Actors, we show monitors and threads provide an equally competitive approach that does not deviate from normal call communication and can be used to build asynchronous call, as is done in Java.
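
As the design requirements suggest, a future/promise can itself be built from expressive synchronous features; a minimal one-shot sketch in C with pthreads (future_t, fulfill, and await are our hypothetical names, not the paper's API):

#include <pthread.h>

typedef struct {                                 // one-shot future
        pthread_mutex_t m;
        pthread_cond_t c;
        int fulfilled, value;
} future_t;

#define FUTURE_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

void fulfill( future_t * f, int v ) {            // callee (server) side
        pthread_mutex_lock( &f->m );
        f->value = v;  f->fulfilled = 1;
        pthread_cond_broadcast( &f->c );
        pthread_mutex_unlock( &f->m );
}

int await( future_t * f ) {                      // caller blocks only at resolution
        pthread_mutex_lock( &f->m );
        while ( ! f->fulfilled )                 // guard against spurious wakeup
                pthread_cond_wait( &f->c, &f->m );
        int v = f->value;
        pthread_mutex_unlock( &f->m );
        return v;
}

The caller runs asynchronously with respect to the callee between the call and await, which is the behaviour the async-await syntax packages, without involving an event-engine.
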
    382525
    383526
     
    385528\label{s:StatefulFunction}
    386529
    387 The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine.
    388 Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
    389 This capability is accomplished by retaining a data/execution \emph{closure} between invocations.
    390 If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited.
    391 If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions.
    392 Hence, refactoring a stackless coroutine may require changing it to stackful.
    393 A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack.
    394 As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
    395 A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
    396 Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance.
    397 Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
    398 Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.
     530A \emph{stateful function} has the ability to remember state between calls, where state can be either data or execution, \eg plugin, device driver, finite-state machine (FSM).
     531A simple technique to retain data state between calls is @static@ declarations within a function, which is often implemented by hoisting the declarations to the global scope but hiding the names within the function using name mangling.
     532However, each call starts the function at the top, making it difficult to determine the last point of execution in an algorithm, and requiring multiple flag variables and testing to reestablish the continuation point.
     533Hence, the next step of generalizing function state is implicitly remembering the return point between calls and reentering the function at this point rather than the top, called \emph{generators}\,/\,\emph{iterators} or \emph{stackless coroutines}.
     534For example, a Fibonacci generator retains data and execution state allowing it to remember prior values needed to generate the next value and the location in the algorithm to compute that value.
     535The next step of generalization is instantiating the function to allow multiple named instances, \eg multiple Fibonacci generators, where each instance has its own state, and hence, can generate an independent sequence of values.
     536Note, a subset of generator state is a function \emph{closure}, \ie the technique of capturing lexical references when returning a nested function.
     537A further generalization is adding a stack to a generator's state, called a \emph{coroutine}, so it can suspend outside of itself, \eg call helper functions to arbitrary depth before suspending back to its resumer without unwinding these calls.
     538For example, a coroutine iterator for a binary tree can stop the traversal at the visit point (pre, infix, post traversal), return the node value to the caller, and then continue the recursive traversal from the current node on the next call.
     539
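For instance, the static-declaration technique applied to Fibonacci might look like the following editorial C sketch, where the explicit restart flag is exactly the bookkeeping a generator makes implicit:

#include <stdio.h>

// Data state retained via static declarations; execution state simulated
// with an explicit flag variable.
int fib( void ) {
        static int restart = 0, fn = 0, fn1 = 1; // persist across calls
        switch ( restart ) {                     // reestablish continuation point
          case 0: restart = 1; return 0;         // first call
          case 1: restart = 2; return 1;         // second call
          default: {                             // all later calls
                int next = fn + fn1;
                fn = fn1;  fn1 = next;
                return next;
          }
        }
}

int main() {
        for ( int i = 0; i < 10; i += 1 )
                printf( "%d ", fib() );          // 0 1 1 2 3 5 8 13 21 34
}

Because the state is static, only one sequence instance exists; hoisting the state into a structure, as in the paper's Fib type below, is the generalization to multiple named instances.
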
     540There are two styles of activating a stateful function, \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
     541These styles \emph{do not} cause incremental stack growth, \eg a million resume/suspend or resume/resume cycles do not remember each cycle, just the last resumer for each cycle.
     542Selecting between stackless/stackful semantics and asymmetric/symmetric style is a tradeoff between programming requirements, performance, and design, where stackless is faster and smaller (modified call/return between closures), stackful is more general but slower and larger (context switching between distinct stacks), and asymmetric is simpler control-flow than symmetric.
     543Additionally, storage management for the closure/stack (especially in unmanaged languages, \ie no garbage collection) must be factored into design and performance.
     544Note, creation cost (closure/stack) is amortized across usage, so activation cost (resume/suspend) is usually the dominant factor.
     545
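One way to see why resume/resume cycles need not grow the stack is a trampoline-style editorial C sketch (all names ours): each handoff returns to a driver loop, so a cycle remembers only who runs next, never a chain of stack frames:

#include <stdio.h>

typedef struct Gen Gen;
struct Gen { Gen * partner; int id, count; };    // fixed-size closure

// One activation: do a step, then "resume" the partner by returning it.
Gen * cycle( Gen * g ) {
        printf( "gen %d, cycle %d\n", g->id, g->count );
        g->count += 1;
        return g->partner;                       // this stack frame unwinds here
}

int main() {
        Gen g1, g2;
        g1 = (Gen){ &g2, 1, 0 };  g2 = (Gen){ &g1, 2, 0 };
        Gen * cur = &g1;
        for ( int i = 0; i < 6; i += 1 )         // a million cycles: constant stack depth
                cur = cycle( cur );
}
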
     546% The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine.
     547% Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
     548% If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited.
     549% If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions.
     550% Hence, refactoring a stackless coroutine may require changing it to stackful.
     551% A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack.
     552% As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
     553% A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
     554% Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance.
     555% Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
     556% Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.
     557
     558For example, Python presents asymmetric generators as a function object, \uC presents symmetric coroutines as a \lstinline[language=C++]|class|-like object, and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
     559\begin{center}
     560\begin{tabular}{@{}l|l|l@{}}
     561\multicolumn{1}{@{}c|}{Python asymmetric generator} & \multicolumn{1}{c|}{\uC symmetric coroutine} & \multicolumn{1}{c@{}}{Pthreads thread} \\
     562\hline
     563\begin{python}
     564`def Gen():` $\LstCommentStyle{\color{red}// function}$
     565        ... yield val ...
     566gen = Gen()
     567for i in range( 10 ):
     568        print( next( gen ) )
     569\end{python}
     570&
     571\begin{uC++}
     572`_Coroutine Cycle {` $\LstCommentStyle{\color{red}// class}$
     573        Cycle * p;
     574        void main() { p->cycle(); }
     575        void cycle() { resume(); }  `};`
     576Cycle c1, c2; c1.p=&c2; c2.p=&c1; c1.cycle();
     577\end{uC++}
     578&
     579\begin{cfa}
     580void * rtn( void * arg ) { ... }
     581int i = 3, rc;
     582pthread_t t; $\C{// thread id}$
     583$\LstCommentStyle{\color{red}// function pointer}$
     584rc = pthread_create( &t, NULL, `rtn`, (void *)i );
     585\end{cfa}
     586\end{tabular}
     587\end{center}
     588\CFA's preferred presentation model for generators/coroutines/threads is a hybrid of functions and classes, giving an object-oriented flavour.
     589Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type via the type's name.
     590The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed for flexibility reasons.
     591Each custom type is discussed in detail in the following sections.
     592
     593
     594\subsection{Generator}
     595
     596Stackless generators (Table~\ref{t:ExecutionPropertyComposition} case 3) have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
     597The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
     598A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.\footnote{
     599The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|?++|, \lstinline|++?|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|.
     600Operator \lstinline+|+ is overloaded for printing, like bit-shift \lstinline|<<| in \CC.
     601The \CFA \lstinline|with| clause opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics;
     602multiple aggregates may be opened.
     603\CFA has rebindable references \lstinline|int i, & ip = i, j; `&ip = &j;`| and non-rebindable references \lstinline|int i, & `const` ip = i, j; `&ip = &j;` // disallowed|.
     604}%
    399605
    400606\begin{figure}
     
    410616
    411617
     618
     619
    412620        int fn = f->fn; f->fn = f->fn1;
    413621                f->fn1 = f->fn + fn;
    414622        return fn;
    415 
    416623}
    417624int main() {
     
    432639void `main(Fib & fib)` with(fib) {
    433640
     641
    434642        [fn1, fn] = [1, 0];
    435643        for () {
     
    451659\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    452660typedef struct {
    453         int fn1, fn;  void * `next`;
     661        int `restart`, fn1, fn;
    454662} Fib;
    455 #define FibCtor { 1, 0, NULL }
     663#define FibCtor { `0`, 1, 0 }
    456664Fib * comain( Fib * f ) {
    457         if ( f->next ) goto *f->next;
    458         f->next = &&s1;
     665        `static void * states[] = {&&s0, &&s1};`
     666        `goto *states[f->restart];`
     667  s0: f->`restart` = 1;
    459668        for ( ;; ) {
    460669                return f;
    461670          s1:; int fn = f->fn + f->fn1;
    462                         f->fn1 = f->fn; f->fn = fn;
     671                f->fn1 = f->fn; f->fn = fn;
    463672        }
    464673}
     
    472681\end{lrbox}
    473682
    474 \subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}
     683\subfloat[C]{\label{f:CFibonacci}\usebox\myboxA}
    475684\hspace{3pt}
    476685\vrule
    477686\hspace{3pt}
    478 \subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     687\subfloat[\CFA]{\label{f:CFAFibonacciGen}\usebox\myboxB}
    479688\hspace{3pt}
    480689\vrule
    481690\hspace{3pt}
    482 \subfloat[C generator implementation]{\label{f:CFibonacciSim}\usebox\myboxC}
     691\subfloat[C generated code for \CFA version]{\label{f:CFibonacciSim}\usebox\myboxC}
    483692\caption{Fibonacci (output) asymmetric generator}
    484693\label{f:FibonacciAsymmetricGenerator}
     
    493702};
    494703void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor
    495 void ^?{}( Fmt & f ) with(f) { $\C[1.75in]{// destructor}$
     704void ^?{}( Fmt & f ) with(f) { $\C[2.25in]{// destructor}$
    496705        if ( g != 0 || b != 0 ) sout | nl; }
    497706void `main( Fmt & f )` with(f) {
     
    499708                for ( ; g < 5; g += 1 ) { $\C{// groups}$
    500709                        for ( ; b < 4; b += 1 ) { $\C{// blocks}$
    501                                 `suspend;` $\C{// wait for character}$
    502                                 while ( ch == '\n' ) `suspend;` // ignore
    503                                 sout | ch;                                              // newline
    504                         } sout | " ";  // block spacer
    505                 } sout | nl; // group newline
     710                                do { `suspend;` $\C{// wait for character}$
     711                                } while ( ch == '\n' ); // ignore newline
     712                                sout | ch;                      $\C{// print character}$
     713                        } sout | " ";  $\C{// block separator}$
     714                } sout | nl; $\C{// group separator}$
    506715        }
    507716}
     
    521730\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    522731typedef struct {
    523         void * next;
     732        int `restart`, g, b;
    524733        char ch;
    525         int g, b;
    526734} Fmt;
    527735void comain( Fmt * f ) {
    528         if ( f->next ) goto *f->next;
    529         f->next = &&s1;
     736        `static void * states[] = {&&s0, &&s1};`
     737        `goto *states[f->restart];`
     738  s0: f->`restart` = 1;
    530739        for ( ;; ) {
    531740                for ( f->g = 0; f->g < 5; f->g += 1 ) {
    532741                        for ( f->b = 0; f->b < 4; f->b += 1 ) {
    533                                 return;
    534                           s1:;  while ( f->ch == '\n' ) return;
     742                                do { return;  s1: ;
     743                                } while ( f->ch == '\n' );
    535744                                printf( "%c", f->ch );
    536745                        } printf( " " );
     
    539748}
    540749int main() {
    541         Fmt fmt = { NULL };  comain( &fmt ); // prime
     750        Fmt fmt = { `0` };  comain( &fmt ); // prime
    542751        for ( ;; ) {
    543752                scanf( "%c", &fmt.ch );
     
    550759\end{lrbox}
    551760
    552 \subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}
    553 \hspace{3pt}
     761\subfloat[\CFA]{\label{f:CFAFormatGen}\usebox\myboxA}
     762\hspace{35pt}
    554763\vrule
    555764\hspace{3pt}
    556 \subfloat[C generator simulation]{\label{f:CFormatSim}\usebox\myboxB}
     765\subfloat[C generated code for \CFA version]{\label{f:CFormatGenImpl}\usebox\myboxB}
    557766\hspace{3pt}
    558767\caption{Formatter (input) asymmetric generator}
     
    560769\end{figure}
    561770
    562 Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
    563 For example, Python presents generators as a function object:
    564 \begin{python}
    565 def Gen():
    566         ... `yield val` ...
    567 gen = Gen()
    568 for i in range( 10 ):
    569         print( next( gen ) )
    570 \end{python}
    571 Boost presents coroutines in terms of four functor object-types:
    572 \begin{cfa}
    573 asymmetric_coroutine<>::pull_type
    574 asymmetric_coroutine<>::push_type
    575 symmetric_coroutine<>::call_type
    576 symmetric_coroutine<>::yield_type
    577 \end{cfa}
    578 and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}, \eg pthreads:
    579 \begin{cfa}
    580 void * rtn( void * arg ) { ... }
    581 int i = 3, rc;
    582 pthread_t t; $\C{// thread id}$
    583 `rc = pthread_create( &t, rtn, (void *)i );` $\C{// create and initialized task, type-unsafe input parameter}$
    584 \end{cfa}
    585 % void mycor( pthread_t cid, void * arg ) {
    586 %       int * value = (int *)arg;                               $\C{// type unsafe, pointer-size only}$
    587 %       // thread body
    588 % }
    589 % int main() {
    590 %       int input = 0, output;
    591 %       coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$
    592 %       coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$
    593 % }
    594 \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of objects and functions, with an object-oriented flavour.
    595 Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type.
    596 The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed.
    597 
    598 
    599 \subsection{Generator}
    600 
    601 Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
    602 The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
    603 A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.
    604 
    605 Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
      771Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written (left to right) in C and \CFA, along with the underlying C implementation for the \CFA version.
    606772This generator is an \emph{output generator}, producing a new result on each resumption.
    607773To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
     
    611777The C version only has the middle execution state because the top execution state is declaration initialization.
    612778Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type.
    613 This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{
    614 The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution.
    615 Hence, overloading this name for other starting points (generator/coroutine/thread) is a logical extension.}
    616 called a \emph{generator main},which takes as its only parameter a reference to the generator type.
     779Each generator type must have a function named \lstinline|main|,
     780% \footnote{
     781% The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution.
     782% Leveraging starting semantics to this name for generator/coroutine/thread is a logical extension.}
     783called a \emph{generator main} (leveraging the starting semantics for program @main@ in C), which is connected to the generator type via its single reference parameter.
    617784The generator main contains @suspend@ statements that suspend execution without ending the generator, in contrast to @return@, which ends it.
    618 For the Fibonacci generator-main,\footnote{
    619 The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics.
    620 Multiple aggregates may be opened.}
     785For the Fibonacci generator-main,
    621786the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@.
    622787Any local variables in @main@ \emph{are not retained} between calls;
     
    627792Resuming an ended (returned) generator is undefined.
    628793Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance.
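For example, a hypothetical print statement, assuming the @Fib@ type from Figure~\ref{f:CFAFibonacciGen} and the \CFA @sout@ stream:
\begin{cfa}
Fib fib;
sout | resume( fib ).fn; $\C{// cascade resume and field access in one expression}$
\end{cfa}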
    629 Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA generator only needs one additional field, @next@, to handle retention of execution state.
    630 The computed @goto@ at the start of the generator main, which branches after the previous suspend, adds very little cost to the resume call.
    631 Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.\footnote{
    632 The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|++?|, \lstinline|?++|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|.
    633 }%
     794Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA asymmetric generator.
     795Only one execution-state field, @restart@, is needed to subscript the suspension points in the generator.
      796At the start of the generator main, the @static@ declaration, @states@, is initialized to the $N$ suspend points in the generator (where operator @&&@ takes the address of a label~\cite{gccValueLabels}).
     797Next, the computed @goto@ selects the last suspend point and branches to it.
      798Setting @restart@ and branching via the computed @goto@ add very little cost to the suspend/resume calls.
     799
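The restart mechanism can be distilled into the following skeleton, a minimal sketch using the gcc label-address and computed-@goto@ extensions, where the @Gen@ type and the loop body are illustrative:
\begin{cfa}
typedef struct Gen { int restart; } Gen; // execution state; data state omitted
void comain( Gen * g ) {
	static void * states[] = { &&s0, &&s1 }; // && takes the address of a label
	goto *states[g->restart]; // branch to last suspend point
  s0: g->restart = 1; // subsequent resumes branch to s1
	for ( ;; ) {
		// ... compute next value ...
		return; // suspend
	  s1: ; // restart point after suspend
	}
}
\end{cfa}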
      800An advantage of the \CFA explicit generator type is that it allows multiple type-safe interface functions taking and returning arbitrary types.
    634801\begin{cfa}
    635802int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$
    636 int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// use function-call interface to skip N values}$
    637 double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}\CRT$
    638 sout | (int)f1() | (double)f1() | f2( 2 ); // alternative interface, cast selects call based on return type, step 2 values
     803int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// add parameter to skip N values}$
     804double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}$
     805Fib f;  int i;  double d;
     806i = f();  i = f( 2 );  d = f();                                         $\C{// alternative interfaces}\CRT$
    639807\end{cfa}
    640808Now, the generator can be a separately compiled opaque type, accessed only through its interface functions.
    641809For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type and hence has only a single interface, but provides an implicit closure.
    642810
    643 Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden.
    644 (This restriction is removed by the coroutine in Section~\ref{s:Coroutine}.)
    645 This requirement follows from the generality of variable-size local-state, \eg local state with a variable-length array requires dynamic allocation because the array size is unknown at compile time.
     811\begin{figure}
     812%\centering
     813\newbox\myboxA
     814\begin{lrbox}{\myboxA}
     815\begin{python}[aboveskip=0pt,belowskip=0pt]
     816def Fib():
     817        fn1, fn = 0, 1
     818        while True:
     819                `yield fn1`
     820                fn1, fn = fn, fn1 + fn
     821f1 = Fib()
     822f2 = Fib()
     823for i in range( 10 ):
     824        print( next( f1 ), next( f2 ) )
     825
     826
     827
     828
     829
     830
     831
     832
     833
     834
     835\end{python}
     836\end{lrbox}
     837
     838\newbox\myboxB
     839\begin{lrbox}{\myboxB}
     840\begin{python}[aboveskip=0pt,belowskip=0pt]
     841def Fmt():
     842        try:
     843                while True:                                             $\C[2.5in]{\# until destructor call}$
     844                        for g in range( 5 ):            $\C{\# groups}$
     845                                for b in range( 4 ):    $\C{\# blocks}$
     846                                        while True:
     847                                                ch = (yield)    $\C{\# receive from send}$
     848                                                if '\n' not in ch: $\C{\# ignore newline}$
     849                                                        break
     850                                        print( ch, end='' )     $\C{\# print character}$
     851                                print( '  ', end='' )   $\C{\# block separator}$
     852                        print()                                         $\C{\# group separator}$
     853        except GeneratorExit:                           $\C{\# destructor}$
      854                if g != 0 or b != 0:                            $\C{\# special case}$
     855                        print()
     856fmt = Fmt()
     857`next( fmt )`                                                   $\C{\# prime, next prewritten}$
     858for i in range( 41 ):
      859        `fmt.send( 'a' )`                                       $\C{\# send to yield}$
     860\end{python}
     861\end{lrbox}
     862
     863\hspace{30pt}
     864\subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
     865\hspace{3pt}
     866\vrule
     867\hspace{3pt}
     868\subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
     869\caption{Python generator}
     870\label{f:PythonGenerator}
     871\end{figure}
     872
     873Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden (removed by the coroutine in Section~\ref{s:Coroutine}).
     874This manual requirement follows from the generality of allowing variable-size local-state, \eg local state with a variable-length array requires dynamic allocation as the array size is unknown at compile time.
    646875However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming.
    647876But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated.
    648 With respect to safety, we believe static analysis can discriminate local state from temporary variables in a generator, \ie variable usage spanning @suspend@, and generate a compile-time error.
    649 Finally, our current experience is that most generator problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.
     877With respect to safety, we believe static analysis can discriminate persistent generator state from temporary generator-main state and raise a compile-time error for temporary usage spanning suspend points.
      878Our experience using generators is that most problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.
    650879As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators.
    651880
     
    669898The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
    670899The destructor provides a final newline, if the formatted text ends with a partial line.
    671 Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
    672 For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator.
    673 
    674 Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} asymmetric generator, a device-driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
    675 Device drives follow the pattern of simple data state but complex execution state, \ie finite state-machine (FSM) parsing a protocol.
    676 For example, the following protocol:
     900Figure~\ref{f:CFormatGenImpl} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
      901For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Python Fibonacci generator.
     902
     903% https://dl-acm-org.proxy.lib.uwaterloo.ca/
     904
      905Figure~\ref{f:DeviceDriverGen} shows an important application for an asymmetric generator, a device driver, because device drivers are a significant source of operating-system errors: 85\% in Windows XP~\cite[p.~78]{Swift05} and 51.6\% in Linux~\cite[p.~1358]{Xiao19}. %\cite{Palix11}
      906Swift \etal~\cite[p.~86]{Swift05} restructure device drivers using the Extension Procedure Call (XPC) within the kernel via functions @nooks_driver_call@ and @nooks_kernel_call@, which have coroutine-like properties, context switching to separate stacks with explicit hand-off calls;
     907however, the calls do not retain execution state, and hence always start from the top.
      908The alternative approach for implementing device drivers is stack ripping.
      909However, Adya \etal~\cite{Adya02} argue against stack ripping in Section 3.2 and suggest a hybrid approach in Section 4 using cooperatively scheduled \emph{fibers}, which is a form of coroutining.
     910
     911As an example, the following protocol:
    677912\begin{center}
    678913\ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots
    679914\end{center}
    680 is a network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
     915is for a simple network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
    681916Control characters may appear in a message if preceded by an ESC.
    682917When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register.
    683 The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer.
    684 Hence, the device driver is an input/output generator.
    685 
    686 Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent.
    687 As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variable that must be retained between calls and are manually hoisted into the generator type.
    688 % Manually, detecting and hoisting local-state variables is easy when the number is small.
    689 In contrast, the execution state is large, with one @resume@ and seven @suspend@s.
    690 Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach.
    691 Because FSMs can be complex and frequently occur in important domains, direct generator support is important in a system programming language.
      918The device driver returns a status code of its current state, and when a complete message is obtained, the operating system reads the message accumulated in the supplied buffer.
     919Hence, the device driver is an input/output generator, where the cost of resuming the device-driver generator is the same as call/return, so performance in an operating-system kernel is excellent.
      920The key benefits of using a generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than via table lookup or stack ripping.
     921The conclusion is that FSMs are complex and occur in important domains, so direct generator support is important in a system programming language.
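For comparison, a table-driven encoding of such a protocol FSM moves the execution states out of the control flow and into data; the following is a hypothetical sketch, where the state/event names are illustrative and error handling is ignored:
\begin{cfa}
enum State { WSTX, MSG, ESCAPE, CRC1, CRC2 }; // protocol states
enum Event { STX, ETX, ESC, BYTE }; // classification of the arriving byte
static const enum State next[][4] = { // next[state][event]
	[WSTX]   = { MSG,  WSTX, WSTX,   WSTX },
	[MSG]    = { MSG,  CRC1, ESCAPE, MSG  },
	[ESCAPE] = { MSG,  MSG,  MSG,    MSG  },
	[CRC1]   = { CRC2, CRC2, CRC2,   CRC2 },
	[CRC2]   = { WSTX, WSTX, WSTX,   WSTX },
};
static enum State state = WSTX;
void driver( enum Event e ) { // called for each interrupt
	state = next[state][e]; // buffering and CRC actions selected per state
}
\end{cfa}
Here, the protocol must be recovered by reading the table row by row, whereas the generator transcribes the same states as nested control structures around each @suspend@.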
    692922
    693923\begin{figure}
    694924\centering
    695 \newbox\myboxA
    696 \begin{lrbox}{\myboxA}
    697 \begin{python}[aboveskip=0pt,belowskip=0pt]
    698 def Fib():
    699         fn1, fn = 0, 1
    700         while True:
    701                 `yield fn1`
    702                 fn1, fn = fn, fn1 + fn
    703 f1 = Fib()
    704 f2 = Fib()
    705 for i in range( 10 ):
    706         print( next( f1 ), next( f2 ) )
    707 
    708 
    709 
    710 
    711 
    712 
    713 \end{python}
    714 \end{lrbox}
    715 
    716 \newbox\myboxB
    717 \begin{lrbox}{\myboxB}
    718 \begin{python}[aboveskip=0pt,belowskip=0pt]
    719 def Fmt():
    720         try:
    721                 while True:
    722                         for g in range( 5 ):
    723                                 for b in range( 4 ):
    724                                         print( `(yield)`, end='' )
    725                                 print( '  ', end='' )
    726                         print()
    727         except GeneratorExit:
    728                 if g != 0 | b != 0:
    729                         print()
    730 fmt = Fmt()
    731 `next( fmt )`                    # prime, next prewritten
    732 for i in range( 41 ):
    733         `fmt.send( 'a' );`      # send to yield
    734 \end{python}
    735 \end{lrbox}
    736 \subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
    737 \hspace{3pt}
    738 \vrule
    739 \hspace{3pt}
    740 \subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
    741 \caption{Python generator}
    742 \label{f:PythonGenerator}
    743 
    744 \bigskip
    745 
    746925\begin{tabular}{@{}l|l@{}}
    747926\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     
    750929`generator` Driver {
    751930        Status status;
    752         unsigned char byte, * msg; // communication
    753         unsigned int lnth, sum;      // local state
    754         unsigned short int crc;
     931        char byte, * msg; // communication
     932        int lnth, sum;      // local state
     933        short int crc;
    755934};
    756935void ?{}( Driver & d, char * m ) { d.msg = m; }
     
    800979(The trivial cycle is a generator resuming itself.)
    801980This control flow is similar to recursion for functions but without stack growth.
    802 The steps for symmetric control-flow are creating, executing, and terminating the cycle.
      981Figure~\ref{f:PingPongFullCoroutineSteps} shows the steps for symmetric control-flow: creating, executing, and terminating the cycle.
    803982Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope.
    804983(This issue occurs for any cyclic data structure.)
    805 % The example creates all the generators and then assigns the partners that form the cycle.
    806 % Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.
    807 Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
     984The example creates the generators, @ping@/@pong@, and then assigns the partners that form the cycle.
     985% (Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.)
     986Once the cycle is formed, the program main resumes one of the generators, @ping@, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
    808987Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
     988Note, the creator and starter may be different, \eg if the creator calls another function that starts the cycle.
    809989The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
    810 Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the  cost is the same as a function return.
    811 Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.
    812 
    813 Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump.
    814 This jump-starts at the top of the next generator main to re-execute the normal calling convention to make space on the stack for its local variables.
    815 However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but subsequently jump forward.
    816 This semantics is basically a tail-call optimization, which compilers already perform.
    817 The example shows the assembly code to undo the generator's entry code before the direct jump.
    818 This assembly code depends on what entry code is generated, specifically if there are local variables and the level of optimization.
    819 To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time.
    820 Nevertheless, it is possible to hand generate any symmetric generators for proof of concept and performance testing.
    821 A compiler could also eliminate other artifacts in the generator simulation to further increase performance, \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it fork @clang@.
     990Also, since local variables are not retained in the generator function, there are no objects with destructors to be called, so the cost is the same as a function return.
     991Destructor cost occurs when the generator instance is deallocated by the creator.
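The cycle construction can be sketched as follows, assuming a @PingPong@ constructor taking the name and iteration count (initialization details are illustrative):
\begin{cfa}
PingPong ping = { "ping", 10 }, pong = { "pong", 10 }; $\C{// create generators}$
&ping.partner = &pong;  &pong.partner = &ping; $\C{// assign partners, closing the cycle}$
resume( ping ); $\C{// program main starts the cycle}$
\end{cfa}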
    822992
    823993\begin{figure}
     
    826996\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    827997`generator PingPong` {
     998        int N, i;                               // local state
    828999        const char * name;
    829         int N;
    830         int i;                          // local state
    8311000        PingPong & partner; // rebindable reference
    8321001};
    8331002
    8341003void `main( PingPong & pp )` with(pp) {
     1004
     1005
    8351006        for ( ; i < N; i += 1 ) {
    8361007                sout | name | i;
     
    8501021\begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt]
    8511022typedef struct PingPong {
     1023        int restart, N, i;
    8521024        const char * name;
    853         int N, i;
    8541025        struct PingPong * partner;
    855         void * next;
    8561026} PingPong;
    857 #define PPCtor(name, N) {name,N,0,NULL,NULL}
     1027#define PPCtor(name, N) {0, N, 0, name, NULL}
    8581028void comain( PingPong * pp ) {
    859         if ( pp->next ) goto *pp->next;
    860         pp->next = &&cycle;
     1029        static void * states[] = {&&s0, &&s1};
     1030        goto *states[pp->restart];
     1031  s0: pp->restart = 1;
    8611032        for ( ; pp->i < pp->N; pp->i += 1 ) {
    8621033                printf( "%s %d\n", pp->name, pp->i );
    8631034                asm( "mov  %0,%%rdi" : "=m" (pp->partner) );
    8641035                asm( "mov  %rdi,%rax" );
    865                 asm( "popq %rbx" );
     1036                asm( "add  $16, %rsp" );
     1037                asm( "popq %rbp" );
    8661038                asm( "jmp  comain" );
    867           cycle: ;
     1039          s1: ;
    8681040        }
    8691041}
     
    8811053\end{figure}
    8821054
    883 Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines).
     1055\begin{figure}
     1056\centering
     1057\input{FullCoroutinePhases.pstex_t}
     1058\vspace*{-10pt}
     1059\caption{Symmetric coroutine steps: Ping / Pong}
     1060\label{f:PingPongFullCoroutineSteps}
     1061\end{figure}
     1062
     1063Figure~\ref{f:CPingPongSim} shows the C implementation of the \CFA symmetric generator, where there is still only one additional field, @restart@, but @resume@ is more complex because it does a forward rather than backward jump.
      1064Before the jump, the parameter for the next call @partner@ is placed into the register used for the first parameter, @rdi@, and the stack and remaining registers are reset as for a @return@.
      1065The @jmp comain@ restarts the function but with a different parameter, so the new call's behaviour depends on the state of the generator type, \ie it branches to the restart location with different data state.
      1066While the semantics of the forward call is a tail-call optimization, which compilers already perform, the generator state differs on each call, rather than being the common state of a tail-recursive function (\ie the parameter to the function never changes during the forward calls).
      1067However, this assembly code depends on what entry code is generated, specifically whether there are local variables and the level of optimization.
     1068Hence, internal compiler support is necessary for any forward call (or backwards return), \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it eventually fork @clang@.
      1069For this reason, \CFA does not support general symmetric generators at this time, but it is possible to hand-generate any symmetric generator (as in Figure~\ref{f:CPingPongSim}) for proof of concept and performance testing.
     1070
     1071Finally, part of this generator work was inspired by the recent \CCtwenty coroutine proposal~\cite{C++20Coroutine19}, which uses the general term coroutine to mean generator.
    8841072Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators.
    8851073An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax:
     
    8961084\label{s:Coroutine}
    8971085
    898 Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
     1086Stackful coroutines (Table~\ref{t:ExecutionPropertyComposition} case 5) extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
    8991087A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
    900 Coroutine generality results in higher cost for creation, due to dynamic stack allocation, execution, due to context switching among stacks, and terminating, due to possible stack unwinding and dynamic stack deallocation.
     1088Coroutine generality results in higher cost for creation, due to dynamic stack allocation, for execution, due to context switching among stacks, and for terminating, due to possible stack unwinding and dynamic stack deallocation.
    9011089A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators.
    9021090
    9031091First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main.
    904 \begin{description}
    905 \item[Fibonacci]
    906 Move the declaration of @fn1@ to the start of coroutine main.
     1092\begin{center}
     1093\begin{tabular}{@{}l|l|l|l@{}}
     1094\multicolumn{1}{c|}{Fibonacci} & \multicolumn{1}{c|}{Formatter} & \multicolumn{1}{c|}{Device Driver} & \multicolumn{1}{c}{PingPong} \\
     1095\hline
    9071096\begin{cfa}[xleftmargin=0pt]
    908 void main( Fib & fib ) with(fib) {
     1097void main( Fib & fib ) ...
    9091098        `int fn1;`
    910 \end{cfa}
    911 \item[Formatter]
    912 Move the declaration of @g@ and @b@ to the for loops in the coroutine main.
     1099
     1100
     1101\end{cfa}
     1102&
    9131103\begin{cfa}[xleftmargin=0pt]
    9141104for ( `g`; 5 ) {
    9151105        for ( `b`; 4 ) {
    916 \end{cfa}
    917 \item[Device Driver]
    918 Move the declaration of @lnth@ and @sum@ to their points of initialization.
     1106
     1107
     1108\end{cfa}
     1109&
    9191110\begin{cfa}[xleftmargin=0pt]
    920         status = CONT;
    921         `unsigned int lnth = 0, sum = 0;`
    922         ...
    923         `unsigned short int crc = byte << 8;`
    924 \end{cfa}
    925 \item[PingPong]
    926 Move the declaration of @i@ to the for loop in the coroutine main.
     1111status = CONT;
     1112`int lnth = 0, sum = 0;`
     1113...
     1114`short int crc = byte << 8;`
     1115\end{cfa}
     1116&
    9271117\begin{cfa}[xleftmargin=0pt]
    928 void main( PingPong & pp ) with(pp) {
     1118void main( PingPong & pp ) ...
    9291119        for ( `i`; N ) {
    930 \end{cfa}
    931 \end{description}
     1120
     1121
     1122\end{cfa}
     1123\end{tabular}
     1124\end{center}
    9321125It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver.
    9331126\begin{cfa}
    934 unsigned int Crc() {
     1127int Crc() {
    9351128        `suspend;`
    936         unsigned short int crc = byte << 8;
     1129        short int crc = byte << 8;
    9371130        `suspend;`
    9381131        status = (crc | byte) == sum ? MSG : ECRC;
     
    9451138
    9461139\begin{comment}
    947 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @next@.
     1140Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @restart@.
    9481141Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
    9491142The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@.
    950 The interface function @next@, takes a Fibonacci instance and context switches to it using @resume@;
     1143The interface function @restart@, takes a Fibonacci instance and context switches to it using @resume@;
    9511144on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
    9521145The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
     
    11141307\begin{figure}
    11151308\centering
    1116 \lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $
    11171309\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
    11181310\begin{cfa}
    11191311`coroutine` Prod {
    1120         Cons & c;                       // communication
     1312        Cons & c;                       $\C[1.5in]{// communication}$
    11211313        int N, money, receipt;
    11221314};
    11231315void main( Prod & prod ) with( prod ) {
    1124         // 1st resume starts here
    1125         for ( i; N ) {
     1316        for ( i; N ) {          $\C{// 1st resume}\CRT$
    11261317                int p1 = random( 100 ), p2 = random( 100 );
    1127                 sout | p1 | " " | p2;
    11281318                int status = delivery( c, p1, p2 );
    1129                 sout | " $" | money | nl | status;
    11301319                receipt += 1;
    11311320        }
    11321321        stop( c );
    1133         sout | "prod stops";
    11341322}
    11351323int payment( Prod & prod, int money ) {
     
    11521340\begin{cfa}
    11531341`coroutine` Cons {
    1154         Prod & p;                       // communication
     1342        Prod & p;                       $\C[1.5in]{// communication}$
    11551343        int p1, p2, status;
    11561344        bool done;
    11571345};
    11581346void ?{}( Cons & cons, Prod & p ) {
    1159         &cons.p = &p; // reassignable reference
     1347        &cons.p = &p;           $\C{// reassignable reference}$
    11601348        cons.[status, done ] = [0, false];
    11611349}
    11621350void main( Cons & cons ) with( cons ) {
    1163         // 1st resume starts here
    1164         int money = 1, receipt;
     1351        int money = 1, receipt; $\C{// 1st resume}\CRT$
    11651352        for ( ; ! done; ) {
    1166                 sout | p1 | " " | p2 | nl | " $" | money;
    11671353                status += 1;
    11681354                receipt = payment( p, money );
    1169                 sout | " #" | receipt;
    11701355                money += 1;
    11711356        }
    1172         sout | "cons stops";
    11731357}
    11741358int delivery( Cons & cons, int p1, int p2 ) {
     
    11911375This example is illustrative because both the producer and consumer have two interface functions with @resume@s that suspend execution in these interface (helper) functions.
    11921376The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced.
    1193 The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
    1194 @prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
    1195 
     1377The call to @start@ is the first @resume@ of @prod@, which remembers the program main as the starter and creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
      1378@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer's @delivery@ function to transfer the values, and printing the status returned from the consumer.
    11961379The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
    1197 On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
    1198 The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
    1199 The call from the consumer to @payment@ introduces the cycle between producer and consumer.
    1200 When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
    1201 The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
    1202 @delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
    1203 The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
    1204 The context switch to the consumer continues in @payment@.
    1205 The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
    1206 The loop then repeats calling @payment@, where each call resumes the producer coroutine.
      1380Similarly, on the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
      1381The symmetric coroutine cycle forms when the consumer calls the producer's @payment@ function, which resumes the producer suspended in the consumer's @delivery@ function.
      1382When the producer calls @delivery@ again, it resumes the consumer suspended in the @payment@ function.
      1383Both interface functions then return to their corresponding coroutine mains for the next cycle.
    12071384Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling.
     1385As a consequence of a coroutine retaining its last resumer for suspending back, these reverse pointers allow @suspend@ to cycle \emph{backwards} around a symmetric coroutine cycle.
    12081386
    12091387\begin{figure}
     
    12141392\caption{Producer / consumer runtime stacks}
    12151393\label{f:ProdConsRuntimeStacks}
    1216 
    1217 \medskip
    1218 
    1219 \begin{center}
    1220 \input{FullCoroutinePhases.pstex_t}
    1221 \end{center}
    1222 \vspace*{-10pt}
    1223 \caption{Ping / Pong coroutine steps}
    1224 \label{f:PingPongFullCoroutineSteps}
    12251394\end{figure}
    12261395
    12271396Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
    1228 Furthermore, each deallocated coroutine must guarantee all destructors are run for object allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
    1229 When a coroutine's main ends, its stack is already unwound so any stack allocated objects with destructors have been finalized.
      1397Furthermore, each deallocated coroutine must execute all destructors for objects allocated in the coroutine type \emph{and} on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
      1398In the example, termination begins with the producer's loop stopping after $N$ iterations and calling the consumer's @stop@ function, which sets the @done@ flag and resumes the consumer in function @payment@, terminating that call and then the consumer's loop in its coroutine main.
     1399% (Not shown is having @prod@ raise a nonlocal @stop@ exception at @cons@ after it finishes generating values and suspend back to @cons@, which catches the @stop@ exception to terminate its loop.)
     1400When the consumer's main ends, its stack is already unwound so any stack allocated objects with destructors are finalized.
     1401The question now is where does control continue?
     1402
    12301403The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator.
    12311404However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent.
    12321405Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem).
    12331406Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines.
    1234 
    1235 Our solution is to context switch back to the first resumer (starter) once the coroutine ends.
     1407Our solution is to retain a coroutine's starter (first resumer), and context switch back to the starter when the coroutine ends.
     1408Hence, the consumer restarts its first resumer, @prod@, in @stop@, and when the producer ends, it restarts its first resumer, program main, in @start@ (see dashed lines from the end of the coroutine mains in Figure~\ref{f:ProdConsRuntimeStacks}).
    12361409This semantics works well for the most common asymmetric and symmetric coroutine usage patterns.
    1237 For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer.
    1238 All previous generators converted to coroutines have this property.
    1239 For symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle.
    1240 Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter.
    1241 Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
     1410For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer;
     1411for symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle.
    12421412For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination.
    12431413
    1244 The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
    1245 Assume generator @PingPong@ is converted to a coroutine.
    1246 Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version.
    1247 The program main creates (declares) coroutine instances @ping@ and @pong@.
    1248 Next, program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter.
    1249 Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times.
    1250 By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first.
    1251 If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter the program main in function @start@.
    1252 If @ping@ ends first, it resumes its starter the program main in function @start@.
    1253 Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
    1254 Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
    1255 For generators, deallocation runs the destructors for all objects in the generator type.
    1256 For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine.
    1257 Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it so the cancellation exception can propagate to the root of the coroutine's stack destroying all local variable on the stack.
    1258 So the \CFA semantics for the generator and coroutine, ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
    1259 Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.
    1260 
    1261 Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
    1262 A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
    1263 These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
    1264 However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
    1265 To prevent losing this information, a self-resume does not overwrite the last resumer.
     1414Note, the producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
     1415Assume generator @PingPong@ in Figure~\ref{f:PingPongSymmetricGenerator} is converted to a coroutine.
     1416Unlike generators, coroutines have a starter structure with multiple levels, where the program main starts @ping@ and @ping@ starts @pong@.
     1417By adjusting $N$ for either @ping@/@pong@, it is possible to have either finish first.
     1418If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter the program main on return;
     1419if @ping@ ends first, it resumes its starter the program main on return.
     1420Regardless of the cycle complexity, the starter structure always leads back to the program main, but the path can be entered at an arbitrary point.
      1421Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, running any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack.
      1422Hence, the \CFA termination semantics for the generator and coroutine ensure correct deallocation semantics, regardless of the coroutine's state (terminated or active), like any other aggregate object.
    12661423
    12671424
     
    12941451Users wanting to extend custom types or build their own can only do so in ways offered by the language.
    12951452Furthermore, the ability to implement custom types without language support demonstrates the expressive power of a programming language.
    1296 \CFA blends the two approaches, providing custom type for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with builtin and library.
      1453\CFA blends the two approaches, providing custom types for idiomatic \CFA code, while still allowing new custom types to be extended and built, similar to Java concurrency with builtin and library (@java.util.concurrent@) monitors.
    12971454
    12981455Part of the mechanism to generalize custom types is the \CFA trait~\cite[\S~2.3]{Moss18}, \eg the definition for custom-type @coroutine@ is anything satisfying the trait @is_coroutine@, and this trait both enforces and restricts the coroutine-interface functions.
     
    13041461forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & );
    13051462\end{cfa}
    1306 Note, copying generators/coroutines/threads is not meaningful.
    1307 For example, both the resumer and suspender descriptors can have bidirectional pointers;
    1308 copying these coroutines does not update the internal pointers so behaviour of both copies would be difficult to understand.
    1309 Furthermore, two coroutines cannot logically execute on the same stack.
    1310 A deep coroutine copy, which copies the stack, is also meaningless in an unmanaged language (no garbage collection), like C, because the stack may contain pointers to object within it that require updating for the copy.
      1463Note, copying generators/coroutines/threads is undefined because multiple objects cannot execute on a shared stack, and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy.
    13111464The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference (pointer).
    13121465The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the coroutine descriptor from its handle.
     
    13521505The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization.
    13531506
    1354 Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar).
     1507Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a thread is similar).
    13551508The coroutine handle is the @coroutine@ instance containing the programmer-specified type-global/communication variables shared across the interface functions.
    13561509The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate.
    13571510The coroutine stack can appear in a number of locations and be fixed or variable sized.
    1358 Hence, the coroutine's stack could be a VLS\footnote{
    1359 We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.
     1511Hence, the coroutine's stack could be a variable-length structure (VLS)\footnote{
     1512We are examining VLSs, where fields can be variable-sized structures or arrays.
    13601513Once allocated, a VLS is fixed sized.}
    13611514on the allocating stack, provided the allocating stack is large enough.
    13621515For a VLS, stack allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup).
    1363 For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs.
    1364 With heap stack allocation, it is also possible to use a split (segmented) stack calling convention, available with gcc and clang, so the stack is variable sized.
      1516For stack allocation in the heap, allocation/deallocation is an expensive heap operation, where the heap can be a shared resource, modulo any stack constructor costs.
     1517It is also possible to use a split (segmented) stack calling convention, available with gcc and clang, allowing a variable-sized stack via a set of connected blocks in the heap.
    13651518Currently, \CFA supports stack/heap allocated descriptors but only fixed-sized heap allocated stacks.
    13661519In \CFA debug-mode, the fixed-sized stack is terminated with a write-only page, which catches most stack overflows.
    13671520Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students.
    1368 Split-stack allocation is under development but requires recompilation of legacy code, which may be impossible.
     1521Split-stack allocation is under development but requires recompilation of legacy code, which is not always possible.
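As an illustration, a fixed-sized stack allocated in the heap with a protective guard page can be sketched with POSIX calls; this is a hypothetical helper, not the \CFA runtime code, and error checking is omitted:
\begin{cfa}
#include <sys/mman.h>
#include <unistd.h>
void * stack_alloc( size_t size ) {
	size_t page = sysconf( _SC_PAGESIZE );
	char * mem = mmap( NULL, size + page, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0 );
	mprotect( mem, page, PROT_NONE ); // guard page at low end; stack grows down towards it
	return mem + page + size; // top of stack
}
\end{cfa}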
    13691522
    13701523\begin{figure}
     
    13801533
    13811534Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack.
    1382 A single thread with multiple call stacks, \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
    1383 In coroutining, coroutines self-schedule the thread across stacks so execution is deterministic.
     1535A single thread with multiple stacks, \ie coroutining, does \emph{not} imply concurrency~\cite[\S~3]{Buhr05a}.
      1536Coroutines self-schedule the thread across stacks, so execution is deterministic.
    13841537(It is \emph{impossible} to generate a concurrency error when coroutining.)
    1385 However, coroutines are a stepping stone towards concurrency.
    1386 
    1387 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a}.
     1538
     1539The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
    13881540Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}.
    1389 The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}.
    1390 Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions often based on a timer interrupt, called \newterm{preemptive scheduling}.
    1391 While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches.
      1541The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill} because context-switching points to the scheduler (blocking) are known, but the next unblocking point is unknown due to the scheduler.
      1542Adding \newterm{preemption} introduces \newterm{non-cooperative} or \newterm{preemptive} scheduling, where context-switching points to the scheduler are unknown, as they can occur randomly between any two instructions, often based on a timer interrupt.
    13921543Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors.
    13931544The reason is that the scheduler/runtime has complete knowledge about resources and how best to utilize them.
    1394 However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}, which restrict nondeterminism for correctness;
     1545However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}~\cite[\S~4]{Buhr05a}, which restrict nondeterminism for correctness;
    13951546otherwise, it is impossible to write meaningful concurrent programs.
    13961547Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow.
    13971548
    1398 A scheduler can either be a stackless or stackful.
     1549A scheduler can also be stackless or stackful.
    13991550For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
    14001551For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
     
    14051556\label{s:threads}
    14061557
    1407 Threading needs the ability to start a thread and wait for its completion.
     1558Threading (Table~\ref{t:ExecutionPropertyComposition} case 11) needs the ability to start a thread and wait for its completion.
    14081559A common API for this ability is @fork@ and @join@.
    1409 \begin{cquote}
    1410 \begin{tabular}{@{}lll@{}}
    1411 \multicolumn{1}{c}{\textbf{Java}} & \multicolumn{1}{c}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\
    1412 \begin{cfa}
    1413 class MyTask extends Thread {...}
    1414 mytask t = new MyTask(...);
     1560\vspace{4pt}
     1561\par\noindent
     1562\begin{tabular}{@{}l|l|l@{}}
     1563\multicolumn{1}{c|}{\textbf{Java}} & \multicolumn{1}{c|}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\
     1564\hline
     1565\begin{cfa}
     1566class MyThread extends Thread {...}
      1567MyThread t = new MyThread(...);
    14151568`t.start();` // start
    14161569// concurrency
     
    14191572&
    14201573\begin{cfa}
    1421 class MyTask { ... } // functor
    1422 MyTask mytask;
    1423 `thread t( mytask, ... );` // start
     1574class MyThread { ... } // functor
     1575MyThread mythread;
     1576`thread t( mythread, ... );` // start
    14241577// concurrency
    14251578`t.join();` // wait
     
    14341587\end{cfa}
    14351588\end{tabular}
    1436 \end{cquote}
     1589\vspace{1pt}
     1590\par\noindent
    14371591\CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics (allocation/deallocation), where threads implicitly @fork@ after construction and @join@ before destruction.
\begin{cfa}
thread MyThread {};
void main( MyThread & this ) { ... }
int main() {
	MyThread team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$
	// concurrency
} $\C{// deallocate stack-based threads, implicit joins before destruction}$
\end{cfa}

Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation.
\begin{cfa}
MyThread * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$
int main() {
	MyThread * team = factory( 10 );
	// concurrency
	`delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$
}
\end{cfa}

Threads in \CFA are user-level threads run by runtime kernel threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism.
Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the thread-interface functions.
\begin{cquote}
\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
     
\label{s:MutualExclusionSynchronization}

Unrestricted nondeterminism is meaningless as there is no way to know when a result is completed and safe to access.
To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
The shared data protected by mutual exclusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple (only 1 thread) or complex (only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71}).
Without synchronization control in a critical section, an arriving thread can barge ahead of preexisting waiter threads, resulting in short/long-term starvation, staleness/freshness problems, and/or incorrect transfer of data.
Preventing or detecting barging is a challenge with low-level locks, but made easier through higher-level constructs.
This challenge is often split into two different approaches: barging \emph{avoidance} and \emph{prevention}.
Approaches that unconditionally release a lock for competing threads to acquire must use barging avoidance with flag/counter variable(s) to force barging threads to wait;
approaches that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.

At the lowest level, concurrent control is provided by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg spin locks, semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
A significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
Some concurrent systems eliminate mutable shared-state by switching to non-shared communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP, Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
However, these approaches introduce a new communication mechanism for concurrency different from the standard communication using function call/return.
Hence, a programmer must learn and manipulate two sets of design/programming patterns.
While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
In contrast, approaches based on shared-state models more closely resemble the standard call/return programming model, resulting in a single programming paradigm.
Finally, a newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive~\cite{Cascaval08,Boehm09} to be the main concurrency paradigm for system languages.


     
\label{s:Monitor}

One of the most natural, elegant, efficient, high-level mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor} (Table~\ref{t:ExecutionPropertyComposition} case 2).
First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.
For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed.

Specifically, a \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state.
More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope by call/return, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
Restricting acquire/release points eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
\CFA uses a custom @monitor@ type and leverages declaration semantics (deallocation) to protect active or waiting threads in a monitor.

The following is a \CFA monitor implementation of an atomic counter.
\begin{cfa}
`monitor` Aint { int cnt; }; $\C[4.25in]{// atomic integer counter}$
int ++?( Aint & `mutex` this ) with( this ) { return ++cnt; } $\C{// increment}$
int ?=?( Aint & `mutex` lhs, int rhs ) with( lhs ) { return cnt = rhs; } $\C{// conversions with int, mutex optional}\CRT$
int ?=?( int & lhs, Aint & `mutex` rhs ) with( rhs ) { return lhs = cnt; }
\end{cfa}
The operators use the parameter-only declaration type-qualifier @mutex@ to mark which parameters require locking during function execution to protect from race conditions.
The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@.
(These operations only need @mutex@ if reading/writing the implementation type is not atomic.)
The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics.
\begin{cfa}
int i = 0, j = 0, k = 5;

i = x; j = y; k = z;
\end{cfa}
Note, like other concurrent programming languages, \CFA has specializations for the basic types using atomic instructions for performance and a general trait similar to the \CC template @std::atomic@.

\CFA monitors have \newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other interface functions.
\newpage
\begin{cfa}
monitor M { ... } m;

\end{cfa}
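As a minimal sketch of multi-acquire, assuming monitor @m@ from the (partially elided) example above, a @mutex@ function may call another @mutex@ function on the same monitor without deadlock (function names are illustrative, not \CFA library code):
\begin{cfa}
void helper( M & mutex m ) { ... } $\C{// reacquire m: multi-acquire succeeds immediately}$
void work( M & mutex m ) { helper( m ); } $\C{// call helper while still holding m}$
\end{cfa}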
\CFA monitors also ensure the monitor lock is released regardless of how an acquiring function ends (normal or exceptional), and returning a shared variable is safe via copying before the lock is released.
Similar safety is offered by \emph{explicit} opt-in disciplines like \CC RAII versus the monitor \emph{implicit} language-enforced safety guarantee ensuring no programmer usage errors.
Furthermore, RAII mechanisms cannot handle complex synchronization within a monitor, where the monitor lock may not be released on function exit because it is passed to an unblocking thread;
RAII is purely a mutual-exclusion mechanism (see Section~\ref{s:Scheduling}).
     
\end{cquote}
The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference (pointer).
Similarly, the function definitions ensure there is a mechanism to get (read) the monitor descriptor from its handle, and a special destructor to prevent deallocation while a thread is using the shared data.
The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics.
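The elided trait has roughly the following shape, a sketch reconstructed from the surrounding description (details of the actual trait may differ):
\begin{cfa}
trait is_monitor( `dtype` T ) {
	monitor_desc * get_monitor( T & ); $\C{// read the monitor descriptor from its handle}$
	void ^?{}( T & mutex ); $\C{// destructor acquires mutual exclusion before deallocation}$
};
\end{cfa}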
     
For example, a monitor may be passed through multiple helper functions before it is necessary to acquire the monitor's mutual exclusion.

\CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@, because @mutex@ parameters are rare and no keyword is the \emph{normal} parameter semantics.
Hence, @mutex@ parameters are documentation, at the function and its prototype, to both programmer and compiler, without other redundant keywords.
Furthermore, \CFA relies heavily on traits as an abstraction mechanism, so the @mutex@ qualifier prevents coincidental matching of a monitor trait with a type that is not a monitor, similar to coincidental inheritance where a shape and playing card can both be drawable.
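As a minimal sketch (monitor and function names are illustrative):
\begin{cfa}
monitor M { ... };
void f( M & mutex m ); $\C{// mutex parameter: acquire m for the duration of f}$
void g( M & m ); $\C{// normal (nomutex) parameter: no implicit locking}$
\end{cfa}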

The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@.
     
Function @f3@ has a multiple object matrix, and @f4@ a multiple object data structure.
While shown shortly, multiple object acquisition is possible, but the number of objects must be statically known.
Therefore, \CFA only acquires one monitor per parameter with exactly one level of indirection, and excludes pointer types to unknown-sized arrays.
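The elided declarations @f1@--@f4@ plausibly have the following shapes, a hypothetical reconstruction for illustration only (the exact elided code may differ):
\begin{cfa}
int f1( M & mutex m ); $\C{// allowed: one monitor, one level of indirection}$
int f2( M * mutex m ); $\C{// allowed: one monitor, one level of indirection}$
int f3( M * mutex m[] ); $\C{// disallowed: unknown-sized matrix of monitors}$
int f4( stack( M * ) & mutex m ); $\C{// disallowed: monitors in a data structure}$
\end{cfa}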

For object-oriented monitors, \eg Java, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
     
While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.}
called \newterm{bulk acquire}.
\CFA guarantees bulk acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors in a bulk acquire is safe from deadlock.
Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking.
A \CFA programmer only has to manage when to acquire mutual exclusion;
     
void transfer( BankAccount & `mutex` my,
	BankAccount & `mutex` your, int me2you ) {
	// bulk acquire
	deposit( my, -me2you ); // debit
	deposit( your, me2you ); // credit
     
void transfer( BankAccount & my,
			BankAccount & your, int me2you ) {
	`scoped_lock lock( my.m, your.m );` // bulk acquire
	deposit( my, -me2you ); // debit
	deposit( your, me2you ); // credit
     
\end{figure}

Users can still force the acquiring order by using or not using @mutex@.
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$
void bar( M & mutex m1, M & m2 ) { $\C{// only acquire m1}$
	... foo( m1, m2 ); ... $\C{// acquire m2}$
}
void baz( M & m1, M & mutex m2 ) { $\C{// only acquire m2}$
	... foo( m1, m2 ); ... $\C{// acquire m1}$
}
     
% There are many aspects of scheduling in a concurrency system, all related to resource utilization by waiting threads, \ie which thread gets the resource next.
% Different forms of scheduling include access to processors by threads (see Section~\ref{s:RuntimeStructureCluster}), another is access to a shared resource by a lock or monitor.
This section discusses scheduling for waiting threads eligible for monitor entry, \ie which user thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling kernel threads on virtual processors.)
While monitor mutual-exclusion provides safe access to its shared data, the data may indicate a thread cannot proceed, \eg a bounded buffer may be full/\-empty so producer/consumer threads must block.
Leaving the monitor and retrying (busy waiting) is impractical for high-level programming.

Monitors eliminate busy waiting by providing synchronization within the monitor critical-section to schedule threads needing access to the shared data, where threads block versus spin.
Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling.
\newterm{Internal} (largely) schedules threads located \emph{inside} the monitor and is accomplished using condition variables with signal and wait.
\newterm{External} (largely) schedules threads located \emph{outside} the monitor and is accomplished with the @waitfor@ statement.
Note, internal scheduling has a small amount of external scheduling and vice versa, so the naming denotes where the majority of the blocked threads reside (inside or outside) for scheduling.
For complex scheduling, the approaches can be combined, so there can be an equal number of threads waiting inside and outside.

\CFA monitors do not allow calling threads to barge ahead of signalled threads (via barging prevention), which simplifies synchronization among threads in the monitor and increases correctness.
A direct consequence of this semantics is that unblocked waiting threads are not required to recheck the waiting condition, \ie waits are not in a starvation-prone busy-loop as required by the signals-as-hints style with barging.
Preventing barging comes directly from Hoare's semantics in the seminal paper on monitors~\cite[p.~550]{Hoare74}.
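As a minimal sketch of the difference (condition @c@ and the predicate are illustrative):
\begin{cfa}
while ( count == 10 ) wait( c ); $\C{// signals-as-hints with barging: must recheck}$
if ( count == 10 ) wait( c ); $\C{// \CFA barging prevention: a single test suffices}$
\end{cfa}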
% \begin{cquote}
% However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
% It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
% \end{cquote}
Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging.

Monitor mutual-exclusion means signalling cannot have the signaller and signalled thread in the monitor simultaneously, so only the signaller or signallee can proceed.
Figure~\ref{f:MonitorScheduling} shows internal/external scheduling for the bounded-buffer examples in Figure~\ref{f:GenericBoundedBuffer}.
For internal scheduling in Figure~\ref{f:BBInt}, the @signal@ moves the signallee (front thread of the specified condition queue) to urgent and the signaller continues (solid line).
Multiple signals move multiple signallees to urgent until the condition queue is empty.
When the signaller exits or waits, a thread is implicitly unblocked from urgent (if available) before unblocking a calling thread to prevent barging.
(Java conceptually moves the signalled thread to the calling queue, and hence, allows barging.)
Signal is used when the signaller is providing the cooperation needed by the signallee (\eg creating an empty slot in a buffer for a producer);
the signaller immediately exits the monitor to run concurrently (consuming the buffer element) and passes control of the monitor to the signalled thread, which can immediately take advantage of the state change.
Specifically, the @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.
Signalling is unconditional because signalling an empty condition queue does nothing.
It is common to declare condition queues as monitor fields to prevent shared access, hence no locking is required for access as the queues are protected by the monitor lock.
In \CFA, a condition queue can be created/stored independently.
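As a minimal sketch of an independently declared condition queue (names are illustrative):
\begin{cfa}
monitor M { ... };
condition c; $\C{// created/stored outside the monitor}$
void f( M & mutex m ) { `wait( c )`; } $\C{// block on c, implicitly releasing m}$
void g( M & mutex m ) { `signal( c )`; } $\C{// move front thread of c to urgent}$
\end{cfa}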

\begin{figure}
     
\end{figure}

    1883 
    18841983\begin{figure}
    18851984\centering
     
    18931992                T elements[10];
    18941993        };
    1895         void ?{}( Buffer(T) & buffer ) with(buffer) {
     1994        void ?{}( Buffer(T) & buf ) with(buf) {
    18961995                front = back = count = 0;
    18971996        }
    1898         void insert( Buffer(T) & mutex buffer, T elem )
    1899                                 with(buffer) {
    1900                 if ( count == 10 ) `wait( empty )`;
    1901                 // insert elem into buffer
     1997
     1998        void insert(Buffer(T) & mutex buf, T elm) with(buf){
     1999                if ( count == 10 ) `wait( empty )`; // full ?
     2000                // insert elm into buf
    19022001                `signal( full )`;
    19032002        }
    1904         T remove( Buffer(T) & mutex buffer ) with(buffer) {
    1905                 if ( count == 0 ) `wait( full )`;
    1906                 // remove elem from buffer
     2003        T remove( Buffer(T) & mutex buf ) with(buf) {
     2004                if ( count == 0 ) `wait( full )`; // empty ?
     2005                // remove elm from buf
    19072006                `signal( empty )`;
    1908                 return elem;
     2007                return elm;
    19092008        }
    19102009}
    19112010\end{cfa}
    19122011\end{lrbox}
    1913 
    1914 % \newbox\myboxB
    1915 % \begin{lrbox}{\myboxB}
    1916 % \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1917 % forall( otype T ) { // distribute forall
    1918 %       monitor Buffer {
    1919 %
    1920 %               int front, back, count;
    1921 %               T elements[10];
    1922 %       };
    1923 %       void ?{}( Buffer(T) & buffer ) with(buffer) {
    1924 %               [front, back, count] = 0;
    1925 %       }
    1926 %       T remove( Buffer(T) & mutex buffer ); // forward
    1927 %       void insert( Buffer(T) & mutex buffer, T elem )
    1928 %                               with(buffer) {
    1929 %               if ( count == 10 ) `waitfor( remove, buffer )`;
    1930 %               // insert elem into buffer
    1931 %
    1932 %       }
    1933 %       T remove( Buffer(T) & mutex buffer ) with(buffer) {
    1934 %               if ( count == 0 ) `waitfor( insert, buffer )`;
    1935 %               // remove elem from buffer
    1936 %
    1937 %               return elem;
    1938 %       }
    1939 % }
    1940 % \end{cfa}
    1941 % \end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
forall( otype T ) { // distribute forall
	monitor Buffer {

		int front, back, count;
		T elements[10];
	};
	void ?{}( Buffer(T) & buf ) with(buf) {
		front = back = count = 0;
	}
	T remove( Buffer(T) & mutex buf ); // forward
	void insert(Buffer(T) & mutex buf, T elm) with(buf){
		if ( count == 10 ) `waitfor( remove : buf )`;
		// insert elm into buf

	}
	T remove( Buffer(T) & mutex buf ) with(buf) {
		if ( count == 0 ) `waitfor( insert : buf )`;
		// remove elm from buf

		return elm;
	}
}
\end{cfa}
\end{lrbox}

\subfloat[Internal scheduling]{\label{f:BBInt}\usebox\myboxA}
\hspace{1pt}
\vrule
\hspace{3pt}
\subfloat[External scheduling]{\label{f:BBExt}\usebox\myboxB}

\caption{Generic bounded buffer}
\label{f:GenericBoundedBuffer}
\end{figure}

The @signal_block@ provides the opposite unblocking order, where the signaller is moved to urgent and the signallee continues, and a thread is implicitly unblocked from urgent when the signallee exits or waits (dashed line).
Signal block is used when the signallee is providing the cooperation needed by the signaller (\eg if the buffer is eliminated, a producer hands off an item directly to a consumer, as in Figure~\ref{f:DatingSignalBlock}), so the signaller must wait until the signallee unblocks, provides the cooperation, exits the monitor to run concurrently, and passes control of the monitor to the signaller, which can immediately take advantage of the state change.
Using @signal@ or @signal_block@ can be a dynamic decision based on whether the thread providing the cooperation arrives before or after the thread needing the cooperation.
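A hypothetical sketch of such a dynamic decision (monitor field @partners@ is illustrative): the first thread to arrive waits, while the second provides the cooperation with the signalling style matching its role:
\begin{cfa}
void exchange( M & mutex m ) with( m ) {
	if ( empty( partners ) ) `wait( partners )`; $\C{// arrive first: wait for cooperation}$
	else `signal_block( partners )`; $\C{// arrive second: unblock partner, wait on urgent}$
}
\end{cfa}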

External scheduling in Figure~\ref{f:BBExt} simplifies internal scheduling by eliminating condition queues and @signal@/@wait@ (cases where it cannot are discussed shortly), and has existed in the programming language Ada for almost 40 years with variants in other languages~\cite{SR,ConcurrentC++,uC++}.
While prior languages use external scheduling solely for thread interaction, \CFA generalizes it to both monitors and threads.
External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on condition queues in the monitor.
Scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion.
Specifically, a thread calling the monitor is unblocked directly from the calling queue based on function names that can fulfill the cooperation required by the signaller.
(The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.)
Hence, the @waitfor@ has the same semantics as @signal_block@, where the signallee thread from the calling queue executes before the signaller, which waits on urgent.
Now when a producer/consumer detects a full/empty buffer, the necessary cooperation for continuation is specified by indicating the next function call that can occur.
For example, a producer detecting a full buffer must have cooperation from a consumer to remove an item, so function @remove@ is accepted, which prevents producers from entering the monitor, and after a consumer calls @remove@, the producer waiting on urgent is \emph{implicitly} unblocked because it can now continue its insert operation.
Hence, this mechanism is expressed in terms of control flow (the next call) versus in terms of data (channels), as in Go/Rust @select@.
While both mechanisms have strengths and weaknesses, \CFA uses the control-flow mechanism to be consistent with other language features.

Figure~\ref{f:ReadersWriterLock} shows internal/external scheduling for a readers/writer lock with no barging, where threads are serviced in FIFO order to eliminate staleness/freshness among the reader/writer threads.
For internal scheduling in Figure~\ref{f:RWInt}, the readers and writers wait on the same condition queue in FIFO order, making it impossible to tell if a waiting thread is a reader or writer.
To claw back the kind of thread, a \CFA condition can store user data in the node for a blocking thread at the @wait@, \ie whether the thread is a @READER@ or @WRITER@.
An unblocked reader thread checks if the thread at the front of the queue is a reader and unblocks it, \ie the readers daisy-chain signal the next group of readers demarcated by the next writer or the end of the queue.
For external scheduling in Figure~\ref{f:RWExt}, a waiting reader checks if a writer is using the resource, and if so, restricts further calls until the writer exits by calling @EndWrite@.
The writer does a similar action for each reader or writer using the resource.
Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.

\begin{figure}
\centering
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
enum RW { READER, WRITER };
monitor ReadersWriter {
	int rcnt, wcnt; // readers/writer using resource
	`condition RWers;`
};
void ?{}( ReadersWriter & rw ) with(rw) {
     
void EndRead( ReadersWriter & mutex rw ) with(rw) {
	rcnt -= 1;
	if ( rcnt == 0 ) `signal( RWers )`;
}
void EndWrite( ReadersWriter & mutex rw ) with(rw) {
	wcnt = 0;
	`signal( RWers );`
}
void StartRead( ReadersWriter & mutex rw ) with(rw) {
	if ( wcnt != 0 || ! empty( RWers ) )
		`wait( RWers, READER )`;
	rcnt += 1;
	if ( ! empty(RWers) && `front(RWers) == READER` )
		`signal( RWers )`;  // daisy-chain signalling
}
void StartWrite( ReadersWriter & mutex rw ) with(rw) {
	if ( wcnt != 0 || rcnt != 0 ) `wait( RWers, WRITER )`;

	wcnt = 1;
}
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]

monitor ReadersWriter {
	int rcnt, wcnt; // readers/writer using resource

};
void ?{}( ReadersWriter & rw ) with(rw) {
	rcnt = wcnt = 0;
}
void EndRead( ReadersWriter & mutex rw ) with(rw) {
	rcnt -= 1;

}
void EndWrite( ReadersWriter & mutex rw ) with(rw) {
	wcnt = 0;

}
void StartRead( ReadersWriter & mutex rw ) with(rw) {
	if ( wcnt > 0 ) `waitfor( EndWrite : rw );`

	rcnt += 1;


}
void StartWrite( ReadersWriter & mutex rw ) with(rw) {
	if ( wcnt > 0 ) `waitfor( EndWrite : rw );`
	else while ( rcnt > 0 ) `waitfor( EndRead : rw );`
	wcnt = 1;
}
\end{cfa}
\end{lrbox}

\subfloat[Internal scheduling]{\label{f:RWInt}\usebox\myboxA}
\hspace{1pt}
\vrule
\hspace{3pt}
\subfloat[External scheduling]{\label{f:RWExt}\usebox\myboxB}

\caption{Readers / writer lock}
\label{f:ReadersWriterLock}
\end{figure}

Finally, external scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited.
Internal scheduling performing multiple signalling results in unblocking from urgent in the reverse order from signalling.
It is rare for the unblocking order to be important as an unblocked thread can be time-sliced immediately after leaving the monitor.
If the unblocking order is important, multiple signalling can be restructured into daisy-chain signalling, where each thread signals the next thread.
Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling.
(Advanced @waitfor@ features are discussed in Section~\ref{s:ExtendedWaitfor}.)
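A minimal sketch of the daisy-chain restructuring (condition @c@ and function names are illustrative): rather than one thread signalling $N$ times, each unblocked thread signals the next, preserving FIFO order:
\begin{cfa}
void release( M & mutex m ) { `signal( c )`; } $\C{// start the chain with one signal}$
void waiter( M & mutex m ) {
	`wait( c )`;
	`signal( c )`; $\C{// daisy-chain: unblock the next waiter in FIFO order}$
}
\end{cfa}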

\begin{figure}
     
};
int girl( DS & mutex ds, int phNo, int ccode ) {
	if ( empty( Boys[ccode] ) ) {
		wait( Girls[ccode] );
		GirlPhNo = phNo;
     
};
int girl( DS & mutex ds, int phNo, int ccode ) {
	if ( empty( Boys[ccode] ) ) { // no compatible
		wait( Girls[ccode] ); // wait for boy
		GirlPhNo = phNo; // make phone number available
     
\qquad
\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
\caption{Dating service monitor}
\label{f:DatingServiceMonitor}
\end{figure}

Figure~\ref{f:DatingServiceMonitor} shows a dating service demonstrating non-blocking and blocking signalling.
The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
A thread blocks until an appropriate partner arrives.
The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
For signal-block scheduling, the implicit urgent stack replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent stack and unblocks the matcher.

The dating service is an important example of a monitor that cannot be written using external scheduling.
First, because scheduling requires knowledge of calling parameters to make matching decisions, and parameters of calling threads are unavailable within the monitor.
For example, a girl thread within the monitor cannot examine the @ccode@ of boy threads waiting on the calling queue to determine if there is a matching partner.
Second, because a scheduling decision may be delayed when there is no immediate match, which requires a condition queue for waiting, and condition queues imply internal scheduling.
For example, if a girl thread could determine there is no calling boy with the same @ccode@, it must wait until a matching boy arrives.
Finally, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number.
This situation shows rechecking the waiting condition and waiting again (signals-as-hints) fails, requiring significant restructuring to account for barging.

Both internal and external scheduling extend to multiple monitors in a natural way.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
\begin{cfa}
monitor M { `condition e`; ... };
     
&
\begin{cfa}
void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); // overload rtn
void rtn$\(_2\)$( M & mutex m1 );
void bar( M & mutex m1, M & mutex m2 ) {
	... waitfor( `rtn`${\color{red}\(_1\)}$ ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\) : m1, m2 )}$
	... waitfor( `rtn${\color{red}\(_2\)}$ : m1` ); ...
}
\end{cfa}
     
For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Wait cannot statically verify the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@.
While \CC supports bulk locking, @wait@ only accepts a single lock for a condition queue, so bulk locking with condition queues is asymmetric.
    21052266Finally, a signaller,
    21062267\begin{cfa}
     
    21112272must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging.
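For example, the following sketch satisfies this rule (the function names @waiter@ and @signaller@ are illustrative):
\begin{cfa}
monitor M { condition e; ... };
void waiter( M & mutex m1, M & mutex m2 ) { wait( e ); } $\C{// blocks holding m1 and m2}$
void signaller( M & mutex m1, M & mutex m2 ) { signal( e ); } $\C{// holds at least m1 and m2}$
\end{cfa}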
    21122273
    2113 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@.
    2114 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
     2274Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn : m1, m2 )@.
     2275To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn : m1 )@.
    21152276@waitfor@ does statically verify the monitor types passed are the same as the acquired mutex-parameters of the given function or function pointer, hence the function (pointer) prototype must be accessible.
    21162277% When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted.
     
    21202281void rtn( M & mutex m );
    21212282`int` rtn( M & mutex m );
    2122 waitfor( (`int` (*)( M & mutex ))rtn, m );
    2123 \end{cfa}
    2124 
    2125 The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
     2283waitfor( (`int` (*)( M & mutex ))rtn : m );
     2284\end{cfa}
     2285
     2286The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock (see Section~\ref{s:MutexAcquisition}).
     2287\newpage
    21262288\begin{cfa}
    21272289void foo( M & mutex m1, M & mutex m2 ) {
    2128         ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired )}$
    2129 void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2 )}$
     2290        ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired}$
     2291void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2}$
    21302292        ... signal( `e` ); ...
    21312293\end{cfa}
    21322294The @wait@ only releases @m1@ so the signalling thread cannot acquire @m1@ and @m2@ to enter @bar@ and @signal@ the condition.
    2133 While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitors, not being perfectly composable.
    2134 
     2295While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitor locking is not perfectly composable.
    21352296
    21362297
    21372298\subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}}
     2299\label{s:ExtendedWaitfor}
    21382300
    21392301Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes.
     
    21462308Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking.
     21472309If both @timeout@ and @else@ clauses are present, the @else@ must be conditional, or the @timeout@ is never triggered.
    2148 There is also a traditional future wait queue (not shown) (\eg Microsoft (@WaitForMultipleObjects@)), to wait for a specified number of future elements in the queue.
     2310There is also a traditional future wait queue (not shown) (\eg Microsoft @WaitForMultipleObjects@), to wait for a specified number of future elements in the queue.
     2311Finally, there is a shorthand for specifying multiple functions using the same set of monitors: @waitfor( f, g, h : m1, m2, m3 )@.
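The following sketch combines these forms (guards @C1@/@C2@, monitor @m@, functions @f@/@g@, and duration @delay@ are illustrative):
\begin{cfa}
when ( C1 ) waitfor( f : m ) { ... }    $\C{// conditionally accept f}$
or when ( C2 ) waitfor( g : m ) { ... } $\C{// or accept g}$
or timeout( delay ) { ... }             $\C{// or give up after delay}$
or when ( C1 && C2 ) else { ... }       $\C{// or conditionally do not block}$
\end{cfa}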
    21492312
    21502313\begin{figure}
     
    21732336The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.
    21742337
    2175 An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructred from a monitor to a thread with the following @main@.
     2338An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructured from a monitor to a thread with the following @main@.
    21762339\begin{cfa}
    21772340void main( Buffer(T) & buffer ) with(buffer) {
    21782341        for () {
    2179                 `waitfor( ^?{}, buffer )` break;
    2180                 or when ( count != 20 ) waitfor( insert, buffer ) { ... }
    2181                 or when ( count != 0 ) waitfor( remove, buffer ) { ... }
     2342                `waitfor( ^?{} : buffer )` break;
     2343                or when ( count != 20 ) waitfor( insert : buffer ) { ... }
     2344                or when ( count != 0 ) waitfor( remove : buffer ) { ... }
    21822345        }
    21832346        // clean up
     
    22712434To support this efficient semantics (and prevent barging), the implementation maintains a list of monitors acquired for each blocked thread.
    22722435When a signaller exits or waits in a monitor function/statement, the front waiter on urgent is unblocked if all its monitors are released.
    2273 Implementing a fast subset check for the necessary released monitors is important.
     2436Implementing a fast subset check for the necessary released monitors is important and discussed in the following sections.
    22742437% The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    22752438
    22762439
    2277 \subsection{Loose Object Definitions}
    2278 \label{s:LooseObjectDefinitions}
    2279 
    2280 In an object-oriented programming language, a class includes an exhaustive list of operations.
    2281 A new class can add members via static inheritance but the subclass still has an exhaustive list of operations.
    2282 (Dynamic member adding, \eg JavaScript~\cite{JavaScript}, is not considered.)
    2283 In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@.
    2284 
    2285 However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult.
    2286 \begin{cfa}
    2287         monitor M { ... }; // common type, included in .h file
    2288 translation unit 1
    2289         void `f`( M & mutex m );
    2290         void g( M & mutex m ) { waitfor( `f`, m ); }
    2291 translation unit 2
    2292         void `f`( M & mutex m ); $\C{// replacing f and g for type M in this translation unit}$
    2293         void `g`( M & mutex m );
    2294         void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } $\C{// extending type M in this translation unit}$
    2295 \end{cfa}
    2296 The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information.
     2440\subsection{\texorpdfstring{\protect\lstinline@waitfor@ Implementation}{waitfor Implementation}}
     2441\label{s:waitforImplementation}
     2442
     2443In a statically-typed object-oriented programming language, a class has an exhaustive list of members, even when members are added via static inheritance (see Figure~\ref{f:uCinheritance}).
      2444Knowing all members at compilation (even separate compilation) allows them to be uniquely numbered, so the accept-statement implementation can use a fast/compact bit mask with an $O(1)$ compare.
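For instance, the acceptance check reduces to a single logical operation (a sketch; the member numbering is illustrative):
\begin{cfa}
enum { maskF = 1 << 0, maskG = 1 << 1 };  $\C{// members uniquely numbered at compilation}$
unsigned int accept = maskF | maskG;      $\C{// accept statement waits for f or g}$
bool matched = (accept & maskG) != 0;     $\C{// call to g: O(1) compare}$
\end{cfa}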
     2445
     2446\begin{figure}
     2447\centering
     2448\begin{lrbox}{\myboxA}
     2449\begin{uC++}[aboveskip=0pt,belowskip=0pt]
     2450$\emph{translation unit 1}$
     2451_Monitor B { // common type in .h file
     2452        _Mutex virtual void `f`( ... );
     2453        _Mutex virtual void `g`( ... );
     2454        _Mutex virtual void w1( ... ) { ... _Accept(`f`, `g`); ... }
     2455};
     2456$\emph{translation unit 2}$
     2457// include B
     2458_Monitor D : public B { // inherit
     2459        _Mutex void `h`( ... ); // add
     2460        _Mutex void w2( ... ) { ... _Accept(`f`, `h`); ... }
     2461};
     2462\end{uC++}
     2463\end{lrbox}
     2464
     2465\begin{lrbox}{\myboxB}
     2466\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2467$\emph{translation unit 1}$
     2468monitor M { ... }; // common type in .h file
     2469void `f`( M & mutex m, ... );
     2470void `g`( M & mutex m, ... );
     2471void w1( M & mutex m, ... ) { ... waitfor(`f`, `g` : m); ... }
     2472
     2473$\emph{translation unit 2}$
     2474// include M
     2475extern void `f`( M & mutex m, ... ); // import f but not g
     2476void `h`( M & mutex m ); // add
     2477void w2( M & mutex m, ... ) { ... waitfor(`f`, `h` : m); ... }
     2478
     2479\end{cfa}
     2480\end{lrbox}
     2481
     2482\subfloat[\uC]{\label{f:uCinheritance}\usebox\myboxA}
     2483\hspace{3pt}
     2484\vrule
     2485\hspace{3pt}
     2486\subfloat[\CFA]{\label{f:CFinheritance}\usebox\myboxB}
     2487\caption{Member / Function visibility}
     2488\label{f:MemberFunctionVisibility}
     2489\end{figure}
     2490
      2491However, the @waitfor@ statement in translation unit 2 (see Figure~\ref{f:CFinheritance}) cannot see function @g@ in translation unit 1, precluding a unique numbering for a bit mask, because the monitor type only carries the protected shared-data.
     2492(A possible way to construct a dense mapping is at link or load-time.)
    22972493Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array.
    2298 Then, the same implementation approach used for the urgent stack is used for the calling queue.
    2299 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers.
    2300 (A possible way to construct a dense mapping is at link or load-time.)
     2494Then, the same implementation approach used for the urgent stack (see Section~\ref{s:Scheduling}) is used for the calling queue.
      2495Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (short) linear search matching functions in the @waitfor@ list with called functions, and then verifies the associated mutex locks can be transferred.
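A sketch of this matching step (names are illustrative):
\begin{cfa}
bool accepted( void * called, void * accept[], int n ) {
	for ( i; n )                            $\C{// (usually short) linear search}$
		if ( accept[i] == called ) return true;
	return false;                           $\C{// otherwise queue the caller}$
}
\end{cfa}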
    23012496
    23022497
     
    23132508The solution is for the programmer to disambiguate:
    23142509\begin{cfa}
    2315 waitfor( f, `m2` ); $\C{// wait for call to f with argument m2}$
     2510waitfor( f : `m2` ); $\C{// wait for call to f with argument m2}$
    23162511\end{cfa}
    23172512Both locks are acquired by function @g@, so when function @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
     
    23202515monitor M { ... };
    23212516void f( M & mutex m1, M & mutex m2 );
    2322 void g( M & mutex m1, M & mutex m2 ) { waitfor( f, `m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$
     2517void g( M & mutex m1, M & mutex m2 ) { waitfor( f : `m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$
    23232518\end{cfa}
    23242519Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting function.
    2325 Also, the order of the monitors in a @waitfor@ statement is unimportant.
    2326 
    2327 Figure~\ref{f:UnmatchedMutexSets} shows an example where, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.
    2328 For both examples, the set of monitors is disjoint so unblocking is impossible.
     2520% Also, the order of the monitors in a @waitfor@ statement must match the order of the mutex parameters.
     2521
      2522Figure~\ref{f:UnmatchedMutexSets} shows internal and external scheduling with multiple monitors, where the signalling or accepting thread must match the waiting thread's monitor set exactly, \ie partial matching results in waiting.
     2523In both cases, the set of monitors is disjoint so unblocking is impossible.
    23292524
    23302525\begin{figure}
     
    23552550}
    23562551void g( M1 & mutex m1, M2 & mutex m2 ) {
    2357         waitfor( f, m1, m2 );
     2552        waitfor( f : m1, m2 );
    23582553}
    23592554g( `m11`, m2 ); // block on accept
     
    23702565\end{figure}
    23712566
    2372 
    2373 \subsection{\texorpdfstring{\protect\lstinline@mutex@ Threads}{mutex Threads}}
    2374 
    2375 Threads in \CFA can also be monitors to allow \emph{direct communication} among threads, \ie threads can have mutex functions that are called by other threads.
    2376 Hence, all monitor features are available when using threads.
    2377 Figure~\ref{f:DirectCommunication} shows a comparison of direct call communication in \CFA with direct channel communication in Go.
    2378 (Ada provides a similar mechanism to the \CFA direct communication.)
    2379 The program main in both programs communicates directly with the other thread versus indirect communication where two threads interact through a passive monitor.
    2380 Both direct and indirection thread communication are valuable tools in structuring concurrent programs.
    2381 
    23822567\begin{figure}
    23832568\centering
     
    23862571
    23872572struct Msg { int i, j; };
    2388 thread GoRtn { int i;  float f;  Msg m; };
     2573monitor thread GoRtn { int i;  float f;  Msg m; };
    23892574void mem1( GoRtn & mutex gortn, int i ) { gortn.i = i; }
    23902575void mem2( GoRtn & mutex gortn, float f ) { gortn.f = f; }
     
    23962581        for () {
    23972582
    2398                 `waitfor( mem1, gortn )` sout | i;  // wait for calls
    2399                 or `waitfor( mem2, gortn )` sout | f;
    2400                 or `waitfor( mem3, gortn )` sout | m.i | m.j;
    2401                 or `waitfor( ^?{}, gortn )` break;
     2583                `waitfor( mem1 : gortn )` sout | i;  // wait for calls
     2584                or `waitfor( mem2 : gortn )` sout | f;
     2585                or `waitfor( mem3 : gortn )` sout | m.i | m.j;
     2586                or `waitfor( ^?{} : gortn )` break; // low priority
    24022587
    24032588        }
     
    24532638\hspace{3pt}
    24542639\subfloat[Go]{\label{f:Gochannel}\usebox\myboxB}
    2455 \caption{Direct communication}
    2456 \label{f:DirectCommunication}
     2640\caption{Direct versus indirect communication}
     2641\label{f:DirectCommunicationComparison}
     2642
     2643\medskip
     2644
     2645\begin{cfa}
     2646monitor thread DatingService {
     2647        condition Girls[CompCodes], Boys[CompCodes];
     2648        int girlPhoneNo, boyPhoneNo, ccode;
     2649};
     2650int girl( DatingService & mutex ds, int phoneno, int code ) with( ds ) {
     2651        girlPhoneNo = phoneno;  ccode = code;
     2652        `wait( Girls[ccode] );`                                                         $\C{// wait for boy}$
     2653        girlPhoneNo = phoneno;  return boyPhoneNo;
     2654}
     2655int boy( DatingService & mutex ds, int phoneno, int code ) with( ds ) {
     2656        boyPhoneNo = phoneno;  ccode = code;
     2657        `wait( Boys[ccode] );`                                                          $\C{// wait for girl}$
     2658        boyPhoneNo = phoneno;  return girlPhoneNo;
     2659}
     2660void main( DatingService & ds ) with( ds ) {                    $\C{// thread starts, ds defaults to mutex}$
     2661        for () {
     2662                waitfor( ^?{} ) break;                                                  $\C{// high priority}$
     2663                or waitfor( girl )                                                              $\C{// girl called, compatible boy ? restart boy then girl}$
     2664                        if ( ! is_empty( Boys[ccode] ) ) { `signal_block( Boys[ccode] );  signal_block( Girls[ccode] );` }
      2665                or waitfor( boy )                                                               $\C{// boy called, compatible girl ? restart girl then boy}$
     2666                        if ( ! is_empty( Girls[ccode] ) ) { `signal_block( Girls[ccode] );  signal_block( Boys[ccode] );` }
     2667        }
     2668}
     2669\end{cfa}
     2670\caption{Direct communication dating service}
     2671\label{f:DirectCommunicationDatingService}
    24572672\end{figure}
    24582673
     
    24692684void main( Ping & pi ) {
    24702685        for ( 10 ) {
    2471                 `waitfor( ping, pi );`
     2686                `waitfor( ping : pi );`
    24722687                `pong( po );`
    24732688        }
     
    24822697        for ( 10 ) {
    24832698                `ping( pi );`
    2484                 `waitfor( pong, po );`
     2699                `waitfor( pong : po );`
    24852700        }
    24862701}
     
    24972712
    24982713
    2499 \subsection{Execution Properties}
    2500 
    2501 Table~\ref{t:ObjectPropertyComposition} shows how the \CFA high-level constructs cover 3 fundamental execution properties: thread, stateful function, and mutual exclusion.
    2502 Case 1 is a basic object, with none of the new execution properties.
    2503 Case 2 allows @mutex@ calls to Case 1 to protect shared data.
    2504 Case 3 allows stateful functions to suspend/resume but restricts operations because the state is stackless.
    2505 Case 4 allows @mutex@ calls to Case 3 to protect shared data.
    2506 Cases 5 and 6 are the same as 3 and 4 without restriction because the state is stackful.
    2507 Cases 7 and 8 are rejected because a thread cannot execute without a stackful state in a preemptive environment when context switching from the signal handler.
    2508 Cases 9 and 10 have a stackful thread without and with @mutex@ calls.
    2509 For situations where threads do not require direct communication, case 9 provides faster creation/destruction by eliminating @mutex@ setup.
    2510 
    2511 \begin{table}
    2512 \caption{Object property composition}
    2513 \centering
    2514 \label{t:ObjectPropertyComposition}
    2515 \renewcommand{\arraystretch}{1.25}
    2516 %\setlength{\tabcolsep}{5pt}
    2517 \begin{tabular}{c|c||l|l}
    2518 \multicolumn{2}{c||}{object properties} & \multicolumn{2}{c}{mutual exclusion} \\
    2519 \hline
    2520 thread  & stateful                              & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\
    2521 \hline
    2522 \hline
    2523 No              & No                                    & \textbf{1}\ \ \ aggregate type                & \textbf{2}\ \ \ @monitor@ aggregate type \\
    2524 \hline
    2525 No              & Yes (stackless)               & \textbf{3}\ \ \ @generator@                   & \textbf{4}\ \ \ @monitor@ @generator@ \\
    2526 \hline
    2527 No              & Yes (stackful)                & \textbf{5}\ \ \ @coroutine@                   & \textbf{6}\ \ \ @monitor@ @coroutine@ \\
    2528 \hline
    2529 Yes             & No / Yes (stackless)  & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\
    2530 \hline
    2531 Yes             & Yes (stackful)                & \textbf{9}\ \ \ @thread@                              & \textbf{10}\ \ @monitor@ @thread@ \\
    2532 \end{tabular}
    2533 \end{table}
     2714\subsection{\texorpdfstring{\protect\lstinline@monitor@ Generators / Coroutines / Threads}{monitor Generators / Coroutines / Threads}}
     2715
     2716\CFA generators, coroutines, and threads can also be monitors (Table~\ref{t:ExecutionPropertyComposition} cases 4, 6, 12) allowing safe \emph{direct communication} with threads, \ie the custom types can have mutex functions that are called by other threads.
     2717All monitor features are available within these mutex functions.
     2718For example, if the formatter generator (or coroutine equivalent) in Figure~\ref{f:CFAFormatGen} is extended with the monitor property and this interface function is used to communicate with the formatter:
     2719\begin{cfa}
      2720void fmt( Fmt & mutex fmt, char ch ) { fmt.ch = ch; resume( fmt ); }
     2721\end{cfa}
     2722multiple threads can safely pass characters for formatting.
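For example, two client threads can then safely interleave characters through one formatter (a sketch; the @Client@ thread and loop bound are illustrative):
\begin{cfa}
Fmt fmt;                                                $\C{// shared monitor formatter}$
thread Client {};
void main( Client & ) { for ( 100 ) fmt( fmt, 'x' ); }  $\C{// serialized mutex calls}$
int main() { Client c1, c2; }                           $\C{// two writers; implicit join}$
\end{cfa}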
     2723
     2724Figure~\ref{f:DirectCommunicationComparison} shows a comparison of direct call-communication in \CFA versus indirect channel-communication in Go.
     2725(Ada has a similar mechanism to \CFA direct communication.)
      2726The program @main@ thread in \CFA uses the call/return paradigm to communicate directly with the @GoRtn main@, whereas Go switches to the channel paradigm to communicate indirectly with the goroutine.
     2727Communication by multiple threads is safe for the @gortn@ thread via mutex calls in \CFA or channel assignment in Go.
     2728
     2729Figure~\ref{f:DirectCommunicationDatingService} shows the dating-service problem in Figure~\ref{f:DatingServiceMonitor} extended from indirect monitor communication to direct thread communication.
      2730When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted members into the thread main so it does as much work as possible.
     2731Notice, the dating server is postponing requests for an unspecified time while continuing to accept new requests.
     2732For complex servers (web-servers), there can be hundreds of lines of code in the thread main and safe interaction with clients can be complex.
    25342733
    25352734
     
    25372736
    25382737For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
    2539 Some of these low-level mechanism are used in the \CFA runtime, but we strongly advocate using high-level mechanisms whenever possible.
      2738Some of these low-level mechanisms are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible.
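For illustration only, a spinning test-and-test-set lock can be sketched from these primitives (assuming @testSet@ atomically sets its argument and returns the previous value):
\begin{cfa}
struct Spinlock { volatile int taken; };
void lock( Spinlock & l ) {
	while ( testSet( l.taken ) )            $\C{// attempt acquire}$
		while ( l.taken ) {}                $\C{// spin reading until free}$
}
void unlock( Spinlock & l ) { l.taken = 0; }
\end{cfa}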
    25402739
    25412740
     
    25802779\begin{cfa}
    25812780struct Adder {
    2582     int * row, cols;
     2781        int * row, cols;
    25832782};
    25842783int operator()() {
     
    26392838\label{s:RuntimeStructureCluster}
    26402839
    2641 A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads).
     2840A \newterm{cluster} is a collection of user and kernel threads, where the kernel threads run the user threads from the cluster's ready queue, and the operating system runs the kernel threads on the processors from its ready queue.
     2841The term \newterm{virtual processor} is introduced as a synonym for kernel thread to disambiguate between user and kernel thread.
     2842From the language perspective, a virtual processor is an actual processor (core).
     2843
    26422844The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
    26432845The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
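For example, parallelism can be increased by adding processors to a cluster (a sketch, assuming RAII-style @cluster@ and @processor@ constructors along these lines):
\begin{cfa}
cluster clus;                               $\C{// new cluster with its own ready queue}$
processor p1( clus ), p2( clus );           $\C{// two kernel threads serving clus}$
// user threads created on clus now execute with 2-way parallelism
\end{cfa}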
     
    26582860Programs may use more virtual processors than hardware processors.
    26592861On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
    2660 (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
     2862(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux,affinityWindows}, which is used when caching issues occur or for heterogeneous hardware processors.) %, affinityFreebsd, affinityNetbsd, affinityMacosx
    26612863The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
    2662 balancing the workload with processors is difficult because it requires future knowledge, \ie what will the applicaton workload do next.
     2864balancing the workload with processors is difficult because it requires future knowledge, \ie what will the application workload do next.
    26632865Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
     26642866Thus, virtual processors execute user threads, but the preemption frequency applies to a virtual processor, so preemption occurs randomly across its executed user threads.
     
     26952897Nondeterministic preemption provides fairness in the presence of long-running threads, and forces concurrent programmers to write more robust programs, rather than relying on code between cooperative scheduling points being atomic.
    26962898This atomic reliance can fail on multi-core machines, because execution across cores is nondeterministic.
    2697 A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg Microsoft runtime does not support interrupts and on Linux systems, interrupts are complex (see below).
     2899A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg Windows runtime does not support interrupts and on Linux systems, interrupts are complex (see below).
    26982900Preemption is normally handled by setting a countdown timer on each virtual processor.
    2699 When the timer expires, an interrupt is delivered, and the interrupt handler resets the countdown timer, and if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
      2901When the timer expires, an interrupt is delivered, and its signal handler resets the countdown timer; if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
    27002902Multiple signal handlers may be pending.
    27012903When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered.
    27022904The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler;
    27032905therefore, the same signal mask is required for all virtual processors in a cluster.
    2704 Because preemption frequency is usually long (1 millisecond) performance cost is negligible.
    2705 
    2706 Linux switched a decade ago from specific to arbitrary process signal-delivery for applications with multiple kernel threads.
    2707 \begin{cquote}
    2708 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
    2709 If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which it will deliver the signal.
    2710 SIGNAL(7) - Linux Programmer's Manual
    2711 \end{cquote}
      2906Because the preemption interval is usually long (1 millisecond), the performance cost is negligible.
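A simplified sketch of the countdown-timer mechanism using POSIX interval timers (the runtime's actual handler and signal plumbing are more involved):
\begin{cfa}
#include <signal.h>
#include <sys/time.h>
void preempt( int ) { /* if in user code, context switch to the scheduler */ }
void startPreemption() {
	signal( SIGALRM, preempt );             $\C{// install handler}$
	struct itimerval t = { { 0, 1000 }, { 0, 1000 } }; $\C{// 1 ms interval and initial value}$
	setitimer( ITIMER_REAL, &t, 0 );        $\C{// arm countdown timer}$
}
\end{cfa}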
     2907
     2908Linux switched a decade ago from specific to arbitrary virtual-processor signal-delivery for applications with multiple kernel threads.
     2909In the new semantics, a virtual-processor directed signal may be delivered to any virtual processor created by the application that does not have the signal blocked.
    27122910Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads).
    27132911To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events.
     
    27272925\label{s:Performance}
    27282926
    2729 To verify the implementation of the \CFA runtime, a series of microbenchmarks are performed comparing \CFA with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
     2927To test the performance of the \CFA runtime, a series of microbenchmarks are used to compare \CFA with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0.
     27302928For comparison, the package must be multi-processor (M:N), which excludes libdill/libmil~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
    2731 The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and \CFA/\uC are compiled with gcc 6.5.
     2929The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and pthreads/\CFA/\uC are compiled with gcc 9.2.1.
    27322930
    27332931All benchmarks are run using the following harness. (The Java harness is augmented to circumvent JIT issues.)
    27342932\begin{cfa}
    2735 unsigned int N = 10_000_000;
    2736 #define BENCH( `run` ) Time before = getTimeNsec();  `run;`  Duration result = (getTimeNsec() - before) / N;
    2737 \end{cfa}
    2738 The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
    2739 Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
    2740 the total time is divided by @N@ to obtain the average time for a benchmark.
    2741 Each benchmark experiment is run 31 times.
     2933#define BENCH( `run` ) uint64_t start = cputime_ns();  `run;`  double result = (double)(cputime_ns() - start) / N;
     2934\end{cfa}
      2935where the CPU time in nanoseconds is obtained from the appropriate language clock.
     2936Each benchmark is performed @N@ times, where @N@ is selected so the benchmark runs in the range of 2--20 seconds for the specific programming language.
     2937The total time is divided by @N@ to obtain the average time for a benchmark.
     2938Each benchmark experiment is run 13 times and the average appears in the table.
    27422939All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallBenchMarks}.
    2743 % tar --exclude=.deps --exclude=Makefile --exclude=Makefile.in --exclude=c.c --exclude=cxx.cpp --exclude=fetch_add.c -cvhf benchmark.tar benchmark
    2744 
    2745 \paragraph{Object Creation}
    2746 
    2747 Object creation is measured by creating/deleting the specific kind of concurrent object.
    2748 Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}.
    2749 The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine to force stack creation, the creation cost is artificially low.
    2750 
    2751 \begin{multicols}{2}
    2752 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    2753 \begin{cfa}
    2754 @thread@ MyThread {};
    2755 void @main@( MyThread & ) {}
    2756 int main() {
    2757         BENCH( for ( N ) { @MyThread m;@ } )
    2758         sout | result`ns;
    2759 }
    2760 \end{cfa}
    2761 \captionof{figure}{\CFA object-creation benchmark}
    2762 \label{f:creation}
    2763 
    2764 \columnbreak
    2765 
    2766 \vspace*{-16pt}
    2767 \captionof{table}{Object creation comparison (nanoseconds)}
    2768 \label{tab:creation}
    2769 
    2770 \begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}}
    2771 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2772 \CFA Coroutine Lazy             & 13.2          & 13.1          & 0.44          \\
    2773 \CFA Coroutine Eager    & 531.3         & 536.0         & 26.54         \\
    2774 \CFA Thread                             & 2074.9        & 2066.5        & 170.76        \\
    2775 \uC Coroutine                   & 89.6          & 90.5          & 1.83          \\
    2776 \uC Thread                              & 528.2         & 528.5         & 4.94          \\
    2777 Goroutine                               & 4068.0        & 4113.1        & 414.55        \\
    2778 Java Thread                             & 103848.5      & 104295.4      & 2637.57       \\
    2779 Pthreads                                & 33112.6       & 33127.1       & 165.90
    2780 \end{tabular}
    2781 \end{multicols}
    2782 
    2783 
    2784 \paragraph{Context-Switching}
     2940% tar --exclude-ignore=exclude -cvhf benchmark.tar benchmark
     2941
     2942\paragraph{Context Switching}
    27852943
    27862944In procedural programming, the cost of a function call is important as modularization (refactoring) increases.
    2787 (In many cases, a compiler inlines function calls to eliminate this cost.)
    2788 Similarly, when modularization extends to coroutines/tasks, the time for a context switch becomes a relevant factor.
      2945(In many cases, a compiler inlines function calls to increase the size and number of basic blocks for optimization.)
     2946Similarly, when modularization extends to coroutines/threads, the time for a context switch becomes a relevant factor.
    27892947The coroutine test is from resumer to suspender and from suspender to resumer, which is two context switches.
     2948%For async-await systems, the test is scheduling and fulfilling @N@ empty promises, where all promises are allocated before versus interleaved with fulfillment to avoid garbage collection.
     2949For async-await systems, the test measures the cost of the @await@ expression entering the event engine by awaiting @N@ promises, where each created promise is resolved by an immediate event in the engine (using Node.js @setImmediate@).
     27902950The thread test uses yield to enter and return from the runtime kernel, which is two context switches.
    27912951The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.
    2792 Figure~\ref{f:ctx-switch} only shows the \CFA code for coroutines/threads (other systems are similar) with all results in Table~\ref{tab:ctx-switch}.
     2952Figure~\ref{f:ctx-switch} shows the \CFA code for a coroutine/thread with results in Table~\ref{t:ctx-switch}.
     2953
     2954% From: Gregor Richards <gregor.richards@uwaterloo.ca>
     2955% To: "Peter A. Buhr" <pabuhr@plg2.cs.uwaterloo.ca>
     2956% Date: Fri, 24 Jan 2020 13:49:18 -0500
     2957%
     2958% I can also verify that the previous version, which just tied a bunch of promises together, *does not* go back to the
     2959% event loop at all in the current version of Node. Presumably they're taking advantage of the fact that the ordering of
     2960% events is intentionally undefined to just jump right to the next 'then' in the chain, bypassing event queueing
     2961% entirely. That's perfectly correct behavior insofar as its difference from the specified behavior isn't observable, but
     2962% it isn't typical or representative of much anything useful, because most programs wouldn't have whole chains of eager
     2963% promises. Also, it's not representative of *anything* you can do with async/await, as there's no way to encode such an
     2964% eager chain that way.
    27932965
    27942966\begin{multicols}{2}
     
    27962968\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    27972969@coroutine@ C {} c;
    2798 void main( C & ) { for ( ;; ) { @suspend;@ } }
     2970void main( C & ) { while () { @suspend;@ } }
    27992971int main() { // coroutine test
    28002972        BENCH( for ( N ) { @resume( c );@ } )
    2801         sout | result`ns;
    2802 }
    2803 int main() { // task test
     2973        sout | result;
     2974}
     2975int main() { // thread test
    28042976        BENCH( for ( N ) { @yield();@ } )
    2805         sout | result`ns;
     2977        sout | result;
    28062978}
    28072979\end{cfa}
     
    28132985\vspace*{-16pt}
    28142986\captionof{table}{Context switch comparison (nanoseconds)}
    2815 \label{tab:ctx-switch}
     2987\label{t:ctx-switch}
    28162988\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    28172989\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2818 C function              & 1.8   & 1.8   & 0.01  \\
    2819 \CFA generator  & 2.4   & 2.2   & 0.25  \\
    2820 \CFA Coroutine  & 36.2  & 36.2  & 0.25  \\
    2821 \CFA Thread             & 93.2  & 93.5  & 2.09  \\
    2822 \uC Coroutine   & 52.0  & 52.1  & 0.51  \\
    2823 \uC Thread              & 96.2  & 96.3  & 0.58  \\
    2824 Goroutine               & 141.0 & 141.3 & 3.39  \\
    2825 Java Thread             & 374.0 & 375.8 & 10.38 \\
    2826 Pthreads Thread & 361.0 & 365.3 & 13.19
     2990C function                      & 1.8           & 1.8           & 0.0   \\
     2991\CFA generator          & 1.8           & 1.8           & 0.1   \\
     2992\CFA coroutine          & 32.5          & 32.9          & 0.8   \\
     2993\CFA thread                     & 93.8          & 93.6          & 2.2   \\
     2994\uC coroutine           & 50.3          & 50.3          & 0.2   \\
     2995\uC thread                      & 97.3          & 97.4          & 1.0   \\
     2996Python generator        & 40.9          & 41.3          & 1.5   \\
     2997Node.js generator       & 32.6          & 32.2          & 1.0   \\
     2998Node.js await           & 1852.2        & 1854.7        & 16.4  \\
     2999Goroutine thread        & 143.0         & 143.3         & 1.1   \\
     3000Rust thread                     & 332.0         & 331.4         & 2.4   \\
     3001Java thread                     & 405.0         & 415.0         & 17.6  \\
     3002Pthreads thread         & 334.3         & 335.2         & 3.9
    28273003\end{tabular}
    28283004\end{multicols}
    28293005
    2830 
    2831 \paragraph{Mutual-Exclusion}
    2832 
    2833 Uncontented mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section.
    2834 For monitors, entering and leaving a monitor function is measured.
    2835 To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
    2836 Figure~\ref{f:mutex} shows the code for \CFA with all results in Table~\ref{tab:mutex}.
     3006\paragraph{Internal Scheduling}
     3007
     3008Internal scheduling is measured using a cycle of two threads signalling and waiting.
     3009Figure~\ref{f:schedint} shows the code for \CFA, with results in Table~\ref{t:schedint}.
     28373010Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
      3011Java scheduling is significantly more expensive because the benchmark explicitly creates multiple threads in order to prevent the JIT from making the program sequential, \ie removing all locking.
    28383012
    28393013\begin{multicols}{2}
    28403014\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    28413015\begin{cfa}
     3016volatile int go = 0;
     3017@condition c;@
    28423018@monitor@ M {} m1/*, m2, m3, m4*/;
    2843 void __attribute__((noinline))
    2844 do_call( M & @mutex m/*, m2, m3, m4*/@ ) {}
     3019void call( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3020        @signal( c );@
     3021}
     3022void wait( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3023        go = 1; // continue other thread
      3024        for ( N ) { @wait( c );@ }
     3025}
     3026thread T {};
     3027void main( T & ) {
     3028        while ( go == 0 ) { yield(); } // waiter must start first
     3029        BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } )
     3030        sout | result;
     3031}
    28453032int main() {
    2846         BENCH(
    2847                 for( N ) do_call( m1/*, m2, m3, m4*/ );
    2848         )
    2849         sout | result`ns;
    2850 }
    2851 \end{cfa}
    2852 \captionof{figure}{\CFA acquire/release mutex benchmark}
    2853 \label{f:mutex}
     3033        T t;
     3034        wait( m1/*, m2, m3, m4*/ );
     3035}
     3036\end{cfa}
     3037\captionof{figure}{\CFA Internal-scheduling benchmark}
     3038\label{f:schedint}
    28543039
    28553040\columnbreak
    28563041
    28573042\vspace*{-16pt}
    2858 \captionof{table}{Mutex comparison (nanoseconds)}
    2859 \label{tab:mutex}
    2860 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    2861 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2862 test and test-and-test lock             & 19.1  & 18.9  & 0.40  \\
    2863 \CFA @mutex@ function, 1 arg.   & 45.9  & 46.6  & 1.45  \\
    2864 \CFA @mutex@ function, 2 arg.   & 105.0 & 104.7 & 3.08  \\
    2865 \CFA @mutex@ function, 4 arg.   & 165.0 & 167.6 & 5.65  \\
    2866 \uC @monitor@ member rtn.               & 54.0  & 53.7  & 0.82  \\
    2867 Java synchronized method                & 31.0  & 31.1  & 0.50  \\
    2868 Pthreads Mutex Lock                             & 33.6  & 32.6  & 1.14
     3043\captionof{table}{Internal-scheduling comparison (nanoseconds)}
     3044\label{t:schedint}
     3045\bigskip
     3046
     3047\begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}}
     3048\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3049\CFA @signal@, 1 monitor        & 364.4         & 364.2         & 4.4           \\
     3050\CFA @signal@, 2 monitor        & 484.4         & 483.9         & 8.8           \\
     3051\CFA @signal@, 4 monitor        & 709.1         & 707.7         & 15.0          \\
     3052\uC @signal@ monitor            & 328.3         & 327.4         & 2.4           \\
     3053Rust cond. variable                     & 7514.0        & 7437.4        & 397.2         \\
     3054Java @notify@ monitor           & 9623.0        & 9654.6        & 236.2         \\
     3055Pthreads cond. variable         & 5553.7        & 5576.1        & 345.6
    28693056\end{tabular}
    28703057\end{multicols}
     
    28743061
    28753062External scheduling is measured using a cycle of two threads calling and accepting the call using the @waitfor@ statement.
    2876 Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
     3063Figure~\ref{f:schedext} shows the code for \CFA with results in Table~\ref{t:schedext}.
     28773064Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    28783065
     
    28813068\vspace*{-16pt}
    28823069\begin{cfa}
    2883 volatile int go = 0;
    2884 @monitor@ M {} m;
     3070@monitor@ M {} m1/*, m2, m3, m4*/;
     3071void call( M & @mutex p1/*, p2, p3, p4*/@ ) {}
     3072void wait( M & @mutex p1/*, p2, p3, p4*/@ ) {
     3073        for ( N ) { @waitfor( call : p1/*, p2, p3, p4*/ );@ }
     3074}
    28853075thread T {};
    2886 void __attribute__((noinline))
    2887 do_call( M & @mutex@ ) {}
    28883076void main( T & ) {
    2889         while ( go == 0 ) { yield(); }
    2890         while ( go == 1 ) { do_call( m ); }
    2891 }
    2892 int __attribute__((noinline))
    2893 do_wait( M & @mutex@ m ) {
    2894         go = 1; // continue other thread
    2895         BENCH( for ( N ) { @waitfor( do_call, m );@ } )
    2896         go = 0; // stop other thread
    2897         sout | result`ns;
     3077        BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } )
     3078        sout | result;
    28983079}
    28993080int main() {
    29003081        T t;
    2901         do_wait( m );
     3082        wait( m1/*, m2, m3, m4*/ );
    29023083}
    29033084\end{cfa}
    29043085\captionof{figure}{\CFA external-scheduling benchmark}
    2905 \label{f:ext-sched}
     3086\label{f:schedext}
    29063087
    29073088\columnbreak
     
    29093090\vspace*{-16pt}
    29103091\captionof{table}{External-scheduling comparison (nanoseconds)}
    2911 \label{tab:ext-sched}
     3092\label{t:schedext}
    29123093\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
    29133094\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2914 \CFA @waitfor@, 1 @monitor@     & 376.4 & 376.8 & 7.63  \\
    2915 \CFA @waitfor@, 2 @monitor@     & 491.4 & 492.0 & 13.31 \\
    2916 \CFA @waitfor@, 4 @monitor@     & 681.0 & 681.7 & 19.10 \\
    2917 \uC @_Accept@                           & 331.1 & 331.4 & 2.66
     3095\CFA @waitfor@, 1 monitor       & 367.1 & 365.3 & 5.0   \\
     3096\CFA @waitfor@, 2 monitor       & 463.0 & 464.6 & 7.1   \\
     3097\CFA @waitfor@, 4 monitor       & 689.6 & 696.2 & 21.5  \\
     3098\uC \lstinline[language=uC++]|_Accept| monitor  & 328.2 & 329.1 & 3.4   \\
     3099Go \lstinline[language=Golang]|select| channel  & 365.0 & 365.5 & 1.2
    29183100\end{tabular}
    29193101\end{multicols}
    29203102
    2921 
    2922 \paragraph{Internal Scheduling}
    2923 
    2924 Internal scheduling is measured using a cycle of two threads signalling and waiting.
    2925 Figure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.
    2926 Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    2927 Java scheduling is significantly greater because the benchmark explicitly creates multiple thread in order to prevent the JIT from making the program sequential, \ie removing all locking.
      3103\paragraph{Mutual Exclusion}
     3104
      3105Uncontended mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section.
      3106For monitors, entering and leaving a monitor function is measured; otherwise, the language-appropriate mutex-lock is measured.
     3107For comparison, a spinning (versus blocking) test-and-test-set lock is presented.
     3108Figure~\ref{f:mutex} shows the code for \CFA with results in Table~\ref{t:mutex}.
     3109Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    29283110
    29293111\begin{multicols}{2}
    29303112\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
    29313113\begin{cfa}
    2932 volatile int go = 0;
    2933 @monitor@ M { @condition c;@ } m;
    2934 void __attribute__((noinline))
    2935 do_call( M & @mutex@ a1 ) { @signal( c );@ }
    2936 thread T {};
    2937 void main( T & this ) {
    2938         while ( go == 0 ) { yield(); }
    2939         while ( go == 1 ) { do_call( m ); }
    2940 }
    2941 int  __attribute__((noinline))
    2942 do_wait( M & mutex m ) with(m) {
    2943         go = 1; // continue other thread
    2944         BENCH( for ( N ) { @wait( c );@ } );
    2945         go = 0; // stop other thread
    2946         sout | result`ns;
    2947 }
     3114@monitor@ M {} m1/*, m2, m3, m4*/;
      3115void call( M & @mutex p1/*, p2, p3, p4*/@ ) {}
    29483116int main() {
    2949         T t;
    2950         do_wait( m );
    2951 }
    2952 \end{cfa}
    2953 \captionof{figure}{\CFA Internal-scheduling benchmark}
    2954 \label{f:int-sched}
     3117        BENCH( for( N ) call( m1/*, m2, m3, m4*/ ); )
     3118        sout | result;
     3119}
     3120\end{cfa}
     3121\captionof{figure}{\CFA acquire/release mutex benchmark}
     3122\label{f:mutex}
    29553123
    29563124\columnbreak
    29573125
    29583126\vspace*{-16pt}
    2959 \captionof{table}{Internal-scheduling comparison (nanoseconds)}
    2960 \label{tab:int-sched}
    2961 \bigskip
    2962 
    2963 \begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}}
    2964 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
    2965 \CFA @signal@, 1 @monitor@      & 372.6         & 374.3         & 14.17         \\
    2966 \CFA @signal@, 2 @monitor@      & 492.7         & 494.1         & 12.99         \\
    2967 \CFA @signal@, 4 @monitor@      & 749.4         & 750.4         & 24.74         \\
    2968 \uC @signal@                            & 320.5         & 321.0         & 3.36          \\
    2969 Java @notify@                           & 10160.5       & 10169.4       & 267.71        \\
    2970 Pthreads Cond. Variable         & 4949.6        & 5065.2        & 363
     3127\captionof{table}{Mutex comparison (nanoseconds)}
     3128\label{t:mutex}
     3129\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
     3130\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3131test-and-test-set lock                  & 19.1  & 18.9  & 0.4   \\
     3132\CFA @mutex@ function, 1 arg.   & 48.3  & 47.8  & 0.9   \\
     3133\CFA @mutex@ function, 2 arg.   & 86.7  & 87.6  & 1.9   \\
     3134\CFA @mutex@ function, 4 arg.   & 173.4 & 169.4 & 5.9   \\
     3135\uC @monitor@ member rtn.               & 54.8  & 54.8  & 0.1   \\
     3136Goroutine mutex lock                    & 34.0  & 34.0  & 0.0   \\
     3137Rust mutex lock                                 & 33.0  & 33.2  & 0.8   \\
     3138Java synchronized method                & 31.0  & 31.0  & 0.0   \\
     3139Pthreads mutex Lock                             & 31.0  & 31.1  & 0.4
    29713140\end{tabular}
    29723141\end{multicols}
    29733142
     3143\paragraph{Creation}
     3144
     3145Creation is measured by creating/deleting a specific kind of control-flow object.
     3146Figure~\ref{f:creation} shows the code for \CFA with results in Table~\ref{t:creation}.
      3147Note, the call stacks of \CFA coroutines are lazily created on the first resume; therefore, the cost of creation with and without a stack is presented.
     3148
     3149\begin{multicols}{2}
     3150\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     3151\begin{cfa}
     3152@coroutine@ MyCoroutine {};
     3153void ?{}( MyCoroutine & this ) {
     3154#ifdef EAGER
     3155        resume( this );
     3156#endif
     3157}
     3158void main( MyCoroutine & ) {}
     3159int main() {
     3160        BENCH( for ( N ) { @MyCoroutine c;@ } )
     3161        sout | result;
     3162}
     3163\end{cfa}
     3164\captionof{figure}{\CFA creation benchmark}
     3165\label{f:creation}
     3166
     3167\columnbreak
     3168
     3169\vspace*{-16pt}
     3170\captionof{table}{Creation comparison (nanoseconds)}
     3171\label{t:creation}
     3172
     3173\begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}}
     3174\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     3175\CFA generator                  & 0.6           & 0.6           & 0.0           \\
     3176\CFA coroutine lazy             & 13.4          & 13.1          & 0.5           \\
     3177\CFA coroutine eager    & 144.7         & 143.9         & 1.5           \\
     3178\CFA thread                             & 466.4         & 468.0         & 11.3          \\
     3179\uC coroutine                   & 155.6         & 155.7         & 1.7           \\
     3180\uC thread                              & 523.4         & 523.9         & 7.7           \\
     3181Python generator                & 123.2         & 124.3         & 4.1           \\
     3182Node.js generator               & 32.3          & 32.2          & 0.3           \\
     3183Goroutine thread                & 751.0         & 750.5         & 3.1           \\
     3184Rust thread                             & 53801.0       & 53896.8       & 274.9         \\
     3185Java thread                             & 120274.0      & 120722.9      & 2356.7        \\
     3186Pthreads thread                 & 31465.5       & 31419.5       & 140.4
     3187\end{tabular}
     3188\end{multicols}
     3189
     3190
     3191\subsection{Discussion}
     3192
      3193Languages using 1:1 threading based on pthreads can at best meet the pthread results, and typically exceed them (run slower) because of language overhead.
      3194Note, pthreads has a fast zero-contention mutex lock checked in user space.
      3195Languages with M:N threading have better performance than 1:1 because there are no operating-system interactions.
      3196Languages with stackful coroutines have higher cost than stackless coroutines because of stack allocation and context switching;
      3197however, stackful \uC and \CFA coroutines have approximately the same performance as stackless Python and Node.js generators.
      3198The \CFA stackless generator is approximately 20 times faster for suspend/resume than the Python and Node.js generators, and approximately 200 (Python) and 50 (Node.js) times faster for creation.
     3199
    29743200
    29753201\section{Conclusion}
     
    29773203Advanced control-flow will always be difficult, especially when there is temporal ordering and nondeterminism.
    29783204However, many systems exacerbate the difficulty through their presentation mechanisms.
    2979 This paper shows it is possible to present a hierarchy of control-flow features, generator, coroutine, thread, and monitor, providing an integrated set of high-level, efficient, and maintainable control-flow features.
    2980 Eliminated from \CFA are spurious wakeup and barging, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques.
    2981 \CFA high-level race-free monitors and tasks provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@.
      3205This paper shows it is possible to understand high-level control-flow using three execution properties: statefulness, thread, and mutual-exclusion/synchronization.
      3206Combining these properties creates a number of high-level, efficient, and maintainable control-flow types: generator, coroutine, and thread, each of which can also be a monitor.
     3207Eliminated from \CFA are barging and spurious wakeup, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques.
     3208\CFA high-level race-free monitors and threads provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@.
    29823209Extending these mechanisms to handle high-level deadlock-free bulk acquire across both mutual exclusion and synchronization is a unique contribution.
    29833210The \CFA runtime provides concurrency based on a preemptive M:N user-level threading-system, executing in clusters, which encapsulate scheduling of work on multiple kernel threads providing parallelism.
    29843211The M:N model is judged to be efficient and provide greater flexibility than a 1:1 threading model.
    29853212These concepts and the \CFA runtime-system are written in the \CFA language, extensively leveraging the \CFA type-system, which demonstrates the expressiveness of the \CFA language.
    2986 Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all low-level operations, which translates directly into good performance in well-written concurrent applications.
    2987 C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need.
     3213Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all basic operations, which translates directly into good performance in well-written applications with advanced control-flow.
      3214C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need using only call/return communication.
    29883215
    29893216
     
    30053232\label{futur:nbio}
    30063233
    3007 Many modern workloads are not bound by computation but IO operations, a common case being web servers and XaaS~\cite{XaaS} (anything as a service).
      3234Many modern workloads are not bound by computation but by IO operations, common cases being web servers and XaaS~\cite{XaaS} (anything as a service).
     30083235These types of workloads require significant engineering to amortize the cost of blocking IO operations.
    30093236At its core, non-blocking I/O is an operating-system level feature queuing IO operations, \eg network operations, and registering for notifications instead of waiting for requests to complete.
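As a sketch of this OS-level feature (Linux @epoll@, simplified; @sock@ is an arbitrary file descriptor):
\begin{cfa}
#include <sys/epoll.h>
void eventLoop( int sock ) {
	int epfd = epoll_create1( 0 );               $\C{// kernel event queue}$
	struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = sock } };
	epoll_ctl( epfd, EPOLL_CTL_ADD, sock, &ev ); $\C{// register for notification}$
	struct epoll_event ready[16];
	int n = epoll_wait( epfd, ready, 16, -1 );   $\C{// wait instead of blocking per request}$
	// dispatch the n user threads whose IO completed
}
\end{cfa}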
     
    30333260\section{Acknowledgements}
    30343261
    3035 The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach and Michael Brooks on the features described in this paper.
    3036 Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}). %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
     3262The authors recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach, and Michael Brooks; David Dice for commenting and helping with the Java benchmarks; and Gregor Richards for helping with the Node.js benchmarks.
     3263This research is funded by a grant from Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
    30373264
    30383265{%
    3039 \fontsize{9bp}{12bp}\selectfont%
     3266\fontsize{9bp}{11.5bp}\selectfont%
    30403267\bibliography{pl,local}
    30413268}%