source: doc/papers/concurrency/Paper.tex @ 0161ddf

Last change: 0161ddf, checked in by Peter A. Buhr <pabuhr@…>, 2 years ago

rewrite introduction, add generator section, update coroutine section

\documentclass[AMA,STIX1COL]{WileyNJD-v2}

\articletype{RESEARCH ARTICLE}%

\received{XXXXX}
\revised{XXXXX}
\accepted{XXXXX}

\raggedbottom

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Latex packages used in the document.

\usepackage{epic,eepic}
\usepackage{xspace}
\usepackage{comment}
\usepackage{upquote}                                            % switch curled `'" to straight
\usepackage{listings}                                           % format program code
\usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig}
\renewcommand{\thesubfigure}{(\Alph{subfigure})}
\captionsetup{justification=raggedright,singlelinecheck=false}
\usepackage{dcolumn}                                            % align decimal points in tables
\usepackage{capt-of}

\hypersetup{breaklinks=true}
\definecolor{OliveGreen}{cmyk}{0.64 0 0.95 0.40}
\definecolor{Mahogany}{cmyk}{0 0.85 0.87 0.35}
\definecolor{Plum}{cmyk}{0.50 1 0 0}

\usepackage[pagewise]{lineno}
\renewcommand{\linenumberfont}{\scriptsize\sffamily}

\renewcommand{\topfraction}{0.8}                        % float must be greater than X of the page before it is forced onto its own page
\renewcommand{\bottomfraction}{0.8}
\renewcommand{\floatpagefraction}{0.8}
\renewcommand{\textfraction}{0.0}                       % the entire page may be devoted to floats with no text on the page at all

\lefthyphenmin=3                                                        % hyphenate only after 3 characters
\righthyphenmin=3                                                       % hyphenate only before 3 characters

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Names used in the document.

\newcommand{\CFAIcon}{\textsf{C}\raisebox{\depth}{\rotatebox{180}{\textsf{A}}}\xspace} % Cforall symbolic name
\newcommand{\CFA}{\protect\CFAIcon}             % safe for section/caption
\newcommand{\CFL}{\textrm{Cforall}\xspace}      % Cforall symbolic name
\newcommand{\Celeven}{\textrm{C11}\xspace}      % C11 symbolic name
\newcommand{\CC}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}\xspace} % C++ symbolic name
\newcommand{\CCeleven}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}11\xspace} % C++11 symbolic name
\newcommand{\CCfourteen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}14\xspace} % C++14 symbolic name
\newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name
\newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name
\newcommand{\Csharp}{C\raisebox{-0.7ex}{\Large$^\sharp$}\xspace} % C# symbolic name

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newcommand{\Textbf}[2][red]{{\color{#1}{\textbf{#2}}}}
\newcommand{\Emph}[2][red]{{\color{#1}\textbf{\emph{#2}}}}
\newcommand{\uC}{$\mu$\CC}
\newcommand{\TODO}[1]{{\Textbf{#1}}}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Default underscore is too low and wide. Cannot use lstlisting "literate" as replacing underscore
% removes it as a variable-name character so keywords in variables are highlighted. MUST APPEAR
% AFTER HYPERREF.
%\DeclareTextCommandDefault{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.1ex}}}
\renewcommand{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.075ex}}}

\renewcommand*{\thefootnote}{\Alph{footnote}} % hack because fnsymbol does not work
%\renewcommand*{\thefootnote}{\fnsymbol{footnote}}

\makeatletter
% parindent is relative, i.e., toggled on/off in environments like itemize, so store the value for
% use rather than use \parindent directly.
\newlength{\parindentlnth}
\setlength{\parindentlnth}{\parindent}

\newcommand{\LstBasicStyle}[1]{{\lst@basicstyle{\lst@basicstyle{#1}}}}
\newcommand{\LstKeywordStyle}[1]{{\lst@basicstyle{\lst@keywordstyle{#1}}}}
\newcommand{\LstCommentStyle}[1]{{\lst@basicstyle{\lst@commentstyle{#1}}}}

\newlength{\gcolumnposn}                                        % temporary hack because lstlisting does not handle tabs correctly
\newlength{\columnposn}
\setlength{\gcolumnposn}{3.5in}
\setlength{\columnposn}{\gcolumnposn}

\newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
\newcommand{\CRT}{\global\columnposn=\gcolumnposn}

% Denote newterms in particular font and index them without particular font and in lowercase, e.g., \newterm{abc}.
% The option parameter provides an index term different from the new term, e.g., \newterm[\texttt{abc}]{abc}
% The star version does not lowercase the index information, e.g., \newterm*{IBM}.
\newcommand{\newtermFontInline}{\emph}
\newcommand{\newterm}{\@ifstar\@snewterm\@newterm}
\newcommand{\@newterm}[2][\@empty]{\lowercase{\def\temp{#2}}{\newtermFontInline{#2}}\ifx#1\@empty\index{\temp}\else\index{#1@{\protect#2}}\fi}
\newcommand{\@snewterm}[2][\@empty]{{\newtermFontInline{#2}}\ifx#1\@empty\index{#2}\else\index{#1@{\protect#2}}\fi}

% Latin abbreviations
\newcommand{\abbrevFont}{\textit}                       % set empty for no italics
\@ifundefined{eg}{
\newcommand{\EG}{\abbrevFont{e}\abbrevFont{g}}
\newcommand*{\eg}{%
	\@ifnextchar{,}{\EG}%
		{\@ifnextchar{:}{\EG}%
			{\EG,\xspace}}%
}}{}%
\@ifundefined{ie}{
\newcommand{\IE}{\abbrevFont{i}\abbrevFont{e}}
\newcommand*{\ie}{%
	\@ifnextchar{,}{\IE}%
		{\@ifnextchar{:}{\IE}%
			{\IE,\xspace}}%
}}{}%
\@ifundefined{etc}{
\newcommand{\ETC}{\abbrevFont{etc}}
\newcommand*{\etc}{%
	\@ifnextchar{.}{\ETC}%
	{\ETC.\xspace}%
}}{}%
\@ifundefined{etal}{
\newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}}
\newcommand*{\etal}{%
	\@ifnextchar{.}{\protect\ETAL}%
		{\protect\ETAL.\xspace}%
}}{}%
\@ifundefined{viz}{
\newcommand{\VIZ}{\abbrevFont{viz}}
\newcommand*{\viz}{%
	\@ifnextchar{.}{\VIZ}%
		{\VIZ.\xspace}%
}}{}%
\makeatother

\newenvironment{cquote}
               {\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
                \item\relax}
               {\endlist}

% CFA programming language, based on ANSI C (with some gcc additions)
\lstdefinelanguage{CFA}[ANSI]{C}{
	morekeywords={
		_Alignas, _Alignof, __alignof, __alignof__, asm, __asm, __asm__, __attribute, __attribute__,
		auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__,
		coroutine, disable, dtype, enable, exception, __extension__, fallthrough, fallthru, finally,
		__float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
		inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
		otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread,
		_Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
		virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
	moredirectives={defined,include_next}%
}

\lstset{
language=CFA,
columns=fullflexible,
basicstyle=\linespread{0.9}\sf,                                                 % reduce line spacing and use sans-serif font
stringstyle=\tt,                                                                                % use typewriter font
tabsize=5,                                                                                              % N space tabbing
xleftmargin=\parindentlnth,                                                             % indent code to paragraph indentation
%mathescape=true,                                                                               % LaTeX math escape in CFA code $...$
escapechar=\$,                                                                                  % LaTeX escape in CFA code
keepspaces=true,                                                                                %
showstringspaces=false,                                                                 % do not show spaces with cup
showlines=true,                                                                                 % show blank lines at end of code
aboveskip=4pt,                                                                                  % spacing above/below code block
belowskip=3pt,
% replace/adjust listing characters that look bad in sans serif
literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
	{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
	{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
	{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
moredelim=**[is][\color{red}]{`}{`},
}% lstset

% uC++ programming language, based on ANSI C++
\lstdefinelanguage{uC++}[ANSI]{C++}{
	morekeywords={
		_Accept, _AcceptReturn, _AcceptWait, _Actor, _At, _CatchResume, _Cormonitor, _Coroutine, _Disable,
		_Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
		_Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
}
\lstdefinelanguage{Golang}{
	morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
	morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
		bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
	morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
	morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
	morekeywords=[5]{Println,Printf,Error,},
	sensitive=true,
	morecomment=[l]{//},
	morecomment=[s]{/*}{*/},
	morestring=[b]',
	morestring=[b]",
	morestring=[s]{`}{`},
}

\lstnewenvironment{cfa}[1][]
{\lstset{#1}}
{}
\lstnewenvironment{C++}[1][]                            % use C++ style
{\lstset{language=C++,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
{}
\lstnewenvironment{uC++}[1][]
{\lstset{#1}}
{}
\lstnewenvironment{Go}[1][]
{\lstset{language=Golang,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
{}
\lstnewenvironment{python}[1][]
{\lstset{language=python,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
{}

% inline code @...@
\lstMakeShortInline@%

\let\OLDthebibliography\thebibliography
\renewcommand\thebibliography[1]{
  \OLDthebibliography{#1}
  \setlength{\parskip}{0pt}
  \setlength{\itemsep}{4pt plus 0.3ex}
}

\newbox\myboxA
\newbox\myboxB
\newbox\myboxC
\newbox\myboxD

\title{\texorpdfstring{Advanced Control-flow and Concurrency in \protect\CFA}{Advanced Control-flow and Concurrency in Cforall}}

\author[1]{Thierry Delisle}
\author[1]{Peter A. Buhr*}
\authormark{DELISLE \textsc{et al.}}

\address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, ON}, \country{Canada}}}

\corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}

% \fundingInfo{Natural Sciences and Engineering Research Council of Canada}

\abstract[Summary]{
\CFA is a polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime.
These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library approaches like pthreads.
\CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
Library extensions for executors, futures, and actors are built on these basic mechanisms.
The runtime provides significant programmer simplification and safety by eliminating spurious wakeup and optional monitor barging.
The runtime also ensures multiple monitors can be safely acquired \emph{simultaneously} (deadlock free), and this feature is fully integrated with all monitor synchronization mechanisms.
All control-flow features integrate with the \CFA polymorphic type-system and exception handling, while respecting the expectations and style of C programmers.
Experimental results show comparable performance of the new features with similar mechanisms in other concurrent programming languages.
}%

\keywords{generator, coroutine, concurrency, parallelism, thread, monitor, runtime, C, \CFA (Cforall)}


\begin{document}
\linenumbers                                            % comment out to turn off line numbering

\maketitle


\section{Introduction}

This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18} and its runtime.
\CFA is a modern, polymorphic, non-object-oriented\footnote{
\CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance.
However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and a set of functions (members/methods) implemented by an implicit \lstinline@this@ (receiver) parameter.},
backwards-compatible extension of the C programming language.
Within the \CFA framework, new control-flow features are created from scratch.
ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping feature is concurrency~\cite[\S~7.26]{C11}.
However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}.
Furthermore, \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and a few locks, which is low-level and error-prone;
no high-level language concurrency features are defined.
Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-8 nor msvc-19 (most recent versions) support the \Celeven header @threads.h@, indicating little interest in the \Celeven concurrency approach.
Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}.

In contrast, there has been renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages.
As multi-processor hardware became available in the 1980/90s, both user and kernel threading were examined.
Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}.
Libraries like pthreads were developed for C, and systems like Solaris and Java (after JDK 1.1~\cite{JDK1.1}) switched from user to kernel threads.
As a result, languages like Java, Scala~\cite{Scala}, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,BoostThreads}, including putting green threads back into Java~\cite{Quasar}.
The main argument for user-level threading is that user threads are lighter weight than kernel threads (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium-sized work-units to facilitate load balancing by the runtime~\cite{Verch12}.
As well, user threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{vonBehren03}.
Finally, performant user-threading implementations (both time and space) are largely competitive with direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety.

A further effort over the past two decades is the development of language memory-models to deal with the conflict between language features and compiler/hardware optimizations, i.e., some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler.
One solution is low-level qualifiers and functions (e.g., @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs.
A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety.
This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{
\CFA exception handling will be presented in a separate paper.
The key feature that dovetails with this paper is non-local exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.}
and coroutines.
Furthermore, solutions in the language allow matching constructs with the language paradigm, i.e., imperative and functional languages have different presentations of the same concept.

Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
Two concurrency violations of this philosophy are \emph{spurious wakeup} and \emph{barging}, i.e., random wakeup~\cite[\S~8]{Buhr05a} and signals-as-hints~\cite[\S~8]{Buhr05a}, where one is a consequence of the other, i.e., once there is spurious wakeup, signals-as-hints follows.
However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}; it is a performance design choice.
Similarly, signals-as-hints is also a performance decision.
We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches programmer expectation.
(Experience by the authors teaching concurrency is that students are highly confused by these semantics.)
Clawing back performance, when local non-determinism is unimportant, should be an option, not the default.

\begin{comment}
For example, it is possible to provide exceptions, coroutines, monitors, and tasks as specialized types in an object-oriented language, integrating these constructs to allow leveraging the type-system (static type-checking) and all other object-oriented capabilities~\cite{uC++}.
It is also possible to leverage call/return for blocking communication via new control structures, versus switching to alternative communication paradigms, like channels or message passing.
As well, user threading is often a complementary feature, allowing light-weight threading to match with low-cost objects, while hiding the application/kernel boundary.
User threading also allows layering of implicit concurrency models (no explicit thread creation), such as executors, data-flow, and actors, into a single language, so programmers can choose the model that best fits an algorithm.\footnote{
All implicit concurrency models have explicit threading in their implementation, and hence, can be built from explicit threading;
however, the reverse is seldom true, i.e., given implicit concurrency, e.g., actors, it is virtually impossible to create explicit concurrency, e.g., blocking thread objects.}
Finally, with extended language features and user-level threading it is possible to discretely fold locking and non-blocking I/O multiplexing into the language's I/O libraries, so threading implicitly dovetails with the I/O subsystem.
\CFA embraces language extensions and user-level threading to provide advanced control-flow (exception handling\footnote{
\CFA exception handling will be presented in a separate paper.
The key feature that dovetails with this paper is non-local exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.}
and coroutines) and concurrency.

Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and are not systems languages, for those with managed memory.
As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten.
While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing complex and interdependent feature-set (e.g., objects, inheritance, templates) means idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC.
Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base.
\CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems.
\end{comment}

\CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default.
We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in or proposed for \Celeven, \CC, and other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
The main contributions of this work are:
\begin{itemize}
\item
language-level generators, coroutines and user-level threading, which respect the expectations of C programmers.
\item
monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
\item
providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features.
\item
library extensions for executors, futures, and actors built on the basic mechanisms.
\item
a runtime system with no spurious wakeup.
\item
a dynamic partitioning mechanism to segregate the execution environment for specialized requirements.
\item
a non-blocking I/O library.
\item
experimental results showing comparable performance of the new features with similar mechanisms in other programming languages.
\end{itemize}


\section{Stateful Function}

The generator/coroutine provides a stateful function, which is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}.
A stateful function allows execution to be temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine.
Hence, a stateful function need not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
This capability is accomplished by retaining a data/execution \emph{closure} between invocations.
If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, e.g., suspending outside the generator is prohibited.
If the closure is variable sized, we call it a \emph{coroutine} (or \emph{stackful}), and as the name implies, it is often implemented with a separate stack and has no programming restrictions.
Hence, refactoring a stackless generator may require changing it to a stackful coroutine.
A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, i.e., resume/suspend operations are remembered through the closure not the stack.
As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
Additionally, any storage management for the closure (especially in unmanaged languages, i.e., no garbage collection) must also be factored into design and performance.
Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.


\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
typedef struct {
	int fn1, fn;
} Fib;
#define FibCtor { 1, 0 }
int fib( Fib * f ) {



	int fn = f->fn; f->fn = f->fn1;
		f->fn1 = f->fn + fn;
	return fn;

}
int main() {
	Fib f1 = FibCtor, f2 = FibCtor;
	for ( int i = 0; i < 10; i += 1 )
		printf( "%d %d\n",
			   fib( &f1 ), fib( &f2 ) );
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
`generator` Fib {
	int fn1, fn;
};

void `main(Fib & fib)` with(fib) {

	[fn1, fn] = [1, 0];
	for () {
		`suspend;`
		[fn1, fn] = [fn, fn + fn1];

	}
}
int main() {
	Fib f1, f2;
	for ( 10 )
		sout | `resume( f1 )`.fn
			 | `resume( f2 )`.fn;
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxC}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
typedef struct {
	int fn1, fn;  void * `next`;
} Fib;
#define FibCtor { 1, 0, NULL }
Fib * comain( Fib * f ) {
	if ( f->next ) goto *f->next;
	f->next = &&s1;
	for ( ;; ) {
		return f;
	  s1:; int fn = f->fn + f->fn1;
			f->fn1 = f->fn; f->fn = fn;
	}
}
int main() {
	Fib f1 = FibCtor, f2 = FibCtor;
	for ( int i = 0; i < 10; i += 1 )
		printf("%d %d\n",comain(&f1)->fn,
				 comain(&f2)->fn);
}
\end{cfa}
\end{lrbox}

\subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[C generator implementation]{\label{f:CFibonacciSim}\usebox\myboxC}
\caption{Fibonacci (output) Asymmetric Generator}
\label{f:FibonacciAsymmetricGenerator}

\bigskip

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
`generator Fmt` {
	char ch;
	int g, b;
};
void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor
void ^?{}( Fmt & f ) with(f) { $\C[1.75in]{// destructor}$
	if ( g != 0 || b != 0 ) sout | nl; }
void `main( Fmt & f )` with(f) {
	for () { $\C{// until destructor call}$
		for ( ; g < 5; g += 1 ) { $\C{// groups}$
			for ( ; b < 4; b += 1 ) { $\C{// blocks}$
				`suspend;` $\C{// wait for character}$
				while ( ch == '\n' ) `suspend;` // ignore
				sout | ch;                                              // newline
			} sout | " ";  // block spacer
		} sout | nl; // group newline
	}
}
int main() {
	Fmt fmt; $\C{// fmt constructor called}$
	for () {
		sin | fmt.ch; $\C{// read into generator}$
	  if ( eof( sin ) ) break;
		`resume( fmt );`
	}

} $\C{// fmt destructor called}\CRT$
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
typedef struct {
	void * next;
	char ch;
	int g, b;
} Fmt;
void comain( Fmt * f ) {
	if ( f->next ) goto *f->next;
	f->next = &&s1;
	for ( ;; ) {
		for ( f->g = 0; f->g < 5; f->g += 1 ) {
			for ( f->b = 0; f->b < 4; f->b += 1 ) {
				return;
			  s1:;  while ( f->ch == '\n' ) return;
				printf( "%c", f->ch );
			} printf( " " );
		} printf( "\n" );
	}
}
int main() {
	Fmt fmt = { NULL };  comain( &fmt ); // prime
	for ( ;; ) {
		scanf( "%c", &fmt.ch );
	  if ( feof( stdin ) ) break;
		comain( &fmt );
	}
	if ( fmt.g != 0 || fmt.b != 0 ) printf( "\n" );
}
\end{cfa}
\end{lrbox}

\subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[C generator simulation]{\label{f:CFormatSim}\usebox\myboxB}
\hspace{3pt}
\caption{Formatter (input) Asymmetric Generator}
\label{f:FormatterAsymmetricGenerator}
\end{figure}

536
537\subsection{Generator}
538
Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
This goal is accomplished through a series of different kinds of generators and their implementations.

Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
This kind of generator is an \emph{output generator}, producing a new result on each resumption.
To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining state in global variables is insufficient;
hence, state is retained in a closure between calls.
Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators.
The C version only has the middle execution state because the top execution state becomes declaration initialization.
Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a special \CFA @generator@ type.
This generator type is then connected to a function named @main@ that takes as its only parameter a reference to the generator type, called a \emph{generator main}.
The generator main contains @suspend@ statements that suspend execution without ending the generator, versus @return@, which ends it.
For the Fibonacci generator-main,\footnote{
The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics.
Multiple aggregates may be opened.}
the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@.
Any local variables in @main@ \emph{are not retained} between calls;
hence, local variables are only for temporary computations \emph{between} suspends.
All retained state \emph{must} appear in the generator's type.
As well, generator code containing a @suspend@ cannot be refactored into a helper function called by the generator, because @suspend@ is implemented via @return@, so a return from the helper function goes back to the current generator not the resumer.
The generator is started by calling function @resume@ with a generator instance, which begins execution at the top of the generator main, and subsequent @resume@ calls restart the generator at its point of last suspension.
Resuming an ended (returned) generator is undefined.
Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance.
Figure~\ref{f:CFibonacciSim} shows that the C implementation of the \CFA generator needs only one additional field, @next@, to handle retention of execution state.
The computed @goto@ at the start of the generator main, which branches after the previous suspend, adds very little cost to the resume call.
Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.
\begin{cfa}
int ?()( Fib & fib ) with( fib ) { return `resume( fib )`.fn; }   // function-call interface
int ?()( Fib & fib, int N ) with( fib ) { for ( N - 1 ) `fib()`; return `fib()`; }   // use simple interface
double ?()( Fib & fib ) with( fib ) { return (int)`fib()` / 3.14159; } // cast prevents recursive call
sout | (int)f1() | (double)f1() | f2( 2 );   // simple interface, cast selects call based on return type, step 2 values
\end{cfa}
Now, the generator can be a separately-compiled opaque-type only accessed through its interface functions.
For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type, and hence only has a single interface, but an implicit closure.

Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden and possible point of error.
(This restriction is removed by the coroutine, see Section~\ref{s:Coroutine}.)
Our concern is that static analysis to discriminate local state from temporary variables is complex and beyond the current scope of the \CFA project.
As well, supporting variable-size local-state, like variable-length arrays, requires dynamic allocation of the local state, which significantly increases the cost of generator creation/destruction, and is a show-stopper for embedded programming.
But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated.
Our current experience is that most generator problems have simple data state, including local state, but complex execution state.
As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators.

Figure~\ref{f:CFAFormatGen} shows an asymmetric \newterm{input generator}, @Fmt@, for restructuring text into groups of characters of fixed-size blocks, \ie the input on the left is reformatted into the output on the right, where newlines are ignored.
\begin{center}
\tt
\begin{tabular}{@{}l|l@{}}
\multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
\begin{tabular}[t]{@{}ll@{}}
abcdefghijklmnopqrstuvwxyz \\
abcdefghijklmnopqrstuvwxyz
\end{tabular}
&
\begin{tabular}[t]{@{}lllll@{}}
abcd	& efgh	& ijkl	& mnop	& qrst	\\
uvwx	& yzab	& cdef	& ghij	& klmn	\\
opqr	& stuv	& wxyz	&		&
\end{tabular}
\end{tabular}
\end{center}
The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
The destructor provides a newline, if formatted text ends with a full line.
Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator.

Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} asymmetric generator, a device-driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
Device drivers follow the pattern of simple data state but complex execution state, \ie a finite state-machine (FSM) parsing a protocol.
For example, the following protocol:
\begin{center}
\ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots
\end{center}
is a network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
Control characters may appear in a message if preceded by an ESC.
When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register.
The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer.
Hence, the device driver is an input/output generator.

Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent.
As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variables that must be retained between calls and are hoisted into the generator type.
Manually detecting and hoisting local-state variables is easy when their number is small.
Finally, the execution state is large, with one @resume@ and seven @suspend@s.
Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states of the FSM are transcribed directly into the programming language rather than using a table-driven approach.
Because FSMs can be complex and occur frequently in important domains, direct support of the generator is crucial in a systems programming-language.

\begin{figure}
\centering
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{python}[aboveskip=0pt,belowskip=0pt]
def Fib():
    fn1, fn = 0, 1
    while True:
        `yield fn1`
        fn1, fn = fn, fn1 + fn
f1 = Fib()
f2 = Fib()
for i in range( 10 ):
	print( next( f1 ), next( f2 ) )






\end{python}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{python}[aboveskip=0pt,belowskip=0pt]
def Fmt():
	try:
		while True:
			for g in range( 5 ):
				for b in range( 4 ):
					print( `(yield)`, end='' )
				print( '  ', end='' )
			print()
	except GeneratorExit:
		if g != 0 or b != 0:
			print()
fmt = Fmt()
`next( fmt )`                    # prime, next prewritten
for i in range( 41 ):
	`fmt.send( 'a' );`      # send to yield
\end{python}
\end{lrbox}
\subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
\caption{Python Generator}
\label{f:PythonGenerator}

\bigskip

\begin{tabular}{@{}l|l@{}}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
enum Status { CONT, MSG, ESTX,
				ELNTH, ECRC };
`generator` Driver {
	Status status;
	unsigned char byte, * msg; // communication
	unsigned int lnth, sum;      // local state
	unsigned short int crc;
};
void ?{}( Driver & d, char * m ) { d.msg = m; }
Status next( Driver & d, char b ) with( d ) {
	byte = b; `resume( d );` return status;
}
void main( Driver & d ) with( d ) {
	enum { STX = '\002', ESC = '\033',
			ETX = '\003', MaxMsg = 64 };
  msg: for () { // parse message
		status = CONT;
		lnth = 0; sum = 0;
		while ( byte != STX ) `suspend;`
	  emsg: for () {
			`suspend;` // process byte
\end{cfa}
&
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
			choose ( byte ) { // switch with implicit break
			  case STX:
				status = ESTX; `suspend;` continue msg;
			  case ETX:
				break emsg;
			  case ESC:
				`suspend;`
			}
			if ( lnth >= MaxMsg ) { // buffer full ?
				status = ELNTH; `suspend;` continue msg; }
			msg[lnth++] = byte;
			sum += byte;
		}
		msg[lnth] = '\0'; // terminate string
		`suspend;`
		crc = byte << 8;
		`suspend;`
		status = (crc | byte) == sum ? MSG : ECRC;
		`suspend;`
	}
}
\end{cfa}
\end{tabular}
\caption{Device-driver generator for communication protocol}
\label{f:DeviceDriverGen}
\end{figure}

Figure~\ref{f:CFAPingPongGen} shows a symmetric generator, where the generator resumes another generator, forming a resume/resume cycle.
(The trivial cycle is a generator resuming itself.)
This control flow is similar to recursion for functions, but without stack growth.
The steps for symmetric control-flow are creating, executing, and terminating the cycle.
Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope.
(This issue occurs for any cyclic data-structure.)
% The example creates all the generators and then assigns the partners that form the cycle.
% Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.
Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the cost is the same as a function return.
Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.

Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump.
This jump starts at the top of the next generator main to re-execute the normal calling convention to make space on the stack for its local variables.
However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but then jump forward rather than backward.
This semantics is basically a tail-call optimization, which compilers already perform.
Hence, assembly code is manually inserted in the example to undo the generator's entry code before the direct jump.
This assembly code is fragile as it depends on what entry code is generated, specifically if there are local variables and the level of optimization.
To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time.
Nevertheless, it is possible to hand-generate any symmetric generators for proof of concept and performance testing.
A compiler could also eliminate other artifacts in the generator simulation to further increase performance.

\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
`generator PingPong` {
	const char * name;
	PingPong & partner; // rebindable reference
	int N, i;
};
void ?{}(PingPong & pp, char * nm, int N) with(pp) {
	name = nm;  partner = 0p;  pp.N = N;  i = 0; }
void `main( PingPong & pp )` with(pp) {
	for ( ; i < N; i += 1 ) {
		sout | name | i;
		`resume( partner );`
	}
}
int main() {
	enum { N = 5 };
	PingPong ping = {"ping", N}, pong = {"pong", N};
	&ping.partner = &pong;  &pong.partner = &ping;
	`resume( ping );`
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt]
typedef struct PingPong {
	const char * name;
	struct PingPong * partner;
	int N, i;
	void * next;
} PingPong;
#define PPCtor(name, N) { name, NULL, N, 0, NULL }
void comain( PingPong * pp ) {
	if ( pp->next ) goto *pp->next;
	pp->next = &&cycle;
	for ( ; pp->i < pp->N; pp->i += 1 ) {
		printf( "%s %d\n", pp->name, pp->i );
		asm( "mov  %0,%%rdi" : "=m" (pp->partner) );
		asm( "mov  %rdi,%rax" );
		asm( "popq %rbx" );
		asm( "jmp  comain" );
	  cycle: ;
	}
}
\end{cfa}
\end{lrbox}

\subfloat[\CFA symmetric generator]{\label{f:CFAPingPongGen}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[C generator simulation]{\label{f:CPingPongSim}\usebox\myboxB}
\hspace{3pt}
\caption{Ping-Pong Symmetric Generator}
\label{f:PingPongSymmetricGenerator}
\end{figure}

Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines).
Our work provides the same high-performance asymmetric-generators as \CCtwenty, and extends their work with symmetric generators.
An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound-statement that is executed after the current generator has reset its stack but before calling the next generator, specified with the following \CFA syntax.
\begin{cfa}
... suspend`{ ... }`;
... resume( C )`{ ... }` ...
\end{cfa}
Since the current generator's stack is released before calling the compound statement, the compound statement can only reference variables in the generator's type.
This feature is useful when a generator is used in a concurrent context to ensure it is stopped before releasing a lock in the compound statement, which might immediately allow another thread to resume the generator.
Hence, this mechanism provides a general and safe handoff of the generator among competing threads.


\subsection{Coroutine}
\label{s:Coroutine}

Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
This generality results in higher cost for creation (dynamic stack allocation), execution (context switching among stacks), and termination (possible stack unwinding and dynamic stack deallocation).
How coroutines differ from generators is shown through a series of examples.

First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main.
\begin{description}
\item[Fibonacci]
Move the declaration of @fn1@ to the start of coroutine main.
\begin{cfa}[xleftmargin=0pt]
void main( Fib & fib ) with(fib) {
	`int fn1;`
\end{cfa}
\item[Formatter]
Move the declaration of @g@ and @b@ to the for loops in the coroutine main.
\begin{cfa}[xleftmargin=0pt]
for ( `g`; 5 ) {
	for ( `b`; 4 ) {
\end{cfa}
\item[Device Driver]
Move the declaration of @lnth@ and @sum@ to their points of initialization.
\begin{cfa}[xleftmargin=0pt]
	status = CONT;
	`unsigned int lnth = 0, sum = 0;`
	...
	`unsigned short int crc = byte << 8;`
\end{cfa}
\item[PingPong]
Move the declaration of @i@ to the for loop in the coroutine main.
\begin{cfa}[xleftmargin=0pt]
void main( PingPong & pp ) with(pp) {
	for ( `i`; N ) {
\end{cfa}
\end{description}
It is also possible to refactor code containing local-state and @suspend@ statements into a helper routine, like the computation of the CRC for the device driver.
\begin{cfa}
unsigned int Crc() {
	`suspend;`
	unsigned short int crc = byte << 8;
	`suspend;`
	status = (crc | byte) == sum ? MSG : ECRC;
	return crc;
}
\end{cfa}
A call to this function is placed at the end of the driver's coroutine-main.
For complex finite-state machines, refactoring is part of normal program abstraction, especially when code is used in multiple places.
Again, this complexity is usually associated with execution state rather than data state.

\begin{comment}
Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, \eg @next@.
Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@.
The interface routine @next@, takes a Fibonacci instance and context switches to it using @resume@;
on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
when the coroutine main returns, its stack is deallocated.
Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
Coroutine generators are called \newterm{output coroutines} because values are only returned.

\begin{figure}
\centering
\newbox\myboxA
% \begin{lrbox}{\myboxA}
% \begin{cfa}[aboveskip=0pt,belowskip=0pt]
% `int fn1, fn2, state = 1;`   // single global variables
% int fib() {
%	int fn;
%	`switch ( state )` {  // explicit execution state
%	  case 1: fn = 0;  fn1 = fn;  state = 2;  break;
%	  case 2: fn = 1;  fn2 = fn1;  fn1 = fn;  state = 3;  break;
%	  case 3: fn = fn1 + fn2;  fn2 = fn1;  fn1 = fn;  break;
%	}
%	return fn;
% }
% int main() {
%
%	for ( int i = 0; i < 10; i += 1 ) {
%		printf( "%d\n", fib() );
%	}
% }
% \end{cfa}
% \end{lrbox}
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
#define FibCtor { 0, 1 }
typedef struct { int fn1, fn; } Fib;
int fib( Fib * f ) {

	int ret = f->fn1;
	f->fn1 = f->fn;
	f->fn = ret + f->fn;
	return ret;
}



int main() {
	Fib f1 = FibCtor, f2 = FibCtor;
	for ( int i = 0; i < 10; i += 1 ) {
		printf( "%d %d\n",
				fib( &f1 ), fib( &f2 ) );
	}
}
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
`coroutine` Fib { int fn1; };
void main( Fib & fib ) with( fib ) {
	int fn;
	[fn1, fn] = [0, 1];
	for () {
		`suspend;`
		[fn1, fn] = [fn, fn1 + fn];
	}
}
int ?()( Fib & fib ) with( fib ) {
	return `resume( fib )`.fn1;
}
int main() {
	Fib f1, f2;
	for ( 10 ) {
		sout | f1() | f2();
}


\end{cfa}
\end{lrbox}

\newbox\myboxC
\begin{lrbox}{\myboxC}
\begin{python}[aboveskip=0pt,belowskip=0pt]

def Fib():

    fn1, fn = 0, 1
    while True:
        `yield fn1`
        fn1, fn = fn, fn1 + fn


// next prewritten


f1 = Fib()
f2 = Fib()
for i in range( 10 ):
	print( next( f1 ), next( f2 ) )



\end{python}
\end{lrbox}

\subfloat[C]{\label{f:GlobalVariables}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[\CFA]{\label{f:ExternalState}\usebox\myboxB}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[Python]{\label{f:ExternalState}\usebox\myboxC}
\caption{Fibonacci Generator}
\label{f:C-fibonacci}
\end{figure}

\bigskip

\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
`coroutine` Fib { int fn; };
void main( Fib & fib ) with( fib ) {
	fn = 0;  int fn1 = fn; `suspend`;
	fn = 1;  int fn2 = fn1;  fn1 = fn; `suspend`;
	for () {
		fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `suspend`; }
}
int next( Fib & fib ) with( fib ) { `resume( fib );` return fn; }
int main() {
	Fib f1, f2;
	for ( 10 )
		sout | next( f1 ) | next( f2 );
}
\end{cfa}
\end{lrbox}
\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{python}[aboveskip=0pt,belowskip=0pt]

def Fibonacci():
	fn = 0; fn1 = fn; `yield fn`  # suspend
	fn = 1; fn2 = fn1; fn1 = fn; `yield fn`
	while True:
		fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `yield fn`


f1 = Fibonacci()
f2 = Fibonacci()
for i in range( 10 ):
	print( `next( f1 )`, `next( f2 )` ) # resume

\end{python}
\end{lrbox}
\subfloat[\CFA]{\label{f:Coroutine3States}\usebox\myboxA}
\qquad
\subfloat[Python]{\label{f:Coroutine1State}\usebox\myboxB}
\caption{Fibonacci input coroutine, 3 states, internal variables}
\label{f:cfa-fibonacci}
\end{figure}
\end{comment}

\begin{figure}
\centering
\lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $
\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
\begin{cfa}
`coroutine` Prod {
	Cons & c;			// communication
	int N, money, receipt;
};
void main( Prod & prod ) with( prod ) {
	// 1st resume starts here
	for ( i; N ) {
		int p1 = random( 100 ), p2 = random( 100 );
		sout | p1 | " " | p2;
		int status = delivery( c, p1, p2 );
		sout | " $" | money | nl | status;
		receipt += 1;
	}
	stop( c );
	sout | "prod stops";
}
int payment( Prod & prod, int money ) {
	prod.money = money;
	`resume( prod );`
	return prod.receipt;
}
void start( Prod & prod, int N, Cons &c ) {
	&prod.c = &c;
	prod.[N, receipt] = [N, 0];
	`resume( prod );`
}
int main() {
	Prod prod;
	Cons cons = { prod };
	start( prod, 5, cons );
}
\end{cfa}
&
\begin{cfa}
`coroutine` Cons {
	Prod & p;			// communication
	int p1, p2, status;
	bool done;
};
void ?{}( Cons & cons, Prod & p ) {
	&cons.p = &p; // reassignable reference
	cons.[status, done ] = [0, false];
}
void main( Cons & cons ) with( cons ) {
	// 1st resume starts here
	int money = 1, receipt;
	for ( ; ! done; ) {
		sout | p1 | " " | p2 | nl | " $" | money;
		status += 1;
		receipt = payment( p, money );
		sout | " #" | receipt;
		money += 1;
	}
	sout | "cons stops";
}
int delivery( Cons & cons, int p1, int p2 ) {
	cons.[p1, p2] = [p1, p2];
	`resume( cons );`
	return cons.status;
}
void stop( Cons & cons ) {
	cons.done = true;
	`resume( cons );`
}

\end{cfa}
\end{tabular}
\caption{Producer / consumer: resume-resume cycle, bi-directional communication}
\label{f:ProdCons}

\medskip

\begin{center}
\input{FullProdConsStack.pstex_t}
\end{center}
\vspace*{-10pt}
\caption{Producer / consumer runtime stacks}
\label{f:ProdConsRuntimeStacks}

\medskip

\begin{center}
\input{FullCoroutinePhases.pstex_t}
\end{center}
\vspace*{-10pt}
\caption{Ping / Pong coroutine steps}
\label{f:PingPongFullCoroutineSteps}
\end{figure}

Figure~\ref{f:ProdCons} shows the ping-pong example in Figure~\ref{f:CFAPingPongGen} extended into a producer/consumer symmetric-coroutine performing bidirectional communication.
This example is illustrative because both the producer and consumer have two interface functions with @resume@s that suspend execution in these interface (helper) functions.
The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced.
The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.

The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
The call from the consumer to @payment@ introduces the cycle between producer and consumer.
When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
@delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
The context switch to the consumer continues in @payment@.
The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
The loop then repeats calling @payment@, where each call resumes the producer coroutine.
Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling.
1158
1159Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shutdown the program, whereas generators started by the program main run on its stack.
1160Furthermore, each deallocated coroutine must guarantee all destructors are run for object allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
1161When a coroutine's main ends, its stack is already unwound so any stack allocated objects with destructors have been finalized.
1162The na\"{i}ve semantics for coroutine-cycle termination is context switch to the last resumer, like executing a @suspend@/@return@ in a generator.
1163However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent.
1164Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem).
1165Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines.
1166
Our solution for coroutine termination works well for the most common asymmetric and symmetric coroutine usage-patterns.
For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer.
All previous generators converted to coroutines have this property.
For symmetric coroutines, it is common for the cycle creator to persist for the life-time of the cycle.
Hence, the starter coroutine is remembered on the first resume, and ending the coroutine resumes the starter.
Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
For other scenarios, it is always possible to devise a solution with additional programming effort.

The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
Assume generator @PingPong@ is converted to a coroutine.
Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version.
The program main creates (declares) coroutine instances @ping@ and @pong@.
Next, the program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter.
Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times.
By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first.
If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter, the program main, in function @start@.
If @ping@ ends first, it resumes its starter, the program main, in function @start@.
Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
For generators, deallocation runs the destructors for all objects in the generator type.
For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine.
Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it, so the cancellation exception can propagate to the root of the coroutine's stack, destroying all local variables on the stack.
Thus, the \CFA semantics for generators and coroutines ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.

Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
To prevent losing this information, a self-resume does not overwrite the last resumer.


\subsection{Coroutine Implementation}

A significant implementation challenge for coroutines (and threads, see Section~\ref{threads}) is adding extra fields and executing code after/before the coroutine constructor/destructor and coroutine main to create/initialize/de-initialize/destroy extra fields and the stack.
There are several solutions to this problem and the chosen option directed the \CFA coroutine design.

For object-oriented languages, inheritance can be used to provide extra fields and code, but it requires explicitly writing the inheritance:
\begin{cfa}[morekeywords={class,inherits}]
class mycoroutine inherits baseCoroutine { ... }
\end{cfa}
In addition, the programming language, and possibly its tool set, \eg debugger or @valgrind@, need to understand @baseCoroutine@ because of the special stack property of coroutines.
Furthermore, the execution of constructors/destructors is in the wrong order for certain operations.
For example, for threads, if the thread is implicitly started, it must start \emph{after} all constructors, because the thread relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived.

An alternative is composition:
\begin{cfa}
struct mycoroutine {
	... // declarations
	baseCoroutine dummy; // composition, last declaration
}
\end{cfa}
which also requires an explicit declaration that must be last to ensure correct initialization order.
However, there is nothing preventing wrong placement or multiple declarations of @baseCoroutine@.
For coroutines, as for threads, many implementations are based on function pointers or function objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
For example, Boost implements coroutines in terms of four functor object-types:
\begin{cfa}
asymmetric_coroutine<>::pull_type
asymmetric_coroutine<>::push_type
symmetric_coroutine<>::call_type
symmetric_coroutine<>::yield_type
\end{cfa}

Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar).
The coroutine handle is the @coroutine@ instance containing programmer-specified global/communication variables.
The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate.\footnote{
We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.
Once allocated, a VLS is fixed sized.}
The coroutine stack can appear in a number of locations and forms, fixed or variable sized.
Hence, the coroutine's stack could be a VLS on the allocating stack, provided the allocating stack is large enough.
For stack allocation, allocation/deallocation only requires a few arithmetic operations to compute the size and adjust the stack pointer, modulo any constructor costs.
For heap allocation, allocation/deallocation is an expensive heap operation (where the heap can be a shared resource), modulo any constructor costs.
For heap stack allocation, it is also possible to use a split (segmented) stack calling-convention, available with gcc and clang, so the stack is variable sized.
Currently, \CFA supports stack/heap allocated descriptors but only heap allocated stacks;
split-stack allocation is under development.
In \CFA debug-mode, a fixed-sized stack is terminated with an inaccessible guard page, which catches most stack overflows.

\begin{figure}
\centering
\input{corlayout.pstex_t}
\caption{Coroutine memory layout}
\label{f:CoroutineMemoryLayout}
\end{figure}

Similarly, the canonical threading paradigm is often based on function pointers, \eg @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type, as in the pthreads approach.
\begin{cfa}
void mycor( coroutine_t cid, void * arg ) {
	int * value = (int *)arg;				$\C{// type unsafe, pointer-size only}$
	// Coroutine body
}
int main() {
	int input = 0, output;
	coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$
	coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$
}
\end{cfa}
Since the custom type is simple to write in \CFA and solves several issues, adding support for function/lambda-based coroutines would add very little.

Note, the type @coroutine_t@ must be an abstract handle to the coroutine, because the coroutine descriptor and its stack are non-copyable.
Copying the coroutine descriptor results in copies being out of date with the current state of the stack.
Correspondingly, copying the stack results in copies being out of date with the coroutine descriptor, and pointers in the stack being out of date to data on the stack.
(There is no mechanism in C to find all stack-specific pointers and update them as part of a copy.)

The selected approach is to use language support by introducing a new kind of aggregate (structure):
\begin{cfa}
coroutine Fibonacci {
	int fn; // communication variables
};
\end{cfa}
The @coroutine@ keyword means the compiler (and tool set) can find and inject code where needed.
The downside of this approach is that it makes coroutines a special case in the language.
Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language.
Furthermore, implementing coroutines without language support also demonstrates the power of a programming language.
While the keyword is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed without language support.
The reserved keyword simply eases use for the common case.

Part of the mechanism to generalize coroutines is using a \CFA trait, which defines a coroutine as anything satisfying the trait @is_coroutine@, and this trait restricts the available set of coroutine-manipulation routines:
\begin{cfa}
trait is_coroutine( `dtype` T ) {
	void main( T & );
	coroutine_desc * get_coroutine( T & );
};
forall( `dtype` T | is_coroutine(T) ) void suspend( T & );
forall( `dtype` T | is_coroutine(T) ) void resume( T & );
\end{cfa}
The @dtype@ property provides no implicit copying operations and the @is_coroutine@ trait provides no explicit copying operations, so all coroutines must be passed by reference (pointer).
The routine definitions ensure there is a statically-typed @main@ routine that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle.
The @main@ routine has no return value or additional parameters because the coroutine type allows an arbitrary number of interface routines with corresponding arbitrarily typed input/output values versus fixed ones.
The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ routine, and possibly redefining @suspend@ and @resume@.
The \CFA keyword @coroutine@ implicitly implements the getter and forward declarations required for implementing the coroutine main:
\begin{cquote}
\begin{tabular}{@{}ccc@{}}
\begin{cfa}
coroutine MyCor {
	int value;

};
\end{cfa}
&
{\Large $\Rightarrow$}
&
\begin{tabular}{@{}ccc@{}}
\begin{cfa}
struct MyCor {
	int value;
	coroutine_desc cor;
};
\end{cfa}
&
\begin{cfa}
static inline coroutine_desc *
get_coroutine( MyCor & this ) {
	return &this.cor;
}
\end{cfa}
&
\begin{cfa}
void main( MyCor * this );



\end{cfa}
\end{tabular}
\end{tabular}
\end{cquote}
The combination of these two approaches allows an easy and concise specification of coroutining (and concurrency) for normal users, while more advanced users have tighter control over memory layout and initialization.


\section{Concurrency}
\label{s:Concurrency}

At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks.
Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie the execution path from input to output is fixed and predictable.
A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python} but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
Only stackful coroutines are a stepping stone to concurrency.

The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a \newterm{scheduling oracle}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next.
The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.

Because the scheduler is special, it can either be a stackless or stackful coroutine.
For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
A stackful scheduler is often used for simplicity and security.

Regardless of the approach used, a subset of concurrency-related challenges start to appear.
For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty about where context switches occur.
Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors.
The reason is that only the runtime has complete knowledge about resources and how to best utilize them.
However, the introduction of unrestricted non-determinism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} to restrict non-determinism for correctness;
otherwise, it is impossible to write meaningful programs.
Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.

An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
As such, library support for threading is far from widespread.
At the time of writing the paper, neither \protect\lstinline@gcc@ nor \protect\lstinline@clang@ support \protect\lstinline@threads.h@ in their standard libraries.}.
In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
Furthermore, because C is a system-level language, programmers expect to choose precisely which features they need and which costs they are willing to pay.
Hence, concurrent programs should be written using high-level mechanisms, and only step down to lower-level mechanisms when performance bottlenecks are encountered.


\subsection{Thread Interface}
\label{threads}

Both user and kernel threads are supported, where user threads provide concurrency and kernel threads provide parallelism.
Like coroutines, and for the same design reasons, the selected approach for user threads is to use language support by introducing a new kind of aggregate (structure) and a \CFA trait:
\begin{cquote}
\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
\begin{cfa}
thread myThread {
	// communication variables
};


\end{cfa}
&
\begin{cfa}
trait is_thread( `dtype` T ) {
      void main( T & );
      thread_desc * get_thread( T & );
      void ^?{}( T & `mutex` );
};
\end{cfa}
\end{tabular}
\end{cquote}
(The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitor}.)
Like a coroutine, the statically-typed @main@ routine is the starting point (first stack frame) of a user thread.
The difference is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates an instance of @main@;
whereas, a user thread receives its own thread from the runtime system, which starts in @main@ at some point after the thread constructor is run.\footnote{
The \lstinline@main@ routine is already a special routine in C, \ie where the program's initial thread begins, so it is a natural extension of this semantics to use overloading to declare \lstinline@main@s for user coroutines and threads.}
No return value or additional parameters are necessary for this routine because the task type allows an arbitrary number of interface routines with corresponding arbitrarily typed input/output values.

\begin{comment} % put in appendix with coroutine version ???
As such, the @main@ routine of a thread can be defined as
\begin{cfa}
thread foo {};

void main(foo & this) {
	sout | "Hello World!";
}
\end{cfa}

In this example, threads of type @foo@ start execution in the @void main(foo &)@ routine, which prints @"Hello World!"@. While this paper encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity.
With the static semantics it is trivial to write a thread type that takes a routine pointer as a parameter and executes it on its stack asynchronously.
\begin{cfa}
typedef void (*voidRtn)(int);

thread RtnRunner {
	voidRtn func;
	int arg;
};

void ?{}(RtnRunner & this, voidRtn inRtn, int arg) {
	this.func = inRtn;
	this.arg  = arg;
}

void main(RtnRunner & this) {
	// thread starts here and runs the routine
	this.func( this.arg );
}

void hello(/*unused*/ int) {
	sout | "Hello World!";
}

int main() {
	RtnRunner f = {hello, 42};
	return 0;
}
\end{cfa}
A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{API}.
\end{comment}

For user threads to be useful, it must be possible to start and stop the underlying thread, and wait for it to complete execution.
While using an API such as @fork@ and @join@ is relatively common, such an interface is awkward and unnecessary.
A simple approach is to use allocation/deallocation principles, and have threads implicitly @fork@ after construction and @join@ before destruction.
\begin{cfa}
thread World {};
void main( World & this ) {
	sout | "World!";
}
int main() {
	World w`[10]`;							$\C{// implicit forks after creation}$
	sout | "Hello ";					$\C{// "Hello " and 10 "World!" printed concurrently}$
}										$\C{// implicit joins before destruction}$
\end{cfa}
This semantics ensures a thread is started and stopped exactly once, eliminating some programming errors, and scales to multiple threads for basic (termination) synchronization.
This tree-structured (lattice) create/delete from C block-structure is generalized by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
\begin{cfa}
int main() {
	MyThread * heapLive;
	{
		MyThread blockLive;					$\C{// fork block-based thread}$
		heapLive = `new`( MyThread );		$\C{// fork heap-based thread}$
		...
	}										$\C{// join block-based thread}$
	...
	`delete`( heapLive );					$\C{// join heap-based thread}$
}
\end{cfa}
The heap-based approach allows arbitrary thread-creation topologies, with respect to fork/join-style concurrency.

Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated.
The program uses heap-based threads because each thread needs different constructor values.
(Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.)
The allocation/deallocation pattern appears unusual because allocated objects are immediately deallocated without any intervening code.
However, for threads, the deletion provides implicit synchronization, which is the intervening code.
While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), so any inhibited concurrency is small because totalling the subtotals is trivial.

\begin{figure}
\begin{cfa}
`thread` Adder { int * row, cols, & subtotal; } $\C{// communication variables}$
void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
    adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
}
void main( Adder & adder ) with( adder ) {
    subtotal = 0;
    for ( int c = 0; c < cols; c += 1 ) { subtotal += row[c]; }
}
int main() {
    const int rows = 10, cols = 1000;
    int matrix[rows][cols], subtotals[rows], total = 0;
    // read matrix
    Adder * adders[rows];
    for ( int r = 0; r < rows; r += 1 ) {	$\C{// start threads to sum rows}$
		adders[r] = `new( matrix[r], cols, &subtotals[r] );`
    }
    for ( int r = 0; r < rows; r += 1 ) {	$\C{// wait for threads to finish}$
		`delete( adders[r] );`				$\C{// termination join}$
		total += subtotals[r];				$\C{// total subtotal}$
    }
    sout | total;
}
\end{cfa}
\caption{Concurrent Matrix Summation}
\label{s:ConcurrentMatrixSummation}
\end{figure}


\section{Mutual Exclusion / Synchronization}

Uncontrolled non-deterministic execution is meaningless.
To reestablish meaningful execution requires mechanisms to reintroduce determinism, \ie restrict non-determinism, called mutual exclusion and synchronization, where mutual exclusion is an access-control mechanism on data shared by threads, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it, \eg Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala).
In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts, \eg channels~\cite{CSP,Go}.
However, in call/return-based languages, these approaches force a clear distinction between regular and concurrent computation, \ie introduce a new programming paradigm, \eg routine call versus message passing.
Hence, a programmer must learn and manipulate two sets of design patterns.
While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
In contrast, approaches based on stateful models more closely resemble the standard call/return programming-model, resulting in a single programming paradigm.

At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
While this approach is pursued in hardware~\cite{Nakaike15} and in system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it is rejected as the core paradigm for concurrency in \CFA.

One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming-languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to simulate monitors.
For these reasons, \CFA selected monitors as the core high-level concurrency-construct, upon which higher-level approaches can be easily constructed.


\subsection{Mutual Exclusion}

A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers have the same session and all writers have a unique session.
\newterm{Mutual exclusion} enforces that the correct kind and number of threads are using a critical section.

However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
Ease of use comes by either guaranteeing some problems cannot occur, \eg deadlock free, or by offering a more explicit coupling between shared data and critical section.
For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations, \eg reading/writing, for numerical types.
However, a significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.


\subsection{Synchronization}

Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use;
higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, \eg message passing, or offering a simpler solution to otherwise involved challenges, \eg barrier lock.
Often synchronization is used to order access to a critical section, \eg ensuring a reader thread is the next kind of thread to enter a critical section.
If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, that reader has \newterm{barged}.
Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value, preventing the previous value from ever being read (lost computation).
Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
This challenge is often split into two different approaches: barging avoidance and barging prevention.
Algorithms that allow a barger, but divert it until later using current synchronization state (flags), are avoiding the barger;
algorithms that preclude a barger from entering during synchronization in the critical section prevent barging completely.
Techniques like baton-passing locks~\cite{Andrews89} between threads instead of unconditionally releasing locks are an example of barging prevention.


\section{Monitor}
\label{s:Monitor}

A \textbf{monitor} is a set of routines that ensure mutual exclusion when accessing shared state.
More precisely, a monitor is a programming technique that binds mutual exclusion to routine scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
The strong association with the call/return paradigm eases programmability, readability and maintainability, at a slight cost in flexibility and efficiency.

Note, like coroutines/threads, both locks and monitors require an abstract handle to reference them, because at their core, both mechanisms manipulate non-copyable shared-state.
Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data.
Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity.
As for coroutines/tasks, the @dtype@ property provides no implicit copying operations and the @is_monitor@ trait provides no explicit copying operations, so all locks/monitors must be passed by reference (pointer).
\begin{cfa}
trait is_monitor( `dtype` T ) {
	monitor_desc * get_monitor( T & );
	void ^?{}( T & mutex );
};
\end{cfa}

1583
1584\subsection{Mutex Acquisition}
1585\label{s:MutexAcquisition}
1586
1587While correctness implies a monitor's mutual exclusion is acquired and released, there are implementation options about when and where the locking/unlocking occurs.
1588(Much of this discussion also applies to basic locks.)
For example, a monitor may need to be passed through multiple helper routines before it becomes necessary to acquire the monitor's mutual exclusion.
1590\begin{cfa}[morekeywords=nomutex]
1591monitor Aint { int cnt; };                                      $\C{// atomic integer counter}$
1592void ?{}( Aint & `nomutex` this ) with( this ) { cnt = 0; } $\C{// constructor}$
int ?=?( Aint & `mutex`$\(_{opt}\)$ lhs, int rhs ) with( lhs ) { return cnt = rhs; } $\C{// conversions}$
void ?{}( int & this, Aint & `mutex`$\(_{opt}\)$ v ) { this = v.cnt; }
int ?=?( int & lhs, Aint & `mutex`$\(_{opt}\)$ rhs ) with( rhs ) { return lhs = cnt; }
1596int ++?( Aint & `mutex`$\(_{opt}\)$ this ) with( this ) { return ++cnt; } $\C{// increment}$
1597\end{cfa}
1598The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized.
1599(While a constructor may publish its address into a global variable, doing so generates a race-condition.)
The conversion operators for initializing and assigning with a normal integer only need @mutex@ if reading/writing the implementation type is not atomic.
Finally, the prefix increment operator, @++?@, is normally @mutex@ to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type.
1602
1603The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@.
1604\begin{cfa}
1605Aint x, y, z;
1606++x; ++y; ++z;                                                          $\C{// safe increment by multiple threads}$
1607x = 2; y = 2; z = 2;                                            $\C{// conversions}$
1608int i = x, j = y, k = z;
1609i = x; j = y; k = z;
1610\end{cfa}
1611
1612For maximum usability, monitors have \newterm{multi-acquire} semantics allowing a thread to acquire it multiple times without deadlock.
1613\begin{cfa}
1614monitor M { ... } m;
1615void foo( M & mutex m ) { ... }                         $\C{// acquire mutual exclusion}$
1616void bar( M & mutex m ) {                                       $\C{// acquire mutual exclusion}$
1617        ... `foo( m );` ...                                             $\C{// reacquire mutual exclusion}$
1618}
1619`bar( m );`                                                                     $\C{// nested monitor call}$
1620\end{cfa}
1621
1622The benefit of mandatory monitor qualifiers is self-documentation, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
Instead, the semantics make one qualifier the default and require the other to be stated explicitly.
1624For example, make the safe @mutex@ qualifier the default because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
1625Alternatively, make the unsafe \lstinline[morekeywords=nomutex]@nomutex@ qualifier the default because it is the \emph{normal} parameter semantics while @mutex@ parameters are rare.
1626Providing a default qualifier implies knowing whether a parameter is a monitor.
1627Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred.
1628For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@.
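For example, under this rule, the absence of a keyword on a monitor parameter means no mutual exclusion is acquired:
\begin{cfa}
monitor M { ... };
void f( M & m );                                                $\C{// nomutex by default: no mutual exclusion acquired}$
void g( M & mutex m );                                  $\C{// mutual exclusion acquired on entry}$
\end{cfa}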
1629
1630The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@.
1631Given:
1632\begin{cfa}
1633monitor M { ... }
1634int f1( M & mutex m );
1635int f2( M * mutex m );
1636int f3( M * mutex m[] );
1637int f4( stack( M * ) & mutex m );
1638\end{cfa}
1639the issue is that some of these parameter types are composed of multiple objects.
1640For @f1@, there is only a single parameter object.
1641Adding indirection in @f2@ still identifies a single object.
1642However, the matrix in @f3@ introduces multiple objects.
While multiple acquisition is possible, as shown shortly, array lengths are often unknown in C.
1645This issue is exacerbated in @f4@, where the data structure must be safely traversed to acquire all of its elements.
1646
1647To make the issue tractable, \CFA only acquires one monitor per parameter with at most one level of indirection.
However, there is an ambiguity in the C type-system with respect to arrays.
1649Is the argument for @f2@ a single object or an array of objects?
1650If it is an array, only the first element of the array is acquired, which seems unsafe;
1651hence, @mutex@ is disallowed for array parameters.
1652\begin{cfa}
1653int f1( M & mutex m );                                          $\C{// allowed: recommended case}$
1654int f2( M * mutex m );                                          $\C{// disallowed: could be an array}$
1655int f3( M mutex m[$\,$] );                                      $\C{// disallowed: array length unknown}$
1656int f4( M ** mutex m );                                         $\C{// disallowed: could be an array}$
1657int f5( M * mutex m[$\,$] );                            $\C{// disallowed: array length unknown}$
1658\end{cfa}
1659% Note, not all array routines have distinct types: @f2@ and @f3@ have the same type, as do @f4@ and @f5@.
1660% However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic.
1661
1662For object-oriented monitors, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
1663\CFA has no receiver, and hence, must use an explicit mechanism to specify which object acquires mutual exclusion.
1664A positive consequence of this design decision is the ability to support multi-monitor routines.
1665\begin{cfa}
1666int f( M & mutex x, M & mutex y );              $\C{// multiple monitor parameter of any type}$
1667M m1, m2;
1668f( m1, m2 );
1669\end{cfa}
1670(While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.)
1671In practice, writing multi-locking routines that do not deadlock is tricky.
1672Having language support for such a feature is therefore a significant asset for \CFA.
1673
1674The capability to acquire multiple locks before entering a critical section is called \newterm{bulk acquire} (see Section~\ref{s:Implementation} for implementation details).
1675In the previous example, \CFA guarantees the order of acquisition is consistent across calls to different routines using the same monitors as arguments.
1676This consistent ordering means acquiring multiple monitors is safe from deadlock.
1677However, users can force the acquiring order.
1678For example, notice the use of @mutex@/\lstinline[morekeywords=nomutex]@nomutex@ and how this affects the acquiring order:
1679\begin{cfa}
1680void foo( M & mutex m1, M & mutex m2 );         $\C{// acquire m1 and m2}$
1681void bar( M & mutex m1, M & /* nomutex */ m2 ) { $\C{// acquire m1}$
1682        ... foo( m1, m2 ); ...                                  $\C{// acquire m2}$
1683}
1684void baz( M & /* nomutex */ m1, M & mutex m2 ) { $\C{// acquire m2}$
1685        ... foo( m1, m2 ); ...                                  $\C{// acquire m1}$
1686}
1687\end{cfa}
1688The multi-acquire semantics allows @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@.
1689In the calls to @bar@ and @baz@, the monitors are acquired in opposite order.
1690
1691However, such use leads to lock acquiring order problems resulting in deadlock~\cite{Lister77}, where detecting it requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}.
In \CFA, bulk acquire of all monitors for shared objects provides a safety aid, whereas other monitor systems provide no aid.
1693While \CFA provides only a partial solution, it handles many useful cases, \eg:
1694\begin{cfa}
1695monitor BankAccount { ... };
1696void deposit( BankAccount & `mutex` b, int deposit );
1697void transfer( BankAccount & `mutex` my, BankAccount & `mutex` your, int me2you ) {
1698        deposit( my, `-`me2you );                               $\C{// debit}$
1699        deposit( your, me2you );                                $\C{// credit}$
1700}
1701\end{cfa}
1702This example shows a trivial solution to the bank-account transfer problem.
1703Without multi- and bulk acquire, the solution to this problem requires careful engineering.
1704
1705
1706\subsection{\protect\lstinline@mutex@ statement}
1707\label{mutex-stmt}
1708
1709The monitor call-semantics associate all locking semantics to routines.
1710Like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming.
1711\begin{cquote}
1712\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
1713\begin{cfa}
1714monitor M { ... };
1715void foo( M & mutex m1, M & mutex m2 ) {
1716        // critical section
1717}
1718void bar( M & m1, M & m2 ) {
1719        foo( m1, m2 );
1720}
1721\end{cfa}
1722&
1723\begin{cfa}
1724
1725void bar( M & m1, M & m2 ) {
1726        mutex( m1, m2 ) {       // remove refactoring and naming
1727                // critical section
1728        }
1729}
1730
1731\end{cfa}
1732\\
1733\multicolumn{1}{c}{\textbf{routine call}} & \multicolumn{1}{c}{\lstinline@mutex@ \textbf{statement}}
1734\end{tabular}
1735\end{cquote}
1736
1737
1738\section{Scheduling}
1739\label{s:Scheduling}
1740
1741While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed.
For example, Figure~\ref{f:GenericBoundedBuffer} shows a bounded buffer that may be full/empty so producer/consumer threads must block.
1743Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
1744Monitors eliminate busy waiting by providing synchronization to schedule threads needing access to the shared data, where threads block versus spinning.
1745Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} defines which thread acquires the critical section next.
1746\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
1747
1748Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, see the buffer is full/empty, and block on an appropriate condition lock, @full@/@empty@.
1749The @wait@ routine atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the routine's parameter list.
1750The appropriate condition lock is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
1751Signalling is unconditional, because signalling an empty condition lock does nothing.
1752
1753Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means:
1754\begin{enumerate}
1755\item
1756The signalling thread returns immediately, and the signalled thread continues.
1757\item
1758The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait).
1759\item
The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.
1761\end{enumerate}
1762The first approach is too restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}).
1763\CFA supports the next two semantics as both are useful.
1764Finally, while it is common to store a @condition@ as a field of the monitor, in \CFA, a @condition@ variable can be created/stored independently.
1765Furthermore, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors.
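The following sketch illustrates branding; whether the misuse is detected statically or at run time is left open here, but the reuse with a different monitor set is invalid:
\begin{cfa}
monitor M { ... } m1, m2;
condition c;                                                    $\C{// created independently of any monitor}$
void f( M & mutex a, M & mutex b ) { wait( c ); } $\C{// first use brands c to the monitor set a, b}$
void g( M & mutex a ) { wait( c ); }    $\C{// invalid: c is branded to a different monitor set}$
\end{cfa}
Hence, internal scheduling over both the set \{@m1@, @m2@\} and the set \{@m1@\} requires two condition variables.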
1766
1767\begin{figure}
1768\centering
1769\newbox\myboxA
1770\begin{lrbox}{\myboxA}
1771\begin{cfa}[aboveskip=0pt,belowskip=0pt]
1772forall( otype T ) { // distribute forall
1773        monitor Buffer {
1774                `condition` full, empty;
1775                int front, back, count;
1776                T elements[10];
1777        };
1778        void ?{}( Buffer(T) & buffer ) with(buffer) {
1779                [front, back, count] = 0;
1780        }
1781
1782        void insert( Buffer(T) & mutex buffer, T elem )
1783                                with(buffer) {
1784                if ( count == 10 ) `wait( empty )`;
1785                // insert elem into buffer
1786                `signal( full )`;
1787        }
1788        T remove( Buffer(T) & mutex buffer ) with(buffer) {
1789                if ( count == 0 ) `wait( full )`;
1790                // remove elem from buffer
1791                `signal( empty )`;
1792                return elem;
1793        }
1794}
1795\end{cfa}
1796\end{lrbox}
1797
1798\newbox\myboxB
1799\begin{lrbox}{\myboxB}
1800\begin{cfa}[aboveskip=0pt,belowskip=0pt]
1801forall( otype T ) { // distribute forall
1802        monitor Buffer {
1803
1804                int front, back, count;
1805                T elements[10];
1806        };
1807        void ?{}( Buffer(T) & buffer ) with(buffer) {
1808                [front, back, count] = 0;
1809        }
1810        T remove( Buffer(T) & mutex buffer ); // forward
1811        void insert( Buffer(T) & mutex buffer, T elem )
1812                                with(buffer) {
1813                if ( count == 10 ) `waitfor( remove, buffer )`;
1814                // insert elem into buffer
1815
1816        }
1817        T remove( Buffer(T) & mutex buffer ) with(buffer) {
1818                if ( count == 0 ) `waitfor( insert, buffer )`;
1819                // remove elem from buffer
1820
1821                return elem;
1822        }
1823}
1824\end{cfa}
1825\end{lrbox}
1826
1827\subfloat[Internal Scheduling]{\label{f:BBInt}\usebox\myboxA}
1828%\qquad
1829\subfloat[External Scheduling]{\label{f:BBExt}\usebox\myboxB}
1830\caption{Generic Bounded-Buffer}
1831\label{f:GenericBoundedBuffer}
1832\end{figure}
1833
1834Figure~\ref{f:BBExt} shows a \CFA generic bounded-buffer with external scheduling, where producers/consumers detecting a full/empty buffer block and prevent more producers/consumers from entering the monitor until there is a free/empty slot in the buffer.
1835External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
1836If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
1837Threads making calls to routines that are currently excluded, block outside of (external to) the monitor on a calling queue, versus blocking on condition queues inside of (internal to) the monitor.
1838External scheduling allows users to wait for events from other threads without concern of unrelated events occurring.
The mechanism can be expressed in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go channels.
1840While both mechanisms have strengths and weaknesses, this project uses a control-flow mechanism to stay consistent with other language semantics.
1841Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor routines (see Section~\ref{s:Multi-MonitorScheduling}).
1842
1843For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread;
the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
1845The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor.
1846Blocking signalling is the reverse, where the waiter is providing the cooperation for the signalling thread;
1847the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place it on the urgent queue and unblock the waiter.
1848The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state.
1849
1850Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling.
1851The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
1852A thread blocks until an appropriate partner arrives.
1853The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
1854For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent queue and unblocks the matcher.
1856
1857The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable;
1858as well, an arriving thread may not find a partner and must wait, which requires a condition variable, and condition variables imply internal scheduling.
1859
1860\begin{figure}
1861\centering
1862\newbox\myboxA
1863\begin{lrbox}{\myboxA}
1864\begin{cfa}[aboveskip=0pt,belowskip=0pt]
1865enum { CCodes = 20 };
1866monitor DS {
1867        int GirlPhNo, BoyPhNo;
1868        condition Girls[CCodes], Boys[CCodes];
1869        condition exchange;
1870};
1871int girl( DS & mutex ds, int phNo, int ccode ) {
1872        if ( is_empty( Boys[ccode] ) ) {
1873                wait( Girls[ccode] );
1874                GirlPhNo = phNo;
1875                `signal( exchange );`
1876        } else {
1877                GirlPhNo = phNo;
1878                `signal( Boys[ccode] );`
1879                `wait( exchange );`
1880        } // if
1881        return BoyPhNo;
1882}
1883int boy( DS & mutex ds, int phNo, int ccode ) {
1884        // as above with boy/girl interchanged
1885}
1886\end{cfa}
1887\end{lrbox}
1888
1889\newbox\myboxB
1890\begin{lrbox}{\myboxB}
1891\begin{cfa}[aboveskip=0pt,belowskip=0pt]
1892
1893monitor DS {
1894        int GirlPhNo, BoyPhNo;
1895        condition Girls[CCodes], Boys[CCodes];
1896
1897};
1898int girl( DS & mutex ds, int phNo, int ccode ) {
1899        if ( is_empty( Boys[ccode] ) ) { // no compatible
1900                wait( Girls[ccode] ); // wait for boy
1901                GirlPhNo = phNo; // make phone number available
1902
1903        } else {
1904                GirlPhNo = phNo; // make phone number available
1905                `signal_block( Boys[ccode] );` // restart boy
1906
1907        } // if
1908        return BoyPhNo;
1909}
1910int boy( DS & mutex ds, int phNo, int ccode ) {
1911        // as above with boy/girl interchanged
1912}
1913\end{cfa}
1914\end{lrbox}
1915
1916\subfloat[\lstinline@signal@]{\label{f:DatingSignal}\usebox\myboxA}
1917\qquad
1918\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
1919\caption{Dating service. }
1920\label{f:DatingService}
1921\end{figure}
1922
1923Both internal and external scheduling extend to multiple monitors in a natural way.
1924\begin{cquote}
1925\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
1926\begin{cfa}
1927monitor M { `condition e`; ... };
1928void foo( M & mutex m1, M & mutex m2 ) {
1929        ... wait( `e` ); ...   // wait( e, m1, m2 )
1930        ... wait( `e, m1` ); ...
1931        ... wait( `e, m2` ); ...
1932}
1933\end{cfa}
1934&
1935\begin{cfa}
1936void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
1937void rtn$\(_2\)$( M & mutex m1 );
1938void bar( M & mutex m1, M & mutex m2 ) {
1939        ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
1940        ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
1941}
1942\end{cfa}
1943\end{tabular}
1944\end{cquote}
For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
The @wait@ statement statically verifies the released monitors are the acquired mutex-parameters, so unconditional release is safe.
1948Finally, a signaller,
1949\begin{cfa}
1950void baz( M & mutex m1, M & mutex m2 ) {
1951        ... signal( e ); ...
1952}
1953\end{cfa}
1954must have acquired at least the same locks as the waiting thread signalled from the condition queue.
1955
1956Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
1957To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
1958@waitfor@ statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
1959To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
1960% When an overloaded routine appears in an @waitfor@ statement, calls to any routine with that name are accepted.
1961% The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call.
1962Overloaded routines can be disambiguated using a cast:
1963\begin{cfa}
1964void rtn( M & mutex m );
1965`int` rtn( M & mutex m );
1966waitfor( (`int` (*)( M & mutex ))rtn, m );
1967\end{cfa}
1968
1969The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
1970\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
        ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired}$
}
void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2}$
        ... signal( `e` ); ...
}
1975\end{cfa}
The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to enter @bar@ to get to the @signal@.
1977While deadlock issues can occur with multiple/nesting acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
1978
1979Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
1980If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
In fact, the signal-as-hint approach is completely opposite to that proposed by Hoare in his seminal paper on monitors:
1982\begin{quote}
1983However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
1984It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
1985\end{quote}
1986\CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of barging.
1988For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
1989Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
1990
1991
1992\subsection{Barging Prevention}
1993
1994Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
1995The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
1996The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
1997When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
1998However, both the signalling and waiting thread W1 still need monitor @m1@.
1999
2000\begin{figure}
2001\newbox\myboxA
2002\begin{lrbox}{\myboxA}
2003\begin{cfa}[aboveskip=0pt,belowskip=0pt]
2004monitor M m1, m2;
2005condition c;
2006mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
2007        ...
2008        mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
2009                ... `signal( c )`; ...
2010                // m1, m2 acquired
2011        } // $\LstCommentStyle{\color{red}release m2}$
2012        // m1 acquired
2013} // release m1
2014\end{cfa}
2015\end{lrbox}
2016
2017\newbox\myboxB
2018\begin{lrbox}{\myboxB}
2019\begin{cfa}[aboveskip=0pt,belowskip=0pt]
2020
2021
2022mutex( m1 ) {
2023        ...
2024        mutex( m1, m2 ) {
2025                ... `wait( c )`; // block and release m1, m2
2026                // m1, m2 acquired
2027        } // $\LstCommentStyle{\color{red}release m2}$
2028        // m1 acquired
2029} // release m1
2030\end{cfa}
2031\end{lrbox}
2032
2033\newbox\myboxC
2034\begin{lrbox}{\myboxC}
2035\begin{cfa}[aboveskip=0pt,belowskip=0pt]
2036
2037
2038mutex( m2 ) {
2039        ... `wait( c )`; ...
2040        // m2 acquired
2041} // $\LstCommentStyle{\color{red}release m2}$
2042
2043
2044
2045
2046\end{cfa}
2047\end{lrbox}
2048
2049\begin{cquote}
2050\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
2051\hspace{2\parindentlnth}
2052\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
2053\hspace{2\parindentlnth}
2054\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
2055\end{cquote}
2056\caption{Barging Prevention}
2057\label{f:BargingPrevention}
2058\end{figure}
2059
One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely with the behaviour of single-monitor scheduling.
2061However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting in options when the signaller finishes the inner mutex-statement.
2062The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
2063In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W1 may have waited before W2, so W2 is unaware of it.
2064Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
2065
While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
2067Signalled threads are moved to the urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
2068Partial signalling transfers ownership of monitors to the front waiter.
2069When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
2070The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
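The following pseudocode sketches the idea only; the names @urgent@, @needed@, @owned@, and the helper routines are illustrative, not the actual runtime data-structures:
\begin{cfa}
void pass( monitor_desc & m ) {                 $\C{// called when signaller exits or waits}$
        waiter * w = front( urgent );           $\C{// front waiter defines its needed monitor set}$
        if ( w != 0 && needs( *w, m ) ) {
                m.owner = w->thrd;                              $\C{// transfer ownership of m to front waiter}$
                if ( ++w->owned == w->needed )  $\C{// all its monitors now held?}$
                        unblock( w->thrd );                     $\C{// conditionally wake front waiter}$
        } else schedule_entry_queue( m );       $\C{// otherwise m goes to the next entering thread}$
}
\end{cfa}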
2071
2072
2073\subsection{Loose Object Definitions}
2074\label{s:LooseObjectDefinitions}
2075
In an object-oriented programming language, a class includes an exhaustive list of operations.
2077However, new members can be added via static inheritance or dynamic members, \eg JavaScript~\cite{JavaScript}.
2078Similarly, monitor routines can be added at any time in \CFA, making it less clear for programmers and more difficult to implement.
2079\begin{cfa}
2080monitor M { ... };
2081void `f`( M & mutex m );
2082void g( M & mutex m ) { waitfor( `f` ); }       $\C{// clear which f}$
2083void `f`( M & mutex m, int );                           $\C{// different f}$
2084void h( M & mutex m ) { waitfor( `f` ); }       $\C{// unclear which f}$
2085\end{cfa}
Hence, the \CFA code for entering a monitor looks like:
2087\begin{cfa}
2088if ( $\textrm{\textit{monitor is free}}$ ) $\LstCommentStyle{// \color{red}enter}$
2089else if ( $\textrm{\textit{already own monitor}}$ ) $\LstCommentStyle{// \color{red}continue}$
2090else if ( $\textrm{\textit{monitor accepts me}}$ ) $\LstCommentStyle{// \color{red}enter}$
2091else $\LstCommentStyle{// \color{red}block}$
2092\end{cfa}
2093For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions.
2094However, a fast check for \emph{monitor accepts me} is much harder to implement depending on the constraints put on the monitors.
2095Figure~\ref{fig:ClassicalMonitor} shows monitors are often expressed as an entry (calling) queue, some acceptor queues, and an urgent stack/queue.
2096
2097\begin{figure}
2098\centering
2099\subfloat[Classical monitor] {
2100\label{fig:ClassicalMonitor}
2101{\resizebox{0.45\textwidth}{!}{\input{monitor.pstex_t}}}
2102}% subfloat
2103\quad
2104\subfloat[Bulk acquire monitor] {
2105\label{fig:BulkMonitor}
2106{\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}}
2107}% subfloat
2108\caption{Monitor Implementation}
2109\label{f:MonitorImplementation}
2110\end{figure}
2111
2112For a fixed (small) number of mutex routines (\eg 128), the accept check reduces to a bitmask of allowed callers, which can be checked with a single instruction.
This approach requires a unique dense ordering of routines with a small upper bound and the ordering must be consistent across translation units.
For object-oriented languages these constraints are common, but \CFA mutex routines can be added in any scope and are only visible in certain translation units, precluding a program-wide dense-ordering among mutex routines.
2115
2116Figure~\ref{fig:BulkMonitor} shows the \CFA monitor implementation.
2117The mutex routine called is associated with each thread on the entry queue, while a list of acceptable routines is kept separately.
2118The accepted list is a variable-sized array of accepted routine pointers, so the single instruction bitmask comparison is replaced by dereferencing a pointer followed by a (usually short) linear search.


\subsection{Multi-Monitor Scheduling}
\label{s:Multi-MonitorScheduling}

External scheduling, like internal scheduling, becomes significantly more complex for multi-monitor semantics.
Even in the simplest case, new semantics need to be established.
\newpage
\begin{cfa}
monitor M { ... };
void f( M & mutex m1 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f );                                                   $\C{\color{red}// pass m1 or m2 to f?}$
}
\end{cfa}
The solution is for the programmer to disambiguate:
\begin{cfa}
	waitfor( f, m2 );                                               $\C{\color{red}// wait for call to f with argument m2}$
\end{cfa}
Both locks are acquired by routine @g@, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
This behaviour can be extended to the multi-monitor @waitfor@ statement.
\begin{cfa}
monitor M { ... };
void f( M & mutex m1, M & mutex m2 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f, m1, m2 );                                   $\C{\color{red}// wait for call to f with arguments m1 and m2}$
}
\end{cfa}
Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine.
Also, the order of the monitors in a @waitfor@ statement is unimportant.

Figure~\ref{f:UnmatchedMutexSets} shows an example where, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.
For both examples, the set of monitors is disjoint, so unblocking is impossible.

\begin{figure}
\lstDeleteShortInline@%
\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
\begin{cfa}
monitor M1 {} m11, m12;
monitor M2 {} m2;
condition c;
void f( M1 & mutex m1, M2 & mutex m2 ) {
	signal( c );
}
void g( M1 & mutex m1, M2 & mutex m2 ) {
	wait( c );
}
g( `m11`, m2 ); // block on wait
f( `m12`, m2 ); // cannot fulfil
\end{cfa}
&
\begin{cfa}
monitor M1 {} m11, m12;
monitor M2 {} m2;

void f( M1 & mutex m1, M2 & mutex m2 ) {

}
void g( M1 & mutex m1, M2 & mutex m2 ) {
	waitfor( f, m1, m2 );
}
g( `m11`, m2 ); // block on accept
f( `m12`, m2 ); // cannot fulfil
\end{cfa}
\end{tabular}
\lstMakeShortInline@%
\caption{Unmatched \protect\lstinline@mutex@ sets}
\label{f:UnmatchedMutexSets}
\end{figure}


\subsection{Extended \protect\lstinline@waitfor@}

Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex routines, with a specific action to be performed \emph{after} the mutex routine finishes.
For a @waitfor@ clause to be executed, its @when@ must be true and an outstanding call to its corresponding member(s) must exist.
The \emph{conditional-expression} of a @when@ may call a routine, but the routine must not block or context switch.
If there are multiple acceptable mutex calls, selection occurs top-to-bottom (prioritized) in the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept non-deterministically for this case, \eg Go \lstinline[morekeywords=select]@select@.
If some accept guards are true and there are no outstanding calls to these members, the acceptor is accept-blocked until a call to one of these members is made.
If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead.
Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking.
If there is a @timeout@ clause, it provides an upper bound on waiting.
If both a @timeout@ clause and an @else@ clause are present, the @else@ must be conditional, or the @timeout@ is never triggered.
In all cases, the statement following the clauses is executed \emph{after} a clause completes, so the program can determine which clause executed.

\begin{figure}
\begin{cfa}
`when` ( $\emph{conditional-expression}$ )      $\C{// optional guard}$
	waitfor( $\emph{mutex-member-name}$ )
		$\emph{statement}$                                      $\C{// action after call}$
`or` `when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$
	waitfor( $\emph{mutex-member-name}$ )
		$\emph{statement}$                                      $\C{// action after call}$
`or`    ...                                                                     $\C{// list of waitfor clauses}$
`when` ( $\emph{conditional-expression}$ )      $\C{// optional guard}$
	`timeout`                                                               $\C{// optional terminating timeout clause}$
		$\emph{statement}$                                      $\C{// action after timeout}$
`when` ( $\emph{conditional-expression}$ )      $\C{// optional guard}$
	`else`                                                                  $\C{// optional terminating clause}$
		$\emph{statement}$                                      $\C{// action when no immediate calls}$
\end{cfa}
\caption{Extended \protect\lstinline@waitfor@}
\label{f:ExtendedWaitfor}
\end{figure}

Note, a group of conditional @waitfor@ clauses is \emph{not} the same as a group of @if@ statements, \eg:
\begin{cfa}
if ( C1 ) waitfor( mem1 );                       when ( C1 ) waitfor( mem1 );
else if ( C2 ) waitfor( mem2 );         or when ( C2 ) waitfor( mem2 );
\end{cfa}
The left example accepts only @mem1@ if @C1@ is true or only @mem2@ if @C2@ is true.
The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.

An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated.
\begin{cfa}
void insert( Buffer(T) & mutex buffer, T elem ) with( buffer ) {
	if ( count == 10 )
		waitfor( remove, buffer ) {
			// insert elem into buffer
		} or `waitfor( ^?{}, buffer )` throw insertFail;
}
\end{cfa}
When the buffer is deallocated, the current waiter is unblocked and informed, so it can perform an appropriate action.
However, the basic @waitfor@ semantics do not support this functionality, since using an object after its destructor is called is undefined.
Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor and then the caller is unblocked from the urgent queue to deallocate the object.
Accepting the destructor is an idiomatic way to terminate a thread in \CFA.


\subsection{\protect\lstinline@mutex@ Threads}

Threads in \CFA are monitors to allow direct communication among threads, \ie threads can have mutex routines that are called by other threads.
Hence, all monitor features are available when using threads.
The following shows an example of two threads directly calling each other and accepting calls from each other in a cycle.
\begin{cfa}
thread Ping {} pi;
thread Pong {} po;
void ping( Ping & mutex ) {}
void pong( Pong & mutex ) {}
int main() {}
\end{cfa}
\vspace{-0.8\baselineskip}
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
void main( Ping & pi ) {
	for ( int i = 0; i < 10; i += 1 ) {
		`waitfor( ping, pi );`
		`pong( po );`
	}
}
\end{cfa}
&
\begin{cfa}
void main( Pong & po ) {
	for ( int i = 0; i < 10; i += 1 ) {
		`ping( pi );`
		`waitfor( pong, po );`
	}
}
\end{cfa}
\end{tabular}
\end{cquote}
% \lstMakeShortInline@%
% \caption{Threads ping/pong using external scheduling}
% \label{f:pingpong}
% \end{figure}
Note, the ping/pong threads are globally declared, @pi@/@po@, and hence, start (and possibly complete) before the program main starts.


\subsection{Low-level Locks}

For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.


\section{Parallelism}

Historically, computer performance was about processor speeds.
However, with heat dissipation being a direct consequence of speed increase, parallelism is the new source for increased performance~\cite{Sutter05, Sutter05b}.
Now, high-performance applications must care about parallelism, which requires concurrency.
The lowest-level approach to parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc.
However, kernel threads are better used as an implementation tool because of their complexity and higher cost.
Therefore, different abstractions are often layered onto kernel threads to simplify them, \eg pthreads.


\subsection{User Threads with Preemption}

A direct improvement on kernel threads is user threads, \eg Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
This approach provides an interface that matches the language paradigms, more control over concurrency by the language runtime, and an abstract (and portable) interface to the underlying kernel threads across operating systems.
In many cases, user threads can be used on a much larger scale (100,000 threads).
Like kernel threads, user threads support preemption, which maximizes nondeterminism, but introduces the same concurrency errors: race, livelock, starvation, and deadlock.
\CFA adopts user-threads as they represent the truest realization of concurrency and can build any other concurrency approach, \eg thread pools and actors~\cite{Actors}.


\subsection{User Threads without Preemption (Fiber)}
\label{s:fibers}

A variant of user thread is \newterm{fibers}, which removes preemption, \eg Go~\cite{Go} @goroutine@s.
Like functional programming, which removes mutation and its associated problems, removing preemption from concurrency reduces nondeterminism, making race and deadlock errors more difficult to generate.
However, preemption is necessary for concurrency that relies on spinning, so there is a class of problems that cannot be programmed without preemption.


\subsection{Thread Pools}

In contrast to direct threading, indirect \newterm{thread pools} have small jobs (work units) inserted into a work pool for execution.
If the jobs are dependent, \ie interact, there is an implicit/explicit dependency graph that ties them together.
While removing direct concurrency, and hence the amount of context switching, thread pools significantly limit the interaction that can occur among jobs.
Indeed, jobs should not block because that also blocks the underlying thread, which effectively means CPU utilization, and therefore throughput, suffers.
While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases.
As well, concurrency errors return, which thread pools are supposed to mitigate.


\section{\protect\CFA Runtime Structure}

Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program.
In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution: cluster and (virtual) processor.
An executing thread is illustrated by its containment in a processor.

\begin{figure}
\centering
\input{RunTimeStructure}
\caption{\CFA Runtime Structure}
\label{f:RunTimeStructure}
\end{figure}


\subsection{Cluster}
\label{s:RuntimeStructureCluster}

A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the threads (like a virtual machine).
The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
However, the scheduler is pluggable, supporting alternative schedulers.
If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
No automatic load balancing among clusters is performed by \CFA.

When a \CFA program begins execution, it creates a user cluster with a single processor, plus a special processor, which does not execute user threads, to handle preemption.
The user cluster is created to contain the application user-threads.
Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime.
However, because of limitations of the underlying operating system, heterogeneous hardware, or scheduling requirements (real-time), multiple clusters are sometimes necessary.


\subsection{Virtual Processor}
\label{s:RuntimeStructureProcessor}

A virtual processor is implemented by a kernel thread (\eg UNIX process), which is subsequently scheduled for execution on a hardware processor by the underlying operating system.
Programs may use more virtual processors than hardware processors.
On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
balancing the workload with processors is difficult.
Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads.
Turning off preemption transforms user threads into fibers.


\subsection{Debug Kernel}

There are two versions of the \CFA runtime kernel: debug and non-debug.
The debugging version has many runtime checks and internal assertions, \eg stack (non-writable) guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catches most stack overflows.
After a program is debugged, the non-debugging version can be used to decrease space and increase performance.


\section{Implementation}
\label{s:Implementation}

Currently, \CFA has fixed-sized stacks, where the stack size can be set at coroutine/thread creation but with no subsequent growth.
Schemes exist for dynamic stack-growth, such as stack copying and chained stacks.
However, stack copying requires pointer adjustment to items on the stack, which is impossible without some form of garbage collection.
As well, chained stacks require all modules be recompiled to use this feature, which breaks backward compatibility with existing C libraries.
In the long term, it is likely C libraries will migrate to stack chaining to support concurrency, at only a minimal cost to sequential programs.
Nevertheless, experience teaching \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue in most concurrent programs.

A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is (almost) free of allocations.
All blocking operations are made by parking threads onto queues, therefore all queues are designed with intrusive nodes, where each node has preallocated link fields for chaining.
Furthermore, several bulk-acquire operations need a variable amount of memory.
This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks.

In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects have distinct non-overlapping memory layouts, and mutual-exclusion for a monitor is only defined for its lifetime.
When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted.
This array persists for the entire duration of the mutual exclusion and is used extensively for synchronization operations.

To improve performance and simplicity, context switching occurs inside a routine call, so only callee-saved registers are copied onto the stack and then the stack register is switched;
the corresponding registers are then restored for the other context.
Note, the instruction pointer is untouched since the context switch is always inside the same routine.
Unlike coroutines, threads do not context switch among each other;
they context switch to the cluster scheduler.
This method is a 2-step context-switch and provides a clear distinction between user and kernel code, where scheduling and other system operations happen.
The alternative 1-step context-switch uses the \emph{from} thread's stack to schedule and then context-switches directly to the \emph{to} thread's stack.
Experimental results (not presented) show the performance of these two approaches is virtually equivalent, because both approaches are dominated by locking to prevent a race condition.

All kernel threads (@pthreads@) are created with a stack.
Each \CFA virtual processor is implemented as a coroutine and these coroutines run directly on the kernel-thread stack, effectively stealing this stack.
The exception to this rule is the program main, \ie the initial kernel thread that is given to any program.
In order to respect C expectations, the stack of the initial kernel thread is used by program main rather than the main processor, allowing it to grow dynamically as in a normal C program.

Finally, an important aspect for a complete threading system is preemption, which introduces extra non-determinism via transparent interleaving, rather than relying on cooperation among threads for proper scheduling and processor fairness from long-running threads.
Because the preemption interval is usually long (1 millisecond), the performance cost is negligible.
Preemption is normally handled by setting a count-down timer on each virtual processor.
When the timer expires, an interrupt is delivered, and the interrupt handler resets the count-down timer.
If the virtual processor is executing in user code, the signal handler performs a user-level context-switch;
if it is executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
Multiple signal handlers may be pending.
When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered.
The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler;
therefore, the same signal mask is required for all virtual processors in a cluster.

However, on current UNIX systems:
\begin{quote}
A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
SIGNAL(7) - Linux Programmer's Manual
\end{quote}
Hence, the timer-expiry signal, which is generated \emph{externally} by the UNIX kernel to the UNIX process, is delivered to any of its UNIX subprocesses (kernel threads).
To ensure each virtual processor receives its own preemption signals, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events.
Virtual processors register an expiration time with the discrete-event simulator, which is inserted in sorted order.
The simulation sets the count-down timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed.
Processing a preemption event sends an \emph{internal} @SIGUSR1@ signal to the registered virtual processor, which is always delivered to that processor.


\section{Performance}
\label{results}

To verify the implementation of the \CFA runtime, a series of microbenchmarks are performed comparing \CFA with other widely used programming languages with concurrency.
Table~\ref{t:machine} shows the specifications of the computer used to run the benchmarks, and the versions of the software used in the comparison.

\begin{table}
\centering
\caption{Experiment environment}
\label{t:machine}

\begin{tabular}{|l|r||l|r|}
\hline
Architecture            & x86\_64                               & NUMA node(s)  & 8 \\
\hline
CPU op-mode(s)          & 32-bit, 64-bit                & Model name    & AMD Opteron\texttrademark\ Processor 6380 \\
\hline
Byte Order                      & Little Endian                 & CPU Freq              & 2.5 GHz \\
\hline
CPU(s)                          & 64                                    & L1d cache     & 16 KiB \\
\hline
Thread(s) per core      & 2                                     & L1i cache     & 64 KiB \\
\hline
Core(s) per socket      & 8                                     & L2 cache              & 2048 KiB \\
\hline
Socket(s)                       & 4                                     & L3 cache              & 6144 KiB \\
\hline
\hline
Operating system        & Ubuntu 16.04.3 LTS    & Kernel                & Linux 4.4-97-generic \\
\hline
gcc                                     & 6.3                                   & \CFA                  & 1.0.0 \\
\hline
Java                            & OpenJDK-9                     & Go                    & 1.9.2 \\
\hline
\end{tabular}
\end{table}

All benchmarks are run using the following harness:
\begin{cfa}
unsigned int N = 10_000_000;
#define BENCH( run ) Time before = getTimeNsec(); run; Duration result = (getTimeNsec() - before) / N;
\end{cfa}
The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
the total time is divided by @N@ to obtain the average time for a benchmark.
All omitted tests for other languages are functionally identical to the shown \CFA test.


\paragraph{Context-Switching}

In procedural programming, the cost of a routine call is important as modularization (refactoring) increases.
(In many cases, a compiler inlines routine calls to eliminate this cost.)
Similarly, when modularization extends to coroutines/tasks, the time for a context switch becomes a relevant factor.
The coroutine context-switch is 2-step using resume/suspend, \ie from resumer to suspender and from suspender to resumer.
The thread context switch is 2-step using yield, \ie enter and return from the runtime kernel.
Figure~\ref{f:ctx-switch} shows the code for coroutines/threads with all results in Table~\ref{tab:ctx-switch}.
The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.

\begin{figure}
\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}

\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
coroutine C {} c;
void main( C & ) { for ( ;; ) { @suspend();@ } }
int main() {
	BENCH(
		for ( size_t i = 0; i < N; i += 1 ) { @resume( c );@ } )
	sout | result`ns;
}
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]


int main() {
	BENCH(
		for ( size_t i = 0; i < N; i += 1 ) { @yield();@ } )
	sout | result`ns;
}
\end{cfa}
\end{lrbox}

\subfloat[Coroutine]{\usebox\myboxA}
\quad
\subfloat[Thread]{\usebox\myboxB}
\captionof{figure}{\CFA context-switch benchmark}
\label{f:ctx-switch}

\centering

\captionof{table}{Context switch comparison (nanoseconds)}
\label{tab:ctx-switch}
\bigskip
\begin{tabular}{|r|*{3}{D{.}{.}{3.2}|}}
\cline{2-4}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Median} &\multicolumn{1}{c|}{Average} & \multicolumn{1}{c|}{Std Dev} \\
\hline
Kernel Thread   & 333.5 & 332.96        & 4.1 \\
\CFA Coroutine  & 49            & 48.68 & 0.47    \\
\CFA Thread             & 105           & 105.57        & 1.37 \\
\uC Coroutine   & 44            & 44            & 0 \\
\uC Thread              & 100           & 99.29 & 0.96 \\
Goroutine               & 145           & 147.25        & 4.15 \\
Java Thread             & 373.5 & 375.14        & 8.72 \\
\hline
\end{tabular}

\bigskip
\bigskip

\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
\begin{cfa}
monitor M { ... } m1/*, m2, m3, m4*/;
void __attribute__((noinline)) do_call( M & mutex m/*, m2, m3, m4*/ ) {}
int main() {
	BENCH( for( size_t i = 0; i < N; i += 1 ) { @do_call( m1/*, m2, m3, m4*/ );@ } )
	sout | result`ns;
}
\end{cfa}
\captionof{figure}{\CFA acquire/release mutex benchmark}
\label{f:mutex}

\centering

\captionof{table}{Mutex comparison (nanoseconds)}
\label{tab:mutex}
\bigskip

\begin{tabular}{|r|*{3}{D{.}{.}{3.2}|}}
\cline{2-4}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Median} &\multicolumn{1}{c|}{Average} & \multicolumn{1}{c|}{Std Dev} \\
\hline
C routine                                       & 2             & 2             & 0    \\
FetchAdd + FetchSub                     & 26            & 26            & 0    \\
Pthreads Mutex Lock                     & 31            & 31.71 & 0.97 \\
\uC @monitor@ member routine            & 31            & 31            & 0    \\
\CFA @mutex@ routine, 1 argument        & 46            & 46.68 & 0.93  \\
\CFA @mutex@ routine, 2 arguments       & 84            & 85.36 & 1.99 \\
\CFA @mutex@ routine, 4 arguments       & 158           & 161           & 4.22 \\
Java synchronized routine               & 27.5  & 29.79 & 2.93  \\
\hline
\end{tabular}
\end{figure}


\paragraph{Mutual-Exclusion}

Mutual exclusion is measured by entering/leaving a critical section.
For monitors, entering and leaving a monitor routine is measured.
Figure~\ref{f:mutex} shows the code for \CFA with all results in Table~\ref{tab:mutex}.
To put the results in context, the cost of entering a non-inline routine and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.


\paragraph{Internal Scheduling}

Internal scheduling is measured by waiting on and signalling a condition variable.
Figure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.
Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
Java scheduling is significantly greater because the benchmark explicitly creates multiple threads in order to prevent the JIT from making the program sequential, \ie removing all locking.

\begin{figure}
\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
\begin{cfa}
volatile int go = 0;
condition c;
monitor M { ... } m;
void __attribute__((noinline)) do_call( M & mutex a1 ) { signal( c ); }
thread T {};
void main( T & this ) {
	while ( go == 0 ) { yield(); }  // wait for other thread to start
	while ( go == 1 ) { @do_call( m );@ }
}
int  __attribute__((noinline)) do_wait( M & mutex m ) {
	go = 1; // continue other thread
	BENCH( for ( size_t i = 0; i < N; i += 1 ) { @wait( c );@ } );
	go = 0; // stop other thread
	sout | result`ns;
}
int main() {
	T t;
	do_wait( m );
}
\end{cfa}
\captionof{figure}{\CFA Internal-scheduling benchmark}
\label{f:int-sched}

\centering
\captionof{table}{Internal-scheduling comparison (nanoseconds)}
\label{tab:int-sched}
\bigskip

\begin{tabular}{|r|*{3}{D{.}{.}{5.2}|}}
\cline{2-4}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Median} &\multicolumn{1}{c|}{Average} & \multicolumn{1}{c|}{Std Dev} \\
\hline
Pthreads Condition Variable             & 6005  & 5681.43       & 835.45 \\
\uC @signal@                                    & 324           & 325.54        & 3.02   \\
\CFA @signal@, 1 @monitor@              & 368.5         & 370.61        & 4.77   \\
\CFA @signal@, 2 @monitor@              & 467           & 470.5 & 6.79   \\
\CFA @signal@, 4 @monitor@              & 700.5         & 702.46        & 7.23  \\
Java @notify@                                   & 15471 & 172511        & 5689 \\
\hline
\end{tabular}
\end{figure}


\paragraph{External Scheduling}

External scheduling is measured by accepting a call using the @waitfor@ statement (@_Accept@ in \uC).
Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.

\begin{figure}
\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
\begin{cfa}
volatile int go = 0;
monitor M { ... } m;
thread T {};
void __attribute__((noinline)) do_call( M & mutex ) {}
void main( T & ) {
	while ( go == 0 ) { yield(); }  // wait for other thread to start
	while ( go == 1 ) { @do_call( m );@ }
}
int __attribute__((noinline)) do_wait( M & mutex m ) {
	go = 1; // continue other thread
	BENCH( for ( size_t i = 0; i < N; i += 1 ) { @waitfor( do_call, m );@ } )
	go = 0; // stop other thread
	sout | result`ns;
}
int main() {
	T t;
	do_wait( m );
}
\end{cfa}
\captionof{figure}{\CFA external-scheduling benchmark}
\label{f:ext-sched}

\centering

\captionof{table}{External-scheduling comparison (nanoseconds)}
\label{tab:ext-sched}
\bigskip
\begin{tabular}{|r|*{3}{D{.}{.}{3.2}|}}
\cline{2-4}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Median} &\multicolumn{1}{c|}{Average} & \multicolumn{1}{c|}{Std Dev} \\
\hline
\uC @_Accept@                           & 358           & 359.11        & 2.53  \\
\CFA @waitfor@, 1 @monitor@     & 359           & 360.93        & 4.07  \\
\CFA @waitfor@, 2 @monitor@     & 450           & 449.39        & 6.62  \\
\CFA @waitfor@, 4 @monitor@     & 652           & 655.64        & 7.73 \\
\hline
\end{tabular}

\bigskip
\medskip

\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
\begin{cfa}
thread MyThread {};
void main( MyThread & ) {}
int main() {
	BENCH( for ( size_t i = 0; i < N; i += 1 ) { @MyThread m;@ } )
	sout | result`ns;
}
\end{cfa}
\captionof{figure}{\CFA object-creation benchmark}
\label{f:creation}

\centering

\captionof{table}{Creation comparison (nanoseconds)}
\label{tab:creation}
\bigskip

\begin{tabular}{|r|*{3}{D{.}{.}{5.2}|}}
\cline{2-4}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Average} & \multicolumn{1}{c|}{Std Dev} \\
\hline
Pthreads                                & 28091         & 28073.39      & 163.1  \\
\CFA Coroutine Lazy             & 6                     & 6.07          & 0.26   \\
2721\CFA Coroutine Eager    & 520           & 520.61        & 2.04   \\
2722\CFA Thread                             & 2032  & 2016.29       & 112.07  \\
2723\uC Coroutine                   & 106           & 107.36        & 1.47   \\
2724\uC Thread                              & 536.5 & 537.07        & 4.64   \\
2725Goroutine                               & 3103  & 3086.29       & 90.25  \\
2726Java Thread                             & 103416.5      & 103732.29     & 1137 \\
2727\hline
2728\end{tabular}
2729\end{figure}
2730
2731
2732\paragraph{Object Creation}
2733
2734Object creation is measured by creating/deleting the specific kind of concurrent object.
2735Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}.
The only note here is that the call stacks of \CFA coroutines are created lazily; therefore, without priming the coroutine to force stack creation, its creation cost is artificially low.
2737
2738
2739\section{Conclusion}
2740
2741This paper demonstrates a concurrency API that is simple, efficient, and able to build higher-level concurrency features.
The approach provides concurrency based on a preemptive M:N user-level threading system, executing in clusters that encapsulate the scheduling of work onto multiple kernel threads providing parallelism.
The M:N model is judged to be efficient and to provide greater flexibility than a 1:1 threading model.
High-level objects (monitor/task) are the core mechanisms for mutual exclusion and synchronization.
A novel aspect is allowing multiple mutex objects to be acquired simultaneously, reducing the potential for deadlock in this complex scenario.
These concepts and the entire \CFA runtime system are written in the \CFA language, demonstrating its expressiveness.
Performance comparisons with other concurrent systems and languages show the \CFA approach is competitive across all low-level operations, which translates directly into good performance in well-written concurrent applications.
C programmers should feel comfortable using these mechanisms to develop concurrent applications, with the ability to obtain maximum available performance by choosing mechanisms at the appropriate level.
2749
2750
2751\section{Future Work}
2752
2753While concurrency in \CFA has a strong start, development is still underway and there are missing features.
2754
2755\paragraph{Flexible Scheduling}
2756\label{futur:sched}
2757
2758An important part of concurrency is scheduling.
2759Different scheduling algorithms can affect performance (both in terms of average and variation).
2760However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs.
2761One solution is to offer various tweaking options, allowing the scheduler to be adjusted to the requirements of the workload.
2762However, to be truly flexible, a pluggable scheduler is necessary.
2763Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion.
2764
2765\paragraph{Non-Blocking I/O}
2766\label{futur:nbio}
2767
Many modern workloads are bound not by computation but by I/O operations, common cases being web servers and XaaS~\cite{XaaS} (anything as a service).
These types of workloads require significant engineering to amortize the cost of blocking I/O operations.
At its core, non-blocking I/O is an operating-system-level feature that queues I/O operations, \eg network operations, and registers for notifications instead of waiting for requests to complete.
2771Current trends use asynchronous programming like callbacks, futures, and/or promises, \eg Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java, and Django~\cite{Django} for Python.
2772However, these solutions lead to code that is hard to create, read and maintain.
2773A better approach is to tie non-blocking I/O into the concurrency system to provide ease of use with low overhead, \eg thread-per-connection web-services.
2774A non-blocking I/O library is currently under development for \CFA.
2775
2776\paragraph{Other Concurrency Tools}
2777\label{futur:tools}
2778
2779While monitors offer flexible and powerful concurrency for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package.
Examples of such tools include futures and promises~\cite{promises}, executors, and actors.
These additional features are useful when monitors offer a level of abstraction inadequate for certain tasks.
2782As well, new \CFA extensions should make it possible to create a uniform interface for virtually all mutual exclusion, including monitors and low-level locks.
2783
2784\paragraph{Implicit Threading}
2785\label{futur:implcit}
2786
Basic concurrent (embarrassingly parallel) applications can benefit greatly from implicit concurrency, where sequential programs are converted to concurrent ones, possibly with help from pragmas to guide the conversion.
2788This type of concurrency can be achieved both at the language level and at the library level.
2789The canonical example of implicit concurrency is concurrent nested @for@ loops, which are amenable to divide and conquer algorithms~\cite{uC++book}.
The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanisms to solve basic HPC data-concurrency problems.
2791However, implicit concurrency is a restrictive solution with significant limitations, so it can never replace explicit concurrent programming.
2792
2793
2794\section{Acknowledgements}
2795
2796The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz and Andrew Beach on the features described in this paper.
2797Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}), and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
2798
2799{%
2800\fontsize{9bp}{12bp}\selectfont%
2801\bibliography{pl,local}
2802}%
2803
2804\end{document}
2805
2806% Local Variables: %
2807% tab-width: 4 %
2808% fill-column: 120 %
2809% compile-command: "make" %
2810% End: %