1 | \documentclass[AMA,STIX1COL]{WileyNJD-v2} |
---|
2 | |
---|
3 | \articletype{RESEARCH ARTICLE}% |
---|
4 | |
---|
5 | % Referees |
---|
6 | % Doug Lea, dl@cs.oswego.edu, SUNY Oswego |
---|
7 | % Herb Sutter, hsutter@microsoft.com, Microsoft Corp |
---|
8 | % Gor Nishanov, gorn@microsoft.com, Microsoft Corp |
---|
9 | % James Noble, kjx@ecs.vuw.ac.nz, Victoria University of Wellington, School of Engineering and Computer Science |
---|
10 | |
---|
11 | \received{XXXXX} |
---|
12 | \revised{XXXXX} |
---|
13 | \accepted{XXXXX} |
---|
14 | |
---|
15 | \raggedbottom |
---|
16 | |
---|
17 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
18 | |
---|
19 | % Latex packages used in the document. |
---|
20 | |
---|
21 | \usepackage{epic,eepic} |
---|
22 | \usepackage{xspace} |
---|
23 | \usepackage{enumitem} |
---|
24 | \usepackage{comment} |
---|
25 | \usepackage{upquote} % switch curled `'" to straight |
---|
26 | \usepackage{listings} % format program code |
---|
27 | \usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig} |
---|
28 | \renewcommand{\thesubfigure}{(\Alph{subfigure})} |
---|
29 | \captionsetup{justification=raggedright,singlelinecheck=false} |
---|
30 | \usepackage{dcolumn} % align decimal points in tables |
---|
31 | \usepackage{capt-of} |
---|
32 | \setlength{\multicolsep}{6.0pt plus 2.0pt minus 1.5pt} |
---|
33 | |
---|
34 | \hypersetup{breaklinks=true} |
---|
35 | \definecolor{OliveGreen}{cmyk}{0.64 0 0.95 0.40} |
---|
36 | \definecolor{Mahogany}{cmyk}{0 0.85 0.87 0.35} |
---|
37 | \definecolor{Plum}{cmyk}{0.50 1 0 0} |
---|
38 | |
---|
39 | \usepackage[pagewise]{lineno} |
---|
40 | \renewcommand{\linenumberfont}{\scriptsize\sffamily} |
---|
41 | |
---|
\renewcommand{\topfraction}{0.8}			% maximum fraction of the top of a page that floats may occupy
\renewcommand{\bottomfraction}{0.8}		% maximum fraction of the bottom of a page that floats may occupy
\renewcommand{\floatpagefraction}{0.8}	% minimum fraction of a float page that must be filled by floats
\renewcommand{\textfraction}{0.0}		% minimum fraction of text on a page; 0 allows a page of only floats
46 | |
---|
\lefthyphenmin=3						% hyphenate only after at least 3 characters
\righthyphenmin=3						% and before at least 3 characters
49 | |
---|
50 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
51 | |
---|
52 | % Names used in the document. |
---|
53 | |
---|
54 | \newcommand{\CFAIcon}{\textsf{C}\raisebox{\depth}{\rotatebox{180}{\textsf{A}}}\xspace} % Cforall symbolic name |
---|
55 | \newcommand{\CFA}{\protect\CFAIcon} % safe for section/caption |
---|
56 | \newcommand{\CFL}{\textrm{Cforall}\xspace} % Cforall symbolic name |
---|
57 | \newcommand{\Celeven}{\textrm{C11}\xspace} % C11 symbolic name |
---|
58 | \newcommand{\CC}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}\xspace} % C++ symbolic name |
---|
59 | \newcommand{\CCeleven}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}11\xspace} % C++11 symbolic name |
---|
60 | \newcommand{\CCfourteen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}14\xspace} % C++14 symbolic name |
---|
61 | \newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name |
---|
62 | \newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name |
---|
63 | \newcommand{\Csharp}{C\raisebox{-0.7ex}{\large$^\sharp$}\xspace} % C# symbolic name |
---|
64 | |
---|
65 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
66 | |
---|
67 | \newcommand{\Textbf}[2][red]{{\color{#1}{\textbf{#2}}}} |
---|
68 | \newcommand{\Emph}[2][red]{{\color{#1}\textbf{\emph{#2}}}} |
---|
69 | \newcommand{\uC}{$\mu$\CC} |
---|
70 | \newcommand{\TODO}[1]{{\Textbf{#1}}} |
---|
71 | |
---|
72 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
73 | |
---|
74 | % Default underscore is too low and wide. Cannot use lstlisting "literate" as replacing underscore |
---|
75 | % removes it as a variable-name character so keywords in variables are highlighted. MUST APPEAR |
---|
76 | % AFTER HYPERREF. |
---|
77 | %\DeclareTextCommandDefault{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.1ex}}} |
---|
78 | \renewcommand{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.075ex}}} |
---|
79 | |
---|
80 | \renewcommand*{\thefootnote}{\Alph{footnote}} % hack because fnsymbol does not work |
---|
81 | %\renewcommand*{\thefootnote}{\fnsymbol{footnote}} |
---|
82 | |
---|
83 | \makeatletter |
---|
84 | % parindent is relative, i.e., toggled on/off in environments like itemize, so store the value for |
---|
85 | % use rather than use \parident directly. |
---|
86 | \newlength{\parindentlnth} |
---|
87 | \setlength{\parindentlnth}{\parindent} |
---|
88 | |
---|
89 | \newcommand{\LstBasicStyle}[1]{{\lst@basicstyle{\lst@basicstyle{#1}}}} |
---|
90 | \newcommand{\LstKeywordStyle}[1]{{\lst@basicstyle{\lst@keywordstyle{#1}}}} |
---|
91 | \newcommand{\LstCommentStyle}[1]{{\lst@basicstyle{\lst@commentstyle{#1}}}} |
---|
92 | |
---|
93 | \newlength{\gcolumnposn} % temporary hack because lstlisting does not handle tabs correctly |
---|
94 | \newlength{\columnposn} |
---|
95 | \setlength{\gcolumnposn}{3.5in} |
---|
96 | \setlength{\columnposn}{\gcolumnposn} |
---|
97 | |
---|
98 | \newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}} |
---|
99 | \newcommand{\CRT}{\global\columnposn=\gcolumnposn} |
---|
100 | |
---|
101 | % Denote newterms in particular font and index them without particular font and in lowercase, \eg \newterm{abc}. |
---|
102 | % The option parameter provides an index term different from the new term, \eg \newterm[\texttt{abc}]{abc} |
---|
103 | % The star version does not lowercase the index information, e.g., \newterm*{IBM}. |
---|
104 | \newcommand{\newtermFontInline}{\emph} |
---|
105 | \newcommand{\newterm}{\@ifstar\@snewterm\@newterm} |
---|
106 | \newcommand{\@newterm}[2][\@empty]{\lowercase{\def\temp{#2}}{\newtermFontInline{#2}}\ifx#1\@empty\index{\temp}\else\index{#1@{\protect#2}}\fi} |
---|
107 | \newcommand{\@snewterm}[2][\@empty]{{\newtermFontInline{#2}}\ifx#1\@empty\index{#2}\else\index{#1@{\protect#2}}\fi} |
---|
108 | |
---|
109 | % Latin abbreviation |
---|
110 | \newcommand{\abbrevFont}{\textit} % set empty for no italics |
---|
111 | \@ifundefined{eg}{ |
---|
112 | %\newcommand{\EG}{\abbrevFont{e}\abbrevFont{g}} |
---|
113 | \newcommand{\EG}{for example} |
---|
114 | \newcommand*{\eg}{% |
---|
115 | \@ifnextchar{,}{\EG}% |
---|
116 | {\@ifnextchar{:}{\EG}% |
---|
117 | {\EG,\xspace}}% |
---|
118 | }}{}% |
---|
119 | \@ifundefined{ie}{ |
---|
120 | %\newcommand{\IE}{\abbrevFont{i}\abbrevFont{e}} |
---|
121 | \newcommand{\IE}{that is} |
---|
122 | \newcommand*{\ie}{% |
---|
123 | \@ifnextchar{,}{\IE}% |
---|
124 | {\@ifnextchar{:}{\IE}% |
---|
125 | {\IE,\xspace}}% |
---|
126 | }}{}% |
---|
127 | \@ifundefined{etc}{ |
---|
128 | \newcommand{\ETC}{\abbrevFont{etc}} |
---|
129 | \newcommand*{\etc}{% |
---|
130 | \@ifnextchar{.}{\ETC}% |
---|
131 | {\ETC.\xspace}% |
---|
132 | }}{}% |
---|
133 | \@ifundefined{etal}{ |
---|
134 | \newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}} |
---|
135 | \newcommand*{\etal}{% |
---|
136 | \@ifnextchar{.}{\ETAL}% |
---|
137 | {\ETAL.\xspace}% |
---|
138 | }}{}% |
---|
139 | \@ifundefined{viz}{ |
---|
140 | \newcommand{\VIZ}{\abbrevFont{viz}} |
---|
141 | \newcommand*{\viz}{% |
---|
142 | \@ifnextchar{.}{\VIZ}% |
---|
143 | {\VIZ.\xspace}% |
---|
144 | }}{}% |
---|
145 | \makeatother |
---|
146 | |
---|
147 | \newenvironment{cquote} |
---|
148 | {\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}% |
---|
149 | \item\relax} |
---|
150 | {\endlist} |
---|
151 | |
---|
152 | %\newenvironment{cquote}{% |
---|
153 | %\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}% |
---|
154 | %\item\relax% |
---|
155 | %}{% |
---|
156 | %\endlist% |
---|
157 | %}% cquote |
---|
158 | |
---|
159 | % CFA programming language, based on ANSI C (with some gcc additions) |
---|
160 | \lstdefinelanguage{CFA}[ANSI]{C}{ |
---|
161 | morekeywords={ |
---|
162 | _Alignas, _Alignof, __alignof, __alignof__, asm, __asm, __asm__, __attribute, __attribute__, |
---|
163 | auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__, |
---|
164 | coroutine, disable, dtype, enable, exception, __extension__, fallthrough, fallthru, finally, |
---|
165 | __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__, |
---|
166 | inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or, |
---|
167 | otype, restrict, resume, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread, |
---|
168 | _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__, |
---|
169 | virtual, __volatile, __volatile__, waitfor, when, with, zero_t}, |
---|
170 | moredirectives={defined,include_next}, |
---|
171 | % replace/adjust listing characters that look bad in sanserif |
---|
172 | literate={-}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1 |
---|
173 | {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1 |
---|
174 | {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1 |
---|
175 | {<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2, |
---|
176 | } |
---|
177 | |
---|
178 | \lstset{ |
---|
179 | language=CFA, |
---|
180 | columns=fullflexible, |
---|
181 | basicstyle=\linespread{0.9}\sf, % reduce line spacing and use sanserif font |
---|
182 | stringstyle=\tt, % use typewriter font |
---|
183 | tabsize=5, % N space tabbing |
---|
184 | xleftmargin=\parindentlnth, % indent code to paragraph indentation |
---|
185 | %mathescape=true, % LaTeX math escape in CFA code $...$ |
---|
186 | escapechar=\$, % LaTeX escape in CFA code |
---|
187 | keepspaces=true, % |
---|
188 | showstringspaces=false, % do not show spaces with cup |
---|
189 | showlines=true, % show blank lines at end of code |
---|
190 | aboveskip=4pt, % spacing above/below code block |
---|
191 | belowskip=3pt, |
---|
192 | moredelim=**[is][\color{red}]{`}{`}, |
---|
193 | }% lstset |
---|
194 | |
---|
195 | % uC++ programming language, based on ANSI C++ |
---|
196 | \lstdefinelanguage{uC++}[ANSI]{C++}{ |
---|
197 | morekeywords={ |
---|
198 | _Accept, _AcceptReturn, _AcceptWait, _Actor, _At, _CatchResume, _Cormonitor, _Coroutine, _Disable, |
---|
199 | _Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask, |
---|
200 | _Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw}, |
---|
201 | } |
---|
202 | |
---|
203 | % Go programming language: https://github.com/julienc91/listings-golang/blob/master/listings-golang.sty |
---|
204 | \lstdefinelanguage{Golang}{ |
---|
205 | morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,}, |
---|
206 | morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64, |
---|
207 | bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface}, |
---|
208 | morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,}, |
---|
209 | morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,}, |
---|
210 | morekeywords=[5]{Println,Printf,Error,}, |
---|
211 | sensitive=true, |
---|
212 | morecomment=[l]{//}, |
---|
213 | morecomment=[s]{/*}{*/}, |
---|
214 | morestring=[b]', |
---|
215 | morestring=[b]", |
---|
216 | morestring=[s]{`}{`}, |
---|
217 | % replace/adjust listing characters that look bad in sanserif |
---|
218 | literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1 |
---|
219 | {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1 |
---|
220 | {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1 |
---|
221 | {<-}{\makebox[2ex][c]{\textrm{\textless}\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}}2, |
---|
222 | } |
---|
223 | |
---|
224 | \lstnewenvironment{cfa}[1][] |
---|
225 | {\lstset{#1}} |
---|
226 | {} |
---|
227 | \lstnewenvironment{C++}[1][] % use C++ style |
---|
228 | {\lstset{language=C++,moredelim=**[is][\protect\color{red}]{`}{`}}\lstset{#1}} |
---|
229 | {} |
---|
230 | \lstnewenvironment{uC++}[1][] |
---|
231 | {\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{`}{`}}\lstset{#1}} |
---|
232 | {} |
---|
233 | \lstnewenvironment{Go}[1][] |
---|
234 | {\lstset{language=Golang,moredelim=**[is][\protect\color{red}]{`}{`}}\lstset{#1}} |
---|
235 | {} |
---|
236 | \lstnewenvironment{python}[1][] |
---|
237 | {\lstset{language=python,moredelim=**[is][\protect\color{red}]{`}{`}}\lstset{#1}} |
---|
238 | {} |
---|
239 | \lstnewenvironment{java}[1][] |
---|
240 | {\lstset{language=java,moredelim=**[is][\protect\color{red}]{`}{`}}\lstset{#1}} |
---|
241 | {} |
---|
242 | |
---|
243 | % inline code @...@ |
---|
244 | \lstMakeShortInline@% |
---|
245 | |
---|
246 | \newcommand{\commenttd}[1]{{\color{red}{Thierry : #1}}} |
---|
247 | |
---|
248 | \let\OLDthebibliography\thebibliography |
---|
249 | \renewcommand\thebibliography[1]{ |
---|
250 | \OLDthebibliography{#1} |
---|
251 | \setlength{\parskip}{0pt} |
---|
252 | \setlength{\itemsep}{4pt plus 0.3ex} |
---|
253 | } |
---|
254 | |
---|
255 | \newsavebox{\myboxA} |
---|
256 | \newsavebox{\myboxB} |
---|
257 | \newsavebox{\myboxC} |
---|
258 | \newsavebox{\myboxD} |
---|
259 | |
---|
260 | \title{\texorpdfstring{Advanced Control-flow and Concurrency in \protect\CFA}{Advanced Control-flow in Cforall}} |
---|
261 | |
---|
262 | \author[1]{Thierry Delisle} |
---|
263 | \author[1]{Peter A. Buhr*} |
---|
264 | \authormark{DELISLE \textsc{et al.}} |
---|
265 | |
---|
266 | \address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, ON}, \country{Canada}}} |
---|
267 | |
---|
268 | \corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}} |
---|
269 | |
---|
270 | % \fundingInfo{Natural Sciences and Engineering Research Council of Canada} |
---|
271 | |
---|
272 | \abstract[Summary]{ |
---|
\CFA is a polymorphic, nonobject-oriented, concurrent, backward-compatible extension of the C programming language.
274 | This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA. |
---|
These features are created from scratch because ISO C provides only low-level and/or unimplemented concurrency, so C programmers continue to rely on library approaches such as pthreads.
276 | \CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization. |
---|
277 | % Library extension for executors, futures, and actors are built on these basic mechanisms. |
---|
278 | The runtime provides significant programmer simplification and safety by eliminating spurious wakeup and monitor barging. |
---|
279 | The runtime also ensures multiple monitors can be safely acquired in a deadlock-free way, and this feature is fully integrated with all monitor synchronization mechanisms. |
---|
280 | All control-flow features integrate with the \CFA polymorphic type-system and exception handling, while respecting the expectations and style of C programmers. |
---|
281 | Experimental results show comparable performance of the new features with similar mechanisms in other concurrent programming languages. |
---|
282 | }% |
---|
283 | |
---|
284 | \keywords{C \CFA (Cforall) coroutine concurrency generator monitor parallelism runtime thread} |
---|
285 | |
---|
286 | |
---|
287 | \begin{document} |
---|
288 | %\linenumbers % comment out to turn off line numbering |
---|
289 | |
---|
290 | \maketitle |
---|
291 | |
---|
292 | |
---|
293 | \section{Introduction} |
---|
294 | |
---|
295 | \CFA~\cite{Moss18,Cforall} is a modern, polymorphic, nonobject-oriented\footnote{ |
---|
296 | \CFA has object-oriented features, such as constructors, destructors, and simple trait/interface inheritance. |
---|
297 | % Go interfaces, Rust traits, Swift Protocols, Haskell Type Classes and Java Interfaces. |
---|
298 | % "Trait inheritance" works for me. "Interface inheritance" might also be a good choice, and distinguish clearly from implementation inheritance. |
---|
299 | % You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code. |
---|
300 | % Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type". |
---|
However, functions \emph{cannot} be nested in structures and there is no mechanism to designate a function parameter as a receiver (\lstinline@this@) parameter.},
backward-compatible extension of the C programming language.
303 | In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a vehicle for new typing and control-flow capabilities on top of a highly popular programming language\footnote{ |
---|
304 | The TIOBE index~\cite{TIOBE} for May 2020 ranks the top five \emph{popular} programming languages as C 17\%, Java 16\%, Python 9\%, \CC 6\%, and \Csharp 4\% = 52\%, and over the past 30 years, C has always ranked either first or second in popularity.} |
---|
305 | allowing immediate dissemination. |
---|
306 | This paper discusses the design philosophy and implementation of \CFA's advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA. |
---|
307 | |
---|
308 | % The call/return extensions retain state between callee and caller versus losing the callee's state on return; |
---|
309 | % the concurrency extensions allow high-level management of threads. |
---|
310 | |
---|
311 | The \CFA control-flow framework extends ISO \Celeven~\cite{C11} with new call/return and concurrent/parallel control-flow. |
---|
312 | Call/return control-flow with argument and parameter passing appeared in the first programming languages. |
---|
313 | Over the past 50 years, call/return has been augmented with features like static and dynamic call, exceptions (multilevel return) and generators/coroutines (see Section~\ref{s:StatefulFunction}). |
---|
314 | While \CFA has mechanisms for dynamic call (algebraic effects~\cite{Zhang19}) and exceptions\footnote{ |
---|
315 | \CFA exception handling will be presented in a separate paper. |
---|
The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.}
317 | , this work only discusses retaining state between calls via generators and coroutines. |
---|
318 | \newterm{Coroutining} was introduced by Conway~\cite{Conway63}, discussed by Knuth~\cite[\S~1.4.2]{Knuth73V1}, implemented in Simula67~\cite{Simula67}, formalized by Marlin~\cite{Marlin80}, and is now popular and appears in old and new programming languages: CLU~\cite{CLU}, \Csharp~\cite{Csharp}, Ruby~\cite{Ruby}, Python~\cite{Python}, JavaScript~\cite{JavaScript}, Lua~\cite{Lua}, \CCtwenty~\cite{C++20Coroutine19}. |
---|
Coroutining is sequential execution requiring direct handoff among coroutines, \ie only the programmer controls execution order.
If coroutines transfer to an internal event-engine for scheduling the next coroutine (as in async-await), the program transitions into the realm of concurrency~\cite[\S~3]{Buhr05a}.
321 | Coroutines are only a stepping stone toward concurrency where the commonality is that coroutines and threads retain state between calls. |
---|
322 | |
---|
\Celeven and \CCeleven define concurrency~\cite[\S~7.26]{C11}, but it is largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork and join in a function and mutex or condition locks, which is low-level and error-prone.}
Interestingly, almost a decade after the \Celeven standard, the most recent versions of gcc, clang, and msvc do not support the \Celeven header @threads.h@, indicating no interest in the \Celeven concurrency approach (possibly because of the recent effort to add concurrency to \CC).
325 | While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC. |
---|
326 | In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages. |
---|
As multicore hardware became available in the 1980s/1990s, both user and kernel threading were examined.
Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}.
Libraries like pthreads were developed for C, and the Solaris operating system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads.
As a result, many languages adopted the 1:1 kernel-threading model, like Java (Scala), Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, \Csharp~\cite{Csharp} and Rust~\cite{Rust}, with a variety of presentation mechanisms.
331 | From 2000 onward, several language implementations have championed the M:N user-threading model, like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book}, including putting green threads back into Java~\cite{Quasar}, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}. |
---|
The main argument for user-level threading is that it is lighter weight than kernel threading because locking and context switching do not cross the kernel boundary, so there are fewer restrictions on programming styles that encourage large numbers of threads performing medium-sized work to facilitate load balancing by the runtime~\cite{Verch12}.
333 | As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}. |
---|
334 | Finally, performant user-threading implementations, both in time and space, meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety. |
---|
335 | |
---|
336 | A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \eg some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}. |
---|
337 | The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is compiled as sequential without any explicit concurrent directive. |
---|
338 | One solution is low-level qualifiers and functions, \eg @volatile@ and atomics, allowing \emph{programmers} to explicitly write safe, race-free~\cite{Boehm12} programs. |
---|
339 | A safer solution is high-level language constructs so the \emph{compiler} knows the concurrency boundaries, \ie where mutual exclusion and synchronization are acquired and released, and provide implicit safety at and across these boundaries. |
---|
340 | While the optimization problem is best known with respect to concurrency, it applies to other complex control-flows like exceptions and coroutines. |
---|
341 | As well, language solutions allow matching the language paradigm with the approach, \eg matching the functional paradigm with data-flow programming or the imperative paradigm with thread programming. |
---|
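As a concrete illustration of the low-level approach, the following sketch (ours, using standard \Celeven atomics rather than any \CFA feature) explicitly marks a shared counter so that increments are race free without a lock:
\begin{cfa}
#include <stdatomic.h>
#include <stdio.h>
_Atomic int counter = 0;				// programmer explicitly marks the shared data
void worker( void ) {
	atomic_fetch_add( &counter, 1 );	// atomic read-modify-write, race free
}
int main() {
	worker();
	printf( "%d\n", atomic_load( &counter ) );
}
\end{cfa}
The high-level alternative argued for here attaches such knowledge to language constructs rather than to individual data accesses.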
342 | |
---|
343 | Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary. |
---|
344 | Two concurrency violations of this philosophy are \emph{spurious} or \emph{random wakeup}~\cite[\S~9]{Buhr05a}, and \emph{barging}\footnote{ |
---|
345 | Barging is competitive succession instead of direct handoff, \ie after a lock is released both arriving and preexisting waiter threads compete to acquire the lock. |
---|
346 | Hence, an arriving thread can temporally \emph{barge} ahead of threads already waiting for an event, which can repeat indefinitely leading to starvation of waiter threads. |
---|
347 | } or signals-as-hints~\cite[\S~8]{Buhr05a}, where one is a consequence of the other, \ie once there is spurious wakeup, barging follows. |
---|
(The authors' experience teaching concurrency is that students are confused by these semantics.)
349 | However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~9]{Buhr05a}; |
---|
350 | it is a performance design choice. |
---|
We argue that removing spurious wakeup and signals-as-hints makes concurrent programming simpler and safer as there is less local nondeterminism to manage.
If barging acquisition is allowed, its specialized performance advantage should be available as an option, not the default.
353 | |
---|
354 | \CFA embraces language extensions for advanced control-flow, user-level threading, and safety as the default. |
---|
355 | We present comparative examples to support our argument that the \CFA control-flow extensions are as expressive and safe as those in other concurrent imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms. |
---|
356 | The main contributions of this work are: |
---|
357 | \begin{itemize}[topsep=3pt,itemsep=0pt] |
---|
358 | \item |
---|
359 | a set of fundamental execution properties that dictate which language-level control-flow features need to be supported, |
---|
360 | |
---|
361 | \item |
---|
362 | integration of these language-level control-flow features, while respecting the style and expectations of C programmers, |
---|
363 | |
---|
364 | \item |
---|
monitor synchronization without barging, and the ability to safely acquire multiple monitors in a deadlock-free way, while seamlessly integrating these capabilities with all monitor synchronization mechanisms,
366 | |
---|
367 | \item |
---|
368 | providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features, |
---|
369 | |
---|
370 | % \item |
---|
371 | % library extensions for executors, futures, and actors built on the basic mechanisms. |
---|
372 | |
---|
373 | \item |
---|
a runtime system without spurious wakeup and with no performance loss,
375 | |
---|
376 | \item |
---|
377 | a dynamic partitioning mechanism to segregate groups of executing user and kernel threads performing specialized work, \eg web-server or compute engine, or requiring different scheduling, \eg NUMA or real-time. |
---|
378 | |
---|
379 | % \item |
---|
380 | % a nonblocking I/O library |
---|
381 | |
---|
382 | \item |
---|
383 | experimental results showing comparable performance of the \CFA features with similar mechanisms in other languages. |
---|
384 | \end{itemize} |
---|
385 | |
---|
386 | Section~\ref{s:FundamentalExecutionProperties} presents the compositional hierarchy of execution properties directing the design of control-flow features in \CFA. |
---|
387 | Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls producing constructs @generator@ and @coroutine@. |
---|
388 | Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread producing the @thread@ construct. |
---|
Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms to restrict nondeterminism: controlling shared access to resources, called mutual exclusion, and controlling timing relationships among threads, called synchronization.
390 | Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs. |
---|
391 | Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure threads and virtual processors (kernel threads). |
---|
392 | Section~\ref{s:Performance} uses microbenchmarks to compare \CFA threading with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js v12.18.0, and \uC 7.0.0. |
---|
393 | |
---|
394 | |
---|
395 | \section{Fundamental Execution Properties} |
---|
396 | \label{s:FundamentalExecutionProperties} |
---|
397 | |
---|
398 | The features in a programming language should be composed of a set of fundamental properties rather than an ad hoc collection chosen by the designers. |
---|
399 | To this end, the control-flow features created for \CFA are based on the fundamental properties of any language with function-stack control-flow (see also \uC~\cite[pp.~140-142]{uC++}). |
---|
400 | The fundamental properties are execution state, thread, and mutual-exclusion/synchronization. |
---|
401 | These independent properties can be used to compose different language features, forming a compositional hierarchy, where the combination of all three is the most advanced feature, called a thread. |
---|
While it is possible for a language to only provide threads for composing programs~\cite{Hermes90}, this unnecessarily complicates, and makes inefficient, solutions to certain classes of problems.
403 | As is shown, each of the non-rejected composed language features solves a particular set of problems, and hence, has a defensible position in a programming language. |
---|
404 | If a compositional feature is missing, a programmer has too few fundamental properties resulting in a complex and/or inefficient solution. |
---|
405 | |
---|
406 | In detail, the fundamental properties are: |
---|
407 | \begin{description}[leftmargin=\parindent,topsep=3pt,parsep=0pt] |
---|
408 | \item[\newterm{execution state}:] |
---|
It is the state information needed by a control-flow feature to initialize, manage, and de-initialize both computation data and execution location(s).
410 | For example, calling a function initializes a stack frame including contained objects with constructors, manages local data in blocks and return locations during calls, and de-initializes the frame by running any object destructors and management operations. |
---|
411 | State is retained in fixed-sized aggregate structures (objects) and dynamic-sized stack(s), often allocated in the heap(s) managed by the runtime system. |
---|
The lifetime of state varies with the control-flow feature, where longer lifetime and dynamic size provide greater power but also increase usage complexity and cost.
413 | Control-flow transfers among execution states in multiple ways, such as function call, context switch, asynchronous await, etc. |
---|
414 | Because the programming language determines what constitutes an execution state, implicitly manages this state, and defines movement mechanisms among states, execution state is an elementary property of the semantics of a programming language. |
---|
415 | % An execution-state is related to the notion of a process continuation \cite{Hieb90}. |
---|
416 | |
---|
417 | \item[\newterm{threading}:] |
---|
418 | It is execution of code that occurs independently of other execution, where an individual thread's execution is sequential. |
---|
419 | Multiple threads provide \emph{concurrent execution}; |
---|
420 | concurrent execution becomes parallel when run on multiple processing units, \eg hyper-threading, cores, or sockets. |
---|
421 | A programmer needs mechanisms to create, block and unblock, and join with a thread, even if these basic mechanisms are supplied indirectly through high-level features. |
---|
422 | |
---|
423 | \item[\newterm{mutual-exclusion / synchronization (MES)}:] |
---|
424 | It is the concurrency mechanism to perform an action without interruption and establish timing relationships among multiple threads. |
---|
We contend these two properties are independent, \ie mutual exclusion cannot provide synchronization and vice versa without introducing additional threads~\cite[\S~4]{Buhr05a}; a small sketch after this list illustrates the independence.
Limiting MES functionality results in contrived solutions and inefficiency on multicore von Neumann computers, where shared memory is a foundational aspect of their design.
427 | \end{description} |
---|
428 | These properties are fundamental as they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features, concurrency, or provide MES without (atomic) hardware mechanisms. |
---|
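The following pthreads sketch (ours, not \CFA code) illustrates the independence claimed above: the mutex alone provides mutual exclusion but cannot force the consumer to wait for the producer; establishing that timing relationship (synchronization) requires the separate condition variable.
\begin{cfa}
#include <pthread.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t c = PTHREAD_COND_INITIALIZER;
int data, ready = 0;
void * producer( void * arg ) {
	pthread_mutex_lock( &m );			// mutual exclusion only
	data = 42;  ready = 1;
	pthread_cond_signal( &c );			// synchronization: wake a waiting consumer
	pthread_mutex_unlock( &m );
	return NULL;
}
void * consumer( void * arg ) {
	pthread_mutex_lock( &m );
	while ( ! ready )					// mutual exclusion alone cannot order the threads
		pthread_cond_wait( &c, &m );	// waiting establishes the timing relationship
	pthread_mutex_unlock( &m );
	return NULL;
}
int main() {
	pthread_t p, q;
	pthread_create( &p, NULL, producer, NULL );
	pthread_create( &q, NULL, consumer, NULL );
	pthread_join( p, NULL );  pthread_join( q, NULL );
}
\end{cfa}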
429 | |
---|
430 | |
---|
431 | \subsection{Structuring execution properties} |
---|
432 | |
---|
433 | Programming languages seldom present the fundamental execution properties directly to programmers. |
---|
434 | Instead, the properties are packaged into higher-level constructs that encapsulate details and provide safety to these low-level mechanisms. |
---|
Interestingly, language designers often pick and choose among these execution properties, providing a varying subset of constructs.
436 | |
---|
437 | Table~\ref{t:ExecutionPropertyComposition} shows all combinations of the three fundamental execution properties available to language designers. |
---|
438 | (When doing combination case-analysis, not all combinations are meaningful.) |
---|
The combinations of state, thread, and MES compose a hierarchy of control-flow features, all of which have appeared in prior programming languages, where each of these languages has found the feature useful.
440 | To understand the table, it is important to review the basic von Neumann execution requirement of at least one thread and execution state providing some form of call stack. |
---|
441 | For table entries missing these minimal components, the property is borrowed from the invoker (caller). |
---|
442 | Each entry in the table, numbered \textbf{1}--\textbf{12}, is discussed with respect to how the execution properties combine to generate a high-level language construct. |
---|
443 | |
---|
444 | \begin{table} |
---|
445 | \caption{Execution property composition} |
---|
446 | \centering |
---|
447 | \label{t:ExecutionPropertyComposition} |
---|
448 | \renewcommand{\arraystretch}{1.25} |
---|
449 | %\setlength{\tabcolsep}{5pt} |
---|
450 | \vspace*{-5pt} |
---|
451 | \begin{tabular}{c|c||l|l} |
---|
452 | \multicolumn{2}{c||}{Execution properties} & \multicolumn{2}{c}{Mutual exclusion / synchronization} \\ |
---|
453 | \hline |
---|
454 | stateful & thread & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\ |
---|
455 | \hline |
---|
456 | \hline |
---|
457 | No & No & \textbf{1}\ \ \ @struct@ & \textbf{2}\ \ \ @mutex@ @struct@ \\ |
---|
458 | \hline |
---|
459 | Yes (stackless) & No & \textbf{3}\ \ \ @generator@ & \textbf{4}\ \ \ @mutex@ @generator@ \\ |
---|
460 | \hline |
---|
461 | Yes (stackful) & No & \textbf{5}\ \ \ @coroutine@ & \textbf{6}\ \ \ @mutex@ @coroutine@ \\ |
---|
462 | \hline |
---|
463 | No & Yes & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\ |
---|
464 | \hline |
---|
465 | Yes (stackless) & Yes & \textbf{9}\ \ \ {\color{red}rejected} & \textbf{10}\ \ \ {\color{red}rejected} \\ |
---|
466 | \hline |
---|
467 | Yes (stackful) & Yes & \textbf{11}\ \ \ @thread@ & \textbf{12}\ \ @mutex@ @thread@ \\ |
---|
468 | \end{tabular} |
---|
469 | \vspace*{-8pt} |
---|
470 | \end{table} |
---|
471 | |
---|
472 | Case 1 is a structure where access functions borrow local state (stack frame/activation) and thread from the invoker and retain this state across \emph{callees}, \ie function local-variables are retained on the borrowed stack during calls. |
---|
473 | Structures are a foundational mechanism for data organization, and access functions provide interface abstraction and code sharing in all programming languages. |
---|
474 | Case 2 is case 1 with thread safety to a structure's state where access functions provide serialization (mutual exclusion) and scheduling among calling threads (synchronization). |
---|
475 | A @mutex@ structure, often called a \newterm{monitor}, provides a high-level interface for race-free access of shared data in concurrent programming languages. |
---|
476 | Case 3 is case 1 where the structure can implicitly retain execution state and access functions use this execution state to resume/suspend across \emph{callers}, but resume/suspend does not retain a function's local state. |
---|
A stackless structure, often called a \newterm{generator} or \emph{iterator}, is \newterm{stackless} because it still borrows the caller's stack and thread, but the stack is used only to preserve state across its callees, not callers.
478 | Generators provide the first step toward directly solving problems like finite-state machines (FSMs) that retain data and execution state between calls, whereas normal functions restart on each call. |
---|
479 | Case 4 is cases 2 and 3 with thread safety during execution of the generator's access functions. |
---|
480 | A @mutex@ generator extends generators into the concurrent domain. |
---|
481 | Cases 5 and 6 are like cases 3 and 4 where the structure is extended with an implicit separate stack, so only the thread is borrowed by access functions. |
---|
482 | A stackful generator, often called a \newterm{coroutine}, is \newterm{stackful} because resume/suspend now context switch to/from the caller's and coroutine's stack. |
---|
483 | A coroutine extends the state retained between calls beyond the generator's structure to arbitrary call depth in the access functions. |
---|
484 | Cases 7, 8, 9 and 10 are rejected because a new thread must have its own stack, where the thread begins and stack frames are stored for calls, \ie it is unrealistic for a thread to borrow a stack. |
---|
For cases 9 and 10, the stackless frame is not growable, precluding accepting nested calls, making calls, blocking (which requires calls), or preemption (which requires pushing an interrupt frame), all of which compound to require an unknown amount of execution state.
486 | Hence, if this kind of uninterruptable thread exists, it must execute to completion, \ie computation only, which severely restricts runtime management. |
---|
487 | Cases 11 and 12 are a stackful thread with and without safe access to shared state. |
---|
488 | A thread is the language mechanism to start another thread of control in a program with growable execution state for call/return execution. |
---|
489 | In general, language constructs with more execution properties increase the cost of creation and execution along with complexity of usage. |
---|
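As a small \CFA-flavoured sketch of the first two cases (ours, anticipating the @monitor@ and @mutex@ features of Section~\ref{s:Monitor}), case 1 is an ordinary structure whose access function borrows the caller's stack and thread, and case 2 adds @mutex@ so concurrent calls are serialized:
\begin{cfa}
struct Counter { int cnt; };						// case 1: plain structure
void inc( Counter & c ) { c.cnt += 1; }				// access function, no thread safety
monitor MutexCounter { int cnt; };					// case 2: mutex structure (monitor)
void inc( MutexCounter & mutex c ) { c.cnt += 1; }	// call acquires mutual exclusion
\end{cfa}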
490 | |
---|
Given the execution-properties taxonomy, programmers now ask three basic questions: is state necessary across callers and, if so, how much; is a separate thread necessary; and is thread safety necessary.
492 | Table~\ref{t:ExecutionPropertyComposition} then suggests the optimal language feature needed for implementing a programming problem. |
---|
493 | The following sections describe how \CFA fills in \emph{all} the nonrejected table entries with language features, while other programming languages may only provide a subset of the table. |
---|
494 | |
---|
495 | |
---|
496 | \subsection{Design requirements} |
---|
497 | |
---|
498 | The following design requirements largely stem from building \CFA on top of C. |
---|
499 | \begin{itemize}[topsep=3pt,parsep=0pt] |
---|
500 | \item |
---|
501 | All communication must be statically type checkable for early detection of errors and efficient code generation. |
---|
502 | This requirement is consistent with the fact that C is a statically typed programming language. |
---|
503 | |
---|
504 | \item |
---|
Direct interaction among language features must be possible, allowing any feature to be selected without restricting communication.
506 | For example, many concurrent languages do not provide direct communication calls among threads, \ie threads only communicate indirectly through monitors, channels, messages, and/or futures. |
---|
507 | Indirect communication increases the number of objects, consuming more resources, and requires additional synchronization and possibly data transfer. |
---|
508 | |
---|
509 | \item |
---|
510 | All communication is performed using function calls, \ie data are transmitted from argument to parameter and results are returned from function calls. |
---|
511 | Alternative forms of communication, such as call-backs, message passing, channels, or communication ports, step outside of C's normal form of communication. |
---|
512 | |
---|
513 | \item |
---|
514 | All stateful features must follow the same declaration scopes and lifetimes as other language data. |
---|
515 | For C that means at program startup, during block and function activation, and on demand using dynamic allocation. |
---|
516 | |
---|
517 | \item |
---|
518 | MES must be available implicitly in language constructs, \eg Java built-in monitors, as well as explicitly for specialized requirements, \eg @java.util.concurrent@, because requiring programmers to build MES using low-level locks often leads to incorrect programs. |
---|
519 | Furthermore, reducing synchronization scope by encapsulating it within language constructs further reduces errors in concurrent programs. |
---|
520 | |
---|
521 | \item |
---|
522 | Both synchronous and asynchronous communication are needed. |
---|
523 | However, we believe the best way to provide asynchrony, such as call-buffering/chaining and/or returning futures~\cite{multilisp}, is building it from expressive synchronous features. |
---|
524 | |
---|
525 | \item |
---|
526 | Synchronization must be able to control the service order of requests including prioritizing selection from different kinds of outstanding requests, and postponing a request for an unspecified time while continuing to accept new requests. |
---|
527 | Otherwise, certain concurrency problems are difficult, \eg web server, disk scheduling, and the amount of concurrency is inhibited~\cite{Gentleman81}. |
---|
528 | \end{itemize} |
---|
We have satisfied these requirements in \CFA while maintaining backward compatibility with the huge body of legacy C programs.
530 | % In contrast, other new programming languages must still access C programs (\eg operating-system service routines), but do so through fragile C interfaces. |
---|
531 | |
---|
532 | |
---|
533 | \subsection{Asynchronous await / call} |
---|
534 | |
---|
535 | Asynchronous await/call is a caller mechanism for structuring programs and/or increasing concurrency, where the caller (client) postpones an action into the future, which is subsequently executed by a callee (server). |
---|
536 | The caller detects the action's completion through a \newterm{future} or \newterm{promise}. |
---|
537 | The benefit is asynchronous caller execution with respect to the callee until future resolution. |
---|
538 | For single-threaded languages like JavaScript, an asynchronous call passes a callee action, which is queued in the event-engine, and continues execution with a promise. |
---|
539 | When the caller needs the promise to be fulfilled, it executes @await@. |
---|
A promise-completion call-back can be part of the callee action or the caller is rescheduled;
in either case, the call-back is executed after the promise is fulfilled.
542 | While asynchronous calls generate new callee (server) events, we contend this mechanism is insufficient for advanced control-flow mechanisms like generators or coroutines, which are discussed next. |
---|
543 | Specifically, control between caller and callee occurs indirectly through the event-engine precluding direct handoff and cycling among events, and requires complex resolution of a control promise and data. |
---|
Note, @async-await@ is just syntactic sugar over the event engine, so it does not solve these deficiencies.
For multithreaded languages like Java, the asynchronous call queues a callee action with an executor (server), which subsequently executes the work by a thread in the executor thread-pool.
The problem is when concurrent work-units need to interact and/or block, as this affects the executor by stopping threads.
547 | While it is possible to extend this approach to support the necessary mechanisms, \eg message passing in Actors, we show monitors and threads provide an equally competitive approach that does not deviate from normal call communication and can be used to build asynchronous call, as is done in Java. |
---|
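As an illustration that asynchronous call can be layered on synchronous features, the following sketch (ours, written with pthreads primitives rather than the \CFA library) builds a one-shot future from a lock and a condition variable: a server thread fulfills the future and a client awaits it.
\begin{cfa}
#include <pthread.h>
typedef struct {							// one-shot future
	pthread_mutex_t m;
	pthread_cond_t c;
	int done, value;
} Future;
void future_init( Future * f ) {
	pthread_mutex_init( &f->m, NULL );  pthread_cond_init( &f->c, NULL );
	f->done = 0;
}
void fulfill( Future * f, int v ) {			// callee (server) side
	pthread_mutex_lock( &f->m );
	f->value = v;  f->done = 1;
	pthread_cond_signal( &f->c );
	pthread_mutex_unlock( &f->m );
}
int await( Future * f ) {					// caller (client) side
	pthread_mutex_lock( &f->m );
	while ( ! f->done ) pthread_cond_wait( &f->c, &f->m );
	pthread_mutex_unlock( &f->m );
	return f->value;
}
\end{cfa}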
548 | |
---|
549 | |
---|
550 | \section{Stateful Function} |
---|
551 | \label{s:StatefulFunction} |
---|
552 | |
---|
553 | A \emph{stateful function} has the ability to remember state between calls, where state can be either data or execution, \eg plugin, device driver, FSM. |
---|
554 | A simple technique to retain data state between calls is @static@ declarations within a function, which is often implemented by hoisting the declarations to the global scope but hiding the names within the function using name mangling. |
---|
However, each call starts the function at the top, making it difficult to determine the last point of execution in an algorithm, and requiring multiple flag variables and testing to reestablish the continuation point.
556 | Hence, the next step of generalizing function state is implicitly remembering the return point between calls and reentering the function at this point rather than the top, called \emph{generators}\,/\,\emph{iterators} or \emph{stackless coroutines}. |
---|
557 | For example, a Fibonacci generator retains data and execution state allowing it to remember prior values needed to generate the next value and the location in the algorithm to compute that value. |
---|
558 | The next step of generalization is instantiating the function to allow multiple named instances, \eg multiple Fibonacci generators, where each instance has its own state, and hence, can generate an independent sequence of values. |
---|
559 | Note, a subset of generator state is a function \emph{closure}, \ie the technique of capturing lexical references when returning a nested function. |
---|
560 | A further generalization is adding a stack to a generator's state, called a \emph{coroutine}, so it can suspend outside of itself, \eg call helper functions to arbitrary depth before suspending back to its resumer without unwinding these calls. |
---|
561 | For example, a coroutine iterator for a binary tree can stop the traversal at the visit point (pre, infix, post traversal), return the node value to the caller, and then continue the recursive traversal from the current node on the next call. |
---|
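For example, the following C sketch (ours) retains Fibonacci data state with @static@ declarations; because all callers share the hidden globals, only one sequence can be generated, and algorithms with multiple phases additionally need flag variables to re-establish the continuation point. Figure~\ref{f:CFibonacci} removes the single-instance limitation by moving the state into a structure.
\begin{cfa}
int fib( void ) {						// data state retained with static declarations
	static int fn1 = 1, fn = 0;			// hoisted to global scope, names mangled
	int ret = fn;						// execution still restarts at the top on each call
	fn = fn + fn1;  fn1 = ret;
	return ret;							// 0, 1, 1, 2, 3, 5, ...
}
\end{cfa}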
562 | |
---|
563 | There are two styles of activating a stateful function, \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). |
---|
These styles \emph{do not} cause incremental stack growth, \eg a million resume/suspend or resume/resume cycles do not remember each cycle, just the last resumer for each cycle.
565 | Selecting between stackless/stackful semantics and asymmetric/symmetric style is a tradeoff between programming requirements, performance, and design, where stackless is faster and smaller using modified call/return between closures, stackful is more general but slower and larger using context switching between distinct stacks, and asymmetric is simpler control-flow than symmetric. |
---|
566 | Additionally, storage management for the closure/stack must be factored into design and performance, especially in unmanaged languages without garbage collection. |
---|
567 | Note, creation cost (closure/stack) is amortized across usage, so activation cost (resume/suspend) is usually the dominant factor. |
---|
568 | |
---|
569 | % The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine. |
---|
570 | % Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension. |
---|
571 | % If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited. |
---|
572 | % If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions. |
---|
573 | % Hence, refactoring a stackless coroutine may require changing it to stackful. |
---|
574 | % A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack. |
---|
575 | % As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). |
---|
576 | % A fixed closure activated by modified call/return is faster than a variable closure activated by context switching. |
---|
577 | % Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance. |
---|
578 | % Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general. |
---|
579 | % nppNote, creation cost is amortized across usage, so activation cost is usually the dominant factor. |
---|
580 | |
---|
581 | For example, Python presents asymmetric generators as a function object, \uC presents symmetric coroutines as a \lstinline[language=C++]|class|-like object, and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}. |
---|
582 | \begin{center} |
---|
583 | \begin{tabular}{@{}l|l|l@{}} |
---|
584 | \multicolumn{1}{@{}c|}{Python asymmetric generator} & \multicolumn{1}{c|}{\uC symmetric coroutine} & \multicolumn{1}{c@{}}{Pthreads thread} \\ |
---|
585 | \hline |
---|
586 | \begin{python} |
---|
587 | `def Gen():` $\LstCommentStyle{\color{red}// function}$ |
---|
588 | ... yield val ... |
---|
589 | gen = Gen() |
---|
590 | for i in range( 10 ): |
---|
591 | print( next( gen ) ) |
---|
592 | \end{python} |
---|
593 | & |
---|
594 | \begin{uC++} |
---|
595 | `_Coroutine Cycle {` $\LstCommentStyle{\color{red}// class}$ |
---|
596 | Cycle * p; |
---|
597 | void main() { p->cycle(); } |
---|
598 | void cycle() { resume(); } `};` |
---|
599 | Cycle c1, c2; c1.p=&c2; c2.p=&c1; c1.cycle(); |
---|
600 | \end{uC++} |
---|
601 | & |
---|
602 | \begin{cfa} |
---|
603 | void * `rtn`( void * arg ) { ... } |
---|
604 | int i = 3, rc; |
---|
605 | pthread_t t; $\C{// thread id}$ |
---|
606 | $\LstCommentStyle{\color{red}// function pointer}$ |
---|
607 | rc=pthread_create(&t, `rtn`, (void *)i); |
---|
608 | \end{cfa} |
---|
609 | \end{tabular} |
---|
610 | \end{center} |
---|
611 | \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of functions and classes, giving an object-oriented flavor. |
---|
612 | Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type via the type's name. |
---|
613 | The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed for flexibility reasons. |
---|
614 | Each custom type is discussed in detail in the following sections. |
---|
615 | |
---|
616 | |
---|
617 | \subsection{Generator} |
---|
618 | |
---|
619 | Stackless generators (Table~\ref{t:ExecutionPropertyComposition} case 3) have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution. |
---|
620 | The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity. |
---|
621 | A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.\footnote{ |
---|
622 | The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|?++|, \lstinline|++?|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|. |
---|
623 | Operator \lstinline+|+ is overloaded for printing, like bit-shift \lstinline|<<| in \CC. |
---|
624 | The \CFA \lstinline|with| clause opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics; |
---|
625 | multiple aggregates may be opened. |
---|
626 | \CFA has rebindable references \lstinline|int i, & ip = i, j; `&ip = &j;`| and nonrebindable references \lstinline|int i, & `const` ip = i, j; `&ip = &j;` // disallowed|. |
---|
627 | }% |
---|
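To make this syntax concrete before the figures, here is a small illustrative sketch (ours) exercising a constructor, an infix operator, the pipe output operator, and a @with@ clause; it assumes the standard \CFA I/O header.
\begin{cfa}
#include <fstream.hfa>							// sout and output operator |
struct Pt { int x, y; };
void ?{}( Pt & p, int x, int y ) { p.x = x; p.y = y; }	// constructor ?{}
void ^?{}( Pt & p ) {}							// destructor ^?{}
Pt ?+?( Pt a, Pt b ) { return (Pt){ a.x + b.x, a.y + b.y }; }	// infix operator ?+?
int main() {
	Pt p = { 1, 2 }, q = { 3, 4 };				// calls the ?{} constructor
	with ( p ) { sout | x | y; }				// fields x and y directly accessible
	sout | (p + q).x | (p + q).y;				// ?+? and | for printing
}
\end{cfa}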
628 | |
---|
629 | \begin{figure} |
---|
630 | \centering |
---|
631 | \begin{lrbox}{\myboxA} |
---|
632 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
633 | typedef struct { |
---|
634 | int fn1, fn; |
---|
635 | } Fib; |
---|
636 | #define FibCtor { 1, 0 } |
---|
637 | int fib( Fib * f ) { |
---|
638 | |
---|
639 | |
---|
640 | |
---|
641 | |
---|
642 | |
---|
643 | int fn = f->fn; f->fn = f->fn1; |
---|
644 | f->fn1 = f->fn + fn; |
---|
645 | return fn; |
---|
646 | } |
---|
647 | int main() { |
---|
648 | Fib f1 = FibCtor, f2 = FibCtor; |
---|
649 | for ( int i = 0; i < 10; i += 1 ) |
---|
650 | printf( "%d %d\n", |
---|
651 | fib( &f1 ), fib( &f2 ) ); |
---|
652 | } |
---|
653 | \end{cfa} |
---|
654 | \end{lrbox} |
---|
655 | |
---|
656 | \begin{lrbox}{\myboxB} |
---|
657 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
658 | `generator` Fib { |
---|
659 | int fn1, fn; |
---|
660 | }; |
---|
661 | |
---|
662 | void `main(Fib & fib)` with(fib) { |
---|
663 | |
---|
664 | |
---|
665 | [fn1, fn] = [1, 0]; |
---|
666 | for () { |
---|
667 | `suspend;` |
---|
668 | [fn1, fn] = [fn, fn + fn1]; |
---|
669 | |
---|
670 | } |
---|
671 | } |
---|
672 | int main() { |
---|
673 | Fib f1, f2; |
---|
674 | for ( 10 ) |
---|
675 | sout | `resume( f1 )`.fn |
---|
676 | | `resume( f2 )`.fn; |
---|
677 | } |
---|
678 | \end{cfa} |
---|
679 | \end{lrbox} |
---|
680 | |
---|
681 | \begin{lrbox}{\myboxC} |
---|
682 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
683 | typedef struct { |
---|
684 | int `restart`, fn1, fn; |
---|
685 | } Fib; |
---|
686 | #define FibCtor { `0`, 1, 0 } |
---|
687 | Fib * comain( Fib * f ) { |
---|
688 | `static void * states[] = {&&s0, &&s1};` |
---|
689 | `goto *states[f->restart];` |
---|
690 | s0: f->`restart` = 1; |
---|
691 | for ( ;; ) { |
---|
692 | return f; |
---|
693 | s1:; int fn = f->fn + f->fn1; |
---|
694 | f->fn1 = f->fn; f->fn = fn; |
---|
695 | } |
---|
696 | } |
---|
697 | int main() { |
---|
698 | Fib f1 = FibCtor, f2 = FibCtor; |
---|
699 | for ( int i = 0; i < 10; i += 1 ) |
---|
700 | printf("%d %d\n",comain(&f1)->fn, |
---|
701 | comain(&f2)->fn); |
---|
702 | } |
---|
703 | \end{cfa} |
---|
704 | \end{lrbox} |
---|
705 | |
---|
706 | \subfloat[C]{\label{f:CFibonacci}\usebox\myboxA} |
---|
707 | \hspace{3pt} |
---|
708 | \vrule |
---|
709 | \hspace{3pt} |
---|
710 | \subfloat[\CFA]{\label{f:CFAFibonacciGen}\usebox\myboxB} |
---|
711 | \hspace{3pt} |
---|
712 | \vrule |
---|
713 | \hspace{3pt} |
---|
714 | \subfloat[C generated code for \CFA version]{\label{f:CFibonacciSim}\usebox\myboxC} |
---|
715 | \caption{Fibonacci output asymmetric generator} |
---|
716 | \label{f:FibonacciAsymmetricGenerator} |
---|
717 | |
---|
718 | \bigskip |
---|
719 | |
---|
720 | \begin{lrbox}{\myboxA} |
---|
721 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
722 | `generator Fmt` { |
---|
723 | char ch; |
---|
724 | int g, b; |
---|
725 | }; |
---|
726 | void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor |
---|
727 | void ^?{}( Fmt & f ) with(f) { $\C[2.25in]{// destructor}$ |
---|
728 | if ( g != 0 || b != 0 ) sout | nl; } |
---|
729 | void `main( Fmt & f )` with(f) { |
---|
730 | for () { $\C{// until destructor call}$ |
---|
731 | for ( ; g < 5; g += 1 ) { $\C{// groups}$ |
---|
732 | for ( ; b < 4; b += 1 ) { $\C{// blocks}$ |
---|
733 | do { `suspend;` $\C{// wait for character}$ |
---|
734 | while ( ch == '\n' ); // ignore newline |
---|
735 | sout | ch; $\C{// print character}$ |
---|
736 | } sout | " "; $\C{// block separator}$ |
---|
737 | } sout | nl; $\C{// group separator}$ |
---|
738 | } |
---|
739 | } |
---|
740 | int main() { |
---|
741 | Fmt fmt; $\C{// fmt constructor called}$ |
---|
742 | for () { |
---|
743 | sin | fmt.ch; $\C{// read into generator}$ |
---|
744 | if ( eof( sin ) ) break; |
---|
745 | `resume( fmt );` |
---|
746 | } |
---|
747 | |
---|
748 | } $\C{// fmt destructor called}\CRT$ |
---|
749 | \end{cfa} |
---|
750 | \end{lrbox} |
---|
751 | |
---|
752 | \begin{lrbox}{\myboxB} |
---|
753 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
754 | typedef struct { |
---|
755 | int `restart`, g, b; |
---|
756 | char ch; |
---|
757 | } Fmt; |
---|
758 | void comain( Fmt * f ) { |
---|
759 | `static void * states[] = {&&s0, &&s1};` |
---|
760 | `goto *states[f->restart];` |
---|
761 | s0: f->`restart` = 1; |
---|
762 | for ( ;; ) { |
---|
763 | for ( f->g = 0; f->g < 5; f->g += 1 ) { |
---|
764 | for ( f->b = 0; f->b < 4; f->b += 1 ) { |
---|
765 | do { return; s1: ; |
---|
766 | } while ( f->ch == '\n' ); |
---|
767 | printf( "%c", f->ch ); |
---|
768 | } printf( " " ); |
---|
769 | } printf( "\n" ); |
---|
770 | } |
---|
771 | } |
---|
772 | int main() { |
---|
773 | Fmt fmt = { `0` }; comain( &fmt ); // prime |
---|
774 | for ( ;; ) { |
---|
775 | scanf( "%c", &fmt.ch ); |
---|
776 | if ( feof( stdin ) ) break; |
---|
777 | comain( &fmt ); |
---|
778 | } |
---|
779 | if ( fmt.g != 0 || fmt.b != 0 ) printf( "\n" ); |
---|
780 | } |
---|
781 | \end{cfa} |
---|
782 | \end{lrbox} |
---|
783 | |
---|
784 | \subfloat[\CFA]{\label{f:CFAFormatGen}\usebox\myboxA} |
---|
785 | \hspace{35pt} |
---|
786 | \vrule |
---|
787 | \hspace{3pt} |
---|
788 | \subfloat[C generated code for \CFA version]{\label{f:CFormatGenImpl}\usebox\myboxB} |
---|
789 | \hspace{3pt} |
---|
790 | \caption{Formatter input asymmetric generator} |
---|
791 | \label{f:FormatterAsymmetricGenerator} |
---|
792 | \end{figure} |
---|
793 | |
---|
Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers, written left to right in C and \CFA, along with the underlying C implementation generated for the \CFA version.
---|
795 | This generator is an \emph{output generator}, producing a new result on each resumption. |
---|
796 | To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle. |
---|
797 | An additional requirement is the ability to create an arbitrary number of generators of any kind, \ie retaining one state in global variables is insufficient; |
---|
798 | hence, state is retained in a closure between calls. |
---|
799 | Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators. |
---|
800 | The C version only has the middle execution state because the top execution state is declaration initialization. |
---|
801 | Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type. |
---|
802 | Each generator type must have a function named \lstinline|main|, |
---|
803 | % \footnote{ |
---|
804 | % The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution. |
---|
805 | % Leveraging starting semantics to this name for generator/coroutine/thread is a logical extension.} |
---|
806 | called a \emph{generator main} (leveraging the starting semantics for program @main@ in C), which is connected to the generator type via its single reference parameter. |
---|
The generator main contains @suspend@ statements that, unlike @return@, suspend execution without ending the generator.
---|
808 | For the Fibonacci generator-main, the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@. |
---|
809 | Any local variables in @main@ \emph{are not retained} between calls; |
---|
810 | hence local variables are only for temporary computations \emph{between} suspends. |
---|
811 | All retained state \emph{must} appear in the generator's type. |
---|
As well, generator code containing a @suspend@ cannot be refactored into a helper function called by the generator, because @suspend@ is implemented via @return@, so a return from the helper function goes back to the current generator, not the resumer.
---|
813 | The generator is started by calling function @resume@ with a generator instance, which begins execution at the top of the generator main, and subsequent @resume@ calls restart the generator at its point of last suspension. |
---|
814 | Resuming an ended (returned) generator is undefined. |
---|
815 | Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance. |
---|
816 | Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA asymmetric generator. |
---|
817 | Only one execution-state field, @restart@, is needed to subscript the suspension points in the generator. |
---|
At the start of the generator main, the @static@ declaration, @states@, is initialized to the addresses of the restart points in the generator, where prefix operator @&&@ takes the address of a label~\cite{gccValueLabels}.
---|
819 | Next, the computed @goto@ selects the last suspend point and branches to it. |
---|
Setting @restart@ and branching via the computed @goto@ add very little cost to the suspend and resume calls.
---|
821 | |
---|
An advantage of the \CFA explicit generator type is that it allows multiple type-safe interface functions taking and returning arbitrary types.
---|
823 | \begin{cfa} |
---|
824 | int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$ |
---|
825 | int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// add parameter to skip N values}$ |
---|
826 | double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}$ |
---|
827 | Fib f; int i; double d; |
---|
828 | i = f(); i = f( 2 ); d = f(); $\C{// alternative interfaces}\CRT$ |
---|
829 | \end{cfa} |
---|
Now, the generator can be a separately compiled opaque type, accessed only through its interface functions.
---|
For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type and hence has only a single interface, but provides an implicit closure.
---|
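To illustrate the separate-compilation point, the following is a minimal sketch of the corresponding opaque-type pattern in plain C terms, where the file names and the functions @new_fib@, @next_fib@, and @delete_fib@ are hypothetical;
only the interface is visible to clients, while the generator layout stays in the implementation file.
\begin{cfa}
// fib.h -- interface only, generator layout hidden from clients
typedef struct Fib Fib;             // opaque type
Fib * new_fib( void );              // create generator
int next_fib( Fib * f );            // resume generator and return next value
void delete_fib( Fib * f );         // destroy generator

// client.c -- separately compiled, uses only the interface
#include <stdio.h>
#include "fib.h"
int main() {
	Fib * f = new_fib();
	for ( int i = 0; i < 10; i += 1 ) printf( "%d\n", next_fib( f ) );
	delete_fib( f );
}
\end{cfa}
In \CFA, the overloaded interface functions shown above play this role, with the additional benefit of static type-safety across multiple interfaces.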
832 | |
---|
833 | \begin{figure} |
---|
834 | %\centering |
---|
835 | \newbox\myboxA |
---|
836 | \begin{lrbox}{\myboxA} |
---|
837 | \begin{python}[aboveskip=0pt,belowskip=0pt] |
---|
838 | def Fib(): |
---|
839 | fn1, fn = 0, 1 |
---|
840 | while True: |
---|
841 | `yield fn1` |
---|
842 | fn1, fn = fn, fn1 + fn |
---|
843 | f1 = Fib() |
---|
844 | f2 = Fib() |
---|
845 | for i in range( 10 ): |
---|
846 | print( next( f1 ), next( f2 ) ) |
---|
847 | |
---|
848 | |
---|
849 | |
---|
850 | |
---|
851 | |
---|
852 | |
---|
853 | |
---|
854 | |
---|
855 | |
---|
856 | |
---|
857 | \end{python} |
---|
858 | \end{lrbox} |
---|
859 | |
---|
860 | \newbox\myboxB |
---|
861 | \begin{lrbox}{\myboxB} |
---|
862 | \begin{python}[aboveskip=0pt,belowskip=0pt] |
---|
863 | def Fmt(): |
---|
864 | try: |
---|
865 | while True: $\C[2.5in]{\# until destructor call}$ |
---|
866 | for g in range( 5 ): $\C{\# groups}$ |
---|
867 | for b in range( 4 ): $\C{\# blocks}$ |
---|
868 | while True: |
---|
869 | ch = (yield) $\C{\# receive from send}$ |
---|
870 | if '\n' not in ch: $\C{\# ignore newline}$ |
---|
871 | break |
---|
872 | print( ch, end='' ) $\C{\# print character}$ |
---|
873 | print( ' ', end='' ) $\C{\# block separator}$ |
---|
874 | print() $\C{\# group separator}$ |
---|
875 | except GeneratorExit: $\C{\# destructor}$ |
---|
if g != 0 or b != 0: $\C{\# special case}$
---|
877 | print() |
---|
878 | fmt = Fmt() |
---|
879 | `next( fmt )` $\C{\# prime, next prewritten}$ |
---|
880 | for i in range( 41 ): |
---|
`fmt.send( 'a' )` $\C{\# send to yield}$
---|
882 | \end{python} |
---|
883 | \end{lrbox} |
---|
884 | |
---|
885 | \hspace{30pt} |
---|
886 | \subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA} |
---|
887 | \hspace{3pt} |
---|
888 | \vrule |
---|
889 | \hspace{3pt} |
---|
890 | \subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB} |
---|
891 | \caption{Python generator} |
---|
892 | \label{f:PythonGenerator} |
---|
893 | \end{figure} |
---|
894 | |
---|
895 | Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden (removed by the coroutine in Section~\ref{s:Coroutine}). |
---|
896 | This manual requirement follows from the generality of allowing variable-size local-state, \eg local state with a variable-length array requires dynamic allocation as the array size is unknown at compile time. |
---|
897 | However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming. |
---|
898 | But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated. |
---|
899 | With respect to safety, we believe static analysis can discriminate persistent generator state from temporary generator-main state and raise a compile-time error for temporary usage spanning suspend points. |
---|
900 | Our experience using generators is that the problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small. |
---|
901 | As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small and fast generators. |
---|
902 | |
---|
903 | Figure~\ref{f:CFAFormatGen} shows an asymmetric \newterm{input generator}, @Fmt@, for restructuring text into groups of characters of fixed-size blocks, \ie the input on the left is reformatted into the output on the right, where newlines are ignored. |
---|
904 | \begin{center} |
---|
905 | \tt |
---|
906 | \begin{tabular}{@{}l|l@{}} |
---|
907 | \multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\ |
---|
908 | \begin{tabular}[t]{@{}ll@{}} |
---|
909 | abcdefghijklmnopqrstuvwxyz \\ |
---|
910 | abcdefghijklmnopqrstuvwxyz |
---|
911 | \end{tabular} |
---|
912 | & |
---|
913 | \begin{tabular}[t]{@{}lllll@{}} |
---|
914 | abcd & efgh & ijkl & mnop & qrst \\ |
---|
915 | uvwx & yzab & cdef & ghij & klmn \\ |
---|
916 | opqr & stuv & wxyz & & |
---|
917 | \end{tabular} |
---|
918 | \end{tabular} |
---|
919 | \end{center} |
---|
920 | The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops. |
---|
The destructor provides a final newline if the last output line is incomplete.
---|
922 | Figure~\ref{f:CFormatGenImpl} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@. |
---|
923 | For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the \CFA format generator. |
---|
924 | |
---|
925 | % https://dl-acm-org.proxy.lib.uwaterloo.ca/ |
---|
926 | |
---|
An important application for the asymmetric generator is a device driver, because device drivers are a significant source of operating-system errors: 85\% in Windows XP~\cite[p.~78]{Swift05} and 51.6\% in Linux~\cite[p.~1358]{Xiao19}. %\cite{Palix11}
---|
Swift \etal~\cite[p.~86]{Swift05} restructure device drivers using the Extension Procedure Call (XPC) within the kernel via functions @nooks_driver_call@ and @nooks_kernel_call@, which have coroutine properties, context switching to separate stacks with explicit hand-off calls;
---|
929 | however, the calls do not retain execution state, and hence always start from the top. |
---|
An alternative approach for implementing device drivers is stack ripping.
---|
931 | However, Adya \etal~\cite{Adya02} argue against stack ripping in Section 3.2 and suggest a hybrid approach in Section 4 using cooperatively scheduled \emph{fibers}, which is coroutining. |
---|
932 | |
---|
933 | Figure~\ref{f:DeviceDriverGen} shows the generator advantages in implementing a simple network device-driver with the following protocol: |
---|
934 | \begin{center} |
---|
935 | \ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots |
---|
936 | \end{center} |
---|
937 | where the network message begins with the control character STX, ends with an ETX, and is followed by a two-byte cyclic-redundancy check. |
---|
938 | Control characters may appear in a message if preceded by an ESC. |
---|
939 | When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register. |
---|
940 | The device driver returns a status code of its current state, and when a complete message is obtained, the operating system reads the message accumulated in the supplied buffer. |
---|
941 | Hence, the device driver is an input/output generator, where the cost of resuming the device-driver generator is the same as call and return, so performance in an operating-system kernel is excellent. |
---|
942 | The key benefits of using a generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than table lookup or stack ripping. |
---|
943 | % The conclusion is that FSMs are complex and occur in important domains, so direct generator support is important in a system programming language. |
---|
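As an illustration only, the following sketch shows how an interrupt handler might drive the generator in Figure~\ref{f:DeviceDriverGen};
the names @read_reg@, @os_deliver@, @os_log_error@, and @byte_interrupt@ are hypothetical operating-system hooks, not part of \CFA.
\begin{cfa}
char msg[65];                       // message buffer: MaxMsg bytes plus terminator
Driver driver = { msg };            // construct generator, binding the buffer
void byte_interrupt() {             // hypothetical handler, called once per arriving byte
	Status s = next( driver, read_reg() );    // resume generator with the new byte
	if ( s == MSG ) os_deliver( msg );        // complete message accumulated
	else if ( s != CONT ) os_log_error( s );  // ESTX, ELNTH, or ECRC
	// otherwise CONT: wait for more bytes
}
\end{cfa}
Each interrupt then costs only the generator resume, \ie a function call and return plus the computed @goto@.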
944 | |
---|
945 | \begin{figure} |
---|
946 | \centering |
---|
947 | \begin{tabular}{@{}l|l@{}} |
---|
948 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
949 | enum Status { CONT, MSG, ESTX, |
---|
950 | ELNTH, ECRC }; |
---|
951 | `generator` Driver { |
---|
952 | Status status; |
---|
953 | char byte, * msg; // communication |
---|
954 | int lnth, sum; // local state |
---|
955 | short int crc; |
---|
956 | }; |
---|
957 | void ?{}( Driver & d, char * m ) { d.msg = m; } |
---|
958 | Status next( Driver & d, char b ) with( d ) { |
---|
959 | byte = b; `resume( d );` return status; |
---|
960 | } |
---|
961 | void main( Driver & d ) with( d ) { |
---|
962 | enum { STX = '\002', ESC = '\033', |
---|
963 | ETX = '\003', MaxMsg = 64 }; |
---|
964 | msg: for () { // parse message |
---|
965 | status = CONT; |
---|
966 | lnth = 0; sum = 0; |
---|
967 | while ( byte != STX ) `suspend;` |
---|
968 | emsg: for () { |
---|
969 | `suspend;` // process byte |
---|
970 | \end{cfa} |
---|
971 | & |
---|
972 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
973 | choose ( byte ) { // switch with implicit break |
---|
974 | case STX: |
---|
975 | status = ESTX; `suspend;` continue msg; |
---|
976 | case ETX: |
---|
977 | break emsg; |
---|
978 | case ESC: |
---|
979 | `suspend;` |
---|
980 | } |
---|
981 | if ( lnth >= MaxMsg ) { // buffer full ? |
---|
982 | status = ELNTH; `suspend;` continue msg; } |
---|
983 | msg[lnth++] = byte; |
---|
984 | sum += byte; |
---|
985 | } |
---|
986 | msg[lnth] = '\0'; // terminate string |
---|
987 | `suspend;` |
---|
988 | crc = byte << 8; |
---|
989 | `suspend;` |
---|
990 | status = (crc | byte) == sum ? MSG : ECRC; |
---|
991 | `suspend;` |
---|
992 | } |
---|
993 | } |
---|
994 | \end{cfa} |
---|
995 | \end{tabular} |
---|
996 | \caption{Device-driver generator for communication protocol} |
---|
997 | \label{f:DeviceDriverGen} |
---|
998 | \end{figure} |
---|
999 | |
---|
1000 | Generators can also have symmetric activation using resume/resume to create control-flow cycles among generators. |
---|
1001 | (The trivial cycle is a generator resuming itself.) |
---|
1002 | This control flow is similar to recursion for functions but without stack growth. |
---|
Figure~\ref{f:PingPongFullCoroutineSteps} shows the steps for symmetric control-flow using the ping/pong program in Figure~\ref{f:CFAPingPongGen}.
---|
1004 | The program starts by creating the generators, @ping@ and @pong@, and then assigns the partners that form the cycle. |
---|
1005 | Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope. |
---|
1006 | (This issue occurs for any cyclic data structure.) |
---|
1007 | % (Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.) |
---|
1008 | Once the cycle is formed, the program main resumes one of the generators, @ping@, and the generators can then traverse an arbitrary number of cycles using @resume@ to activate partner generator(s). |
---|
1009 | Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example). |
---|
1010 | Note, the creator and starter may be different, \eg if the creator calls another function that starts the cycle. |
---|
1011 | The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack. |
---|
1012 | Also, since local variables are not retained in the generator function, there are no objects with destructors to be called, so the cost is the same as a function return. |
---|
1013 | Destructor cost occurs when the generator instance is deallocated by the creator. |
---|
1014 | |
---|
1015 | \begin{figure} |
---|
1016 | \centering |
---|
1017 | \input{FullCoroutinePhases.pstex_t} |
---|
1018 | \vspace*{-10pt} |
---|
1019 | \caption{Symmetric coroutine steps: Ping / Pong} |
---|
1020 | \label{f:PingPongFullCoroutineSteps} |
---|
1021 | \end{figure} |
---|
1022 | |
---|
1023 | \begin{figure} |
---|
1024 | \centering |
---|
1025 | \begin{lrbox}{\myboxA} |
---|
1026 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1027 | `generator PingPong` { |
---|
1028 | int N, i; // local state |
---|
1029 | const char * name; |
---|
1030 | PingPong & partner; // rebindable reference |
---|
1031 | }; |
---|
1032 | |
---|
1033 | void `main( PingPong & pp )` with(pp) { |
---|
1034 | |
---|
1035 | |
---|
1036 | for ( ; i < N; i += 1 ) { |
---|
1037 | sout | name | i; |
---|
1038 | `resume( partner );` |
---|
1039 | } |
---|
1040 | } |
---|
1041 | int main() { |
---|
1042 | enum { N = 5 }; |
---|
1043 | PingPong ping = {"ping",N,0}, pong = {"pong",N,0}; |
---|
1044 | &ping.partner = &pong; &pong.partner = &ping; |
---|
1045 | `resume( ping );` |
---|
1046 | } |
---|
1047 | \end{cfa} |
---|
1048 | \end{lrbox} |
---|
1049 | |
---|
1050 | \begin{lrbox}{\myboxB} |
---|
1051 | \begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt] |
---|
1052 | typedef struct PingPong { |
---|
1053 | int restart, N, i; |
---|
1054 | const char * name; |
---|
1055 | struct PingPong * partner; |
---|
1056 | } PingPong; |
---|
1057 | #define PPCtor(name, N) {0, N, 0, name, NULL} |
---|
1058 | void comain( PingPong * pp ) { |
---|
1059 | static void * states[] = {&&s0, &&s1}; |
---|
1060 | goto *states[pp->restart]; |
---|
1061 | s0: pp->restart = 1; |
---|
1062 | for ( ; pp->i < pp->N; pp->i += 1 ) { |
---|
1063 | printf( "%s %d\n", pp->name, pp->i ); |
---|
1064 | asm( "mov %0,%%rdi" : "=m" (pp->partner) ); |
---|
1065 | asm( "mov %rdi,%rax" ); |
---|
1066 | asm( "add $16, %rsp" ); |
---|
1067 | asm( "popq %rbp" ); |
---|
1068 | asm( "jmp comain" ); |
---|
1069 | s1: ; |
---|
1070 | } |
---|
1071 | } |
---|
1072 | \end{cfa} |
---|
1073 | \end{lrbox} |
---|
1074 | |
---|
1075 | \subfloat[\CFA symmetric generator]{\label{f:CFAPingPongGen}\usebox\myboxA} |
---|
1076 | \hspace{3pt} |
---|
1077 | \vrule |
---|
1078 | \hspace{3pt} |
---|
1079 | \subfloat[C generator simulation]{\label{f:CPingPongSim}\usebox\myboxB} |
---|
1080 | \hspace{3pt} |
---|
1081 | \caption{Ping-Pong symmetric generator} |
---|
1082 | \label{f:PingPongSymmetricGenerator} |
---|
1083 | \end{figure} |
---|
1084 | |
---|
1085 | Figure~\ref{f:CPingPongSim} shows the C implementation of the \CFA symmetric generator, where there is still only one additional field, @restart@, but @resume@ is more complex because it does a forward rather than backward jump. |
---|
1086 | Before the jump, the parameter for the next call @partner@ is placed into the register used for the first parameter, @rdi@, and the remaining registers are reset for a return. |
---|
The @jmp comain@ restarts the function but with a different parameter, so the new call's behavior depends on the state of the generator type, \ie branch to the restart location with different data state.
---|
While the semantics of the forward call is a tail call, which compilers optimize, the generator state is different on each call rather than the common state of a tail-recursive function (\ie the parameter to the function never changes during the forward calls).
---|
However, this assembler code depends on the generated entry code, specifically whether there are local variables and the level of optimization.
---|
1090 | Hence, internal compiler support is necessary for any forward call or backwards return, \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it eventually fork @clang@. |
---|
For this reason, \CFA does not support general symmetric generators at this time, but it is possible to hand-generate a symmetric generator, as in Figure~\ref{f:CPingPongSim}, for proof of concept and performance testing.
---|
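A portable alternative, sketched below for illustration, is a trampoline: each activation returns the next generator to run, and a driver loop in the program main performs the forward call, so stack depth stays constant at the cost of an extra return and call per activation.
The sketch reuses the gcc label-value mechanism from Figure~\ref{f:CPingPongSim} but is not how \CFA implements generators.
\begin{cfa}
#include <stdio.h>
typedef struct PingPong {
	int restart, N, i;
	const char * name;
	struct PingPong * partner;
} PingPong;
#define PPCtor( name, N ) { 0, N, 0, name, NULL }
PingPong * comain( PingPong * pp ) {        // return next generator to run, NULL when finished
	static void * states[] = { &&s0, &&s1 };
	goto *states[pp->restart];
  s0: pp->restart = 1;
	for ( ; pp->i < pp->N; pp->i += 1 ) {
		printf( "%s %d\n", pp->name, pp->i );
		return pp->partner;                 // "resume( partner )" deferred to the trampoline
	  s1: ;
	}
	return NULL;                            // generator finished, cycle terminates
}
int main() {
	PingPong ping = PPCtor( "ping", 5 ), pong = PPCtor( "pong", 5 );
	ping.partner = &pong;  pong.partner = &ping;
	for ( PingPong * next = &ping; next != NULL; )  // trampoline loop in the starter's frame
		next = comain( next );
}
\end{cfa}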
1092 | |
---|
1093 | Finally, part of this generator work was inspired by the recent \CCtwenty coroutine proposal~\cite{C++20Coroutine19}, which uses the general term coroutine to mean generator. |
---|
1094 | Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators. |
---|
1095 | An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax: |
---|
1096 | \begin{cfa} |
---|
1097 | ... suspend`{ ... }`; |
---|
1098 | ... resume( C )`{ ... }` ... |
---|
1099 | \end{cfa} |
---|
1100 | Since the current generator's stack is released before calling the compound statement, the compound statement can only reference variables in the generator's type. |
---|
1101 | This feature is useful when a generator is used in a concurrent context to ensure it is stopped before releasing a lock in the compound statement, which might immediately allow another thread to resume the generator. |
---|
1102 | Hence, this mechanism provides a general and safe handoff of the generator among competing threads. |
---|
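For example, a minimal sketch of this handoff idiom, written in the same hypothetical \CFA syntax and assuming a lock type @lock_t@ with @acquire@/@release@ functions (none of which exist in the current \CFA runtime), is:
\begin{cfa}
generator Gen {
	lock_t gen_lock;     // serializes resumes of this instance (hypothetical lock type)
	int value;           // communication variable
};
void main( Gen & g ) with( g ) {
	for () {
		value += 1;                        // work, done while the resuming thread holds gen_lock
		suspend{ release( gen_lock ); };   // generator fully stopped before the lock is released
	}
}
// a resuming thread: acquire( g.gen_lock ); resume( g );  // lock released inside the suspend clause
\end{cfa}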
1103 | |
---|
1104 | |
---|
1105 | \subsection{Coroutine} |
---|
1106 | \label{s:Coroutine} |
---|
1107 | |
---|
Stackful coroutines (Table~\ref{t:ExecutionPropertyComposition} case 5) extend generator semantics with an implicit closure, and, because of the separate stack, @suspend@ may appear in a helper function called from the coroutine main.
---|
Note, simulating coroutines with stacks of generators, \eg Python with @yield from@, cannot handle symmetric control-flow.
---|
Furthermore, all components on the stack must be generators, so it is impossible to call a library function passing a generator that yields.
---|
Creating a generator copy of the library function may be impossible because the library function is opaque.
---|
1112 | |
---|
1113 | A \CFA coroutine is specified by replacing @generator@ with @coroutine@ for the type. |
---|
Coroutine generality results in higher cost for creation, due to dynamic stack allocation; for execution, due to context switching among stacks; and for termination, due to possible stack unwinding and dynamic stack deallocation.
---|
1115 | A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators. |
---|
1116 | |
---|
1117 | First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main. |
---|
1118 | Now the coroutine type only contains communication variables between interface functions and the coroutine main. |
---|
1119 | \begin{center} |
---|
1120 | \begin{tabular}{@{}l|l|l|l@{}} |
---|
1121 | \multicolumn{1}{c|}{Fibonacci} & \multicolumn{1}{c|}{Formatter} & \multicolumn{1}{c|}{Device Driver} & \multicolumn{1}{c}{PingPong} \\ |
---|
1122 | \hline |
---|
1123 | \begin{cfa}[xleftmargin=0pt] |
---|
1124 | void main( Fib & fib ) ... |
---|
1125 | `int fn1;` |
---|
1126 | |
---|
1127 | |
---|
1128 | \end{cfa} |
---|
1129 | & |
---|
1130 | \begin{cfa}[xleftmargin=0pt] |
---|
1131 | for ( `g`; 5 ) { |
---|
1132 | for ( `b`; 4 ) { |
---|
1133 | |
---|
1134 | |
---|
1135 | \end{cfa} |
---|
1136 | & |
---|
1137 | \begin{cfa}[xleftmargin=0pt] |
---|
1138 | status = CONT; |
---|
1139 | `int lnth = 0, sum = 0;` |
---|
1140 | ... |
---|
1141 | `short int crc = byte << 8;` |
---|
1142 | \end{cfa} |
---|
1143 | & |
---|
1144 | \begin{cfa}[xleftmargin=0pt] |
---|
1145 | void main( PingPong & pp ) ... |
---|
1146 | for ( `i`; N ) { |
---|
1147 | |
---|
1148 | |
---|
1149 | \end{cfa} |
---|
1150 | \end{tabular} |
---|
1151 | \end{center} |
---|
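For concreteness, a sketch of the converted Fibonacci coroutine follows, assembled from the generator in Figure~\ref{f:CFAFibonacciGen} and the fragments above;
the interface function @next@ is illustrative and simply wraps the @resume@.
\begin{cfa}
coroutine Fib { int fn; };                  // only the communication variable remains
void main( Fib & fib ) with( fib ) {
	int fn1;                                // local state, retained on the coroutine stack
	[fn1, fn] = [1, 0];
	for () {
		suspend;
		[fn1, fn] = [fn, fn + fn1];
	}
}
int next( Fib & fib ) with( fib ) { resume( fib ); return fn; }
int main() {
	Fib f1, f2;
	for ( 10 )
		sout | next( f1 ) | next( f2 );
}
\end{cfa}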
1152 | It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver. |
---|
1153 | \begin{cfa} |
---|
1154 | int Crc() { |
---|
1155 | `suspend;` short int crc = byte << 8; |
---|
1156 | `suspend;` status = (crc | byte) == sum ? MSG : ECRC; |
---|
1157 | return crc; |
---|
1158 | } |
---|
1159 | \end{cfa} |
---|
1160 | A call to this function is placed at the end of the device driver's coroutine-main. |
---|
1161 | For complex FSMs, refactoring is part of normal program abstraction, especially when code is used in multiple places. |
---|
1162 | Again, this complexity is usually associated with execution state rather than data state. |
---|
1163 | |
---|
1164 | \begin{comment} |
---|
1165 | Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @restart@. |
---|
1166 | Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the overloaded coroutine main. |
---|
1167 | The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@. |
---|
1168 | The interface function @restart@, takes a Fibonacci instance and context switches to it using @resume@; |
---|
1169 | on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned. |
---|
1170 | The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack; |
---|
1171 | when the coroutine main returns, its stack is deallocated. |
---|
1172 | Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes. |
---|
1173 | Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}. |
---|
1174 | Coroutine generators are called \newterm{output coroutines} because values are only returned. |
---|
1175 | |
---|
1176 | \begin{figure} |
---|
1177 | \centering |
---|
1178 | \newbox\myboxA |
---|
1179 | % \begin{lrbox}{\myboxA} |
---|
1180 | % \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1181 | % `int fn1, fn2, state = 1;` // single global variables |
---|
1182 | % int fib() { |
---|
1183 | % int fn; |
---|
1184 | % `switch ( state )` { // explicit execution state |
---|
1185 | % case 1: fn = 0; fn1 = fn; state = 2; break; |
---|
1186 | % case 2: fn = 1; fn2 = fn1; fn1 = fn; state = 3; break; |
---|
1187 | % case 3: fn = fn1 + fn2; fn2 = fn1; fn1 = fn; break; |
---|
1188 | % } |
---|
1189 | % return fn; |
---|
1190 | % } |
---|
1191 | % int main() { |
---|
1192 | % |
---|
1193 | % for ( int i = 0; i < 10; i += 1 ) { |
---|
1194 | % printf( "%d\n", fib() ); |
---|
1195 | % } |
---|
1196 | % } |
---|
1197 | % \end{cfa} |
---|
1198 | % \end{lrbox} |
---|
1199 | \begin{lrbox}{\myboxA} |
---|
1200 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1201 | #define FibCtor { 0, 1 } |
---|
1202 | typedef struct { int fn1, fn; } Fib; |
---|
1203 | int fib( Fib * f ) { |
---|
1204 | |
---|
1205 | int ret = f->fn1; |
---|
1206 | f->fn1 = f->fn; |
---|
1207 | f->fn = ret + f->fn; |
---|
1208 | return ret; |
---|
1209 | } |
---|
1210 | |
---|
1211 | |
---|
1212 | |
---|
1213 | int main() { |
---|
1214 | Fib f1 = FibCtor, f2 = FibCtor; |
---|
1215 | for ( int i = 0; i < 10; i += 1 ) { |
---|
1216 | printf( "%d %d\n", |
---|
1217 | fib( &f1 ), fib( &f2 ) ); |
---|
1218 | } |
---|
1219 | } |
---|
1220 | \end{cfa} |
---|
1221 | \end{lrbox} |
---|
1222 | |
---|
1223 | \newbox\myboxB |
---|
1224 | \begin{lrbox}{\myboxB} |
---|
1225 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1226 | `coroutine` Fib { int fn1; }; |
---|
1227 | void main( Fib & fib ) with( fib ) { |
---|
1228 | int fn; |
---|
1229 | [fn1, fn] = [0, 1]; |
---|
1230 | for () { |
---|
1231 | `suspend;` |
---|
1232 | [fn1, fn] = [fn, fn1 + fn]; |
---|
1233 | } |
---|
1234 | } |
---|
1235 | int ?()( Fib & fib ) with( fib ) { |
---|
1236 | return `resume( fib )`.fn1; |
---|
1237 | } |
---|
1238 | int main() { |
---|
1239 | Fib f1, f2; |
---|
1240 | for ( 10 ) { |
---|
1241 | sout | f1() | f2(); |
---|
1242 | } |
---|
1243 | |
---|
1244 | |
---|
1245 | \end{cfa} |
---|
1246 | \end{lrbox} |
---|
1247 | |
---|
1248 | \newbox\myboxC |
---|
1249 | \begin{lrbox}{\myboxC} |
---|
1250 | \begin{python}[aboveskip=0pt,belowskip=0pt] |
---|
1251 | |
---|
1252 | def Fib(): |
---|
1253 | |
---|
1254 | fn1, fn = 0, 1 |
---|
1255 | while True: |
---|
1256 | `yield fn1` |
---|
1257 | fn1, fn = fn, fn1 + fn |
---|
1258 | |
---|
1259 | |
---|
1260 | // next prewritten |
---|
1261 | |
---|
1262 | |
---|
1263 | f1 = Fib() |
---|
1264 | f2 = Fib() |
---|
1265 | for i in range( 10 ): |
---|
1266 | print( next( f1 ), next( f2 ) ) |
---|
1267 | |
---|
1268 | |
---|
1269 | |
---|
1270 | \end{python} |
---|
1271 | \end{lrbox} |
---|
1272 | |
---|
1273 | \subfloat[C]{\label{f:GlobalVariables}\usebox\myboxA} |
---|
1274 | \hspace{3pt} |
---|
1275 | \vrule |
---|
1276 | \hspace{3pt} |
---|
1277 | \subfloat[\CFA]{\label{f:ExternalState}\usebox\myboxB} |
---|
1278 | \hspace{3pt} |
---|
1279 | \vrule |
---|
1280 | \hspace{3pt} |
---|
1281 | \subfloat[Python]{\label{f:ExternalState}\usebox\myboxC} |
---|
1282 | \caption{Fibonacci generator} |
---|
1283 | \label{f:C-fibonacci} |
---|
1284 | \end{figure} |
---|
1285 | |
---|
1286 | \bigskip |
---|
1287 | |
---|
1288 | \newbox\myboxA |
---|
1289 | \begin{lrbox}{\myboxA} |
---|
1290 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1291 | `coroutine` Fib { int fn; }; |
---|
1292 | void main( Fib & fib ) with( fib ) { |
---|
1293 | fn = 0; int fn1 = fn; `suspend`; |
---|
1294 | fn = 1; int fn2 = fn1; fn1 = fn; `suspend`; |
---|
1295 | for () { |
---|
1296 | fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `suspend`; } |
---|
1297 | } |
---|
1298 | int next( Fib & fib ) with( fib ) { `resume( fib );` return fn; } |
---|
1299 | int main() { |
---|
1300 | Fib f1, f2; |
---|
1301 | for ( 10 ) |
---|
1302 | sout | next( f1 ) | next( f2 ); |
---|
1303 | } |
---|
1304 | \end{cfa} |
---|
1305 | \end{lrbox} |
---|
1306 | \newbox\myboxB |
---|
1307 | \begin{lrbox}{\myboxB} |
---|
1308 | \begin{python}[aboveskip=0pt,belowskip=0pt] |
---|
1309 | |
---|
1310 | def Fibonacci(): |
---|
1311 | fn = 0; fn1 = fn; `yield fn` # suspend |
---|
1312 | fn = 1; fn2 = fn1; fn1 = fn; `yield fn` |
---|
1313 | while True: |
---|
1314 | fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `yield fn` |
---|
1315 | |
---|
1316 | |
---|
1317 | f1 = Fibonacci() |
---|
1318 | f2 = Fibonacci() |
---|
1319 | for i in range( 10 ): |
---|
1320 | print( `next( f1 )`, `next( f2 )` ) # resume |
---|
1321 | |
---|
1322 | \end{python} |
---|
1323 | \end{lrbox} |
---|
1324 | \subfloat[\CFA]{\label{f:Coroutine3States}\usebox\myboxA} |
---|
1325 | \qquad |
---|
1326 | \subfloat[Python]{\label{f:Coroutine1State}\usebox\myboxB} |
---|
1327 | \caption{Fibonacci input coroutine, 3 states, internal variables} |
---|
1328 | \label{f:cfa-fibonacci} |
---|
1329 | \end{figure} |
---|
1330 | \end{comment} |
---|
1331 | |
---|
1332 | \begin{figure} |
---|
1333 | \centering |
---|
1334 | \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}} |
---|
1335 | \begin{cfa} |
---|
1336 | `coroutine` Prod { |
---|
1337 | Cons & c; $\C[1.5in]{// communication}$ |
---|
1338 | int N, money, receipt; |
---|
1339 | }; |
---|
1340 | void main( Prod & prod ) with( prod ) { |
---|
1341 | for ( i; N ) { $\C{// 1st resume}\CRT$ |
---|
1342 | int p1 = random( 100 ), p2 = random( 100 ); |
---|
1343 | int status = delivery( c, p1, p2 ); |
---|
1344 | receipt += 1; |
---|
1345 | } |
---|
1346 | stop( c ); |
---|
1347 | } |
---|
1348 | int payment( Prod & prod, int money ) { |
---|
1349 | prod.money = money; |
---|
1350 | `resume( prod );` |
---|
1351 | return prod.receipt; |
---|
1352 | } |
---|
1353 | void start( Prod & prod, int N, Cons &c ) { |
---|
1354 | &prod.c = &c; |
---|
1355 | prod.[N, receipt] = [N, 0]; |
---|
1356 | `resume( prod );` |
---|
1357 | } |
---|
1358 | int main() { |
---|
1359 | Prod prod; |
---|
1360 | Cons cons = { prod }; |
---|
1361 | start( prod, 5, cons ); |
---|
1362 | } |
---|
1363 | \end{cfa} |
---|
1364 | & |
---|
1365 | \begin{cfa} |
---|
1366 | `coroutine` Cons { |
---|
1367 | Prod & p; $\C[1.5in]{// communication}$ |
---|
1368 | int p1, p2, status; |
---|
1369 | bool done; |
---|
1370 | }; |
---|
1371 | void ?{}( Cons & cons, Prod & p ) { |
---|
1372 | &cons.p = &p; $\C{// reassignable reference}$ |
---|
1373 | cons.[status, done ] = [0, false]; |
---|
1374 | } |
---|
1375 | void main( Cons & cons ) with( cons ) { |
---|
1376 | int money = 1, receipt; $\C{// 1st resume}\CRT$ |
---|
1377 | for ( ; ! done; ) { |
---|
1378 | status += 1; |
---|
1379 | receipt = payment( p, money ); |
---|
1380 | money += 1; |
---|
1381 | } |
---|
1382 | } |
---|
1383 | int delivery( Cons & cons, int p1, int p2 ) { |
---|
1384 | cons.[p1, p2] = [p1, p2]; |
---|
1385 | `resume( cons );` |
---|
1386 | return cons.status; |
---|
1387 | } |
---|
1388 | void stop( Cons & cons ) { |
---|
1389 | cons.done = true; |
---|
1390 | `resume( cons );` |
---|
1391 | } |
---|
1392 | |
---|
1393 | \end{cfa} |
---|
1394 | \end{tabular} |
---|
1395 | \caption{Producer / consumer: resume-resume cycle, bidirectional communication} |
---|
1396 | \label{f:ProdCons} |
---|
1397 | \end{figure} |
---|
1398 | |
---|
1399 | Figure~\ref{f:ProdCons} shows the ping-pong example in Figure~\ref{f:CFAPingPongGen} extended into a producer/consumer symmetric-coroutine performing bidirectional communication. |
---|
1400 | This example is illustrative because both producer and consumer have two interface functions with @resume@s that suspend execution in these interface functions. |
---|
1401 | The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced. |
---|
1402 | The call to @start@ is the first @resume@ of @prod@, which remembers the program main as the starter and creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it. |
---|
@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer's @delivery@ function to transfer the values, and printing the status returned from the consumer.
---|
1404 | The producer's call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status. |
---|
1405 | Similarly on the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine. |
---|
1406 | The symmetric coroutine cycle forms when the consumer calls the producer's @payment@ function, which resumes the producer in the consumer's delivery function. |
---|
1407 | When the producer calls @delivery@ again, it resumes the consumer in the @payment@ function. |
---|
1408 | Both interface functions then return to their corresponding coroutine-main functions for the next cycle. |
---|
1409 | Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling. |
---|
1410 | As a consequence of a coroutine retaining its last resumer for suspending back, these reverse pointers allow @suspend@ to cycle \emph{backwards} around a symmetric coroutine cycle. |
---|
1411 | |
---|
1412 | \begin{figure} |
---|
1413 | \begin{center} |
---|
1414 | \input{FullProdConsStack.pstex_t} |
---|
1415 | \end{center} |
---|
1416 | \vspace*{-10pt} |
---|
1417 | \caption{Producer / consumer runtime stacks} |
---|
1418 | \label{f:ProdConsRuntimeStacks} |
---|
1419 | \end{figure} |
---|
1420 | |
---|
1421 | Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shutdown the program, whereas generators started by the program main run on its stack. |
---|
1422 | Furthermore, each deallocated coroutine must execute all destructors for objects allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep. |
---|
In the example, termination begins with the producer's loop stopping after $N$ iterations and calling the consumer's @stop@ function, which sets the @done@ flag and resumes the consumer in function @payment@, terminating both that call and the consumer's loop in its coroutine main.
---|
1424 | % (Not shown is having @prod@ raise a nonlocal @stop@ exception at @cons@ after it finishes generating values and suspend back to @cons@, which catches the @stop@ exception to terminate its loop.) |
---|
When the consumer's main ends, its stack is already unwound, so any stack-allocated objects with destructors have been finalized.
---|
1426 | The question now is where does control continue? |
---|
1427 | |
---|
1428 | The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@ or @return@ in a generator. |
---|
1429 | However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent. |
---|
1430 | Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem). |
---|
1431 | Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines. |
---|
1432 | Our solution is to retain a coroutine's starter (first resumer), and context switch back to the starter when the coroutine ends. |
---|
1433 | Hence, the consumer restarts its first resumer, @prod@, in @stop@, and when the producer ends, it restarts its first resumer, program main, in @start@ (see dashed lines from the end of the coroutine mains in Figure~\ref{f:ProdConsRuntimeStacks}). |
---|
1434 | This semantics works well for the most common asymmetric and symmetric coroutine usage patterns. |
---|
1435 | For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer; |
---|
1436 | for symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle. |
---|
1437 | For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward or backward to a safe point before starting termination. |
---|
1438 | |
---|
1439 | Note, the producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first. |
---|
1440 | Assume generator @PingPong@ in Figure~\ref{f:PingPongSymmetricGenerator} is converted to a coroutine. |
---|
1441 | Unlike generators, coroutines have a starter structure with multiple levels, where the program main starts @ping@ and @ping@ starts @pong@. |
---|
1442 | By adjusting $N$ for either @ping@ or @pong@, it is possible to have either finish first. |
---|
If @pong@ ends first, it resumes its starter @ping@ in @ping@'s coroutine main, then @ping@ ends and resumes its starter, the program main, on return;
---|
if @ping@ ends first, it resumes its starter, the program main, on return.
---|
1445 | Regardless of the cycle complexity, the starter structure always leads back to the program main, but the path can be entered at an arbitrary point. |
---|
1446 | Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, running any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack. |
---|
1447 | Hence, the \CFA termination semantics for the generator and coroutine ensure correct deallocation semantics, regardless of the coroutine's state (terminated or active), like any other aggregate object. |
---|
1448 | |
---|
1449 | |
---|
1450 | \subsection{Generator / coroutine implementation} |
---|
1451 | |
---|
1452 | A significant implementation challenge for generators and coroutines (and threads in Section~\ref{s:threads}) is adding extra fields to the custom types and related functions, \eg inserting code after/before the coroutine constructor/destructor and @main@ to create/initialize/de-initialize/destroy any extra fields, \eg the coroutine stack. |
---|
1453 | There are several solutions to this problem, which follow from the object-oriented flavor of adopting custom types. |
---|
1454 | |
---|
For object-oriented languages, extra fields and code are provided via explicit inheritance:
---|
1456 | \begin{cfa}[morekeywords={class,inherits}] |
---|
1457 | class myCoroutine inherits baseCoroutine { ... } |
---|
1458 | \end{cfa} |
---|
1459 | % The problem is that the programming language and its tool chain, \eg debugger, @valgrind@, need to understand @baseCoroutine@ because it infers special property, so type @baseCoroutine@ becomes a de facto keyword and all types inheriting from it are implicitly custom types. |
---|
The problem is that some special properties are not handled by existing language semantics, \eg constructor and destructor execution occurs in the wrong order to implicitly start threads: a thread must start \emph{after} all constructors, because it relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived one.
---|
Alternatives, such as explicitly starting threads as in Java, are repetitive, and forgetting to call @start@ is a common source of errors.
---|
1462 | An alternative is composition: |
---|
1463 | \begin{cfa} |
---|
1464 | struct myCoroutine { |
---|
1465 | ... // declaration/communication variables |
---|
1466 | baseCoroutine dummy; // composition, last declaration |
---|
1467 | } |
---|
1468 | \end{cfa} |
---|
1469 | which also requires an explicit declaration that must be last to ensure correct initialization order. |
---|
1470 | However, there is nothing preventing wrong placement or multiple declarations. |
---|
1471 | |
---|
1472 | \CFA custom types make any special properties explicit to the language and its tool chain, \eg the language code-generator knows where to inject code |
---|
1473 | % and when it is unsafe to perform certain optimizations, |
---|
1474 | and IDEs using simple parsing can find and manipulate types with special properties. |
---|
1475 | The downside of this approach is that it makes custom types a special case in the language. |
---|
1476 | Users wanting to extend custom types or build their own can only do so in ways offered by the language. |
---|
Furthermore, being able to implement custom types without special language support displays the expressive power of a programming language.
---|
\CFA blends the two approaches, providing custom types for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with built-in and library (@java.util.concurrent@) monitors.
---|
1479 | |
---|
1480 | Part of the mechanism to generalize custom types is the \CFA trait~\cite[\S~2.3]{Moss18}, \eg the definition for custom-type @coroutine@ is anything satisfying the trait @is_coroutine@, and this trait both enforces and restricts the coroutine-interface functions. |
---|
1481 | \begin{cfa} |
---|
1482 | trait is_coroutine( `dtype` T ) { |
---|
1483 | void main( T & ); |
---|
1484 | coroutine_desc * get_coroutine( T & ); |
---|
1485 | }; |
---|
1486 | forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & ); |
---|
1487 | \end{cfa} |
---|
1488 | Note, copying generators, coroutines, and threads is undefined because multiple objects cannot execute on a shared stack and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy. |
---|
1489 | The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference or pointer. |
---|
1490 | The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to read the coroutine descriptor from its handle. |
---|
1491 | The @main@ function has no return value or additional parameters because the coroutine type allows an arbitrary number of interface functions with arbitrary typed input and output values versus fixed ones. |
---|
1492 | The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining \textsf{suspend} and @resume@. |
---|
1493 | |
---|
1494 | The \CFA custom-type @coroutine@ implicitly implements the getter and forward declarations for the coroutine main. |
---|
1495 | \begin{cquote} |
---|
1496 | \begin{tabular}{@{}ccc@{}} |
---|
1497 | \begin{cfa} |
---|
1498 | coroutine MyCor { |
---|
1499 | int value; |
---|
1500 | |
---|
1501 | }; |
---|
1502 | \end{cfa} |
---|
1503 | & |
---|
1504 | {\Large $\Rightarrow$} |
---|
1505 | & |
---|
1506 | \begin{tabular}{@{}ccc@{}} |
---|
1507 | \begin{cfa} |
---|
1508 | struct MyCor { |
---|
1509 | int value; |
---|
1510 | coroutine_desc cor; |
---|
1511 | }; |
---|
1512 | \end{cfa} |
---|
1513 | & |
---|
1514 | \begin{cfa} |
---|
1515 | static inline coroutine_desc * |
---|
1516 | get_coroutine( MyCor & this ) { |
---|
1517 | return &this.cor; |
---|
1518 | } |
---|
1519 | \end{cfa} |
---|
1520 | & |
---|
1521 | \begin{cfa} |
---|
1522 | void main( MyCor * this ); |
---|
1523 | |
---|
1524 | |
---|
1525 | |
---|
1526 | \end{cfa} |
---|
1527 | \end{tabular} |
---|
1528 | \end{tabular} |
---|
1529 | \end{cquote} |
---|
1530 | The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization. |
---|
1531 | |
---|
1532 | Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a thread is similar). |
---|
The coroutine handle is the @coroutine@ instance, containing the programmer-specified type-global and communication variables used across the interface functions.
---|
1534 | The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate. |
---|
1535 | The coroutine stack can appear in a number of locations and be fixed or variable sized. |
---|
1536 | Hence, the coroutine's stack could be a variable-length structure (VLS) |
---|
1537 | % \footnote{ |
---|
1538 | % We are examining VLSs, where fields can be variable-sized structures or arrays. |
---|
1539 | % Once allocated, a VLS is fixed sized.} |
---|
1540 | on the allocating stack, provided the allocating stack is large enough. |
---|
For a VLS, stack allocation and deallocation are an inexpensive adjustment of the stack pointer, modulo any stack constructor costs for initial frame setup.
---|
For stack allocation in the heap, allocation and deallocation are expensive heap operations, where the heap can be a shared resource, modulo any stack constructor costs.
---|
1543 | It is also possible to use a split or segmented stack calling convention, available with gcc and clang, allowing a variable-sized stack via a set of connected blocks in the heap. |
---|
1544 | Currently, \CFA supports stack and heap allocated descriptors but only fixed-sized heap allocated stacks. |
---|
1545 | In \CFA debug-mode, the fixed-sized stack is terminated with a write-only page, which catches most stack overflows. |
---|
1546 | Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students. |
---|
1547 | Split-stack allocation is under development but requires recompilation of legacy code, which is not always possible. |
---|
1548 | |
---|
1549 | \begin{figure} |
---|
1550 | \centering |
---|
1551 | \input{corlayout.pstex_t} |
---|
1552 | \caption{Coroutine memory layout} |
---|
1553 | \label{f:CoroutineMemoryLayout} |
---|
1554 | \end{figure} |
---|
1555 | |
---|
1556 | |
---|
1557 | \section{Concurrency} |
---|
1558 | \label{s:Concurrency} |
---|
1559 | |
---|
1560 | Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack. |
---|
1561 | A single thread with multiple stacks, \ie coroutining, does \emph{not} imply concurrency~\cite[\S~3]{Buhr05a}. |
---|
Coroutining self-schedules the thread across stacks, so execution is deterministic.
---|
1563 | (It is \emph{impossible} to generate a concurrency error when coroutining.) |
---|
1564 | |
---|
1565 | The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}. |
---|
1566 | Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}. |
---|
1567 | The resulting execution system now follows a cooperative threading-model~\cite{Adya02,libdill} because context-switching points to the scheduler are known, but the next unblocking point is unknown due to the scheduler. |
---|
Adding \newterm{preemption} introduces \newterm{non-cooperative} or \newterm{preemptive} scheduling, where context-switching points to the scheduler are unknown because they can occur randomly between any two instructions, often based on a timer interrupt.
---|
1569 | Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors. |
---|
The reason is that the scheduler and runtime have complete knowledge about resources and how best to utilize them.
---|
1571 | However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}~\cite[\S~4]{Buhr05a}, which restrict nondeterminism for correctness; |
---|
1572 | otherwise, it is impossible to write meaningful concurrent programs. |
---|
1573 | Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow. |
---|
1574 | |
---|
1575 | A scheduler can also be stackless or stackful. |
---|
1576 | For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch. |
---|
1577 | For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches. |
---|
1578 | The \CFA runtime uses a stackful scheduler for uniformity and security. |
---|
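As an illustration only, the following sketch counts the context switches in the two schemes;
the type @Coroutine@ and the functions @schedule@ and @context_switch@ are hypothetical and do not correspond to the \CFA runtime API.
\begin{cfa}
typedef struct Coroutine Coroutine;         // opaque execution-state handle (hypothetical)
extern Coroutine * scheduler;               // dedicated scheduler coroutine (stackful scheme)
extern Coroutine * schedule( void );        // select the next ready coroutine
extern void context_switch( Coroutine * from, Coroutine * to );

void yield_stackless( Coroutine * cur ) {   // stackless: 1 switch per scheduling point
	Coroutine * next = schedule();          // scheduling runs on cur's stack
	context_switch( cur, next );
}
void yield_stackful( Coroutine * cur ) {    // stackful: 2 switches per scheduling point
	context_switch( cur, scheduler );       // 1st switch, onto the dedicated scheduler stack
}
void scheduler_main( void ) {               // scheduler loop, running on its own stack
	for ( ;; ) {
		Coroutine * next = schedule();      // scheduling runs on the scheduler stack
		context_switch( scheduler, next );  // 2nd switch, onto the next coroutine
	}
}
\end{cfa}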
1579 | |
---|
1580 | |
---|
1581 | \subsection{Thread} |
---|
1582 | \label{s:threads} |
---|
1583 | |
---|
1584 | Threading (Table~\ref{t:ExecutionPropertyComposition} case 11) needs the ability to start a thread and wait for its completion, where a common API is @fork@ and @join@. |
---|
1585 | \vspace{4pt} |
---|
1586 | \par\noindent |
---|
1587 | \begin{tabular}{@{}l|l|l@{}} |
---|
1588 | \multicolumn{1}{c|}{\textbf{Java}} & \multicolumn{1}{c|}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\ |
---|
1589 | \hline |
---|
1590 | \begin{cfa} |
---|
1591 | class MyThread extends Thread {...} |
---|
MyThread t = new MyThread(...);
---|
1593 | `t.start();` // start |
---|
1594 | // concurrency |
---|
1595 | `t.join();` // wait |
---|
1596 | \end{cfa} |
---|
1597 | & |
---|
1598 | \begin{cfa} |
---|
class MyThread { ... }; // functor
---|
1600 | MyThread mythread; |
---|
1601 | `thread t( mythread, ... );` // start |
---|
1602 | // concurrency |
---|
1603 | `t.join();` // wait |
---|
1604 | \end{cfa} |
---|
1605 | & |
---|
1606 | \begin{cfa} |
---|
1607 | void * rtn( void * arg ) {...} |
---|
1608 | pthread_t t; int i = 3; |
---|
`pthread_create( &t, NULL, rtn, (void *)i );` // start
---|
1610 | // concurrency |
---|
1611 | `pthread_join( t, NULL );` // wait |
---|
1612 | \end{cfa} |
---|
1613 | \end{tabular} |
---|
1614 | \vspace{1pt} |
---|
1615 | \par\noindent |
---|
1616 | \CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics, allocation and deallocation, where threads implicitly @fork@ after construction and @join@ before destruction. |
---|
1617 | \begin{cfa} |
---|
1618 | thread MyThread {}; |
---|
1619 | void main( MyThread & this ) { ... } |
---|
1620 | int main() { |
---|
1621 | MyThread team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$ |
---|
1622 | // concurrency |
---|
1623 | } $\C{// deallocate stack-based threads, implicit joins before destruction}$ |
---|
1624 | \end{cfa} |
---|
This semantic ensures a thread is started and stopped exactly once, eliminating some programming errors, and scales to multiple threads for basic termination synchronization.
---|
1626 | For block allocation to arbitrary depth, including recursion, threads are created and destroyed in a lattice structure (tree with top and bottom). |
---|
1627 | Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation. |
---|
1628 | \begin{cfa} |
---|
1629 | MyThread * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$ |
---|
1630 | int main() { |
---|
1631 | MyThread * team = factory( 10 ); |
---|
1632 | // concurrency |
---|
1633 | `adelete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$ |
---|
1634 | } |
---|
1635 | \end{cfa} |
---|
1636 | |
---|
1637 | Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated. |
---|
1638 | The program uses heap-based threads because each thread needs different constructor values. |
---|
1639 | (Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.) |
---|
1640 | The allocation/deallocation pattern appears unusual because allocated objects are immediately deallocated without any intervening code. |
---|
1641 | However, for threads, the deletion provides implicit synchronization, which is the intervening code. |
---|
1642 | % While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), and so any inhibited concurrency is very small as totalling the subtotals is trivial. |
---|
1643 | |
---|
1644 | \begin{figure} |
---|
1645 | \begin{cfa} |
---|
1646 | `thread` Adder { int * row, cols, & subtotal; } $\C{// communication variables}$ |
---|
1647 | void ?{}( Adder & adder, int row[], int cols, int & subtotal ) { |
---|
1648 | adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ]; |
---|
1649 | } |
---|
1650 | void main( Adder & adder ) with( adder ) { |
---|
1651 | subtotal = 0; |
---|
1652 | for ( c; cols ) { subtotal += row[c]; } |
---|
1653 | } |
---|
1654 | int main() { |
---|
1655 | const int rows = 10, cols = 1000; |
---|
1656 | int matrix[rows][cols], subtotals[rows], total = 0; |
---|
1657 | // read matrix |
---|
1658 | Adder * adders[rows]; |
---|
for ( r; rows ) { $\C{// start threads to sum rows}$
---|
1660 | adders[r] = `new( matrix[r], cols, &subtotals[r] );` |
---|
1661 | } |
---|
1662 | for ( r; rows ) { $\C{// wait for threads to finish}$ |
---|
1663 | `delete( adders[r] );` $\C{// termination join}$ |
---|
1664 | total += subtotals[r]; $\C{// total subtotal}$ |
---|
1665 | } |
---|
1666 | sout | total; |
---|
1667 | } |
---|
1668 | \end{cfa} |
---|
1669 | \caption{Concurrent matrix summation} |
---|
1670 | \label{s:ConcurrentMatrixSummation} |
---|
1671 | \end{figure} |
---|
1672 | |
---|
1673 | |
---|
1674 | \subsection{Thread implementation} |
---|
1675 | |
---|
Threads in \CFA are user-level threads run by runtime kernel threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism.
---|
1677 | Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the thread-interface functions. |
---|
1678 | \begin{cquote} |
---|
1679 | \begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}} |
---|
1680 | \begin{cfa} |
---|
1681 | thread myThread { |
---|
1682 | ... // declaration/communication variables |
---|
1683 | }; |
---|
1684 | |
---|
1685 | |
---|
1686 | \end{cfa} |
---|
1687 | & |
---|
1688 | \begin{cfa} |
---|
1689 | trait is_thread( `dtype` T ) { |
---|
1690 | void main( T & ); |
---|
1691 | thread_desc * get_thread( T & ); |
---|
1692 | void ^?{}( T & `mutex` ); |
---|
1693 | }; |
---|
1694 | \end{cfa} |
---|
1695 | \end{tabular} |
---|
1696 | \end{cquote} |
---|
1697 | Like coroutines, the @dtype@ property prevents \emph{implicit} copy operations and the @is_thread@ trait provides no \emph{explicit} copy operations, so threads must be passed by reference or pointer. |
---|
1698 | Similarly, the function definitions ensure there is a statically typed @main@ function that is the thread starting point (first stack frame), a mechanism to read the thread descriptor from its handle, and a special destructor to prevent deallocation while the thread is executing. |
---|
1699 | (The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitor}.) |
---|
1700 | The difference between the coroutine and thread is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates the coroutine's stack and starts running the coroutine main on the stack; |
---|
whereas, a thread is scheduled for execution in @main@ immediately after its constructor is run.
---|
1702 | No return value or additional parameters are necessary for this function because the @thread@ type allows an arbitrary number of interface functions with corresponding arbitrary typed input and output values. |
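For example, the following hedged sketch (the @Square@ thread and its members are hypothetical, not part of the \CFA library) passes a typed input through the constructor and returns a typed result through a reference, in the style of Figure~\ref{s:ConcurrentMatrixSummation}.
\begin{cfa}
thread Square { int input; int & output; };	// communication variables
void ?{}( Square & s, int input, int & output ) {
	s.[ input, &output ] = [ input, &output ];	// initialize fields from constructor arguments
}
void main( Square & s ) with( s ) { output = input * input; }	// thread body
int main() {
	int result;
	{
		Square s = { 7, result };	// implicit start after construction
		// other concurrent work
	}	// implicit join before destruction
	sout | result;	// safe to read after the join
}
\end{cfa}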
---|
1703 | |
---|
1704 | |
---|
1705 | \section{Mutual Exclusion / Synchronization} |
---|
1706 | \label{s:MutualExclusionSynchronization} |
---|
1707 | |
---|
1708 | Unrestricted nondeterminism is meaningless as there is no way to know when a result is completed and safe to access. |
---|
1709 | To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}. |
---|
1710 | The shared data protected by mutual exclusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple, only 1 thread, or complex, only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71} problems. |
---|
1711 | Without synchronization control in a critical section, an arriving thread can barge ahead of preexisting waiter threads resulting in short/long-term starvation, staleness and freshness problems, and incorrect transfer of data. |
---|
1712 | Preventing or detecting barging is a challenge with low-level locks, but made easier through higher-level constructs. |
---|
1713 | This challenge is often split into two different approaches: barging \emph{avoidance} and \emph{prevention}. |
---|
Approaches that unconditionally release a lock for competing threads to acquire must use barging avoidance with flag/counter variable(s) to force barging threads to wait;
---|
1715 | approaches that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely. |
---|
1716 | |
---|
1717 | At the lowest level, concurrent control is provided by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg spin locks, semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}. |
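For example, the following is a minimal sketch of a spin lock built directly from an atomic read-modify-write operation (assuming the gcc @__atomic@ builtins are available to \CFA through its C backend; this is not the \CFA library lock).
\begin{cfa}
struct SpinLock { volatile _Bool taken; };	// 0 => unlocked
void lock( SpinLock & l ) {
	while ( __atomic_test_and_set( &l.taken, __ATOMIC_ACQUIRE ) ) {}	// spin until previously clear
}
void unlock( SpinLock & l ) {
	__atomic_clear( &l.taken, __ATOMIC_RELEASE );	// atomically release the lock
}
\end{cfa}
Such a lock provides mutual exclusion but no blocking or fairness, which the higher-level constructs add.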
---|
1718 | However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}. |
---|
1719 | A significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock. |
---|
1720 | Easing composability is another feature higher-level mutual-exclusion mechanisms can offer. |
---|
Some concurrent systems eliminate mutable shared-state by switching to non-shared communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP, Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
---|
1722 | However, these approaches introduce a new communication mechanism for concurrency different from the standard communication using function call/return. |
---|
1723 | Hence, a programmer must learn and manipulate two sets of design and programming patterns. |
---|
1724 | While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account. |
---|
1725 | In contrast, approaches based on shared-state models more closely resemble the standard call and return programming model, resulting in a single programming paradigm. |
---|
1726 | Finally, a newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}. |
---|
While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set are still too restrictive~\cite{Cascaval08,Boehm09} to be the main concurrency paradigm for system languages.
---|
1728 | |
---|
1729 | |
---|
1730 | \section{Monitor} |
---|
1731 | \label{s:Monitor} |
---|
1732 | |
---|
1733 | One of the most natural, elegant, efficient, high-level mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor} (Table~\ref{t:ExecutionPropertyComposition} case 2). |
---|
1734 | First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}. |
---|
1735 | In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor. |
---|
1736 | For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed. |
---|
1737 | |
---|
1738 | Figure~\ref{f:AtomicCounter} compares a \CFA and Java monitor implementing an atomic counter. |
---|
1739 | (Like other concurrent programming languages, \CFA and Java have performant specializations for the basic types using atomic instructions.) |
---|
1740 | A \newterm{monitor} is a set of functions that ensure mutual exclusion when accessing shared state. |
---|
1741 | (Note, in \CFA, @monitor@ is short-hand for @mutex struct@.) |
---|
1742 | More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope by call and return, as opposed to locks, where mutual exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation). |
---|
1743 | Restricting acquire and release points eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency. |
---|
1744 | As for other special types, \CFA has a custom @monitor@ type. |
---|
1745 | |
---|
1746 | \begin{figure} |
---|
1747 | \centering |
---|
1748 | |
---|
1749 | \begin{lrbox}{\myboxA} |
---|
1750 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1751 | `monitor` Aint { // atomic integer counter |
---|
1752 | int cnt; |
---|
1753 | }; |
---|
1754 | int ++?( Aint & `mutex` this ) with(this) { return ++cnt; } |
---|
int ?=?( Aint & `mutex` lhs, int rhs ) with(lhs) { return cnt = rhs; }
---|
int ?=?( int & lhs, Aint & rhs ) with(rhs) { return lhs = cnt; }
---|
1757 | |
---|
1758 | int i = 0, j = 0, k = 5; |
---|
1759 | Aint x = { 0 }, y = { 0 }, z = { 5 }; // no mutex |
---|
1760 | ++x; ++y; ++z; // mutex |
---|
1761 | x = 2; y = i; z = k; // mutex |
---|
1762 | i = x; j = y; k = z; // no mutex |
---|
1763 | \end{cfa} |
---|
1764 | \end{lrbox} |
---|
1765 | |
---|
1766 | \begin{lrbox}{\myboxB} |
---|
1767 | \begin{java}[aboveskip=0pt,belowskip=0pt] |
---|
1768 | class Aint { |
---|
1769 | private int cnt; |
---|
1770 | public Aint( int init ) { cnt = init; } |
---|
1771 | `synchronized` public int inc() { return ++cnt; } |
---|
1772 | `synchronized` public void set( int rhs ) {cnt=rhs;} |
---|
1773 | public int get() { return cnt; } |
---|
1774 | } |
---|
1775 | int i = 0, j = 0, k = 5; |
---|
1776 | Aint x=new Aint(0), y=new Aint(0), z=new Aint(5); |
---|
1777 | x.inc(); y.inc(); z.inc(); |
---|
1778 | x.set( 2 ); y.set( i ); z.set( k ); |
---|
1779 | i = x.get(); j = y.get(); k = z.get(); |
---|
1780 | \end{java} |
---|
1781 | \end{lrbox} |
---|
1782 | |
---|
1783 | \subfloat[\CFA]{\label{f:AtomicCounterCFA}\usebox\myboxA} |
---|
1784 | \hspace{3pt} |
---|
1785 | \vrule |
---|
1786 | \hspace{3pt} |
---|
1787 | \subfloat[Java]{\label{f:AtomicCounterJava}\usebox\myboxB} |
---|
1788 | \caption{Atomic counter} |
---|
1789 | \label{f:AtomicCounter} |
---|
1790 | \end{figure} |
---|
1791 | |
---|
1792 | Like Java, \CFA monitors have \newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other interface functions. |
---|
1793 | % \begin{cfa} |
---|
1794 | % monitor M { ... } m; |
---|
1795 | % void foo( M & mutex m ) { ... } $\C{// acquire mutual exclusion}$ |
---|
1796 | % void bar( M & mutex m ) { $\C{// acquire mutual exclusion}$ |
---|
1797 | % ... `bar( m );` ... `foo( m );` ... $\C{// reacquire mutual exclusion}$ |
---|
1798 | % } |
---|
1799 | % \end{cfa} |
---|
\CFA monitors also ensure the monitor lock is released regardless of how an acquiring function ends, normally or exceptionally, and returning a shared variable is safe via copying before the lock is released.
---|
Similar safety is offered by \emph{explicit} opt-in disciplines like \CC RAII, versus the monitor's \emph{implicit} language-enforced safety guarantee, which prevents programmer usage errors.
---|
1802 | However, RAII mechanisms cannot handle complex synchronization within a monitor, where the monitor lock may not be released on function exit because it is passed to an unblocking thread; |
---|
1803 | RAII is purely a mutual-exclusion mechanism (see Section~\ref{s:Scheduling}). |
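Returning to the copy-before-release guarantee above, a hedged sketch of a @mutex@ getter for the counter in Figure~\ref{f:AtomicCounterCFA} (not part of that figure) is:
\begin{cfa}
int get( Aint & mutex a ) with( a ) {
	return cnt;	// cnt is copied into the return temporary before the implicit release at '}'
}
\end{cfa}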
---|
1804 | |
---|
Both Java and \CFA use a keyword @mutex@/\lstinline[language=java]|synchronized| to designate functions that implicitly acquire/release the monitor lock on call/return, providing mutual exclusion to the shared data.
---|
Non-designated functions provide no mutual exclusion, and are used for read-only access or as an interface to a multi-step protocol requiring several steps of acquiring and releasing the monitor.
---|
1807 | Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called. |
---|
\CFA designated functions are marked by an explicit, parameter-only pointer/reference qualifier @mutex@ (discussed further in Section~\ref{s:MutexAcquisition}).
---|
In contrast, Java designated members are marked with \lstinline[language=java]|synchronized|, which applies to the implicit reference parameter @this@.
---|
In the example, the increment and setter operations need mutual exclusion, while the read-only getter operation can be nonmutex if the underlying read is atomic.
---|
1811 | |
---|
1812 | |
---|
1813 | \subsection{Monitor implementation} |
---|
1814 | |
---|
1815 | For the same design reasons, \CFA provides a custom @monitor@ type and a @trait@ to enforce and restrict the monitor-interface functions. |
---|
1816 | \begin{cquote} |
---|
1817 | \begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}} |
---|
1818 | \begin{cfa} |
---|
1819 | monitor M { |
---|
1820 | ... // shared data |
---|
1821 | }; |
---|
1822 | |
---|
1823 | \end{cfa} |
---|
1824 | & |
---|
1825 | \begin{cfa} |
---|
1826 | trait is_monitor( `dtype` T ) { |
---|
1827 | monitor_desc * get_monitor( T & ); |
---|
1828 | void ^?{}( T & mutex ); |
---|
1829 | }; |
---|
1830 | \end{cfa} |
---|
1831 | \end{tabular} |
---|
1832 | \end{cquote} |
---|
1833 | The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference or pointer. |
---|
1834 | Similarly, the function definitions ensure there is a mechanism to read the monitor descriptor from its handle, and a special destructor to prevent deallocation if a thread is using the shared data. |
---|
1835 | The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics. |
---|
1836 | \CFA relies heavily on traits as an abstraction mechanism, so the @mutex@ qualifier prevents coincidentally matching of a monitor trait with a type that is not a monitor, similar to coincidental inheritance where a shape and playing card can both be drawable. |
---|
1837 | |
---|
1838 | |
---|
1839 | \subsection{Mutex acquisition} |
---|
1840 | \label{s:MutexAcquisition} |
---|
1841 | |
---|
1842 | For object-oriented programming languages, the mutex property applies to one object, the implicit pointer/reference to the monitor type. |
---|
1843 | Because \CFA uses a pointer qualifier, other possibilities exist, \eg: |
---|
1844 | \begin{cfa} |
---|
1845 | monitor M { ... }; |
---|
1846 | int f1( M & mutex m ); $\C{// single parameter object}$ |
---|
1847 | int f2( M * mutex m ); $\C{// single or multiple parameter object}$ |
---|
1848 | int f3( M * mutex m[$\,$] ); $\C{// multiple parameter object}$ |
---|
1849 | int f4( stack( M * ) & mutex m ); $\C{// multiple parameters object}$ |
---|
1850 | \end{cfa} |
---|
1851 | Function @f1@ has a single object parameter, while functions @f2@ to @f4@ can be a single or multi-element parameter with statically unknown size. |
---|
1852 | Because of the statically unknown size, \CFA only supports a single reference @mutex@ parameter, @f1@. |
---|
1853 | |
---|
The \CFA @mutex@ qualifier does, however, support multimonitor functions,\footnote{
---|
1855 | While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.} |
---|
1856 | where the number of acquisitions is statically known, called \newterm{bulk acquire}. |
---|
1857 | \CFA guarantees bulk acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors in a bulk acquire is safe from deadlock. |
---|
1858 | Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking. |
---|
1859 | A \CFA programmer only has to manage when to acquire mutual exclusion; |
---|
1860 | a \CC programmer must select the correct lock and acquisition mechanism from a panoply of locking options. |
---|
1861 | Making good choices for common cases in \CFA simplifies the programming experience and enhances safety. |
---|
1862 | |
---|
1863 | \begin{figure} |
---|
1864 | \centering |
---|
1865 | \begin{lrbox}{\myboxA} |
---|
1866 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1867 | monitor BankAccount { |
---|
1868 | |
---|
1869 | int balance; |
---|
1870 | } b1 = { 0 }, b2 = { 0 }; |
---|
1871 | void deposit( BankAccount & `mutex` b, |
---|
1872 | int deposit ) with(b) { |
---|
1873 | balance += deposit; |
---|
1874 | } |
---|
1875 | void transfer( BankAccount & `mutex` my, |
---|
1876 | BankAccount & `mutex` your, int me2you ) { |
---|
1877 | // bulk acquire |
---|
1878 | deposit( my, -me2you ); // debit |
---|
1879 | deposit( your, me2you ); // credit |
---|
1880 | } |
---|
1881 | `thread` Person { BankAccount & b1, & b2; }; |
---|
1882 | void main( Person & person ) with(person) { |
---|
1883 | for ( 10_000_000 ) { |
---|
1884 | if ( random() % 3 ) deposit( b1, 3 ); |
---|
1885 | if ( random() % 3 ) transfer( b1, b2, 7 ); |
---|
1886 | } |
---|
1887 | } |
---|
1888 | int main() { |
---|
1889 | `Person p1 = { b1, b2 }, p2 = { b2, b1 };` |
---|
1890 | |
---|
1891 | } // wait for threads to complete |
---|
1892 | \end{cfa} |
---|
1893 | \end{lrbox} |
---|
1894 | |
---|
1895 | \begin{lrbox}{\myboxB} |
---|
1896 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1897 | struct BankAccount { |
---|
1898 | `recursive_mutex m;` |
---|
1899 | int balance = 0; |
---|
1900 | } b1, b2; |
---|
1901 | void deposit( BankAccount & b, int deposit ) { |
---|
1902 | `scoped_lock lock( b.m );` |
---|
1903 | b.balance += deposit; |
---|
1904 | } |
---|
1905 | void transfer( BankAccount & my, |
---|
1906 | BankAccount & your, int me2you ) { |
---|
1907 | `scoped_lock lock( my.m, your.m );` // bulk acquire |
---|
1908 | deposit( my, -me2you ); // debit |
---|
1909 | deposit( your, me2you ); // credit |
---|
1910 | } |
---|
1911 | |
---|
1912 | void person( BankAccount & b1, BankAccount & b2 ) { |
---|
1913 | for ( int i = 0; i < 10$'$000$'$000; i += 1 ) { |
---|
1914 | if ( random() % 3 ) deposit( b1, 3 ); |
---|
1915 | if ( random() % 3 ) transfer( b1, b2, 7 ); |
---|
1916 | } |
---|
1917 | } |
---|
1918 | int main() { |
---|
1919 | `thread p1(person, ref(b1), ref(b2)), p2(person, ref(b2), ref(b1));` |
---|
1920 | `p1.join(); p2.join();` |
---|
1921 | } |
---|
1922 | \end{cfa} |
---|
1923 | \end{lrbox} |
---|
1924 | |
---|
1925 | \subfloat[\CFA]{\label{f:CFABank}\usebox\myboxA} |
---|
1926 | \hspace{3pt} |
---|
1927 | \vrule |
---|
1928 | \hspace{3pt} |
---|
1929 | \subfloat[\CC]{\label{f:C++Bank}\usebox\myboxB} |
---|
1930 | \hspace{3pt} |
---|
1931 | \caption{Bank transfer problem} |
---|
1932 | \label{f:BankTransfer} |
---|
1933 | \end{figure} |
---|
1934 | |
---|
1935 | Users can still force the acquiring order by using or not using @mutex@. |
---|
1936 | \begin{cfa} |
---|
1937 | void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$ |
---|
1938 | void bar( M & mutex m1, M & m2 ) { $\C{// only acquire m1}$ |
---|
1939 | ... foo( m1, m2 ); ... $\C{// acquire m2}$ |
---|
1940 | } |
---|
1941 | void baz( M & m1, M & mutex m2 ) { $\C{// only acquire m2}$ |
---|
1942 | ... foo( m1, m2 ); ... $\C{// acquire m1}$ |
---|
1943 | } |
---|
1944 | \end{cfa} |
---|
1945 | The bulk-acquire semantics allow @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@. |
---|
The calls to @bar@ and @baz@ acquire the monitors in opposite order, possibly resulting in deadlock.
---|
1947 | However, this case is the simplest instance of the \emph{nested-monitor problem}~\cite{Lister77}, where monitors are acquired in sequence versus bulk. |
---|
1948 | Detecting the nested-monitor problem requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}. |
---|
1949 | \CFA does not deal with this fundamental problem. |
---|
1950 | |
---|
1951 | Finally, like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming. |
---|
1952 | \begin{cquote} |
---|
1953 | \renewcommand{\arraystretch}{0.0} |
---|
1954 | \begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}} |
---|
1955 | \multicolumn{1}{c}{\textbf{\lstinline@mutex@ call}} & \multicolumn{1}{c}{\lstinline@mutex@ \textbf{statement}} \\ |
---|
1956 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1957 | monitor M { ... }; |
---|
1958 | void foo( M & mutex m1, M & mutex m2 ) { |
---|
1959 | // critical section |
---|
1960 | } |
---|
1961 | void bar( M & m1, M & m2 ) { |
---|
1962 | foo( m1, m2 ); |
---|
1963 | } |
---|
1964 | \end{cfa} |
---|
1965 | & |
---|
1966 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
1967 | |
---|
1968 | void bar( M & m1, M & m2 ) { |
---|
1969 | mutex( m1, m2 ) { // remove refactoring and naming |
---|
1970 | // critical section |
---|
1971 | } |
---|
1972 | } |
---|
1973 | |
---|
1974 | \end{cfa} |
---|
1975 | \end{tabular} |
---|
1976 | \end{cquote} |
---|
1977 | |
---|
1978 | |
---|
1979 | \subsection{Scheduling} |
---|
1980 | \label{s:Scheduling} |
---|
1981 | |
---|
1982 | % There are many aspects of scheduling in a concurrency system, all related to resource utilization by waiting threads, \ie which thread gets the resource next. |
---|
1983 | % Different forms of scheduling include access to processors by threads (see Section~\ref{s:RuntimeStructureCluster}), another is access to a shared resource by a lock or monitor. |
---|
1984 | This section discusses scheduling for waiting threads eligible for monitor entry~\cite{Buhr95b}, \ie which user thread gets the shared resource next. |
---|
1985 | (See Section~\ref{s:RuntimeStructureCluster} for scheduling kernel threads on virtual processors.) |
---|
While monitor mutual-exclusion provides safe access to its shared data, the data may indicate a thread cannot proceed, \eg a bounded buffer may be full/\-empty so producer/consumer threads must block.
---|
1987 | Leaving the monitor and retrying (busy waiting) is impractical for high-level programming. |
---|
1988 | |
---|
1989 | Monitors eliminate busy waiting by providing synchronization within the monitor critical-section to schedule threads needing access to the shared data, where threads block versus spin. |
---|
1990 | Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling. |
---|
1991 | \newterm{Internal} largely schedules threads located \emph{inside} the monitor and is accomplished using condition variables with signal and wait. |
---|
1992 | \newterm{External} largely schedules threads located \emph{outside} the monitor and is accomplished with the @waitfor@ statement. |
---|
Note, internal scheduling has a small amount of external scheduling and vice versa, so the naming denotes where the majority of the blocked threads reside (inside or outside) for scheduling.
---|
1994 | For complex scheduling, the approaches can be combined, so there are threads waiting inside and outside. |
---|
1995 | |
---|
\CFA monitors provide barging prevention, \ie calling threads cannot barge ahead of signaled threads, which simplifies synchronization among threads in the monitor and increases correctness.
---|
1997 | A direct consequence of this semantics is that unblocked waiting threads are not required to recheck the waiting condition, \ie waits are not in a starvation-prone busy-loop as required by the signals-as-hints style with barging. |
---|
1998 | Preventing barging comes directly from Hoare's semantics in the seminal paper on monitors~\cite[p.~550]{Hoare74}. |
---|
1999 | % \begin{cquote} |
---|
2000 | % However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program. |
---|
2001 | % It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signaling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74} |
---|
2002 | % \end{cquote} |
---|
2003 | Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit self barging. |
---|
2004 | |
---|
2005 | Monitor mutual-exclusion means signaling cannot have the signaller and signaled thread in the monitor simultaneously, so only the signaller or signallee can proceed and the other waits on an implicit urgent list~\cite[p.~551]{Hoare74}. |
---|
2006 | Figure~\ref{f:MonitorScheduling} shows internal and external scheduling for the bounded-buffer examples in Figure~\ref{f:GenericBoundedBuffer}. |
---|
2007 | For internal scheduling in Figure~\ref{f:BBInt}, the @signal@ moves the signallee, front thread of the specified condition queue, to the urgent list (see Figure~\ref{f:MonitorScheduling}) and the signaller continues (solid line). |
---|
2008 | Multiple signals move multiple signallees to urgent until the condition queue is empty. |
---|
2009 | When the signaller exits or waits, a thread is implicitly unblocked from urgent, if available, before unblocking a calling thread to prevent barging. |
---|
2010 | (Java conceptually moves the signaled thread to the calling queue, and hence, allows barging.) |
---|
Signal is used when the signaller is providing the cooperation needed by the signallee, \eg a consumer creates an empty buffer slot for a waiting producer; the signaller then immediately exits the monitor to run concurrently, \eg consuming the removed buffer element, and passes control of the monitor to the signaled thread, which can immediately take advantage of the state change.
---|
2012 | Specifically, the @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list. |
---|
2013 | Signalling is unconditional because signaling an empty condition queue does nothing. |
---|
2014 | It is common to declare condition queues as monitor fields to prevent shared access, hence no locking is required for access as the queues are protected by the monitor lock. |
---|
2015 | In \CFA, a condition queue can be created and stored independently. |
---|
2016 | |
---|
2017 | \begin{figure} |
---|
2018 | \centering |
---|
2019 | % \subfloat[Scheduling Statements] { |
---|
2020 | % \label{fig:SchedulingStatements} |
---|
2021 | % {\resizebox{0.45\textwidth}{!}{\input{CondSigWait.pstex_t}}} |
---|
2022 | \input{CondSigWait.pstex_t} |
---|
2023 | % }% subfloat |
---|
2024 | % \quad |
---|
2025 | % \subfloat[Bulk acquire monitor] { |
---|
2026 | % \label{fig:BulkMonitor} |
---|
2027 | % {\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}} |
---|
2028 | % }% subfloat |
---|
2029 | \caption{Monitor Scheduling} |
---|
2030 | \label{f:MonitorScheduling} |
---|
2031 | \end{figure} |
---|
2032 | |
---|
2033 | \begin{figure} |
---|
2034 | \centering |
---|
2035 | \newbox\myboxA |
---|
2036 | \begin{lrbox}{\myboxA} |
---|
2037 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2038 | forall( otype T ) { // distribute forall |
---|
2039 | monitor Buffer { |
---|
2040 | `condition` full, empty; |
---|
2041 | int front, back, count; |
---|
2042 | T elements[10]; |
---|
2043 | }; |
---|
2044 | void ?{}( Buffer(T) & buf ) with(buf) { |
---|
2045 | front = back = count = 0; |
---|
2046 | } |
---|
2047 | |
---|
2048 | void insert(Buffer(T) & mutex buf, T elm) with(buf){ |
---|
2049 | if ( count == 10 ) `wait( empty )`; // full ? |
---|
2050 | // insert elm into buf |
---|
2051 | `signal( full )`; |
---|
2052 | } |
---|
2053 | T remove( Buffer(T) & mutex buf ) with(buf) { |
---|
2054 | if ( count == 0 ) `wait( full )`; // empty ? |
---|
2055 | // remove elm from buf |
---|
2056 | `signal( empty )`; |
---|
2057 | return elm; |
---|
2058 | } |
---|
2059 | } |
---|
2060 | \end{cfa} |
---|
2061 | \end{lrbox} |
---|
2062 | |
---|
2063 | \newbox\myboxB |
---|
2064 | \begin{lrbox}{\myboxB} |
---|
2065 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2066 | forall( otype T ) { // distribute forall |
---|
2067 | monitor Buffer { |
---|
2068 | |
---|
2069 | int front, back, count; |
---|
2070 | T elements[10]; |
---|
2071 | }; |
---|
2072 | void ?{}( Buffer(T) & buf ) with(buf) { |
---|
2073 | front = back = count = 0; |
---|
2074 | } |
---|
2075 | T remove( Buffer(T) & mutex buf ); // forward |
---|
2076 | void insert(Buffer(T) & mutex buf, T elm) with(buf){ |
---|
2077 | if ( count == 10 ) `waitfor( remove : buf )`; |
---|
2078 | // insert elm into buf |
---|
2079 | |
---|
2080 | } |
---|
2081 | T remove( Buffer(T) & mutex buf ) with(buf) { |
---|
2082 | if ( count == 0 ) `waitfor( insert : buf )`; |
---|
2083 | // remove elm from buf |
---|
2084 | |
---|
2085 | return elm; |
---|
2086 | } |
---|
2087 | } |
---|
2088 | \end{cfa} |
---|
2089 | \end{lrbox} |
---|
2090 | |
---|
2091 | \subfloat[Internal scheduling]{\label{f:BBInt}\usebox\myboxA} |
---|
2092 | \hspace{1pt} |
---|
2093 | \vrule |
---|
2094 | \hspace{3pt} |
---|
2095 | \subfloat[External scheduling]{\label{f:BBExt}\usebox\myboxB} |
---|
2096 | |
---|
2097 | \caption{Generic bounded buffer} |
---|
2098 | \label{f:GenericBoundedBuffer} |
---|
2099 | \end{figure} |
---|
2100 | |
---|
2101 | The @signal_block@ provides the opposite unblocking order, where the signaller is moved to urgent and the signallee continues and a thread is implicitly unblocked from urgent when the signallee exits or waits (dashed line)~\cite[p.~551]{Hoare74}. |
---|
Signal block is used when the signallee is providing the cooperation needed by the signaller, \eg if the buffer is removed from the design and a producer hands off an item directly to a consumer, as in Figure~\ref{f:DatingSignalBlock}; the signaller must then wait until the signallee unblocks, provides the cooperation, exits the monitor to run concurrently, and passes control of the monitor to the signaller, which can immediately take advantage of the state change.
---|
2103 | Using @signal@ or @signal_block@ can be a dynamic decision based on whether the thread providing the cooperation arrives before or after the thread needing the cooperation. |
---|
2104 | |
---|
For external scheduling in Figure~\ref{f:BBExt}, the internal scheduling is replaced, eliminating condition queues and @signal@/@wait@ (cases where this replacement cannot be done are discussed shortly); this mechanism has existed in the programming language Ada for almost 40 years, with variants in other languages~\cite{SR,ConcurrentC++,uC++}.
---|
2106 | While prior languages use external scheduling solely for thread interaction, \CFA generalizes it to both monitors and threads. |
---|
External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on condition queues in the monitor.
---|
2108 | Scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion. |
---|
2109 | Specifically, a thread calling the monitor is unblocked directly from the calling queue based on function names that can fulfill the cooperation required by the signaller. |
---|
2110 | (The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.) |
---|
2111 | Hence, the @waitfor@ has the same semantics as @signal_block@, where the signallee thread from the calling queue executes before the signaller, which waits on urgent. |
---|
2112 | Now when a producer/consumer detects a full/empty buffer, the necessary cooperation for continuation is specified by indicating the next function call that can occur. |
---|
2113 | For example, a producer detecting a full buffer must have cooperation from a consumer to remove an item so function @remove@ is accepted, which prevents producers from entering the monitor, and after a consumer calls @remove@, the producer waiting on urgent is \emph{implicitly} unblocked because it can now continue its insert operation. |
---|
Hence, this mechanism is expressed in terms of control flow (the next call), versus in terms of data (channels), as in Go and Rust @select@.
---|
2115 | While both mechanisms have strengths and weaknesses, \CFA uses the control-flow mechanism to be consistent with other language features. |
---|
2116 | |
---|
Figure~\ref{f:ReadersWriterLock} shows internal and external scheduling for a readers/writer lock with no barging, where threads are serviced in FIFO order to eliminate staleness and freshness problems among the reader/writer threads.
---|
2118 | For internal scheduling in Figure~\ref{f:RWInt}, the readers and writers wait on the same condition queue in FIFO order, making it impossible to tell if a waiting thread is a reader or writer. |
---|
To claw back the kind of thread, a \CFA condition can store user data in the node for a blocking thread at the @wait@, \ie whether the thread is a @READER@ or @WRITER@.
---|
An unblocked reader thread checks if the thread at the front of the queue is a reader and unblocks it, \ie the readers daisy-chain signal the next group of readers, demarcated by the next writer or the end of the queue.
---|
2121 | For external scheduling in Figure~\ref{f:RWExt}, a waiting reader checks if a writer is using the resource, and if so, restricts further calls until the writer exits by calling @EndWrite@. |
---|
2122 | The writer does a similar action for each reader or writer using the resource. |
---|
2123 | Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@. |
---|
2124 | |
---|
2125 | \begin{figure} |
---|
2126 | \centering |
---|
2127 | \newbox\myboxA |
---|
2128 | \begin{lrbox}{\myboxA} |
---|
2129 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2130 | enum RW { READER, WRITER }; |
---|
2131 | monitor ReadersWriter { |
---|
2132 | int rcnt, wcnt; // readers/writer using resource |
---|
2133 | `condition RWers;` |
---|
2134 | }; |
---|
2135 | void ?{}( ReadersWriter & rw ) with(rw) { |
---|
2136 | rcnt = wcnt = 0; |
---|
2137 | } |
---|
2138 | void EndRead( ReadersWriter & mutex rw ) with(rw) { |
---|
2139 | rcnt -= 1; |
---|
2140 | if ( rcnt == 0 ) `signal( RWers )`; |
---|
2141 | } |
---|
2142 | void EndWrite( ReadersWriter & mutex rw ) with(rw) { |
---|
2143 | wcnt = 0; |
---|
2144 | `signal( RWers );` |
---|
2145 | } |
---|
2146 | void StartRead( ReadersWriter & mutex rw ) with(rw) { |
---|
2147 | if ( wcnt !=0 || ! empty( RWers ) ) |
---|
2148 | `wait( RWers, READER )`; |
---|
2149 | rcnt += 1; |
---|
2150 | if ( ! empty(RWers) && `front(RWers) == READER` ) |
---|
2151 | `signal( RWers )`; // daisy-chain signaling |
---|
2152 | } |
---|
2153 | void StartWrite( ReadersWriter & mutex rw ) with(rw) { |
---|
2154 | if ( wcnt != 0 || rcnt != 0 ) `wait( RWers, WRITER )`; |
---|
2155 | |
---|
2156 | wcnt = 1; |
---|
2157 | } |
---|
2158 | \end{cfa} |
---|
2159 | \end{lrbox} |
---|
2160 | |
---|
2161 | \newbox\myboxB |
---|
2162 | \begin{lrbox}{\myboxB} |
---|
2163 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2164 | |
---|
2165 | monitor ReadersWriter { |
---|
2166 | int rcnt, wcnt; // readers/writer using resource |
---|
2167 | |
---|
2168 | }; |
---|
2169 | void ?{}( ReadersWriter & rw ) with(rw) { |
---|
2170 | rcnt = wcnt = 0; |
---|
2171 | } |
---|
2172 | void EndRead( ReadersWriter & mutex rw ) with(rw) { |
---|
2173 | rcnt -= 1; |
---|
2174 | |
---|
2175 | } |
---|
2176 | void EndWrite( ReadersWriter & mutex rw ) with(rw) { |
---|
2177 | wcnt = 0; |
---|
2178 | |
---|
2179 | } |
---|
2180 | void StartRead( ReadersWriter & mutex rw ) with(rw) { |
---|
2181 | if ( wcnt > 0 ) `waitfor( EndWrite : rw );` |
---|
2182 | |
---|
2183 | rcnt += 1; |
---|
2184 | |
---|
2185 | |
---|
2186 | } |
---|
2187 | void StartWrite( ReadersWriter & mutex rw ) with(rw) { |
---|
2188 | if ( wcnt > 0 ) `waitfor( EndWrite : rw );` |
---|
2189 | else while ( rcnt > 0 ) `waitfor( EndRead : rw );` |
---|
2190 | wcnt = 1; |
---|
2191 | } |
---|
2192 | \end{cfa} |
---|
2193 | \end{lrbox} |
---|
2194 | |
---|
2195 | \subfloat[Internal scheduling]{\label{f:RWInt}\usebox\myboxA} |
---|
2196 | \hspace{1pt} |
---|
2197 | \vrule |
---|
2198 | \hspace{3pt} |
---|
2199 | \subfloat[External scheduling]{\label{f:RWExt}\usebox\myboxB} |
---|
2200 | |
---|
2201 | \caption{Readers / writer lock} |
---|
2202 | \label{f:ReadersWriterLock} |
---|
2203 | \end{figure} |
---|
2204 | |
---|
2205 | Finally, external scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited. |
---|
2206 | Internal scheduling performing multiple signaling results in unblocking from urgent in the reverse order from signaling. |
---|
2207 | It is rare for the unblocking order to be important as an unblocked thread can be time-sliced immediately after leaving the monitor. |
---|
2208 | If the unblocking order is important, multiple signaling can be restructured into daisy-chain signaling, where each thread signals the next thread. |
---|
2209 | Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signaling. |
---|
2210 | (Advanced @waitfor@ features are discussed in Section~\ref{s:ExtendedWaitfor}.) |
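For example, the following hedged sketch shows daisy-chain signalling in the style of the readers/writer lock in Figure~\ref{f:RWInt} (monitor @M@ and condition @c@ are assumed, not library components).
\begin{cfa}
monitor M { condition c; ... };
void waiter( M & mutex m ) with( m ) {
	wait( c );	// block in FIFO order on condition c
	// use the shared resource
	signal( c );	// daisy-chain: wake the next waiter, preserving FIFO order
}
void signaller( M & mutex m ) with( m ) {
	signal( c );	// wake only the front waiter; it wakes the next, and so on
}
\end{cfa}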
---|
2211 | |
---|
2212 | \begin{figure} |
---|
2213 | \centering |
---|
2214 | \newbox\myboxA |
---|
2215 | \begin{lrbox}{\myboxA} |
---|
2216 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2217 | enum { CCodes = 20 }; |
---|
2218 | monitor DS { |
---|
2219 | int GirlPhNo, BoyPhNo; |
---|
2220 | condition Girls[CCodes], Boys[CCodes]; |
---|
2221 | `condition exchange;` |
---|
2222 | }; |
---|
2223 | int girl( DS & mutex ds, int phNo, int ccode ) { |
---|
2224 | if ( empty( Boys[ccode] ) ) { |
---|
2225 | wait( Girls[ccode] ); |
---|
2226 | GirlPhNo = phNo; |
---|
2227 | `signal( exchange );` |
---|
2228 | } else { |
---|
2229 | GirlPhNo = phNo; |
---|
2230 | `signal( Boys[ccode] );` |
---|
2231 | `wait( exchange );` |
---|
2232 | } |
---|
2233 | return BoyPhNo; |
---|
2234 | } |
---|
2235 | int boy( DS & mutex ds, int phNo, int ccode ) { |
---|
2236 | // as above with boy/girl interchanged |
---|
2237 | } |
---|
2238 | \end{cfa} |
---|
2239 | \end{lrbox} |
---|
2240 | |
---|
2241 | \newbox\myboxB |
---|
2242 | \begin{lrbox}{\myboxB} |
---|
2243 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2244 | |
---|
2245 | monitor DS { |
---|
2246 | int GirlPhNo, BoyPhNo; |
---|
2247 | condition Girls[CCodes], Boys[CCodes]; |
---|
2248 | |
---|
2249 | }; |
---|
2250 | int girl( DS & mutex ds, int phNo, int ccode ) { |
---|
2251 | if ( empty( Boys[ccode] ) ) { // no compatible |
---|
2252 | wait( Girls[ccode] ); // wait for boy |
---|
2253 | GirlPhNo = phNo; // make phone number available |
---|
2254 | |
---|
2255 | } else { |
---|
2256 | GirlPhNo = phNo; // make phone number available |
---|
2257 | `signal_block( Boys[ccode] );` // restart boy |
---|
2258 | |
---|
2259 | } // if |
---|
2260 | return BoyPhNo; |
---|
2261 | } |
---|
2262 | int boy( DS & mutex ds, int phNo, int ccode ) { |
---|
2263 | // as above with boy/girl interchanged |
---|
2264 | } |
---|
2265 | \end{cfa} |
---|
2266 | \end{lrbox} |
---|
2267 | |
---|
2268 | \subfloat[\lstinline@signal@]{\label{f:DatingSignal}\usebox\myboxA} |
---|
2269 | \qquad |
---|
2270 | \subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB} |
---|
2271 | \caption{Dating service Monitor} |
---|
2272 | \label{f:DatingServiceMonitor} |
---|
2273 | \end{figure} |
---|
2274 | |
---|
2275 | Figure~\ref{f:DatingServiceMonitor} shows a dating service demonstrating nonblocking and blocking signaling. |
---|
2276 | The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers. |
---|
2277 | A thread blocks until an appropriate partner arrives. |
---|
2278 | The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property. |
---|
2279 | For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner. |
---|
2280 | For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent stack and unblocks the matcher. |
---|
2281 | Note, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number. |
---|
This situation shows that rechecking the waiting condition and waiting again (signals-as-hints) fails, requiring significant restructuring to account for barging.
---|
2283 | |
---|
2284 | Given external and internal scheduling, what guidelines can a programmer use to select between them? |
---|
2285 | In general, external scheduling is easier to understand and code because only the next logical action (mutex function(s)) is stated, and the monitor implicitly handles all the details. |
---|
2286 | Therefore, there are no condition variables, and hence, no wait and signal, which reduces coding complexity and synchronization errors. |
---|
2287 | If external scheduling is simpler than internal, why not use it all the time? |
---|
Unfortunately, external scheduling cannot be used if scheduling depends on parameter value(s), or if scheduling must block across an unknown series of calls on a condition variable, \ie internal scheduling.
---|
2289 | For example, the dating service cannot be written using external scheduling. |
---|
2290 | First, scheduling requires knowledge of calling parameters to make matching decisions and parameters of calling threads are unavailable within the monitor. |
---|
2291 | Specifically, a thread within the monitor cannot examine the @ccode@ of threads waiting on the calling queue to determine if there is a matching partner. |
---|
(Similarly, if the bounded buffer or readers/writer are restructured with a single interface function with a parameter denoting producer/consumer or reader/writer, they cannot be solved with external scheduling.)
---|
2293 | Second, a scheduling decision may be delayed across an unknown number of calls when there is no immediate match so the thread in the monitor must block on a condition. |
---|
2294 | Specifically, if a thread determines there is no opposite calling thread with the same @ccode@, it must wait an unknown period until a matching thread arrives. |
---|
For complex synchronization, both external and internal scheduling can be used to take advantage of the best properties of each.
---|
2296 | |
---|
2297 | Finally, both internal and external scheduling extend to multiple monitors in a natural way. |
---|
2298 | \begin{cquote} |
---|
2299 | \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}} |
---|
2300 | \begin{cfa} |
---|
2301 | monitor M { `condition e`; ... }; |
---|
2302 | void foo( M & mutex m1, M & mutex m2 ) { |
---|
2303 | ... wait( `e` ); ... // wait( e, m1, m2 ) |
---|
2304 | ... wait( `e, m1` ); ... |
---|
2305 | ... wait( `e, m2` ); ... |
---|
2306 | } |
---|
2307 | \end{cfa} |
---|
2308 | & |
---|
2309 | \begin{cfa} |
---|
2310 | void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); // overload rtn |
---|
2311 | void rtn$\(_2\)$( M & mutex m1 ); |
---|
2312 | void bar( M & mutex m1, M & mutex m2 ) { |
---|
2313 | ... waitfor( `rtn`${\color{red}\(_1\)}$ ); ... // $\LstCommentStyle{waitfor( rtn\(_1\) : m1, m2 )}$ |
---|
2314 | ... waitfor( `rtn${\color{red}\(_2\)}$ : m1` ); ... |
---|
2315 | } |
---|
2316 | \end{cfa} |
---|
2317 | \end{tabular} |
---|
2318 | \end{cquote} |
---|
For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@.
---|
2320 | To override the implicit multimonitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@. |
---|
2321 | Wait cannot statically verify the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@. |
---|
2322 | While \CC supports bulk locking, @wait@ only accepts a single lock for a condition queue, so bulk locking with condition queues is asymmetric. |
---|
2323 | Finally, a signaller, |
---|
2324 | \begin{cfa} |
---|
2325 | void baz( M & mutex m1, M & mutex m2 ) { |
---|
2326 | ... signal( e ); ... |
---|
2327 | } |
---|
2328 | \end{cfa} |
---|
2329 | must have acquired at least the same locks as the waiting thread signaled from a condition queue to allow the locks to be passed, and hence, prevent barging. |
---|
2330 | |
---|
2331 | Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn : m1, m2 )@. |
---|
2332 | To override the implicit multimonitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn : m1 )@. |
---|
2333 | @waitfor@ does statically verify the monitor types passed are the same as the acquired mutex-parameters of the given function or function pointer, hence the prototype must be accessible. |
---|
2334 | % When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted. |
---|
2335 | % The rationale is that functions with the same name should perform a similar actions, and therefore, all should be eligible to accept a call. |
---|
2336 | Overloaded functions can be disambiguated using a cast |
---|
2337 | \begin{cfa} |
---|
2338 | void rtn( M & mutex m ); |
---|
2339 | `int` rtn( M & mutex m ); |
---|
2340 | waitfor( (`int` (*)( M & mutex ))rtn : m ); |
---|
2341 | \end{cfa} |
---|
2342 | |
---|
2343 | The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock (see Section~\ref{s:MutexAcquisition}). |
---|
2344 | \begin{cfa} |
---|
2345 | void foo( M & mutex m1, M & mutex m2 ) { |
---|
2346 | ... wait( `e, m1` ); ... $\C{// release m1, keeping m2 acquired}$ |
---|
2347 | void bar( M & mutex m1, M & mutex m2 ) { $\C{// must acquire m1 and m2}$ |
---|
2348 | ... signal( `e` ); ... |
---|
2349 | \end{cfa} |
---|
The @wait@ only releases @m1@, so the signaling thread cannot acquire both @m1@ and @m2@ to enter @bar@ and @signal@ the condition.
---|
While deadlock can occur with multiple/nested acquisition, this is a consequence of locks, and by extension, monitor locking is not perfectly composable.
---|
2352 | |
---|
2353 | |
---|
2354 | \subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}} |
---|
2355 | \label{s:ExtendedWaitfor} |
---|
2356 | |
---|
2357 | Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes. |
---|
2358 | For a @waitfor@ clause to be executed, its @when@ must be true and an outstanding call to its corresponding function(s) must exist. |
---|
2359 | The \emph{conditional-expression} of a @when@ may call a function, but the function must not block or context switch. |
---|
2360 | If there are multiple acceptable mutex calls, selection is prioritized top-to-bottom among the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept nondeterministically for this case, \eg Go \lstinline[morekeywords=select]@select@. |
---|
2361 | If some accept guards are true and there are no outstanding calls to these functions, the acceptor is blocked until a call to one of these functions is made. |
---|
2362 | If there is a @timeout@ clause, it provides an upper bound on waiting. |
---|
2363 | If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead. |
---|
2364 | Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking. |
---|
2365 | If both @timeout@ and @else@ clause are present, the @else@ must be conditional, or the @timeout@ is never triggered. |
---|
2366 | % There is also a traditional future wait queue (not shown) (\eg Microsoft @WaitForMultipleObjects@), to wait for a specified number of future elements in the queue. |
---|
2367 | Finally, there is a shorthand for specifying multiple functions using the same set of monitors: @waitfor( f, g, h : m1, m2, m3 )@. |
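For example, the following hedged sketch (reusing the bounded-buffer @insert@/@remove@ mutex functions from Figure~\ref{f:BBExt}; the function @tryService@ is hypothetical) makes a nonblocking attempt to accept a call using the terminating @else@ clause.
\begin{cfa}
void tryService( Buffer(int) & mutex buf ) with( buf ) {
	when ( count < 10 ) waitfor( insert : buf ) { /* handled an insert */ }
	or when ( count > 0 ) waitfor( remove : buf ) { /* handled a remove */ }
	else { /* no outstanding call: return without blocking */ }
}
\end{cfa}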
---|
2368 | |
---|
2369 | \begin{figure} |
---|
2370 | \centering |
---|
2371 | \begin{cfa} |
---|
2372 | `when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$ |
---|
2373 | waitfor( $\emph{mutex-function-name}$ ) $\emph{statement}$ $\C{// action after call}$ |
---|
2374 | `or` `when` ( $\emph{conditional-expression}$ ) $\C{// any number of functions}$ |
---|
2375 | waitfor( $\emph{mutex-function-name}$ ) $\emph{statement}$ |
---|
2376 | `or` ... |
---|
2377 | `when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$ |
---|
2378 | `timeout` $\emph{statement}$ $\C{// optional terminating timeout clause}$ |
---|
2379 | `when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$ |
---|
2380 | `else` $\emph{statement}$ $\C{// optional terminating clause}$ |
---|
2381 | \end{cfa} |
---|
2382 | \caption{Extended \protect\lstinline@waitfor@} |
---|
2383 | \label{f:ExtendedWaitfor} |
---|
2384 | \end{figure} |
---|
2385 | |
---|
2386 | Note, a group of conditional @waitfor@ clauses is \emph{not} the same as a group of @if@ statements, \eg: |
---|
2387 | \begin{cfa} |
---|
2388 | if ( C1 ) waitfor( mem1 ); when ( C1 ) waitfor( mem1 ); |
---|
2389 | else if ( C2 ) waitfor( mem2 ); or when ( C2 ) waitfor( mem2 ); |
---|
2390 | \end{cfa} |
---|
2391 | The left example only accepts @mem1@ if @C1@ is true or only @mem2@ if @C2@ is true. |
---|
2392 | The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true. |
---|
2393 | Hence, the @waitfor@ has parallel semantics, accepting any true @when@ clause. |
---|
2394 | |
---|
2395 | An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructured from a monitor to a thread with the following @main@. |
---|
2396 | \begin{cfa} |
---|
2397 | void main( Buffer(T) & buffer ) with(buffer) { |
---|
2398 | for () { |
---|
2399 | `waitfor( ^?{} : buffer )` break; |
---|
2400 | or when ( count != 20 ) waitfor( insert : buffer ) { ... } |
---|
2401 | or when ( count != 0 ) waitfor( remove : buffer ) { ... } |
---|
2402 | } |
---|
2403 | // clean up |
---|
2404 | } |
---|
2405 | \end{cfa} |
---|
2406 | When the program main deallocates the buffer, it first calls the buffer's destructor, which is accepted, the destructor runs, and the buffer is deallocated. |
---|
2407 | However, the buffer thread cannot continue after the destructor call because the object is gone; |
---|
hence, cleanup in @main@ cannot occur, which means destructors for local objects are not run.
---|
2409 | To make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the destructor call is placed on urgent and the acceptor continues execution, which ends the loop, cleans up, and the thread terminates. |
---|
2410 | Then, the destructor caller unblocks from urgent to deallocate the object. |
---|
2411 | Accepting the destructor is the idiomatic way in \CFA to terminate a thread performing direct communication. |
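For example, a hedged sketch of a program @main@ driving this thread-based buffer, where block exit triggers the accepted destructor call, is:
\begin{cfa}
int main() {
	Buffer(int) buffer;	// thread-based buffer, implicit start of its main
	insert( buffer, 3 );	// mutex calls accepted by the buffer's waitfor loop
	sout | remove( buffer );
}	// implicit ^?{} call is accepted, the buffer thread cleans up and terminates, then deallocation completes
\end{cfa}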
---|
2412 | |
---|
2413 | |
---|
2414 | \subsection{Bulk barging prevention} |
---|
2415 | |
---|
2416 | Figure~\ref{f:BulkBargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signaling semantics. |
---|
2417 | The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors. |
---|
2418 | The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired. |
---|
2419 | When the signaling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread. |
---|
2420 | However, both the signaling and waiting threads W1 and W2 need some subset of monitors @m1@ and @m2@. |
---|
2421 | \begin{cquote} |
---|
2422 | condition c: (order 1) W2(@m2@), W1(@m1@,@m2@)\ \ \ or\ \ \ (order 2) W1(@m1@,@m2@), W2(@m2@) \\ |
---|
2423 | S: acq. @m1@ $\rightarrow$ acq. @m1,m2@ $\rightarrow$ @signal(c)@ $\rightarrow$ rel. @m2@ $\rightarrow$ pass @m2@ unblock W2 (order 2) $\rightarrow$ rel. @m1@ $\rightarrow$ pass @m1,m2@ unblock W1 \\ |
---|
2424 | \hspace*{2.75in}$\rightarrow$ rel. @m1@ $\rightarrow$ pass @m1,m2@ unblock W1 (order 1) |
---|
2425 | \end{cquote} |
---|
2426 | |
---|
2427 | \begin{figure} |
---|
2428 | \newbox\myboxA |
---|
2429 | \begin{lrbox}{\myboxA} |
---|
2430 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2431 | monitor M m1, m2; |
---|
2432 | condition c; |
---|
2433 | mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$ |
---|
2434 | ... |
---|
2435 | mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$ |
---|
2436 | ... `signal( c )`; ... |
---|
2437 | // m1, m2 still acquired |
---|
2438 | } // $\LstCommentStyle{\color{red}release m2}$ |
---|
2439 | // m1 acquired |
---|
2440 | } // release m1 |
---|
2441 | \end{cfa} |
---|
2442 | \end{lrbox} |
---|
2443 | |
---|
2444 | \newbox\myboxB |
---|
2445 | \begin{lrbox}{\myboxB} |
---|
2446 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2447 | |
---|
2448 | |
---|
2449 | mutex( m1 ) { |
---|
2450 | ... |
---|
2451 | mutex( m1, m2 ) { |
---|
2452 | ... `wait( c )`; // release m1, m2 |
---|
2453 | // m1, m2 reacquired |
---|
2454 | } // $\LstCommentStyle{\color{red}release m2}$ |
---|
2455 | // m1 acquired |
---|
2456 | } // release m1 |
---|
2457 | \end{cfa} |
---|
2458 | \end{lrbox} |
---|
2459 | |
---|
2460 | \newbox\myboxC |
---|
2461 | \begin{lrbox}{\myboxC} |
---|
2462 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2463 | |
---|
2464 | |
---|
2465 | mutex( m2 ) { |
---|
2466 | ... `wait( c )`; // release m2 |
---|
2467 | // m2 reacquired |
---|
2468 | } // $\LstCommentStyle{\color{red}release m2}$ |
---|
2469 | |
---|
2470 | |
---|
2471 | |
---|
2472 | |
---|
2473 | \end{cfa} |
---|
2474 | \end{lrbox} |
---|
2475 | |
---|
2476 | \begin{cquote} |
---|
2477 | \subfloat[Signalling Thread (S)]{\label{f:SignallingThread}\usebox\myboxA} |
---|
2478 | \hspace{3\parindentlnth} |
---|
2479 | \subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB} |
---|
2480 | \hspace{2\parindentlnth} |
---|
2481 | \subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC} |
---|
2482 | \end{cquote} |
---|
2483 | \caption{Bulk Barging Prevention} |
---|
2484 | \label{f:BulkBargingPrevention} |
---|
2485 | \end{figure} |
---|
2486 | |
---|
One scheduling solution is for the signaller S to keep ownership of all locks until the last lock is ready to be transferred, because this semantics most closely matches the behavior of single-monitor scheduling.
However, this solution is inefficient if W2 waited first, because @m2@ could be passed to W2 immediately when it is released, while S retains @m1@ until completion of the outer mutex statement.
If W1 waited first, the signaller must retain @m1@ and @m2@ until completion of the outer mutex statement and then pass both to W1.
---|
2490 | % Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves. |
---|
2491 | To support these efficient semantics and prevent barging, the implementation maintains a list of monitors acquired for each blocked thread. |
---|
2492 | When a signaller exits or waits in a mutex function or statement, the front waiter on urgent is unblocked if all its monitors are released. |
---|
2493 | Implementing a fast subset check for the necessarily released monitors is important and discussed in the following sections. |
---|
2494 | % The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. |
---|
2495 | |
---|
2496 | |
---|
2497 | \subsection{\texorpdfstring{\protect\lstinline@waitfor@ Implementation}{waitfor Implementation}} |
---|
2498 | \label{s:waitforImplementation} |
---|
2499 | |
---|
2500 | In a statically typed object-oriented programming language, a class has an exhaustive list of members, even when members are added via static inheritance (see Figure~\ref{f:uCinheritance}). |
---|
Knowing all members at compilation, even under separate compilation, allows them to be uniquely numbered, so the accept-statement implementation can use a fast and compact bit mask with an $O(1)$ comparison.
---|
2502 | |
---|
2503 | \begin{figure} |
---|
2504 | \centering |
---|
2505 | \begin{lrbox}{\myboxA} |
---|
2506 | \begin{uC++}[aboveskip=0pt,belowskip=0pt] |
---|
2507 | $\emph{translation unit 1}$ |
---|
2508 | _Monitor B { // common type in .h file |
---|
2509 | _Mutex virtual void `f`( ... ); |
---|
2510 | _Mutex virtual void `g`( ... ); |
---|
2511 | _Mutex virtual void w1( ... ) { ... _Accept(`f`, `g`); ... } |
---|
2512 | }; |
---|
2513 | $\emph{translation unit 2}$ |
---|
2514 | // include B |
---|
2515 | _Monitor D : public B { // inherit |
---|
2516 | _Mutex void `h`( ... ); // add |
---|
2517 | _Mutex void w2( ... ) { ... _Accept(`f`, `h`); ... } |
---|
2518 | }; |
---|
2519 | \end{uC++} |
---|
2520 | \end{lrbox} |
---|
2521 | |
---|
2522 | \begin{lrbox}{\myboxB} |
---|
2523 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2524 | $\emph{translation unit 1}$ |
---|
2525 | monitor M { ... }; // common type in .h file |
---|
2526 | void `f`( M & mutex m, ... ); |
---|
2527 | void `g`( M & mutex m, ... ); |
---|
2528 | void w1( M & mutex m, ... ) { ... waitfor(`f`, `g` : m); ... } |
---|
2529 | |
---|
2530 | $\emph{translation unit 2}$ |
---|
2531 | // include M |
---|
2532 | extern void `f`( M & mutex m, ... ); // import f but not g |
---|
2533 | void `h`( M & mutex m ); // add |
---|
2534 | void w2( M & mutex m, ... ) { ... waitfor(`f`, `h` : m); ... } |
---|
2535 | |
---|
2536 | \end{cfa} |
---|
2537 | \end{lrbox} |
---|
2538 | |
---|
2539 | \subfloat[\uC]{\label{f:uCinheritance}\usebox\myboxA} |
---|
2540 | \hspace{3pt} |
---|
2541 | \vrule |
---|
2542 | \hspace{3pt} |
---|
2543 | \subfloat[\CFA]{\label{f:CFinheritance}\usebox\myboxB} |
---|
2544 | \caption{Member / function visibility} |
---|
2545 | \label{f:MemberFunctionVisibility} |
---|
2546 | \end{figure} |
---|
2547 | |
---|
However, the @waitfor@ statement in translation unit 2 (see Figure~\ref{f:CFinheritance}) cannot see function @g@ in translation unit 1, precluding a unique numbering for a bit mask, because the monitor type only carries the protected shared data.
---|
2549 | (A possible way to construct a dense mapping is at link or load-time.) |
---|
2550 | Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array. |
---|
2551 | Then, the same implementation approach used for the urgent stack (see Section~\ref{s:Scheduling}) is used for the calling queue. |
---|
2552 | Each caller has a list of monitors acquired, and the @waitfor@ statement performs a short linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transferred. |
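The following sketch illustrates this matching step; the types and names are hypothetical, the clause array is given a fixed bound, and a simple linear subset check stands in for the faster scheme used by the runtime:
\begin{cfa}
typedef void (* mutex_func)();                    // identify a mutex function by its address
struct accept_clause { mutex_func func; void * mons[4]; int nmon; }; // monitors named in one waitfor clause
static int subset( void * need[], int nneed, void * held[], int nheld ) {
	for ( int i = 0; i < nneed; i += 1 ) {        // every needed monitor must be held by the acceptor
		int found = 0;
		for ( int j = 0; j < nheld; j += 1 ) if ( need[i] == held[j] ) { found = 1; break; }
		if ( ! found ) return 0;
	}
	return 1;
}
// index of the clause fulfilled by a call to "called" while holding "held", or -1 if none matches
static int match( struct accept_clause clauses[], int nclauses, mutex_func called, void * held[], int nheld ) {
	for ( int c = 0; c < nclauses; c += 1 )
		if ( clauses[c].func == called && subset( clauses[c].mons, clauses[c].nmon, held, nheld ) ) return c;
	return -1;
}
\end{cfa}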
---|
2553 | |
---|
2554 | |
---|
2555 | \subsection{Multimonitor scheduling} |
---|
2556 | \label{s:Multi-MonitorScheduling} |
---|
2557 | |
---|
2558 | External scheduling, like internal scheduling, becomes significantly more complex for multimonitor semantics. |
---|
2559 | Even in the simplest case, new semantics need to be established. |
---|
2560 | \begin{cfa} |
---|
2561 | monitor M { ... }; |
---|
2562 | void f( M & mutex m1 ); |
---|
2563 | void g( M & mutex m1, M & mutex m2 ) { `waitfor( f );` } $\C{// pass m1 or m2 to f?}$ |
---|
2564 | \end{cfa} |
---|
2565 | The solution is for the programmer to disambiguate: |
---|
2566 | \begin{cfa} |
---|
2567 | waitfor( f : `m2` ); $\C{// wait for call to f with argument m2}$ |
---|
2568 | \end{cfa} |
---|
2569 | Both locks are acquired by function @g@, so when function @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@. |
---|
2570 | This behavior can be extended to the multimonitor @waitfor@ statement. |
---|
2571 | \begin{cfa} |
---|
2572 | monitor M { ... }; |
---|
2573 | void f( M & mutex m1, M & mutex m2 ); |
---|
void g( M & mutex m1, M & mutex m2 ) { waitfor( f : `m1, m2` ); } $\C{// wait for call to f with arguments m1 and m2}$
---|
2575 | \end{cfa} |
---|
2576 | Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting function. |
---|
2577 | % Also, the order of the monitors in a @waitfor@ statement must match the order of the mutex parameters. |
---|
2578 | |
---|
2579 | Figure~\ref{f:UnmatchedMutexSets} shows internal and external scheduling with multiple monitors that must match exactly with a signaling or accepting thread, \ie partial matching results in waiting. |
---|
2580 | In both cases, the set of monitors is disjoint so unblocking is impossible. |
---|
2581 | |
---|
2582 | \begin{figure} |
---|
2583 | \centering |
---|
2584 | \begin{lrbox}{\myboxA} |
---|
2585 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2586 | monitor M1 {} m11, m12; |
---|
2587 | monitor M2 {} m2; |
---|
2588 | condition c; |
---|
2589 | void f( M1 & mutex m1, M2 & mutex m2 ) { |
---|
2590 | signal( c ); |
---|
2591 | } |
---|
2592 | void g( M1 & mutex m1, M2 & mutex m2 ) { |
---|
2593 | wait( c ); |
---|
2594 | } |
---|
2595 | g( `m11`, m2 ); // block on wait |
---|
2596 | f( `m12`, m2 ); // cannot fulfil |
---|
2597 | \end{cfa} |
---|
2598 | \end{lrbox} |
---|
2599 | |
---|
2600 | \begin{lrbox}{\myboxB} |
---|
2601 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2602 | monitor M1 {} m11, m12; |
---|
2603 | monitor M2 {} m2; |
---|
2604 | |
---|
2605 | void f( M1 & mutex m1, M2 & mutex m2 ) { |
---|
2606 | |
---|
2607 | } |
---|
2608 | void g( M1 & mutex m1, M2 & mutex m2 ) { |
---|
2609 | waitfor( f : m1, m2 ); |
---|
2610 | } |
---|
2611 | g( `m11`, m2 ); // block on accept |
---|
2612 | f( `m12`, m2 ); // cannot fulfil |
---|
2613 | \end{cfa} |
---|
2614 | \end{lrbox} |
---|
2615 | \subfloat[Internal scheduling]{\label{f:InternalScheduling}\usebox\myboxA} |
---|
2616 | \hspace{3pt} |
---|
2617 | \vrule |
---|
2618 | \hspace{3pt} |
---|
2619 | \subfloat[External scheduling]{\label{f:ExternalScheduling}\usebox\myboxB} |
---|
2620 | \caption{Unmatched \protect\lstinline@mutex@ sets} |
---|
2621 | \label{f:UnmatchedMutexSets} |
---|
2622 | \end{figure} |
---|
2623 | |
---|
2624 | \begin{figure} |
---|
2625 | \centering |
---|
2626 | \begin{lrbox}{\myboxA} |
---|
2627 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
---|
2628 | |
---|
2629 | struct Msg { int i, j; }; |
---|
2630 | mutex thread GoRtn { int i; float f; Msg m; }; |
---|
2631 | void mem1( GoRtn & mutex gortn, int i ) { gortn.i = i; } |
---|
2632 | void mem2( GoRtn & mutex gortn, float f ) { gortn.f = f; } |
---|
2633 | void mem3( GoRtn & mutex gortn, Msg m ) { gortn.m = m; } |
---|
2634 | void ^?{}( GoRtn & mutex ) {} |
---|
2635 | |
---|
2636 | void main( GoRtn & mutex gortn ) with(gortn) { // thread starts |
---|
2637 | |
---|
2638 | for () { |
---|
2639 | |
---|
2640 | `waitfor( mem1 : gortn )` sout | i; // wait for calls |
---|
2641 | or `waitfor( mem2 : gortn )` sout | f; |
---|
2642 | or `waitfor( mem3 : gortn )` sout | m.i | m.j; |
---|
2643 | or `waitfor( ^?{} : gortn )` break; // low priority |
---|
2644 | |
---|
2645 | } |
---|
2646 | |
---|
2647 | } |
---|
2648 | int main() { |
---|
2649 | GoRtn gortn; $\C[2.0in]{// start thread}$ |
---|
2650 | `mem1( gortn, 0 );` $\C{// different calls}\CRT$ |
---|
2651 | `mem2( gortn, 2.5 );` |
---|
2652 | `mem3( gortn, (Msg){1, 2} );` |
---|
2653 | |
---|
2654 | |
---|
2655 | } // wait for completion |
---|
2656 | \end{cfa} |
---|
2657 | \end{lrbox} |
---|
2658 | |
---|
2659 | \begin{lrbox}{\myboxB} |
---|
2660 | \begin{Go}[aboveskip=0pt,belowskip=0pt] |
---|
2661 | func main() { |
---|
2662 | type Msg struct{ i, j int } |
---|
2663 | |
---|
2664 | ch1 := make( chan int ) |
---|
2665 | ch2 := make( chan float32 ) |
---|
2666 | ch3 := make( chan Msg ) |
---|
2667 | hand := make( chan string ) |
---|
2668 | shake := make( chan string ) |
---|
2669 | gortn := func() { $\C[1.5in]{// thread starts}$ |
---|
2670 | var i int; var f float32; var m Msg |
---|
2671 | L: for { |
---|
2672 | select { $\C{// wait for messages}$ |
---|
2673 | case `i = <- ch1`: fmt.Println( i ) |
---|
2674 | case `f = <- ch2`: fmt.Println( f ) |
---|
2675 | case `m = <- ch3`: fmt.Println( m ) |
---|
2676 | case `<- hand`: break L $\C{// sentinel}$ |
---|
2677 | } |
---|
2678 | } |
---|
2679 | `shake <- "SHAKE"` $\C{// completion}$ |
---|
2680 | } |
---|
2681 | |
---|
2682 | go gortn() $\C{// start thread}$ |
---|
2683 | `ch1 <- 0` $\C{// different messages}$ |
---|
2684 | `ch2 <- 2.5` |
---|
2685 | `ch3 <- Msg{1, 2}` |
---|
2686 | `hand <- "HAND"` $\C{// sentinel value}$ |
---|
2687 | `<- shake` $\C{// wait for completion}\CRT$ |
---|
2688 | } |
---|
2689 | \end{Go} |
---|
2690 | \end{lrbox} |
---|
2691 | |
---|
2692 | \subfloat[\CFA]{\label{f:CFAwaitfor}\usebox\myboxA} |
---|
2693 | \hspace{3pt} |
---|
2694 | \vrule |
---|
2695 | \hspace{3pt} |
---|
2696 | \subfloat[Go]{\label{f:Gochannel}\usebox\myboxB} |
---|
2697 | \caption{Direct versus indirect communication} |
---|
2698 | \label{f:DirectCommunicationComparison} |
---|
2699 | |
---|
2700 | \medskip |
---|
2701 | |
---|
2702 | \begin{cfa} |
---|
2703 | mutex thread DatingService { |
---|
2704 | condition Girls[CompCodes], Boys[CompCodes]; |
---|
2705 | int girlPhoneNo, boyPhoneNo, ccode; |
---|
2706 | }; |
---|
2707 | int girl( DatingService & mutex ds, int phoneno, int code ) with( ds ) { |
---|
2708 | girlPhoneNo = phoneno; ccode = code; |
---|
2709 | `wait( Girls[ccode] );` $\C{// wait for boy}$ |
---|
2710 | girlPhoneNo = phoneno; return boyPhoneNo; |
---|
2711 | } |
---|
2712 | int boy( DatingService & mutex ds, int phoneno, int code ) with( ds ) { |
---|
2713 | boyPhoneNo = phoneno; ccode = code; |
---|
2714 | `wait( Boys[ccode] );` $\C{// wait for girl}$ |
---|
2715 | boyPhoneNo = phoneno; return girlPhoneNo; |
---|
2716 | } |
---|
2717 | void main( DatingService & ds ) with( ds ) { $\C{// thread starts, ds defaults to mutex}$ |
---|
2718 | for () { |
---|
2719 | waitfor( ^?{} ) break; $\C{// high priority}$ |
---|
2720 | or waitfor( girl ) $\C{// girl called, compatible boy ? restart boy then girl}$ |
---|
2721 | if ( ! is_empty( Boys[ccode] ) ) { `signal_block( Boys[ccode] ); signal_block( Girls[ccode] );` } |
---|
2722 | or waitfor( boy ) { $\C{// boy called, compatible girl ? restart girl then boy}$ |
---|
2723 | if ( ! is_empty( Girls[ccode] ) ) { `signal_block( Girls[ccode] ); signal_block( Boys[ccode] );` } |
---|
		}
	}
}
---|
2726 | \end{cfa} |
---|
2727 | \caption{Direct communication dating service} |
---|
2728 | \label{f:DirectCommunicationDatingService} |
---|
2729 | \end{figure} |
---|
2730 | |
---|
2731 | \begin{comment} |
---|
2732 | The following shows an example of two threads directly calling each other and accepting calls from each other in a cycle. |
---|
2733 | \begin{cfa} |
---|
2734 | \end{cfa} |
---|
2735 | \vspace{-0.8\baselineskip} |
---|
2736 | \begin{cquote} |
---|
2737 | \begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}} |
---|
2738 | \begin{cfa} |
---|
2739 | thread Ping {} pi; |
---|
2740 | void ping( Ping & mutex ) {} |
---|
2741 | void main( Ping & pi ) { |
---|
2742 | for ( 10 ) { |
---|
2743 | `waitfor( ping : pi );` |
---|
2744 | `pong( po );` |
---|
2745 | } |
---|
2746 | } |
---|
2747 | int main() {} |
---|
2748 | \end{cfa} |
---|
2749 | & |
---|
2750 | \begin{cfa} |
---|
2751 | thread Pong {} po; |
---|
2752 | void pong( Pong & mutex ) {} |
---|
2753 | void main( Pong & po ) { |
---|
2754 | for ( 10 ) { |
---|
2755 | `ping( pi );` |
---|
2756 | `waitfor( pong : po );` |
---|
2757 | } |
---|
2758 | } |
---|
2759 | |
---|
2760 | \end{cfa} |
---|
2761 | \end{tabular} |
---|
2762 | \end{cquote} |
---|
2763 | % \lstMakeShortInline@% |
---|
2764 | % \caption{Threads ping/pong using external scheduling} |
---|
2765 | % \label{f:pingpong} |
---|
2766 | % \end{figure} |
---|
2767 | Note, the ping/pong threads are globally declared, @pi@/@po@, and hence, start and possibly complete before the program main starts. |
---|
2768 | \end{comment} |
---|
2769 | |
---|
2770 | |
---|
2771 | \subsection{\texorpdfstring{\protect\lstinline@mutex@ Generators / coroutines / threads}{monitor Generators / coroutines / threads}} |
---|
2772 | |
---|
2773 | \CFA generators, coroutines, and threads can also be @mutex@ (Table~\ref{t:ExecutionPropertyComposition} cases 4, 6, 12) allowing safe \emph{direct communication} with threads, \ie the custom types can have mutex functions that are called by other threads. |
---|
2774 | All monitor features are available within these mutex functions. |
---|
2775 | For example, if the formatter generator or coroutine equivalent in Figure~\ref{f:CFAFormatGen} is extended with the monitor property and this interface function is used to communicate with the formatter: |
---|
2776 | \begin{cfa} |
---|
void fmt( Fmt & mutex fmt, char ch ) { fmt.ch = ch; resume( fmt ); }
---|
2778 | \end{cfa} |
---|
2779 | multiple threads can safely pass characters for formatting. |
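A usage sketch, where @Worker@ is a hypothetical client thread-type and @Fmt@ is the monitor-extended formatter from the figure, is:
\begin{cfa}
Fmt shared;                                   // one formatter shared by all worker threads
thread Worker {};
void main( Worker & ) {
	for ( i; 26 ) fmt( shared, 'a' + i );     // mutex call: characters from different threads interleave safely
}
int main() {
	Worker workers[4];                        // four threads formatting concurrently
}                                             // wait for workers to terminate
\end{cfa}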
---|
2780 | |
---|
2781 | Figure~\ref{f:DirectCommunicationComparison} shows a comparison of direct call-communication in \CFA versus indirect channel-communication in Go. |
---|
2782 | (Ada has a similar mechanism to \CFA direct communication.) |
---|
2783 | % The thread main function is by default @mutex@, so the @mutex@ qualifier for the thread parameter is optional. |
---|
2784 | % The reason is that the thread logically starts instantaneously in the thread main acquiring its mutual exclusion, so it starts before any calls to prepare for synchronizing these calls. |
---|
2785 | The \CFA program @main@ uses the call/return paradigm to directly communicate with the @GoRtn main@, whereas Go switches to the unbuffered channel paradigm to indirectly communicate with the goroutine. |
---|
2786 | Communication by multiple threads is safe for the @gortn@ thread via mutex calls in \CFA or channel assignment in Go. |
---|
The difference between a call and a channel send occurs for buffered channels, which make the send asynchronous.
---|
2788 | In \CFA, asynchronous call and multiple buffers are provided using an administrator and worker threads~\cite{Gentleman81} and/or futures (not discussed). |
---|
2789 | |
---|
2790 | Figure~\ref{f:DirectCommunicationDatingService} shows the dating-service problem in Figure~\ref{f:DatingServiceMonitor} extended from indirect monitor communication to direct thread communication. |
---|
2791 | When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted functions into the thread main so it does as much work as possible. |
---|
2792 | Notice, the dating server is postponing requests for an unspecified time while continuing to accept new requests. |
---|
2793 | For complex servers, \eg web-servers, there can be hundreds of lines of code in the thread main and safe interaction with clients can be complex. |
---|
2794 | |
---|
2795 | |
---|
2796 | \subsection{Low-level Locks} |
---|
2797 | |
---|
2798 | For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc. |
---|
2799 | Some of these low-level mechanisms are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible. |
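For illustration, a spinning test-and-set lock can be built from such atomic instructions; the gcc builtins are used here as stand-ins, as the \CFA names and signatures may differ:
\begin{cfa}
struct Spinlock { volatile char taken; };
static inline void lock( Spinlock & l ) {
	while ( __atomic_test_and_set( &l.taken, __ATOMIC_ACQUIRE ) ) {}  // spin until the flag is acquired
}
static inline void unlock( Spinlock & l ) {
	__atomic_clear( &l.taken, __ATOMIC_RELEASE );                     // release the flag
}
\end{cfa}
Such locks are best reserved for specialized situations, \eg building the runtime itself.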
---|
2800 | |
---|
2801 | |
---|
2802 | % \section{Parallelism} |
---|
2803 | % \label{s:Parallelism} |
---|
2804 | % |
---|
2805 | % Historically, computer performance was about processor speeds. |
---|
2806 | % However, with heat dissipation being a direct consequence of speed increase, parallelism is the new source for increased performance~\cite{Sutter05, Sutter05b}. |
---|
2807 | % Therefore, high-performance applications must care about parallelism, which requires concurrency. |
---|
2808 | % The lowest-level approach of parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc. |
---|
2809 | % However, kernel threads are better as an implementation tool because of complexity and higher cost. |
---|
2810 | % Therefore, different abstractions are often layered onto kernel threads to simplify them, \eg pthreads. |
---|
2811 | % |
---|
2812 | % |
---|
2813 | % \subsection{User threads} |
---|
2814 | % |
---|
2815 | % A direct improvement on kernel threads is user threads, \eg Erlang~\cite{Erlang} and \uC~\cite{uC++book}. |
---|
2816 | % This approach provides an interface that matches the language paradigms, gives more control over concurrency by the language runtime, and an abstract (and portable) interface to the underlying kernel threads across operating systems. |
---|
2817 | % In many cases, user threads can be used on a much larger scale (100,000 threads). |
---|
2818 | % Like kernel threads, user threads support preemption, which maximizes nondeterminism, but increases the potential for concurrency errors: race, livelock, starvation, and deadlock. |
---|
2819 | % \CFA adopts user-threads to provide more flexibility and a low-cost mechanism to build any other concurrency approach, \eg thread pools and actors~\cite{Actors}. |
---|
2820 | % |
---|
2821 | % A variant of user thread is \newterm{fibres}, which removes preemption, \eg Go~\cite{Go} @goroutine@s. |
---|
2822 | % Like functional programming, which removes mutation and its associated problems, removing preemption from concurrency reduces nondeterminism, making race and deadlock errors more difficult to generate. |
---|
2823 | % However, preemption is necessary for fairness and to reduce tail-latency. |
---|
2824 | % For concurrency that relies on spinning, if all cores spin the system is livelocked, whereas preemption breaks the livelock. |
---|
2825 | |
---|
2826 | |
---|
2827 | \begin{comment} |
---|
2828 | \subsection{Thread pools} |
---|
2829 | |
---|
2830 | In contrast to direct threading is indirect \newterm{thread pools}, \eg Java @executor@, where small jobs (work units) are inserted into a work pool for execution. |
---|
2831 | If the jobs are dependent, \ie interact, there is an implicit dependency graph that ties them together. |
---|
2832 | While removing direct concurrency, and hence the amount of context switching, thread pools significantly limit the interaction that can occur among jobs. |
---|
2833 | Indeed, jobs should not block because that also blocks the underlying thread, which effectively means the CPU utilization, and therefore throughput, suffers. |
---|
2834 | While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases. |
---|
2835 | As well, concurrency errors return, which threads pools are suppose to mitigate. |
---|
2836 | |
---|
2837 | \begin{figure} |
---|
2838 | \centering |
---|
2839 | \begin{tabular}{@{}l|l@{}} |
---|
2840 | \begin{cfa} |
---|
2841 | struct Adder { |
---|
2842 | int * row, cols; |
---|
2843 | }; |
---|
2844 | int operator()() { |
---|
2845 | subtotal = 0; |
---|
2846 | for ( int c = 0; c < cols; c += 1 ) |
---|
2847 | subtotal += row[c]; |
---|
2848 | return subtotal; |
---|
2849 | } |
---|
2850 | void ?{}( Adder * adder, int row[$\,$], int cols, int & subtotal ) { |
---|
2851 | adder.[rows, cols, subtotal] = [rows, cols, subtotal]; |
---|
2852 | } |
---|
2853 | |
---|
2854 | |
---|
2855 | |
---|
2856 | |
---|
2857 | \end{cfa} |
---|
2858 | & |
---|
2859 | \begin{cfa} |
---|
2860 | int main() { |
---|
2861 | const int rows = 10, cols = 10; |
---|
2862 | int matrix[rows][cols], subtotals[rows], total = 0; |
---|
2863 | // read matrix |
---|
2864 | Executor executor( 4 ); // kernel threads |
---|
2865 | Adder * adders[rows]; |
---|
2866 | for ( r; rows ) { // send off work for executor |
---|
2867 | adders[r] = new( matrix[r], cols, &subtotal[r] ); |
---|
2868 | executor.send( *adders[r] ); |
---|
2869 | } |
---|
2870 | for ( r; rows ) { // wait for results |
---|
2871 | delete( adders[r] ); |
---|
2872 | total += subtotals[r]; |
---|
2873 | } |
---|
2874 | sout | total; |
---|
2875 | } |
---|
2876 | \end{cfa} |
---|
2877 | \end{tabular} |
---|
2878 | \caption{Executor} |
---|
2879 | \end{figure} |
---|
2880 | \end{comment} |
---|
2881 | |
---|
2882 | |
---|
2883 | \section{Runtime Structure} |
---|
2884 | \label{s:CFARuntimeStructure} |
---|
2885 | |
---|
2886 | Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program. |
---|
2887 | In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution: cluster and (virtual) processor. |
---|
2888 | An executing thread is illustrated by its containment in a processor. |
---|
2889 | |
---|
2890 | \begin{figure} |
---|
2891 | \centering |
---|
2892 | \input{RunTimeStructure} |
---|
2893 | \caption{\CFA Runtime structure} |
---|
2894 | \label{f:RunTimeStructure} |
---|
2895 | \end{figure} |
---|
2896 | |
---|
2897 | |
---|
2898 | \subsection{Cluster} |
---|
2899 | \label{s:RuntimeStructureCluster} |
---|
2900 | |
---|
2901 | A \newterm{cluster} is a collection of user and kernel threads, where the kernel threads run the user threads from the cluster's ready queue, and the operating system runs the kernel threads on the processors from its ready queue~\cite{Buhr90a}. |
---|
2902 | The term \newterm{virtual processor} is introduced as a synonym for kernel thread to disambiguate between user and kernel thread. |
---|
2903 | From the language perspective, a virtual processor is an actual processor (core). |
---|
2904 | |
---|
2905 | The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults. |
---|
2906 | The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors. |
---|
However, the design allows changing the scheduler, \eg multi-queue multi-server with work-stealing/sharing across the virtual processors.
If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
---|
2909 | No automatic load balancing among clusters is performed by \CFA. |
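Clusters and processors are declared like other \CFA objects; a sketch of the intended usage is (the exact constructor signatures may differ):
\begin{cfa}
cluster clus;                          // cluster with its own ready queue and scheduling defaults
processor p1( clus ), p2( clus );      // two virtual processors (kernel threads) servicing clus
\end{cfa}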
---|
2910 | |
---|
When a \CFA program begins execution, it creates a user cluster with a single processor, plus a special processor, which does not execute user threads, to handle preemption.
---|
2912 | The user cluster is created to contain the application user-threads. |
---|
2913 | Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime. |
---|
2914 | However, because of limitations of scheduling requirements (real-time), NUMA architecture, heterogeneous hardware, or issues with the underlying operating system, multiple clusters are sometimes necessary. |
---|
2915 | |
---|
2916 | |
---|
2917 | \subsection{Virtual processor} |
---|
2918 | \label{s:RuntimeStructureProcessor} |
---|
2919 | |
---|
A virtual processor is implemented by a kernel thread, \eg a UNIX process, which is scheduled for execution on a hardware processor by the underlying operating system.
---|
2921 | Programs may use more virtual processors than hardware processors. |
---|
2922 | On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel. |
---|
2923 | (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux,affinityWindows}, which is used when caching issues occur or for heterogeneous hardware processors.) %, affinityFreebsd, affinityNetbsd, affinityMacosx |
---|
2924 | The \CFA runtime attempts to block unused processors and unblock processors as the system load increases; |
---|
balancing the workload with processors is difficult because it requires future knowledge, \ie what the application workload will do next.
---|
2926 | Preemption occurs on virtual processors rather than user threads, via operating-system interrupts. |
---|
Thus, virtual processors execute user threads, but the preemption frequency applies to the virtual processor, so preemption occurs randomly across the executed user threads.
---|
2928 | Turning off preemption transforms user threads into fibres. |
---|
2929 | |
---|
2930 | |
---|
2931 | \begin{comment} |
---|
2932 | \section{Implementation} |
---|
2933 | \label{s:Implementation} |
---|
2934 | |
---|
2935 | A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is almost free of allocations. |
---|
2936 | All blocking operations are made by parking threads onto queues, therefore all queues are designed with intrusive nodes, where each node has preallocated link fields for chaining. |
---|
2937 | Furthermore, several bulk-acquire operations need a variable amount of memory. |
---|
2938 | This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks. |
---|
2939 | |
---|
2940 | In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects have distinct nonoverlapping memory layouts, and mutual-exclusion for a monitor is only defined for its lifetime. |
---|
2941 | When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted. |
---|
2942 | This array persists for the entire duration of the mutual exclusion and is used extensively for synchronization operations. |
---|
2943 | |
---|
2944 | To improve performance and simplicity, context switching occurs inside a function call, so only callee-saved registers are copied onto the stack and then the stack register is switched; |
---|
2945 | the corresponding registers are then restored for the other context. |
---|
2946 | Note, the instruction pointer is untouched since the context switch is always inside the same function. |
---|
2947 | Experimental results (not presented) for a stackless or stackful scheduler (1 versus 2 context switches) (see Section~\ref{s:Concurrency}) show the performance is virtually equivalent, because both approaches are dominated by locking to prevent a race condition. |
---|
2948 | |
---|
2949 | All kernel threads (@pthreads@) created a stack. |
---|
2950 | Each \CFA virtual processor is implemented as a coroutine and these coroutines run directly on the kernel-thread stack, effectively stealing this stack. |
---|
2951 | The exception to this rule is the program main, \ie the initial kernel thread that is given to any program. |
---|
2952 | In order to respect C expectations, the stack of the initial kernel thread is used by program main rather than the main processor, allowing it to grow dynamically as in a normal C program. |
---|
2953 | \end{comment} |
---|
2954 | |
---|
2955 | |
---|
2956 | \subsection{Preemption} |
---|
2957 | |
---|
Nondeterministic preemption provides fairness in the presence of long-running threads and forces concurrent programmers to write more robust programs, rather than relying on the code between cooperative scheduling points being atomic.
---|
2959 | This atomic reliance can fail on multicore machines, because execution across cores is nondeterministic. |
---|
A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg the Windows runtime does not support interrupts, and on Linux systems, interrupts are complex (see below).
---|
2961 | Preemption is normally handled by setting a countdown timer on each virtual processor. |
---|
When the timer expires, an interrupt is delivered, and its signal handler resets the countdown timer.
If the virtual processor is executing user code, the signal handler performs a user-level context switch; if it is executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
---|
2963 | Multiple signal handlers may be pending. |
---|
2964 | When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered. |
---|
2965 | The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler; |
---|
2966 | therefore, the same signal mask is required for all virtual processors in a cluster. |
---|
Because the preemption interval is usually long (1 ms), the performance cost is negligible.
---|
2968 | |
---|
2969 | Linux switched a decade ago from specific to arbitrary virtual-processor signal-delivery for applications with multiple kernel threads. |
---|
2970 | In the new semantics, a virtual-processor directed signal may be delivered to any virtual processor created by the application that does not have the signal blocked. |
---|
2971 | Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads). |
---|
2972 | To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events. |
---|
2973 | Virtual processors register an expiration time with the discrete-event simulator, which is inserted in sorted order. |
---|
2974 | The simulation sets the countdown timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed. |
---|
2975 | Processing a preemption event sends an \emph{internal} @SIGUSR1@ signal to the registered virtual processor, which is always delivered to that processor. |
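A minimal sketch of this event-processing step, using @pthread_kill@ to direct the internal signal at a specific kernel thread (the actual runtime data structures differ), is:
\begin{cfa}
#include <signal.h>
#include <pthread.h>
struct Alarm { long long when; pthread_t vproc; struct Alarm * next; };  // event list sorted by expiration
static void expire( struct Alarm ** events, long long now ) {
	while ( *events && (*events)->when <= now ) {       // all events at or before the current time
		pthread_kill( (*events)->vproc, SIGUSR1 );       // internal signal delivered to that kernel thread
		*events = (*events)->next;                       // advance to the next registered expiration
	}
}
\end{cfa}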
---|
2976 | |
---|
2977 | |
---|
2978 | \subsection{Debug kernel} |
---|
2979 | |
---|
2980 | There are two versions of the \CFA runtime kernel: debug and nondebug. |
---|
The debugging version has many runtime checks and internal assertions, \eg a nonwritable stack guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catch most stack overflows.
---|
2982 | After a program is debugged, the nondebugging version can be used to significantly decrease space and increase performance. |
---|
2983 | |
---|
2984 | |
---|
2985 | \section{Performance} |
---|
2986 | \label{s:Performance} |
---|
2987 | |
---|
2988 | To test the performance of the \CFA runtime, a series of microbenchmarks are used to compare \CFA with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0. |
---|
For comparison, the package must be multiprocessor (M:N), which excludes libdill and libmill~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
---|
2990 | The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and pthreads/\CFA/\uC are compiled with gcc 9.2.1. |
---|
2991 | |
---|
2992 | All benchmarks are run using the following harness. |
---|
2993 | (The Java harness is augmented to circumvent JIT issues.) |
---|
2994 | \begin{cfa} |
---|
2995 | #define BENCH( `run` ) uint64_t start = cputime_ns(); `run;` double result = (double)(cputime_ns() - start) / N; |
---|
2996 | \end{cfa} |
---|
2997 | where CPU time in nanoseconds is from the appropriate language clock. |
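One plausible implementation of the @cputime_ns@ helper on Linux (each language actually uses its own clock facility) is:
\begin{cfa}
#include <stdint.h>
#include <time.h>
static inline uint64_t cputime_ns() {
	struct timespec ts;
	clock_gettime( CLOCK_PROCESS_CPUTIME_ID, &ts );               // CPU time consumed by the process
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;      // convert to nanoseconds
}
\end{cfa}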
---|
2998 | Each benchmark is performed @N@ times, where @N@ is selected so the benchmark runs in the range of 2--20 s for the specific programming language; |
---|
2999 | each @N@ appears after the experiment name in the following tables. |
---|
3000 | The total time is divided by @N@ to obtain the average time for a benchmark. |
---|
3001 | Each benchmark experiment is run 13 times and the average appears in the table. |
---|
3002 | For languages with a runtime JIT (Java, Node.js, Python), a single half-hour long experiment is run to check stability; |
---|
3003 | all long-experiment results are statistically equivalent, \ie median/average/SD correlate with the short-experiment results, indicating the short experiments reached a steady state. |
---|
3004 | All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallConcurrentBenchmarks}. |
---|
3005 | |
---|
3006 | \subsection{Creation} |
---|
3007 | |
---|
3008 | Creation is measured by creating and deleting a specific kind of control-flow object. |
---|
3009 | Figure~\ref{f:creation} shows the code for \CFA with results in Table~\ref{t:creation}. |
---|
3010 | Note, the call stacks of \CFA coroutines are lazily created on the first resume, therefore the cost of creation with and without a stack are presented. |
---|
3011 | |
---|
3012 | \begin{multicols}{2} |
---|
3013 | \begin{cfa}[xleftmargin=0pt] |
---|
3014 | `coroutine` MyCoroutine {}; |
---|
3015 | void ?{}( MyCoroutine & this ) { |
---|
3016 | #ifdef EAGER |
---|
3017 | resume( this ); |
---|
3018 | #endif |
---|
3019 | } |
---|
3020 | void main( MyCoroutine & ) {} |
---|
3021 | int main() { |
---|
3022 | BENCH( for ( N ) { `MyCoroutine c;` } ) |
---|
3023 | sout | result; |
---|
3024 | } |
---|
3025 | \end{cfa} |
---|
3026 | \captionof{figure}{\CFA creation benchmark} |
---|
3027 | \label{f:creation} |
---|
3028 | |
---|
3029 | \columnbreak |
---|
3030 | |
---|
3031 | \vspace*{-16pt} |
---|
3032 | \captionof{table}{Creation comparison (nanoseconds)} |
---|
3033 | \label{t:creation} |
---|
3034 | |
---|
3035 | \begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}} |
---|
3036 | \multicolumn{1}{@{}r}{Object(N)\hspace*{10pt}} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ |
---|
3037 | \CFA generator (1B) & 0.6 & 0.6 & 0.0 \\ |
---|
3038 | \CFA coroutine lazy (100M) & 13.4 & 13.1 & 0.5 \\ |
---|
3039 | \CFA coroutine eager (10M) & 144.7 & 143.9 & 1.5 \\ |
---|
3040 | \CFA thread (10M) & 466.4 & 468.0 & 11.3 \\ |
---|
3041 | \uC coroutine (10M) & 155.6 & 155.7 & 1.7 \\ |
---|
3042 | \uC thread (10M) & 523.4 & 523.9 & 7.7 \\ |
---|
3043 | Python generator (10M) & 123.2 & 124.3 & 4.1 \\ |
---|
3044 | Node.js generator (10M) & 33.4 & 33.5 & 0.3 \\ |
---|
3045 | Goroutine thread (10M) & 751.0 & 750.5 & 3.1 \\ |
---|
3046 | Rust tokio thread (10M) & 1860.0 & 1881.1 & 37.6 \\ |
---|
3047 | Rust thread (250K) & 53801.0 & 53896.8 & 274.9 \\ |
---|
3048 | Java thread (250K) & 119256.0 & 119679.2 & 2244.0 \\ |
---|
3049 | % Java thread (1 000 000) & 123100.0 & 123052.5 & 751.6 \\ |
---|
3050 | Pthreads thread (250K) & 31465.5 & 31419.5 & 140.4 |
---|
3051 | \end{tabular} |
---|
3052 | \end{multicols} |
---|
3053 | |
---|
3054 | \vspace*{-10pt} |
---|
3055 | \subsection{Internal scheduling} |
---|
3056 | |
---|
3057 | Internal scheduling is measured using a cycle of two threads signaling and waiting. |
---|
3058 | Figure~\ref{f:schedint} shows the code for \CFA, with results in Table~\ref{t:schedint}. |
---|
3059 | Note, the \CFA incremental cost for bulk acquire is a fixed cost for small numbers of mutex objects. |
---|
3060 | User-level threading has one kernel thread, eliminating contention between the threads (direct handoff of the kernel thread). |
---|
3061 | Kernel-level threading has two kernel threads allowing some contention. |
---|
3062 | |
---|
3063 | \begin{multicols}{2} |
---|
3064 | \setlength{\tabcolsep}{3pt} |
---|
3065 | \begin{cfa}[xleftmargin=0pt] |
---|
3066 | volatile int go = 0; |
---|
3067 | `condition c;` |
---|
3068 | `monitor` M {} m1/*, m2, m3, m4*/; |
---|
3069 | void call( M & `mutex p1/*, p2, p3, p4*/` ) { |
---|
3070 | `signal( c );` |
---|
3071 | } |
---|
3072 | void wait( M & `mutex p1/*, p2, p3, p4*/` ) { |
---|
3073 | go = 1; // continue other thread |
---|
	for ( N ) { `wait( c );` }
---|
3075 | } |
---|
3076 | thread T {}; |
---|
3077 | void main( T & ) { |
---|
3078 | while ( go == 0 ) { yield(); } // waiter must start first |
---|
3079 | BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } ) |
---|
3080 | sout | result; |
---|
3081 | } |
---|
3082 | int main() { |
---|
3083 | T t; |
---|
3084 | wait( m1/*, m2, m3, m4*/ ); |
---|
3085 | } |
---|
3086 | \end{cfa} |
---|
3087 | \vspace*{-8pt} |
---|
3088 | \captionof{figure}{\CFA Internal-scheduling benchmark} |
---|
3089 | \label{f:schedint} |
---|
3090 | |
---|
3091 | \columnbreak |
---|
3092 | |
---|
3093 | \vspace*{-16pt} |
---|
3094 | \captionof{table}{Internal-scheduling comparison (nanoseconds)} |
---|
3095 | \label{t:schedint} |
---|
3096 | \bigskip |
---|
3097 | |
---|
3098 | \begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}} |
---|
3099 | \multicolumn{1}{@{}r}{Object(N)\hspace*{10pt}} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ |
---|
3100 | \CFA @signal@, 1 monitor (10M) & 364.4 & 364.2 & 4.4 \\ |
---|
3101 | \CFA @signal@, 2 monitor (10M) & 484.4 & 483.9 & 8.8 \\ |
---|
3102 | \CFA @signal@, 4 monitor (10M) & 709.1 & 707.7 & 15.0 \\ |
---|
3103 | \uC @signal@ monitor (10M) & 328.3 & 327.4 & 2.4 \\ |
---|
3104 | Rust cond. variable (1M) & 7514.0 & 7437.4 & 397.2 \\ |
---|
3105 | Java @notify@ monitor (1M) & 8717.0 & 8774.1 & 471.8 \\ |
---|
3106 | % Java @notify@ monitor (100 000 000) & 8634.0 & 8683.5 & 330.5 \\ |
---|
3107 | Pthreads cond. variable (1M) & 5553.7 & 5576.1 & 345.6 |
---|
3108 | \end{tabular} |
---|
3109 | \end{multicols} |
---|
3110 | |
---|
3111 | |
---|
3112 | \subsection{External scheduling} |
---|
3113 | |
---|
3114 | External scheduling is measured using a cycle of two threads calling and accepting the call using the @waitfor@ statement. |
---|
3115 | Figure~\ref{f:schedext} shows the code for \CFA with results in Table~\ref{t:schedext}. |
---|
3116 | Note, the \CFA incremental cost for bulk acquire is a fixed cost for small numbers of mutex objects. |
---|
3117 | |
---|
3118 | \begin{multicols}{2} |
---|
3119 | \setlength{\tabcolsep}{5pt} |
---|
3120 | \vspace*{-16pt} |
---|
3121 | \begin{cfa}[xleftmargin=0pt] |
---|
3122 | `monitor` M {} m1/*, m2, m3, m4*/; |
---|
3123 | void call( M & `mutex p1/*, p2, p3, p4*/` ) {} |
---|
3124 | void wait( M & `mutex p1/*, p2, p3, p4*/` ) { |
---|
3125 | for ( N ) { `waitfor( call : p1/*, p2, p3, p4*/ );` } |
---|
3126 | } |
---|
3127 | thread T {}; |
---|
3128 | void main( T & ) { |
---|
3129 | BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } ) |
---|
3130 | sout | result; |
---|
3131 | } |
---|
3132 | int main() { |
---|
3133 | T t; |
---|
3134 | wait( m1/*, m2, m3, m4*/ ); |
---|
3135 | } |
---|
3136 | \end{cfa} |
---|
3137 | \captionof{figure}{\CFA external-scheduling benchmark} |
---|
3138 | \label{f:schedext} |
---|
3139 | |
---|
3140 | \columnbreak |
---|
3141 | |
---|
3142 | \vspace*{-18pt} |
---|
3143 | \captionof{table}{External-scheduling comparison (nanoseconds)} |
---|
3144 | \label{t:schedext} |
---|
3145 | \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} |
---|
3146 | \multicolumn{1}{@{}r}{Object(N)\hspace*{10pt}} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ |
---|
3147 | \CFA @waitfor@, 1 monitor (10M) & 367.1 & 365.3 & 5.0 \\ |
---|
3148 | \CFA @waitfor@, 2 monitor (10M) & 463.0 & 464.6 & 7.1 \\ |
---|
3149 | \CFA @waitfor@, 4 monitor (10M) & 689.6 & 696.2 & 21.5 \\ |
---|
3150 | \uC \lstinline[language=uC++]|_Accept| monitor (10M) & 328.2 & 329.1 & 3.4 \\ |
---|
3151 | Go \lstinline[language=Golang]|select| channel (10M) & 365.0 & 365.5 & 1.2 |
---|
3152 | \end{tabular} |
---|
3153 | \end{multicols} |
---|
3154 | |
---|
3155 | \subsection{Mutual-Exclusion} |
---|
3156 | |
---|
Uncontended mutual exclusion, which frequently occurs, is measured by entering and leaving a critical section.
---|
3158 | For monitors, entering and leaving a mutex function are measured, otherwise the language-appropriate mutex-lock is measured. |
---|
3159 | For comparison, a spinning (vs.\ blocking) test-and-test-set lock is presented. |
---|
3160 | Figure~\ref{f:mutex} shows the code for \CFA with results in Table~\ref{t:mutex}. |
---|
3161 | Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects. |
---|
3162 | |
---|
3163 | \begin{multicols}{2} |
---|
3164 | \setlength{\tabcolsep}{3pt} |
---|
3165 | \begin{cfa}[xleftmargin=0pt] |
---|
3166 | `monitor` M {} m1/*, m2, m3, m4*/; |
---|
void call( M & `mutex p1/*, p2, p3, p4*/` ) {}
---|
3168 | int main() { |
---|
3169 | BENCH( for( N ) call( m1/*, m2, m3, m4*/ ); ) |
---|
3170 | sout | result; |
---|
3171 | } |
---|
3172 | \end{cfa} |
---|
3173 | \captionof{figure}{\CFA acquire/release mutex benchmark} |
---|
3174 | \label{f:mutex} |
---|
3175 | |
---|
3176 | \columnbreak |
---|
3177 | |
---|
3178 | \vspace*{-16pt} |
---|
3179 | \captionof{table}{Mutex comparison (nanoseconds)} |
---|
3180 | \label{t:mutex} |
---|
3181 | \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} |
---|
3182 | \multicolumn{1}{@{}r}{Object(N)\hspace*{10pt}} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ |
---|
3183 | test-and-test-set lock (50M) & 19.1 & 18.9 & 0.4 \\ |
---|
3184 | \CFA @mutex@ function, 1 arg. (50M) & 48.3 & 47.8 & 0.9 \\ |
---|
3185 | \CFA @mutex@ function, 2 arg. (50M) & 86.7 & 87.6 & 1.9 \\ |
---|
3186 | \CFA @mutex@ function, 4 arg. (50M) & 173.4 & 169.4 & 5.9 \\ |
---|
3187 | \uC @monitor@ member rtn. (50M) & 54.8 & 54.8 & 0.1 \\ |
---|
3188 | Goroutine mutex lock (50M) & 34.0 & 34.0 & 0.0 \\ |
---|
3189 | Rust mutex lock (50M) & 33.0 & 33.2 & 0.8 \\ |
---|
3190 | Java synchronized method (50M) & 31.0 & 30.9 & 0.5 \\ |
---|
3191 | % Java synchronized method (10 000 000 000) & 31.0 & 30.2 & 0.9 \\ |
---|
3192 | Pthreads mutex Lock (50M) & 31.0 & 31.1 & 0.4 |
---|
3193 | \end{tabular} |
---|
3194 | \end{multicols} |
---|
3195 | |
---|
3196 | \subsection{Context switching} |
---|
3197 | |
---|
3198 | In procedural programming, the cost of a function call is important as modularization (refactoring) increases. |
---|
3199 | (In many cases, a compiler inlines function calls to increase the size and number of basic blocks for optimizing.) |
---|
3200 | Similarly, when modularization extends to coroutines and threads, the time for a context switch becomes a relevant factor. |
---|
3201 | The coroutine test is from resumer to suspender and from suspender to resumer, which is two context switches. |
---|
3202 | %For async-await systems, the test is scheduling and fulfilling @N@ empty promises, where all promises are allocated before versus interleaved with fulfillment to avoid garbage collection. |
---|
3203 | For async-await systems, the test measures the cost of the @await@ expression entering the event engine by awaiting @N@ promises, where each created promise is resolved by an immediate event in the engine (using Node.js @setImmediate@). |
---|
The thread test uses yield to enter and return from the runtime kernel, which is two context switches.
---|
3205 | The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling. |
---|
3206 | Figure~\ref{f:ctx-switch} shows the \CFA code for a coroutine and thread with results in Table~\ref{t:ctx-switch}. |
---|
3207 | |
---|
3208 | % From: Gregor Richards <gregor.richards@uwaterloo.ca> |
---|
3209 | % To: "Peter A. Buhr" <pabuhr@plg2.cs.uwaterloo.ca> |
---|
3210 | % Date: Fri, 24 Jan 2020 13:49:18 -0500 |
---|
3211 | % |
---|
3212 | % I can also verify that the previous version, which just tied a bunch of promises together, *does not* go back to the |
---|
3213 | % event loop at all in the current version of Node. Presumably they're taking advantage of the fact that the ordering of |
---|
3214 | % events is intentionally undefined to just jump right to the next 'then' in the chain, bypassing event queueing |
---|
3215 | % entirely. That's perfectly correct behavior insofar as its difference from the specified behavior isn't observable, but |
---|
3216 | % it isn't typical or representative of much anything useful, because most programs wouldn't have whole chains of eager |
---|
3217 | % promises. Also, it's not representative of *anything* you can do with async/await, as there's no way to encode such an |
---|
3218 | % eager chain that way. |
---|
3219 | |
---|
3220 | \begin{multicols}{2} |
---|
3221 | \begin{cfa}[xleftmargin=0pt] |
---|
3222 | `coroutine` C {}; |
---|
3223 | void main( C & ) { for () { `suspend;` } } |
---|
3224 | int main() { // coroutine test |
---|
3225 | C c; |
---|
3226 | BENCH( for ( N ) { `resume( c );` } ) |
---|
3227 | sout | result; |
---|
3228 | } |
---|
3229 | int main() { // thread test |
---|
3230 | BENCH( for ( N ) { `yield();` } ) |
---|
3231 | sout | result; |
---|
3232 | } |
---|
3233 | \end{cfa} |
---|
3234 | \captionof{figure}{\CFA context-switch benchmark} |
---|
3235 | \label{f:ctx-switch} |
---|
3236 | |
---|
3237 | \columnbreak |
---|
3238 | |
---|
3239 | \vspace*{-16pt} |
---|
3240 | \captionof{table}{Context switch comparison (nanoseconds)} |
---|
3241 | \label{t:ctx-switch} |
---|
3242 | \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} |
---|
3243 | \multicolumn{1}{@{}r}{Object(N)\hspace*{10pt}} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ |
---|
3244 | C function (10B) & 1.8 & 1.8 & 0.0 \\ |
---|
3245 | \CFA generator (5B) & 1.8 & 2.0 & 0.3 \\ |
---|
3246 | \CFA coroutine (100M) & 32.5 & 32.9 & 0.8 \\ |
---|
3247 | \CFA thread (100M) & 93.8 & 93.6 & 2.2 \\ |
---|
3248 | \uC coroutine (100M) & 50.3 & 50.3 & 0.2 \\ |
---|
3249 | \uC thread (100M) & 97.3 & 97.4 & 1.0 \\ |
---|
3250 | Python generator (100M) & 40.9 & 41.3 & 1.5 \\ |
---|
3251 | Node.js await (5M) & 1852.2 & 1854.7 & 16.4 \\ |
---|
3252 | Node.js generator (100M) & 33.3 & 33.4 & 0.3 \\ |
---|
3253 | Goroutine thread (100M) & 143.0 & 143.3 & 1.1 \\ |
---|
3254 | Rust async await (100M) & 32.0 & 32.0 & 0.0 \\ |
---|
3255 | Rust tokio thread (100M) & 143.0 & 143.0 & 1.7 \\ |
---|
3256 | Rust thread (25M) & 332.0 & 331.4 & 2.4 \\ |
---|
3257 | Java thread (100M) & 405.0 & 415.0 & 17.6 \\ |
---|
3258 | % Java thread ( 100 000 000) & 413.0 & 414.2 & 6.2 \\ |
---|
3259 | % Java thread (5 000 000 000) & 415.0 & 415.2 & 6.1 \\ |
---|
3260 | Pthreads thread (25M) & 334.3 & 335.2 & 3.9 |
---|
3261 | \end{tabular} |
---|
3262 | \end{multicols} |
---|
3263 | |
---|
3264 | |
---|
3265 | \subsection{Discussion} |
---|
3266 | |
---|
Languages using 1:1 threading based on pthreads can at best match the pthread results; any language overhead only increases the times.
---|
3268 | Note, pthreads has a fast zero-contention mutex lock checked in user space. |
---|
Languages with M:N threading have better performance than 1:1 threading because there are no operating-system interactions (context switching or locking).
---|
3270 | As well, for locking experiments, M:N threading has less contention if only one kernel thread is used. |
---|
3271 | Languages with stackful coroutines have higher cost than stackless coroutines because of stack allocation and context switching; |
---|
3272 | however, stackful \uC and \CFA coroutines have approximately the same performance as stackless Python and Node.js generators. |
---|
3273 | The \CFA stackless generator is approximately 25 times faster for suspend/resume and 200 times faster for creation than stackless Python and Node.js generators. |
---|
3274 | The Node.js context-switch is costly when asynchronous await must enter the event engine because a promise is not fulfilled. |
---|
3275 | Finally, the benchmark results correlate across programming languages with and without JIT, indicating the JIT has completed any runtime optimizations. |
---|
3276 | |
---|
3277 | |
---|
3278 | \section{Conclusions and Future Work} |
---|
3279 | |
---|
3280 | Advanced control-flow will always be difficult, especially when there is temporal ordering and nondeterminism. |
---|
3281 | However, many systems exacerbate the difficulty through their presentation mechanisms. |
---|
3282 | This paper shows it is possible to understand high-level control-flow using three properties: statefulness, thread, mutual-exclusion/synchronization. |
---|
3283 | Combining these properties creates a number of high-level, efficient, and maintainable control-flow types: generator, coroutine, thread, each of which can be a monitor. |
---|
3284 | Eliminated from \CFA are barging and spurious wakeup, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques. |
---|
\CFA high-level race-free monitors and threads, when used with mutex access functions, provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@ or @atomic@.
---|
3286 | Extending these mechanisms to handle high-level deadlock-free bulk acquire across both mutual exclusion and synchronization is a unique contribution. |
---|
3287 | The \CFA runtime provides concurrency based on a preemptive M:N user-level threading-system, executing in clusters, which encapsulate scheduling of work on multiple kernel threads providing parallelism. |
---|
3288 | The M:N model is judged to be efficient and provide greater flexibility than a 1:1 threading model. |
---|
3289 | These concepts and the \CFA runtime-system are written in the \CFA language, extensively leveraging the \CFA type-system, which demonstrates the expressiveness of the \CFA language. |
---|
3290 | Performance comparisons with other concurrent systems and languages show the \CFA approach is competitive across all basic operations, which translates directly into good performance in well-written applications with advanced control-flow. |
---|
3291 | C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need using only calling communication. |
---|
3292 | |
---|
3293 | While control flow in \CFA has a strong start, development is still underway to complete a number of missing features. |
---|
3294 | |
---|
3295 | \medskip |
---|
3296 | \textbf{Flexible scheduling:} |
---|
3297 | An important part of concurrency is scheduling. |
---|
3298 | Different scheduling algorithms can affect performance, both in terms of average and variation. |
---|
3299 | However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs. |
---|
3300 | One solution is to offer various tuning options, allowing the scheduler to be adjusted to the requirements of the workload. |
---|
3301 | However, to be truly flexible, a pluggable scheduler is necessary. |
---|
3302 | Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real time, where the scheduler must interact with mutex objects to deal with issues like priority inversion~\cite{Buhr00b}. |
---|
3303 | |
---|
3304 | \smallskip |
---|
3305 | \textbf{Non-Blocking I/O:} |
---|
Many modern workloads are bound not by computation but by I/O operations, common cases being web servers and XaaS~\cite{XaaS} (anything as a service).
These types of workloads require significant engineering to amortize the cost of blocking I/O operations.
At its core, nonblocking I/O is an operating-system-level feature for queuing I/O operations, \eg network operations, and registering for notifications instead of waiting for requests to complete.
---|
3309 | Current trends use asynchronous programming like callbacks, futures, and/or promises, \eg Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java, and Django~\cite{Django} for Python. |
---|
3310 | However, these solutions lead to code that is hard to create, read, and maintain. |
---|
3311 | A better approach is to tie nonblocking I/O into the concurrency system to provide ease of use with low overhead, \eg thread-per-connection web-services. |
---|
3312 | A nonblocking I/O library is currently under development for \CFA. |
---|
3313 | |
---|
3314 | \smallskip |
---|
3315 | \textbf{Other concurrency tools:} |
---|
3316 | While monitors offer flexible and powerful concurrency for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package. |
---|
3317 | Examples of such tools can include futures and promises~\cite{promises}, executors and actors. |
---|
3318 | These additional features are useful for applications that can be constructed without shared data and direct blocking. |
---|
3319 | As well, new \CFA extensions should make it possible to create a uniform interface for virtually all mutual exclusion, including monitors and low-level locks. |
---|
3320 | |
---|
3321 | \smallskip |
---|
3322 | \textbf{Implicit threading:} |
---|
Basic \emph{embarrassingly parallel} applications can benefit greatly from implicit concurrency, where sequential programs are converted to concurrent ones, with some help from pragmas to guide the conversion.
---|
3324 | This type of concurrency can be achieved both at the language level and at the library level. |
---|
3325 | The canonical example of implicit concurrency is concurrent nested @for@ loops, which are amenable to divide and conquer algorithms~\cite{uC++book}. |
---|
3326 | The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanisms to solve basic HPC data-concurrency problems. |
---|
3327 | However, implicit concurrency is a restrictive solution with significant limitations, so it can never replace explicit concurrent programming. |
---|
3328 | |
---|
3329 | |
---|
3330 | \section{Acknowledgements} |
---|
3331 | |
---|
3332 | The authors recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach, and Michael Brooks; David Dice for commenting and helping with the Java benchmarks; and Gregor Richards for helping with the Node.js benchmarks. |
---|
3333 | This research is funded by the NSERC/Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada. |
---|
3334 | |
---|
3335 | {% |
---|
3336 | \fontsize{9bp}{11.5bp}\selectfont% |
---|
3337 | \bibliography{pl,local} |
---|
3338 | }% |
---|
3339 | |
---|
3340 | \end{document} |
---|
3341 | |
---|
3342 | % Local Variables: % |
---|
3343 | % tab-width: 4 % |
---|
3344 | % fill-column: 120 % |
---|
3345 | % compile-command: "make" % |
---|
3346 | % End: % |
---|