Changeset 80dbf6a for doc/papers
- Timestamp: Feb 6, 2020, 10:30:05 AM
- Branches: ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 9fb8f01
- Parents: 53a49cc
- File: 1 edited
doc/papers/concurrency/Paper.tex
r53a49cc r80dbf6a 61 61 \newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name 62 62 \newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name 63 \newcommand{\Csharp}{C\raisebox{-0.7ex}{\ Large$^\sharp$}\xspace} % C# symbolic name63 \newcommand{\Csharp}{C\raisebox{-0.7ex}{\large$^\sharp$}\xspace} % C# symbolic name 64 64 65 65 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% … … 127 127 \newcommand*{\etc}{% 128 128 \@ifnextchar{.}{\ETC}% 129 129 {\ETC.\xspace}% 130 130 }}{}% 131 131 \@ifundefined{etal}{ 132 132 \newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}} 133 133 \newcommand*{\etal}{% 134 \@ifnextchar{.}{\ protect\ETAL}%135 {\ protect\ETAL.\xspace}%134 \@ifnextchar{.}{\ETAL}% 135 {\ETAL.\xspace}% 136 136 }}{}% 137 137 \@ifundefined{viz}{ … … 163 163 __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__, 164 164 inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or, 165 otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,165 otype, restrict, resume, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread, 166 166 _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__, 167 167 virtual, __volatile, __volatile__, waitfor, when, with, zero_t}, 168 168 moredirectives={defined,include_next}, 169 169 % replace/adjust listing characters that look bad in sanserif 170 literate={-}{\makebox[1ex][c]{\raisebox{0. 4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1170 literate={-}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1 171 171 {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1 172 172 {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1 … … 197 197 _Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask, 198 198 _Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw}, 199 }200 \lstdefinelanguage{Golang}{201 morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},202 morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,203 bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},204 morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},205 morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},206 morekeywords=[5]{Println,Printf,Error,},207 sensitive=true,208 morecomment=[l]{//},209 morecomment=[s]{/*}{*/},210 morestring=[b]',211 morestring=[b]",212 morestring=[s]{`}{`},213 199 } 214 200 … … 241 227 {} 242 228 \lstnewenvironment{uC++}[1][] 243 {\lstset{ #1}}229 {\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}} 244 230 {} 245 231 \lstnewenvironment{Go}[1][] … … 280 266 \CFA is a polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language. 281 267 This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA. 
282 These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library features like pthreads.268 These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library approaches like pthreads. 283 269 \CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization. 284 270 % Library extension for executors, futures, and actors are built on these basic mechanisms. … … 293 279 294 280 \begin{document} 295 \linenumbers 281 \linenumbers % comment out to turn off line numbering 296 282 297 283 \maketitle … … 300 286 \section{Introduction} 301 287 302 This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18,Cforall} and its runtime, which is written entirely in \CFA. 303 \CFA is a modern, polymorphic, non-object-oriented\footnote{ 304 \CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance. 288 \CFA~\cite{Moss18,Cforall} is a modern, polymorphic, non-object-oriented\footnote{ 289 \CFA has object-oriented features, such as constructors, destructors, virtuals and simple trait/interface inheritance. 290 % Go interfaces, Rust traits, Swift Protocols, Haskell Type Classes and Java Interfaces. 291 % "Trait inheritance" works for me. "Interface inheritance" might also be a good choice, and distinguish clearly from implementation inheritance. 292 % You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code. Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type". 305 293 However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and set of functions (member/method) implemented by an implicit \lstinline@this@ (receiver) parameter.}, 306 294 backwards-compatible extension of the C programming language. 307 In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language allowing immediate dissemination. 308 Within the \CFA framework, new control-flow features are created from scratch because ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping features are concurrency~\cite[\S~7.26]{C11}. 309 However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}, and \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and mutex/condition locks, which is low-level and error-prone; 310 no high-level language concurrency features are defined. 311 Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating little interest in the C11 concurrency approach (possibly because the effort to add concurrency to \CC). 
312 Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}. 313 295 In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language\footnote{ 296 The TIOBE index~\cite{TIOBE} for December 2019 ranks the top five \emph{popular} programming languages as Java 17\%, C 16\%, Python 10\%, and \CC 6\%, \Csharp 5\% = 54\%, and over the past 30 years, C has always ranked either first or second in popularity.} 297 allowing immediate dissemination. 298 This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA and its runtime, which is written entirely in \CFA. 299 The \CFA control-flow framework extends ISO \Celeven~\cite{C11} with new call/return and concurrent/parallel control-flow. 300 301 % The call/return extensions retain state between callee and caller versus losing the callee's state on return; 302 % the concurrency extensions allow high-level management of threads. 303 304 Call/return control-flow with argument/parameter passing appeared in the first programming languages. 305 Over the past 50 years, call/return has been augmented with features like static/dynamic call, exceptions (multi-level return) and generators/coroutines (retain state between calls). 306 While \CFA has mechanisms for dynamic call (algebraic effects) and exceptions\footnote{ 307 \CFA exception handling will be presented in a separate paper. 308 The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}}, this work only discusses retaining state between calls via generators/coroutines. 309 \newterm{Coroutining} was introduced by Conway~\cite{Conway63} (1963), discussed by Knuth~\cite[\S~1.4.2]{Knuth73V1}, implemented in Simula67~\cite{Simula67}, formalized by Marlin~\cite{Marlin80}, and is now popular and appears in old and new programming languages: CLU~\cite{CLU}, \Csharp~\cite{Csharp}, Ruby~\cite{Ruby}, Python~\cite{Python}, JavaScript~\cite{JavaScript}, Lua~\cite{Lua}, \CCtwenty~\cite{C++20Coroutine19}. 310 Coroutining is sequential execution requiring direct handoff among coroutines, \ie only the programmer is controlling execution order. 311 If coroutines transfer to an internal event-engine for scheduling the next coroutines, the program transitions into the realm of concurrency~\cite[\S~3]{Buhr05a}. 312 Coroutines are only a stepping stone towards concurrency where the commonality is that coroutines and threads retain state between calls. 313 314 \Celeven/\CCeleven define concurrency~\cite[\S~7.26]{C11}, but it is largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork/join in a function and mutex/condition locks, which is low-level and error-prone} 315 Interestingly, almost a decade after the \Celeven standard, neither gcc-9, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating no interest in the C11 concurrency approach (possibly because of the recent effort to add concurrency to \CC). 
316 While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC. 314 317 In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages. 315 318 As multi-core hardware became available in the 1980/90s, both user and kernel threading were examined. 316 319 Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}. 317 320 Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads. 318 As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.319 From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}.320 The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work unitsto facilitate load balancing by the runtime~\cite{Verch12}.321 As a result, many current languages implementations adopt the 1:1 kernel-threading model, like Java (Scala), Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, C\#~\cite{Csharp} and Rust~\cite{Rust}, with a variety of presentation mechanisms. 322 From 2000 onwards, several language implementations have championed the M:N user-threading model, like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book}, including putting green threads back into Java~\cite{Quasar}, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}. 323 The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourages large numbers of threads performing medium-sized work to facilitate load balancing by the runtime~\cite{Verch12}. 321 324 As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}. 322 325 Finally, performant user-threading implementations (both time and space) meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety. 
323 326 324 A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \ iesome language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.327 A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \eg some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}. 325 328 The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler. 326 329 One solution is low-level qualifiers and functions (\eg @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs. 327 A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety. 328 This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{ 329 \CFA exception handling will be presented in a separate paper. 330 The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++} 331 } and coroutines. 332 Finally, language solutions allow matching constructs with language paradigm, \ie imperative and functional languages often have different presentations of the same concept to fit their programming model. 333 334 Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary. 335 Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging}\footnote{ 336 The notion of competitive succession instead of direct handoff, \ie a lock owner releases the lock and an arriving thread acquires it ahead of preexisting waiter threads. 330 A safer solution is high-level language constructs so the \emph{compiler} knows the concurrency boundaries (where mutual exclusion and synchronization are acquired/released) and provide implicit safety at and across these boundaries. 331 While the optimization problem is best known with respect to concurrency, it applies to other complex control-flow, like exceptions and coroutines. 332 As well, language solutions allow matching the language paradigm with the approach, \eg matching the functional paradigm with data-flow programming or the imperative paradigm with thread programming. 333 334 Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety (unsafe code) for performance when necessary. 335 Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~9]{Buhr05a}) and \emph{barging}\footnote{ 336 Barging is competitive succession instead of direct handoff, \ie after a lock is released both arriving and preexisting waiter threads compete to acquire the lock. 337 Hence, an arriving thread can temporally \emph{barge} ahead of threads already waiting for an event, which can repeat indefinitely leading to starvation of waiter threads. 
337 338 } (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, \ie once there is spurious wakeup, signals-as-hints follow. 338 However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}, it is a performance design choice. 339 Similarly, signals-as-hints are often a performance decision. 340 We argue removing spurious wakeup and signals-as-hints make concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation. 341 (Author experience teaching concurrency is that students are highly confused by these semantics.) 342 Clawing back performance, when local non-determinism is unimportant, should be an option not the default. 343 344 \begin{comment} 345 Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and are not systems languages, for those with managed memory. 346 As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten. 347 While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing complex and interdependent features-set (\eg objects, inheritance, templates, etc.) mean idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC. 348 Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base. 349 \CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems. 350 \end{comment} 351 352 \CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default. 353 We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms. 339 (Author experience teaching concurrency is that students are confused by these semantics.) 340 However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~9]{Buhr05a}; 341 it is a performance design choice. 342 We argue removing spurious wakeup and signals-as-hints make concurrent programming simpler and safer as there is less local non-determinism to manage. 343 If barging acquisition is allowed, its specialized performance advantage should be available as an option not the default. 344 345 \CFA embraces language extensions for advanced control-flow, user-level threading, and safety as the default. 346 We present comparative examples to support our argument that the \CFA control-flow extensions are as expressive and safe as those in other concurrent imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms. 354 347 The main contributions of this work are: 355 \begin{itemize}[topsep=3pt,itemsep= 1pt]348 \begin{itemize}[topsep=3pt,itemsep=0pt] 356 349 \item 357 language-level generators, coroutines and user-level threading, which respect the expectations of C programmers. 
350 a set of fundamental execution properties that dictate which language-level control-flow features need to be supported, 351 358 352 \item 359 monitor synchronization without barging, and the ability to safely acquiring multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms. 353 integration of these language-level control-flow features, while respecting the style and expectations of C programmers, 354 360 355 \item 361 providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features. 356 monitor synchronization without barging, and the ability to safely acquiring multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms, 357 358 \item 359 providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features, 360 362 361 % \item 363 362 % library extensions for executors, futures, and actors built on the basic mechanisms. 363 364 364 \item 365 a runtime system with no spurious wakeup. 365 a runtime system without spurious wake-up and no performance loss, 366 366 367 \item 367 a dynamic partitioning mechanism to segregate the execution environment for specialized requirements. 368 a dynamic partitioning mechanism to segregate groups of executing user and kernel threads performing specialized work (\eg web-server or compute engine) or requiring different scheduling (\eg NUMA or real-time). 369 368 370 % \item 369 371 % a non-blocking I/O library 372 370 373 \item 371 experimental results showing comparable performance of the new features with similar mechanisms in other programminglanguages.374 experimental results showing comparable performance of the \CFA features with similar mechanisms in other languages. 372 375 \end{itemize} 373 376 374 Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, which produces constructs @generator@ and @coroutine@. 375 Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, which produces the @thread@ construct. 377 Section~\ref{s:FundamentalExecutionProperties} presents the compositional hierarchy of execution properties directing the design of control-flow features in \CFA. 378 Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls producing constructs @generator@ and @coroutine@. 379 Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread producing the @thread@ construct. 376 380 Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms to restricted nondeterminism when controlling shared access to resources (mutual exclusion) and timing relationships among threads (synchronization). 377 381 Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs. 378 382 Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure (cluster) threads and virtual processors (kernel threads). 379 Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0. 
383 Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0. 384 385 386 \section{Fundamental Execution Properties} 387 \label{s:FundamentalExecutionProperties} 388 389 The features in a programming language should be composed from a set of fundamental properties rather than an ad hoc collection chosen by the designers. 390 To this end, the control-flow features created for \CFA are based on the fundamental properties of any language with function-stack control-flow (see also \uC~\cite[pp.~140-142]{uC++}). 391 The fundamental properties are execution state, thread, and mutual-exclusion/synchronization (MES). 392 These independent properties can be used alone, in pairs, or in triplets to compose different language features, forming a compositional hierarchy where the most advanced feature has all the properties (state/thread/MES). 393 While it is possible for a language to only support the most advanced feature~\cite{Hermes90}, this unnecessarily complicates and makes inefficient solutions to certain classes of problems. 394 As is shown, each of the (non-rejected) composed features solves a particular set of problems, and hence, has a defensible position in a programming language. 395 If a compositional feature is missing, a programmer has too few/many fundamental properties resulting in a complex and/or is inefficient solution. 396 397 In detail, the fundamental properties are: 398 \begin{description}[leftmargin=\parindent,topsep=3pt,parsep=0pt] 399 \item[\newterm{execution state}:] 400 is the state information needed by a control-flow feature to initialize, manage compute data and execution location(s), and de-initialize. 401 State is retained in fixed-sized aggregate structures and dynamic-sized stack(s), often allocated in the heap(s) managed by the runtime system. 402 The lifetime of the state varies with the control-flow feature, where longer life-time and dynamic size provide greater power but also increase usage complexity and cost. 403 Control-flow transfers among execution states occurs in multiple ways, such as function call, context switch, asynchronous await, etc. 404 Because the programming language determines what constitutes an execution state, implicitly manages this state, and defines movement mechanisms among states, execution state is an elementary property of the semantics of a programming language. 405 % An execution-state is related to the notion of a process continuation \cite{Hieb90}. 406 407 \item[\newterm{threading}:] 408 is execution of code that occurs independently of other execution, \ie the execution resulting from a thread is sequential. 409 Multiple threads provide \emph{concurrent execution}; 410 concurrent execution becomes parallel when run on multiple processing units (hyper-threading, cores, sockets). 411 There must be language mechanisms to create, block/unblock, and join with a thread. 412 413 \item[\newterm{MES}:] 414 is the concurrency mechanisms to perform an action without interruption and establish timing relationships among multiple threads. 415 These two properties are independent, \ie mutual exclusion cannot provide synchronization and vice versa without introducing additional threads~\cite[\S~4]{Buhr05a}. 416 Limiting MES, \eg no access to shared data, results in contrived solutions and inefficiency on multi-core von Neumann computers where shared memory is a foundational aspect of its design. 
417 \end{description} 418 These properties are fundamental because they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features, concurrency, or provide MES using atomic hardware mechanisms. 419 420 421 \subsection{Execution Properties} 422 423 Table~\ref{t:ExecutionPropertyComposition} shows how the three fundamental execution properties: state, thread, and mutual exclusion compose a hierarchy of control-flow features needed in a programming language. 424 (When doing case analysis, not all combinations are meaningful.) 425 Note, basic von Neumann execution requires at least one thread and an execution state providing some form of call stack. 426 For table entries missing these minimal components, the property is borrowed from the invoker (caller). 427 428 Case 1 is a function that borrows storage for its state (stack frame/activation) and a thread from its invoker and retains this state across \emph{callees}, \ie function local-variables are retained on the stack across calls. 429 Case 2 is case 1 with access to shared state so callers are restricted during update (mutual exclusion) and scheduling for other threads (synchronization). 430 Case 3 is a stateful function supporting resume/suspend along with call/return to retain state across \emph{callers}, but has some restrictions because the function's state is stackless. 431 Note, stackless functions still borrow the caller's stack and thread, where the stack is used to preserve state across its callees. 432 Case 4 is cases 2 and 3 with protection to shared state for stackless functions. 433 Cases 5 and 6 are the same as 3 and 4 but only the thread is borrowed as the function state is stackful, so resume/suspend is a context switch from the caller's to the function's stack. 434 Cases 7 and 8 are rejected because a function that is given a new thread must have its own stack where the thread begins and stack frames are stored for calls, \ie there is no stack to borrow. 435 Cases 9 and 10 are rejected because a thread with a fixed state (no stack) cannot accept calls, make calls, block, or be preempted, all of which require an unknown amount of additional dynamic state. 436 Hence, once started, this kind of thread must execute to completion, \ie computation only, which severely restricts runtime management. 437 Cases 11 and 12 have a stackful thread with and without safe access to shared state. 438 Execution properties increase the cost of creation and execution along with complexity of usage. 
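To make the compositional hierarchy concrete, the following schematic sketch shows how the non-rejected cases in Table~\ref{t:ExecutionPropertyComposition} surface as \CFA custom types (the declarations and names are illustrative only; the precise syntax and semantics of each custom type are presented in the following sections).
\begin{cfa}
int deposit( int & bal, int v ) { return bal += v; } $\C[3.5in]{// case 1: borrows caller's stack and thread}$
monitor MAcct { int bal; }; $\C{// case 2: mutex interface functions provide MES}$
int deposit( MAcct & mutex macct, int v ) { return macct.bal += v; }
generator GFib { int fn1, fn; }; $\C{// case 3: stackless, retains state across resumes}$
coroutine CTree { ... }; $\C{// case 5: stackful, may suspend from nested helper calls}$
thread Worker { ... }; $\C{// case 11: own stack and thread of execution}\CRT$
\end{cfa}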
439 440 \begin{table} 441 \caption{Execution property composition} 442 \centering 443 \label{t:ExecutionPropertyComposition} 444 \renewcommand{\arraystretch}{1.25} 445 %\setlength{\tabcolsep}{5pt} 446 \begin{tabular}{c|c||l|l} 447 \multicolumn{2}{c||}{execution properties} & \multicolumn{2}{c}{mutual exclusion / synchronization} \\ 448 \hline 449 stateful & thread & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\ 450 \hline 451 \hline 452 No & No & \textbf{1}\ \ \ function & \textbf{2}\ \ \ @monitor@ function \\ 453 \hline 454 Yes (stackless) & No & \textbf{3}\ \ \ @generator@ & \textbf{4}\ \ \ @monitor@ @generator@ \\ 455 \hline 456 Yes (stackful) & No & \textbf{5}\ \ \ @coroutine@ & \textbf{6}\ \ \ @monitor@ @coroutine@ \\ 457 \hline 458 No & Yes & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\ 459 \hline 460 Yes (stackless) & Yes & \textbf{9}\ \ \ {\color{red}rejected} & \textbf{10}\ \ \ {\color{red}rejected} \\ 461 \hline 462 Yes (stackful) & Yes & \textbf{11}\ \ \ @thread@ & \textbf{12}\ \ @monitor@ @thread@ \\ 463 \end{tabular} 464 \end{table} 465 466 Given the execution-properties taxonomy, programmers can now answer three basic questions: is state necessary across calls and how much, is a separate thread necessary, is access to shared state necessary. 467 The answers define the optimal language feature need for implementing a programming problem. 468 The next sections discusses how \CFA fills in the table with language features, while other programming languages may only provide a subset of the table. 469 470 471 \subsection{Design Requirements} 472 473 The following design requirements largely stem from building \CFA on top of C. 474 \begin{itemize}[topsep=3pt,parsep=0pt] 475 \item 476 All communication must be statically type checkable for early detection of errors and efficient code generation. 477 This requirement is consistent with the fact that C is a statically-typed programming-language. 478 479 \item 480 Direct interaction among language features must be possible allowing any feature to be selected without restricting comm\-unication. 481 For example, many concurrent languages do not provide direct communication (calls) among threads, \ie threads only communicate indirectly through monitors, channels, messages, and/or futures. 482 Indirect communication increases the number of objects, consuming more resources, and require additional synchronization and possibly data transfer. 483 484 \item 485 All communication is performed using function calls, \ie data is transmitted from argument to parameter and results are returned from function calls. 486 Alternative forms of communication, such as call-backs, message passing, channels, or communication ports, step outside of C's normal form of communication. 487 488 \item 489 All stateful features must follow the same declaration scopes and lifetimes as other language data. 490 For C that means at program startup, during block and function activation, and on demand using dynamic allocation. 491 492 \item 493 MES must be available implicitly in language constructs as well as explicitly for specialized requirements, because requiring programmers to build MES using low-level locks often leads to incorrect programs. 494 Furthermore, reducing synchronization scope by encapsulating it within language constructs further reduces errors in concurrent programs. 495 496 \item 497 Both synchronous and asynchronous communication are needed. 
498 However, we believe the best way to provide asynchrony, such as call-buffering/chaining and/or returning futures~\cite{multilisp}, is building it from expressive synchronous features. 499 500 \item 501 Synchronization must be able to control the service order of requests including prioritizing selection from different kinds of outstanding requests, and postponing a request for an unspecified time while continuing to accept new requests. 502 Otherwise, certain concurrency problems are difficult, e.g.\ web server, disk scheduling, and the amount of concurrency is inhibited~\cite{Gentleman81}. 503 \end{itemize} 504 We have satisfied these requirements in \CFA while maintaining backwards compatibility with the huge body of legacy C programs. 505 % In contrast, other new programming languages must still access C programs (\eg operating-system service routines), but do so through fragile C interfaces. 506 507 508 \subsection{Asynchronous Await / Call} 509 510 Asynchronous await/call is a caller mechanism for structuring programs and/or increasing concurrency, where the caller (client) postpones an action into the future, which is subsequently executed by a callee (server). 511 The caller detects the action's completion through a \newterm{future}/\newterm{promise}. 512 The benefit is asynchronous caller execution with respect to the callee until future resolution. 513 For single-threaded languages like JavaScript, an asynchronous call passes a callee action, which is queued in the event-engine, and continues execution with a promise. 514 When the caller needs the promise to be fulfilled, it executes @await@. 515 A promise-completion call-back can be part of the callee action or the caller is rescheduled; 516 in either case, the call back is executed after the promise is fulfilled. 517 While asynchronous calls generate new callee (server) events, we content this mechanism is insufficient for advanced control-flow mechanisms like generators or coroutines (which are discussed next). 518 Specifically, control between caller and callee occurs indirectly through the event-engine precluding direct handoff and cycling among events, and requires complex resolution of a control promise and data. 519 Note, @async-await@ is just syntactic-sugar over the event engine so it does not solve these deficiencies. 520 For multi-threaded languages like Java, the asynchronous call queues a callee action with an executor (server), which subsequently executes the work by a thread in the executor thread-pool. 521 The problem is when concurrent work-units need to interact and/or block as this effects the executor, \eg stops threads. 522 While it is possible to extend this approach to support the necessary mechanisms, \eg message passing in Actors, we show monitors and threads provide an equally competitive approach that does not deviate from normal call communication and can be used to build asynchronous call, as is done in Java. 380 523 381 524 … … 383 526 \label{s:StatefulFunction} 384 527 385 The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine. 386 Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension. 387 This capability is accomplished by retaining a data/execution \emph{closure} between invocations. 
388 If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited. 389 If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions. 390 Hence, refactoring a stackless coroutine may require changing it to stackful. 391 A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack. 392 As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). 393 A fixed closure activated by modified call/return is faster than a variable closure activated by context switching. 394 Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance. 395 Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general. 396 Note, creation cost is amortized across usage, so activation cost is usually the dominant factor. 528 A \emph{stateful function} has the ability to remember state between calls, where state can be either data or execution, \eg plugin, device driver, finite-state machine (FSM). 529 A simple technique to retain data state between calls is @static@ declarations within a function, which is often implemented by hoisting the declarations to the global scope but hiding the names within the function using name mangling. 530 However, each call starts the function at the top making it difficult to determine the last point of execution in an algorithm, and requiring multiple flag variables and testing to reestablish the continuation point. 531 Hence, the next step of generalizing function state is implicitly remembering the return point between calls and reentering the function at this point rather than the top, called \emph{generators}\,/\,\emph{iterators} or \emph{stackless coroutines}. 532 For example, a Fibonacci generator retains data and execution state allowing it to remember prior values needed to generate the next value and the location in the algorithm to compute that value. 533 The next step of generalization is instantiating the function to allow multiple named instances, \eg multiple Fibonacci generators, where each instance has its own state, and hence, can generate an independent sequence of values. 534 Note, a subset of generator state is a function \emph{closure}, \ie the technique of capturing lexical references when returning a nested function. 535 A further generalization is adding a stack to a generator's state, called a \emph{coroutine}, so it can suspend outside of itself, \eg call helper functions to arbitrary depth before suspending back to its resumer without unwinding these calls. 536 For example, a coroutine iterator for a binary tree can stop the traversal at the visit point (pre, infix, post traversal), return the node value to the caller, and then continue the recursive traversal from the current node on the next call. 537 538 There are two styles of activating a stateful function, \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). 
539 These styles \emph{do not} cause incremental stack growth, \eg a million resume/suspend or resume/resume cycles do not remember each cycle just the last resumer for each cycle. 540 Selecting between stackless/stackful semantics and asymmetric/symmetric style is a tradeoff between programming requirements, performance, and design, where stackless is faster and smaller (modified call/return between closures), stackful is more general but slower and larger (context switching between distinct stacks), and asymmetric is simpler control-flow than symmetric. 541 Additionally, storage management for the closure/stack (especially in unmanaged languages, \ie no garbage collection) must be factored into design and performance. 542 Note, creation cost (closure/stack) is amortized across usage, so activation cost (resume/suspend) is usually the dominant factor. 543 544 % The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine. 545 % Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension. 546 % If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited. 547 % If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names implies, often implemented with a separate stack with no programming restrictions. 548 % Hence, refactoring a stackless coroutine may require changing it to stackful. 549 % A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack. 550 % As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). 551 % A fixed closure activated by modified call/return is faster than a variable closure activated by context switching. 552 % Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance. 553 % Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general. 554 % nppNote, creation cost is amortized across usage, so activation cost is usually the dominant factor. 555 556 For example, Python presents asymmetric generators as a function object, \uC presents symmetric coroutines as a \lstinline[language=C++]|class|-like object, and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}. 557 \begin{center} 558 \begin{tabular}{@{}l|l|l@{}} 559 \multicolumn{1}{@{}c|}{Python asymmetric generator} & \multicolumn{1}{c|}{\uC symmetric coroutine} & \multicolumn{1}{c@{}}{Pthreads thread} \\ 560 \hline 561 \begin{python} 562 `def Gen():` $\LstCommentStyle{\color{red}// function}$ 563 ... yield val ... 
564 gen = Gen() 565 for i in range( 10 ): 566 print( next( gen ) ) 567 \end{python} 568 & 569 \begin{uC++} 570 `_Coroutine Cycle {` $\LstCommentStyle{\color{red}// class}$ 571 Cycle * p; 572 void main() { p->cycle(); } 573 void cycle() { resume(); } `};` 574 Cycle c1, c2; c1.p=&c2; c2.p=&c1; c1.cycle(); 575 \end{uC++} 576 & 577 \begin{cfa} 578 void * rtn( void * arg ) { ... } 579 int i = 3, rc; 580 pthread_t t; $\C{// thread id}$ 581 $\LstCommentStyle{\color{red}// function pointer}$ 582 rc=pthread_create(&t, `rtn`, (void *)i); 583 \end{cfa} 584 \end{tabular} 585 \end{center} 586 \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of functions and classes, giving an object-oriented flavour. 587 Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type via the type's name. 588 The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed for flexibility reasons. 589 Each custom type is discussed in detail in the following sections. 590 591 592 \subsection{Generator} 593 594 Stackless generators (Table~\ref{t:ExecutionPropertyComposition} case 3) have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution. 595 The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity. 596 A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.\footnote{ 597 The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|?++|, \lstinline|++?|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|. 598 Operator \lstinline+|+ is overloaded for printing, like bit-shift \lstinline|<<| in \CC. 599 The \CFA \lstinline|with| clause opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics; 600 multiple aggregates may be opened. 601 \CFA has rebindable references \lstinline|int i, & ip = i, j; `&ip = &j;`| and non-rebindable references \lstinline|int i, & `const` ip = i, j; `&ip = &j;` // disallowed|. 
602 }% 397 603 398 604 \begin{figure} … … 408 614 409 615 616 617 410 618 int fn = f->fn; f->fn = f->fn1; 411 619 f->fn1 = f->fn + fn; 412 620 return fn; 413 414 621 } 415 622 int main() { … … 430 637 void `main(Fib & fib)` with(fib) { 431 638 639 432 640 [fn1, fn] = [1, 0]; 433 641 for () { … … 449 657 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 450 658 typedef struct { 451 int fn1, fn; void * `next`;659 int `restart`, fn1, fn; 452 660 } Fib; 453 #define FibCtor { 1, 0, NULL}661 #define FibCtor { `0`, 1, 0 } 454 662 Fib * comain( Fib * f ) { 455 if ( f->next ) goto *f->next; 456 f->next = &&s1; 663 `static void * states[] = {&&s0, &&s1};` 664 `goto *states[f->restart];` 665 s0: f->`restart` = 1; 457 666 for ( ;; ) { 458 667 return f; 459 668 s1:; int fn = f->fn + f->fn1; 460 669 f->fn1 = f->fn; f->fn = fn; 461 670 } 462 671 } … … 470 679 \end{lrbox} 471 680 472 \subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}681 \subfloat[C]{\label{f:CFibonacci}\usebox\myboxA} 473 682 \hspace{3pt} 474 683 \vrule 475 684 \hspace{3pt} 476 \subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}685 \subfloat[\CFA]{\label{f:CFAFibonacciGen}\usebox\myboxB} 477 686 \hspace{3pt} 478 687 \vrule 479 688 \hspace{3pt} 480 \subfloat[C generat or implementation]{\label{f:CFibonacciSim}\usebox\myboxC}689 \subfloat[C generated code for \CFA version]{\label{f:CFibonacciSim}\usebox\myboxC} 481 690 \caption{Fibonacci (output) asymmetric generator} 482 691 \label{f:FibonacciAsymmetricGenerator} … … 491 700 }; 492 701 void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor 493 void ^?{}( Fmt & f ) with(f) { $\C[ 1.75in]{// destructor}$702 void ^?{}( Fmt & f ) with(f) { $\C[2.25in]{// destructor}$ 494 703 if ( g != 0 || b != 0 ) sout | nl; } 495 704 void `main( Fmt & f )` with(f) { … … 497 706 for ( ; g < 5; g += 1 ) { $\C{// groups}$ 498 707 for ( ; b < 4; b += 1 ) { $\C{// blocks}$ 499 `suspend;` $\C{// wait for character}$500 while ( ch == '\n' ) `suspend;` // ignore501 sout | ch; // newline502 } sout | " "; // block spacer503 } sout | nl; // group newline708 do { `suspend;` $\C{// wait for character}$ 709 while ( ch == '\n' ); // ignore newline 710 sout | ch; $\C{// print character}$ 711 } sout | " "; $\C{// block separator}$ 712 } sout | nl; $\C{// group separator}$ 504 713 } 505 714 } … … 519 728 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 520 729 typedef struct { 521 void * next;730 int `restart`, g, b; 522 731 char ch; 523 int g, b;524 732 } Fmt; 525 733 void comain( Fmt * f ) { 526 if ( f->next ) goto *f->next; 527 f->next = &&s1; 734 `static void * states[] = {&&s0, &&s1};` 735 `goto *states[f->restart];` 736 s0: f->`restart` = 1; 528 737 for ( ;; ) { 529 738 for ( f->g = 0; f->g < 5; f->g += 1 ) { 530 739 for ( f->b = 0; f->b < 4; f->b += 1 ) { 531 return;532 s1:; while ( f->ch == '\n' ) return;740 do { return; s1: ; 741 } while ( f->ch == '\n' ); 533 742 printf( "%c", f->ch ); 534 743 } printf( " " ); … … 537 746 } 538 747 int main() { 539 Fmt fmt = { NULL}; comain( &fmt ); // prime748 Fmt fmt = { `0` }; comain( &fmt ); // prime 540 749 for ( ;; ) { 541 750 scanf( "%c", &fmt.ch ); … … 548 757 \end{lrbox} 549 758 550 \subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}551 \hspace{3 pt}759 \subfloat[\CFA]{\label{f:CFAFormatGen}\usebox\myboxA} 760 \hspace{35pt} 552 761 \vrule 553 762 \hspace{3pt} 554 \subfloat[C generat or simulation]{\label{f:CFormatSim}\usebox\myboxB}763 \subfloat[C generated code for \CFA version]{\label{f:CFormatGenImpl}\usebox\myboxB} 555 
764 \hspace{3pt} 556 765 \caption{Formatter (input) asymmetric generator} … … 558 767 \end{figure} 559 768 560 Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}. 561 For example, Python presents generators as a function object: 562 \begin{python} 563 def Gen(): 564 ... `yield val` ... 565 gen = Gen() 566 for i in range( 10 ): 567 print( next( gen ) ) 568 \end{python} 569 Boost presents coroutines in terms of four functor object-types: 570 \begin{cfa} 571 asymmetric_coroutine<>::pull_type 572 asymmetric_coroutine<>::push_type 573 symmetric_coroutine<>::call_type 574 symmetric_coroutine<>::yield_type 575 \end{cfa} 576 and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}, \eg pthreads: 577 \begin{cfa} 578 void * rtn( void * arg ) { ... } 579 int i = 3, rc; 580 pthread_t t; $\C{// thread id}$ 581 `rc = pthread_create( &t, rtn, (void *)i );` $\C{// create and initialized task, type-unsafe input parameter}$ 582 \end{cfa} 583 % void mycor( pthread_t cid, void * arg ) { 584 % int * value = (int *)arg; $\C{// type unsafe, pointer-size only}$ 585 % // thread body 586 % } 587 % int main() { 588 % int input = 0, output; 589 % coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$ 590 % coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$ 591 % } 592 \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of objects and functions, with an object-oriented flavour. 593 Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type. 594 The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed. 595 596 597 \subsection{Generator} 598 599 Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution. 600 The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity. 601 A series of different kinds of generators and their implementation demonstrate how this goal is accomplished. 602 603 Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version. 769 Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written (left to right) in C, \CFA, and showing the underlying C implementation for the \CFA version. 604 770 This generator is an \emph{output generator}, producing a new result on each resumption. 605 771 To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle. … … 609 775 The C version only has the middle execution state because the top execution state is declaration initialization. 610 776 Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type. 
611 This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{ 612 The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution. 613 Hence, overloading this name for other starting points (generator/coroutine/thread) is a logical extension.} 614 called a \emph{generator main},which takes as its only parameter a reference to the generator type. 777 Each generator type must have a function named \lstinline|main|, 778 % \footnote{ 779 % The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution. 780 % Leveraging starting semantics to this name for generator/coroutine/thread is a logical extension.} 781 called a \emph{generator main} (leveraging the starting semantics for program @main@ in C), which is connected to the generator type via its single reference parameter. 615 782 The generator main contains @suspend@ statements that suspend execution without ending the generator versus @return@. 616 For the Fibonacci generator-main,\footnote{ 617 The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics. 618 Multiple aggregates may be opened.} 783 For the Fibonacci generator-main, 619 784 the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@. 620 785 Any local variables in @main@ \emph{are not retained} between calls; … … 625 790 Resuming an ended (returned) generator is undefined. 626 791 Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance. 627 Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA generator only needs one additional field, @next@, to handle retention of execution state. 628 The computed @goto@ at the start of the generator main, which branches after the previous suspend, adds very little cost to the resume call. 629 Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.\footnote{ 630 The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|++?|, \lstinline|?++|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|. 631 }% 792 Figure~\ref{f:CFibonacciSim} shows the C implementation of the \CFA asymmetric generator. 793 Only one execution-state field, @restart@, is needed to subscript the suspension points in the generator. 794 At the start of the generator main, the @static@ declaration, @states@, is initialized to the N suspend points in the generator (where operator @&&@ dereferences/references a label~\cite{gccValueLabels}). 795 Next, the computed @goto@ selects the last suspend point and branches to it. 796 The cost of setting @restart@ and branching via the computed @goto@ adds very little cost to the suspend/resume calls. 797 798 An advantage of the \CFA explicit generator type is the ability to allow multiple type-safe interface functions taking and returning arbitrary types. 
632 799 \begin{cfa} 633 800 int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$ 634 int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// use function-call interface to skip N values}$ 635 double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}\CRT$ 636 sout | (int)f1() | (double)f1() | f2( 2 ); // alternative interface, cast selects call based on return type, step 2 values 801 int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// add parameter to skip N values}$ 802 double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}$ 803 Fib f; int i; double d; 804 i = f(); i = f( 2 ); d = f(); $\C{// alternative interfaces}\CRT$ 637 805 \end{cfa} 638 806 Now, the generator can be a separately compiled opaque-type only accessed through its interface functions. 639 807 For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type, and hence only has a single interface, but an implicit closure. 640 808 641 Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden. 642 (This restriction is removed by the coroutine in Section~\ref{s:Coroutine}.) 643 This requirement follows from the generality of variable-size local-state, \eg local state with a variable-length array requires dynamic allocation because the array size is unknown at compile time. 809 \begin{figure} 810 %\centering 811 \newbox\myboxA 812 \begin{lrbox}{\myboxA} 813 \begin{python}[aboveskip=0pt,belowskip=0pt] 814 def Fib(): 815 fn1, fn = 0, 1 816 while True: 817 `yield fn1` 818 fn1, fn = fn, fn1 + fn 819 f1 = Fib() 820 f2 = Fib() 821 for i in range( 10 ): 822 print( next( f1 ), next( f2 ) ) 823 824 825 826 827 828 829 830 831 832 833 \end{python} 834 \end{lrbox} 835 836 \newbox\myboxB 837 \begin{lrbox}{\myboxB} 838 \begin{python}[aboveskip=0pt,belowskip=0pt] 839 def Fmt(): 840 try: 841 while True: $\C[2.5in]{\# until destructor call}$ 842 for g in range( 5 ): $\C{\# groups}$ 843 for b in range( 4 ): $\C{\# blocks}$ 844 while True: 845 ch = (yield) $\C{\# receive from send}$ 846 if '\n' not in ch: $\C{\# ignore newline}$ 847 break 848 print( ch, end='' ) $\C{\# print character}$ 849 print( ' ', end='' ) $\C{\# block separator}$ 850 print() $\C{\# group separator}$ 851 except GeneratorExit: $\C{\# destructor}$ 852 if g != 0 | b != 0: $\C{\# special case}$ 853 print() 854 fmt = Fmt() 855 `next( fmt )` $\C{\# prime, next prewritten}$ 856 for i in range( 41 ): 857 `fmt.send( 'a' );` $\C{\# send to yield}$ 858 \end{python} 859 \end{lrbox} 860 861 \hspace{30pt} 862 \subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA} 863 \hspace{3pt} 864 \vrule 865 \hspace{3pt} 866 \subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB} 867 \caption{Python generator} 868 \label{f:PythonGenerator} 869 \end{figure} 870 871 Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden (removed by the coroutine in Section~\ref{s:Coroutine}). 872 This manual requirement follows from the generality of allowing variable-size local-state, \eg local state with a variable-length array requires dynamic allocation as the array size is unknown at compile time. 
644 873 However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming. 645 874 But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated. 646 With respect to safety, we believe static analysis can discriminate local state from temporary variables in a generator, \ie variable usage spanning @suspend@, and generate a compile-time error.647 Finally, our current experience is that most generatorproblems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.875 With respect to safety, we believe static analysis can discriminate persistent generator state from temporary generator-main state and raise a compile-time error for temporary usage spanning suspend points. 876 Our experience using generators is that the problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small. 648 877 As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators. 649 878 … … 667 896 The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops. 668 897 The destructor provides a newline, if formatted text ends with a full line. 669 Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@. 670 For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator. 671 672 Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} asymmetric generator, a device-driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}. 673 Device drives follow the pattern of simple data state but complex execution state, \ie finite state-machine (FSM) parsing a protocol. 674 For example, the following protocol: 898 Figure~\ref{f:CFormatGenImpl} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@. 899 For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the format generator. 900 901 % https://dl-acm-org.proxy.lib.uwaterloo.ca/ 902 903 Figure~\ref{f:DeviceDriverGen} shows an important application for an asymmetric generator, a device-driver, because device drivers are a significant source of operating-system errors: 85\% in Windows XP~\cite[p.~78]{Swift05} and 51.6\% in Linux~\cite[p.~1358,]{Xiao19}. %\cite{Palix11} 904 Swift \etal~\cite[p.~86]{Swift05} restructure device drivers using the Extension Procedure Call (XPC) within the kernel via functions @nooks_driver_call@ and @nooks_kernel_call@, which have coroutine properties context switching to separate stacks with explicit hand-off calls; 905 however, the calls do not retain execution state, and hence always start from the top. 906 The alternative approach for implementing device drivers is using stack-ripping. 907 However, Adya \etal~\cite{Adya02} argue against stack ripping in Section 3.2 and suggest a hybrid approach in Section 4 using cooperatively scheduled \emph{fibers}, which is coroutining. 
908 909 As an example, the following protocol: 675 910 \begin{center} 676 911 \ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots 677 912 \end{center} 678 is anetwork message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.913 is for a simple network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check. 679 914 Control characters may appear in a message if preceded by an ESC. 680 915 When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register. 681 The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer. 682 Hence, the device driver is an input/output generator. 683 684 Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent. 685 As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variable that must be retained between calls and are manually hoisted into the generator type. 686 % Manually, detecting and hoisting local-state variables is easy when the number is small. 687 In contrast, the execution state is large, with one @resume@ and seven @suspend@s. 688 Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach. 689 Because FSMs can be complex and frequently occur in important domains, direct generator support is important in a system programming language. 916 The device driver returns a status code of its current state, and when a complete message is obtained, the operating system read the message accumulated in the supplied buffer. 917 Hence, the device driver is an input/output generator, where the cost of resuming the device-driver generator is the same as call/return, so performance in an operating-system kernel is excellent. 918 The key benefits of using a generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than table lookup or stack ripping. 919 The conclusion is that FSMs are complex and occur in important domains, so direct generator support is important in a system programming language. 
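To make the contrast concrete, the following is a minimal sketch, in plain C with illustrative names (not the interface of Figure~\ref{f:DeviceDriverGen}), of the same protocol written \emph{without} a generator: every suspension point must be hand-encoded as an explicit state, and the switch re-enters the FSM on each byte.
\begin{cfa}
enum Status { CONT, MSG, ESTX, ECRC };                 // driver status codes
enum State { WAIT_STX, DATA, ESCAPE, CRC1, CRC2 };     // hand-encoded execution states
typedef struct {
	enum State state;                                  // where the last call left off
	char * msg;                                        // caller-supplied message buffer
	unsigned lnth, sum;                                // accumulated length and checksum
	unsigned short crc;                                // received 2-byte CRC
} Driver;
enum Status drive( Driver * d, unsigned char byte ) {  // called once per interrupt
	switch ( d->state ) {
	  case WAIT_STX:
		if ( byte != 0x02 ) return ESTX;               // expect STX
		d->lnth = d->sum = 0;  d->state = DATA;  return CONT;
	  case ESCAPE:                                     // byte after ESC is literal data
		d->msg[d->lnth++] = byte;  d->sum += byte;  d->state = DATA;  return CONT;
	  case DATA:
		if ( byte == 0x1b ) { d->state = ESCAPE;  return CONT; }   // ESC
		if ( byte == 0x03 ) { d->state = CRC1;  return CONT; }     // ETX ends message
		d->msg[d->lnth++] = byte;  d->sum += byte;  return CONT;
	  case CRC1:
		d->crc = byte << 8;  d->state = CRC2;  return CONT;
	  case CRC2:
		d->state = WAIT_STX;
		return (d->crc | byte) == d->sum ? MSG : ECRC; // verify checksum
	}
	return CONT;
}
\end{cfa}
The generator version expresses the same execution states as straight-line control flow around @suspend@, so the hand-written state enumeration and switch disappear.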
690 920 691 921 \begin{figure} 692 922 \centering 693 \newbox\myboxA694 \begin{lrbox}{\myboxA}695 \begin{python}[aboveskip=0pt,belowskip=0pt]696 def Fib():697 fn1, fn = 0, 1698 while True:699 `yield fn1`700 fn1, fn = fn, fn1 + fn701 f1 = Fib()702 f2 = Fib()703 for i in range( 10 ):704 print( next( f1 ), next( f2 ) )705 706 707 708 709 710 711 \end{python}712 \end{lrbox}713 714 \newbox\myboxB715 \begin{lrbox}{\myboxB}716 \begin{python}[aboveskip=0pt,belowskip=0pt]717 def Fmt():718 try:719 while True:720 for g in range( 5 ):721 for b in range( 4 ):722 print( `(yield)`, end='' )723 print( ' ', end='' )724 print()725 except GeneratorExit:726 if g != 0 | b != 0:727 print()728 fmt = Fmt()729 `next( fmt )` # prime, next prewritten730 for i in range( 41 ):731 `fmt.send( 'a' );` # send to yield732 \end{python}733 \end{lrbox}734 \subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}735 \hspace{3pt}736 \vrule737 \hspace{3pt}738 \subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}739 \caption{Python generator}740 \label{f:PythonGenerator}741 742 \bigskip743 744 923 \begin{tabular}{@{}l|l@{}} 745 924 \begin{cfa}[aboveskip=0pt,belowskip=0pt] … … 748 927 `generator` Driver { 749 928 Status status; 750 unsignedchar byte, * msg; // communication751 unsignedint lnth, sum; // local state752 unsignedshort int crc;929 char byte, * msg; // communication 930 int lnth, sum; // local state 931 short int crc; 753 932 }; 754 933 void ?{}( Driver & d, char * m ) { d.msg = m; } … … 798 977 (The trivial cycle is a generator resuming itself.) 799 978 This control flow is similar to recursion for functions but without stack growth. 800 The steps for symmetric control-flow are creating, executing, and terminating the cycle.979 Figure~\ref{f:PingPongFullCoroutineSteps} shows the steps for symmetric control-flow are creating, executing, and terminating the cycle. 801 980 Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope. 802 981 (This issue occurs for any cyclic data structure.) 803 % The example creates all the generatorsand then assigns the partners that form the cycle.804 % Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.805 Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).982 The example creates the generators, @ping@/@pong@, and then assigns the partners that form the cycle. 983 % (Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.) 984 Once the cycle is formed, the program main resumes one of the generators, @ping@, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s). 806 985 Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example). 986 Note, the creator and starter may be different, \eg if the creator calls another function that starts the cycle. 807 987 The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack. 
808 Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the cost is the same as a function return. 809 Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer. 810 811 Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump. 812 This jump-starts at the top of the next generator main to re-execute the normal calling convention to make space on the stack for its local variables. 813 However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but subsequently jump forward. 814 This semantics is basically a tail-call optimization, which compilers already perform. 815 The example shows the assembly code to undo the generator's entry code before the direct jump. 816 This assembly code depends on what entry code is generated, specifically if there are local variables and the level of optimization. 817 To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time. 818 Nevertheless, it is possible to hand generate any symmetric generators for proof of concept and performance testing. 819 A compiler could also eliminate other artifacts in the generator simulation to further increase performance, \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it fork @clang@. 988 Also, since local variables are not retained in the generator function, there are no objects with destructors to be called, so the cost is the same as a function return. 989 Destructor cost occurs when the generator instance is deallocated by the creator. 820 990 821 991 \begin{figure} … … 824 994 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 825 995 `generator PingPong` { 996 int N, i; // local state 826 997 const char * name; 827 int N;828 int i; // local state829 998 PingPong & partner; // rebindable reference 830 999 }; 831 1000 832 1001 void `main( PingPong & pp )` with(pp) { 1002 1003 833 1004 for ( ; i < N; i += 1 ) { 834 1005 sout | name | i; … … 848 1019 \begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt] 849 1020 typedef struct PingPong { 1021 int restart, N, i; 850 1022 const char * name; 851 int N, i;852 1023 struct PingPong * partner; 853 void * next;854 1024 } PingPong; 855 #define PPCtor(name, N) { name,N,0,NULL,NULL}1025 #define PPCtor(name, N) {0, N, 0, name, NULL} 856 1026 void comain( PingPong * pp ) { 857 if ( pp->next ) goto *pp->next; 858 pp->next = &&cycle; 1027 static void * states[] = {&&s0, &&s1}; 1028 goto *states[pp->restart]; 1029 s0: pp->restart = 1; 859 1030 for ( ; pp->i < pp->N; pp->i += 1 ) { 860 1031 printf( "%s %d\n", pp->name, pp->i ); 861 1032 asm( "mov %0,%%rdi" : "=m" (pp->partner) ); 862 1033 asm( "mov %rdi,%rax" ); 863 asm( "popq %rbx" ); 1034 asm( "add $16, %rsp" ); 1035 asm( "popq %rbp" ); 864 1036 asm( "jmp comain" ); 865 cycle: ;1037 s1: ; 866 1038 } 867 1039 } … … 879 1051 \end{figure} 880 1052 881 Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines). 
1053 \begin{figure}
1054 \centering
1055 \input{FullCoroutinePhases.pstex_t}
1056 \vspace*{-10pt}
1057 \caption{Symmetric coroutine steps: Ping / Pong}
1058 \label{f:PingPongFullCoroutineSteps}
1059 \end{figure}
1060 
1061 Figure~\ref{f:CPingPongSim} shows the C implementation of the \CFA symmetric generator, where there is still only one additional field, @restart@, but @resume@ is more complex because it does a forward rather than backward jump.
1062 Before the jump, the parameter for the next call @partner@ is placed into the register used for the first parameter, @rdi@, and the remaining registers are reset for a return.
1063 The @jmp comain@ restarts the function but with a different parameter, so the new call's behaviour depends on the state of the generator type, \ie branch to the restart location with different data state.
1064 While the semantics of the forward call is a tail-call optimization, which compilers perform, the generator state is different on each call rather than a common state for a tail-recursive function (\ie the parameter to the function never changes during the forward calls).
1065 However, this assembler code depends on what entry code is generated, specifically if there are local variables and the level of optimization.
1066 Hence, internal compiler support is necessary for any forward call (or backwards return), \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it eventually fork @clang@.
1067 For this reason, \CFA does not support general symmetric generators at this time, but it is possible to hand generate any symmetric generators (as in Figure~\ref{f:CPingPongSim}) for proof of concept and performance testing.
1068 
1069 Finally, part of this generator work was inspired by the recent \CCtwenty coroutine proposal~\cite{C++20Coroutine19}, which uses the general term coroutine to mean generator.
882 1070 Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators.
883 1071 An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax:
… …
894 1082 \label{s:Coroutine}
895 1083 
896 1084 Stackful coroutines (Table~\ref{t:ExecutionPropertyComposition} case 5) extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
897 1085 A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
898 Coroutine generality results in higher cost for creation, due to dynamic stack allocation, execution, due to context switching among stacks, and terminating, due to possible stack unwinding and dynamic stack deallocation.
1086 Coroutine generality results in higher cost for creation, due to dynamic stack allocation, for execution, due to context switching among stacks, and for terminating, due to possible stack unwinding and dynamic stack deallocation.
899 1087 A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators.
900 1088 901 1089 First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main. 902 \begin{description} 903 \item[Fibonacci] 904 Move the declaration of @fn1@ to the start of coroutine main. 1090 \begin{center} 1091 \begin{tabular}{@{}l|l|l|l@{}} 1092 \multicolumn{1}{c|}{Fibonacci} & \multicolumn{1}{c|}{Formatter} & \multicolumn{1}{c|}{Device Driver} & \multicolumn{1}{c}{PingPong} \\ 1093 \hline 905 1094 \begin{cfa}[xleftmargin=0pt] 906 void main( Fib & fib ) with(fib) {1095 void main( Fib & fib ) ... 907 1096 `int fn1;` 908 \end{cfa} 909 \item[Formatter] 910 Move the declaration of @g@ and @b@ to the for loops in the coroutine main. 1097 1098 1099 \end{cfa} 1100 & 911 1101 \begin{cfa}[xleftmargin=0pt] 912 1102 for ( `g`; 5 ) { 913 1103 for ( `b`; 4 ) { 914 \end{cfa} 915 \item[Device Driver] 916 Move the declaration of @lnth@ and @sum@ to their points of initialization. 1104 1105 1106 \end{cfa} 1107 & 917 1108 \begin{cfa}[xleftmargin=0pt] 918 status = CONT; 919 `unsigned int lnth = 0, sum = 0;` 920 ... 921 `unsigned short int crc = byte << 8;` 922 \end{cfa} 923 \item[PingPong] 924 Move the declaration of @i@ to the for loop in the coroutine main. 1109 status = CONT; 1110 `int lnth = 0, sum = 0;` 1111 ... 1112 `short int crc = byte << 8;` 1113 \end{cfa} 1114 & 925 1115 \begin{cfa}[xleftmargin=0pt] 926 void main( PingPong & pp ) with(pp) {1116 void main( PingPong & pp ) ... 927 1117 for ( `i`; N ) { 928 \end{cfa} 929 \end{description} 1118 1119 1120 \end{cfa} 1121 \end{tabular} 1122 \end{center} 930 1123 It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver. 931 1124 \begin{cfa} 932 unsignedint Crc() {1125 int Crc() { 933 1126 `suspend;` 934 unsignedshort int crc = byte << 8;1127 short int crc = byte << 8; 935 1128 `suspend;` 936 1129 status = (crc | byte) == sum ? MSG : ECRC; … … 943 1136 944 1137 \begin{comment} 945 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @ next@.1138 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @restart@. 946 1139 Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main. 947 1140 The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@. 948 The interface function @ next@, takes a Fibonacci instance and context switches to it using @resume@;1141 The interface function @restart@, takes a Fibonacci instance and context switches to it using @resume@; 949 1142 on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned. 
950 1143 The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack; … … 1112 1305 \begin{figure} 1113 1306 \centering 1114 \lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}% allow $1115 1307 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}} 1116 1308 \begin{cfa} 1117 1309 `coroutine` Prod { 1118 Cons & c; // communication1310 Cons & c; $\C[1.5in]{// communication}$ 1119 1311 int N, money, receipt; 1120 1312 }; 1121 1313 void main( Prod & prod ) with( prod ) { 1122 // 1st resume starts here 1123 for ( i; N ) { 1314 for ( i; N ) { $\C{// 1st resume}\CRT$ 1124 1315 int p1 = random( 100 ), p2 = random( 100 ); 1125 sout | p1 | " " | p2;1126 1316 int status = delivery( c, p1, p2 ); 1127 sout | " $" | money | nl | status;1128 1317 receipt += 1; 1129 1318 } 1130 1319 stop( c ); 1131 sout | "prod stops";1132 1320 } 1133 1321 int payment( Prod & prod, int money ) { … … 1150 1338 \begin{cfa} 1151 1339 `coroutine` Cons { 1152 Prod & p; // communication1340 Prod & p; $\C[1.5in]{// communication}$ 1153 1341 int p1, p2, status; 1154 1342 bool done; 1155 1343 }; 1156 1344 void ?{}( Cons & cons, Prod & p ) { 1157 &cons.p = &p; // reassignable reference1345 &cons.p = &p; $\C{// reassignable reference}$ 1158 1346 cons.[status, done ] = [0, false]; 1159 1347 } 1160 1348 void main( Cons & cons ) with( cons ) { 1161 // 1st resume starts here 1162 int money = 1, receipt; 1349 int money = 1, receipt; $\C{// 1st resume}\CRT$ 1163 1350 for ( ; ! done; ) { 1164 sout | p1 | " " | p2 | nl | " $" | money;1165 1351 status += 1; 1166 1352 receipt = payment( p, money ); 1167 sout | " #" | receipt;1168 1353 money += 1; 1169 1354 } 1170 sout | "cons stops";1171 1355 } 1172 1356 int delivery( Cons & cons, int p1, int p2 ) { … … 1189 1373 This example is illustrative because both producer/consumer have two interface functions with @resume@s that suspend execution in these interface (helper) functions. 1190 1374 The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced. 1191 The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it. 1192 @prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer. 1193 1375 The call to @start@ is the first @resume@ of @prod@, which remembers the program main as the starter and creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it. 1376 @prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer's @deliver@ function to transfer the values, and printing the status returned from the consumer. 1194 1377 The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status. 1195 On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine. 
1196 The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
1197 The call from the consumer to @payment@ introduces the cycle between producer and consumer.
1198 When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
1199 The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
1200 @delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
1201 The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
1202 The context switch to the consumer continues in @payment@.
1203 The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
1204 The loop then repeats calling @payment@, where each call resumes the producer coroutine.
1378 Similarly on the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
1379 The symmetric coroutine cycle forms when the consumer calls the producer's @payment@ function, which resumes the producer in the consumer's @delivery@ function.
1380 When the producer calls @delivery@ again, it resumes the consumer in the @payment@ function.
1381 Both interface functions then return to their corresponding coroutine-main functions for the next cycle.
1205 1382 Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main and the coroutine mains for @prod@ and @cons@ during the cycling.
1383 As a consequence of a coroutine retaining its last resumer for suspending back, these reverse pointers allow @suspend@ to cycle \emph{backwards} around a symmetric coroutine cycle.
1206 1384 
1207 1385 \begin{figure}
… …
1212 1390 \caption{Producer / consumer runtime stacks}
1213 1391 \label{f:ProdConsRuntimeStacks}
1214 
1215 \medskip
1216 
1217 \begin{center}
1218 \input{FullCoroutinePhases.pstex_t}
1219 \end{center}
1220 \vspace*{-10pt}
1221 \caption{Ping / Pong coroutine steps}
1222 \label{f:PingPongFullCoroutineSteps}
1223 1392 \end{figure}
1224 1393 
1225 1394 Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
1226 Furthermore, each deallocated coroutine must guarantee all destructors are run for object allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
1395 Furthermore, each deallocated coroutine must execute all destructors for objects allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
1227 When a coroutine's main ends, its stack is already unwound so any stack allocated objects with destructors have been finalized.
1396 In the example, termination begins with the producer's loop stopping after N iterations and calling the consumer's @stop@ function, which sets the @done@ flag and resumes the consumer in function @payment@, terminating that call and then the consumer's loop in its coroutine main.
1397 % (Not shown is having @prod@ raise a nonlocal @stop@ exception at @cons@ after it finishes generating values and suspend back to @cons@, which catches the @stop@ exception to terminate its loop.) 1398 When the consumer's main ends, its stack is already unwound so any stack allocated objects with destructors are finalized. 1399 The question now is where does control continue? 1400 1228 1401 The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator. 1229 1402 However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent. 1230 1403 Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem). 1231 1404 Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines. 1232 1233 Our solution is to context switch back to the first resumer (starter) once the coroutine ends.1405 Our solution is to retain a coroutine's starter (first resumer), and context switch back to the starter when the coroutine ends. 1406 Hence, the consumer restarts its first resumer, @prod@, in @stop@, and when the producer ends, it restarts its first resumer, program main, in @start@ (see dashed lines from the end of the coroutine mains in Figure~\ref{f:ProdConsRuntimeStacks}). 1234 1407 This semantics works well for the most common asymmetric and symmetric coroutine usage patterns. 1235 For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer. 1236 All previous generators converted to coroutines have this property. 1237 For symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle. 1238 Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter. 1239 Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end. 1408 For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer; 1409 for symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle. 1240 1410 For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination. 1241 1411 1242 The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first. 1243 Assume generator @PingPong@ is converted to a coroutine. 1244 Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version. 1245 The program main creates (declares) coroutine instances @ping@ and @pong@. 1246 Next, program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter. 1247 Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times. 1248 By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first. 
1249 If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter the program main in function @start@.
1250 If @ping@ ends first, it resumes its starter the program main in function @start@.
1251 Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
1252 Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
1253 For generators, deallocation runs the destructors for all objects in the generator type.
1254 For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine.
1255 Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it so the cancellation exception can propagate to the root of the coroutine's stack destroying all local variable on the stack.
1256 So the \CFA semantics for the generator and coroutine, ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
1257 Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.
1258 
1259 Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
1260 A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
1261 These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
1262 However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
1263 To prevent losing this information, a self-resume does not overwrite the last resumer.
1412 Note, the producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
1413 Assume generator @PingPong@ in Figure~\ref{f:PingPongSymmetricGenerator} is converted to a coroutine.
1414 Unlike generators, coroutines have a starter structure with multiple levels, where the program main starts @ping@ and @ping@ starts @pong@.
1415 By adjusting $N$ for either @ping@/@pong@, it is possible to have either finish first.
1416 If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter the program main on return;
1417 if @ping@ ends first, it resumes its starter the program main on return.
1418 Regardless of the cycle complexity, the starter structure always leads back to the program main, but the path can be entered at an arbitrary point.
1419 Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, running any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack.
1420 Hence, the \CFA termination semantics for the generator and coroutine ensure correct deallocation semantics, regardless of the coroutine's state (terminated or active), like any other aggregate object.
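For readers more familiar with POSIX than with the \CFA runtime, the starter rule can be pictured with a small @ucontext@ sketch (an illustration under those assumptions, not the \CFA implementation): the @uc_link@ field records the starter, so when the coroutine main returns, control transfers back to the context that first resumed it.
\begin{cfa}
#include <stdio.h>
#include <ucontext.h>

static ucontext_t starter, co;                  // starter context and coroutine context
static char stack[64 * 1024];                   // fixed-size coroutine stack

static void co_main( void ) {
	printf( "coroutine main runs\n" );
	swapcontext( &co, &starter );               // suspend back to starter
	printf( "coroutine main ends\n" );
}                                               // return continues at uc_link (the starter)

int main( void ) {
	getcontext( &co );
	co.uc_stack.ss_sp = stack;
	co.uc_stack.ss_size = sizeof(stack);
	co.uc_link = &starter;                      // remember the starter for termination
	makecontext( &co, co_main, 0 );
	swapcontext( &starter, &co );               // first resume (also fills in starter)
	swapcontext( &starter, &co );               // resume again; co_main then terminates
	printf( "back at starter after termination\n" );
}
\end{cfa}
\CFA achieves the same effect by retaining the first resumer (starter) and context switching to it when the coroutine main ends, with the addition of the deallocation and unwinding guarantees described above.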
1264 1421 1265 1422 … … 1292 1449 Users wanting to extend custom types or build their own can only do so in ways offered by the language. 1293 1450 Furthermore, implementing custom types without language support may display the power of a programming language. 1294 \CFA blends the two approaches, providing custom type for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with builtin and library .1451 \CFA blends the two approaches, providing custom type for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with builtin and library (@java.util.concurrent@) monitors. 1295 1452 1296 1453 Part of the mechanism to generalize custom types is the \CFA trait~\cite[\S~2.3]{Moss18}, \eg the definition for custom-type @coroutine@ is anything satisfying the trait @is_coroutine@, and this trait both enforces and restricts the coroutine-interface functions. … … 1302 1459 forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & ); 1303 1460 \end{cfa} 1304 Note, copying generators/coroutines/threads is not meaningful. 1305 For example, both the resumer and suspender descriptors can have bidirectional pointers; 1306 copying these coroutines does not update the internal pointers so behaviour of both copies would be difficult to understand. 1307 Furthermore, two coroutines cannot logically execute on the same stack. 1308 A deep coroutine copy, which copies the stack, is also meaningless in an unmanaged language (no garbage collection), like C, because the stack may contain pointers to object within it that require updating for the copy. 1461 Note, copying generators/coroutines/threads is undefined because muliple objects cannot execute on a shared stack and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy. 1309 1462 The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference (pointer). 1310 1463 The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the coroutine descriptor from its handle. … … 1350 1503 The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization. 1351 1504 1352 Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a t askis similar).1505 Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a thread is similar). 1353 1506 The coroutine handle is the @coroutine@ instance containing programmer specified type global/communication variables across interface functions. 1354 1507 The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate. 1355 1508 The coroutine stack can appear in a number of locations and be fixed or variable sized. 
1356 Hence, the coroutine's stack could be a VLS\footnote{1357 We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.1509 Hence, the coroutine's stack could be a variable-length structure (VLS)\footnote{ 1510 We are examining VLSs, where fields can be variable-sized structures or arrays. 1358 1511 Once allocated, a VLS is fixed sized.} 1359 1512 on the allocating stack, provided the allocating stack is large enough. 1360 1513 For a VLS stack allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup). 1361 For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs.1362 With heap stack allocation, it is also possible to use a split (segmented) stack calling convention, available with gcc and clang, so the stack is variable sized.1514 For stack allocation in the heap, allocation/deallocation is an expensive allocation, where the heap can be a shared resource, modulo any stack constructor costs. 1515 It is also possible to use a split (segmented) stack calling convention, available with gcc and clang, allowing a variable-sized stack via a set of connected blocks in the heap. 1363 1516 Currently, \CFA supports stack/heap allocated descriptors but only fixed-sized heap allocated stacks. 1364 1517 In \CFA debug-mode, the fixed-sized stack is terminated with a write-only page, which catches most stack overflows. 1365 1518 Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students. 1366 Split-stack allocation is under development but requires recompilation of legacy code, which may be impossible.1519 Split-stack allocation is under development but requires recompilation of legacy code, which is not always possible. 1367 1520 1368 1521 \begin{figure} … … 1378 1531 1379 1532 Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack. 1380 A single thread with multiple call stacks, \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.1381 In coroutining, coroutinesself-schedule the thread across stacks so execution is deterministic.1533 A single thread with multiple stacks, \ie coroutining, does \emph{not} imply concurrency~\cite[\S~3]{Buhr05a}. 1534 Coroutining self-schedule the thread across stacks so execution is deterministic. 1382 1535 (It is \emph{impossible} to generate a concurrency error when coroutining.) 1383 However, coroutines are a stepping stone towards concurrency. 1384 1385 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a}. 1536 1537 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}. 1386 1538 Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}. 1387 The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}. 
1388 Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions often based on a timer interrupt, called \newterm{preemptive scheduling}. 1389 While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches. 1539 The resulting execution system now follows a cooperative threading-model~\cite{Adya02,libdill} because context-switching points to the scheduler (blocking) are known, but the next unblocking point is unknown due to the scheduler. 1540 Adding \newterm{preemption} introduces \newterm{non-cooperative} or \newterm{preemptive} scheduling, where context switching points to the scheduler are unknown as they can occur randomly between any two instructions often based on a timer interrupt. 1390 1541 Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors. 1391 1542 The reason is that the scheduler/runtime have complete knowledge about resources and how to best utilized them. 1392 However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} , which restrict nondeterminism for correctness;1543 However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}~\cite[\S~4]{Buhr05a}, which restrict nondeterminism for correctness; 1393 1544 otherwise, it is impossible to write meaningful concurrent programs. 1394 1545 Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow. 1395 1546 1396 A scheduler can either be astackless or stackful.1547 A scheduler can also be stackless or stackful. 1397 1548 For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch. 1398 1549 For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches. … … 1403 1554 \label{s:threads} 1404 1555 1405 Threading needs the ability to start a thread and wait for its completion.1556 Threading (Table~\ref{t:ExecutionPropertyComposition} case 11) needs the ability to start a thread and wait for its completion. 1406 1557 A common API for this ability is @fork@ and @join@. 1407 \begin{cquote} 1408 \begin{tabular}{@{}lll@{}} 1409 \multicolumn{1}{c}{\textbf{Java}} & \multicolumn{1}{c}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\ 1410 \begin{cfa} 1411 class MyTask extends Thread {...} 1412 mytask t = new MyTask(...); 1558 \vspace{4pt} 1559 \par\noindent 1560 \begin{tabular}{@{}l|l|l@{}} 1561 \multicolumn{1}{c|}{\textbf{Java}} & \multicolumn{1}{c|}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\ 1562 \hline 1563 \begin{cfa} 1564 class MyThread extends Thread {...} 1565 mythread t = new MyThread(...); 1413 1566 `t.start();` // start 1414 1567 // concurrency … … 1417 1570 & 1418 1571 \begin{cfa} 1419 class MyT ask{ ... } // functor1420 MyT ask mytask;1421 `thread t( myt ask, ... );` // start1572 class MyThread { ... } // functor 1573 MyThread mythread; 1574 `thread t( mythread, ... 
);` // start 1422 1575 // concurrency 1423 1576 `t.join();` // wait … … 1432 1585 \end{cfa} 1433 1586 \end{tabular} 1434 \end{cquote} 1587 \vspace{1pt} 1588 \par\noindent 1435 1589 \CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics (allocation/deallocation), where threads implicitly @fork@ after construction and @join@ before destruction. 1436 1590 \begin{cfa} 1437 thread MyT ask{};1438 void main( MyT ask& this ) { ... }1591 thread MyThread {}; 1592 void main( MyThread & this ) { ... } 1439 1593 int main() { 1440 MyT askteam`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$1594 MyThread team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$ 1441 1595 // concurrency 1442 1596 } $\C{// deallocate stack-based threads, implicit joins before destruction}$ … … 1446 1600 Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation. 1447 1601 \begin{cfa} 1448 MyT ask* factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$1602 MyThread * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$ 1449 1603 int main() { 1450 MyT ask* team = factory( 10 );1604 MyThread * team = factory( 10 ); 1451 1605 // concurrency 1452 1606 `delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$ … … 1494 1648 1495 1649 Threads in \CFA are user level run by runtime kernel threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism. 1496 Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the t ask-interface functions.1650 Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the thread-interface functions. 1497 1651 \begin{cquote} 1498 1652 \begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}} … … 1525 1679 \label{s:MutualExclusionSynchronization} 1526 1680 1527 Unrestricted nondeterminism is meaningless as there is no way to know when the result is completed without synchronization.1681 Unrestricted nondeterminism is meaningless as there is no way to know when a result is completed and safe to access. 1528 1682 To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}. 1529 Some concurrent systems eliminate mutable shared-state by switching to stateless communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP,Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell). 1683 The shared data protected by mutual exlusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple (only 1 thread) or complex (only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71}). 1684 Without synchronization control in a critical section, an arriving thread can barge ahead of preexisting waiter threads resulting in short/long-term starvation, staleness/freshness problems, and/or incorrect transfer of data. 
1685 Preventing or detecting barging is a challenge with low-level locks, but made easier through higher-level constructs.
1686 This challenge is often split into two different approaches: barging \emph{avoidance} and \emph{prevention}.
1687 Approaches that unconditionally release a lock for competing threads to acquire must use barging avoidance with flag/counter variable(s) to force barging threads to wait;
1688 approaches that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.
1689 
1690 At the lowest level, concurrent control is provided by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg spin locks, semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
1691 However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
1692 A significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
1693 Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
1694 Some concurrent systems eliminate mutable shared-state by switching to non-shared communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP,Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
1530 1695 However, these approaches introduce a new communication mechanism for concurrency different from the standard communication using function call/return.
1531 1696 Hence, a programmer must learn and manipulate two sets of design/programming patterns.
1532 1697 While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
1533 In contrast, approaches based on stateful models more closely resemble the standard call/return programming model, resulting in a single programming paradigm.
1534 
1535 At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
1536 However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
1537 A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
1538 While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it is rejected as the core paradigm for concurrency in \CFA.
1539 
1540 One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
1541 First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
1542 In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to simulate monitors. 1543 For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed. 1544 1545 1546 \subsection{Mutual Exclusion} 1547 1548 A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a \newterm{critical section}~\cite{Dijkstra65}, which is enforced by \newterm{simple mutual-exclusion}. 1549 The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session use the resource simultaneously and different sessions are segregated, which is enforced by \newterm{complex mutual-exclusion} providing the correct kind and number of threads using a group critical-section. 1550 The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers share a session but writers have a unique session. 1551 1552 However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. 1553 Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use. 1554 Ease of use comes by either guaranteeing some problems cannot occur, \eg deadlock free, or by offering a more explicit coupling between shared data and critical section. 1555 For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations, \eg reading/writing, for numerical types. 1556 However, a significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock. 1557 Easing composability is another feature higher-level mutual-exclusion mechanisms can offer. 1558 1559 1560 \subsection{Synchronization} 1561 1562 Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships. 1563 Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use; 1564 higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, \eg receive-specific versus receive-any thread in message passing or offering specialized solutions, \eg barrier lock. 1565 Often synchronization is used to order access to a critical section, \eg ensuring a waiting writer thread enters the critical section before a calling reader thread. 1566 If the calling reader is scheduled before the waiting writer, the reader has barged. 1567 Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value preventing the previous value from ever being read (lost computation). 1568 Preventing or detecting barging is an involved challenge with low-level locks, which is made easier through higher-level constructs. 1569 This challenge is often split into two different approaches: barging avoidance and prevention. 
1570 Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait; 1571 algorithms that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely. 1698 In contrast, approaches based on shared-state models more closely resemble the standard call/return programming model, resulting in a single programming paradigm. 1699 Finally, a newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}. 1700 While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive~\cite{Cascaval08,Boehm09} to be the main concurrency paradigm for system languages. 1572 1701 1573 1702 … … 1575 1704 \label{s:Monitor} 1576 1705 1577 A \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state. 1578 More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation). 1706 One of the most natural, elegant, efficient, high-level mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor} (Table~\ref{t:ExecutionPropertyComposition} case 2). 1707 First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}. 1708 In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor. 1709 For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed. 1710 1711 Specifically, a \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state. 1712 More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope by call/return, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation). 1579 1713 Restricting acquire/release points eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency. 1580 1714 \CFA uses a custom @monitor@ type and leverages declaration semantics (deallocation) to protect active or waiting threads in a monitor. 1581 1715 1582 1716 The following is a \CFA monitor implementation of an atomic counter. 
1583 \begin{cfa} [morekeywords=nomutex]1717 \begin{cfa} 1584 1718 `monitor` Aint { int cnt; }; $\C[4.25in]{// atomic integer counter}$ 1585 int ++?( Aint & `mutex`$\(_{opt}\)$ this ) with( this ) { return ++cnt; } $\C{// increment}$ 1586 int ?=?( Aint & `mutex`$\(_{opt}\)$ lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions with int}\CRT$ 1587 int ?=?( int & lhs, Aint & `mutex`$\(_{opt}\)$ rhs ) with( rhs ) { lhs = cnt; } 1588 \end{cfa} 1589 % The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized. 1590 % (While a constructor may publish its address into a global variable, doing so generates a race-condition.) 1591 The prefix increment operation, @++?@, is normally @mutex@, indicating mutual exclusion is necessary during function execution, to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type. 1592 The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@; 1593 these operations only need @mutex@, if reading/writing the implementation type is not atomic. 1594 The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@. 1719 int ++?( Aint & `mutex` this ) with( this ) { return ++cnt; } $\C{// increment}$ 1720 int ?=?( Aint & `mutex` lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions with int, mutex optional}\CRT$ 1721 int ?=?( int & lhs, Aint & `mutex` rhs ) with( rhs ) { lhs = cnt; } 1722 \end{cfa} 1723 The operators use the parameter-only declaration type-qualifier @mutex@ to mark which parameters require locking during function execution to protect from race conditions. 1724 The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@. 1725 (These operations only need @mutex@, if reading/writing the implementation type is not atomic.) 1726 The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics. 1595 1727 \begin{cfa} 1596 1728 int i = 0, j = 0, k = 5; … … 1600 1732 i = x; j = y; k = z; 1601 1733 \end{cfa} 1734 Note, like other concurrent programming languages, \CFA has specializations for the basic types using atomic instructions for performance and a general trait similar to the \CC template @std::atomic@. 1602 1735 1603 1736 \CFA monitors have \newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other interface functions. 1737 \newpage 1604 1738 \begin{cfa} 1605 1739 monitor M { ... } m; … … 1610 1744 \end{cfa} 1611 1745 \CFA monitors also ensure the monitor lock is released regardless of how an acquiring function ends (normal or exceptional), and returning a shared variable is safe via copying before the lock is released. 1612 Similar safety is offered by \emph{explicit} mechanisms like \CC RAII; 1613 monitor \emph{implicit} safety ensures no programmer usage errors. 1746 Similar safety is offered by \emph{explicit} opt-in disciplines like \CC RAII versus the monitor \emph{implicit} language-enforced safety guarantee ensuring no programmer usage errors. 
1614 1747 Furthermore, RAII mechanisms cannot handle complex synchronization within a monitor, where the monitor lock may not be released on function exit because it is passed to an unblocking thread; 1615 1748 RAII is purely a mutual-exclusion mechanism (see Section~\ref{s:Scheduling}). … … 1637 1770 \end{cquote} 1638 1771 The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference (pointer). 1639 % Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data.1640 % Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity.1641 1772 Similarly, the function definitions ensures there is a mechanism to get (read) the monitor descriptor from its handle, and a special destructor to prevent deallocation if a thread using the shared data. 1642 1773 The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics. … … 1650 1781 For example, a monitor may be passed through multiple helper functions before it is necessary to acquire the monitor's mutual exclusion. 1651 1782 1652 The benefit of mandatory monitor qualifiers is self-documentation, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant. 1653 Instead, the semantics has one qualifier as the default and the other required. 1654 For example, make the safe @mutex@ qualifier the default because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors. 1655 Alternatively, make the unsafe \lstinline[morekeywords=nomutex]@nomutex@ qualifier the default because it is the \emph{normal} parameter semantics while @mutex@ parameters are rare. 1656 Providing a default qualifier implies knowing whether a parameter is a monitor. 1657 Since \CFA relies heavily on traits as an abstraction mechanism, types can coincidentally match the monitor trait but not be a monitor, similar to inheritance where a shape and playing card can both be drawable. 1658 For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@. 1783 \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@, because @mutex@ parameters are rare and no keyword is the \emph{normal} parameter semantics. 1784 Hence, @mutex@ parameters are documentation, at the function and its prototype, to both programmer and compiler, without other redundant keywords. 1785 Furthermore, \CFA relies heavily on traits as an abstraction mechanism, so the @mutex@ qualifier prevents coincidentally matching of a monitor trait with a type that is not a monitor, similar to coincidental inheritance where a shape and playing card can both be drawable. 1659 1786 1660 1787 The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@. … … 1670 1797 Function @f3@ has a multiple object matrix, and @f4@ a multiple object data structure. 1671 1798 While shown shortly, multiple object acquisition is possible, but the number of objects must be statically known. 
1672 Therefore, \CFA only acquires one monitor per parameter with at most one level of indirection, excluding pointers as it is impossible to statically determine the size.1799 Therefore, \CFA only acquires one monitor per parameter with exactly one level of indirection, and exclude pointer types to unknown sized arrays. 1673 1800 1674 1801 For object-oriented monitors, \eg Java, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@. … … 1677 1804 While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.} 1678 1805 called \newterm{bulk acquire}. 1679 \CFA guarantees acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitorsis safe from deadlock.1806 \CFA guarantees bulk acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors in a bulk acquire is safe from deadlock. 1680 1807 Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking. 1681 1808 A \CFA programmer only has to manage when to acquire mutual exclusion; … … 1697 1824 void transfer( BankAccount & `mutex` my, 1698 1825 BankAccount & `mutex` your, int me2you ) { 1699 1826 // bulk acquire 1700 1827 deposit( my, -me2you ); // debit 1701 1828 deposit( your, me2you ); // credit … … 1727 1854 void transfer( BankAccount & my, 1728 1855 BankAccount & your, int me2you ) { 1729 `scoped_lock lock( my.m, your.m );` 1856 `scoped_lock lock( my.m, your.m );` // bulk acquire 1730 1857 deposit( my, -me2you ); // debit 1731 1858 deposit( your, me2you ); // credit … … 1755 1882 \end{figure} 1756 1883 1757 Users can still force the acquiring order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@.1884 Users can still force the acquiring order by using or not using @mutex@. 1758 1885 \begin{cfa} 1759 1886 void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$ 1760 void bar( M & mutex m1, M & /* nomutex */ m2 ) { $\C{//acquire m1}$1887 void bar( M & mutex m1, M & m2 ) { $\C{// only acquire m1}$ 1761 1888 ... foo( m1, m2 ); ... $\C{// acquire m2}$ 1762 1889 } 1763 void baz( M & /* nomutex */ m1, M & mutex m2 ) { $\C{//acquire m2}$1890 void baz( M & m1, M & mutex m2 ) { $\C{// only acquire m2}$ 1764 1891 ... foo( m1, m2 ); ... $\C{// acquire m1}$ 1765 1892 } … … 1804 1931 % There are many aspects of scheduling in a concurrency system, all related to resource utilization by waiting threads, \ie which thread gets the resource next. 1805 1932 % Different forms of scheduling include access to processors by threads (see Section~\ref{s:RuntimeStructureCluster}), another is access to a shared resource by a lock or monitor. 1806 This section discusses monitor scheduling for waiting threads eligible for entry, \ie which thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling threads on virtual processors.) 1807 While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer may be full/empty so produce/consumer threads must block. 1808 Leaving the monitor and trying again (busy waiting) is impractical for high-level programming. 
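For contrast, a hypothetical polling interface (monitor @BB@ and function @tryInsert@ are invented purely for illustration) forces each producer to spin at its call site:
\begin{cfa}
monitor BB { ... };							$\C{// hypothetical bounded buffer}$
bool tryInsert( BB & mutex buf, int elm );	$\C{// fail rather than block when full}$
while ( ! tryInsert( buf, 3 ) );			$\C{// busy wait: repeatedly enter and leave the monitor}$
\end{cfa}
Each failed call acquires and releases the monitor only to discover the buffer is still full, wasting both the processor and the mutual-exclusion mechanism.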
1809 Monitors eliminate busy waiting by providing synchronization to schedule threads needing access to the shared data, where threads block versus spinning. 1933 This section discusses scheduling for waiting threads eligible for monitor entry, \ie which user thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling kernel threads on virtual processors.) 1934 While monitor mutual-exclusion provides safe access to its shared data, the data may indicate a thread cannot proceed, \eg a bounded buffer may be full/\-empty so produce/consumer threads must block. 1935 Leaving the monitor and retrying (busy waiting) is impractical for high-level programming. 1936 1937 Monitors eliminate busy waiting by providing synchronization within the monitor critical-section to schedule threads needing access to the shared data, where threads block versus spin. 1810 1938 Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling. 1811 \newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry. 1812 Finally, \CFA monitors do not allow calling threads to barge ahead of signalled threads, which simplifies synchronization among threads in the monitor and increases correctness. 1813 If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring additional flags and multiple unblock/block cycles. 1814 In fact, signals-as-hints is completely opposite from that proposed by Hoare in the seminal paper on monitors~\cite[p.~550]{Hoare74}. 1939 \newterm{Internal} (largely) schedules threads located \emph{inside} the monitor and is accomplished using condition variables with signal and wait. 1940 \newterm{External} (largely) schedules threads located \emph{outside} the monitor and is accomplished with the @waitfor@ statement. 1941 Note, internal scheduling has a small amount of external scheduling and vice versus, so the naming denotes where the majority of the block threads reside (inside or outside) for scheduling. 1942 For complex scheduling, the approaches can be combined, so there can be an equal number of threads waiting inside and outside. 1943 1944 \CFA monitors do not allow calling threads to barge ahead of signalled threads (via barging prevention), which simplifies synchronization among threads in the monitor and increases correctness. 1945 A direct consequence of this semantics is that unblocked waiting threads are not required to recheck the waiting condition, \ie waits are not in a starvation-prone busy-loop as required by the signals-as-hints style with barging. 1946 Preventing barging comes directly from Hoare's semantics in the seminal paper on monitors~\cite[p.~550]{Hoare74}. 1815 1947 % \begin{cquote} 1816 1948 % However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program. 
1817 1949 % It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74} 1818 1950 % \end{cquote} 1819 Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging. 1820 Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, which can cause thread starvation due to barging. 1821 1822 Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}). 1823 External calling threads block on the calling queue, if the monitor is occupied, otherwise they enter in FIFO order. 1824 Internal threads block on condition queues via @wait@ and reenter from the condition in FIFO order. 1825 Alternatively, internal threads block on urgent from the @signal_block@ or @waitfor@, and reenter implicitly when the monitor becomes empty, \ie, the thread in the monitor exits or waits. 1826 1827 There are three signalling mechanisms to unblock waiting threads to enter the monitor. 1828 Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion, so either the signaller or signallee can proceed. 1829 For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line). 1830 Multiple signals move multiple signallees to urgent until the condition is empty. 1831 When the signaller exits or waits, a thread blocked on urgent is processed before calling threads to prevent barging. 1951 Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit self barging. 1952 1953 Monitor mutual-exclusion means signalling cannot have the signaller and signalled thread in the monitor simultaneously, so only the signaller or signallee can proceed. 1954 Figure~\ref{f:MonitorScheduling} shows internal/external scheduling for the bounded-buffer examples in Figure~\ref{f:GenericBoundedBuffer}. 1955 For internal scheduling in Figure~\ref{f:BBInt}, the @signal@ moves the signallee (front thread of the specified condition queue) to urgent and the signaller continues (solid line). 1956 Multiple signals move multiple signallees to urgent until the condition queue is empty. 1957 When the signaller exits or waits, a thread is implicitly unblocked from urgent (if available) before unblocking a calling thread to prevent barging. 1832 1958 (Java conceptually moves the signalled thread to the calling queue, and hence, allows barging.) 1833 The alternative unblock is in the opposite order using @signal_block@, where the signaller is moved to urgent and the signallee continues (dashed line), and is implicitly unblocked from urgent when the signallee exits or waits. 1834 1835 For external scheduling, the condition queues are not used; 1836 instead threads are unblocked directly from the calling queue using @waitfor@ based on function names requesting mutual exclusion. 1837 (The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.) 1838 The @waitfor@ has the same semantics as @signal_block@, where the signalled thread executes before the signallee, which waits on urgent. 
1839 Executing multiple @waitfor@s from different signalled functions causes the calling threads to move to urgent. 1840 External scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited. 1841 Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling. 1842 If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread. 1843 We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next. 1844 Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling. 1959 Signal is used when the signaller is providing the cooperation needed by the signallee (\eg creating an empty slot in a buffer for a producer) and the signaller immediately exits the monitor to run concurrently (consume the buffer element) and passes control of the monitor to the signalled thread, which can immediately take advantage of the state change. 1960 Specifically, the @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list. 1961 Signalling is unconditional because signalling an empty condition queue does nothing. 1962 It is common to declare condition queues as monitor fields to prevent shared access, hence no locking is required for access as the queues are protected by the monitor lock. 1963 In \CFA, a condition queue can be created/stored independently. 1845 1964 1846 1965 \begin{figure} … … 1860 1979 \end{figure} 1861 1980 1862 Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@.1863 The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.1864 The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.1865 Signalling is unconditional, because signalling an empty condition variable does nothing.1866 It is common to declare condition variables as monitor fields to prevent shared access, hence no locking is required for access as the conditions are protected by the monitor lock.1867 In \CFA, a condition variable can be created/stored independently.1868 % To still prevent expensive locking on access, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, resulting in a low-cost boolean test to detect sharing from other monitors.1869 1870 % Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means:1871 % \begin{enumerate}1872 % \item1873 % The signalling thread returns immediately and the signalled thread continues.1874 % \item1875 % The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait).1876 % \item1877 % The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.1878 % \end{enumerate}1879 % The first approach is too 
restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}).1880 % \CFA supports the next two semantics as both are useful.1881 1882 1981 \begin{figure} 1883 1982 \centering … … 1891 1990 T elements[10]; 1892 1991 }; 1893 void ?{}( Buffer(T) & buf fer ) with(buffer) {1992 void ?{}( Buffer(T) & buf ) with(buf) { 1894 1993 front = back = count = 0; 1895 1994 } 1896 void insert( Buffer(T) & mutex buffer, T elem ) 1897 with(buffer){1898 if ( count == 10 ) `wait( empty )`; 1899 // insert el em into buffer1995 1996 void insert(Buffer(T) & mutex buf, T elm) with(buf){ 1997 if ( count == 10 ) `wait( empty )`; // full ? 1998 // insert elm into buf 1900 1999 `signal( full )`; 1901 2000 } 1902 T remove( Buffer(T) & mutex buf fer ) with(buffer) {1903 if ( count == 0 ) `wait( full )`; 1904 // remove el em from buffer2001 T remove( Buffer(T) & mutex buf ) with(buf) { 2002 if ( count == 0 ) `wait( full )`; // empty ? 2003 // remove elm from buf 1905 2004 `signal( empty )`; 1906 return el em;2005 return elm; 1907 2006 } 1908 2007 } 1909 2008 \end{cfa} 1910 2009 \end{lrbox} 1911 1912 % \newbox\myboxB1913 % \begin{lrbox}{\myboxB}1914 % \begin{cfa}[aboveskip=0pt,belowskip=0pt]1915 % forall( otype T ) { // distribute forall1916 % monitor Buffer {1917 %1918 % int front, back, count;1919 % T elements[10];1920 % };1921 % void ?{}( Buffer(T) & buffer ) with(buffer) {1922 % [front, back, count] = 0;1923 % }1924 % T remove( Buffer(T) & mutex buffer ); // forward1925 % void insert( Buffer(T) & mutex buffer, T elem )1926 % with(buffer) {1927 % if ( count == 10 ) `waitfor( remove, buffer )`;1928 % // insert elem into buffer1929 %1930 % }1931 % T remove( Buffer(T) & mutex buffer ) with(buffer) {1932 % if ( count == 0 ) `waitfor( insert, buffer )`;1933 % // remove elem from buffer1934 %1935 % return elem;1936 % }1937 % }1938 % \end{cfa}1939 % \end{lrbox}1940 2010 1941 2011 \newbox\myboxB 1942 2012 \begin{lrbox}{\myboxB} 1943 2013 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 2014 forall( otype T ) { // distribute forall 2015 monitor Buffer { 2016 2017 int front, back, count; 2018 T elements[10]; 2019 }; 2020 void ?{}( Buffer(T) & buf ) with(buf) { 2021 front = back = count = 0; 2022 } 2023 T remove( Buffer(T) & mutex buf ); // forward 2024 void insert(Buffer(T) & mutex buf, T elm) with(buf){ 2025 if ( count == 10 ) `waitfor( remove : buf )`; 2026 // insert elm into buf 2027 2028 } 2029 T remove( Buffer(T) & mutex buf ) with(buf) { 2030 if ( count == 0 ) `waitfor( insert : buf )`; 2031 // remove elm from buf 2032 2033 return elm; 2034 } 2035 } 2036 \end{cfa} 2037 \end{lrbox} 2038 2039 \subfloat[Internal scheduling]{\label{f:BBInt}\usebox\myboxA} 2040 \hspace{1pt} 2041 \vrule 2042 \hspace{3pt} 2043 \subfloat[External scheduling]{\label{f:BBExt}\usebox\myboxB} 2044 2045 \caption{Generic bounded buffer} 2046 \label{f:GenericBoundedBuffer} 2047 \end{figure} 2048 2049 The @signal_block@ provides the opposite unblocking order, where the signaller is moved to urgent and the signallee continues and a thread is implicitly unblocked from urgent when the signallee exits or waits (dashed line). 
2050 Signal block is used when the signallee is providing the cooperation needed by the signaller (\eg if the buffer is removed and a producer hands off an item to a consumer, as in Figure~\ref{f:DatingSignalBlock}) so the signaller must wait until the signallee unblocks, provides the cooperation, exits the monitor to run concurrently, and passes control of the monitor to the signaller, which can immediately take advantage of the state change. 2051 Using @signal@ or @signal_block@ can be a dynamic decision based on whether the thread providing the cooperation arrives before or after the thread needing the cooperation. 2052 2053 External scheduling in Figure~\ref{f:BBExt} simplifies internal scheduling by eliminating condition queues and @signal@/@wait@ (cases where it cannot are discussed shortly), and has existed in the programming language Ada for almost 40 years with variants in other languages~\cite{SR,ConcurrentC++,uC++}. 2054 While prior languages use external scheduling solely for thread interaction, \CFA generalizes it to both monitors and threads. 2055 External scheduling allows waiting for events from other threads while restricting unrelated events, that would otherwise have to wait on condition queues in the monitor. 2056 Scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion. 2057 Specifically, a thread calling the monitor is unblocked directly from the calling queue based on function names that can fulfill the cooperation required by the signaller. 2058 (The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.) 2059 Hence, the @waitfor@ has the same semantics as @signal_block@, where the signallee thread from the calling queue executes before the signaller, which waits on urgent. 2060 Now when a producer/consumer detects a full/empty buffer, the necessary cooperation for continuation is specified by indicating the next function call that can occur. 2061 For example, a producer detecting a full buffer must have cooperation from a consumer to remove an item so function @remove@ is accepted, which prevents producers from entering the monitor, and after a consumer calls @remove@, the producer waiting on urgent is \emph{implicitly} unblocked because it can now continue its insert operation. 2062 Hence, this mechanism is done in terms of control flow, next call, versus in terms of data, channels, as in Go/Rust @select@. 2063 While both mechanisms have strengths and weaknesses, \CFA uses the control-flow mechanism to be consistent with other language features. 2064 2065 Figure~\ref{f:ReadersWriterLock} shows internal/external scheduling for a readers/writer lock with no barging and threads are serviced in FIFO order to eliminate staleness/freshness among the reader/writer threads. 2066 For internal scheduling in Figure~\ref{f:RWInt}, the readers and writers wait on the same condition queue in FIFO order, making it impossible to tell if a waiting thread is a reader or writer. 2067 To clawback the kind of thread, a \CFA condition can store user data in the node for a blocking thread at the @wait@, \ie whether the thread is a @READER@ or @WRITER@. 2068 An unblocked reader thread checks if the thread at the front of the queue is a reader and unblock it, \ie the readers daisy-chain signal the next group of readers demarcated by the next writer or end of the queue. 
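A minimal client sketch (the @Reader@ and @Writer@ thread types and the shared @rw@ instance are hypothetical) shows how either variant of the lock is used; threads start at declaration and are joined at deallocation:
\begin{cfa}
ReadersWriter rw;						$\C{// shared lock instance}$
thread Reader {};
void main( Reader & r ) {
	StartRead( rw );					$\C{// may block behind a writer}$
	// read the shared resource
	EndRead( rw );
}
thread Writer {};
void main( Writer & w ) {
	StartWrite( rw );					$\C{// may block behind readers and writers}$
	// write the shared resource
	EndWrite( rw );
}
int main() {
	Reader r[3];  Writer w;				$\C{// start three readers and one writer}$
}										$\C{// implicit joins at block exit}$
\end{cfa}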
2069 For external scheduling in Figure~\ref{f:RWExt}, a waiting reader checks if a writer is using the resource, and if so, restricts further calls until the writer exits by calling @EndWrite@. 2070 The writer does a similar action for each reader or writer using the resource. 2071 Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@. 2072 2073 \begin{figure} 2074 \centering 2075 \newbox\myboxA 2076 \begin{lrbox}{\myboxA} 2077 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 2078 enum RW { READER, WRITER }; 1944 2079 monitor ReadersWriter { 1945 int rcnt, wcnt; // readers/writer using resource 2080 int rcnt, wcnt; // readers/writer using resource 2081 `condition RWers;` 1946 2082 }; 1947 2083 void ?{}( ReadersWriter & rw ) with(rw) { … … 1950 2086 void EndRead( ReadersWriter & mutex rw ) with(rw) { 1951 2087 rcnt -= 1; 2088 if ( rcnt == 0 ) `signal( RWers )`; 1952 2089 } 1953 2090 void EndWrite( ReadersWriter & mutex rw ) with(rw) { 1954 2091 wcnt = 0; 2092 `signal( RWers );` 1955 2093 } 1956 2094 void StartRead( ReadersWriter & mutex rw ) with(rw) { 1957 if ( wcnt > 0 ) `waitfor( EndWrite, rw );` 2095 if ( wcnt !=0 || ! empty( RWers ) ) 2096 `wait( RWers, READER )`; 1958 2097 rcnt += 1; 2098 if ( ! empty(RWers) && `front(RWers) == READER` ) 2099 `signal( RWers )`; // daisy-chain signalling 1959 2100 } 1960 2101 void StartWrite( ReadersWriter & mutex rw ) with(rw) { 1961 if ( wcnt > 0 ) `waitfor( EndWrite, rw );`1962 else while ( rcnt > 0 ) `waitfor( EndRead, rw );` 2102 if ( wcnt != 0 || rcnt != 0 ) `wait( RWers, WRITER )`; 2103 1963 2104 wcnt = 1; 1964 2105 } 1965 1966 2106 \end{cfa} 1967 2107 \end{lrbox} 1968 2108 1969 \subfloat[Generic bounded buffer, internal scheduling]{\label{f:BBInt}\usebox\myboxA} 1970 \hspace{3pt} 2109 \newbox\myboxB 2110 \begin{lrbox}{\myboxB} 2111 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 2112 2113 monitor ReadersWriter { 2114 int rcnt, wcnt; // readers/writer using resource 2115 2116 }; 2117 void ?{}( ReadersWriter & rw ) with(rw) { 2118 rcnt = wcnt = 0; 2119 } 2120 void EndRead( ReadersWriter & mutex rw ) with(rw) { 2121 rcnt -= 1; 2122 2123 } 2124 void EndWrite( ReadersWriter & mutex rw ) with(rw) { 2125 wcnt = 0; 2126 2127 } 2128 void StartRead( ReadersWriter & mutex rw ) with(rw) { 2129 if ( wcnt > 0 ) `waitfor( EndWrite : rw );` 2130 2131 rcnt += 1; 2132 2133 2134 } 2135 void StartWrite( ReadersWriter & mutex rw ) with(rw) { 2136 if ( wcnt > 0 ) `waitfor( EndWrite : rw );` 2137 else while ( rcnt > 0 ) `waitfor( EndRead : rw );` 2138 wcnt = 1; 2139 } 2140 \end{cfa} 2141 \end{lrbox} 2142 2143 \subfloat[Internal scheduling]{\label{f:RWInt}\usebox\myboxA} 2144 \hspace{1pt} 1971 2145 \vrule 1972 2146 \hspace{3pt} 1973 \subfloat[ Readers / writer lock, external scheduling]{\label{f:RWExt}\usebox\myboxB}1974 1975 \caption{ Internal / external scheduling}1976 \label{f: InternalExternalScheduling}2147 \subfloat[External scheduling]{\label{f:RWExt}\usebox\myboxB} 2148 2149 \caption{Readers / writer lock} 2150 \label{f:ReadersWriterLock} 1977 2151 \end{figure} 1978 2152 1979 Figure~\ref{f:BBInt} can be transformed into external scheduling by removing the condition variables and signals/waits, and adding the following lines at the locations of the current @wait@s in @insert@/@remove@, respectively. 
1980 \begin{cfa}[aboveskip=2pt,belowskip=1pt] 1981 if ( count == 10 ) `waitfor( remove, buffer )`; | if ( count == 0 ) `waitfor( insert, buffer )`; 1982 \end{cfa} 1983 Here, the producers/consumers detects a full/\-empty buffer and prevents more producers/consumers from entering the monitor until there is a free/empty slot in the buffer. 1984 External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion. 1985 If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer. 1986 Threads calling excluded functions block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor. 1987 Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@. 1988 The writer does a similar action for each reader or writer using the resource. 1989 Note, no new calls to @StarRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@. 1990 External scheduling allows waiting for events from other threads while restricting unrelated events, that would otherwise have to wait on conditions in the monitor. 1991 The mechnaism can be done in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels. 1992 While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features. 1993 % Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor functions (see Section~\ref{s:Multi-MonitorScheduling}). 1994 1995 Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling. 1996 The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers. 1997 A thread blocks until an appropriate partner arrives. 1998 The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property. 1999 For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner. 2000 For signal-block scheduling, the implicit urgent-queue replaces the explict @exchange@-condition and @signal_block@ puts the finding thread on the urgent condition and unblocks the matcher. 2001 The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable; 2002 as well, an arriving thread may not find a partner and must wait, which requires a condition variable, and condition variables imply internal scheduling. 2003 Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number. 2004 Putting loops around the @wait@s does not correct the problem; 2005 the simple solution must be restructured to account for barging. 
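Returning to the bounded buffer of Figure~\ref{f:GenericBoundedBuffer}, a minimal client sketch (thread types @Prod@ and @Cons@ are hypothetical) works unchanged with either the internal- or external-scheduling version, because blocking is encapsulated in @insert@/@remove@:
\begin{cfa}
Buffer(int) buf;						$\C{// shared bounded buffer}$
thread Prod {};
void main( Prod & p ) {
	for ( int i = 0; i < 100; i += 1 ) insert( buf, i );	$\C{// block when full}$
}
thread Cons {};
void main( Cons & c ) {
	for ( 100 ) remove( buf );			$\C{// block when empty}$
}
int main() {
	Prod p;  Cons c;					$\C{// start producer and consumer; implicit joins}$
}
\end{cfa}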
2153 Finally, external scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited. 2154 Internal schedulling performing multiple signalling results in unblocking from urgent in the reverse order from signalling. 2155 It is rare for the unblocking order to be important as an unblocked thread can be time-sliced immediately after leaving the monitor. 2156 If the unblocking order is important, multiple signalling can be restructured into daisy-chain signalling, where each thread signals the next thread. 2157 Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling. 2158 (Advanced @waitfor@ features are discussed in Section~\ref{s:ExtendedWaitfor}.) 2006 2159 2007 2160 \begin{figure} … … 2017 2170 }; 2018 2171 int girl( DS & mutex ds, int phNo, int ccode ) { 2019 if ( is_empty( Boys[ccode] ) ) {2172 if ( empty( Boys[ccode] ) ) { 2020 2173 wait( Girls[ccode] ); 2021 2174 GirlPhNo = phNo; … … 2044 2197 }; 2045 2198 int girl( DS & mutex ds, int phNo, int ccode ) { 2046 if ( is_empty( Boys[ccode] ) ) { // no compatible2199 if ( empty( Boys[ccode] ) ) { // no compatible 2047 2200 wait( Girls[ccode] ); // wait for boy 2048 2201 GirlPhNo = phNo; // make phone number available … … 2064 2217 \qquad 2065 2218 \subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB} 2066 \caption{Dating service }2067 \label{f:DatingService }2219 \caption{Dating service Monitor} 2220 \label{f:DatingServiceMonitor} 2068 2221 \end{figure} 2069 2222 2070 In summation, for internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread; 2071 the signaller enters the monitor and changes state, detects a waiting threads that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently. 2072 The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor. 2073 Blocking signal is the reverse, where the waiter is providing the cooperation for the signalling thread; 2074 the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place it on the urgent queue and unblock the waiter. 2075 The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state. 2223 Figure~\ref{f:DatingServiceMonitor} shows a dating service demonstrating non-blocking and blocking signalling. 2224 The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers. 2225 A thread blocks until an appropriate partner arrives. 2226 The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property. 2227 For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner. 2228 For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent stack and unblocks the matcher. 2229 2230 The dating service is an important example of a monitor that cannot be written using external scheduling. 
2231 First, because scheduling requires knowledge of calling parameters to make matching decisions, and parameters of calling threads are unavailable within the monitor. 2232 For example, a girl thread within the monitor cannot examine the @ccode@ of boy threads waiting on the calling queue to determine if there is a matching partner. 2233 Second, because a scheduling decision may be delayed when there is no immediate match, which requires a condition queue for waiting, and condition queues imply internal scheduling. 2234 For example, if a girl thread could determine there is no calling boy with the same @ccode@, it must wait until a matching boy arrives. 2235 Finally, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number. 2236 This situation shows rechecking the waiting condition and waiting again (signals-as-hints) fails, requiring significant restructured to account for barging. 2076 2237 2077 2238 Both internal and external scheduling extend to multiple monitors in a natural way. 2078 2239 \begin{cquote} 2079 \begin{tabular}{@{}l@{\hspace{ 3\parindentlnth}}l@{}}2240 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}} 2080 2241 \begin{cfa} 2081 2242 monitor M { `condition e`; ... }; … … 2088 2249 & 2089 2250 \begin{cfa} 2090 void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); 2251 void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); // overload rtn 2091 2252 void rtn$\(_2\)$( M & mutex m1 ); 2092 2253 void bar( M & mutex m1, M & mutex m2 ) { 2093 ... waitfor( `rtn` ); ... // $\LstCommentStyle{waitfor( rtn\(_1\),m1, m2 )}$2094 ... waitfor( `rtn , m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$2254 ... waitfor( `rtn`${\color{red}\(_1\)}$ ); ... // $\LstCommentStyle{waitfor( rtn\(_1\) : m1, m2 )}$ 2255 ... waitfor( `rtn${\color{red}\(_2\)}$ : m1` ); ... 2095 2256 } 2096 2257 \end{cfa} … … 2099 2260 For @wait( e )@, the default semantics is to atomically block the signaller and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@. 2100 2261 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@. 2101 Wait cannot statically verif iesthe released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@.2102 While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric.2262 Wait cannot statically verify the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@. 2263 While \CC supports bulk locking, @wait@ only accepts a single lock for a condition queue, so bulk locking with condition queues is asymmetric. 2103 2264 Finally, a signaller, 2104 2265 \begin{cfa} … … 2109 2270 must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging. 2110 2271 2111 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn ,m1, m2 )@.2112 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn ,m1 )@.2272 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn : m1, m2 )@. 
2273 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn : m1 )@. 2113 2274 @waitfor@ does statically verify the monitor types passed are the same as the acquired mutex-parameters of the given function or function pointer, hence the function (pointer) prototype must be accessible. 2114 2275 % When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted. … … 2118 2279 void rtn( M & mutex m ); 2119 2280 `int` rtn( M & mutex m ); 2120 waitfor( (`int` (*)( M & mutex ))rtn, m ); 2121 \end{cfa} 2122 2123 The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock. 2281 waitfor( (`int` (*)( M & mutex ))rtn : m ); 2282 \end{cfa} 2283 2284 The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock (see Section~\ref{s:MutexAcquisition}). 2285 \newpage 2124 2286 \begin{cfa} 2125 2287 void foo( M & mutex m1, M & mutex m2 ) { 2126 ... wait( `e, m1` ); ... $\C{// release m1, keeping m2 acquired )}$2127 void bar( M & mutex m1, M & mutex m2 ) { $\C{// must acquire m1 and m2 )}$2288 ... wait( `e, m1` ); ... $\C{// release m1, keeping m2 acquired}$ 2289 void bar( M & mutex m1, M & mutex m2 ) { $\C{// must acquire m1 and m2}$ 2128 2290 ... signal( `e` ); ... 2129 2291 \end{cfa} 2130 2292 The @wait@ only releases @m1@ so the signalling thread cannot acquire @m1@ and @m2@ to enter @bar@ and @signal@ the condition. 2131 While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitors, not being perfectly composable. 2132 2293 While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitor locking is not perfectly composable. 2133 2294 2134 2295 2135 2296 \subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}} 2297 \label{s:ExtendedWaitfor} 2136 2298 2137 2299 Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes. … … 2144 2306 Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking. 2145 2307 If both @timeout@ and @else@ clause are present, the @else@ must be conditional, or the @timeout@ is never triggered. 2146 There is also a traditional future wait queue (not shown) (\eg Microsoft (@WaitForMultipleObjects@)), to wait for a specified number of future elements in the queue. 2308 There is also a traditional future wait queue (not shown) (\eg Microsoft @WaitForMultipleObjects@), to wait for a specified number of future elements in the queue. 2309 Finally, there is a shorthand for specifying multiple functions using the same set of monitors: @waitfor( f, g, h : m1, m2, m3 )@. 2147 2310 2148 2311 \begin{figure} … … 2171 2334 The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true. 2172 2335 2173 An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restruct red from a monitor to a thread with the following @main@.2336 An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructured from a monitor to a thread with the following @main@. 
2174 2337 \begin{cfa} 2175 2338 void main( Buffer(T) & buffer ) with(buffer) { 2176 2339 for () { 2177 `waitfor( ^?{} ,buffer )` break;2178 or when ( count != 20 ) waitfor( insert ,buffer ) { ... }2179 or when ( count != 0 ) waitfor( remove ,buffer ) { ... }2340 `waitfor( ^?{} : buffer )` break; 2341 or when ( count != 20 ) waitfor( insert : buffer ) { ... } 2342 or when ( count != 0 ) waitfor( remove : buffer ) { ... } 2180 2343 } 2181 2344 // clean up … … 2269 2432 To support this efficient semantics (and prevent barging), the implementation maintains a list of monitors acquired for each blocked thread. 2270 2433 When a signaller exits or waits in a monitor function/statement, the front waiter on urgent is unblocked if all its monitors are released. 2271 Implementing a fast subset check for the necessary released monitors is important .2434 Implementing a fast subset check for the necessary released monitors is important and discussed in the following sections. 2272 2435 % The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. 2273 2436 2274 2437 2275 \subsection{Loose Object Definitions} 2276 \label{s:LooseObjectDefinitions} 2277 2278 In an object-oriented programming language, a class includes an exhaustive list of operations. 2279 A new class can add members via static inheritance but the subclass still has an exhaustive list of operations. 2280 (Dynamic member adding, \eg JavaScript~\cite{JavaScript}, is not considered.) 2281 In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@. 2282 2283 However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult. 2284 \begin{cfa} 2285 monitor M { ... }; // common type, included in .h file 2286 translation unit 1 2287 void `f`( M & mutex m ); 2288 void g( M & mutex m ) { waitfor( `f`, m ); } 2289 translation unit 2 2290 void `f`( M & mutex m ); $\C{// replacing f and g for type M in this translation unit}$ 2291 void `g`( M & mutex m ); 2292 void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } $\C{// extending type M in this translation unit}$ 2293 \end{cfa} 2294 The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information. 2438 \subsection{\texorpdfstring{\protect\lstinline@waitfor@ Implementation}{waitfor Implementation}} 2439 \label{s:waitforImplementation} 2440 2441 In a statically-typed object-oriented programming language, a class has an exhaustive list of members, even when members are added via static inheritance (see Figure~\ref{f:uCinheritance}). 2442 Knowing all members at compilation (even separate compilation) allows uniquely numbered them so the accept-statement implementation can use a fast/compact bit mask with $O(1)$ compare. 2443 2444 \begin{figure} 2445 \centering 2446 \begin{lrbox}{\myboxA} 2447 \begin{uC++}[aboveskip=0pt,belowskip=0pt] 2448 $\emph{translation unit 1}$ 2449 _Monitor B { // common type in .h file 2450 _Mutex virtual void `f`( ... ); 2451 _Mutex virtual void `g`( ... ); 2452 _Mutex virtual void w1( ... ) { ... _Accept(`f`, `g`); ... 
} 2453 }; 2454 $\emph{translation unit 2}$ 2455 // include B 2456 _Monitor D : public B { // inherit 2457 _Mutex void `h`( ... ); // add 2458 _Mutex void w2( ... ) { ... _Accept(`f`, `h`); ... } 2459 }; 2460 \end{uC++} 2461 \end{lrbox} 2462 2463 \begin{lrbox}{\myboxB} 2464 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 2465 $\emph{translation unit 1}$ 2466 monitor M { ... }; // common type in .h file 2467 void `f`( M & mutex m, ... ); 2468 void `g`( M & mutex m, ... ); 2469 void w1( M & mutex m, ... ) { ... waitfor(`f`, `g` : m); ... } 2470 2471 $\emph{translation unit 2}$ 2472 // include M 2473 extern void `f`( M & mutex m, ... ); // import f but not g 2474 void `h`( M & mutex m ); // add 2475 void w2( M & mutex m, ... ) { ... waitfor(`f`, `h` : m); ... } 2476 2477 \end{cfa} 2478 \end{lrbox} 2479 2480 \subfloat[\uC]{\label{f:uCinheritance}\usebox\myboxA} 2481 \hspace{3pt} 2482 \vrule 2483 \hspace{3pt} 2484 \subfloat[\CFA]{\label{f:CFinheritance}\usebox\myboxB} 2485 \caption{Member / Function visibility} 2486 \label{f:MemberFunctionVisibility} 2487 \end{figure} 2488 2489 However, the @waitfor@ statement in translation unit 2 (see Figure~\ref{f:CFinheritance}) cannot see function @g@ in translation unit 1 precluding a unique numbering for a bit-mask because the monitor type only carries the protected shared-data. 2490 (A possible way to construct a dense mapping is at link or load-time.) 2295 2491 Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array. 2296 Then, the same implementation approach used for the urgent stack is used for the calling queue. 2297 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers. 2298 (A possible way to construct a dense mapping is at link or load-time.) 2492 Then, the same implementation approach used for the urgent stack (see Section~\ref{s:Scheduling}) is used for the calling queue. 2493 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers. 2299 2494 2300 2495 … … 2311 2506 The solution is for the programmer to disambiguate: 2312 2507 \begin{cfa} 2313 waitfor( f ,`m2` ); $\C{// wait for call to f with argument m2}$2508 waitfor( f : `m2` ); $\C{// wait for call to f with argument m2}$ 2314 2509 \end{cfa} 2315 2510 Both locks are acquired by function @g@, so when function @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@. … … 2318 2513 monitor M { ... }; 2319 2514 void f( M & mutex m1, M & mutex m2 ); 2320 void g( M & mutex m1, M & mutex m2 ) { waitfor( f ,`m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$2515 void g( M & mutex m1, M & mutex m2 ) { waitfor( f : `m1, m2` ); $\C{// wait for call to f with arguments m1 and m2}$ 2321 2516 \end{cfa} 2322 2517 Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting function. 
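For example, the following @waitfor@ is invalid (a sketch reusing the @M@ and @f@ declarations above), because the accepting function has not acquired every monitor named in the statement:
\begin{cfa}
void h( M & mutex m1, M & m2 ) {		$\C{// only m1 acquired}$
	waitfor( f : m1, m2 );				$\C{// invalid: m2 is not acquired by h}$
}
\end{cfa}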
2323 Also, the order of the monitors in a @waitfor@ statement is unimportant.2324 2325 Figure~\ref{f:UnmatchedMutexSets} shows an example where, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.2326 For both examples, the set of monitors is disjoint so unblocking is impossible.2518 % Also, the order of the monitors in a @waitfor@ statement must match the order of the mutex parameters. 2519 2520 Figure~\ref{f:UnmatchedMutexSets} shows internal and external scheduling with multiple monitors that must match exactly with a signalling or accepting thread, \ie partial matching results in waiting. 2521 In both cases, the set of monitors is disjoint so unblocking is impossible. 2327 2522 2328 2523 \begin{figure} … … 2353 2548 } 2354 2549 void g( M1 & mutex m1, M2 & mutex m2 ) { 2355 waitfor( f ,m1, m2 );2550 waitfor( f : m1, m2 ); 2356 2551 } 2357 2552 g( `m11`, m2 ); // block on accept … … 2368 2563 \end{figure} 2369 2564 2370 2371 \subsection{\texorpdfstring{\protect\lstinline@mutex@ Threads}{mutex Threads}}2372 2373 Threads in \CFA can also be monitors to allow \emph{direct communication} among threads, \ie threads can have mutex functions that are called by other threads.2374 Hence, all monitor features are available when using threads.2375 Figure~\ref{f:DirectCommunication} shows a comparison of direct call communication in \CFA with direct channel communication in Go.2376 (Ada provides a similar mechanism to the \CFA direct communication.)2377 The program main in both programs communicates directly with the other thread versus indirect communication where two threads interact through a passive monitor.2378 Both direct and indirection thread communication are valuable tools in structuring concurrent programs.2379 2380 2565 \begin{figure} 2381 2566 \centering … … 2384 2569 2385 2570 struct Msg { int i, j; }; 2386 thread GoRtn { int i; float f; Msg m; };2571 monitor thread GoRtn { int i; float f; Msg m; }; 2387 2572 void mem1( GoRtn & mutex gortn, int i ) { gortn.i = i; } 2388 2573 void mem2( GoRtn & mutex gortn, float f ) { gortn.f = f; } … … 2394 2579 for () { 2395 2580 2396 `waitfor( mem1 ,gortn )` sout | i; // wait for calls2397 or `waitfor( mem2 ,gortn )` sout | f;2398 or `waitfor( mem3 ,gortn )` sout | m.i | m.j;2399 or `waitfor( ^?{} , gortn )` break;2581 `waitfor( mem1 : gortn )` sout | i; // wait for calls 2582 or `waitfor( mem2 : gortn )` sout | f; 2583 or `waitfor( mem3 : gortn )` sout | m.i | m.j; 2584 or `waitfor( ^?{} : gortn )` break; // low priority 2400 2585 2401 2586 } … … 2451 2636 \hspace{3pt} 2452 2637 \subfloat[Go]{\label{f:Gochannel}\usebox\myboxB} 2453 \caption{Direct communication} 2454 \label{f:DirectCommunication} 2638 \caption{Direct versus indirect communication} 2639 \label{f:DirectCommunicationComparison} 2640 2641 \medskip 2642 2643 \begin{cfa} 2644 monitor thread DatingService { 2645 condition Girls[CompCodes], Boys[CompCodes]; 2646 int girlPhoneNo, boyPhoneNo, ccode; 2647 }; 2648 int girl( DatingService & mutex ds, int phoneno, int code ) with( ds ) { 2649 girlPhoneNo = phoneno; ccode = code; 2650 `wait( Girls[ccode] );` $\C{// wait for boy}$ 2651 girlPhoneNo = phoneno; return boyPhoneNo; 2652 } 2653 int boy( DatingService & mutex ds, int phoneno, int code ) with( ds ) { 2654 boyPhoneNo = phoneno; ccode = code; 2655 `wait( Boys[ccode] );` $\C{// wait for girl}$ 2656 boyPhoneNo = phoneno; return girlPhoneNo; 2657 } 2658 void main( DatingService & ds ) with( 
ds ) { $\C{// thread starts, ds defaults to mutex}$ 2659 for () { 2660 waitfor( ^?{} ) break; $\C{// high priority}$ 2661 or waitfor( girl ) $\C{// girl called, compatible boy ? restart boy then girl}$ 2662 if ( ! is_empty( Boys[ccode] ) ) { `signal_block( Boys[ccode] ); signal_block( Girls[ccode] );` } 2663 or waitfor( boy ) { $\C{// boy called, compatible girl ? restart girl then boy}$ 2664 if ( ! is_empty( Girls[ccode] ) ) { `signal_block( Girls[ccode] ); signal_block( Boys[ccode] );` } 2665 } 2666 } 2667 \end{cfa} 2668 \caption{Direct communication dating service} 2669 \label{f:DirectCommunicationDatingService} 2455 2670 \end{figure} 2456 2671 … … 2467 2682 void main( Ping & pi ) { 2468 2683 for ( 10 ) { 2469 `waitfor( ping ,pi );`2684 `waitfor( ping : pi );` 2470 2685 `pong( po );` 2471 2686 } … … 2480 2695 for ( 10 ) { 2481 2696 `ping( pi );` 2482 `waitfor( pong ,po );`2697 `waitfor( pong : po );` 2483 2698 } 2484 2699 } … … 2495 2710 2496 2711 2497 \subsection{Execution Properties} 2498 2499 Table~\ref{t:ObjectPropertyComposition} shows how the \CFA high-level constructs cover 3 fundamental execution properties: thread, stateful function, and mutual exclusion. 2500 Case 1 is a basic object, with none of the new execution properties. 2501 Case 2 allows @mutex@ calls to Case 1 to protect shared data. 2502 Case 3 allows stateful functions to suspend/resume but restricts operations because the state is stackless. 2503 Case 4 allows @mutex@ calls to Case 3 to protect shared data. 2504 Cases 5 and 6 are the same as 3 and 4 without restriction because the state is stackful. 2505 Cases 7 and 8 are rejected because a thread cannot execute without a stackful state in a preemptive environment when context switching from the signal handler. 2506 Cases 9 and 10 have a stackful thread without and with @mutex@ calls. 2507 For situations where threads do not require direct communication, case 9 provides faster creation/destruction by eliminating @mutex@ setup. 2508 2509 \begin{table} 2510 \caption{Object property composition} 2511 \centering 2512 \label{t:ObjectPropertyComposition} 2513 \renewcommand{\arraystretch}{1.25} 2514 %\setlength{\tabcolsep}{5pt} 2515 \begin{tabular}{c|c||l|l} 2516 \multicolumn{2}{c||}{object properties} & \multicolumn{2}{c}{mutual exclusion} \\ 2517 \hline 2518 thread & stateful & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\ 2519 \hline 2520 \hline 2521 No & No & \textbf{1}\ \ \ aggregate type & \textbf{2}\ \ \ @monitor@ aggregate type \\ 2522 \hline 2523 No & Yes (stackless) & \textbf{3}\ \ \ @generator@ & \textbf{4}\ \ \ @monitor@ @generator@ \\ 2524 \hline 2525 No & Yes (stackful) & \textbf{5}\ \ \ @coroutine@ & \textbf{6}\ \ \ @monitor@ @coroutine@ \\ 2526 \hline 2527 Yes & No / Yes (stackless) & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\ 2528 \hline 2529 Yes & Yes (stackful) & \textbf{9}\ \ \ @thread@ & \textbf{10}\ \ @monitor@ @thread@ \\ 2530 \end{tabular} 2531 \end{table} 2712 \subsection{\texorpdfstring{\protect\lstinline@monitor@ Generators / Coroutines / Threads}{monitor Generators / Coroutines / Threads}} 2713 2714 \CFA generators, coroutines, and threads can also be monitors (Table~\ref{t:ExecutionPropertyComposition} cases 4, 6, 12) allowing safe \emph{direct communication} with threads, \ie the custom types can have mutex functions that are called by other threads. 2715 All monitor features are available within these mutex functions. 
2716 For example, if the formatter generator (or coroutine equivalent) in Figure~\ref{f:CFAFormatGen} is extended with the monitor property and this interface function is used to communicate with the formatter: 2717 \begin{cfa} 2718 void fmt( Fmt & mutex fmt, char ch ) { fmt.ch = ch; resume( fmt ) } 2719 \end{cfa} 2720 multiple threads can safely pass characters for formatting. 2721 2722 Figure~\ref{f:DirectCommunicationComparison} shows a comparison of direct call-communication in \CFA versus indirect channel-communication in Go. 2723 (Ada has a similar mechanism to \CFA direct communication.) 2724 The program thread in \CFA @main@ uses the call/return paradigm to directly communicate with the @GoRtn main@, whereas Go switches to the channel paradigm to indirectly communicate with the goroutine. 2725 Communication by multiple threads is safe for the @gortn@ thread via mutex calls in \CFA or channel assignment in Go. 2726 2727 Figure~\ref{f:DirectCommunicationDatingService} shows the dating-service problem in Figure~\ref{f:DatingServiceMonitor} extended from indirect monitor communication to direct thread communication. 2728 When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted members into the thread main so it does an much work as possible. 2729 Notice, the dating server is postponing requests for an unspecified time while continuing to accept new requests. 2730 For complex servers (web-servers), there can be hundreds of lines of code in the thread main and safe interaction with clients can be complex. 2532 2731 2533 2732 … … 2535 2734 2536 2735 For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc. 2537 Some of these low-level mechanism are used in the \CFA runtime, but we stronglyadvocate using high-level mechanisms whenever possible.2736 Some of these low-level mechanism are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible. 2538 2737 2539 2738 … … 2578 2777 \begin{cfa} 2579 2778 struct Adder { 2580 2779 int * row, cols; 2581 2780 }; 2582 2781 int operator()() { … … 2637 2836 \label{s:RuntimeStructureCluster} 2638 2837 2639 A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads). 2838 A \newterm{cluster} is a collection of user and kernel threads, where the kernel threads run the user threads from the cluster's ready queue, and the operating system runs the kernel threads on the processors from its ready queue. 2839 The term \newterm{virtual processor} is introduced as a synonym for kernel thread to disambiguate between user and kernel thread. 2840 From the language perspective, a virtual processor is an actual processor (core). 2841 2640 2842 The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults. 2641 2843 The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors. … … 2656 2858 Programs may use more virtual processors than hardware processors. 2657 2859 On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel. 
2658 (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)2860 (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux,affinityWindows}, which is used when caching issues occur or for heterogeneous hardware processors.) %, affinityFreebsd, affinityNetbsd, affinityMacosx 2659 2861 The \CFA runtime attempts to block unused processors and unblock processors as the system load increases; 2660 balancing the workload with processors is difficult because it requires future knowledge, \ie what will the applicat on workload do next.2862 balancing the workload with processors is difficult because it requires future knowledge, \ie what will the application workload do next. 2661 2863 Preemption occurs on virtual processors rather than user threads, via operating-system interrupts. 2662 2864 Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads. … … 2693 2895 Nondeterministic preemption provides fairness from long-running threads, and forces concurrent programmers to write more robust programs, rather than relying on code between cooperative scheduling to be atomic. 2694 2896 This atomic reliance can fail on multi-core machines, because execution across cores is nondeterministic. 2695 A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg Microsoftruntime does not support interrupts and on Linux systems, interrupts are complex (see below).2897 A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg Windows runtime does not support interrupts and on Linux systems, interrupts are complex (see below). 2696 2898 Preemption is normally handled by setting a countdown timer on each virtual processor. 2697 When the timer expires, an interrupt is delivered, and the interrupthandler resets the countdown timer, and if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.2899 When the timer expires, an interrupt is delivered, and its signal handler resets the countdown timer, and if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code. 2698 2900 Multiple signal handlers may be pending. 2699 2901 When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered. 2700 2902 The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler; 2701 2903 therefore, the same signal mask is required for all virtual processors in a cluster. 
2702 Because preemption frequency is usually long (1 millisecond) performance cost is negligible. 2703 2704 Linux switched a decade ago from specific to arbitrary process signal-delivery for applications with multiple kernel threads. 2705 \begin{cquote} 2706 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. 2707 If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which it will deliver the signal. 2708 SIGNAL(7) - Linux Programmer's Manual 2709 \end{cquote} 2904 Because preemption interval is usually long (1 millisecond) performance cost is negligible. 2905 2906 Linux switched a decade ago from specific to arbitrary virtual-processor signal-delivery for applications with multiple kernel threads. 2907 In the new semantics, a virtual-processor directed signal may be delivered to any virtual processor created by the application that does not have the signal blocked. 2710 2908 Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads). 2711 2909 To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events. … … 2725 2923 \label{s:Performance} 2726 2924 2727 To verify the implementation of the \CFA runtime, a series of microbenchmarks are performed comparing \CFA with pthreads, Java OpenJDK-9, Go 1.12.6and \uC 7.0.0.2925 To test the performance of the \CFA runtime, a series of microbenchmarks are used to compare \CFA with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0. 2728 2926 For comparison, the package must be multi-processor (M:N), which excludes libdill/libmil~\cite{libdill} (M:1)), and use a shared-memory programming model, \eg not message passing. 2729 The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and \CFA/\uC are compiled with gcc 6.5.2927 The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and pthreads/\CFA/\uC are compiled with gcc 9.2.1. 2730 2928 2731 2929 All benchmarks are run using the following harness. (The Java harness is augmented to circumvent JIT issues.) 2732 2930 \begin{cfa} 2733 unsigned int N = 10_000_000; 2734 #define BENCH( `run` ) Time before = getTimeNsec(); `run;` Duration result = (getTimeNsec() - before) / N; 2735 \end{cfa} 2736 The method used to get time is @clock_gettime( CLOCK_REALTIME )@. 2737 Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark; 2738 the total time is divided by @N@ to obtain the average time for a benchmark. 2739 Each benchmark experiment is run 31 times. 2931 #define BENCH( `run` ) uint64_t start = cputime_ns(); `run;` double result = (double)(cputime_ns() - start) / N; 2932 \end{cfa} 2933 where CPU time in nanoseconds is from the appropriate language clock. 2934 Each benchmark is performed @N@ times, where @N@ is selected so the benchmark runs in the range of 2--20 seconds for the specific programming language. 2935 The total time is divided by @N@ to obtain the average time for a benchmark. 2936 Each benchmark experiment is run 13 times and the average appears in the table. 
2740 2937 All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallBenchMarks}. 2741 % tar --exclude=.deps --exclude=Makefile --exclude=Makefile.in --exclude=c.c --exclude=cxx.cpp --exclude=fetch_add.c -cvhf benchmark.tar benchmark 2742 2743 \paragraph{Object Creation} 2744 2745 Object creation is measured by creating/deleting the specific kind of concurrent object. 2746 Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}. 2747 The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine to force stack creation, the creation cost is artificially low. 2748 2749 \begin{multicols}{2} 2750 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}} 2751 \begin{cfa} 2752 @thread@ MyThread {}; 2753 void @main@( MyThread & ) {} 2754 int main() { 2755 BENCH( for ( N ) { @MyThread m;@ } ) 2756 sout | result`ns; 2757 } 2758 \end{cfa} 2759 \captionof{figure}{\CFA object-creation benchmark} 2760 \label{f:creation} 2761 2762 \columnbreak 2763 2764 \vspace*{-16pt} 2765 \captionof{table}{Object creation comparison (nanoseconds)} 2766 \label{tab:creation} 2767 2768 \begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}} 2769 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 2770 \CFA Coroutine Lazy & 13.2 & 13.1 & 0.44 \\ 2771 \CFA Coroutine Eager & 531.3 & 536.0 & 26.54 \\ 2772 \CFA Thread & 2074.9 & 2066.5 & 170.76 \\ 2773 \uC Coroutine & 89.6 & 90.5 & 1.83 \\ 2774 \uC Thread & 528.2 & 528.5 & 4.94 \\ 2775 Goroutine & 4068.0 & 4113.1 & 414.55 \\ 2776 Java Thread & 103848.5 & 104295.4 & 2637.57 \\ 2777 Pthreads & 33112.6 & 33127.1 & 165.90 2778 \end{tabular} 2779 \end{multicols} 2780 2781 2782 \paragraph{Context-Switching} 2938 % tar --exclude-ignore=exclude -cvhf benchmark.tar benchmark 2939 2940 \paragraph{Context Switching} 2783 2941 2784 2942 In procedural programming, the cost of a function call is important as modularization (refactoring) increases. 2785 (In many cases, a compiler inlines function calls to eliminate this cost.)2786 Similarly, when modularization extends to coroutines/t asks, the time for a context switch becomes a relevant factor.2943 (In many cases, a compiler inlines function calls to increase the size and number of basic blocks for optimizing.) 2944 Similarly, when modularization extends to coroutines/threads, the time for a context switch becomes a relevant factor. 2787 2945 The coroutine test is from resumer to suspender and from suspender to resumer, which is two context switches. 2946 %For async-await systems, the test is scheduling and fulfilling @N@ empty promises, where all promises are allocated before versus interleaved with fulfillment to avoid garbage collection. 2947 For async-await systems, the test measures the cost of the @await@ expression entering the event engine by awaiting @N@ promises, where each created promise is resolved by an immediate event in the engine (using Node.js @setImmediate@). 2788 2948 The thread test is using yield to enter and return from the runtime kernel, which is two context switches. 2789 2949 The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling. 2790 Figure~\ref{f:ctx-switch} only shows the \CFA code for coroutines/threads (other systems are similar) with all results in Table~\ref{tab:ctx-switch}. 
2950 Figure~\ref{f:ctx-switch} shows the \CFA code for a coroutine/thread with results in Table~\ref{t:ctx-switch}. 2951 2952 % From: Gregor Richards <gregor.richards@uwaterloo.ca> 2953 % To: "Peter A. Buhr" <pabuhr@plg2.cs.uwaterloo.ca> 2954 % Date: Fri, 24 Jan 2020 13:49:18 -0500 2955 % 2956 % I can also verify that the previous version, which just tied a bunch of promises together, *does not* go back to the 2957 % event loop at all in the current version of Node. Presumably they're taking advantage of the fact that the ordering of 2958 % events is intentionally undefined to just jump right to the next 'then' in the chain, bypassing event queueing 2959 % entirely. That's perfectly correct behavior insofar as its difference from the specified behavior isn't observable, but 2960 % it isn't typical or representative of much anything useful, because most programs wouldn't have whole chains of eager 2961 % promises. Also, it's not representative of *anything* you can do with async/await, as there's no way to encode such an 2962 % eager chain that way. 2791 2963 2792 2964 \begin{multicols}{2} … … 2794 2966 \begin{cfa}[aboveskip=0pt,belowskip=0pt] 2795 2967 @coroutine@ C {} c; 2796 void main( C & ) { for ( ;;) { @suspend;@ } }2968 void main( C & ) { while () { @suspend;@ } } 2797 2969 int main() { // coroutine test 2798 2970 BENCH( for ( N ) { @resume( c );@ } ) 2799 sout | result `ns;2800 } 2801 int main() { // t asktest2971 sout | result; 2972 } 2973 int main() { // thread test 2802 2974 BENCH( for ( N ) { @yield();@ } ) 2803 sout | result `ns;2975 sout | result; 2804 2976 } 2805 2977 \end{cfa} … … 2811 2983 \vspace*{-16pt} 2812 2984 \captionof{table}{Context switch comparison (nanoseconds)} 2813 \label{t ab:ctx-switch}2985 \label{t:ctx-switch} 2814 2986 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} 2815 2987 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 2816 C function & 1.8 & 1.8 & 0.01 \\ 2817 \CFA generator & 2.4 & 2.2 & 0.25 \\ 2818 \CFA Coroutine & 36.2 & 36.2 & 0.25 \\ 2819 \CFA Thread & 93.2 & 93.5 & 2.09 \\ 2820 \uC Coroutine & 52.0 & 52.1 & 0.51 \\ 2821 \uC Thread & 96.2 & 96.3 & 0.58 \\ 2822 Goroutine & 141.0 & 141.3 & 3.39 \\ 2823 Java Thread & 374.0 & 375.8 & 10.38 \\ 2824 Pthreads Thread & 361.0 & 365.3 & 13.19 2988 C function & 1.8 & 1.8 & 0.0 \\ 2989 \CFA generator & 1.8 & 1.8 & 0.1 \\ 2990 \CFA coroutine & 32.5 & 32.9 & 0.8 \\ 2991 \CFA thread & 93.8 & 93.6 & 2.2 \\ 2992 \uC coroutine & 50.3 & 50.3 & 0.2 \\ 2993 \uC thread & 97.3 & 97.4 & 1.0 \\ 2994 Python generator & 40.9 & 41.3 & 1.5 \\ 2995 Node.js generator & 32.6 & 32.2 & 1.0 \\ 2996 Node.js await & 1852.2 & 1854.7 & 16.4 \\ 2997 Goroutine thread & 143.0 & 143.3 & 1.1 \\ 2998 Rust thread & 332.0 & 331.4 & 2.4 \\ 2999 Java thread & 405.0 & 415.0 & 17.6 \\ 3000 Pthreads thread & 334.3 & 335.2 & 3.9 2825 3001 \end{tabular} 2826 3002 \end{multicols} 2827 3003 2828 2829 \paragraph{Mutual-Exclusion} 2830 2831 Uncontented mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section. 2832 For monitors, entering and leaving a monitor function is measured. 2833 To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured. 2834 Figure~\ref{f:mutex} shows the code for \CFA with all results in Table~\ref{tab:mutex}. 
3004 \paragraph{Internal Scheduling} 3005 3006 Internal scheduling is measured using a cycle of two threads signalling and waiting. 3007 Figure~\ref{f:schedint} shows the code for \CFA, with results in Table~\ref{t:schedint}. 2835 3008 Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects. 3009 Java scheduling is significantly greater because the benchmark explicitly creates multiple thread in order to prevent the JIT from making the program sequential, \ie removing all locking. 2836 3010 2837 3011 \begin{multicols}{2} 2838 3012 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}} 2839 3013 \begin{cfa} 3014 volatile int go = 0; 3015 @condition c;@ 2840 3016 @monitor@ M {} m1/*, m2, m3, m4*/; 2841 void __attribute__((noinline)) 2842 do_call( M & @mutex m/*, m2, m3, m4*/@ ) {} 3017 void call( M & @mutex p1/*, p2, p3, p4*/@ ) { 3018 @signal( c );@ 3019 } 3020 void wait( M & @mutex p1/*, p2, p3, p4*/@ ) { 3021 go = 1; // continue other thread 3022 for ( N ) { @wait( c );@ } ); 3023 } 3024 thread T {}; 3025 void main( T & ) { 3026 while ( go == 0 ) { yield(); } // waiter must start first 3027 BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } ) 3028 sout | result; 3029 } 2843 3030 int main() { 2844 BENCH( 2845 for( N ) do_call( m1/*, m2, m3, m4*/ ); 2846 ) 2847 sout | result`ns; 2848 } 2849 \end{cfa} 2850 \captionof{figure}{\CFA acquire/release mutex benchmark} 2851 \label{f:mutex} 3031 T t; 3032 wait( m1/*, m2, m3, m4*/ ); 3033 } 3034 \end{cfa} 3035 \captionof{figure}{\CFA Internal-scheduling benchmark} 3036 \label{f:schedint} 2852 3037 2853 3038 \columnbreak 2854 3039 2855 3040 \vspace*{-16pt} 2856 \captionof{table}{Mutex comparison (nanoseconds)} 2857 \label{tab:mutex} 2858 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} 2859 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 2860 test and test-and-test lock & 19.1 & 18.9 & 0.40 \\ 2861 \CFA @mutex@ function, 1 arg. & 45.9 & 46.6 & 1.45 \\ 2862 \CFA @mutex@ function, 2 arg. & 105.0 & 104.7 & 3.08 \\ 2863 \CFA @mutex@ function, 4 arg. & 165.0 & 167.6 & 5.65 \\ 2864 \uC @monitor@ member rtn. & 54.0 & 53.7 & 0.82 \\ 2865 Java synchronized method & 31.0 & 31.1 & 0.50 \\ 2866 Pthreads Mutex Lock & 33.6 & 32.6 & 1.14 3041 \captionof{table}{Internal-scheduling comparison (nanoseconds)} 3042 \label{t:schedint} 3043 \bigskip 3044 3045 \begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}} 3046 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 3047 \CFA @signal@, 1 monitor & 364.4 & 364.2 & 4.4 \\ 3048 \CFA @signal@, 2 monitor & 484.4 & 483.9 & 8.8 \\ 3049 \CFA @signal@, 4 monitor & 709.1 & 707.7 & 15.0 \\ 3050 \uC @signal@ monitor & 328.3 & 327.4 & 2.4 \\ 3051 Rust cond. variable & 7514.0 & 7437.4 & 397.2 \\ 3052 Java @notify@ monitor & 9623.0 & 9654.6 & 236.2 \\ 3053 Pthreads cond. variable & 5553.7 & 5576.1 & 345.6 2867 3054 \end{tabular} 2868 3055 \end{multicols} … … 2872 3059 2873 3060 External scheduling is measured using a cycle of two threads calling and accepting the call using the @waitfor@ statement. 2874 Figure~\ref{f: ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.3061 Figure~\ref{f:schedext} shows the code for \CFA with results in Table~\ref{t:schedext}. 2875 3062 Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects. 
2876 3063 … … 2879 3066 \vspace*{-16pt} 2880 3067 \begin{cfa} 2881 volatile int go = 0; 2882 @monitor@ M {} m; 3068 @monitor@ M {} m1/*, m2, m3, m4*/; 3069 void call( M & @mutex p1/*, p2, p3, p4*/@ ) {} 3070 void wait( M & @mutex p1/*, p2, p3, p4*/@ ) { 3071 for ( N ) { @waitfor( call : p1/*, p2, p3, p4*/ );@ } 3072 } 2883 3073 thread T {}; 2884 void __attribute__((noinline))2885 do_call( M & @mutex@ ) {}2886 3074 void main( T & ) { 2887 while ( go == 0 ) { yield(); } 2888 while ( go == 1 ) { do_call( m ); } 2889 } 2890 int __attribute__((noinline)) 2891 do_wait( M & @mutex@ m ) { 2892 go = 1; // continue other thread 2893 BENCH( for ( N ) { @waitfor( do_call, m );@ } ) 2894 go = 0; // stop other thread 2895 sout | result`ns; 3075 BENCH( for ( N ) { call( m1/*, m2, m3, m4*/ ); } ) 3076 sout | result; 2896 3077 } 2897 3078 int main() { 2898 3079 T t; 2899 do_wait( m);3080 wait( m1/*, m2, m3, m4*/ ); 2900 3081 } 2901 3082 \end{cfa} 2902 3083 \captionof{figure}{\CFA external-scheduling benchmark} 2903 \label{f: ext-sched}3084 \label{f:schedext} 2904 3085 2905 3086 \columnbreak … … 2907 3088 \vspace*{-16pt} 2908 3089 \captionof{table}{External-scheduling comparison (nanoseconds)} 2909 \label{t ab:ext-sched}3090 \label{t:schedext} 2910 3091 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} 2911 3092 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 2912 \CFA @waitfor@, 1 @monitor@ & 376.4 & 376.8 & 7.63 \\ 2913 \CFA @waitfor@, 2 @monitor@ & 491.4 & 492.0 & 13.31 \\ 2914 \CFA @waitfor@, 4 @monitor@ & 681.0 & 681.7 & 19.10 \\ 2915 \uC @_Accept@ & 331.1 & 331.4 & 2.66 3093 \CFA @waitfor@, 1 monitor & 367.1 & 365.3 & 5.0 \\ 3094 \CFA @waitfor@, 2 monitor & 463.0 & 464.6 & 7.1 \\ 3095 \CFA @waitfor@, 4 monitor & 689.6 & 696.2 & 21.5 \\ 3096 \uC \lstinline[language=uC++]|_Accept| monitor & 328.2 & 329.1 & 3.4 \\ 3097 Go \lstinline[language=Golang]|select| channel & 365.0 & 365.5 & 1.2 2916 3098 \end{tabular} 2917 3099 \end{multicols} 2918 3100 2919 2920 \paragraph{Internal Scheduling} 2921 2922 Internal scheduling is measured using a cycle of two threads signalling and waiting.2923 F igure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.2924 Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.2925 Java scheduling is significantly greater because the benchmark explicitly creates multiple thread in order to prevent the JIT from making the program sequential, \ie removing all locking.3101 \paragraph{Mutual-Exclusion} 3102 3103 Uncontented mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section. 3104 For monitors, entering and leaving a monitor function is measured, otherwise the language-appropriate mutex-lock is measured. 3105 For comparison, a spinning (versus blocking) test-and-test-set lock is presented. 3106 Figure~\ref{f:mutex} shows the code for \CFA with results in Table~\ref{t:mutex}. 3107 Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects. 
2926 3108 2927 3109 \begin{multicols}{2} 2928 3110 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}} 2929 3111 \begin{cfa} 2930 volatile int go = 0; 2931 @monitor@ M { @condition c;@ } m; 2932 void __attribute__((noinline)) 2933 do_call( M & @mutex@ a1 ) { @signal( c );@ } 2934 thread T {}; 2935 void main( T & this ) { 2936 while ( go == 0 ) { yield(); } 2937 while ( go == 1 ) { do_call( m ); } 2938 } 2939 int __attribute__((noinline)) 2940 do_wait( M & mutex m ) with(m) { 2941 go = 1; // continue other thread 2942 BENCH( for ( N ) { @wait( c );@ } ); 2943 go = 0; // stop other thread 2944 sout | result`ns; 2945 } 3112 @monitor@ M {} m1/*, m2, m3, m4*/; 3113 call( M & @mutex p1/*, p2, p3, p4*/@ ) {} 2946 3114 int main() { 2947 T t;2948 do_wait( m );2949 } 2950 \end{cfa} 2951 \captionof{figure}{\CFA Internal-schedulingbenchmark}2952 \label{f: int-sched}3115 BENCH( for( N ) call( m1/*, m2, m3, m4*/ ); ) 3116 sout | result; 3117 } 3118 \end{cfa} 3119 \captionof{figure}{\CFA acquire/release mutex benchmark} 3120 \label{f:mutex} 2953 3121 2954 3122 \columnbreak 2955 3123 2956 3124 \vspace*{-16pt} 2957 \captionof{table}{Internal-scheduling comparison (nanoseconds)} 2958 \label{tab:int-sched} 2959 \bigskip 2960 2961 \begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}} 2962 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 2963 \CFA @signal@, 1 @monitor@ & 372.6 & 374.3 & 14.17 \\ 2964 \CFA @signal@, 2 @monitor@ & 492.7 & 494.1 & 12.99 \\ 2965 \CFA @signal@, 4 @monitor@ & 749.4 & 750.4 & 24.74 \\ 2966 \uC @signal@ & 320.5 & 321.0 & 3.36 \\ 2967 Java @notify@ & 10160.5 & 10169.4 & 267.71 \\ 2968 Pthreads Cond. Variable & 4949.6 & 5065.2 & 363 3125 \captionof{table}{Mutex comparison (nanoseconds)} 3126 \label{t:mutex} 3127 \begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}} 3128 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 3129 test-and-test-set lock & 19.1 & 18.9 & 0.4 \\ 3130 \CFA @mutex@ function, 1 arg. & 48.3 & 47.8 & 0.9 \\ 3131 \CFA @mutex@ function, 2 arg. & 86.7 & 87.6 & 1.9 \\ 3132 \CFA @mutex@ function, 4 arg. & 173.4 & 169.4 & 5.9 \\ 3133 \uC @monitor@ member rtn. & 54.8 & 54.8 & 0.1 \\ 3134 Goroutine mutex lock & 34.0 & 34.0 & 0.0 \\ 3135 Rust mutex lock & 33.0 & 33.2 & 0.8 \\ 3136 Java synchronized method & 31.0 & 31.0 & 0.0 \\ 3137 Pthreads mutex Lock & 31.0 & 31.1 & 0.4 2969 3138 \end{tabular} 2970 3139 \end{multicols} 2971 3140 3141 \paragraph{Creation} 3142 3143 Creation is measured by creating/deleting a specific kind of control-flow object. 3144 Figure~\ref{f:creation} shows the code for \CFA with results in Table~\ref{t:creation}. 3145 Note, the call stacks of \CFA coroutines are lazily created on the first resume, therefore the cost of creation with and without a stack are presented. 
3146 3147 \begin{multicols}{2} 3148 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}} 3149 \begin{cfa} 3150 @coroutine@ MyCoroutine {}; 3151 void ?{}( MyCoroutine & this ) { 3152 #ifdef EAGER 3153 resume( this ); 3154 #endif 3155 } 3156 void main( MyCoroutine & ) {} 3157 int main() { 3158 BENCH( for ( N ) { @MyCoroutine c;@ } ) 3159 sout | result; 3160 } 3161 \end{cfa} 3162 \captionof{figure}{\CFA creation benchmark} 3163 \label{f:creation} 3164 3165 \columnbreak 3166 3167 \vspace*{-16pt} 3168 \captionof{table}{Creation comparison (nanoseconds)} 3169 \label{t:creation} 3170 3171 \begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}} 3172 \multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\ 3173 \CFA generator & 0.6 & 0.6 & 0.0 \\ 3174 \CFA coroutine lazy & 13.4 & 13.1 & 0.5 \\ 3175 \CFA coroutine eager & 144.7 & 143.9 & 1.5 \\ 3176 \CFA thread & 466.4 & 468.0 & 11.3 \\ 3177 \uC coroutine & 155.6 & 155.7 & 1.7 \\ 3178 \uC thread & 523.4 & 523.9 & 7.7 \\ 3179 Python generator & 123.2 & 124.3 & 4.1 \\ 3180 Node.js generator & 32.3 & 32.2 & 0.3 \\ 3181 Goroutine thread & 751.0 & 750.5 & 3.1 \\ 3182 Rust thread & 53801.0 & 53896.8 & 274.9 \\ 3183 Java thread & 120274.0 & 120722.9 & 2356.7 \\ 3184 Pthreads thread & 31465.5 & 31419.5 & 140.4 3185 \end{tabular} 3186 \end{multicols} 3187 3188 3189 \subsection{Discussion} 3190 3191 Languages using 1:1 threading based on pthreads can at best meet or exceed (due to language overhead) the pthread results. 3192 Note, pthreads has a fast zero-contention mutex lock checked in user space. 3193 Languages with M:N threading have better performance than 1:1 because there is no operating-system interactions. 3194 Languages with stackful coroutines have higher cost than stackless coroutines because of stack allocation and context switching; 3195 however, stackful \uC and \CFA coroutines have approximately the same performance as stackless Python and Node.js generators. 3196 The \CFA stackless generator is approximately 25 times faster for suspend/resume and 200 times faster for creation than stackless Python and Node.js generators. 3197 2972 3198 2973 3199 \section{Conclusion} … … 2975 3201 Advanced control-flow will always be difficult, especially when there is temporal ordering and nondeterminism. 2976 3202 However, many systems exacerbate the difficulty through their presentation mechanisms. 2977 This paper shows it is possible to present a hierarchy of control-flow features, generator, coroutine, thread, and monitor, providing an integrated set of high-level, efficient, and maintainable control-flow features. 2978 Eliminated from \CFA are spurious wakeup and barging, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques. 2979 \CFA high-level race-free monitors and tasks provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@. 3203 This paper shows it is possible to understand high-level control-flow using three properties: statefulness, thread, mutual-exclusion/synchronization. 3204 Combining these properties creates a number of high-level, efficient, and maintainable control-flow types: generator, coroutine, thread, each of which can be a monitor. 
3205 Eliminated from \CFA are barging and spurious wakeup, which are nonintuitive and lead to errors, and having to work with a bewildering set of low-level locks and acquisition techniques. 3206 \CFA high-level race-free monitors and threads provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@. 2980 3207 Extending these mechanisms to handle high-level deadlock-free bulk acquire across both mutual exclusion and synchronization is a unique contribution. 2981 3208 The \CFA runtime provides concurrency based on a preemptive M:N user-level threading-system, executing in clusters, which encapsulate scheduling of work on multiple kernel threads providing parallelism. 2982 3209 The M:N model is judged to be efficient and provide greater flexibility than a 1:1 threading model. 2983 3210 These concepts and the \CFA runtime-system are written in the \CFA language, extensively leveraging the \CFA type-system, which demonstrates the expressiveness of the \CFA language. 2984 Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all low-level operations, which translates directly into good performance in well-written concurrent applications.2985 C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need .3211 Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all basic operations, which translates directly into good performance in well-written applications with advanced control-flow. 3212 C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need using only calling communication. 2986 3213 2987 3214 … … 3003 3230 \label{futur:nbio} 3004 3231 3005 Many modern workloads are not bound by computation but IO operations, a common casebeing web servers and XaaS~\cite{XaaS} (anything as a service).3232 Many modern workloads are not bound by computation but IO operations, common cases being web servers and XaaS~\cite{XaaS} (anything as a service). 3006 3233 These types of workloads require significant engineering to amortizing costs of blocking IO-operations. 3007 3234 At its core, non-blocking I/O is an operating-system level feature queuing IO operations, \eg network operations, and registering for notifications instead of waiting for requests to complete. … … 3031 3258 \section{Acknowledgements} 3032 3259 3033 The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach and Michael Brooks on the features described in this paper.3034 Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}). %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.3260 The authors recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach, and Michael Brooks; David Dice for commenting and helping with the Java benchmarks; and Gregor Richards for helping with the Node.js benchmarks. 3261 This research is funded by a grant from Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. 
%, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada. 3035 3262 3036 3263 {% 3037 \fontsize{9bp}{1 2bp}\selectfont%3264 \fontsize{9bp}{11.5bp}\selectfont% 3038 3265 \bibliography{pl,local} 3039 3266 }%