# Changeset 287da46 for doc/papers/concurrency

Timestamp:
Jun 28, 2018, 1:33:48 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, no_list, persistent-indexer
Children:
944ce47
Parents:
64188cc
Message:

\CFA is a modern, polymorphic, \emph{non-object-oriented} extension of the C programming language.
This paper discusses the design of the concurrency and parallelism features in \CFA, and the concurrent runtime-system.
These features are created from scratch as ISO C lacks concurrency, relying largely on the pthreads library for concurrency.
Coroutines and lightweight (user) threads are introduced into the language.
In addition, monitors are added as a high-level mechanism for mutual exclusion and synchronization.
A unique contribution is allowing multiple monitors to be safely acquired simultaneously.
All features respect the expectations of C programmers, while being fully integrated with the \CFA polymorphic type-system and other language features.
Finally, experimental results show comparable performance of the new features with similar mechanisms in other concurrent programming-languages.
}%

While the simplest concurrency system is a thread and a lock, this low-level approach is hard to master.
An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
Indeed, for highly-productive concurrent-programming, high-level approaches are much more popular~\cite{Hochstein05}.
Examples of high-level approaches are jobs (thread pool)~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.

The following terminology is used.
A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure access to shared data and safe communication.
% Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced.
\newterm{Locking}, and by extension \newterm{locks}, are defined as a mechanism to prevent progress of threads to provide safety.
\newterm{Parallelism} is running multiple threads simultaneously.
Parallelism implies \emph{actual} simultaneous execution, where concurrency only requires \emph{apparent} simultaneous execution.
As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime.
Hence, there are two problems to be solved: concurrency and parallelism.
While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
The proposed concurrency API is implemented in a dialect of C, called \CFA.
The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantics, and type checking, and the corresponding high-performance runtime-library to implement the concurrent features.

Like C, the building blocks of \CFA are structures and routines.
Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
While \CFA is not object oriented, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
While some \CFA features are common in object-oriented programming, they are independent capabilities allowing \CFA to adopt them while maintaining a procedural paradigm.

Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
Therefore, overloading eliminates long prefixes and other naming conventions to prevent name clashes.
As seen in Section~\ref{s:Concurrency}, routine @main@ is heavily overloaded.
For example, variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name, as sketched below.
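For illustration, such field overloading can be sketched as follows (structures @S@ and @T@ are hypothetical, chosen only to show the resolution cases):
\begin{cfa}
struct S { int i, j; double m; } s;
struct T { int i, k, m; } t;
void test() {
	with ( s, t ) {                             $\C{// open scopes of s and t}$
		j + k;                                  $\C{// unambiguous, s.j + t.k}$
		m = 5.0;                                $\C{// unambiguous, s.m = 5.0 (double)}$
		m = 1;                                  $\C{// unambiguous, t.m = 1 (int)}$
		s.i = t.i;                              $\C{// qualification resolves the ambiguous i}$
	}
}
\end{cfa}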
Like \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation:
\begin{cfa}
{
	VLA  x,            y = { 20, 0x01 },     z = y; $\C{// z points to y}$
	// $\LstCommentStyle{\color{red}\ \ \ x\{\};\ \ \ \ \ \ \ \ \ y\{ 20, 0x01 \};\ \ \ \ \ \ \ \ \ \ z\{ z, y \};\ \ \ \ \ \ \ implicit calls}$
	^x{};                                       $\C{// deallocate x}$
	x{};                                        $\C{// reallocate x}$
	y{ x };                                     $\C{// reallocate y, points to x}$
	x{};                                        $\C{// reallocate x, not pointing to y}$
}	// $\LstCommentStyle{\color{red}\^{}z\{\};\ \ \^{}y\{\};\ \ \^{}x\{\};\ \ \ implicit calls}$
\end{cfa}
Assertions can be @otype@ or @dtype@.
@otype@ refers to a \emph{complete} object, \ie an object has a size, alignment, default constructor, copy constructor, destructor and an assignment operator.
@dtype@ refers to an \emph{incomplete} object, \ie an object only has a size and alignment.
Using the return type for overload discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@:
\begin{cfa}
forall( dtype T | sized(T) ) T * alloc( void ) { return (T *)malloc( sizeof(T) ); }
\end{cfa}
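For example, in a hypothetical use such as the following, the return context selects the @alloc@ instantiation, so no size argument or result cast is needed:
\begin{cfa}
int * ip = alloc();                             $\C{// T bound to int from the return context}$
double * dp = alloc();                          $\C{// T bound to double}$
free( ip );  free( dp );
\end{cfa}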
\section{Concurrency}
\label{s:Concurrency}

At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks.
Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie the execution path from input to output is fixed and predictable.
A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python} but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
Only stackful coroutines are a stepping stone to concurrency.
The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a \newterm{scheduling oracle}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next.
The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.

\subsection{Coroutines: A Stepping Stone}\label{coroutine}

While the focus of this discussion is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system (but not concurrent among themselves).
Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
Hence, unlike a normal routine, a coroutine may not terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension.
Therefore, the core \CFA coroutine-API has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.

For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers
\begin{displaymath}
\mathsf{fib}(n) = \left \{
\begin{array}{ll}
0					& n = 0		\\
1					& n = 1		\\
\mathsf{fib}(n-1) + \mathsf{fib}(n-2)	& n \ge 2	\\
\end{array}
\right.
\end{displaymath}
where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
Figure~\ref{f:GlobalVariables} illustrates the following problems: unique unencapsulated global variables necessary to retain state between calls, only one Fibonacci generator, and execution state must be explicitly retained via explicit state variables.
Figure~\ref{f:ExternalState} addresses these issues: unencapsulated program global variables become encapsulated structure variables, unique global variables are replaced by multiple Fibonacci objects, and explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$.

\begin{figure}
\end{lrbox}

\subfloat[3 States: global variables]{\label{f:GlobalVariables}\usebox\myboxA}
\qquad
\subfloat[1 State: external variables]{\label{f:ExternalState}\usebox\myboxB}
\caption{C Fibonacci Implementations}
\label{f:C-fibonacci}

\subfloat[1 State, internal variables]{\label{f:Coroutine1State}\usebox\myboxB}
\caption{\CFA Coroutine Fibonacci Implementations}
\label{f:cfa-fibonacci}
\end{figure}

The interface routine @next@ takes a Fibonacci instance and context switches to it using @resume@;
on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
when the coroutine main returns, its stack is deallocated.
Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
\end{quote}
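A coroutine Fibonacci generator matching this description can be sketched as follows (the loop structure and printing are illustrative assumptions):
\begin{cfa}
#include <stdio.h>
coroutine Fib { int fn; };                      $\C{// communication field}$
void main( Fib & fib ) with( fib ) {            $\C{// coroutine main, runs on own stack}$
	int f1 = 0, f2 = 1;
	for ( ;; ) {
		fn = f1;  suspend();                    $\C{// return value to last resume}$
		f1 = f2;  f2 = fn + f2;                 $\C{// advance the sequence}$
	}
}
int next( Fib & fib ) {
	resume( fib );                              $\C{// restart after last suspend}$
	return fib.fn;
}
int main() {
	Fib f1, f2;                                 $\C{// multiple independent generators}$
	for ( int i = 0; i < 10; i += 1 )
		printf( "%d %d\n", next( f1 ), next( f2 ) );
}
\end{cfa}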
The example takes advantage of resuming a coroutine in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
The destruction provides a newline if formatted text ends with a full line.
Figure~\ref{f:CFmt} shows the C equivalent formatter, where the loops of the coroutine are flattened (linearized) and rechecked on each call because execution location is not retained between calls.
(Linearized code is the bane of device drivers.)

\begin{figure}

The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming routine for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal routines.
However, @resume@ and @suspend@ context switch among existing stack-frames, rather than create new ones so there is no stack growth.
\newterm{Symmetric (full) coroutine}s have a coroutine call to a resuming routine for another coroutine, and its coroutine main has a call to another resuming routine, which eventually forms a resuming-call cycle.
(The trivial cycle is a coroutine resuming itself.)
This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.

Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
The @start@ routine communicates both the number of elements to be produced and the consumer into the producer's coroutine-structure.
Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
@prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.

For the first resume, @cons@'s stack is initialized, creating local variables retained between subsequent activations of the coroutine.
The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
The call from the consumer to @payment@ introduces the cycle between producer and consumer.
When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
The context switch restarts the producer at the point where it was last context switched, so it continues in @delivery@ after the resume.

Object-oriented inheritance provides extra fields and code in a restricted context, but it requires programmers to explicitly perform the inheritance:
\begin{cfa}[morekeywords={class,inherits}]
class mycoroutine inherits baseCoroutine { ... }
\end{cfa}
and the programming language (and possibly its tool set, \eg debugger) may need to understand @baseCoroutine@ because of the stack.
\end{tabular}
\end{cquote}
(The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitor}.)
Like a coroutine, the statically-typed @main@ routine is the starting point (first stack frame) of a user thread.
The difference is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates an instance of @main@;

The program uses heap-based threads because each thread needs different constructor values.
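Based on the surrounding description (per-thread constructor values, subtotals summed by the program main), heap-based thread usage can be sketched as follows; the @Adder@ thread, its fields, and the use of \CFA @new@/@delete@ are assumptions for illustration:
\begin{cfa}
#include <stdio.h>
thread Adder { int * row; int cols; int * subtotal; };
void ?{}( Adder & adder, int row[], int cols, int * subtotal ) {
	adder.row = row;  adder.cols = cols;  adder.subtotal = subtotal;
}
void main( Adder & adder ) with( adder ) {      $\C{// thread main, started after construction}$
	*subtotal = 0;
	for ( int c = 0; c < cols; c += 1 ) *subtotal += row[c];
}
int main() {
	enum { rows = 10, cols = 1000 };
	static int matrix[rows][cols], subtotals[rows];
	int total = 0;
	for ( int r = 0; r < rows; r += 1 )
		for ( int c = 0; c < cols; c += 1 ) matrix[r][c] = 1;
	Adder * adders[rows];
	for ( int r = 0; r < rows; r += 1 )
		adders[r] = new( matrix[r], cols, &subtotals[r] );  $\C{// allocation starts the thread}$
	for ( int r = 0; r < rows; r += 1 ) {
		delete( adders[r] );                    $\C{// deallocation implicitly joins}$
		total += subtotals[r];                  $\C{// safe to read after join}$
	}
	printf( "%d\n", total );
}
\end{cfa}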
(Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.)
The allocation/deallocation pattern appears unusual because allocated objects are immediately deallocated without any intervening code.
However, for threads, the deletion provides implicit synchronization, which is the intervening code.
While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), and so any inhibited concurrency is very small as totalling the subtotals is trivial.

Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it, \eg Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala).
In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts, \eg channels~\cite{CSP,Go}.
However, in call/return-based languages, these approaches force a clear distinction, \ie introduce a new programming paradigm between regular and concurrent computation, \eg routine call versus message passing.
Hence, a programmer must learn and manipulate two sets of design patterns.
While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
In contrast, approaches based on stateful models more closely resemble the standard call/return programming-model, resulting in a single programming paradigm.

At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.

\section{Monitor}
\label{s:Monitor}

A \textbf{monitor} is a set of routines that ensure mutual exclusion when accessing shared state.
\end{cfa}
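As a minimal sketch of this definition, a counter protected by a monitor might be declared as follows (the @mutex@ parameter-qualifier is discussed next; the example is illustrative, not the paper's elided code):
\begin{cfa}
monitor Aint { int cnt; };                      $\C{// atomic integer counter}$
void ?{}( Aint & this, int init ) { this.cnt = init; }  $\C{// constructor needs no mutual exclusion}$
int incr( Aint & mutex this ) { return ++this.cnt; }    $\C{// mutex parameter acquires the monitor lock}$
int get( Aint & mutex this ) { return this.cnt; }
\end{cfa}
Any number of threads may call @incr@ on a shared @Aint@ without interference, because each call implicitly acquires and releases the monitor lock.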
Mandatory monitor qualifiers have the benefit of being self-documenting, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
Instead, one of the qualifier semantics can be the default, and the other required.
For example, assume the safe @mutex@ qualifier for all monitor parameters because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
On the other hand, assuming \lstinline[morekeywords=nomutex]@nomutex@ is the \emph{normal} parameter behaviour, stating explicitly \emph{this parameter is not special}.
Providing a default qualifier implies knowing whether a parameter is a monitor.
Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred.

In the calls to @bar@ and @baz@, the monitors are acquired in opposite order.
However, such use leads to lock acquiring order problems resulting in deadlock~\cite{Lister77}, where detecting it requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}.
In \CFA, safety is guaranteed by using bulk acquire of all monitors to shared objects, whereas other monitor systems provide no aid.
While \CFA provides only a partial solution, the \CFA partial solution handles many useful cases.

\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.

Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, see the buffer is full/empty, and block on an appropriate condition lock, @full@/@empty@.
The @wait@ routine atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the routine's parameter list.
The appropriate condition lock is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.
\end{enumerate}
The first approach is too restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}).
\CFA supports the next two semantics as both are useful.
Finally, while it is common to store a @condition@ as a field of the monitor, in \CFA, a @condition@ variable can be created/stored independently.
Furthermore, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors.
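For illustration, the generic bounded-buffer with internal scheduling discussed above can be sketched as follows (the buffer size and bookkeeping fields are illustrative, not necessarily those of Figure~\ref{f:BBInt}):
\begin{cfa}
forall( otype T ) {                             $\C{// distribute forall over declarations}$
	monitor Buffer {
		condition full, empty;
		int front, back, count;
		T elements[10];
	};
	void ?{}( Buffer(T) & buffer ) with( buffer ) { front = back = count = 0; }
	void insert( Buffer(T) & mutex buffer, T elem ) with( buffer ) {
		if ( count == 10 ) wait( empty );       $\C{// buffer full ? block producer}$
		elements[back] = elem;  back = ( back + 1 ) % 10;  count += 1;
		signal( full );                         $\C{// unblock a waiting consumer}$
	}
	T remove( Buffer(T) & mutex buffer ) with( buffer ) {
		if ( count == 0 ) wait( full );         $\C{// buffer empty ? block consumer}$
		T elem = elements[front];  front = ( front + 1 ) % 10;  count -= 1;
		signal( empty );                        $\C{// unblock a waiting producer}$
		return elem;
	}
}
\end{cfa}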
\begin{figure}
\end{figure}

Figure~\ref{f:BBExt} shows a \CFA generic bounded-buffer with external scheduling, where producers/consumers detecting a full/empty buffer block and prevent more producers/consumers from entering the monitor until the buffer has a free/empty slot.
External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
Threads making calls to routines that are currently excluded, block outside (external) of the monitor on a calling queue, versus blocking on condition queues inside (internal) of the monitor.
% External scheduling is more constrained and explicit, which helps programmers reduce the non-deterministic nature of concurrency.
External scheduling allows users to wait for events from other threads without concern of unrelated events occurring.
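The external-scheduling version of the buffer can be sketched similarly (same assumed fields as above; the conditions are no longer needed):
\begin{cfa}
forall( otype T ) {
	monitor Buffer {
		int front, back, count;
		T elements[10];
	};
	void ?{}( Buffer(T) & buffer ) with( buffer ) { front = back = count = 0; }
	T remove( Buffer(T) & mutex buffer );       $\C{// forward declaration for waitfor}$
	void insert( Buffer(T) & mutex buffer, T elem ) with( buffer ) {
		if ( count == 10 ) waitfor( remove, buffer );   $\C{// block until a consumer calls remove}$
		elements[back] = elem;  back = ( back + 1 ) % 10;  count += 1;
	}
	T remove( Buffer(T) & mutex buffer ) with( buffer ) {
		if ( count == 0 ) waitfor( insert, buffer );    $\C{// block until a producer calls insert}$
		T elem = elements[front];  front = ( front + 1 ) % 10;  count -= 1;
		return elem;
	}
}
\end{cfa}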
For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread;
the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor.
Blocking signalling is the reverse, where the waiter is providing the cooperation for the signalling thread;
the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place it on the urgent queue and unblock the waiter.
The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state.

Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling.
The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
A thread blocks until an appropriate partner arrives.
The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent condition and unblocks the matcher.
The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable;

Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
@waitfor@ statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
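A minimal sketch of these @waitfor@ defaults follows (routine @rtn@ and its overloads are hypothetical; the two overloads show the released-monitor matching):
\begin{cfa}
monitor M {};
void rtn( M & mutex m );                        $\C{// one-monitor overload}$
void rtn( M & mutex m1, M & mutex m2 );         $\C{// two-monitor overload}$
void accept( M & mutex m1, M & mutex m2 ) {
	waitfor( rtn );                             $\C{// implicit: waitfor( rtn, m1, m2 ), two-monitor rtn}$
	waitfor( rtn, m1 );                         $\C{// override: release only m1, one-monitor rtn}$
}
\end{cfa}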
Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of barging.

While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
Signalled threads are moved to the urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
Partial signalling transfers ownership of monitors to the front waiter.
When the signaller thread exits or waits in the monitor the front waiter is unblocked if all its monitors are released.

Figure~\ref{fig:BulkMonitor} shows the \CFA monitor implementation.
The mutex routine called is associated with each thread on the entry queue, while a list of acceptable routines is kept separately.
The accepted list is a variable-sized array of accepted routine pointers, so the single instruction bitmask comparison is replaced by dereferencing a pointer followed by a (usually short) linear search.

\label{s:Multi-MonitorScheduling}

External scheduling, like internal scheduling, becomes significantly more complex for multi-monitor semantics.
Even in the simplest case, new semantics needs to be established.
\begin{cfa}
monitor M {};
}
\end{cfa}
Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine.
Note, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.
\lstMakeShortInline@%
\end{cquote}
For both internal and external scheduling, the set of monitors is disjoint so unblocking is impossible.

In all cases, the statement following the clauses is executed \emph{after} a clause is executed to know which of the clauses executed.
Note, a group of conditional @waitfor@ clauses is \emph{not} the same as a group of @if@ statements, \eg:
\begin{cfa}
if ( C1 ) waitfor( mem1 );                       when ( C1 ) waitfor( mem1 );
else if ( C2 ) waitfor( mem2 );          or when ( C2 ) waitfor( mem2 );
\end{cfa}
The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.

An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated.
\begin{cfa}
void insert( Buffer(T) & mutex buffer, T elem ) with( buffer ) {
}
\end{cfa}
When the buffer is deallocated, the current waiter is unblocked and informed, so it can perform an appropriate action.
However, the basic @waitfor@ semantics do not support this functionality, since using an object after its destructor is called is undefined.
Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor and then the caller is unblocked from the urgent queue to deallocate the object.
Accepting the destructor is an idiomatic way to terminate a thread in \CFA.

\subsection{\protect\lstinline@mutex@ Threads}

Threads in \CFA are monitors to allow direct communication among threads, \ie threads can have mutex routines that are called by other threads.
Hence, all monitor features are available when using threads.
Figure~\ref{f:pingpong} shows an example of two threads directly calling each other and accepting calls from each other in a cycle.
Note, both ping/pong threads are globally declared, @pi@/@po@, and hence, start (and possibly complete) before the program main starts.
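A sketch along the lines of Figure~\ref{f:pingpong} follows (the loop count and empty routine bodies are illustrative assumptions):
\begin{cfa}
thread Ping {} pi;
thread Pong {} po;
void ping( Ping & mutex ) {}                    $\C{// mutex routines for cross-thread calls}$
void pong( Pong & mutex ) {}
void main( Ping & pi ) {
	for ( int i = 0; i < 10; i += 1 ) {
		waitfor( ping, pi );                    $\C{// accept call from pong thread}$
		pong( po );                             $\C{// call pong thread}$
	}
}
void main( Pong & po ) {
	for ( int i = 0; i < 10; i += 1 ) {
		ping( pi );                             $\C{// call ping thread}$
		waitfor( pong, po );                    $\C{// accept call from ping thread}$
	}
}
int main() {}                                   $\C{// global threads pi/po run around program main}$
\end{cfa}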
\begin{figure}

Now, high-performance applications must care about parallelism, which requires concurrency.
The lowest-level approach of parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc.
However, kernel threads are better as an implementation tool because of complexity and higher cost.
Therefore, different abstractions are often layered onto kernel threads to simplify them, \eg pthreads.

A direct improvement on kernel threads is user threads, \eg Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
This approach provides an interface that matches the language paradigms, more control over concurrency by the language runtime, and an abstract (and portable) interface to the underlying kernel threads across operating systems.
In many cases, user threads can be used on a much larger scale (100,000 threads).
Like kernel threads, user threads support preemption, which maximizes nondeterminism, but introduces the same concurrency errors: race, livelock, starvation, and deadlock.
\CFA adopts user-threads as they represent the truest realization of concurrency and can build any other concurrency approach, \eg thread pools and actors~\cite{Actors}.

While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases.
As well, concurrency errors return, which thread pools are supposed to mitigate.
Intel's TBB library~\cite{TBB} is the gold standard for thread pools.

Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program.
In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution: cluster and (virtual) processor.
An executing thread is illustrated by its containment in a processor.

\label{s:RuntimeStructureCluster}

A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the threads (like a virtual machine).
The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
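For illustration, adding parallelism might be sketched as follows, assuming @cluster@ and @processor@ types whose constructors associate a processor with a cluster (the exact constructor signatures are assumptions here):
\begin{cfa}
cluster clus;                                   $\C{// new scheduling domain with default scheduler}$
processor p1( clus ), p2( clus );               $\C{// two virtual processors (kernel threads) for clus}$
\end{cfa}
Threads on @clus@ are then scheduled across @p1@ and @p2@, giving up to two-way parallelism within the cluster.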
No automatic load balancing among clusters is performed by \CFA.

When a \CFA program begins execution, it creates a user cluster with a single processor and a special processor to handle preemption that does not execute user threads.
The user cluster is created to contain the application user-threads.
Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime.
However, because of limitations of the underlying operating system, heterogeneous hardware, or scheduling requirements (real-time), it is sometimes necessary to have multiple clusters.

Programs may use more virtual processors than hardware processors.
On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
balancing the workload with processors is difficult.

As well, chained stacks require all modules be recompiled to use this feature, which breaks backward compatibility with existing C libraries.
In the long term, it is likely C libraries will migrate to stack chaining to support concurrency, at only a minimal cost to sequential programs.
Nevertheless, experience teaching \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue in most concurrent programs.

A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is (almost) free of allocations.

This method is a 2-step context-switch and provides a clear distinction between user and kernel code, where scheduling and other system operations happen.
The alternative 1-step context-switch uses the \emph{from} thread's stack to schedule and then context-switches directly to the \emph{to} thread's stack.
Experimental results (not presented) show the performance difference between these two approaches is virtually equivalent, because the 1-step performance is dominated by a lock instruction to prevent a race condition.
All kernel threads (@pthreads@) are created with a stack.

Finally, an important aspect for a complete threading system is preemption, which introduces extra non-determinism via transparent interleaving, rather than cooperation among threads for proper scheduling and processor fairness from long-running threads.
Because preemption frequency is usually long (1 millisecond), performance cost is negligible.
Preemption is normally handled by setting a count-down timer on each virtual processor.
When the timer expires, an interrupt is delivered, and the interrupt handler resets the count-down timer, and if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
Therefore, all virtual processors in a cluster need to have the same signal mask.
However, on current UNIX systems:
\begin{quote}
A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
\end{quote}
Hence, the timer-expiry signal, which is generated \emph{externally} by the UNIX kernel to the UNIX process, is delivered to any of its UNIX subprocesses (kernel threads).
To ensure each virtual processor receives its own preemption signals, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events.
Virtual processors register an expiration time with the discrete-event simulator, which is inserted in sorted order.
The simulation sets the count-down timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed.

Table~\ref{t:machine} shows the specifications of the computer used to run the benchmarks, and the versions of the software used in the comparison.
\begin{table}
\centering
\caption{Experiment environment}

The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
the total time is divided by @N@ to obtain the average time for a benchmark.

The thread context switch is 2-step using yield, \ie enter and return from the runtime kernel.
Figure~\ref{f:ctx-switch} shows the code for coroutines/threads with all results in Table~\ref{tab:ctx-switch}.
All omitted tests for other languages are functionally identical to this test (as for all other tests).
The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.
\end{lrbox}

\subfloat[Coroutine]{\usebox\myboxA}
\quad
\subfloat[Thread]{\usebox\myboxB}
\captionof{figure}{\CFA context-switch benchmark}
\label{f:ctx-switch}
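The timing harness described above has the following general shape (the empty loop body stands in for the benchmarked operation; the value of @N@ is illustrative):
\begin{cfa}
#include <stdio.h>
#include <time.h>
enum { N = 1000000 };
long long int Time() {                          $\C{// current time in nanoseconds}$
	struct timespec ts;
	clock_gettime( CLOCK_REALTIME, &ts );
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}
int main() {
	long long int start = Time();
	for ( volatile int i = 0; i < N; i += 1 ) {} $\C{// benchmarked operation goes here}$
	printf( "%g ns per operation\n", (double)( Time() - start ) / N );
}
\end{cfa}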