# Changeset b199e54 for doc/papers/concurrency/Paper.tex

Timestamp:
Jun 27, 2018, 6:37:47 PM (4 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, no_list, persistent-indexer, pthread-emulation, qualifiedEnum
Children:
6d6cf5a
Parents:
203c667
Message:

first complete draft

File:
1 edited

\renewcommand{\thesubfigure}{(\Alph{subfigure})}
\captionsetup{justification=raggedright,singlelinecheck=false}
\usepackage{siunitx}
\sisetup{binary-units=true}
\usepackage{dcolumn}						% align decimal points in tables
\hypersetup{breaklinks=true}

An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{Hochstein05}.
Examples of high-level approaches are jobs (tasks) based~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.

The following terminology is used.

\begin{tabular}{@{}ll@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}

\begin{cfa}
int ++?(int op);  int ?++(int op);
int ?+?(int op1, int op2);  int ?<=?(int op1, int op2);
int ?=? (int & op1, int op2);
\end{cfa}

\label{s:ParametricPolymorphism}
The signature feature of \CFA is parametric-polymorphic routines~\cite{Cforall}, generalized using a @forall@ clause (giving the language its name), which allows separately compiled routines to support generic usage over multiple types.
For example, the following sum routine works for any type that supports construction from 0 and addition:
\begin{cfa}
\end{lrbox}

\subfloat[3 States: global variables]{\usebox\myboxA}
\qquad
\subfloat[1 State: external variables]{\usebox\myboxB}
\caption{C Fibonacci Implementations}
\label{f:C-fibonacci}

symmetric_coroutine<>::yield_type
\end{cfa}
Similarly, the canonical threading paradigm is often based on routine pointers, \eg @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type.
\begin{cfa}
}
\end{cfa}
This example shows a trivial solution to the bank-account transfer problem.
Without multi- and bulk acquire, the solution to this problem requires careful engineering.

Like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
monitor M {};
\end{cfa} \\
\multicolumn{1}{c}{\textbf{routine call}} & \multicolumn{1}{c}{\lstinline@mutex@ \textbf{statement}}
\end{tabular}
\end{cquote}

		wait( Girls[ccode] );
		GirlPhNo = phNo;
		exchange.signal();
	} else {
		GirlPhNo = phNo;
		signal( Boys[ccode] );
		exchange.wait();
	} // if
	return BoyPhNo;

	} else {
		GirlPhNo = phNo;					// make phone number available
		signal_block( Boys[ccode] );		// restart boy
	} // if

The @waitfor@ statement statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
When an overloaded routine appears in a @waitfor@ statement, calls to any routine with that name are accepted.
The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call.
As always, overloaded routines can be disambiguated using a cast:
\begin{cfa}
void rtn( M & mutex m );
int rtn( M & mutex m );
waitfor( (int (*)( M & mutex ))rtn, m1, m2 );
\end{cfa}

The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.

This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.

\begin{comment}
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} lines \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:

\paragraph{Case 1: thread $\alpha$ goes first.}
In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.}
In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
\\

Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.

In both cases, the threads need to be able to distinguish, on a per-monitor basis, which monitors need to be released and which need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
\subsubsection{Dependency graphs}

\begin{figure}
\begin{multicols}{3}
Thread $\alpha$
\begin{cfa}[numbers=left, firstnumber=1]
acquire A
	acquire A & B
		wait A & B
	release A & B
release A
\end{cfa}
\columnbreak
Thread $\gamma$
\begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
acquire A
	acquire A & B
		|\label{line:signal-ab}|signal A & B
	|\label{line:release-ab}|release A & B
	|\label{line:signal-a}|signal A
|\label{line:release-a}|release A
\end{cfa}
\columnbreak
Thread $\beta$
\begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
acquire A
	wait A
|\label{line:release-aa}|release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Pseudo-code for the three-thread example.},label={f:dependency}]
\end{cfa}
\begin{center}
\input{dependency}
\end{center}
\caption{Dependency graph of the statements in listing \ref{f:dependency}}
\label{fig:dependency}
\end{figure}

In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and the waiter then transfers ownership of @A@ back to the signaller when it releases it, the problem is solved (@B@ is no longer in use at this point).
Dynamically finding the correct order is therefore the second possible solution.
The problem is effectively resolving a dependency graph of ownership requirements.
Here, even the simplest of code snippets requires two transfers and has super-linear complexity.
This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, yet requires at least three ownership transfers and has multiple solutions.
Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that multiple lock-acquisition orders can cause deadlocks.
\begin{figure}
\begin{multicols}{2}
\begin{cfa}
acquire A
	acquire B
		acquire C
			wait A & B & C
		release C
	release B
release A
\end{cfa}
\columnbreak
\begin{cfa}
acquire A
	acquire B
		acquire C
			signal A & B & C
		release C
	release B
release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
\end{cfa}
\end{figure}

Given the three-thread example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows show the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
Because resolving dependency graphs is a complex and expensive endeavour, this solution is not the preferred one.
\end{comment}

\begin{comment}
\section{External scheduling} \label{extsched}

\begin{table}
\begin{tabular}{|c|c|c|}
Internal Scheduling & External Scheduling & Go \\
\hline
\begin{uC++}[tabsize=3]
_Monitor Semaphore {
	condition c;
	bool inUse;
public:
	void P() {
		if ( inUse ) wait( c );
		inUse = true;
	}
	void V() {
		inUse = false;
		signal( c );
	}
}
\end{uC++}&\begin{uC++}[tabsize=3]
_Monitor Semaphore {
	bool inUse;
public:
	void P() {
		if ( inUse ) _Accept( V );
		inUse = true;
	}
	void V() {
		inUse = false;
	}
}
\end{uC++}&\begin{Go}[tabsize=3]
type MySem struct {
	inUse bool
	c     chan bool
}
// acquire
func (s MySem) P() {
	if s.inUse {
		select {
		case <-s.c:
		}
	}
	s.inUse = true
}
// release
func (s MySem) V() {
	s.inUse = false
	// This actually deadlocks
	// when single thread
	s.c <- false
}
\end{Go}
\end{tabular}
\caption{Different forms of scheduling.}
\label{tbl:sched}
\end{table}

For the @P@ member above using internal scheduling, the call to @wait@ only guarantees that @V@ is the last routine to access the monitor, allowing a third routine, say @isInUse()@, to acquire mutual exclusion several times while routine @P@ is waiting.
On the other hand, external scheduling guarantees that while routine @P@ is waiting, no routine other than @V@ can acquire the monitor.
\end{comment}

\subsection{Loose Object Definitions}

The accepted list is a variable-sized array of accepted routine pointers, so the single-instruction bitmask comparison is replaced by dereferencing a pointer followed by a linear search.

\begin{comment}
\begin{figure}
\begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}]
monitor M {};
void foo( M & mutex a ) {}
void bar( M & mutex b ) {
	// Nested in the waitfor( bar, c ) call
	waitfor( foo, b );
}
void baz( M & mutex c ) {
	waitfor( bar, c );
}
\end{cfa}
\end{figure}

Note that in the right picture, tasks always need to keep track of the monitors associated with mutex routines, and the routine mask needs to have both a routine pointer and a set of monitors, as is discussed in the next section.
These details are omitted from the picture for the sake of simplicity.

At this point, a decision must be made between flexibility and performance.
Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost.
Here, however, the cost of flexibility cannot be trivially removed.
In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write.
This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
\end{comment}

\subsection{Multi-Monitor Scheduling}

\begin{cfa}
void f( M & mutex m1 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f );							$\C{\color{red}// pass m1 or m2 to f?}$
}
\end{cfa}
The solution is for the programmer to disambiguate:
\begin{cfa}
	waitfor( f, m2 );						$\C{\color{red}// wait for call to f with argument m2}$
\end{cfa}
Routine @g@ has acquired both locks, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
This behaviour can be extended to the multi-monitor @waitfor@ statement.
\begin{cfa}
void f( M & mutex m1, M & mutex m2 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f, m1, m2 );					$\C{\color{red}// wait for call to f with arguments m1 and m2}$
}
\end{cfa}
Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine.

Note, for internal and external scheduling with multiple monitors, a signalling or accepting thread must match exactly, \ie partial matching results in waiting.
\begin{cquote}
\lstDeleteShortInline@%
\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
\begin{cfa}
monitor M1 {} m11, m12;
monitor M2 {} m2;
condition c;
void f( M1 & mutex m1, M2 & mutex m2 ) {
	signal( c );
}
void g( M1 & mutex m1, M2 & mutex m2 ) {
	wait( c );
}
g( m11, m2 ); // block on accept
f( m12, m2 ); // cannot fulfil
\end{cfa}
&
\begin{cfa}
monitor M1 {} m11, m12;
monitor M2 {} m2;
void f( M1 & mutex m1, M2 & mutex m2 ) {
}
void g( M1 & mutex m1, M2 & mutex m2 ) {
	waitfor( f, m1, m2 );
}
g( m11, m2 ); // block on accept
f( m12, m2 ); // cannot fulfil
\end{cfa}
\end{tabular}
\lstMakeShortInline@%
\end{cquote}
In both cases, partially matching monitor sets does not wakeup the waiting thread.
It is also important to note that, in the case of external scheduling, the order of parameters is irrelevant; @waitfor(f,a,b)@ and @waitfor(f,b,a)@ are indistinguishable waiting conditions.

\subsection{\protect\lstinline|waitfor| Semantics}

Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors.
While the set of monitors can be any list of expressions, the routine name is more restricted because the compiler validates at compile time the validity of the routine type and the parameters used with the @waitfor@ statement.
It checks that the set of monitors passed in matches the requirements for a routine call.
Figure~\ref{f:waitfor} shows various usages of the @waitfor@ statement and which are acceptable.
The choice of the routine type is made ignoring any non-@mutex@ parameter.
One limitation of the current implementation is that it does not handle overloading, but overloading is possible.

\subsection{Extended \protect\lstinline@waitfor@}

The extended form of the @waitfor@ statement conditionally accepts one of a group of mutex routines and allows a specific action to be performed \emph{after} the mutex routine finishes.
\begin{cfa}
when ( $\emph{conditional-expression}$ )	$\C{// optional guard}$
	waitfor( $\emph{mutex-member-name}$ )
		$\emph{statement}$					$\C{// action after call}$
or when ( $\emph{conditional-expression}$ )	$\C{// optional guard}$
	waitfor( $\emph{mutex-member-name}$ )
		$\emph{statement}$					$\C{// action after call}$
or	...										$\C{// list of waitfor clauses}$
when ( $\emph{conditional-expression}$ )	$\C{// optional guard}$
	timeout									$\C{// optional terminating timeout clause}$
		$\emph{statement}$					$\C{// action after timeout}$
when ( $\emph{conditional-expression}$ )	$\C{// optional guard}$
	else									$\C{// optional terminating clause}$
		$\emph{statement}$					$\C{// action when no immediate calls}$
\end{cfa}
For a @waitfor@ clause to be executed, its @when@ must be true and an outstanding call to its corresponding member(s) must exist.
The \emph{conditional-expression} of a @when@ may call a routine, but the routine must not block or context switch.
If there are several mutex calls that can be accepted, selection occurs top-to-bottom in the @waitfor@ clauses rather than non-deterministically.
If some accept guards are true and there are no outstanding calls to these members, the acceptor is accept-blocked until a call to one of these members is made.
If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead.
Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking.
If there is a @timeout@ clause, it provides an upper bound on waiting, and can only appear with a conditional @else@, otherwise the timeout cannot be triggered.
In all cases, the statement following a clause is executed \emph{after} that clause triggers, so a program can tell which of the clauses executed.

A group of conditional @waitfor@ clauses is \emph{not} the same as a group of @if@ statements, \eg:
\begin{cfa}
if ( C1 ) waitfor( mem1 );					when ( C1 ) waitfor( mem1 );
else if ( C2 ) waitfor( mem2 );				or when ( C2 ) waitfor( mem2 );
\end{cfa}
The left example accepts only @mem1@ if @C1@ is true or only @mem2@ if @C2@ is true.
The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.

An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated.
\begin{cfa}
void insert( Buffer(T) & mutex buffer, T elem ) with( buffer ) {
	if ( count == BufferSize )
		waitfor( remove, buffer ) {
			elements[back] = elem;
			back = ( back + 1 ) % BufferSize;
			count += 1;
		} or waitfor( ^?{}, buffer )
			throw insertFail;
}
\end{cfa}
However, the @waitfor@ semantics do not work, since using an object after its destructor is called is undefined.
Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor and then deallocates the object.
Accepting the destructor is an idiomatic way to terminate a thread in \CFA.
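As an illustrative sketch of the extended form (the mutex members @mem1@/@mem2@, the monitor field @done@, and the duration parameter @delay@ are hypothetical, not from the original), a routine might combine a guard, two accept clauses, and a terminating timeout:
\begin{cfa}
monitor M { bool done; };
void mem1( M & mutex m ) {}
void mem2( M & mutex m ) {}
void poll( M & mutex m, Duration delay ) {	// Duration assumed from the time library
	when ( ! m.done ) waitfor( mem1, m ) {	// accept mem1 only while not done
		// action after a mem1 call completes
	} or waitfor( mem2, m ) {				// unconditionally accept mem2
		// action after a mem2 call completes
	} or timeout( delay ) {					// upper bound on waiting
		// action after timeout
	}
}
\end{cfa}
Only one clause triggers per execution of the statement, and its trailing compound statement indicates which call (or the timeout) occurred.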
\subsection{\protect\lstinline@mutex@ Threads}

Threads in \CFA are monitors, so all monitor features are available when using threads.
Figure~\ref{f:pingpong} shows an example of two threads calling and accepting calls from each other in a cycle.
Note, both ping/pong threads are globally declared, @pi@/@po@, and hence, start (and possibly complete) before the program @main@ starts.

\begin{figure}
\begin{cfa}[caption={Various correct and incorrect uses of the waitfor statement},label={f:waitfor}]
monitor A{};
monitor B{};
void f1( A & mutex );
void f2( A & mutex, B & mutex );
void f3( A & mutex, int );
void f4( A & mutex, int );
void f4( A & mutex, double );
void foo( A & mutex a1, A & mutex a2, B & mutex b1, B & b2 ) {
	A * ap = & a1;
	void (*fp)( A & mutex ) = f1;

	waitfor(f1, a1);     // Correct : 1 monitor case
	waitfor(f2, a1, b1); // Correct : 2 monitor case
	waitfor(f3, a1);     // Correct : non-mutex arguments are ignored
	waitfor(f1, *ap);    // Correct : expression as argument

	waitfor(f1, a1, b1); // Incorrect : Too many mutex arguments
	waitfor(f2, a1);     // Incorrect : Too few mutex arguments
	waitfor(f2, a1, a2); // Incorrect : Mutex arguments don't match
	waitfor(f1, 1);      // Incorrect : 1 not a mutex argument
	waitfor(f9, a1);     // Incorrect : f9 routine does not exist
	waitfor(*fp, a1 );   // Incorrect : fp not an identifier
	waitfor(f4, a1);     // Incorrect : f4 ambiguous

	waitfor(f2, a1, b2); // Undefined behaviour : b2 not mutex
}
\end{cfa}
\lstDeleteShortInline@%
\begin{cquote}
\begin{cfa}
thread Ping {} pi;
thread Pong {} po;
void ping( Ping & mutex ) {}
void pong( Pong & mutex ) {}
int main() {}
\end{cfa}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
void main( Ping & pi ) {
	for ( int i = 0; i < 10; i += 1 ) {
		waitfor( ping, pi );
		pong( po );
	}
}
\end{cfa}
&
\begin{cfa}
void main( Pong & po ) {
	for ( int i = 0; i < 10; i += 1 ) {
		ping( pi );
		waitfor( pong, po );
	}
}
\end{cfa}
\end{tabular}
\lstMakeShortInline@%
\end{cquote}
\caption{Threads ping/pong using external scheduling}
\label{f:pingpong}
\end{figure}

Finally, for added flexibility, \CFA supports constructing a complex @waitfor@ statement using @or@, @timeout@, and @else@ clauses.
Indeed, multiple @waitfor@ clauses can be chained together using @or@; this chain forms a single statement that uses baton passing to any routine that fits one of the routine-and-monitor sets passed in.
To enable users to tell which accepted routine executed, @waitfor@s are followed by a statement (including the null statement @;@) or a compound statement, which is executed after the clause is triggered.
A @waitfor@ chain can also be followed by a @timeout@, to signify an upper bound on the wait, or an @else@, to signify that the call should be non-blocking, which checks whether a matching routine call has already arrived and otherwise continues.
Any and all of these clauses can be preceded by a @when@ condition to dynamically toggle the accept clauses on or off based on some current state.
Figure~\ref{f:waitfor2} demonstrates several complex masks and some incorrect ones.

\section{Parallelism}

Historically, computer performance was about processor speeds.
However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}.
Now, high-performance applications must care about parallelism, which requires concurrency.
The lowest-level approach of parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc.
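For reference, a minimal sketch of this lowest-level approach using the POSIX @pthreads@ API (plain C, not a \CFA feature) is:
\begin{cfa}
#include <pthread.h>
#include <stdio.h>
void * worker( void * arg ) {						// kernel-thread body
	printf( "worker %ld\n", (long)arg );
	return NULL;
}
int main() {
	pthread_t t;
	pthread_create( &t, NULL, worker, (void *)1 );	// fork a kernel thread
	pthread_join( t, NULL );						// join: wait for its termination
}
\end{cfa}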
However, kernel threads are better suited as an implementation tool because of their complexity and high cost.
Therefore, different abstractions are layered onto kernel threads to simplify them.

\subsection{User Threads with Preemption}

A direct improvement on kernel threads is user threads, \eg Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
This approach provides an interface that matches the language paradigms, more control over concurrency in the language runtime, and an abstract (and portable) interface to the underlying kernel threads across operating systems.
In many cases, user threads can be used on a much larger scale (100,000 threads).
Like kernel threads, user threads support preemption, which maximizes nondeterminism, but introduces concurrency errors: race, livelock, starvation, and deadlock.
\CFA adopts user threads as they represent the truest realization of concurrency, from which the following approaches and more, \eg actors~\cite{Actors}, can be built.

\subsection{User Threads without Preemption (Fiber)}
\label{s:fibers}

A variant of user threads is \newterm{fibers}, which removes preemption, \eg Go~\cite{Go}.
Like functional programming, which removes mutation and its associated problems, removing preemption from concurrency reduces nondeterminism, hence race and deadlock errors are more difficult to generate.
However, preemption is necessary for concurrency that relies on spinning, so there is a class of problems that cannot be programmed without preemption.

\subsection{Thread Pools}

In contrast to direct threading is indirect \newterm{thread pools}, where small jobs (work units) are inserted into a work pool for execution.
If the jobs are dependent, \ie interact, there is an implicit/explicit dependency graph that ties them together.
While removing direct concurrency, and hence the amount of context switching, thread pools significantly limit the interaction that can occur among jobs.
Indeed, a job should not block because that also blocks the underlying thread, which effectively means CPU utilization, and therefore throughput, suffers.
While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases.
As well, concurrency errors return, which thread pools are supposed to mitigate.
The gold standard for thread pools is Intel's TBB library~\cite{TBB}.

\section{\protect\CFA Runtime Structure}

Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program.
In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution.
An executing thread is illustrated by its containment in a processor.
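As a small declarative sketch of these entities (the thread type @Worker@ is a hypothetical name; @processor@ is the runtime type shown in Figure~\ref{f:RunTimeStructure}), parallelism can be added by declaration:
\begin{cfa}
thread Worker {};						// user-thread type
void main( Worker & ) { /* thread body */ }
int main() {
	processor p;						// add a virtual processor (kernel thread)
	Worker w[4];						// user threads start implicitly on declaration
}										// implicit join: block until all w terminate, then p is removed
\end{cfa}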
\begin{figure}
\lstset{language=CFA,deletedelim=**[is][]{}{}}
\begin{cfa}
monitor A{};
void f1( A & mutex );
void f2( A & mutex );
void foo( A & mutex a, bool b, int t ) {
	waitfor(f1, a);								$\C{// Correct : blocking case}$

	waitfor(f1, a) {							$\C{// Correct : block with statement}$
		sout | "f1" | endl;
	}
	waitfor(f1, a) {							$\C{// Correct : block waiting for f1 or f2}$
		sout | "f1" | endl;
	} or waitfor(f2, a) {
		sout | "f2" | endl;
	}
	waitfor(f1, a); or else;					$\C{// Correct : non-blocking case}$

	waitfor(f1, a) {							$\C{// Correct : non-blocking case}$
		sout | "blocked" | endl;
	} or else {
		sout | "didn't block" | endl;
	}
	waitfor(f1, a) {							$\C{// Correct : block at most 10 seconds}$
		sout | "blocked" | endl;
	} or timeout( 10s) {
		sout | "didn't block" | endl;
	}
	// Correct : block only if b == true; if b == false, don't even make the call
	when(b) waitfor(f1, a);

	// Correct : block only if b == true; if b == false, make a non-blocking call
	waitfor(f1, a); or when(!b) else;

	// Correct : block only if t > 1
	waitfor(f1, a); or when(t > 1) timeout(t); or else;

	// Incorrect : timeout clause is dead code
	waitfor(f1, a); or timeout(t); or else;

	// Incorrect : order must be waitfor [or waitfor... [or timeout] [or else]]
	timeout(t); or waitfor(f1, a); or else;
}
\end{cfa}
\caption{Correct and incorrect uses of the or, else, and timeout clauses around a waitfor statement}
\label{f:waitfor2}
\centering
\input{RunTimeStructure}
\caption{\CFA Runtime Structure}
\label{f:RunTimeStructure}
\end{figure}

\subsection{Waiting For The Destructor}

An interesting use for the @waitfor@ statement is destructor semantics.
Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}).
However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour.
The simplest approach is to disallow @waitfor@ on a destructor.
However, a more expressive approach is to flip the ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current @mutex@ routine, similarly to how a condition is signalled.
\begin{figure}
\begin{cfa}[caption={Example of an executor which executes actions in series until the destructor is called.},label={f:dtor-order}]
monitor Executer {};
struct Action;
void ^?{}   (Executer & mutex this);
void execute(Executer & mutex this, const Action & );
void run    (Executer & mutex this) {
	while(true) {
		   waitfor(execute, this);
		or waitfor(^?{}   , this) {
			break;
		}
	}
}
\end{cfa}
\end{figure}
For example, listing \ref{f:dtor-order} shows an executor with an infinite loop, which waits for the destructor to break out of this loop.
Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.

\section{Parallelism}

Historically, computer performance was about processor speeds and instruction counts.
However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}.
In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism.
Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization.
The lowest-level approach of parallelism is to use \textbf{kthread} in combination with semantics like @fork@, @join@, \etc.
However, since these have significant costs and limitations, \textbf{kthread} are now mostly used as an implementation tool rather than a user-oriented one.
There are several alternatives to solve these issues that all have strengths and weaknesses.
While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads.

\section{Paradigms}

\subsection{User-Level Threads}

A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}.
These threads offer most of the same features that the operating system already provides but can be used on a much larger scale.
This approach is the most powerful solution as it allows all the features of multithreading, while removing several of the more expensive costs of kernel threads.
The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues.
These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself.
Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.

\subsection{Fibers : User-Level Threads Without Preemption}
\label{fibers}

A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}.
However, \textbf{fiber} do not present meaningful semantic differences with \textbf{uthread}; the significant difference between the two is the lack of \textbf{preemption} in the latter.
Advocates of \textbf{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \textbf{uthread} and \textbf{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design.
Therefore, this proposal largely ignores fibers.
An example of a language that uses fibers is Go~\cite{Go}.

\subsection{Jobs and Thread Pools}

An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}.
Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface.
In \textbf{pool} based systems, users express parallelism as units of work, called jobs, and a dependency graph (either explicit or implicit) that ties them together.
This approach means users need not worry about concurrency but significantly limits the interaction that can occur among jobs.
Indeed, any \textbf{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably.
It can be argued that a solution to this problem is to use more workers than available cores.
However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.
The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
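As a sketch of the jobs model (the @pool@, @submit@, and @worker@ names are hypothetical illustrations, not a TBB or \CFA API, and all synchronization is omitted for brevity), work units might be expressed as:
\begin{cfa}
typedef void (* job_fn)( void * );
struct job { job_fn fn; void * arg; };			// unit of work
struct pool { struct job queue[256]; unsigned head, tail; };

void submit( struct pool * p, job_fn fn, void * arg ) {
	p->queue[ p->tail++ % 256 ] = (struct job){ fn, arg };	// enqueue for a worker
}
void worker( struct pool * p ) {				// body of each worker thread
	while ( p->head != p->tail ) {
		struct job j = p->queue[ p->head++ % 256 ];
		j.fn( j.arg );							// a job must not block, or the worker (core) idles
	}
}
\end{cfa}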
\subsection{Paradigm Performance}

While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down these implications when choosing a model at the language level.
Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload.
Having a large number of mostly independent units of work to execute almost guarantees equivalent performance across paradigms, and that the \textbf{pool}-based system has the best efficiency thanks to the lower memory overhead (\ie no thread stack per job).
However, interactions among jobs can easily exacerbate contention.
User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance.
Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.

\section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}

A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation.
\textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}.
It is important that \textbf{kthread} belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues.
A \textbf{cfacluster} also offers a pluggable scheduler that can optimize the workload generated by the \textbf{uthread}.

\textbf{cfacluster} have not been fully implemented in the context of this paper.
Currently, \CFA only supports one \textbf{cfacluster}, the initial one.

\subsection{Future Work: Machine Setup}\label{machine}

While this was not done in the context of this paper, another important aspect of clusters is affinity.
While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups.
For example, a system using \textbf{numa} configurations may benefit from users being able to tie clusters and/or kernel threads to certain CPU cores.
OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.

\subsection{Paradigms}\label{cfaparadigms}

Given these building blocks, it is possible to reproduce all three of the popular paradigms.
Indeed, \textbf{uthread} is the default paradigm in \CFA.
However, disabling \textbf{preemption} on a cluster means threads effectively become fibers.
Since several \textbf{cfacluster} with different scheduling policies can coexist in the same application, this allows \textbf{fiber} and \textbf{uthread} to coexist in the runtime of an application.
Finally, it is possible to build executors for thread pools from \textbf{uthread} or \textbf{fiber}, which includes specialized jobs like actors~\cite{Actors}.

\section{Behind the Scenes}

There are several challenges specific to \CFA when implementing concurrency.
These challenges are a direct result of bulk acquire and loose object definitions.
These two constraints are the root cause of most design decisions in the implementation.
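For reference (restating the feature rather than introducing new semantics), bulk acquire means a single @mutex@ routine can acquire several monitors atomically:
\begin{cfa}
monitor M {};
void f( M & mutex m1, M & mutex m2 ) {	// bulk acquire: both monitors locked on entry
	// critical section protected by m1 and m2
}										// both monitors released on exit
\end{cfa}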
Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs.
This approach avoids the chicken-and-egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the memory allocator.
This extra goal means that memory management is a constant concern in the design of the system.

The main memory concern for concurrency is queues.
All blocking operations are made by parking threads onto queues, and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation.
Since several concurrency operations can use an unbound amount of memory (depending on bulk acquire), statically defining information in the intrusive fields of threads is insufficient.
The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers.
Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly for variable-length arrays.
Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array.
The threads and the condition both have a fixed amount of memory, while @mutex@ routines and blocking calls allow for an unbound amount, within the stack size.

Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered solved problems and therefore not discussed.

\section{Mutex Routines}

The first step towards the monitor implementation is simple @mutex@ routines.
In the single-monitor case, mutual exclusion is done using the entry/exit procedure in listing \ref{f:entry1}.
The entry/exit procedures do not have to be extended to support multiple monitors.
Indeed, it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}.
In \CFA, ordering of monitor acquisition relies on memory ordering.
This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts, and mutual exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour.
When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values.

\subsection{Cluster}
\label{s:RuntimeStructureCluster}

A \newterm{cluster} is a collection of threads and virtual processors (an abstraction of a kernel thread) that execute the threads (like a virtual machine).
The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
However, the scheduler is pluggable, supporting alternative schedulers.
If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
No automatic load balancing among clusters is performed by \CFA.
When a \CFA program begins execution, it creates two clusters: system and user.
The system cluster contains a processor that does not execute user threads.
Instead, the system cluster handles system-related operations, such as catching errors that occur on the user clusters, printing appropriate error information, and shutting down \CFA.
A user cluster is created to contain the user threads.
Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime.
However, because of limitations of the underlying operating system, special hardware, or scheduling requirements (real-time), it is sometimes necessary to have multiple clusters.

\subsection{Virtual Processor}
\label{s:RuntimeStructureProcessor}

A virtual processor is implemented by a kernel thread (\eg UNIX process), which is subsequently scheduled for execution on a hardware processor by the underlying operating system.
Programs may use more virtual processors than hardware processors.
On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
The \CFA runtime attempts to block unused processors and unblock processors as the system load increases; balancing the workload with processors is difficult.
Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads.
Turning off preemption transforms user threads into fibers.

\subsection{Debug Kernel}

There are two versions of the \CFA runtime kernel: debug and non-debug.
The debugging version has many runtime checks and internal assertions, \eg stack (non-writable) guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catches most stack overflows.
After a program is debugged, the non-debugging version can be used to decrease space and increase performance.

\section{Implementation}

Currently, \CFA has fixed-sized stacks, where the stack size can be set at coroutine/thread creation but with no subsequent growth.
Schemes exist for dynamic stack-growth, such as stack copying and chained stacks.
However, stack copying requires pointer adjustment to items on the stack, which is impossible without some form of garbage collection.
As well, chained stacks require all modules be recompiled to use this feature, which breaks backward compatibility with existing C libraries.
In the long term, it is likely C libraries will migrate to stack chaining to support concurrency, at only a minimal cost to sequential programs.
Nevertheless, experience teaching \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue in most concurrent programs.

A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is (almost) free of allocations.
All blocking operations are made by parking threads onto queues; therefore, all queues are designed with intrusive nodes, where each node has preallocated link fields for chaining.
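A minimal sketch of an intrusive queue (the type and field names are hypothetical, not the \CFA runtime's actual declarations) shows why enqueue and dequeue never allocate:
\begin{cfa}
struct thread_desc {							// thread descriptor
	struct thread_desc * next;					// preallocated intrusive link for chaining
};
struct intrusive_queue { struct thread_desc * head, * tail; };

void enqueue( struct intrusive_queue * q, struct thread_desc * t ) {
	t->next = NULL;								// link field lives inside the node: no malloc
	if ( q->tail ) q->tail->next = t; else q->head = t;
	q->tail = t;
}
\end{cfa}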
Furthermore, several bulk-acquire operations need a variable amount of memory.
This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks.

In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects are guaranteed to have distinct non-overlapping memory layouts, and mutual exclusion for a monitor is only defined for its lifetime.
When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted.
This array persists for the entire duration of the mutual exclusion and its ordering is reused extensively.

\begin{figure}
\begin{multicols}{2}
Entry
\begin{cfa}
if monitor is free
	enter
elif already own the monitor
	continue
else
	block
increment recursions
\end{cfa}
\columnbreak
Exit
\begin{cfa}
decrement recursion
if recursion == 0
	if entry queue not empty
		wake-up thread
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Initial entry and exit routine for monitors},label={f:entry1}]
\end{cfa}
\end{figure}

\subsection{Details: Interaction with polymorphism}

Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support.
However, it is shown that entry-point locking solves most of the issues.

First of all, interaction between @otype@ polymorphism (see Section~\ref{s:ParametricPolymorphism}) and monitors is impossible since monitors do not support copying.
Therefore, the main question is how to support @dtype@ polymorphism.
It is important to present the difference between the two acquiring options: \textbf{callsite-locking} and entry-point locking, \ie acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call.
For example:
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
Mutex & \textbf{callsite-locking} & \textbf{entry-point-locking} \\
call & cfa-code & cfa-code \\
\hline
\begin{cfa}[tabsize=3]
void foo(monitor& mutex a){
	// Do Work
	//...
}
void main() {
	monitor a;
	foo(a);
}
\end{cfa} & \begin{cfa}[tabsize=3]
foo(& a) {
	// Do Work
	//...
}
main() {
	monitor a;
	acquire(a);
	foo(a);
	release(a);
}
\end{cfa} & \begin{cfa}[tabsize=3]
foo(& a) {
	acquire(a);
	// Do Work
	//...
	release(a);
}
main() {
	monitor a;
	foo(a);
}
\end{cfa}
\end{tabular}
\end{center}
\caption{Call-site vs entry-point locking for mutex calls}
\label{tbl:locking-site}
\end{table}

Note the @mutex@ keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, \eg:
\begin{cfa}
// Incorrect: T may not be a monitor
forall(dtype T)
void foo(T * mutex t);

// Correct: this routine only works on monitors (any monitor)
forall(dtype T | is_monitor(T))
void bar(T * mutex t);
\end{cfa}
Both entry-point and \textbf{callsite-locking} are feasible implementations.
The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction.
It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, \ie the routine body.
For example, the monitor call can appear in the middle of an expression.
Furthermore, entry-point locking requires less code generation, since any useful routine is called multiple times but there is only one entry point for many call sites.

\section{Threading} \label{impl:thread}

Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.
Each component of the picture is explained in detail in the following sections.

\begin{figure}
\begin{center}
{\resizebox{\textwidth}{!}{\input{system.pstex_t}}}
\end{center}
\caption{Overview of the entire system}
\label{fig:system1}
\end{figure}

\subsection{Processors}

Parallelism in \CFA is built around using processors to specify how much parallelism is desired.
\CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA.
Indeed, any parallelism must go through operating-system libraries.
However, \textbf{uthread} are still the main source of concurrency; processors are simply the underlying source of parallelism.
Indeed, processor \textbf{kthread} simply fetch a \textbf{uthread} from the scheduler and run it; they are effectively executors for user threads.
The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling.
Processors internally use coroutines to take advantage of the existing context-switching semantics.

\subsection{Stack Management}

One of the challenges of this system is to reduce the footprint as much as possible.
Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible.
Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the \textbf{kthread} stack, effectively stealing the processor stack.
The exception to this rule is the Main Processor, \ie the initial \textbf{kthread} that is given to any program.
In order to respect C user expectations, the stack of the initial kernel thread (the main stack of the program, which can grow very large) is used by the main user thread rather than the main processor.

\subsection{Context Switching}

As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks.
To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call.
This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread.
Note that the instruction pointer can be left untouched since the context-switch is always inside the same routine.
Threads, however, do not context-switch between each other directly.
They context-switch to the scheduler.
This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel, where scheduling and other system operations happen.
Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack.
The alternative 1-step context-switch uses the stack of the ``from'' thread to schedule and then context-switches directly to the ``to'' thread.
However, the performance of the 2-step context-switch is still superior to a @pthread_yield@ (see section \ref{results}).
Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft @SwitchToFiber@~\cite{switchToWindows} routine).
This option is not currently present in \CFA, but the changes required to add it are strictly additive.

\subsection{Preemption} \label{preemption}

Finally, an important aspect for any complete threading system is preemption.
As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution.
Indeed, preemption is desirable because it adds a degree of isolation among threads.
In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces a programmer's burden.
Obviously, preemption is not optimal for every workload.
However, any preemptive system can become a cooperative system by making the time slices extremely large.
Therefore, \CFA uses a preemptive threading system.

Preemption in \CFA\footnote{Note that the implementation of preemption is strongly tied to the underlying threading system.
For this reason, only the Linux implementation is covered; \CFA does not run on Windows at the time of writing.} is based on kernel timers, which are used to run a discrete-event simulation.
Every processor keeps track of the current time and registers an expiration time with the preemption system.
When the preemption system receives a change in preemption, it inserts the time in a sorted order and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer.
These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread.
This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked (see the SIGNAL(7) quote below).

To improve performance and simplicity, context switching occurs inside a routine call, so only callee-saved registers are copied onto the stack and then the stack register is switched; the corresponding registers are then restored for the other context.
Note, the instruction pointer is untouched since the context switch is always inside the same routine.
Unlike coroutines, threads do not context switch among each other; they context switch to the cluster scheduler.
This method is a 2-step context-switch and provides a clear distinction between user and kernel code, where scheduling and other system operations happen.
The alternative 1-step context-switch uses the \emph{from} thread's stack to schedule and then context-switches directly to the \emph{to} thread's stack.
Experimental results (not shown) show the performance of these two approaches is virtually equivalent, because the 1-step performance is dominated by locking instructions to prevent a race condition.

All kernel threads (@pthreads@) are created with a stack.
Each \CFA virtual processor is implemented as a coroutine and these coroutines run directly on the kernel-thread stack, effectively stealing this stack.
The exception to this rule is the program main, \ie the initial kernel thread that is given to any program.
In order to respect C expectations, the stack of the initial kernel thread is used by program main rather than the main processor, allowing it to grow dynamically as in a normal C program.

Finally, an important aspect for a complete threading system is preemption, which introduces extra non-determinism via transparent interleaving, rather than cooperation among threads for proper scheduling and processor fairness from long-running threads.
Because the preemption interval is usually long, \eg 1 millisecond, the performance cost is negligible.
Preemption is normally handled by setting a count-down timer on each virtual processor.
When the timer expires, an interrupt is delivered, and the interrupt handler resets the count-down timer; if the virtual processor is executing in user code, the signal handler performs a user-level context-switch, and if executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
Multiple signal handlers may be pending.
When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal was delivered.
The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler; therefore, all virtual processors in a cluster need to have the same signal mask.

However, on UNIX systems:
\begin{quote}
A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
SIGNAL(7) - Linux Programmer's Manual
\end{quote}
For the sake of simplicity, and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one.

Now because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread.
Hence, involuntary context-switching is done by sending signal {\tt SIGUSR1} to the corresponding proces\-sor and having the thread yield from inside the signal handler.
This approach effectively context-switches away from the signal handler back to the kernel and the signal handler frame is eventually unwound when the thread is scheduled again.
As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread).
It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another.
This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' routines from other routines.}.
However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks.
For this reason, the alarm thread is in a tight loop around a system call to @sigwaitinfo@, requiring very little CPU time for preemption.
One final detail about the alarm thread is how to wake it when additional communication is required (\eg on thread termination).
One final detail about the alarm thread is how to wake it when additional communication is required (\eg on thread termination).
This unblocking is also done using {\tt SIGALRM}, but sent through @pthread_sigqueue@.
Indeed, @sigwaitinfo@ can differentiate signals sent from @pthread_sigqueue@ from signals sent by alarms or the kernel.

\subsection{Scheduler}
Finally, an aspect not yet mentioned is the scheduling algorithm.
Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling.
Further discussion on scheduling is presented in section \ref{futur:sched}.

\section{Internal Scheduling} \label{impl:intsched}

The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience):

\begin{figure}
\begin{center}
{\resizebox{0.4\textwidth}{!}{\input{monitor.pstex_t}}}
\end{center}
\caption{Traditional illustration of a monitor}
\end{figure}

This picture has several components, the two most important being the entry queue and the AS-stack.
The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor/signaller (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next.

For \CFA, this picture does not have support for blocking multiple monitors on a single condition.
To support bulk acquire, two changes to this picture are required.
First, it is no longer helpful to attach the condition to \emph{a single} monitor.
Second, the thread waiting on the condition has to be separated across multiple monitors, as seen in figure \ref{fig:monitor_cfa}.

\begin{figure}
\begin{center}
{\resizebox{0.8\textwidth}{!}{\input{int_monitor}}}
\end{center}
\caption{Illustration of \CFA Monitor}
\label{fig:monitor_cfa}
\end{figure}

This picture and the proper entry and leave algorithms (see listing \ref{f:entry2}) are the fundamental implementation of internal scheduling.
Note that when a thread is moved from the condition to the AS-stack, it is conceptually split into N pieces, where N is the number of monitors specified in the parameter list.
The thread is woken up when all the pieces have been popped from the AS-stacks and made active.
In this picture, the threads are split into halves only because there are two monitors;
for a given signalling operation, every monitor involved needs a piece of the thread on its AS-stack.

\begin{figure}
\begin{multicols}{2}
Entry
\begin{cfa}
if monitor is free
	enter
elif already own the monitor
	continue
else
	block
increment recursion
\end{cfa}
\columnbreak
Exit
\begin{cfa}
decrement recursion
if recursion == 0
	if signal_stack not empty
		set_owner to thread
		if all monitors ready
			wake-up thread
	if entry queue not empty
		wake-up thread
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Entry and exit routines for monitors with internal scheduling},label={f:entry2}]
\end{cfa}
\end{figure}

The solution discussed in \ref{s:InternalScheduling} can be seen in the exit routine of listing \ref{f:entry2}.
Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership.
This solution is deadlock safe and prevents any potential barging.
The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the @wait@ and @signal_block@ routines.
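These structures might take the following shape;
the type and field names below are hypothetical, but the layout follows the description above:
\begin{cfa}
#include <stdbool.h>

struct monitor_desc;						// hypothetical names for the
struct thread_desc;							//   runtime monitor/thread types

struct condition_criterion {
	struct monitor_desc * target;			// monitor this criterion waits on
	struct condition_node * owner;			// waiting thread this criterion belongs to
	bool ready;								// popped from the target's AS-stack yet?
};

struct condition_node {
	struct thread_desc * waiting_thread;	// thread blocked on the condition
	struct condition_criterion * criteria;	// one criterion per monitor; allocated as a
	unsigned short count;					//   variable-length array on the caller's stack
	struct condition_node * next;			// intrusive link for the condition queue
};
\end{cfa}
When the last criterion of a node is marked ready, the owning thread is unblocked, matching the exit routine of listing \ref{f:entry2}.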
\begin{figure}
\begin{center}
{\resizebox{0.8\textwidth}{!}{\input{monitor_structs.pstex_t}}}
\end{center}
\caption{Data structures involved in internal/external scheduling}
\label{fig:structs}
\end{figure}

Figure \ref{fig:structs} shows a high-level representation of these data structures.
The main idea behind them is that a thread cannot contain an arbitrary number of intrusive ``next'' pointers for linking onto monitors, so the per-monitor links are allocated separately.
The @condition node@ is the data structure that is queued onto a condition variable;
when signalled, the condition queue is popped and each @condition criterion@ is moved to the AS-stack of its monitor.
Once all the criteria have been popped from their respective AS-stacks, the thread is woken up, which is what is shown in listing \ref{f:entry2}.

% ======================================================================
% ======================================================================
\section{External Scheduling}
% ======================================================================
% ======================================================================
Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}.
For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (\ie no signal statement uses multiple conditions).
However, in the case of external scheduling, there is no equivalent object associated with @waitfor@ statements.
This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired;
these monitors are the only objects with sufficient lifetime that are available on both sides of the @waitfor@ statement.
This requires an algorithm to choose which monitor holds the relevant queue, and it is important that this algorithm be independent of the order in which users list parameters.
The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue.
This assumes the lock-acquisition order is static for the lifetime of all concerned objects, but that is a reasonable constraint.
This algorithm choice has two consequences:
\begin{itemize}
\item
The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue.
These queues need to contain a set of monitors for each of the waiting threads.
Therefore, another thread whose set contains the same lowest-address monitor but different lower-priority monitors may arrive first but enter the critical section after a thread with the correct pairing.
\item
The queue of the lowest-priority monitor is both required and potentially unused.
Since it is not known at compile time which monitor has the lowest address, every monitor needs the correct queues even though some queues may go unused for the entire duration of the program, for example if a monitor is only used in a specific pair.
\end{itemize}
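The address-based ordering underlying this choice reduces to a simple sort of the monitor set;
@qsort@ is standard C, while the monitor type and routine names below are hypothetical:
\begin{cfa}
#include <stdint.h>
#include <stdlib.h>

struct monitor_desc;					// hypothetical monitor runtime type

static int by_address( const void * lhs, const void * rhs ) {
	uintptr_t l = (uintptr_t)*(struct monitor_desc * const *)lhs;
	uintptr_t r = (uintptr_t)*(struct monitor_desc * const *)rhs;
	return (l > r) - (l < r);			// total order on monitor addresses
}

// sort once before acquiring; index 0 then identifies the monitor
// whose queue holds the waiting threads for a given waitfor statement
void sort_monitors( struct monitor_desc * monitors[], size_t count ) {
	qsort( monitors, count, sizeof(struct monitor_desc *), by_address );
}
\end{cfa}
Because every thread sorts the same monitor set identically, the lowest-address monitor is a stable choice regardless of the order in which users list parameters.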
Therefore, the following modifications need to be made to support external scheduling:
\begin{itemize}
\item
The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and with which set of monitors.
The @mutex@ routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
\item
The monitors need to keep a mask of acceptable routines.
For each acceptable routine, this mask contains a routine pointer and an array of monitors associated with it.
It also needs storage to keep track of which routine was accepted.
Since this information is not specific to any one monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread.
Note that if a thread has acquired two monitors but executes a @waitfor@ with only one monitor as a parameter, setting the mask of acceptable routines to both monitors does not cause any problems, since the extra monitor does not change ownership regardless.
This becomes relevant when @when@ clauses affect the number of monitors passed to a @waitfor@ statement.
\item
The entry/exit routines need to be updated as shown in listing \ref{f:entry3}.
\end{itemize}

\subsection{External Scheduling - Destructors}
Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine.
This routine is needed because of the storage requirements of the call-order inversion:
when waiting for a destructor, storage is needed for the waiting context, and the lifetime of this storage needs to outlive the waiting operation it supports.
For regular @waitfor@ statements, the call stack of the routine itself matches this requirement, but this is no longer the case when waiting for a destructor, since the waiting context is pushed onto the AS-stack for later.
The @waitfor@ semantics can then be adjusted correspondingly, as seen in listing \ref{f:entry-dtor}.

\begin{figure}
\begin{multicols}{2}
Entry
\begin{cfa}
if monitor is free
	enter
elif already own the monitor
	continue
elif matches waitfor mask
	push criteria to AS-stack
	continue
else
	block
increment recursion
\end{cfa}
\columnbreak
Exit
\begin{cfa}
decrement recursion
if recursion == 0
	if signal_stack not empty
		set_owner to thread
		if all monitors ready
			wake-up thread
	if entry queue not empty
		wake-up thread
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Entry and exit routines for monitors with internal and external scheduling},label={f:entry3}]
\end{cfa}
\end{figure}

\begin{figure}
\begin{multicols}{2}
Destructor Entry
\begin{cfa}
if monitor is free
	enter
elif already own the monitor
	increment recursion
	return
create wait context
if matches waitfor mask
	reset mask
	push self to AS-stack
	baton pass
else
	wait
increment recursion
\end{cfa}
\columnbreak
Waitfor
\begin{cfa}
if matching thread is already there
	if found destructor
		push destructor to AS-stack
		unlock all monitors
	else
		push self to AS-stack
		baton pass
	return
if non-blocking
	unlock all monitors
	return
push self to AS-stack
set waitfor mask
block
return
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Pseudo code for the \protect\lstinline|waitfor| routine and the \protect\lstinline|mutex| entry routine for destructors},label={f:entry-dtor}]
\end{cfa}
\end{figure}

% ======================================================================
% ======================================================================
\section{Putting It All Together}
% ======================================================================
% ======================================================================

\section{Threads As Monitors}
As subtly alluded to in section \ref{threads}, @thread@s in \CFA are in fact
monitors, which means that all monitor features are available when using threads.
For example, here is a very simple two-thread pipeline that could be used for a simulator of a game engine:

\begin{figure}
\begin{cfa}[caption={Toy simulator using \protect\lstinline|thread|s and \protect\lstinline|monitor|s.},label={f:engine-v1}]
// Visualization declaration
thread Renderer {} renderer;
Frame * simulate( Simulator & this );

// Simulation declaration
thread Simulator{} simulator;
void render( Renderer & this );

// Blocking call used as communication
void draw( Renderer & mutex this, Frame * frame );

// Simulation loop
void main( Simulator & this ) {
	while( true ) {
		Frame * frame = simulate( this );
		draw( renderer, frame );
	}
}

// Rendering loop
void main( Renderer & this ) {
	while( true ) {
		waitfor( draw, this );
		render( this );
	}
}
\end{cfa}
\end{figure}

One of the obvious complaints about the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever.
Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner:

\begin{figure}
\begin{cfa}[caption={Same toy simulator with proper termination condition.},label={f:engine-v2}]
// Visualization declaration
thread Renderer {} renderer;
Frame * simulate( Simulator & this );

// Simulation declaration
thread Simulator{} simulator;
void render( Renderer & this );

// Blocking call used as communication
void draw( Renderer & mutex this, Frame * frame );

// Simulation loop
void main( Simulator & this ) {
	while( true ) {
		Frame * frame = simulate( this );
		draw( renderer, frame );
		// Exit main loop after the last frame
		if( frame->is_last ) break;
	}
}

// Rendering loop
void main( Renderer & this ) {
	while( true ) {
		waitfor( draw, this );
		or waitfor( ^?{}, this ) {
			// Add an exit condition
			break;
		}
		render( this );
	}
}

// Call destructor for simulator once simulator finishes
// Call destructor for renderer to signify shutdown
\end{cfa}
\end{figure}

\section{Fibers \& Threads}
As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand.
Currently, using fibers is done by adding the following line of code to the program:
\begin{cfa}
unsigned int default_preemption() {
	return 0;
}
\end{cfa}
This routine is called by the runtime kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, \ie no preemption.
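With preemption disabled, scheduling is purely cooperative, so every fiber must yield or block regularly;
the following minimal sketch illustrates the consequence (the @Worker@ thread is hypothetical, and @yield@ is assumed to be the runtime's explicit reschedule routine):
\begin{cfa}
unsigned int default_preemption() { return 0; }	// run every thread as a fiber

thread Worker {};
void main( Worker & this ) {
	for ( unsigned int i = 0; i < 100; i += 1 ) {
		// ... one slice of work ...
		yield();	// without an explicit yield (or blocking call), other fibers starve
	}
}
\end{cfa}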
However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread}s in the same system, as in listing \ref{f:fiber-uthread}.

\begin{figure}
\lstset{language=CFA,deletedelim=**[is][]{}{}}
\begin{cfa}[caption={Using fibers and \textbf{uthread}s side-by-side in \CFA},label={f:fiber-uthread}]
// Cluster forward declaration
struct cluster;

// Processor forward declaration
struct processor;

// Construct clusters with a preemption rate
void ?{}(cluster& this, unsigned int rate);
// Construct processor and add it to cluster
void ?{}(processor& this, cluster& cluster);
// Construct thread and schedule it on cluster
void ?{}(thread& this, cluster& cluster);

// Declare two clusters
cluster thread_cluster = { 10ms };		// Preempt every 10 ms
cluster fibers_cluster = { 0 };			// Never preempt

// Construct 4 processors
processor processors[4] = {
	// 2 for the thread cluster
	thread_cluster;
	thread_cluster;
	// 2 for the fibers cluster
	fibers_cluster;
	fibers_cluster;
};

// Declares thread
thread UThread {};
void ?{}(UThread& this) {
	// Construct underlying thread to automatically
	// be scheduled on the thread cluster
	(this){ thread_cluster };
}
void main(UThread & this);

// Declares fibers
thread Fiber {};
void ?{}(Fiber& this) {
	// Construct underlying thread to automatically
	// be scheduled on the fiber cluster
	(this.__thread){ fibers_cluster };
}
void main(Fiber & this);
\end{cfa}
\end{figure}

% ======================================================================
% ======================================================================
\section{Performance Results}
% ======================================================================
% ======================================================================
\section{Machine Setup}
Table \ref{t:machine} shows the characteristics of the machine used to run the benchmarks;
all tests were made on this machine.

Hence, the timer-expiry signal, which is generated \emph{externally} by the UNIX kernel to the UNIX process, is delivered to any of its UNIX subprocesses (kernel threads).
To ensure each virtual processor receives its own preemption signals, a discrete-event simulation is run on one virtual processor, and only it sets timer events.
Virtual processors register an expiration time with the discrete-event simulator, which is inserted in sorted order.
The simulation sets the count-down timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed.
Processing a preemption event sends an \emph{internal} @SIGUSR1@ signal to the registered virtual processor, which is always delivered to that processor.

\section{Performance} \label{results}
To verify the implementation of the \CFA runtime, a series of microbenchmarks is performed comparing \CFA with other widely used programming languages with concurrency.
Table~\ref{t:machine} shows the specifications of the computer used to run the benchmarks, and the versions of the software used in the comparison.
\begin{table}[h]
\centering
\caption{Experiment environment}
\label{t:machine}
\begin{tabular}{|l|r||l|r|}
\hline
Architecture		& x86\_64				& NUMA node(s)	& 8 \\
\hline
CPU op-mode(s)		& 32-bit, 64-bit		& Model name	& AMD Opteron\texttrademark\ Processor 6380 \\
\hline
Byte Order			& Little Endian			& CPU Freq		& 2.5 GHz \\
\hline
CPU(s)				& 64					& L1d cache		& 16 KiB \\
\hline
Thread(s) per core	& 2						& L1i cache		& 64 KiB \\
\hline
Core(s) per socket	& 8						& L2 cache		& 2048 KiB \\
\hline
Socket(s)			& 4						& L3 cache		& 6144 KiB \\
\hline
\hline
Operating system	& Ubuntu 16.04.3 LTS	& Kernel		& Linux 4.4-97-generic \\
\hline
gcc					& 6.3					& \CFA			& 1.0.0 \\
\hline
Java				& OpenJDK-9				& Go			& 1.9.2 \\
\hline
\end{tabular}
\end{table}

\section{Micro Benchmarks}
All benchmarks are run using the same harness to produce the results, seen as the @BENCH()@ macro in the following examples.
This macro uses the following logic to benchmark the code:
\begin{cfa}
#define BENCH(run, result) \
	before = gettime(); \
	run; \
	after  = gettime(); \
	result = (after - before) / N;
\end{cfa}
The method used to get time is @clock_gettime(CLOCK_THREAD_CPUTIME_ID);@.
Each benchmark runs many iterations of a simple call to measure the cost of that call;
the specific number of iterations depends on the benchmark.

\subsection{Context-Switching}
The first interesting benchmark is to measure how long context-switches take.
The simplest approach is to yield on a thread, which executes a 2-step context switch.
Yielding causes the thread to context-switch to the scheduler and back;
more precisely, from the \textbf{uthread} to the \textbf{kthread} and then from the \textbf{kthread} back to the same \textbf{uthread} (or a different one in the general case).
In order to make the comparison fair, coroutines also execute a 2-step context-switch, by resuming another coroutine which does nothing but suspend in a tight loop, \ie a resume/suspend cycle instead of a yield.
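For reference, the thread side of this benchmark reduces to a loop of the following shape;
this is a sketch assuming the @BENCH@ harness above, with @yield@ being the runtime's thread-yield routine and the harness variables declared as in the surrounding snippets:
\begin{cfa}
int main() {
	// before, after, result, and N are declared as in the harness above
	BENCH(
		for ( size_t i = 0; i < N; i++ ) { yield(); },	// uthread -> kthread -> uthread
		result
	)
	printf( "%llu\n", result );		// average cost of one 2-step thread context-switch
}
\end{cfa}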
Figure~\ref{f:ctx-switch} shows the code for coroutines and threads, with the results in Table~\ref{tab:ctx-switch}.
All omitted tests are functionally identical to one of these tests.
The difference between coroutines and threads can be attributed to the cost of scheduling.

All benchmarks are run using the following harness:
\begin{cfa}
unsigned int N = 10_000_000;
#define BENCH( run, result ) \
	Time before = getTimeNsec(); \
	run; \
	result = (getTimeNsec() - before) / N;
\end{cfa}
The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
the total time is divided by @N@ to obtain the average time for a benchmark.

\paragraph{Context-Switching}

In procedural programming, the cost of a routine call is important as modularization (refactoring) increases.
(In many cases, a compiler inlines routine calls to eliminate this cost.)
Similarly, when modularization extends to coroutines/tasks, the time for a context switch becomes a relevant factor.
The coroutine context-switch is 2-step using resume/suspend, \ie from resumer to suspender and from suspender to resumer.
The thread context-switch is 2-step using yield, \ie enter and return from the runtime kernel.
Figure~\ref{f:ctx-switch} shows the code for coroutines/threads, with all results in Table~\ref{tab:ctx-switch}.
All omitted tests for other languages are functionally identical to this test (as for all other tests).
The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.

\begin{figure}
\begin{multicols}{2}
\CFA Coroutines
\begin{cfa}
coroutine GreatSuspender {};
void main(GreatSuspender& this) {
	while(true) { suspend(); }
}
\end{cfa}
\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{}{}}
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
coroutine C {} c;
void main( C & ) { for ( ;; ) { @suspend();@ } }
int main() {
	GreatSuspender s;
	resume(s);
	Duration result;
	BENCH( for(size_t i=0; i