# Changeset 9a72c4d

Timestamp:
Jun 4, 2018, 6:24:01 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, with_gc
Children:
6e3eaa57
Parents:
f77dbc0
Message:

Location:
doc/papers
Files:
2 edited

\CFA is an extension of ISO-C, and hence, supports all C paradigms.
%It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily.
Like C, the basics of \CFA revolve around structures and routines.
Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
While \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
\begin{cfa}
'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$
\end{cfa}
and may appear as the body of a routine or nested within a routine body.
Each expression in the expression-list provides a type and object.
The type must be an aggregate type.
\CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem.
Both variables and routines may be overloaded, where selection is based on types, and number of returns (as in Ada~\cite{Ada}) and arguments.
\begin{cquote}
\vspace*{-\baselineskip}%???
Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions to prevent name clashes.
As seen in Section~\ref{basics}, routine @main@ is heavily overloaded.
Variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
Overloading also extends to operators.
Operator-overloading syntax creates a routine name with an operator symbol and question marks for the operands:
\begin{cquote}
\lstDeleteShortInline@%
\label{s:ParametricPolymorphism}
The signature feature of \CFA is parametric-polymorphic routines~\cite{} with routines generalized using a @forall@ clause (giving the language its name), which allow separately compiled routines to support generic usage over multiple types.
For example, the following sum routine works for any type that supports construction from 0 and addition:
\begin{cfa}
forall( otype T | { void ?{}( T *, zero_t ); T ?+?( T, T ); } ) // constraint type, 0 and +
\end{cfa}
\CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each routine declaration:
\begin{cfa}
trait sumable( otype T ) {
\end{cfa}
\begin{cfa}
coroutine Fib { int fn; };
\end{cfa}
which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, @next@.
Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has the three suspend points, representing the three states in the Fibonacci formula, to context switch back to the caller's resume.
The interface routine, @next@, takes a Fibonacci instance and context switches to it using @resume@;
on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
The first @resume@ is special because it cocalls the coroutine at its coroutine main and allocates the stack;
\end{figure}
The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming routine for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal routines.
However, there is no stack growth because @resume@/@suspend@ context switch to existing stack-frames rather than create new ones.
\newterm{Symmetric (full) coroutine}s have a coroutine call a resuming routine for another coroutine, which eventually forms a resuming-call cycle.
(The trivial cycle is a coroutine resuming itself.)
This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
The @start@ routine communicates both the number of elements to be produced and the consumer into the producer's coroutine structure.
Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
@prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
However, there is nothing preventing wrong placement or multiple declarations.
For coroutines as for threads, many implementations are based on routine pointers or routine objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
For example, Boost implements coroutines in terms of four functor object-types:
\begin{cfa}
symmetric_coroutine<>::yield_type
\end{cfa}
Similarly, the canonical threading paradigm is often based on routine pointers, \eg @pthread@~\cite{pthreads}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type.
Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.
Note, the type @coroutine_t@ must be an abstract handle to the coroutine, because the coroutine descriptor and its stack are non-copyable.
Copying the coroutine descriptor results in copies being out of date with the current state of the stack.
Correspondingly, copying the stack results in copies being out of date with the coroutine descriptor, and pointers in the stack being out of date to data on the stack.
(There is no mechanism in C to find all stack-specific pointers and update them as part of a copy.)
The selected approach is to use language support by introducing a new kind of aggregate (structure):
The reserved keyword eases use for the common cases.
Part of the mechanism to generalize coroutines is using a \CFA trait, which defines a coroutine as anything satisfying the trait @is_coroutine@, and this trait is used to restrict coroutine-manipulation routines:
\begin{cfa}
trait is_coroutine( dtype T ) {
	void main( T & );
	coroutine_desc * get_coroutine( T & );
};
forall( dtype T | is_coroutine(T) ) void get_coroutine( T & );
forall( dtype T | is_coroutine(T) ) void suspend( T & );
forall( dtype T | is_coroutine(T) ) void resume( T & );
\end{cfa}
The @dtype@ property of the trait ensures the coroutine descriptor is non-copyable, so all coroutines must be passed by reference (pointer).
The routine definitions ensure there is a statically-typed @main@ routine that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle.
The @main@ routine has no return value or additional parameters because the coroutine type allows an arbitrary number of interface routines with corresponding arbitrary typed input/output values versus fixed ones.
The generic routines @suspend@ and @resume@ can be redefined, but any object passed to them is a coroutine since it must satisfy the @is_coroutine@ trait to compile.
The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ routine, and possibly redefining @suspend@ and @resume@.
The \CFA keyword @coroutine@ implicitly implements the getter and forward declarations required for implementing the coroutine main:
\begin{cquote}
Like coroutines and for the same design reasons, the selected approach for user threads is to use language support by introducing a new kind of aggregate (structure) and a \CFA trait:
\begin{cquote}
\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
\begin{cfa}
thread myThread {
\end{cfa}
&
\begin{cfa}
trait is_thread( dtype T ) {
	void main( T & );
	thread_desc * get_thread( T & );
	void ^?{}( T & mutex );
};
\end{cfa}
\end{tabular}
\end{cquote}
(The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitors}.)
Like a coroutine, the statically-typed @main@ routine is the starting point (first stack frame) of a user thread.
The difference is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates an instance of @main@;
whereas, a user thread receives its own thread from the runtime system, which starts in @main@ at some point after the thread constructor is run.\footnote{
The \lstinline@main@ routine is already a special routine in C (where the program begins), so it is a natural extension of the semantics to use overloading to declare mains for different coroutines/threads (the normal main being the main of the initial thread).}
No return value or additional parameters are necessary for this routine because the task type allows an arbitrary number of interface routines with corresponding arbitrary typed input/output values.
\begin{comment}
% put in appendix with coroutine version ???
In this example, threads of type @foo@ start execution in the @void main(foo &)@ routine, which prints @"Hello World!".@
While this paper encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity.
With the static semantics it is trivial to write a thread type that takes a routine pointer as a parameter and executes it on its stack asynchronously.
\begin{cfa}
typedef void (*voidRtn)(int);
thread RtnRunner {
	voidRtn func;
	int arg;
};
void ?{}(RtnRunner & this, voidRtn inRtn, int arg) {
	this.func = inRtn;
	this.arg  = arg;
}
void main(RtnRunner & this) {		// thread starts here and runs the routine
	this.func( this.arg );
}
int main() {
	RtnRunner f = {hello, 42};
	return 0;
}
\end{cfa}
\end{comment}
\section{Mutual Exclusion / Synchronization}
Uncontrolled non-deterministic execution is meaningless.
To reestablish meaningful execution requires mechanisms to reintroduce determinism (\ie restrict non-determinism), called mutual exclusion and synchronization, where mutual exclusion is an access-control mechanism on data shared by threads, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it, \eg Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala).
In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts (\eg channels~\cite{CSP,Go}).
However, in call/return-based languages, these approaches force a clear distinction (\ie introduce a new programming paradigm) between regular and concurrent computation (\ie routine call versus message passing).
Hence, a programmer must learn and manipulate two sets of design patterns.
While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
In contrast, approaches based on stateful models more closely resemble the standard call/return programming-model, resulting in a single programming paradigm.
At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it was rejected as the core paradigm for concurrency in \CFA.
One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming-languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to simulate monitors.
For these reasons, \CFA selected monitors as the core high-level concurrency-construct, upon which higher-level approaches can be easily constructed.
A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers have the same session and all writers have a unique session.
\newterm{Mutual exclusion} enforces that the correct kind and number of threads are using a critical section.
However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
Ease of use comes by either guaranteeing some problems cannot occur (\eg deadlock free), or by offering a more explicit coupling between shared data and critical section.
For example, the \CC @std::atomic@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing) for numerical types.
However, a significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}.
Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use;
higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (\eg message passing), or offering a simpler solution to otherwise involved challenges, \eg barrier lock.
Often synchronization is used to order access to a critical section, \eg ensuring a reader thread is the next kind of thread to enter a critical section.
If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, that reader has \newterm{barged}.
Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value preventing the previous value from ever being read (lost computation).
Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
This challenge is often split into two different approaches: barging avoidance and barging prevention.
Algorithms that allow a barger, but divert it until later using current synchronization state (flags), are avoiding the barger;
algorithms that preclude a barger from entering during synchronization in the critical section prevent barging completely.
Techniques like baton-pass locks~\cite{Andrews89} between threads instead of unconditionally releasing locks are an example of barging prevention.
\label{s:Monitors}
The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it: \begin{cfa} typedef /*some monitor type*/ monitor; int f(monitor & m); int main() { monitor m;  // Handle m f(m);       // Routine using handle } \end{cfa} % ====================================================================== % ====================================================================== \subsection{Call Semantics} \label{call} % ====================================================================== % ====================================================================== The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important, because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are non-copy-able objects (@dtype@). Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Passthrough can occur for generic helper routines (@swap@, @sort@, \etc) or specific helper routines like the following to implement an atomic counter: \begin{cfa} monitor counter_t { /*...see section $\ref{data}$...*/ }; void ?{}(counter_t & nomutex this); // constructor size_t ++?(counter_t & mutex this); // increment // need for mutex is platform dependent void ?{}(size_t * this, counter_t & mutex cnt); // conversion \end{cfa} This counter is used as follows: \begin{center} \begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c} \begin{cfa} // shared counter counter_t cnt1, cnt2; // multiple threads access counter thread 1 : cnt1++; cnt2++; thread 2 : cnt1++; cnt2++; thread 3 : cnt1++; cnt2++; ... 
thread N : cnt1++; cnt2++; \end{cfa} \end{tabular} \end{center} Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing, which is similar in usage to the \CC template @std::atomic@. Here, the constructor (@?{}@) uses the @nomutex@ keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion. Furthermore, it allows the implementation greater freedom when it initializes the monitor locking. The prefix increment operator uses @mutex@ to protect the incrementing process from race conditions. Finally, there is a conversion operator from @counter_t@ to @size_t@. This conversion may or may not require the @mutex@ keyword depending on whether or not reading a @size_t@ is an atomic operation. For maximum usability, monitors use \textbf{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, listing \ref{fig:search} uses recursion and \textbf{multi-acq} to print values inside a binary tree. \begin{figure} \begin{cfa}[caption={Recursive printing algorithm using \textbf{multi-acq}.},label={fig:search}] monitor printer { ... }; struct tree { tree * left, right; char * value; A \textbf{monitor} is a set of routines that ensure mutual exclusion when accessing shared state. More precisely, a monitor is a programming technique that binds mutual exclusion to routine scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation). The strong association with the call/return paradigm eases programmability, readability and maintainability, at a slight cost in flexibility and efficiency. 
Note, like coroutines/threads, both locks and monitors require an abstract handle to reference them, because at their core, both mechanisms are manipulating non-copyable shared state. Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data. Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity. As for coroutines/tasks, a non-copyable (@dtype@) trait is used to capture this requirement, so all locks/monitors must be passed by reference (pointer). \begin{cfa} trait is_monitor( dtype T ) { monitor_desc * get_monitor( T & ); void ^?{}( T & mutex ); }; void print(printer & mutex p, char * v); void print(printer & mutex p, tree * t) { print(p, t->value); print(p, t->left ); print(p, t->right); } \end{cfa} \end{figure} Having both @mutex@ and @nomutex@ keywords can be redundant, depending on the meaning of a routine having neither of these keywords. For example, it is reasonable that it should default to the safest option (@mutex@) when given a routine without qualifiers @void foo(counter_t & this)@, whereas assuming @nomutex@ is unsafe and may cause subtle errors. On the other hand, @nomutex@ is the normal'' parameter behaviour, it effectively states explicitly that this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. 
\end{cfa} \subsection{Mutex Acquisition} \label{s:MutexAcquisition} While correctness implicitly implies a monitor's mutual exclusion is acquired and released, there are implementation options about when and where the locking/unlocking occurs. (Much of this discussion also applies to basic locks.) For example, a monitor may need to be passed through multiple helper routines before it becomes necessary to acquire the monitor mutual-exclusion. \begin{cfa}[morekeywords=nomutex] monitor Aint { int cnt; };                                      $\C{// atomic integer counter}$ void ?{}( Aint & nomutex this ) with( this ) { cnt = 0; } $\C{// constructor}$ int ?=?( Aint & mutex$_{opt}$ lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions}$ void ?{}( int & this, Aint & mutex$_{opt}$ v ) { this = v.cnt; } int ?=?( int & lhs, Aint & mutex$_{opt}$ rhs ) with( rhs ) { lhs = cnt; } int ++?( Aint & mutex$_{opt}$ this ) with( this ) { return ++cnt; } $\C{// increment}$ \end{cfa} The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized. (While a constructor may publish its address into a global variable, doing so generates a race-condition.) The conversion operators for initializing and assigning with a normal integer only need @mutex@, if reading/writing the implementation type is not atomic. Finally, the prefix increment operator, @++?@, is normally @mutex@ to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type. The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@.
\begin{cfa} Aint x, y, z; ++x; ++y; ++z;                                                          $\C{// safe increment by multiple threads}$ x = 2; y = 2; z = 2;                                            $\C{// conversions}$ int i = x, j = y, k = z; i = x; j = y; k = z; \end{cfa} For maximum usability, monitors have \newterm{multi-acquire} semantics allowing a thread to acquire the same monitor multiple times without deadlock. For example, atomically printing the contents of a binary tree: \begin{cfa} monitor Tree { Tree * left, right; // value }; void print( Tree & mutex tree ) {                       $\C{// prefix traversal}$ // write value print( tree->left );                                    $\C{// multiply acquire monitor lock on each recursion}$ print( tree->right ); } \end{cfa} Mandatory monitor qualifiers have the benefit of being self-documented, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant. Instead, one of the qualifier semantics can be the default, and the other required. For example, assume the safe @mutex@ option for a monitor parameter because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors. On the other hand, assuming \lstinline[morekeywords=nomutex]@nomutex@ is the \emph{normal} parameter behaviour, stating explicitly this parameter is not ``special''. Providing a default qualifier implies knowing whether a parameter is a monitor. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the @mutex@ keyword and uses no keyword to mean @nomutex@. The next semantic decision is to establish when @mutex@ may be used as a type qualifier.
Consider the following declarations: \begin{cfa} int f1(monitor & mutex m); int f2(const monitor & mutex m); int f3(monitor ** mutex m); int f4(monitor * mutex m []); int f5(graph(monitor *) & mutex m); \end{cfa} The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like @f1@ and @f2@ it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (@f3@) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (@f4@) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes non-trivial. This problem can be extended to absurd limits like @f5@, which uses a graph of monitors. To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers). Also note that while routine @f3@ can be supported, meaning that monitor @**m@ is acquired, passing an array to this routine would be type-safe and yet result in undefined behaviour because only the first element of the array is acquired. However, this ambiguity is part of the C type-system with respect to arrays. For this reason, @mutex@ is disallowed in the context where arrays may be passed: \begin{cfa} int f1(monitor & mutex m);    // Okay : recommended case int f2(monitor * mutex m);    // Not Okay : Could be an array int f3(monitor mutex m []);  // Not Okay : Array of unknown length int f4(monitor ** mutex m);   // Not Okay : Could be an array int f5(monitor * mutex m []); // Not Okay : Array of unknown length \end{cfa} Note that not all array functions are actually distinct in the type system.
However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic. Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of the receiver object, \CFA uses an explicit mechanism to specify the object that acquires mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls. \begin{cfa} int f(MonitorA & mutex a, MonitorB & mutex b); MonitorA a; MonitorB b; f(a,b); \end{cfa} While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capability to acquire multiple locks before entering a critical section is called \emph{\textbf{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@. The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@. Given: \begin{cfa} monitor M { ... }; int f1( M & mutex m ); int f2( M * mutex m ); int f3( M * mutex m[] ); int f4( stack( M * ) & mutex m ); \end{cfa} the issue is that some of these parameter types are composed of multiple objects. For @f1@, there is only a single parameter object. Adding indirection in @f2@ still identifies a single object. However, the array in @f3@ introduces multiple objects. While shown shortly, multiple acquisition is possible; however, array lengths are often unknown in C. This issue is exacerbated in @f4@, where the data structure must be safely traversed to acquire all of its elements. To make the issue tractable, \CFA only acquires one monitor per parameter with at most one level of indirection. However, the C type-system has an ambiguity with respect to arrays.
Is the argument for @f2@ a single object or an array of objects? If it is an array, only the first element of the array is acquired, which seems unsafe; hence, @mutex@ is disallowed for array parameters. \begin{cfa} int f1( M & mutex m );                                          $\C{// allowed: recommended case}$ int f2( M * mutex m );                                          $\C{// disallowed: could be an array}$ int f3( M mutex m[$\,$] );                                      $\C{// disallowed: array length unknown}$ int f4( M ** mutex m );                                         $\C{// disallowed: could be an array}$ int f5( M * mutex m[$\,$] );                            $\C{// disallowed: array length unknown}$ \end{cfa} % Note, not all array routines have distinct types: @f2@ and @f3@ have the same type, as do @f4@ and @f5@. % However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic. For object-oriented monitors, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @rec.foo(...)@. \CFA has no receiver, and hence, must use an explicit mechanism to specify which object has mutual exclusion acquired. A positive consequence of this design decision is the ability to support multi-monitor routines. \begin{cfa} int f( M & mutex x, M & mutex y );              $\C{// multiple monitor parameter of any type}$ M m1, m2; f( m1, m2 ); \end{cfa} (While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.) In practice, writing multi-locking routines that do not deadlock is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments.
This consistent ordering means acquiring multiple monitors is safe from deadlock when using \textbf{bulk-acq}. However, users can still force the acquiring order. For example, notice which routines use @mutex@/@nomutex@ and how this affects acquiring order: \begin{cfa} void foo(A& mutex a, B& mutex b) { // acquire a & b ... } void bar(A& mutex a, B& /*nomutex*/ b) { // acquire a ... foo(a, b); ... // acquire b } void baz(A& /*nomutex*/ a, B& mutex b) { // acquire b ... foo(a, b); ... // acquire a } \end{cfa} The \textbf{multi-acq} monitor lock allows a monitor lock to be acquired by either @bar@ or @baz@ and acquired again in @foo@. In the calls to @bar@ and @baz@ the monitors are acquired in opposite order. However, such use leads to lock acquiring order problems. In the example above, the user uses implicit ordering in the case of function @foo@ but explicit ordering in the case of @bar@ and @baz@. This subtle difference means that calling these routines concurrently may lead to deadlock and is therefore undefined behaviour. As shown~\cite{Lister77}, solving this problem requires: \begin{enumerate} \item Dynamically tracking the monitor-call order. \item Implementing rollback semantics. \end{enumerate} While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex~\cite{Dice10}. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \textbf{bulk-acq} of all the monitors. While \CFA provides only a partial solution, most systems provide no solution and the \CFA partial solution handles many useful cases. For example, \textbf{multi-acq} and \textbf{bulk-acq} can be used together in interesting ways: \begin{cfa} monitor bank { ...
}; void deposit( bank & mutex b, int deposit ); void transfer( bank & mutex mybank, bank & mutex yourbank, int me2you) { deposit( mybank, -me2you ); deposit( yourbank, me2you ); The capability to acquire multiple locks before entering a critical section is called \newterm{bulk acquire}. In the previous example, \CFA guarantees the order of acquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors is safe from deadlock. However, users can force the acquiring order. For example, notice the use of @mutex@/\lstinline[morekeywords=nomutex]@nomutex@ and how this affects the acquiring order: \begin{cfa} void foo( M & mutex m1, M & mutex m2 );         $\C{// acquire m1 and m2}$ void bar( M & mutex m1, M & /* nomutex */ m2 ) { $\C{// acquire m1}$ ... foo( m1, m2 ); ...                                  $\C{// acquire m2}$ } void baz( M & /* nomutex */ m1, M & mutex m2 ) { $\C{// acquire m2}$ ... foo( m1, m2 ); ...                                  $\C{// acquire m1}$ } \end{cfa} The multi-acquire semantics allows @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@. In the calls to @bar@ and @baz@, the monitors are acquired in opposite order. However, such use leads to lock acquiring order problems resulting in deadlock~\cite{Lister77}, where detecting it requires dynamic tracking of monitor calls, and dealing with it requires implementing rollback semantics~\cite{Dice10}. In \CFA, safety is guaranteed by using bulk acquire of all monitors to shared objects, whereas other monitor systems provide no aid. While \CFA provides only a partial solution, the \CFA partial solution handles many useful cases. \begin{cfa} monitor Bank { ...
}; void deposit( Bank & mutex b, int deposit ); void transfer( Bank & mutex mybank, Bank & mutex yourbank, int me2you) { deposit( mybank, -me2you );                   $\C{// debit}$ deposit( yourbank, me2you );                    $\C{// credit}$ } \end{cfa} This example shows a trivial solution to the bank-account transfer problem~\cite{BankTransfer}. Without \textbf{multi-acq} and \textbf{bulk-acq}, the solution to this problem is much more involved and requires careful engineering. Without multi- and bulk acquire, the solution to this problem requires careful engineering. \subsection{\protect\lstinline|mutex| statement} \label{mutex-stmt} The call semantics discussed above have one software engineering issue: only a routine can acquire the mutual-exclusion of a set of monitors. \CFA offers the @mutex@ statement to work around the need for unnecessary names, avoiding a major software engineering problem~\cite{2FTwoHardThings}. Table \ref{f:mutex-stmt} shows an example of the @mutex@ statement, which introduces a new scope in which the mutual-exclusion of a set of monitors is acquired. Beyond naming, the @mutex@ statement has no semantic difference from a routine call with @mutex@ parameters. \begin{table} \begin{center} \begin{tabular}{|c|c|} function call & @mutex@ statement \\ \hline \begin{cfa}[tabsize=3] The monitor call-semantics associate all locking semantics to routines. Like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming. \begin{cquote} \begin{tabular}{@{}c|@{\hspace{\parindentlnth}}c@{}} routine call & @mutex@ statement \\ \begin{cfa} monitor M {}; void foo( M & mutex m1, M & mutex m2 ) { // critical section } void bar( M & m1, M & m2 ) { foo( m1, m2 ); } \end{cfa}&\begin{cfa}[tabsize=3] monitor M {}; \end{cfa} & \begin{cfa} void bar( M & m1, M & m2 ) { mutex(m1, m2) { mutex( m1, m2 ) {       // remove refactoring and naming // critical section } } \end{cfa} \end{tabular} \end{center} \caption{Regular call semantics vs.
\protect\lstinline|mutex| statement} \label{f:mutex-stmt} \end{table} % ====================================================================== % ====================================================================== \subsection{Data semantics} \label{data} % ====================================================================== % ====================================================================== Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle but in most cases monitors contain shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection. For example, here is a complete version of the counter shown in section \ref{call}: \begin{cfa} monitor counter_t { int value; }; void ?{}(counter_t & this) { this.value = 0; } int ?++(counter_t & mutex this) { return ++this.value; } // need for mutex is platform dependent here void ?{}(int * this, counter_t & mutex cnt) { *this = (int)cnt; } \end{cfa} Like threads and coroutines, monitors are defined in terms of traits with some additional language support in the form of the @monitor@ keyword. The monitor trait is: \begin{cfa} trait is_monitor(dtype T) { monitor_desc * get_monitor( T & ); void ^?{}( T & mutex ); }; \end{cfa} Note that the destructor of a monitor must be a @mutex@ routine to prevent deallocation while a thread is accessing the monitor. As with any object, calls to a monitor, using @mutex@ or otherwise, are undefined behaviour after the destructor has run.
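The idea of data being intrinsic to the monitor has a direct analogue in plain C. The following sketch (names hypothetical, using POSIX threads rather than \CFA monitors) bundles the lock with the data it protects, approximating the counter monitor above without the language support:

```c
#include <pthread.h>

// Hedged C analogy of the counter monitor: the lock and the data it
// protects live in one structure, so the data cannot easily be used
// without its protection.  In CFA, the lock is implicit in the monitor.
typedef struct {
	pthread_mutex_t lock;                // implicit in a CFA monitor
	int value;
} Counter;

void counter_init(Counter * c) {         // like the nomutex constructor
	pthread_mutex_init(&c->lock, NULL);
	c->value = 0;
}

int counter_inc(Counter * c) {           // like the mutex increment operator
	pthread_mutex_lock(&c->lock);        // lock/unlock generated by CFA
	int v = ++c->value;
	pthread_mutex_unlock(&c->lock);
	return v;
}
```

Unlike the \CFA version, nothing stops a C programmer from touching @value@ directly without the lock; the monitor declaration closes exactly that hole.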
% ====================================================================== % ====================================================================== \section{Internal Scheduling} \label{intsched} % ====================================================================== % ====================================================================== In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronization. With monitors, this capability is generally achieved with internal or external scheduling as in~\cite{Hoare74}. With \textbf{scheduling} loosely defined as deciding which thread acquires the critical section next, \textbf{internal scheduling} means making the decision from inside the critical section (\ie with access to the shared state), while \textbf{external scheduling} means making the decision when entering the critical section (\ie without access to the shared state). Since internal scheduling within a single monitor is mostly a solved problem, this paper concentrates on extending internal scheduling to multiple monitors. Indeed, like the \textbf{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side. \end{cquote} \section{Internal Scheduling} \label{s:InternalScheduling} While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer, Figure~\ref{f:BoundedBuffer}, may be full/empty so producer/consumer threads must block. Leaving the monitor and trying again (busy waiting) is impractical for high-level programming. Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
The synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} is defined as indicating which thread acquires the critical section next. \newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry. Figure~\ref{f:BBInt} shows a \CFA bounded-buffer with internal scheduling, where producers/consumers enter the monitor, see the buffer is full/empty, and block on an appropriate condition lock, @full@/@empty@. The @wait@ routine atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the routine's parameter list. The appropriate condition lock is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer. Signalling is unconditional, because signalling an empty condition lock does nothing. Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means: \begin{enumerate} \item The signalling thread leaves immediately, and the signalled thread continues. \item The signalling thread continues and the signalled thread is marked for urgent unblocking at subsequent scheduling points (exit/wait). \item The signalling thread blocks but is marked for urgent unblocking and the signalled thread continues. \end{enumerate} The first approach is too restrictive, as it precludes solving a reasonable class of problems (\eg dating service). \CFA supports the next two semantics as both are useful. Finally, while it is common to store a @condition@ as a field of the monitor, in \CFA, a @condition@ variable can be created/stored independently.
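For comparison, a POSIX-threads analogue of such a bounded buffer (a sketch, names hypothetical) makes the monitor machinery explicit: @pthread_cond_wait@ atomically blocks and releases the lock, like @wait@, and signalling a condition no thread waits on does nothing, like the unconditional @signal@. Because pthreads permit barging (spurious wakeups), the waits must loop instead of using a simple test:

```c
#include <pthread.h>

// Hedged pthread analogy of the internal-scheduling bounded buffer:
// the lock, conditions, and buffer state are bundled together, and
// wait atomically releases the lock while blocking.
#define SIZE 10
typedef struct {
	pthread_mutex_t lock;
	pthread_cond_t full, empty;          // like CFA condition variables
	int front, back, count;
	int elements[SIZE];
} Buffer;

void buffer_init(Buffer * b) {
	pthread_mutex_init(&b->lock, NULL);
	pthread_cond_init(&b->full, NULL);
	pthread_cond_init(&b->empty, NULL);
	b->front = b->back = b->count = 0;
}

void insert(Buffer * b, int elem) {
	pthread_mutex_lock(&b->lock);
	while (b->count == SIZE)             // loop: pthreads allow barging
		pthread_cond_wait(&b->empty, &b->lock);
	b->elements[b->back] = elem;
	b->back = (b->back + 1) % SIZE;
	b->count += 1;
	pthread_cond_signal(&b->full);       // unconditional, like CFA signal
	pthread_mutex_unlock(&b->lock);
}

int remove_elem(Buffer * b) {
	pthread_mutex_lock(&b->lock);
	while (b->count == 0)
		pthread_cond_wait(&b->full, &b->lock);
	int elem = b->elements[b->front];
	b->front = (b->front + 1) % SIZE;
	b->count -= 1;
	pthread_cond_signal(&b->empty);
	pthread_mutex_unlock(&b->lock);
	return elem;
}
```

The \CFA version in Figure~\ref{f:BBInt} can use a simple @if@ before @wait@ precisely because its non-barging semantics guarantee the condition still holds when the signalled thread resumes.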
\begin{figure} \centering \newbox\myboxA \begin{lrbox}{\myboxA} \begin{lstlisting}[aboveskip=0pt,belowskip=0pt] forall( otype T ) { // distribute forall monitor Buffer { condition full, empty; int front, back, count; T elements[10]; }; void ?{}( Buffer(T) & buffer ) with(buffer) { [front, back, count] = 0; } void insert( Buffer(T) & mutex buffer, T elem ) with(buffer) { if ( count == 10 ) wait( empty ); // insert elem into buffer signal( full ); } T remove( Buffer(T) & mutex buffer ) with(buffer) { if ( count == 0 ) wait( full ); // remove elem from buffer signal( empty ); return elem; } } \end{lstlisting} \end{lrbox} \newbox\myboxB \begin{lrbox}{\myboxB} \begin{lstlisting}[aboveskip=0pt,belowskip=0pt] forall( otype T ) { // distribute forall monitor Buffer { int front, back, count; T elements[10]; }; void ?{}( Buffer(T) & buffer ) with(buffer) { [front, back, count] = 0; } T remove( Buffer(T) & mutex buffer ); // forward void insert( Buffer(T) & mutex buffer, T elem ) with(buffer) { if ( count == 10 ) waitfor( remove, buffer ); // insert elem into buffer } T remove( Buffer(T) & mutex buffer ) with(buffer) { if ( count == 0 ) waitfor( insert, buffer ); // remove elem from buffer return elem; } } \end{lstlisting} \end{lrbox} \subfloat[Internal Scheduling]{\label{f:BBInt}\usebox\myboxA} %\qquad \subfloat[External Scheduling]{\label{f:BBExt}\usebox\myboxB} \caption{Generic Bounded-Buffer} \label{f:BoundedBuffer} \end{figure} Figure~\ref{f:BBExt} shows a \CFA bounded-buffer with external scheduling, where producers/consumers detecting a full/empty buffer block and prevent more producers/consumers from entering the monitor until the buffer has a free/empty slot. External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion. 
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer. Threads making calls to routines that are currently excluded wait outside (externally) of the monitor on a calling queue. An important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads? If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met). \CFA scheduling does \emph{not} have barging, which simplifies synchronization among threads in the monitor. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency. Indeed, like the bulk acquire semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side. First, here is a simple example of internal scheduling: } \end{cfa} There are two details to note here. First, @signal@ is a delayed operation; it only unblocks the waiting thread when it reaches the end of the critical section. This semantics is needed to respect mutual-exclusion, \ie the signaller and signalled thread cannot be in the monitor simultaneously. The alternative is to return immediately after the call to @signal@, which is significantly more restrictive. Second, in \CFA, while it is common to store a @condition@ as a field of the monitor, a @condition@ variable can be stored/created independently of a monitor. Here routine @foo@ waits for the @signal@ from @bar@ before making further progress, ensuring a basic ordering.
An important aspect of the implementation is that \CFA does not allow barging, which means that once function @bar@ releases the monitor, @foo@ is guaranteed to be the next thread to acquire the monitor (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits to recheck that a condition is met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency. % ====================================================================== This semantic is a logical requirement for barging prevention. A direct extension of the previous example is a \textbf{bulk-acq} version: A direct extension of the previous example is a bulk acquire version: \begin{multicols}{2} \begin{cfa} \end{cfa} \end{multicols} \noindent This version uses \textbf{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. \noindent This version uses bulk acquire (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers a group of monitors. % ====================================================================== A larger example is presented to show complex issues for \textbf{bulk-acq} and its implementation options are analyzed. 
Figure~\ref{f:int-bulk-cfa} shows an example where \textbf{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{f:int-bulk-cfa} shows the corresponding \CFA code to implement the cfa-code in listing \ref{f:int-bulk-cfa}. A larger example is presented to show complex issues for bulk acquire and its implementation options are analyzed. Figure~\ref{f:int-bulk-cfa} shows an example where bulk acquire adds a significant layer of complexity to the internal signalling semantics, and listing \ref{f:int-bulk-cfa} shows the corresponding \CFA code to implement the cfa-code in listing \ref{f:int-bulk-cfa}. For the purpose of translating the given cfa-code into \CFA-code, any method of introducing a monitor is acceptable, \eg @mutex@ parameters, global variables, pointer parameters, or using locals with the @mutex@ statement. \end{cfa} \end{multicols} \begin{cfa}[caption={Internal scheduling with \textbf{bulk-acq}},label={f:int-bulk-cfa}] \begin{cfa}[caption={Internal scheduling with bulk acquire},label={f:int-bulk-cfa}] \end{cfa} \begin{center} The complexity begins at code sections 4 and 8 in listing \ref{f:int-bulk-cfa}, which are where the existing semantics of internal scheduling needs to be extended for multiple monitors. The root of the problem is that \textbf{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous cfa-code. The root of the problem is that bulk acquire is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous cfa-code. When the signaller thread reaches the location where it should release @A & B@'' (listing \ref{f:int-bulk-cfa} line \ref{line:releaseFirst}), it must actually transfer ownership of monitor @B@ to the waiting thread. 
This ownership transfer is required in order to prevent barging into @B@ by another thread, since both the signalling and signalled threads still need monitor @A@. }% subfloat \qquad \subfloat[\textbf{bulk-acq} Monitor] { \subfloat[bulk acquire Monitor] { \label{fig:BulkMonitor} {\resizebox{0.45\textwidth}{!}{\input{ext_monitor}}} Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separate. Generating a mask dynamically means that the storage for the mask information can vary between calls to @waitfor@, allowing for more flexibility and extensions. Storing an array of accepted function pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search. Storing an array of accepted routine pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (\eg listing \ref{f:nest-ext}) may now require additional searches for the @waitfor@ statement to check if a routine is already queued. \end{figure} Note that in the right picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a function pointer and a set of monitors, as is discussed in the next section. Note that in the right picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a routine pointer and a set of monitors, as is discussed in the next section. These details are omitted from the picture for the sake of simplicity. % ====================================================================== Syntactically, the @waitfor@ statement takes a function identifier and a set of monitors. 
While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the @waitfor@ statement. It checks that the set of monitors passed in matches the requirements for a function call. Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors. While the set of monitors can be any list of expressions, the routine name is more restricted because the compiler validates at compile time the validity of the routine type and the parameters used with the @waitfor@ statement. It checks that the set of monitors passed in matches the requirements for a routine call. Figure~\ref{f:waitfor} shows various usages of the @waitfor@ statement and which are acceptable. The choice of the function type is made ignoring any non-@mutex@ parameter. The choice of the routine type is made ignoring any non-@mutex@ parameter. One limitation of the current implementation is that it does not handle overloading, but overloading is possible. \begin{figure} waitfor(f2, a1, a2); // Incorrect : Mutex arguments don't match waitfor(f1, 1);      // Incorrect : 1 not a mutex argument waitfor(f9, a1);     // Incorrect : f9 function does not exist waitfor(f9, a1);     // Incorrect : f9 routine does not exist waitfor(*fp, a1 );   // Incorrect : fp not an identifier waitfor(f4, a1);     // Incorrect : f4 ambiguous Finally, for added flexibility, \CFA supports constructing a complex @waitfor@ statement using @or@, @timeout@ and @else@ clauses. Indeed, multiple @waitfor@ clauses can be chained together using @or@; this chain forms a single statement that uses baton pass to any function that fits one of the function+monitor set passed in. To enable users to tell which accepted function executed, @waitfor@s are followed by a statement (including the null statement @;@) or a compound statement, which is executed after the clause is triggered.
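As an illustrative sketch of the chaining rules just described (routine names hypothetical), an accept chain with per-clause statements might be written:

\begin{cfa}
monitor M { ... };
void f( M & mutex m );
void g( M & mutex m );
void dispatch( M & mutex m ) {
	waitfor( f, m ) { /* executed after an accepted call to f returns */ }
	or waitfor( g, m ) { /* executed after an accepted call to g returns */ }
}
\end{cfa}

Each clause names a routine+monitor set, and the statement following the triggered clause identifies which call was accepted.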
A @waitfor@ chain can also be followed by a @timeout@, to signify an upper bound on the wait, or an @else@, to signify that the call should be non-blocking, which checks for a matching function call already arrived and otherwise continues. Indeed, multiple @waitfor@ clauses can be chained together using @or@; this chain forms a single statement that uses baton pass to any routine that fits one of the routine+monitor set passed in. To enable users to tell which accepted routine executed, @waitfor@s are followed by a statement (including the null statement @;@) or a compound statement, which is executed after the clause is triggered. A @waitfor@ chain can also be followed by a @timeout@, to signify an upper bound on the wait, or an @else@, to signify that the call should be non-blocking, which checks for a matching routine call already arrived and otherwise continues. Any and all of these clauses can be preceded by a @when@ condition to dynamically toggle the accept clauses on or off based on some current state. Figure~\ref{f:waitfor2} demonstrates several complex masks and some incorrect ones. \section{Behind the Scenes} There are several challenges specific to \CFA when implementing concurrency. These challenges are a direct result of \textbf{bulk-acq} and loose object definitions. These challenges are a direct result of bulk acquire and loose object definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs. The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation. 
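The intrusive-node idea can be sketched in plain C (names hypothetical): the link field is part of the element itself, so chaining a blocked thread onto a queue never allocates memory.

```c
#include <stddef.h>

// Hedged sketch of an intrusive queue: the link field is pre-allocated
// inside the thread descriptor, so push/pop never call malloc.
typedef struct thread_desc {
	struct thread_desc * next;           // intrusive link field for chaining
	int id;                              // stand-in for the real thread state
} thread_desc;

typedef struct { thread_desc * head, * tail; } queue;

void push(queue * q, thread_desc * t) {  // append to tail, O(1), no allocation
	t->next = NULL;
	if (q->tail) q->tail->next = t; else q->head = t;
	q->tail = t;
}

thread_desc * pop(queue * q) {           // remove from head; NULL when empty
	thread_desc * t = q->head;
	if (t) {
		q->head = t->next;
		if (!q->head) q->tail = NULL;
	}
	return t;
}
```

The trade-off is that a node can sit on only one such queue at a time, which is acceptable here because a thread blocks on at most one queue.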
Since several concurrency operations can use an unbound amount of memory (depending on bulk acquire), statically defining information in the intrusive fields of threads is insufficient.
The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers.
Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly variable-length arrays.
Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array.
The threads and the condition both have a fixed amount of memory, while @mutex@ routines and blocking calls allow for an unbound amount, within the stack size.

Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges not resulting from these characteristics of \CFA are considered solved problems and therefore not discussed.
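The variable-length-array technique can be sketched as follows: a call involving $N$ monitors keeps its per-monitor bookkeeping in a VLA in the caller's scope, so the storage is reclaimed automatically on return. The structure and names are invented for illustration:

```c
#include <stddef.h>

// Per-monitor bookkeeping kept on the caller's stack (illustrative).
typedef struct {
    int monitor_id;
    int recursion;
} monitor_info;

// A blocking operation over `count` monitors stores its variable-size
// state in a VLA instead of allocating; the array lives exactly as long
// as the enclosing scope, matching the duration of the operation.
static int sum_recursions(size_t count, const int ids[]) {
    monitor_info info[count];          // VLA: no malloc needed
    for (size_t i = 0; i < count; i += 1) {
        info[i].monitor_id = ids[i];
        info[i].recursion = 1;         // pretend each monitor is held once
    }
    int total = 0;
    for (size_t i = 0; i < count; i += 1) total += info[i].recursion;
    return total;                       // info[] reclaimed on return
}
```

This mirrors the constraint stated above: the amount of such state is unbound, but only within the stack size.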
% ======================================================================
void foo(T * mutex t);  // Correct: this routine only works on monitors (any monitor)

forall(dtype T | is_monitor(T))
void bar(T * mutex t);

Both entry-point and \textbf{callsite-locking} are feasible implementations.
The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction.
It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, \ie the routine body.
For example, the monitor call can appear in the middle of an expression.
Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times, but there is only one entry point for many call sites.

\subsection{Context Switching}
As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks.
To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call.
This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread.
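Entry-point locking can be pictured in plain C as the lock acquire/release being generated once, inside the callee, rather than at every call site. The sketch below uses @pthread_mutex@ as a stand-in; the names and the generated-code shape are assumptions, not the actual \CFA translation:

```c
#include <pthread.h>

// Illustrative monitor: a lock protecting some state.
typedef struct {
    pthread_mutex_t lock;
    int value;
} monitor_t;

static void monitor_enter(monitor_t * m) { pthread_mutex_lock(&m->lock); }
static void monitor_leave(monitor_t * m) { pthread_mutex_unlock(&m->lock); }

// What a translated mutex routine might look like under entry-point
// locking: one acquire at the single entry point, one release before
// return, shared by every call site.
static int increment(monitor_t * m) {
    monitor_enter(m);        // inserted at the entry point
    int result = ++m->value; // body runs under mutual exclusion
    monitor_leave(m);        // inserted before return
    return result;
}
```

With call-site locking, the enter/leave pair would instead be emitted around every call to @increment@, which is both more code and harder to scope when the call appears mid-expression.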
Note that the instruction pointer can be left untouched since the context-switch is always inside the same routine.
Threads, however, do not context-switch between each other directly.
They context-switch to the scheduler.

As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread).
It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another.
This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' routines from other routines.}.
However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks.
For this reason, the alarm thread is in a tight loop around a system call to @sigwaitinfo@, requiring very little CPU time for preemption.

For \CFA, this picture does not have support for blocking multiple monitors on a single condition.
To support bulk acquire, two changes to this picture are required.
First, it is no longer helpful to attach the condition to \emph{a single} monitor.
Secondly, the thread waiting on the condition has to be separated across multiple monitors, as seen in figure \ref{fig:monitor_cfa}.
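The stack-to-stack switching described in the Context Switching subsection can be approximated in portable C with POSIX @ucontext@. This is only an approximation of the mechanism, not the \CFA runtime's hand-written switch; notably, @swapcontext@ also saves and restores the signal mask, which connects to the signal-mask discussion above and is one reason hand-written switches are cheaper:

```c
#include <ucontext.h>

// Sketch: suspend the current stack and resume another, as a coroutine
// switch does. All names here are illustrative.
static ucontext_t main_ctx, coro_ctx;
static char coro_stack[64 * 1024];
static int steps;

static void coro_body(void) {
    steps += 1;                          // runs on coro_stack
    swapcontext(&coro_ctx, &main_ctx);   // switch back to the caller
}

static int run_once(void) {
    steps = 0;
    getcontext(&coro_ctx);               // initialize the new context
    coro_ctx.uc_stack.ss_sp = coro_stack;
    coro_ctx.uc_stack.ss_size = sizeof(coro_stack);
    coro_ctx.uc_link = &main_ctx;        // fall back here if body returns
    makecontext(&coro_ctx, coro_body, 0);
    swapcontext(&main_ctx, &coro_ctx);   // first resume of the coroutine
    return steps;
}
```

As in the text, both sides of the switch are inside a specific routine call (@swapcontext@), so the instruction pointer needs no special handling beyond what the call itself saves.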
\end{figure}

The solution discussed in \ref{s:InternalScheduling} can be seen in the exit routine of listing \ref{f:entry2}.
Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership.
This solution is deadlock safe, as well as preventing any potential barging.

}
\end{cfa}
This routine is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, \ie no preemption.
However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread} in the same system, as in listing \ref{f:fiber-uthread}.
\begin{figure}
For monitors, the simplest approach is to measure how long it takes to enter and leave a monitor routine.
Figure~\ref{f:mutex} shows the code for \CFA.
To put the results in context, the cost of entering a non-inline routine and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
The results are shown in table \ref{tab:mutex}.

Therefore, there is still significant work to improve performance.
Many of the data structures and algorithms may change in the future to more efficient versions.
For example, the number of monitors in a single bulk acquire is only bound by the stack size, which is probably unnecessarily generous.
Limiting that number may improve performance; however, it is not obvious that the benefit would be significant.