% ======================================================================
% ======================================================================
\chapter{Waituntil}\label{s:waituntil}
% ======================================================================
% ======================================================================

Consider the following motivating problem.
There are $N$ stalls (resources) in a bathroom and there are $M$ people (threads) using the bathroom.
Each stall has its own lock since only one person may occupy a stall at a time.
Humans solve this problem in the following way.
They check if all of the stalls are occupied.
If not, they enter and claim an available stall.
If they are all occupied, people queue and watch the stalls until one is free, and then enter and lock the stall.
This solution is easily implemented on a computer, if all threads wait on all stalls and agree to queue.

Now the problem is extended.
Some stalls are wheelchair accessible and some stalls have specific gender identification.
Each person (thread) may be limited to only one kind of stall or may choose among different kinds of stalls that match their criteria.
Immediately, the problem becomes more difficult.
A single queue no longer fully solves the problem.
What happens when there is a stall available that the person at the front of the queue cannot choose?
The na\"ive solution has each thread spin indefinitely, continually checking every matching kind of stall until a suitable one is free.
This approach is insufficient since it wastes cycles and results in unfairness among waiting threads, as a thread can acquire the first matching stall without regard to the waiting time of other threads.
Waiting for the first appropriate stall (resource) that becomes available without spinning is an example of \gls{synch_multiplex}: the ability to wait synchronously for one or more resources based on some selection criteria.

\section{History of Synchronous Multiplexing}

There is a long history of tools that provide \gls{synch_multiplex}.
Some well-known \gls{synch_multiplex} tools include the Unix system utilities @select@~\cite{linux:select}, @poll@~\cite{linux:poll}, and @epoll@~\cite{linux:epoll}, and the @select@ statement provided by Go~\cite{go:selectref}, Ada~\cite[\S~9.7]{Ada16}, and \uC~\cite[\S~3.3.1]{uC++}.

The concept and theory surrounding \gls{synch_multiplex} were introduced by Hoare in his 1985 book, Communicating Sequential Processes (CSP)~\cite{Hoare85},
\begin{quote}
A communication is an event that is described by a pair $c.v$ where $c$ is the name of the channel on which the communication takes place and $v$ is the value of the message which passes.~\cite[p.~113]{Hoare85}
\end{quote}
The ideas in CSP were implemented by Roscoe and Hoare in the language Occam~\cite{Roscoe88}.
Both CSP and Occam include the ability to wait for a \Newterm{choice} among receiver channels and \Newterm{guards} to toggle which receives are valid.
For example,
\begin{cfa}[mathescape]
(@G1@(x) $\rightarrow$ P @|@ @G2@(y) $\rightarrow$ Q )
\end{cfa}
waits for either channel @x@ or @y@ to have a value, if and only if guards @G1@ and @G2@ are true; if only one guard is true, only one channel receives, and if both guards are false, no receive occurs.
Occam extends CSP with the \gls{synch_multiplex} construct @ALT@, which waits for one resource to be available and then executes a corresponding block of code.
In detail, waiting for one resource out of a set of resources can be thought of as a logical exclusive-or over the set of resources.
Guards are a conditional operator similar to an @if@, except they apply to the resource being waited on.
If a guard is false, then the resource it guards is not in the set of resources being waited on.
If all guards are false, the ALT, Occam's \gls{synch_multiplex} statement, does nothing and the thread continues.
Guards can be simulated using @if@ statements as shown in~\cite[rule~2.4, p~183]{Roscoe88}
\begin{lstlisting}[basicstyle=\rm,mathescape]
ALT( $b$ & $g$ $P$, $G$ ) = IF ( $b$ ALT($\,g$ $P$, $G$ ), $\neg\,b$ ALT( $G$ ) )   (boolean guard elim)
\end{lstlisting}
but this simulation requires $2^N-1$ @if@ statements, where $N$ is the number of guards.
The exponential blowup comes from applying rule 2.4 repeatedly, since it works on one guard at a time.
Figure~\ref{f:wu_if} shows an example of applying rule 2.4 for three guards.
Also, notice the additional code duplication for statements @S1@, @S2@, and @S3@.

\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{cfa}
when( G1 ) waituntil( R1 ) S1
or when( G2 ) waituntil( R2 ) S2
or when( G3 ) waituntil( R3 ) S3
\end{cfa}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{cfa}
if ( G1 )
	if ( G2 )
		if ( G3 ) waituntil( R1 ) S1 or waituntil( R2 ) S2 or waituntil( R3 ) S3
		else waituntil( R1 ) S1 or waituntil( R2 ) S2
	else
		if ( G3 ) waituntil( R1 ) S1 or waituntil( R3 ) S3
		else waituntil( R1 ) S1
else
	if ( G2 )
		if ( G3 ) waituntil( R2 ) S2 or waituntil( R3 ) S3
		else waituntil( R2 ) S2
	else
		if ( G3 ) waituntil( R3 ) S3
\end{cfa}
\end{lrbox}
\subfloat[Guards]{\label{l:guards}\usebox\myboxA}
\hspace*{5pt}
\vrule
\hspace*{5pt}
\subfloat[Simulated Guards]{\label{l:simulated_guards}\usebox\myboxB}
\caption{\CFA guard simulated with \lstinline{if} statement.}
\label{f:wu_if}
\end{figure}

When discussing \gls{synch_multiplex} implementations, the resource being multiplexed is important.
While CSP waits on channels, the earliest known implementation of synchronous multiplexing is Unix's @select@~\cite{linux:select}, multiplexing over file descriptors.
The @select@ system-call is passed three sets of file descriptors (read, write, exceptional) to wait on and an optional timeout.
@select@ blocks until either some subset of the file descriptors are available or the timeout expires.
All file descriptors that are ready are returned by modifying the argument sets to only contain the ready descriptors.
This early implementation differs from the theory presented in CSP: when the call to @select@ returns, it may provide more than one ready file descriptor.
As such, @select@ has logical-or multiplexing semantics, whereas the theory describes exclusive-or semantics.
It is possible to achieve exclusive-or semantics with @select@ by arbitrarily operating on only one of the returned descriptors.
@select@ passes the interest set of file descriptors between application and kernel in the form of a worst-case sized bit-mask, where the worst-case is the largest numbered file descriptor.
@poll@ reduces the size of the interest sets by changing from a bit mask to a linked data structure, independent of the file-descriptor values.
@epoll@ further reduces the data passed per call by keeping the interest set in the kernel, rather than supplying it on every call.
These early \gls{synch_multiplex} tools interact directly with the operating system and are often used to communicate among processes.
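For example, the following minimal C sketch (the descriptor and timeout values are illustrative) waits up to five seconds for standard input to become readable, showing both the interest-set bit-mask and @select@'s logical-or semantics described above.
\begin{cfa}
#include <sys/select.h>
fd_set rfds;	// interest set: worst-case sized bit-mask
FD_ZERO( &rfds );	// clear the interest set
FD_SET( 0, &rfds );	// wait for stdin (fd 0) to become readable
struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };	// optional timeout
int ready = select( 0 + 1, &rfds, NULL, NULL, &tv );	// block until ready or timeout
if ( ready > 0 && FD_ISSET( 0, &rfds ) ) {	// sets now contain only ready descriptors
	... // read from stdin without blocking
}	// ready == 0 means the timeout expired
\end{cfa}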
Later, \gls{synch_multiplex} started to appear in applications, via programming languages, to support fast multiplexed concurrent communication among threads.
An early example of \gls{synch_multiplex} is the @select@ statement in Ada~\cite[\S~9.7]{Ichbiah79}.
The @select@ statement in Ada allows a task object, which has its own thread, to multiplex over a subset of asynchronous calls to its methods.
The Ada @select@ statement has the same exclusive-or semantics and guards as Occam ALT; however, it multiplexes over methods rather than channels.

\begin{figure}
\begin{lstlisting}[language=ada,literate=]
task type buffer is	-- thread
	...	-- buffer declarations
	count : integer := 0;
begin	-- thread starts here
	loop
		select
			when count < Size =>	-- guard
				accept insert( elem : in ElemType ) do	-- method
					...	-- add to buffer
					count := count + 1;
				end;	-- executed if this accept called
		or
			when count > 0 =>	-- guard
				accept remove( elem : out ElemType ) do	-- method
					...	-- remove and return from buffer via parameter
					count := count - 1;
				end;	-- executed if this accept called
		or
			delay 10.0;	-- unblock after 10 seconds without call
		or
			else	-- do not block, cannot appear with delay
		end select;
	end loop;
end buffer;
buf : buffer;	-- create task object and start thread in task body
\end{lstlisting}
\caption{Ada Bounded Buffer}
\label{f:BB_Ada}
\end{figure}

Figure~\ref{f:BB_Ada} shows the outline of a bounded buffer implemented with an Ada task.
Note, a task method is associated with the \lstinline[language=ada]{accept} clause of the \lstinline[language=ada]{select} statement, rather than being a separate routine.
The thread executing the loop in the task body blocks at the \lstinline[language=ada]{select} until a call occurs to @insert@ or @remove@.
Then the appropriate \lstinline[language=ada]{accept} method is run with the caller's arguments.
Hence, the @select@ statement provides rendezvous points for threads, rather than providing channels with message passing.
The \lstinline[language=ada]{select} statement also provides a timeout and @else@ (nonblocking) clause, which changes synchronous multiplexing to asynchronous.
Now the thread polls rather than blocks.

Another example of programming-language \gls{synch_multiplex} is Go using a @select@ statement with channels~\cite{go:selectref}.
Figure~\ref{l:BB_Go} shows the outline of a bounded buffer implemented with a Go goroutine.
Here two channels are used for inserting and removing by client producers and consumers, respectively.
(The @term@ and @finish@ channels are used to synchronize with the program main.)
Go's @select@ has the same exclusive-or semantics as the ALT primitive from Occam, and has associated code blocks for each clause, like ALT and Ada.
However, unlike Ada and ALT, Go does not provide guards for the \lstinline[language=go]{case} clauses of the \lstinline[language=go]{select}.
Go also provides a timeout via a channel and a @default@ clause like Ada @else@ for asynchronous multiplexing.

\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{lstlisting}[language=go,literate=]
func main() {
	insert := make( chan int, Size )
	remove := make( chan int, Size )
	term := make( chan string )
	finish := make( chan string )

	buf := func() {
		var i int
	  L: for {
			select {	// wait for message
			  case i = <- insert:
			  case <- term: break L
			}
			remove <- i
		}
		finish <- "STOP"	// completion
	}
	go buf()	// start thread in buf
}
\end{lstlisting}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{lstlisting}[language=uC++]
_Task BoundedBuffer {
	...
	// buffer declarations
	int count = 0;
  public:
	void insert( int elem ) {
		...	// add to buffer
		count += 1;
	}
	int remove() {
		...	// remove and return from buffer
		count -= 1;
	}
  private:
	void main() {
		for ( ;; ) {
			_Accept( ~BoundedBuffer ) break;
			or _When ( count < Size ) _Accept( insert );
			or _When ( count > 0 ) _Accept( remove );
		}
	}
};
BoundedBuffer buf;	// start thread in main method
\end{lstlisting}
\end{lrbox}
\subfloat[Go]{\label{l:BB_Go}\usebox\myboxA}
\hspace*{5pt}
\vrule
\hspace*{5pt}
\subfloat[\uC]{\label{l:BB_uC++}\usebox\myboxB}
\caption{Bounded Buffer}
\label{f:AdaMultiplexing}
\end{figure}

Finally, \uC provides \gls{synch_multiplex} with Ada-style @select@ over monitor and task methods with the @_Accept@ statement~\cite[\S~2.9.2.1]{uC++}, and over futures with the @_Select@ statement~\cite[\S~3.3.1]{uC++}.
The @_Select@ statement extends the ALT/Go @select@ by offering both @and@ and @or@ semantics, which can be used together in the same statement.
Both @_Accept@ and @_Select@ statements provide guards for multiplexing clauses, as well as timeout and @else@ clauses.

There are other languages that provide \gls{synch_multiplex}, including Rust's @select!@ over futures~\cite{rust:select}, OCaml's @select@ over channels~\cite{ocaml:channel}, and C++14's @when_any@ over futures~\cite{cpp:whenany}.
Note that while C++14 and Rust provide \gls{synch_multiplex}, the implementations leave much to be desired as both rely on polling to wait on multiple resources.

\section{Other Approaches to Synchronous Multiplexing}

To avoid the need for \gls{synch_multiplex}, all communication among threads/processes must come from a single source.
For example, in Erlang each process has a single heterogeneous mailbox that is the sole source of concurrent communication, removing the need for \gls{synch_multiplex} as there is only one place to wait on resources.
Similarly, actor systems circumvent the \gls{synch_multiplex} problem, since actors only block when waiting for the next message, never within a behaviour.
While these approaches solve the \gls{synch_multiplex} problem, they introduce other issues.
Consider the case where a thread has a single source of communication and it wants a set of @N@ resources.
It sequentially requests the @N@ resources and waits for each response.
During the receives for the @N@ resources, it can receive other communication, and must either save and postpone these communications or discard them.
% If the requests for the other resources need to be retracted, the burden falls on the programmer to determine how to synchronize appropriately to ensure that only one resource is delivered.

\section{\CFA's Waituntil Statement}

The new \CFA \gls{synch_multiplex} utility introduced in this work is the @waituntil@ statement.
There is a @waitfor@ statement in \CFA that supports Ada-style \gls{synch_multiplex} over monitor methods, so the @waituntil@ focuses on synchronizing over other resources.
All of the \gls{synch_multiplex} features mentioned so far are monomorphic, only waiting on one kind of resource: Unix @select@ supports file descriptors, Go's @select@ supports channel operations, \uC's @_Select@ supports futures, and Ada's @select@ supports monitor method calls.
The \CFA @waituntil@ is polymorphic and provides \gls{synch_multiplex} over any objects that satisfy the trait in Figure~\ref{f:wu_trait}.
No other language provides a synchronous multiplexing tool polymorphic over resources like \CFA's @waituntil@.
\begin{figure}
\begin{cfa}
forall(T & | sized(T))
trait is_selectable {
	// For registering a waituntil stmt on a selectable type
	bool register_select( T &, select_node & );

	// For unregistering a waituntil stmt from a selectable type
	bool unregister_select( T &, select_node & );

	// on_selected is run on the selecting thread prior to executing
	// the statement associated with the select_node
	bool on_selected( T &, select_node & );
};
\end{cfa}
\caption{Trait for types that can be passed into \CFA's \lstinline{waituntil} statement.}
\label{f:wu_trait}
\end{figure}

Currently, locks, channels, futures, and timeouts are supported by the @waituntil@ statement, and support can be expanded through the @is_selectable@ trait as other use-cases arise.
The @waituntil@ statement supports guarded clauses, both @or@ and @and@ semantics, and provides an @else@ for asynchronous multiplexing.
Figure~\ref{f:wu_example} shows a \CFA @waituntil@ usage, which is waiting for either @Lock@ to be available \emph{or} for a value to be read from @Channel@ into @i@ \emph{and} for @Future@ to be fulfilled \emph{or} a timeout of one second.

\begin{figure}
\begin{cfa}
future(int) Future;
channel(int) Channel;
owner_lock Lock;
int i = 0;

waituntil( Lock ) { ... }
or when( i == 0 ) waituntil( i << Channel ) { ... }
and waituntil( Future ) { ... }
or waituntil( timeout( 1`s ) ) { ... }
// else { ... }
\end{cfa}
\caption{Example of \CFA's waituntil statement}
\label{f:wu_example}
\end{figure}

\section{Waituntil Semantics}

The @waituntil@ semantics has two parts: the semantics of the statement itself, \ie @and@, @or@, @when@ guards, and @else@ semantics, and the semantics of how the @waituntil@ interacts with types like channels, locks, and futures.

\subsection{Statement Semantics}

The @or@ semantics are the most straightforward and nearly match those laid out in the ALT statement from Occam.
The clauses have an exclusive-or relationship, where only the first available clause is run.
\CFA's @or@ semantics differ from ALT semantics: instead of randomly picking a clause when multiple are available, the first clause in the @waituntil@ that is available is executed.
In the following example, if @foo@ and @bar@ are both available, @foo@ is always selected, since it comes first in the order of @waituntil@ clauses.
\begin{cfa}
future(int) bar, foo;
waituntil( foo ) { ... } or waituntil( bar ) { ... }
\end{cfa}
The \CFA @and@ semantics match the @and@ semantics of \uC \lstinline[language=uC++]{_Select}.
When multiple clauses are joined by @and@, the @waituntil@ makes a thread wait for all to be available, but still runs the corresponding code blocks \emph{as they become available}.
When an @and@ clause becomes available, the waiting thread unblocks and runs that clause's code block, and then the thread waits again until either another clause becomes available or the @waituntil@ predicate is satisfied.
This semantics allows work to be done in parallel while synchronizing over a set of resources, and furthermore, gives a good reason to use the @and@ operator.
If the @and@ operator waited for all clauses to be available before running, it would be the same as just acquiring those resources consecutively by a sequence of @waituntil@ statements.

As for normal C expressions, the @and@ operator binds more tightly than the @or@.
To give an @or@ operator higher precedence, parentheses are used.
For example, the following @waituntil@ unconditionally waits for @C@ and one of either @A@ or @B@, since the @or@ is given higher precedence via parentheses.
\begin{cfa}
@(@ waituntil( A ) { ... }	// bind tightly to or
or waituntil( B ) { ... } @)@
and waituntil( C ) { ... }
\end{cfa}

The guards in the @waituntil@ statement are called @when@ clauses.
Each boolean expression inside a @when@ is evaluated \emph{once} before the @waituntil@ statement is run.
Like Occam's ALT, the guards toggle clauses on and off, where a @waituntil@ clause is only evaluated and waited on if the corresponding guard is @true@.
In addition, the @waituntil@ guards require some nuance since both @and@ and @or@ operators are supported \see{Section~\ref{s:wu_guards}}.
When a guard is false, its clause is removed, which can be thought of as removing that clause and its preceding operator from the statement.
For example, in the following, the two @waituntil@ statements are semantically equivalent.

\begin{lrbox}{\myboxA}
\begin{cfa}
when( true ) waituntil( A ) { ... }
or when( false ) waituntil( B ) { ... }
and waituntil( C ) { ... }
\end{cfa}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{cfa}
waituntil( A ) { ... }
and waituntil( C ) { ... }
\end{cfa}
\end{lrbox}
\begin{tabular}{@{}lcl@{}}
\usebox\myboxA & $\equiv$ & \usebox\myboxB
\end{tabular}

The @else@ clause on the @waituntil@ has identical semantics to the @else@ clause in Ada.
If the statement predicate cannot be immediately satisfied and there is an @else@ clause, the @else@ clause is run and the thread continues.

\subsection{Type Semantics}

As mentioned, to support interaction with the @waituntil@ statement, a type must support the trait in Figure~\ref{f:wu_trait}.
The @waituntil@ statement expects types to register and unregister themselves via calls to @register_select@ and @unregister_select@, respectively.
When a resource becomes available, @on_selected@ is run, and if it returns false, the corresponding code block is not run.
Many types do not need @on_selected@, but it is provided if a type needs to perform work or checks before the resource can be accessed in the code block.
The register/unregister routines in the trait also return booleans.
The return value of @register_select@ is @true@ if the resource is immediately available and @false@ otherwise.
The return value of @unregister_select@ is @true@ if the corresponding code block should be run after unregistration and @false@ otherwise.
The routine @on_selected@ and the return value of @unregister_select@ are needed to support channels as a resource.
More detail on channels and their interaction with @waituntil@ appears in Section~\ref{s:wu_chans}, and a small example of a selectable type is sketched below.

\section{\lstinline{waituntil} Implementation}

The @waituntil@ statement is not inherently complex, and its pseudocode is presented in Figure~\ref{f:WU_Impl}.
The complexity comes from the consideration of race conditions and synchronization needed when supporting various primitives.
Figure~\ref{f:WU_Impl} aims to introduce the reader to the rudimentary idea and control flow of the @waituntil@.
The following sections then use examples to fill in details that Figure~\ref{f:WU_Impl} does not provide.
Finally, the full pseudocode of the @waituntil@ is presented in Figure~\ref{f:WU_Full_Impl}.
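Before stepping through those details, the trait obligations just described can be made concrete.
The following is a hedged sketch of a trivial selectable type that is always immediately available; the type and routine bodies are illustrative, only the signatures come from Figure~\ref{f:wu_trait}.
\begin{cfa}
struct always_ready {};	// hypothetical resource that is never unavailable
// return true: the resource is immediately available, so no blocking is needed
bool register_select( always_ready &, select_node & ) { return true; }
// return false: the code block has already run, so do not run it on unregister
bool unregister_select( always_ready &, select_node & ) { return false; }
// return true: no checks needed before running the corresponding code block
bool on_selected( always_ready &, select_node & ) { return true; }

always_ready a;
waituntil( a ) { ... }	// never blocks
\end{cfa}
Because @register_select@ returns @true@, the @waituntil@ never blocks on this clause, and because @unregister_select@ returns @false@, the code block is not run a second time during unregistration.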
The basic steps of the @waituntil@ statement are:

\begin{figure}
\begin{cfa}
select_node s[N];	$\C[3.25in]{// declare N select nodes}$
for ( node in s )	$\C{// register nodes}$
	register_select( resource, node );
while ( statement predicate not satisfied ) {	$\C{// check predicate}$
	// block
	for ( resource in waituntil statement ) {	$\C{// run true code blocks}$
		if ( statement predicate is satisfied ) break;
		if ( resource is avail ) run code block
	}
}
for ( node in s )	$\C{// deregister nodes}\CRT$
	if ( unregister_select( resource, node ) ) run code block
\end{cfa}
\caption{\lstinline{waituntil} Implementation}
\label{f:WU_Impl}
\end{figure}

\begin{enumerate}
\item
The @waituntil@ statement declares $N$ @select_node@s, one per resource being waited on, where each node stores any @waituntil@ data pertaining to its resource.
\item
Each @select_node@ is then registered with the corresponding resource.
\item
The thread executing the @waituntil@ then loops until the statement's predicate is satisfied.
In each iteration, if the predicate is unsatisfied, the thread attempts to block.
If any clause has become satisfied in the meantime, the block fails and the thread proceeds; otherwise, the block succeeds (like a semaphore, where a block is a @P()@ and a satisfied clause is a @V()@).
After proceeding past the block, all clauses are checked for completion and the completed clauses have their code blocks run.
While checking clause completion, if enough clauses have been run so that the statement predicate is satisfied, the loop exits early.
In the case where the block succeeds, the thread is woken by the thread that marks one of the resources as available.
\item
Once the thread escapes the loop, the @select_node@s are unregistered from the resources.
\end{enumerate}
These steps give a basic overview of how the statement works.
Digging into parts of the implementation sheds light on the specifics and provides more detail.

\subsection{Locks}\label{s:wu_locks}

The \CFA runtime supports a number of spinning and blocking locks, \eg semaphore, MCS, futex, Go mutex, spinlock, owner, \etc.
Many of these locks satisfy the @is_selectable@ trait, and hence, are resources supported by the @waituntil@ statement.
For example, the following waits until the thread has acquired lock @l1@ or locks @l2@ and @l3@.
\begin{cfa}
owner_lock l1, l2, l3;
waituntil( l1 ) { ... }
or waituntil( l2 ) { ... }
and waituntil( l3 ) { ... }
\end{cfa}
Implicitly, the @waituntil@ calls the lock acquire for each of these locks to establish a position in each lock's queue of waiting threads.
When a lock schedules this thread, it unblocks and performs the @waituntil@ code to determine if it can proceed.
If it cannot proceed, it blocks again on the @waituntil@ lock, holding the acquired lock.
In detail, when a thread waits on multiple locks via a @waituntil@, it enqueues a @select_node@ in each of the locks' waiting queues.
When a @select_node@ reaches the front of a lock's queue and gains ownership, the thread blocked on the @waituntil@ is unblocked.
Now, the lock is temporarily held by the @waituntil@ thread until the node is unregistered, rather than by a thread waiting on the lock.
To prevent the waiting thread from holding many locks at once and potentially introducing deadlock, the node is unregistered right after the corresponding code block is executed.
This prevents deadlocks since the waiting thread never holds a lock while waiting on another resource.
As such, the only nodes unregistered at the end are the ones whose code blocks have not run.

\subsection{Timeouts}

Timeouts in the @waituntil@ take the form of a duration being passed to a @sleep@ or @timeout@ call.
An example is shown in the following code.
\begin{cfa}
waituntil( sleep( 1`ms ) ) {}
waituntil( timeout( 1`s ) ) {} or waituntil( timeout( 2`s ) ) {}
waituntil( timeout( 1`ns ) ) {} and waituntil( timeout( 2`s ) ) {}
\end{cfa}
The timeout implementation highlights a key part of the @waituntil@ semantics: the expression inside a @waituntil()@ is evaluated once at the start of the @waituntil@ algorithm.
As such, calls to these @sleep@ and @timeout@ routines do not block, but instead return a type that supports the @is_selectable@ trait.
This feature leverages \CFA's ability to overload on return type; a call to @sleep@ outside a @waituntil@ resolves to a different @sleep@ that does not return a value and blocks for the appropriate duration.
This mechanism of returning a selectable type is needed for types that want to support multiple operations, such as channels that allow both reading and writing.

\subsection{Channels}\label{s:wu_chans}

To support waiting on both reading and writing to channels, the operators @?<<?@ and @?>>?@ are used to read from and write to a channel, respectively, where the left-hand operand is the value being read into/written and the right-hand operand is the channel.
Channels require significant complexity to synchronously multiplex on for a few reasons.
First, reading or writing to a channel is a mutating operation: if a read or write to a channel occurs, the state of the channel changes.
In comparison, for standard locks and futures, if a lock is acquired then released or a future is ready but not accessed, the state of the lock and the future is not permanently modified.
In this way, a @waituntil@ over locks or futures that completes with resources available but not consumed is not an issue.
However, if a thread modifies a channel on behalf of a thread blocked on a @waituntil@ statement, it is important that the corresponding @waituntil@ code block is run, otherwise there is a potentially erroneous mismatch between the channel state and the associated side effects.
As such, the @unregister_select@ routine has a boolean return that is used by channels to indicate when the operation was completed but the corresponding code block has not yet run.
When the return is @true@, the corresponding code block is run after the unregister.

Furthermore, if both @and@ and @or@ operators are used, the @or@ operators can no longer provide exclusive-or semantics, due to the race between channel operations and unregisters.
It was deemed important that exclusive-or semantics be maintained when only @or@ operators are used, so this situation has been special-cased, and is handled by having all clauses race to set a value \emph{before} operating on the channel.
This approach is infeasible in the case where both @and@ and @or@ operators are used.
To show this, consider the following @waituntil@ statement.
\begin{cfa}
waituntil( i >> A ) {} and waituntil( i >> B ) {}
or waituntil( i >> C ) {} and waituntil( i >> D ) {}
\end{cfa}
If exclusive-or semantics were followed, this @waituntil@ would only run the code blocks for @A@ and @B@, or the code blocks for @C@ and @D@.
However, in this case, racing before operation completion introduces complexity that increases with the size of the @waituntil@ statement.
In the example above, for @i@ to be inserted into @C@ while maintaining the exclusive-or, it must be ensured that @i@ can also be inserted into @D@.
Furthermore, the race for the @or@ would also need to be won.
However, due to TOCTOU issues, one cannot know that all resources are available without acquiring all the internal locks of the channels in the subtree.
This is not a good solution for two reasons.
First, it is possible that once all the locks are acquired, the subtree is not satisfied and the locks must all be released.
This would incur a high cost for signalling threads and heavily increase contention on the internal channel locks.
Second, the @waituntil@ statement is polymorphic and can support resources that do not have internal locks, which also makes this approach infeasible.
As such, the exclusive-or semantics are lost when using both @and@ and @or@ operators, since they cannot be supported without significant complexity and costs to @waituntil@ statement performance.

Channels introduce another interesting consideration in their implementation.
Supporting both reading and writing to a channel in a @waituntil@ means that one @waituntil@ clause may be the notifier for another @waituntil@ clause.
This poses a problem when dealing with the special-cased @or@, where the clauses need to win a race to operate on a channel.
When a special-case @or@ is inserting into a channel on one thread and another thread is blocked in a special-case @or@ consuming from the same channel, there are not one but two races that need to be consolidated by the inserting thread.
(This race can also occur in the mirrored case with a blocked producer and signalling consumer.)
For the producing thread to know that the insert succeeded, it needs to win the race for its own @waituntil@ and win the race for the other @waituntil@.

Go solves this problem in its @select@ statement by acquiring the internal locks of all channels before registering the @select@ on the channels.
This eliminates the race, since no other thread can operate on the blocked channel while its lock is held.
This approach is not used in \CFA since the @waituntil@ is polymorphic.
Not all types in a @waituntil@ have an internal lock, and when using non-channel types, acquiring all the locks incurs extra unneeded overhead.
Instead, this race is consolidated in \CFA in two phases using an intermediate pending value for the race flag (a sketch of this protocol appears at the end of this subsection).
This race case is detectable, and if detected, the producer first races to set its own race flag to pending.
If it succeeds, it then attempts to set the consumer's race flag to its success value.
If the producer successfully sets the consumer's race flag, then the operation can proceed; if not, the signalling thread sets its own race flag back to the initial value.
If any other thread attempts to set the producer's flag and sees a pending value, it waits until the value changes before proceeding, ensuring that, in the case where the producer fails, the signal is not lost.
This protocol ensures that signals are not lost and that the two races can be resolved in a safe manner.

Channels in \CFA have exception-based shutdown mechanisms that the @waituntil@ statement needs to support.
These exception mechanisms motivated the @on_selected@ routine.
This routine is needed by channels to detect if they have been closed upon waking from a @waituntil@ statement, so the appropriate behaviour is taken and an exception is thrown.
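The following is a minimal sketch of this two-phase protocol using C11 atomics; the flag values and the helper routine are illustrative names, not the actual identifiers in the \CFA runtime.
\begin{cfa}
#include <stdatomic.h>
enum { AVAIL, PENDING, SAT };	// hypothetical race-flag states: initial, intermediate, won
// Sketch: the producer tries to win its own race, then the blocked consumer's race.
// Returns true if both races are won and the channel operation may proceed.
bool consolidate( _Atomic int * mine, _Atomic int * theirs ) {
	int exp = AVAIL;
	// phase 1: race to set own flag to the intermediate pending value
	if ( ! atomic_compare_exchange_strong( mine, &exp, PENDING ) ) return false;
	exp = AVAIL;
	// phase 2: attempt to set the other waituntil's flag to the success value
	if ( atomic_compare_exchange_strong( theirs, &exp, SAT ) ) {
		atomic_store( mine, SAT );	// both races won; complete the operation
		return true;
	}
	atomic_store( mine, AVAIL );	// phase 2 lost: roll back so the signal is not lost
	return false;
}
// Threads that observe PENDING on a flag wait for it to change before proceeding.
\end{cfa}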
\subsection{Guards and Statement Predicate}\label{s:wu_guards}

Checking when a \gls{synch_multiplex} utility is done is trivial when it has an or/xor relationship, since any resource becoming available means that the blocked thread can proceed.
The \gls{synch_multiplex} utilities in \uC and \CFA involve both @and@ and @or@ operators, which makes the problem of checking for statement completion more difficult.

In the \uC @_Select@ statement, this problem is solved by constructing a tree of the resources, where the internal nodes are operators and the leaves are booleans storing the state of each resource.
The internal nodes also store the statuses of the two subtrees beneath them.
When resources become available, their corresponding leaf-node status is modified, and the change then percolates up into the internal nodes to update the state of the statement.
Once the root of the tree has both subtrees marked as @true@, the statement is complete.
As an optimization, when the internal nodes are updated, their subtrees marked as @true@ are pruned and are not touched again.
To support statement guards in \uC, the tree prunes a branch if the corresponding guard is false.

The \CFA @waituntil@ statement blocks a thread until a set of resources have become available that satisfies the underlying predicate.
The waiting condition of the @waituntil@ statement can be represented as a predicate over the resources, joined by the @waituntil@ operators, where a resource is @true@ if it is available and @false@ otherwise.
In \CFA, this representation is used as the mechanism to check if a thread is done waiting on the @waituntil@.
Leveraging the compiler, a predicate routine is generated per @waituntil@ that, when passed the statuses of the resources, returns @true@ when the @waituntil@ is done and @false@ otherwise.
To support guards on the \CFA @waituntil@ statement, the status of a resource disabled by a guard is set to a boolean value that ensures that the predicate function behaves as if that resource is no longer part of the predicate.

\uC's @_Select@ supports operators both inside and outside the clauses.
\eg in the following example, the code blocks run once their corresponding predicate inside the parentheses is satisfied.
\begin{lstlisting}[language=uC++]
Future_ISM<int> A, B, C, D, E;
_Select( A || B && C ) { ... } and _Select( D && E ) { ... }
\end{lstlisting}
This is more expressive than the @waituntil@ statement in \CFA.
In \CFA, since the @waituntil@ statement supports more resources than just futures, implementing operators inside clauses was avoided for a few reasons.
As a motivating example, suppose \CFA supported operators inside clauses and consider the code snippet in Figure~\ref{f:wu_inside_op}.

\begin{figure}
\begin{cfa}
owner_lock A, B, C, D;
waituntil( A && B ) { ... }
or waituntil( C && D ) { ... }
\end{cfa}
\caption{Example of unsupported operators inside clauses in \CFA.}
\label{f:wu_inside_op}
\end{figure}

If the @waituntil@ in Figure~\ref{f:wu_inside_op} works with the same semantics as described and acquires each lock as it becomes available, it opens itself up to possible deadlocks, since it is now holding locks while waiting on other resources.
Other semantics would be needed to ensure that this operation is safe.
One possibility is to use \CC's @scoped_lock@ approach described in Section~\ref{s:DeadlockAvoidance}; however, the potential for livelock leaves much to be desired.
Another possibility would be to use resource ordering similar to \CFA's @mutex@ statement, but that alone is not sufficient if the resource ordering is not used everywhere.
Additionally, using resource ordering could conflict with other semantics of the @waituntil@ statement.
To show this conflict, consider if the locks in Figure~\ref{f:wu_inside_op} were ordered @D@, @B@, @C@, @A@.
If all the locks are available, it becomes complex to respect the clause ordering of the @waituntil@ in Figure~\ref{f:wu_inside_op} when choosing which code block to run, while also respecting the lock ordering of @D@, @B@, @C@, @A@ at the same time.
One other way this could be implemented is to wait until all resources for a given clause are available before proceeding to acquire them, but this also quickly becomes a poor approach.
This approach does not work due to TOCTOU issues; it is not possible to ensure that the full set of resources is available without holding them all first.
Operators inside clauses in \CFA could potentially be implemented with careful circumvention of the problems involved, but it was not deemed an important feature when taking into account the runtime cost that would need to be paid to handle these situations.
The problem of operators inside clauses also becomes a difficult issue to handle when supporting channels.
If internal operators were supported, it would require some way to ensure that channels used with internal operators are modified if and only if the corresponding code block is run, but that is not feasible due to the reasons described in the exclusive-or portion of Section~\ref{s:wu_chans}.

\subsection{The Full \lstinline{waituntil} Picture}

Now that the details have been discussed, the full pseudocode of the @waituntil@ is presented in Figure~\ref{f:WU_Full_Impl}.

\begin{figure}
\begin{cfa}
select_node s[N];	$\C[3.25in]{// declare N select nodes}$
bool when_conditions[N];
for ( node in s )	$\C{// evaluate guards}$
	if ( node has guard ) when_conditions[node] = node_guard;
	else when_conditions[node] = true;
try {
	for ( node in s )	$\C{// register nodes}$
		if ( when_conditions[node] ) register_select( resource, node );
	// ... set statuses for nodes with when_conditions[node] == false ...
	while ( statement predicate not satisfied ) {	$\C{// check predicate}$
		// block
		for ( resource in waituntil statement ) {	$\C{// run true code blocks}$
			if ( statement predicate is satisfied ) break;
			if ( resource is avail ) {
				try {
					if ( on_selected( resource ) )	$\C{// conditionally run block}$
						run code block
				} finally {	$\C{// for exception safety}$
					unregister_select( resource, node );	$\C{// immediate unregister}$
				}
			}
		}
	}
} finally {	$\C{// for exception safety}$
	for ( registered nodes in s )	$\C{// deregister nodes}$
		if ( when_conditions[node] && unregister_select( resource, node )
				&& on_selected( resource ) )
			run code block	$\C{// conditionally run code block upon unregister}\CRT$
}
\end{cfa}
\caption{Full \lstinline{waituntil} Pseudocode Implementation}
\label{f:WU_Full_Impl}
\end{figure}

In comparison to Figure~\ref{f:WU_Impl}, this pseudocode now includes the specifics discussed in this chapter.
Some things to note are as follows.
The @finally@ blocks provide exception-safe RAII unregistering of nodes; in particular, the @finally@ inside the innermost loop performs the immediate unregistering required for deadlock-freedom mentioned in Section~\ref{s:wu_locks}.
The @when_conditions@ array stores the boolean result of evaluating each guard at the beginning of the @waituntil@, and is used to conditionally omit operations on resources with @false@ guards.
As discussed in Section~\ref{s:wu_chans}, this pseudocode includes code blocks conditional on the result of both @on_selected@ and @unregister_select@, which allows the channel implementation to ensure that all available channel resources have their corresponding code blocks run.

\section{Waituntil Performance}

The two \gls{synch_multiplex} utilities that are in the realm of comparability with the \CFA @waituntil@ statement are the Go @select@ statement and the \uC @_Select@ statement.
As such, two microbenchmarks are presented, one for Go and one for \uC, to contrast the systems.
The similar utilities discussed at the start of this chapter in C, Ada, Rust, \CC, and OCaml are either not meaningful or not feasible to benchmark against.
The @select@ system-call and related utilities in C are not comparable since they are system calls that go into the kernel and operate on file descriptors, whereas the @waituntil@ exists solely in user space.
Ada's @select@ only operates on methods, which is done in \CFA via the @waitfor@ utility, so it is not meaningful to benchmark against the @waituntil@, which cannot wait on that resource.
Rust and \CC only offer a busy-wait-based approach, which is not comparable to a blocking approach.
OCaml's @select@ waits on channels that are not comparable with \CFA and Go channels, so OCaml's @select@ is not benchmarked against Go's @select@ and \CFA's @waituntil@.
Given the differences in features, polymorphism, and expressibility among @waituntil@, @select@, and @_Select@, the aim of the microbenchmarking in this chapter is to show that these implementations lie in the same realm of performance, not to pick a winner.

\subsection{Channel Benchmark}

The channel multiplexing microbenchmarks compare \CFA's @waituntil@ and Go's @select@, where the resource being waited on is a set of channels.
The basic structure of the microbenchmark has the number of cores split evenly between producer and consumer threads, \ie, with 8 cores there are 4 producer threads and 4 consumer threads.
The number of clauses @C@ is also varied, with results shown for 2, 4, and 8 clauses.
Each clause has a respective channel that it operates on.
Each producer and consumer repeatedly waits to either produce to or consume from one of the @C@ clauses and respective channels.
An example in \CFA syntax of the work loop in the consumer main with @C = 4@ clauses follows.
\begin{cfa}
for (;;)
	waituntil( val << chans[0] ) {} or waituntil( val << chans[1] ) {}
	or waituntil( val << chans[2] ) {} or waituntil( val << chans[3] ) {}
\end{cfa}
A successful consumption is counted as a channel operation, and the throughput of these operations is measured over 10 seconds.
The first microbenchmark measures throughput of the producers and consumers synchronously waiting on the channels, and the second has the threads asynchronously wait on the channels.
The results are shown in Figures~\ref{f:select_contend_bench} and~\ref{f:select_spin_bench}, respectively.
\begin{figure}
\centering
\captionsetup[subfloat]{labelfont=footnotesize,textfont=footnotesize}
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Contend_2.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Contend_2.pgf}}
}
\bigskip
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Contend_4.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Contend_4.pgf}}
}
\bigskip
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Contend_8.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Contend_8.pgf}}
}
\caption{The channel synchronous multiplexing benchmark comparing Go select and \CFA \lstinline{waituntil} statement throughput (higher is better).}
\label{f:select_contend_bench}
\end{figure}

\begin{figure}
\centering
\captionsetup[subfloat]{labelfont=footnotesize,textfont=footnotesize}
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Spin_2.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Spin_2.pgf}}
}
\bigskip
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Spin_4.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Spin_4.pgf}}
}
\bigskip
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Spin_8.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Spin_8.pgf}}
}
\caption{The asynchronous multiplexing channel benchmark comparing Go select and \CFA \lstinline{waituntil} statement throughput (higher is better).}
\label{f:select_spin_bench}
\end{figure}

Figures~\ref{f:select_contend_bench} and~\ref{f:select_spin_bench} show similar results when comparing @select@ and @waituntil@.
In the AMD benchmarks, the performance is very similar as the number of cores scales.
The AMD machine has been observed to have higher cache-contention costs, which creates a bottleneck on the channel locks and results in similar scaling between \CFA and Go.
At low core counts, Go has significantly better performance, which is likely due to an optimization in its scheduler.
Go heavily optimizes thread handoffs on its local run-queue, which can result in very good performance for low numbers of threads that are parking/unparking each other~\cite{go:sched}.
In the Intel benchmarks, \CFA performs better than Go as the number of cores and the number of clauses scale.
This is likely due to Go's implementation choice of acquiring all channel locks when registering and unregistering channels on a @select@.
Go has to hold a lock for every channel, so it follows that performance degrades as the number of channels increases.
In \CFA, since races are consolidated without holding all locks, it scales much better with both cores and clauses, since more work can occur in parallel.
This scalability difference is more significant on the Intel machine than on the AMD machine, since the Intel machine has been observed to have lower cache-contention costs.

The Go approach of holding all internal channel locks in the @select@ has some additional drawbacks.
This approach results in pathological cases where Go's system throughput on channels can greatly suffer.
Consider the case where there are two channels, @A@ and @B@.
There are both a producer thread and a consumer thread, @P1@ and @C1@, selecting over both @A@ and @B@.
Additionally, there is another producer and another consumer thread, @P2@ and @C2@, that both operate solely on @B@.
Compared to \CFA, this setup results in significantly worse performance for Go, since @P2@ and @C2@ cannot operate in parallel with @P1@ and @C1@ due to all locks being acquired.
This case may not be as pathological as it seems.
If the set of channels belonging to one @select@ overlaps with the set of another @select@, the two @select@s lose the ability to operate in parallel.
The implementation in \CFA only ever holds a single lock at a time, resulting in better locking granularity.
A comparison of this pathological case is shown in Table~\ref{t:pathGo}.
The AMD results highlight the worst-case scenario for Go, since contention is more costly on this machine than on the Intel machine.

\begin{table}[t]
\centering
\setlength{\extrarowheight}{2pt}
\setlength{\tabcolsep}{5pt}
\caption{Throughput (channel operations per second) of \CFA and Go for a pathologically bad case for contention in Go's select implementation}
\label{t:pathGo}
\begin{tabular}{*{5}{r|}r}
	& \multicolumn{1}{c|}{\CFA} & \multicolumn{1}{c@{}}{Go} \\
	\hline
	AMD		& \input{data/nasus_Order} \\
	\hline
	Intel	& \input{data/pyke_Order}
\end{tabular}
\end{table}

Another difference between Go and \CFA is the order of clause selection when multiple clauses are available.
Go selects a clause ``randomly''~\cite{go:select}, whereas \CFA chooses clauses in the order they are listed.
This \CFA design decision allows users to set implicit priorities, which can result in more predictable behaviour, and even better performance in certain cases, such as the one shown in Table~\ref{t:pathGo}.
If \CFA did not have priorities, the performance difference in Table~\ref{t:pathGo} would be less significant, since @P1@ and @C1@ would compete to operate on @B@ more often under random selection.

\subsection{Future Benchmark}

The future benchmark compares \CFA's @waituntil@ with \uC's @_Select@, with both utilities waiting on futures.
\CFA's @waituntil@ and \uC's @_Select@ have very similar semantics; however, @_Select@ can only wait on futures, whereas the @waituntil@ is polymorphic.
They both support @and@ and @or@ operators, but the underlying implementation of the operators differs between @waituntil@ and @_Select@.
The @waituntil@ statement checks for statement completion using a predicate function, whereas the @_Select@ statement maintains a tree that represents the state of the internal predicate.

\begin{figure}
\centering
\subfloat[AMD Future Synchronization Benchmark]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Future.pgf}}
	\label{f:futureAMD}
}
\subfloat[Intel Future Synchronization Benchmark]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Future.pgf}}
	\label{f:futureIntel}
}
\caption{\CFA \lstinline{waituntil} and \uC \lstinline{_Select} statement throughput synchronizing on a set of futures with varying wait predicates (higher is better).}
\label{f:futurePerf}
\end{figure}

This microbenchmark aims to measure the impact of various predicates on the performance of the @waituntil@ and @_Select@ statements.
This benchmark does not try to directly compare the @waituntil@ and @_Select@ statements, since the performance of futures in \CFA and \uC differs by a significant margin, making them incomparable.
Results of this benchmark are shown in Figure~\ref{f:futurePerf}.
Each set of columns is marked with a name representing the predicate for that set of columns.
The predicate names and corresponding @waituntil@ statements are shown below:
\begin{cfa}
#ifdef OR
waituntil( A ) { get( A ); }
or waituntil( B ) { get( B ); }
or waituntil( C ) { get( C ); }
#endif
#ifdef AND
waituntil( A ) { get( A ); }
and waituntil( B ) { get( B ); }
and waituntil( C ) { get( C ); }
#endif
#ifdef ANDOR
waituntil( A ) { get( A ); }
and waituntil( B ) { get( B ); }
or waituntil( C ) { get( C ); }
#endif
#ifdef ORAND
( waituntil( A ) { get( A ); }
or waituntil( B ) { get( B ); } )	// parentheses give or higher precedence
and waituntil( C ) { get( C ); }
#endif
\end{cfa}
In Figure~\ref{f:futurePerf}, the @OR@ column for \CFA is more performant than the other \CFA predicates, likely due to the special-casing of @waituntil@ statements with only @or@ operators.
For both \uC and \CFA, the @AND@ column is the least performant, which is expected since all three futures need to be fulfilled for each statement completion, unlike any of the other predicates.
Interestingly, \CFA has lower variation across predicates on the AMD (excluding the special @OR@ case), whereas \uC has lower variation on the Intel.
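These cost differences track the shape of the underlying predicates.
For intuition, per Section~\ref{s:wu_guards}, the compiler generates a predicate routine per @waituntil@; the following are hedged sketches of what such routines might look like for the four variants, with names and signatures purely illustrative.
\begin{cfa}
// s[i] is the status of clause i, true once its resource is available/consumed
bool pred_OR   ( bool s[3] ) { return s[0] || s[1] || s[2]; }
bool pred_AND  ( bool s[3] ) { return s[0] && s[1] && s[2]; }
bool pred_ANDOR( bool s[3] ) { return s[0] && s[1] || s[2]; }
bool pred_ORAND( bool s[3] ) { return ( s[0] || s[1] ) && s[2]; }
\end{cfa}
The @AND@ predicate is only satisfied once all three statuses are set, which is consistent with it being the least performant column for both languages.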