% ======================================================================
% ======================================================================
\chapter{Waituntil}\label{s:waituntil}
% ======================================================================
% ======================================================================

Consider the following motivational problem, which we title the bathroom problem.
There are @N@ stalls (resources) in a bathroom and there are @M@ people (threads).
Each stall has its own lock, since only one person may occupy a stall at a time.
The standard way humans solve this problem is to check whether the stalls are occupied; if all are occupied, they queue, watch the stalls until one becomes free, and then enter and lock that stall.
This solution is simple in real life, but can be difficult to implement in a concurrent context, as it requires the threads to somehow wait on all the stalls at the same time.
The naive solution is for each thread to spin indefinitely, continually checking the stalls until one is free.
This wastes cycles and provides no fairness among waiting threads, as a thread takes the first available stall without any regard for the other waiting threads.
Waiting for the first available stall without spinning requires concurrent tools that provide \gls{synch_multiplex}: the ability to wait synchronously for a resource or set of resources.

\section{History of Synchronous Multiplexing}
There is a history of tools that provide \gls{synch_multiplex}.
Some of the most well-known are the Unix system utilities select(2)~\cite{linux:select}, poll(2)~\cite{linux:poll}, and epoll(7)~\cite{linux:epoll}, and the select statement provided by Go~\cite{go:selectref}.

Before one can examine the history of \gls{synch_multiplex} implementations in detail, the preceding theory must be discussed.
The theory surrounding this topic was largely introduced by Hoare in his 1985 CSP book~\cite{Hoare85} and his later work with Roscoe on the theoretical language Occam~\cite{Roscoe88}.
Both include guards for communication channels and the ability to wait for a single channel to be ready out of a set of channels.
The work on Occam in \cite{Roscoe88} calls its \gls{synch_multiplex} primitive ALT, which waits for one resource out of a set to become available and then executes a corresponding block of code.
Waiting for one resource out of a set of resources can be thought of as a logical exclusive-or over the set of resources.
Guards are a conditional operator similar to an @if@, except they apply to the resource being waited on.
If a guard is false, then the resource it guards is considered to not be in the set of resources being waited on.
Guards can be simulated using @if@ statements, but doing so requires $2^N$ @if@ cases, where @N@ is the number of guards.
The equivalence between guards and exponential @if@ statements comes from an Occam ALT statement rule~\cite{Roscoe88}, which is presented in \CFA syntax in Figure~\ref{f:wu_if}.
Providing guards allows for easy toggling of waituntil clauses without introducing repeated code.

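For instance, assuming guards may appear on any clause (per the removal rule described above), two guards already require $2^2 = 4$ cases when simulated with @if@ statements; the guard predicates @p1@ and @p2@ below are illustrative.
\begin{cfa}
when( p1 ) waituntil( A ) {}
or when( p2 ) waituntil( B ) {}
or waituntil( C ) {}
// === equivalent expansion into four cases
if ( p1 )
    if ( p2 ) { waituntil( A ) {} or waituntil( B ) {} or waituntil( C ) {} }
    else { waituntil( A ) {} or waituntil( C ) {} }
else
    if ( p2 ) { waituntil( B ) {} or waituntil( C ) {} }
    else { waituntil( C ) {} }
\end{cfa}
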
\begin{figure}
\begin{cfa}
when( predicate ) waituntil( A ) {}
or waituntil( B ) {}
// ===
if ( predicate ) {
    waituntil( A ) {}
    or waituntil( B ) {}
} else {
    waituntil( B ) {}
}
\end{cfa}
\caption{Occam's guard to if statement equivalence shown in \CFA syntax.}
\label{f:wu_if}
\end{figure}

Switching to implementations, it is important to discuss the resources being multiplexed.
While the aforementioned theory discusses waiting on channels, the earliest known implementation of a synchronous multiplexing tool, Unix's select(2), multiplexes over file descriptors.
The select(2) system call takes as arguments sets of file descriptors to wait on and an optional timeout.
Select blocks until either some subset of the file descriptors is ready or the timeout expires.
All ready file descriptors are returned.
This early implementation differs from the theory: when a call to select returns, it may provide more than one ready file descriptor.
As such, select has logical-or multiplexing semantics, whereas the theory describes exclusive-or semantics.
This is not a drawback.
A user can easily achieve exclusive-or semantics with select by arbitrarily choosing only one of the returned descriptors to operate on.
Select was followed by poll(2), and later epoll(7), which both improved upon drawbacks in their predecessors.
The syscall poll(2) improved on select by allowing users to monitor descriptors with values higher than 1024, which select does not support.
Epoll then improved on poll by returning only the set of ready file descriptors, rather than requiring a scan of the whole set of monitored descriptors to find which ones are ready.

It is worth noting that the \gls{synch_multiplex} tools mentioned so far interact directly with the operating system and are often used to communicate between processes.
Later, \gls{synch_multiplex} tools started to appear in user space to support fast multiplexed concurrent communication between threads.
An early example of user-space \gls{synch_multiplex} is the select statement in Ada~\cite[\S~9.7]{Ichbiah79}.
The select statement in Ada allows a task to multiplex over a subset of its own methods on which it would like to @accept@ calls.
Tasks in Ada can be thought of as threads that are objects of a specific class and, as such, have methods, fields, etc.
This statement has the same exclusive-or semantics as ALT from Occam and supports guards as described in Occam; however, it multiplexes over method calls rather than channels.
A code block is associated with each @accept@, and the method that is @accept@ed first has its corresponding code block run after the task unblocks.
In this way, the select statement in Ada provides rendezvous points for threads, rather than providing some resource through message passing.
The select statement in Ada also supports an optional timeout with the same semantics as select(2), and provides an @else@ clause.
The @else@ clause changes the synchronous multiplexing to asynchronous multiplexing.
If an @else@ clause is present and no calls to the @accept@ed methods are immediately available, the code block associated with the @else@ is run and the task does not block.

A popular example of user-space \gls{synch_multiplex} is Go's select statement~\cite{go:selectref}.
Go's select statement operates on channels, has the same exclusive-or semantics as the ALT primitive from Occam, and, like ALT and Ada, has an associated code block for each clause.
However, unlike Ada and ALT, Go does not provide guards for its select statement cases.
Go provides a timeout utility and a @default@ clause, which has the same semantics as Ada's @else@ clause.

\uC provides \gls{synch_multiplex} over futures with its @_Select@ statement and Ada-style \gls{synch_multiplex} over monitor methods with its @_Accept@ statement~\cite{uC++}.
The @_Accept@ statement builds upon the select statement offered by Ada by supporting both @and@ and @or@ semantics, which can be used together in the same statement.
These semantics are also supported by \uC's @_Select@ statement.
This enables fully expressive \gls{synch_multiplex} predicates.

There are many other languages that provide \gls{synch_multiplex}, including Rust's @select!@ over futures~\cite{rust:select}, OCaml's @select@ over channels~\cite{ocaml:channe}, and C++14's @when_any@ over futures~\cite{cpp:whenany}.
Note that while C++14 and Rust provide \gls{synch_multiplex}, their implementations leave much to be desired, as both rely on busy-wait polling to wait on multiple resources.

\section{Other Approaches to Synchronous Multiplexing}
To avoid the need for \gls{synch_multiplex}, all communication between threads/processes has to come from a single source.
One key example is Erlang, in which each process has a single heterogeneous mailbox that is the sole source of concurrent communication, removing the need for \gls{synch_multiplex} as there is only one place to wait on resources.
In a similar vein, actor systems circumvent the \gls{synch_multiplex} problem as actors are traditionally non-blocking, so they only block when waiting for the next message.
While these approaches solve the \gls{synch_multiplex} problem, they introduce other issues.
Consider the case where a thread with a single source of communication (as in Erlang and actor systems) wants one of a set of @N@ resources.
It requests all @N@ resources and waits for responses.
In the meantime, the thread may receive other communication, which it has to either save and postpone or discard.
After the thread receives one of the @N@ resources, it continues to receive the others it requested, even if it no longer needs them.
If the requests for the other resources need to be retracted, the burden falls on the programmer to determine how to synchronize appropriately to ensure that only one resource is delivered.

\section{\CFA's Waituntil Statement}
The new \CFA \gls{synch_multiplex} utility introduced in this work is the @waituntil@ statement.
\CFA already has a @waitfor@ statement that supports Ada-style \gls{synch_multiplex} over monitor methods, so the @waituntil@ statement focuses on synchronizing over other resources.
All of the \gls{synch_multiplex} features mentioned so far are monomorphic, each supporting only one kind of resource to wait on: select(2) supports file descriptors, Go's select supports channel operations, \uC's @_Select@ supports futures, and Ada's select supports monitor method calls.
The waituntil statement in \CFA is polymorphic and provides \gls{synch_multiplex} over any objects that satisfy the trait in Figure~\ref{f:wu_trait}.

\begin{figure}
\begin{cfa}
forall(T & | sized(T))
trait is_selectable {
    // For registering a waituntil stmt on a selectable type
    bool register_select( T &, select_node & );

    // For unregistering a waituntil stmt from a selectable type
    bool unregister_select( T &, select_node & );

    // on_selected is run on the selecting thread prior to executing the statement associated with the select_node
    void on_selected( T &, select_node & );
};
\end{cfa}
\caption{Trait for types that can be passed into \CFA's waituntil statement.}
\label{f:wu_trait}
\end{figure}

Currently, locks, channels, futures, and timeouts are supported by the waituntil statement, and this set can be expanded as other use cases arise.
The waituntil statement supports guarded clauses, like Ada and Occam, supports both @or@ and @and@ semantics, like \uC, and provides an @else@ clause for asynchronous multiplexing.
An example of \CFA waituntil usage is shown in Figure~\ref{f:wu_example}, where the waituntil statement waits for either @Lock@ to be available, or for a value to be read from @Channel@ into @i@ and for @Future@ to be fulfilled.
The semantics of the waituntil statement are discussed in detail in the next section.

\begin{figure}
\begin{cfa}
future(int) Future;
channel(int) Channel;
owner_lock Lock;
int i = 0;

waituntil( Lock ) { ... }
or when( i == 0 ) waituntil( i << Channel ) { ... }
and waituntil( Future ) { ... }
\end{cfa}
\caption{Example of \CFA's waituntil statement.}
\label{f:wu_example}
\end{figure}

\section{Waituntil Semantics}
There are two parts of the waituntil semantics to discuss: the semantics of the statement itself, \ie the @and@, @or@, @when@ guard, and @else@ semantics, and the semantics of how the waituntil interacts with types such as channels, locks, and futures.
The semantics of the statement itself are discussed first.

\subsection{Waituntil Statement Semantics}
The @or@ semantics are the most straightforward and nearly match those laid out in the ALT statement from Occam: the clauses have an exclusive-or relationship, where the first clause to become available is run, and only one clause is run.
\CFA's @or@ semantics differ from ALT semantics in one respect: instead of randomly picking a clause when multiple are available, the clause that appears first in the statement is picked.
\eg in the following example, if @foo@ and @bar@ are both available, @foo@ is always selected since it comes first in the order of waituntil clauses.
\begin{cfa}
future(int) bar;
future(int) foo;
waituntil( foo ) { ... }
or waituntil( bar ) { ... }
\end{cfa}

The @and@ semantics match the @and@ semantics used by \uC.
When multiple clauses are joined by @and@, the waituntil makes a thread wait for all of them to become available, but runs the corresponding code blocks \emph{as they become available}.
As @and@ clauses become available, the thread is woken to run their code blocks, and then the thread waits again until all clauses have been run.
This allows work to be done in parallel while synchronizing over a set of resources, and is the principal motivation for the @and@ operator.
If the @and@ operator instead waited for all clauses to be available before running any of them, it would provide little more than acquiring those resources one by one in subsequent lines of code.
The @and@ operator binds more tightly than the @or@ operator.
To give an @or@ operator higher precedence, brackets can be used.
\eg the following waituntil unconditionally waits for @C@ and one of either @A@ or @B@, since the @or@ is given higher precedence via brackets.
\begin{cfa}
(waituntil( A ) { ... }
or waituntil( B ) { ... } )
and waituntil( C ) { ... }
\end{cfa}
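
For example, in the following sketch (with future declarations in the style of Figure~\ref{f:wu_example}), each code block runs as soon as its future is fulfilled, in whichever order that happens, and the thread continues past the statement only after both blocks have run.
\begin{cfa}
future(int) X, Y;
waituntil( X ) { /* runs as soon as X is fulfilled */ }
and waituntil( Y ) { /* runs as soon as Y is fulfilled */ }
// reached only after both code blocks have run
\end{cfa}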

The guards in the waituntil statement are called @when@ clauses.
The @when@ clause is passed a boolean expression.
All the @when@ boolean expressions are evaluated before the waituntil statement is run.
The guards in Occam's ALT effectively toggle clauses on and off, where a clause is only evaluated and waited on if the corresponding guard is @true@.
The guards in the waituntil statement operate the same way, but require some nuance since both @and@ and @or@ operators are supported.
When a guard is false and a clause is removed, it can be thought of as removing that clause and its preceding operator from the statement.
\eg the two waituntil statements in the following example are semantically identical.
\begin{cfa}
when(true) waituntil( A ) { ... }
or when(false) waituntil( B ) { ... }
and waituntil( C ) { ... }
// ===
waituntil( A ) { ... }
and waituntil( C ) { ... }
\end{cfa}

The @else@ clause of the waituntil has the same semantics as the @else@ clause in Ada.
If the statement cannot be satisfied immediately and there is an @else@ clause, the @else@ clause is run and the thread does not block.
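
As a brief sketch of the @else@ clause (reusing the declarations from Figure~\ref{f:wu_example}; clause bodies are elided and the exact clause layout is illustrative):
\begin{cfa}
waituntil( Lock ) { ... }
or waituntil( Future ) { ... }
else { ... }  // run immediately if the clauses above cannot be satisfied; the thread does not block
\end{cfa}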

\subsection{Waituntil Type Semantics}
As described earlier, to support interaction with the waituntil statement, a type must satisfy the trait shown in Figure~\ref{f:wu_trait}.
The waituntil statement registers and unregisters itself with each resource via calls to @register_select@ and @unregister_select@, respectively.
When a resource becomes available, @on_selected@ is run by the selecting thread before the corresponding code block is executed.
Many types do not need @on_selected@, but it is provided for types that must check or update some state before the resource can be accessed in the code block.
The register and unregister routines in the trait return booleans.
The return value of @register_select@ is @true@ if the resource is immediately available and @false@ otherwise.
The return value of @unregister_select@ is @true@ if the corresponding code block should be run after unregistration and @false@ otherwise.
The routine @on_selected@ and the return value of @unregister_select@ are needed to support channels as a resource.
Channels and their interaction with the waituntil statement are discussed in more detail in Section~\ref{s:wu_chans}.
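
As a minimal sketch of satisfying this trait, consider a hypothetical resource that is always immediately available; the type name and trivial bodies are illustrative only and not part of the \CFA runtime.
\begin{cfa}
struct always_ready {};  // hypothetical resource that never blocks a waituntil

// the resource is always immediately available, so registration reports availability right away
bool register_select( always_ready & this, select_node & node ) { return true; }

// nothing is left pending, so no code block needs to run during unregistration
bool unregister_select( always_ready & this, select_node & node ) { return false; }

// no state needs to be checked or updated before the code block runs
void on_selected( always_ready & this, select_node & node ) {}
\end{cfa}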

\section{Waituntil Implementation}
The waituntil statement is not inherently complex and can be described in a few steps.
The complexity of the statement comes from the race conditions and synchronization needed to support various primitives.
The basic steps of the waituntil statement are as follows.

First, the waituntil statement creates a @select_node@ per resource being waited on.
The @select_node@ is an object that stores the waituntil data pertaining to one of the resources.
Each @select_node@ is then registered with its corresponding resource.
The thread executing the waituntil then enters a loop that runs until the entire waituntil statement is satisfied.
In each iteration of the loop, the thread attempts to block.
If any clause is satisfied, the block fails and the thread proceeds; otherwise, the block succeeds.
After proceeding past the block, all clauses are checked for completion and the completed clauses have their code blocks run.
Once the thread escapes the loop, the @select_node@s are unregistered from the resources.
In the case where the block succeeds, the thread is woken by the thread that marks one of the resources as available.
Pseudocode detailing these steps is presented in the following code block.

\begin{cfa}
select_node s[N];                            // N select nodes, one per clause
for ( node in s )
    register_select( node's resource, node );
while ( waituntil statement not satisfied ) {
    try to block;                            // fails if some clause is already satisfied
    for ( clause in waituntil statement )
        if ( clause's resource is available ) run the clause's code block;
}
for ( node in s )
    unregister_select( node's resource, node );
\end{cfa}

These steps give a basic, though slightly simplified, overview of how the statement works.
Digging into parts of the implementation sheds light on the specifics and fills in the details.

\subsection{Locks}
Locks are one of the resources supported by the waituntil statement.
When a thread waits on multiple locks via a waituntil, it enqueues a @select_node@ in each lock's waiting queue.
When a @select_node@ reaches the front of a queue and gains ownership of that lock, the blocked thread is notified.
The lock is held until the node is unregistered.
To prevent the waiting thread from holding many locks at once and potentially introducing deadlock, the node is unregistered right after the corresponding code block is executed.
This prevents deadlocks since the waiting thread never holds a lock while waiting on another resource.
As such, the only nodes unregistered at the end of the statement are those whose code blocks have not run.
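
A small sketch of these semantics (using the @owner_lock@ type from Figure~\ref{f:wu_example}):
\begin{cfa}
owner_lock A, B;
waituntil( A ) { /* A is held only inside this block; it is released when the node is unregistered */ }
and waituntil( B ) { /* B is held only inside this block */ }
// neither lock is held here, so the thread never waits on one lock while holding the other
\end{cfa}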

\subsection{Timeouts}
Timeouts in the waituntil statement take the form of a duration passed to a @sleep@ or @timeout@ call, as shown in the following code.

\begin{cfa}
waituntil( sleep( 1`ms ) ) {}
waituntil( timeout( 1`s ) ) {} or waituntil( timeout( 2`s ) ) {}
waituntil( timeout( 1`ns ) ) {} and waituntil( timeout( 2`s ) ) {}
\end{cfa}

The timeout implementation highlights a key part of the waituntil semantics: the clause expression is evaluated before the waituntil runs.
As such, calls to @sleep@ and @timeout@ do not block, but instead return an object of a type that satisfies the @is_selectable@ trait.
This mechanism is also needed for types that support multiple operations, such as channels, which support both reading and writing.

\subsection{Channels}\label{s:wu_chans}
To support waiting on both reading from and writing to channels, the operators @?<<?@ and @?>>?@ are used to denote reading and writing, respectively, where the left-hand operand is the value and the right-hand operand is the channel.
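
For example, in the following sketch (channel and variable names are illustrative), the first clause reads a value from a channel into @i@ and the second writes @i@ into another channel.
\begin{cfa}
channel(int) In, Out;
int i = 2;
waituntil( i << In ) { ... }                 // read: a value from In is placed into i
or waituntil( i >> Out ) { ... }             // write: i is inserted into Out
\end{cfa}
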
Channels require significant complexity to wait on, for a few reasons.
The first reason is that reading from or writing to a channel is a mutating operation: once a read or write occurs, the state of the channel has changed.
In comparison, for standard locks and futures, if a lock is acquired and then released, or a future is ready but not accessed, the states of the lock and the future are left unchanged.
Hence, if a waituntil over locks or futures has some available resources that are not consumed, it is not an issue.
However, if a thread modifies a channel on behalf of a thread blocked on a waituntil statement, it is important that the corresponding waituntil code block is run; otherwise, there is a potentially erroneous mismatch between the channel state and the associated side effects.
As such, the @unregister_select@ routine has a boolean return value that channels use to indicate that the operation was completed but the corresponding code block has not yet been run.
Consequently, some channel code blocks may be run as part of the unregistration.
Furthermore, if a statement contains both @and@ and @or@ operators, the @or@ operators lose their exclusive-or semantics because of this race between operations and unregistration.

It was deemed important that exclusive-or semantics be maintained when only @or@ operators are used, so this situation is special-cased: it is handled by having all clauses race to set a value \emph{before} operating on the channel.
This approach is infeasible when @and@ and @or@ operators are mixed.
To show this, consider the following waituntil statement.

\begin{cfa}
waituntil( i >> A ) {} and waituntil( i >> B ) {}
or waituntil( i >> C ) {} and waituntil( i >> D ) {}
\end{cfa}

If exclusive-or semantics were followed, this waituntil would only run the code blocks for @A@ and @B@, or the code blocks for @C@ and @D@.
However, racing before operation completion in this case introduces a race whose complexity increases with the size of the waituntil statement.
In the example above, for @i@ to be inserted into @C@ while preserving the exclusive-or, it must be ensured that @i@ can also be inserted into @D@.
Furthermore, the race for the @or@ would also need to be won.
However, due to TOCTOU (time-of-check to time-of-use) issues, one cannot know that all resources are available without acquiring the internal locks of all channels in the subtree.
This is not a good solution for two reasons.
First, it is possible that once all the locks are acquired, the subtree is not satisfied and they must all be released; this incurs a high cost for signalling threads and heavily increases contention on the internal channel locks.
Second, the waituntil statement is polymorphic and can support resources that do not have internal locks, which also makes this approach infeasible.
As such, the exclusive-or semantics are lost when mixing @and@ and @or@ operators, since they cannot be supported without significant complexity and a performance penalty for the waituntil statement.

The mechanism by which the predicate of the waituntil is checked is discussed in more detail in Section~\ref{s:wu_guards}.

Another consideration introduced by channels is that supporting both reading and writing to a channel in a waituntil means that one thread's waituntil clause may be the notifier for another thread's waituntil clause.
This becomes a problem when dealing with the special-cased @or@, where the clauses need to win a race to operate on a channel.
When a special-cased @or@ is inserting on one thread and another special-cased @or@ is blocked consuming on another thread, there is not one but two races that need to be consolidated by the inserting thread.
(The same situation can occur in the opposite direction, with a blocked producer and a signalling consumer.)
For the inserting thread to know that the insert succeeded, it needs to win the race for its own waituntil and the race for the other thread's waituntil.
Go solves this problem in its select statement by acquiring the internal locks of all channels before registering the select on the channels.
This eliminates the race, since no other thread can operate on the blocked channel while its lock is held.

This approach is not used in \CFA since the waituntil is polymorphic.
Not all types in a waituntil have an internal lock, and for non-channel types, acquiring all the locks incurs extra, unneeded overhead.
Instead, this race is consolidated in \CFA in two phases by introducing an intermediate pending status value for the race.
This case is detectable, and if detected, the thread attempting to signal first races to set its own race flag to pending.
If it succeeds, it then attempts to set the consumer's race flag to its success value.
If the producer successfully sets the consumer's race flag, the operation proceeds; if not, the signalling thread sets its own race flag back to the initial value.
If any other thread attempts to set the producer's flag and sees a pending value, it waits until the value changes before proceeding, ensuring that if the producer fails, the signal is not lost.
This protocol ensures that signals are not lost and that the two races are resolved safely.
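
The following is a minimal sketch of this two-phase protocol using C11-style atomics; the flag representation, state names, and routine are illustrative assumptions rather than the actual \CFA runtime code.
\begin{cfa}
#include <stdatomic.h>
#include <stdbool.h>

enum { AVAIL, PENDING, WON };                // hypothetical race-flag states

// producer side: try to commit its own clause and the blocked consumer's clause
bool try_signal( atomic_int * my_flag, atomic_int * their_flag ) {
    int expected = AVAIL;
    // phase 1: race to mark our own clause as pending
    if ( ! atomic_compare_exchange_strong( my_flag, &expected, PENDING ) )
        return false;                        // lost the race on our own clause
    expected = AVAIL;
    // phase 2: race to win the blocked thread's clause
    if ( atomic_compare_exchange_strong( their_flag, &expected, WON ) ) {
        atomic_store( my_flag, WON );        // both races won: the operation proceeds
        return true;
    }
    atomic_store( my_flag, AVAIL );          // roll back so a later signal is not lost
    return false;
}
\end{cfa}
In this sketch, any other thread that observes @PENDING@ on a flag waits for it to change before attempting its own update, matching the waiting rule described above.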

Channels in \CFA have exception-based shutdown mechanisms that the waituntil statement needs to support.
These exception mechanisms are what motivated the @on_selected@ routine.
This routine is needed by channels to detect whether they have been closed when a thread wakes from a waituntil statement, ensuring that the appropriate behaviour is taken.

\subsection{Guards and Statement Predicate}\label{s:wu_guards}
Checking whether a synchronous multiplexing utility is done is trivial when its clauses have an or/xor relationship, since any resource becoming available means that the blocked thread can proceed.
The \gls{synch_multiplex} utilities in \uC and \CFA involve both @and@ and @or@ operators, which makes checking for completion of the statement more difficult.

The \uC @_Select@ statement solves this problem by constructing a tree of the resources, where the internal nodes are operators and the leaves are the resources.
The internal nodes also store the status of each of the subtrees beneath them.
When resources become available, their status is modified, and the status changes percolate from the leaf nodes up through the internal nodes, updating the state of the statement.
Once the root of the tree has both subtrees marked as @true@, the statement is complete.
As an optimization, when the internal nodes are updated, their subtrees marked as @true@ are effectively pruned and are not touched again.
To support guards in \uC's @_Select@ statement, the tree prunes a branch if its guard is false.

The \CFA waituntil statement blocks a thread until a set of resources becomes available that satisfies the underlying predicate.
The waiting condition of the waituntil statement can be represented as a predicate over the resources, joined by the waituntil operators, where a resource is @true@ if it is available and @false@ otherwise.
In \CFA, this representation is the mechanism used to check whether a thread is done waiting on the waituntil.
Leveraging the compiler, a routine is generated per waituntil statement that is passed the statuses of the resources and returns a boolean that is @true@ when the waituntil is done and @false@ otherwise.
To support guards on the \CFA waituntil statement, the status of a resource disabled by a guard is set to ensure that the predicate function behaves as if that resource is no longer part of the predicate.
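
As a rough sketch (the routine name and parameter form are illustrative, not the compiler's actual output), the generated predicate for @waituntil( A ) {} or waituntil( B ) {} and waituntil( C ) {}@ mirrors the operator structure of the statement, with @and@ binding more tightly than @or@.
\begin{cfa}
// hypothetical generated predicate: true once the statement is satisfied
bool wu_predicate( bool A_avail, bool B_avail, bool C_avail ) {
    return A_avail || ( B_avail && C_avail );
}
\end{cfa}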

\uC's @_Select@ statement supports operators both inside and outside its clauses.
\eg in the following example, each code block runs once the predicate inside its round brackets is satisfied.

% C_TODO put this in uC++ code style not cfa-style
\begin{cfa}
Future_ISM<int> A, B, C, D, E;
_Select( A || B && C ) { ... }
and _Select( D && E ) { ... }
\end{cfa}

This is more expressive than the waituntil statement in \CFA.
In \CFA, since the waituntil statement supports more resources than just futures, implementing operators inside clauses was avoided for a few reasons.
As an example, suppose \CFA supported operators inside clauses and consider the code snippet in Figure~\ref{f:wu_inside_op}.

\begin{figure}
\begin{cfa}
owner_lock A, B, C, D;
waituntil( A && B ) { ... }
or waituntil( C && D ) { ... }
\end{cfa}
\caption{Example of unsupported operators inside clauses in \CFA.}
\label{f:wu_inside_op}
\end{figure}

If the waituntil in Figure~\ref{f:wu_inside_op} used the same semantics as described above and acquired each lock as it became available, it would open itself up to possible deadlock, since it would hold locks while waiting on other resources.
As such, other semantics would be needed to ensure that this operation is safe.
One possibility is to use the \CC @scoped_lock@ approach described in Section~\ref{s:DeadlockAvoidance}; however, the potential for livelock leaves much to be desired.
Another possibility is to use resource ordering similar to \CFA's @mutex@ statement, but that alone is not sufficient if the resource ordering is not used everywhere.
Additionally, using resource ordering could conflict with other semantics of the waituntil statement.
To show this conflict, suppose the locks in Figure~\ref{f:wu_inside_op} are ordered @D@, @B@, @C@, @A@.
If all the locks are available, it becomes complex to simultaneously respect the clause ordering of the waituntil in Figure~\ref{f:wu_inside_op} when choosing which code block to run and respect the lock ordering @D@, @B@, @C@, @A@.
One other way this could be implemented is to wait until all resources for a given clause are available before proceeding to acquire them, but this also quickly becomes a poor approach.
It does not work due to TOCTOU issues, as it is not possible to ensure that the full set of resources is available without holding them all first.
Operators inside clauses in \CFA could potentially be implemented by carefully circumventing the problems involved, but it was not deemed an important feature when taking into account the runtime cost that would need to be paid to handle these situations.
The problem of operators inside clauses also becomes a difficult issue to handle when supporting channels.
If internal operators were supported, some mechanism would be needed to ensure that channels inside internal operators are modified if and only if the corresponding code block is run, but that is not feasible for the reasons described in the exclusive-or discussion of Section~\ref{s:wu_chans}.

\section{Waituntil Performance}
The two \gls{synch_multiplex} utilities that are in the realm of comparability with the \CFA waituntil statement are the Go @select@ statement and the \uC @_Select@ statement.
As such, two microbenchmarks are presented, one comparing with Go and one with \uC, to contrast the systems.
The similar utilities discussed at the start of this chapter in C, Ada, Rust, \CC, and OCaml are either not meaningful or not feasible to benchmark against.
The select(2) and related utilities in C are not comparable since they are system calls that go into the kernel and operate on file descriptors, whereas the waituntil exists solely in user space.
Ada's @select@ only operates on method calls, which are handled in \CFA via the @waitfor@ statement, so it is not feasible to benchmark against the @waituntil@, which cannot wait on that resource.
Rust and \CC only offer a busy-wait-based approach, which is not meaningfully comparable to a blocking approach.
OCaml's @select@ waits on channels that are not comparable with \CFA and Go channels, making it infeasible to compare the OCaml @select@ with Go's @select@ and \CFA's @waituntil@.
Given the differences in features, polymorphism, and expressibility between @waituntil@, @select@, and @_Select@, the aim of the microbenchmarks in this chapter is to show that these implementations lie in the same realm of performance, not to pick a winner.

\subsection{Channel Benchmark}
The channel microbenchmark compares \CFA's waituntil and Go's select, where the resource being waited on is a set of channels.

%C_TODO explain benchmark

%C_TODO show results

%C_TODO discuss results

\subsection{Future Benchmark}
The future benchmark compares \CFA's waituntil with \uC's @_Select@, with both utilities waiting on futures.

%C_TODO explain benchmark

%C_TODO show results

%C_TODO discuss results