% ======================================================================
% ======================================================================
\chapter{Channels}\label{s:channels}
% ======================================================================
% ======================================================================

Most modern concurrent programming languages do not subscribe to just one style of communication among threads; instead, they provide features that support multiple approaches.
Channels are a concurrent-language feature used to perform \Newterm{message-passing concurrency}: a model of concurrency where threads communicate by sending data as messages (mostly non\-blocking) and synchronize by receiving sent messages (blocking).
This model is an alternative to shared-memory concurrency, where threads communicate directly by changing shared state.

Channels were first introduced by Kahn~\cite{Kahn74} and extended by Hoare~\cite{CSP} (CSP).
Both papers present a pseudo (unimplemented) concurrent language where processes communicate using input/output channels to send data.
Both languages are highly restrictive.
Kahn's language restricts a reading process to waiting for data on only a single channel at a time, and different writing processes cannot send data on the same channel.
Hoare's language restricts both the sender and receiver to explicitly name the process that is the destination of a channel send or the source of a channel receive.
These channel semantics remove the ability to have an anonymous sender or receiver.
Additionally, all channel operations in CSP are synchronous (no buffering).
Channels as an advanced programming-language feature have been popularized in recent years by the language Go~\cite{Go}, which encourages the use of channels as its fundamental concurrency feature.
It was the popularity of Go channels that led to their implementation in \CFA.
Neither Go nor \CFA channels have the restrictions of the early channel-based concurrent systems.

Other popular languages and libraries that provide channels include C++ Boost~\cite{boost:channel}, Rust~\cite{rust:channel}, Haskell~\cite{haskell:channel}, and OCaml~\cite{ocaml:channel}.
Boost channels only support asynchronous (non-blocking) operations, and Rust channels are limited to one consumer per channel.
Haskell channels are unbounded in size, and OCaml channels are zero-sized.
These restrictions in Haskell and OCaml are likely due to their functional approach, which results in both using a list as the underlying channel data structure.
These languages and libraries are not discussed further, as their channel implementations are not comparable to the bounded-buffer style channels present in Go and \CFA.

\section{Producer-Consumer Problem}
A channel is an abstraction for a shared-memory buffer, which turns the implementation of a channel into the producer-consumer problem.
The producer-consumer problem, also known as the bounded-buffer problem, was introduced by Dijkstra~\cite[\S~4.1]{Dijkstra65}.
In the problem, threads interact with a buffer in two ways: producing threads insert values into the buffer and consuming threads remove values from the buffer.
In general, a buffer needs protection to ensure a producer only inserts into a non-full buffer and a consumer only removes from a non-empty buffer (synchronization).
As well, a buffer needs protection from concurrent access by multiple producers or consumers attempting to insert or remove simultaneously (MX).

\section{Channel Size}\label{s:ChannelSize}
Channels come in three flavours of buffers, sketched below:
\begin{enumerate}
\item
Zero-sized implies the communication is synchronous, \ie the producer must wait for the consumer to arrive or vice versa for a value to be communicated.
\item
Fixed-sized (bounded) implies the communication is asynchronous, \ie the producer can proceed up to the buffer size without waiting, and similarly the consumer can proceed while the buffer is non-empty.
\item
Infinite-sized (unbounded) implies the communication is asynchronous, \ie the producer never waits but the consumer waits when the buffer is empty.
Since memory is finite, all unbounded buffers are ultimately bounded;
this restriction must be part of their implementation.
\end{enumerate}
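
For example, using the channel constructor appearing in later examples, where the argument is the buffer capacity, the first two flavours might be declared as follows (a sketch only; the choice of a zero capacity for the synchronous flavour is an assumption based on the sizes discussed above):
\begin{cfa}
channel( int ) sync_chan{ 0 };          $\C{// zero-sized: an insert blocks until a remove arrives}$
channel( int ) buffered_chan{ 128 };    $\C{// fixed-sized: an insert blocks only when 128 values are buffered}$
\end{cfa}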

In general, the order in which values are processed by the consumer does not affect the correctness of the producer-consumer problem.
For example, the buffer can be \gls{lifo}, \gls{fifo}, or prioritized with respect to insertion and removal.
However, like MX, a buffer should ensure every value is eventually removed after some reasonable bounded time (no long-term starvation).
The simplest way to prevent starvation is to implement the buffer as a queue, either with a cyclic array or linked nodes.
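As a point of reference, the following is a minimal sketch of such a cyclic-array (ring-buffer) queue; it shows only the \gls{fifo} bookkeeping, omits the synchronization and MX discussed above (callers must ensure the buffer is non-full/non-empty and hold mutual exclusion), and the type and field names are illustrative rather than taken from the \CFA implementation.
\begin{cfa}
struct ring_buffer {
        int * elems;                            $\C{// cyclic array of capacity cap}$
        size_t cap, front, count;
};
void push( ring_buffer & q, int v ) with( q ) {
        elems[ (front + count) % cap ] = v;     $\C{// insert at the back}$
        count += 1;
}
int pop( ring_buffer & q ) with( q ) {
        int v = elems[ front ];                 $\C{// remove from the front, giving FIFO order}$
        front = (front + 1) % cap;
        count -= 1;
        return v;
}
\end{cfa}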

\section{First-Come First-Served}
As pointed out, a bounded buffer requires MX among multiple producers or consumers.
This MX should be fair among threads, independent of the \gls{fifo} buffer being fair among values.
Fairness among threads is called \gls{fcfs} and was defined by Lamport~\cite[p.~454]{Lamport74}.
\gls{fcfs} is defined in relation to a doorway~\cite[p.~330]{Lamport86II}, which is the point at which an ordering among threads can be established.
Given this doorway, a CS is said to be \gls{fcfs} if threads access the shared resource in the order they proceed through the doorway.
A consequence of \gls{fcfs} execution is the elimination of \Newterm{barging}, where barging means a thread arrives at a CS with waiting threads, and the MX protecting the CS allows the arriving thread to enter the CS ahead of one or more of the waiting threads.

\gls{fcfs} is a fairness property that prevents unequal access to the shared resource and prevents starvation; however, it comes at a cost.
Implementing an algorithm with \gls{fcfs} can lead to \Newterm{double blocking}, where arriving threads block outside the doorway waiting for a thread in the lock entry-protocol and inside the doorway waiting for a thread in the CS.
An analogue is boarding an airplane: first you wait to get through security to the departure gates (short term), and then wait again at the departure gate for the airplane (long term).
As such, algorithms that are not \gls{fcfs} (barging) can be more performant by skipping the wait for the CS and entering directly;
however, this performance gain comes by introducing unfairness with possible starvation for waiting threads.

\section{Channel Implementation}
Currently, only the Go programming language provides user-level threading where the primary communication mechanism is channels.
Experiments were conducted that varied the producer-consumer algorithm and lock type used inside the channel.
With the exception of non-\gls{fcfs} or non-\gls{fifo} algorithms, no algorithm or lock usage in the channel implementation was found to be consistently more performant than Go's choice of algorithm and lock implementation.
Performance of channels can be improved by sharding the underlying buffer~\cite{Dice11}.
However, the \gls{fifo} property is lost, which is undesirable for user-facing channels.
Therefore, the low-level channel implementation in \CFA is largely copied from the Go implementation, but adapted to the \CFA type and runtime systems.
As such, the research contributions of \CFA's channel implementation lie in the realm of safety and productivity features.

The Go channel implementation utilizes cooperation among threads to achieve good performance~\cite{go:chan}.
This cooperation only occurs when producers or consumers need to block due to the buffer being full or empty.
In these cases, a blocking thread stores its relevant data in a shared location and the signalling thread completes the blocking thread's operation before waking it;
\ie the blocking thread has no work to perform after it unblocks because the signalling thread has done this work.
This approach is similar to wait morphing for locks~\cite[p.~82]{Butenhof97} and improves performance in a few ways.
First, each thread interacting with the channel only acquires and releases the internal channel lock once.
As a result, contention on the internal lock is decreased; only entering threads compete for the lock since unblocking threads do not reacquire the lock.
The other advantage of Go's wait-morphing approach is that it eliminates the bottleneck of waiting for signalled threads to run.
Note, the property of acquiring/releasing the lock only once can also be achieved with a different form of cooperation, called \Newterm{baton passing}.
Baton passing occurs when one thread acquires a lock but does not release it, and instead signals a thread inside the critical section, conceptually ``passing'' the mutual exclusion from the signalling thread to the signalled thread.
The baton-passing approach has threads cooperate to pass mutual exclusion without additional lock acquires or releases;
the wait-morphing approach has threads cooperate by completing the signalled thread's operation, thus removing a signalled thread's need for mutual exclusion after unblocking.
While baton passing is useful in some algorithms, it results in worse channel performance than the Go approach.
In the baton-passing approach, all threads need to wait for the signalled thread to reach the front of the ready queue, context switch, and run before other operations on the channel can proceed, since the signalled thread holds mutual exclusion;
in the wait-morphing approach, since the operation is completed before the signal, other threads can continue to operate on the channel without waiting for the signalled thread to run.
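The following pseudo-\CFA sketch illustrates the wait-morphing cooperation for an insert into a channel with a blocked consumer; the lock routines, the waiting-consumer list, and all field names are illustrative placeholders rather than the actual Go or \CFA implementation.
\begin{cfa}
void insert( channel(int) & chan, int elem ) with( chan ) {
        acquire( lock );                        $\C{// internal channel lock, acquired once per operation}$
        if ( ! empty( waiting_consumers ) ) {   $\C{// cooperate: a consumer is blocked on the empty buffer}$
                consumer_node * c = pop( waiting_consumers );
                *(c->destination) = elem;       $\C{// complete the consumer's remove on its behalf}$
                unpark( c->thrd );              $\C{// wake it; it has no work left and never retakes the lock}$
        } else {
                push_back( buffer, elem );      $\C{// otherwise, a normal buffered insert (blocks if full)}$
        }
        release( lock );
}
\end{cfa}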

In this work, all channel sizes \see{Section~\ref{s:ChannelSize}} are implemented with bounded buffers.
However, only non-zero-sized buffers are analysed because of their complexity and higher usage.

\section{Safety and Productivity}
Channels in \CFA come with safety and productivity features to aid users.
The features include the following.

\begin{itemize}
\item Toggle-able statistics collection on channel behaviour that counts channel and blocking operations.
Tracking blocking operations helps illustrate usage for tuning the channel size, where the aim is to reduce blocking.

\item Deadlock detection on channel deallocation.
If threads are blocked inside a channel when it terminates, this case is detected and the user is informed, as this can cause a deadlock.

\item A @flush@ routine that delivers copies of an element to all waiting consumers, flushing the buffer.
Programmers use this mechanism to broadcast a sentinel value to multiple consumers.
Additionally, the @flush@ routine is more performant than looping around the @insert@ operation since it can deliver the elements without having to reacquire mutual exclusion for each element sent.
A broadcast sketch appears after this list.

\item Go-style @?>>?@ and @?<<?@ shorthand operators for inserting and removing, respectively.
\begin{cfa}
channel(int) chan;
int i = 2;
i >> chan;                      $\C{// insert i into chan}$
i << chan;                      $\C{// remove element from chan into i}$
\end{cfa}
\end{itemize}
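
For instance, broadcasting a shutdown sentinel with @flush@ might look as follows; the routine's exact signature is not shown in this chapter, so the form taking the channel and the element to deliver is an assumption.
\begin{cfa}
channel( int ) jobs{ 64 };
enum { SENTINEL = -1 };
// ... producers insert work, consumers remove work ...
flush( jobs, SENTINEL );        $\C{// deliver a copy of SENTINEL to every waiting consumer}$
\end{cfa}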

\subsection{Toggle-able Statistics}
As discussed, a channel is a concurrent layer over a bounded buffer.
To achieve efficient buffering, users should aim for as few blocking operations on a channel as possible.
Mechanisms to reduce blocking include changing the buffer size, sharding a channel into multiple channels, or tweaking the number of producer and consumer threads.
For users to be able to make informed decisions when tuning channel usage, toggle-able channel statistics are provided.
The statistics are toggled on during the \CFA build by defining the @CHAN_STATS@ macro, which guarantees zero cost when not using this feature.
When statistics are turned on, four counters are maintained per channel, two for inserting (producers) and two for removing (consumers).
The two counters per type of operation track the number of blocking operations and the total number of operations.
In the channel destructor, the counters are printed both aggregated and per type of operation.
An example use case is noticing that producer inserts block often while consumer removes do not.
This information can be used to increase the number of consumers to decrease the blocking producer operations, thus increasing the channel throughput.
In contrast, increasing the channel size in this scenario is unlikely to produce a benefit because the consumers can never keep up with the producers.
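
Conceptually, the per-channel counters resemble the following sketch; the type and field names are illustrative rather than the actual implementation.
\begin{cfa}
#ifdef CHAN_STATS
struct chan_stats {
        size_t insert_total, insert_block;      $\C{// producer operations and how many of them blocked}$
        size_t remove_total, remove_block;      $\C{// consumer operations and how many of them blocked}$
};
#endif
\end{cfa}
A high ratio of blocking to total operations on one side of the channel suggests adjusting the buffer size or the number of threads on the other side.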

\subsection{Deadlock Detection}
The deadlock detection in the \CFA channels is fairly basic but detects a very common channel mistake during termination.
That is, it detects the case where threads are blocked on the channel during channel deallocation.
This case is guaranteed to deadlock since there are no other threads to supply or consume values needed by the waiting threads.
Only if a user maintained a separate reference to the blocked threads and manually unblocked them outside the channel could the deadlock be avoided.
However, without special semantics, this unblocking would generate other runtime errors where the unblocked thread attempts to access non-existing channel data or even a deallocated channel.
More robust deadlock detection needs to be implemented separately from channels since it requires knowledge about the threading system and other channel/thread state.

\subsection{Program Shutdown}
Terminating concurrent programs is often one of the most difficult parts of writing concurrent code, particularly if graceful termination is needed.
Graceful termination can be difficult to achieve with synchronization primitives that need to be handled carefully during shutdown.
It is easy to deadlock during termination if threads are left behind on synchronization primitives.
Additionally, most synchronization primitives are prone to \gls{toctou} issues, where there is a race between one thread checking the state of a concurrent object and another thread changing the state.
\gls{toctou} issues with synchronization primitives often involve a race between one thread checking the primitive for blocked threads and another thread blocking on it.
Channels are a particularly hard synchronization primitive to terminate since both sending and receiving to/from a channel can block.
Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks, as threads performing the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.

\paragraph{Go channels} provide a set of tools to help with concurrent shutdown~\cite{go:chan} using a @close@ operation in conjunction with the \Go{select} statement.
The \Go{select} statement is discussed in Section~\ref{s:waituntil}, where \CFA's @waituntil@ statement is compared with the Go \Go{select} statement.

The @close@ operation on a channel in Go changes the state of the channel.
When a channel is closed, sends to the channel panic, as do additional calls to @close@.
Receives are handled differently.
Receivers (consumers) never block on a closed channel and continue to remove elements from the channel.
Once a closed channel is empty, receivers can continue to remove elements, but receive the zero value of the element type.
To avoid unwanted zero-value elements, Go provides the ability to iterate over a closed channel to remove the remaining elements.
These Go design choices enforce a specific interaction style with channels during termination: careful thought is needed to ensure additional @close@ calls do not occur and no sends occur after a channel is closed.
These design choices fit Go's paradigm of error management, where users are expected to explicitly check for errors, rather than letting errors occur and catching them.
When errors do occur in Go, return codes are used to pass error information up the call chain.
Note, panics in Go can be caught, but catching them is not the idiomatic way to write Go programs.

While Go's channel-closing semantics are powerful enough to perform any concurrent termination needed by a program, their lack of ease of use leaves much to be desired.
Since both closing and sending panic once a channel is closed, a user often has to synchronize the senders (producers) before the channel can be closed to avoid panics.
However, doing so renders the @close@ operation nearly useless, as the only utilities it provides are ensuring receivers no longer block on the channel and receive zero-valued elements.
This functionality is only useful if the zero-valued element is recognized as a sentinel value; if another sentinel value is necessary, then @close@ only provides the non-blocking feature.
To avoid \gls{toctou} issues during shutdown, a busy wait with a \Go{select} statement is often used to add or remove elements from a channel.
Hence, due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.

\paragraph{\CFA channels} have access to an extensive exception handling mechanism~\cite{Beach21}.
As such, \CFA uses an exception-based approach to channel shutdown that is symmetric for both producers and consumers, and supports graceful shutdown.

Exceptions in \CFA support both termination and resumption.
\Newterm{Termination exception}s perform a dynamic call that unwinds the stack, preventing the exception handler from returning to the raise point, as in \CC, Python, and Java.
\Newterm{Resumption exception}s perform a dynamic call that does not unwind the stack, allowing the exception handler to return to the raise point.
In \CFA, if a resumption exception is not handled, it is reraised as a termination exception.
This mechanism is used to create a flexible and robust termination system for channels.

When a channel in \CFA is closed, all subsequent calls to the channel raise a resumption exception at the caller.
If the resumption is handled, the caller attempts to complete the channel operation.
However, if the channel operation would block, a termination exception is thrown.
If the resumption is not handled, the exception is rethrown as a termination.
These termination exceptions allow for non-local transfer of control, which is used to great effect to eagerly and gracefully shut down a thread.
When a channel is closed, if there are any blocked producers or consumers inside the channel, they are woken up and also have a resumption exception raised at them.
The resumption exception, @channel_closed@, has internal fields to aid in handling the exception.
The exception contains a pointer to the channel it is thrown from and a pointer to a buffer element.
For exceptions thrown from @remove@, the buffer-element pointer is null.
For exceptions thrown from @insert@, the element pointer points to the buffer element that the thread attempted to insert.
Utility routines @bool is_insert( channel_closed & e );@ and @bool is_remove( channel_closed & e );@ are provided for convenient checking of the element pointer.
The element pointer lets the handler know which operation failed and also prevents the element from being lost on a failed insert, since it can be moved elsewhere in the handler.
Furthermore, due to \CFA's powerful exception system, this data can be used to choose handlers based on which channel and operation failed.
For example, exception handlers in \CFA have an optional predicate, which can be used to trigger or skip handlers based on the content of the matching exception.
It is worth mentioning that using exceptions for termination may incur a larger performance cost than the Go approach.
However, this should not be an issue, since termination is rarely on the fast path of an application.
Instead, the aim of the exception approach is to ensure that termination can easily be implemented correctly.
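
For example, a producer can recover the value from a failed insert in its termination handler; here @is_insert@ is the utility routine described above, while the element-pointer field name and the @stash@ routine are illustrative placeholders.
\begin{cfa}
channel( int ) chan{ 8 };
int job = 42;
try {
        insert( chan, job );                    $\C{// raises an exception once chan is closed}$
} catchResume( channel_closed * ) {
        // channel closed but not full: let the insert complete
} catch( channel_closed * e ) {
        if ( is_insert( *e ) )                  $\C{// insert failed on the closed channel}$
                stash( *(int *)e->elem );       $\C{// recover the element; field name elem is illustrative}$
}
\end{cfa}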

\section{\CFA / Go Channel Examples}
To highlight the differences between \CFA's and Go's close semantics, three examples are presented.
The first example is a simple shutdown case, where there are producer threads and consumer threads operating on a channel for a fixed duration.
Once the duration ends, producers and consumers terminate immediately, leaving unprocessed elements in the channel.
The second example extends the first by requiring the channel to be empty after shutdown.
Both the first and second examples are shown in Figure~\ref{f:ChannelTermination}.

\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{Golang}[aboveskip=0pt,belowskip=0pt]
var channel chan int = make( chan int, 128 )
var prodJoin chan int = make( chan int, 4 )
var consJoin chan int = make( chan int, 4 )
var cons_done, prod_done bool = false, false;
func producer() {
        for {
                if prod_done { break }
                channel <- 5
        }
        prodJoin <- 0 // synch with main thd
}

func consumer() {
        for {
                if cons_done { break }
                <- channel
        }
        consJoin <- 0 // synch with main thd
}


func main() {
        for j := 0; j < 4; j++ { go consumer() }
        for j := 0; j < 4; j++ { go producer() }
        time.Sleep( time.Second * 10 )
        prod_done = true
        for j := 0; j < 4 ; j++ { <- prodJoin }
        cons_done = true
        close(channel) // ensure no cons deadlock
        @for elem := range channel {@
                // process leftover values
        @}@
        for j := 0; j < 4; j++ { <- consJoin }
}
\end{Golang}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
channel( size_t ) chan{ 128 };
thread Consumer {};
thread Producer {};

void main( Producer & this ) {
        try {
                for ()
                        insert( chan, 5 );
        } catch( channel_closed * ) {
                // unhandled resume or full
        }
}
void main( Consumer & this ) {
        try {
                for () { int i = remove( chan ); }
        @} catchResume( channel_closed * ) {@
                // handled resume => consume from chan
        } catch( channel_closed * ) {
                // empty or unhandled resume
        }
}
int main() {
        Consumer c[4];
        Producer p[4];
        sleep( 10`s );
        close( chan );
}







\end{cfa}
\end{lrbox}

\subfloat[Go style]{\label{l:go_chan_term}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[\CFA style]{\label{l:cfa_chan_term}\usebox\myboxB}
\caption{Channel Termination Examples 1 and 2. Code specific to example 2 is highlighted.}
\label{f:ChannelTermination}
\end{figure}

Figure~\ref{l:go_chan_term} shows the Go solution.
Since some of the elements being passed through the channel are zero-valued, closing the channel in Go does not aid in communicating shutdown.
Instead, a different mechanism to communicate with the consumers and producers needs to be used.
Flag variables are common in Go channel-shutdown code to avoid panics on a channel, meaning the channel shutdown has to be communicated to threads before it occurs.
Hence, the two flags @cons_done@ and @prod_done@ are used to communicate with the producers and consumers, respectively.
Furthermore, producers and consumers need to shut down separately: producers must terminate before the channel is closed to avoid panicking, and consumers must not all terminate before the producers, since that can result in a deadlock for producers if the channel is full.
The producer flag is set first;
then after all producers terminate, the consumer flag is set and the channel is closed, leaving elements in the buffer.
To purge the buffer, a loop is added (highlighted) that iterates over the closed channel to process any remaining values.

Figure~\ref{l:cfa_chan_term} shows the \CFA solution.
Here, shutdown is communicated directly to both producers and consumers via the @close@ call.
A @Producer@ thread knows to stop producing when the @insert@ call on a closed channel raises exception @channel_closed@.
If a @Consumer@ thread ignores the first resumption exception from the @close@, the exception is reraised as a termination exception and elements are left in the buffer.
If a @Consumer@ thread handles the resumption exceptions (highlighted), control returns to complete the remove.
A @Consumer@ thread knows to stop consuming after all elements of a closed channel are removed and the consumer would block, which causes a termination raise of @channel_closed@.
The \CFA semantics allow users to communicate channel shutdown directly through the channel, without having to share extra state between threads.
Additionally, when the channel needs to be drained, \CFA provides users with easy options for processing the leftover channel values in the main thread or in the consumer threads.

Figure~\ref{f:ChannelBarrierTermination} shows a final shutdown example using channels to implement a barrier.
Go-style and \CFA-style solutions are presented, but both are implemented using \CFA syntax so they can be easily compared.
Implementing a barrier is interesting because threads are both producers and consumers on the barrier-internal channels, @entryWait@ and @barWait@.
The outline for the barrier implementation starts by initially filling the @entryWait@ channel with $N$ tickets in the barrier constructor, allowing $N$ arriving threads to remove these values and enter the barrier.
After @entryWait@ is empty, arriving threads block when removing.
However, the arriving threads that entered the barrier cannot leave the barrier until $N$ threads have arrived.
Hence, the entering threads block on the empty @barWait@ channel until the $N$th arriving thread inserts $N-1$ elements into @barWait@ to unblock the $N-1$ threads calling @remove@.
The race between these arriving threads blocking on @barWait@ and the $N$th thread inserting values into @barWait@ does not affect correctness;
\ie an arriving thread may or may not block on channel @barWait@ to get its value.
Finally, the last thread to remove from @barWait@ with ticket $N-2$ refills channel @entryWait@ with $N$ values to start the next group into the barrier.

Now, the two channels make termination synchronization between producers and consumers difficult.
Interestingly, the shutdown details for this problem are also applicable to other problems with threads producing and consuming from the same channel.
The Go-style solution cannot use the Go @close@ call since all threads are potentially both producers and consumers, making panics on close unavoidable without complex synchronization.
As such, in Figure~\ref{l:go_chan_bar}, a flush routine is needed to insert a sentinel value, @-1@, to inform threads waiting in the buffer they need to leave the barrier.
This sentinel value has to be checked at two points along the fast path, and sentinel values must be daisy-chained into the buffers.
Furthermore, an additional flag @done@ is needed to communicate to threads once they have left the barrier that they are done.
Also note that in the Go version (Figure~\ref{l:go_chan_bar}), the size of the barrier channels has to be larger than in the \CFA version to ensure that the main thread does not block when attempting to clear the barrier.
For the \CFA solution (Figure~\ref{l:cfa_chan_bar}), the barrier shutdown results in an exception being raised at threads operating on it, informing waiting threads they must leave the barrier.
This avoids the need for a separate communication method other than the barrier, and avoids extra conditional checks on the fast path of the barrier implementation.

\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
struct barrier {
        channel( int ) barWait, entryWait;
        int size;
};
void ?{}( barrier & this, int size ) with(this) {
        barWait{size + 1};   entryWait{size + 1};
        this.size = size;
        for ( i; size )
                insert( entryWait, i );
}
void wait( barrier & this ) with(this) {
        int ticket = remove( entryWait );
        @if ( ticket == -1 ) { insert( entryWait, -1 ); return; }@
        if ( ticket == size - 1 ) {
                for ( i; size - 1 )
                        insert( barWait, i );
                return;
        }
        ticket = remove( barWait );
        @if ( ticket == -1 ) { insert( barWait, -1 ); return; }@
        if ( size == 1 || ticket == size - 2 ) { // last ?
                for ( i; size )
                        insert( entryWait, i );
        }
}
void flush(barrier & this) with(this) {
        @insert( entryWait, -1 );   insert( barWait, -1 );@
}
enum { Threads = 4 };
barrier b{Threads};
@bool done = false;@
thread Thread {};
void main( Thread & this ) {
        for () {
                @if ( done ) break;@
                wait( b );
        }
}
int main() {
        Thread t[Threads];
        sleep(10`s);
        done = true;
        flush( b );
} // wait for threads to terminate
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
struct barrier {
        channel( int ) barWait, entryWait;
        int size;
};
void ?{}( barrier & this, int size ) with(this) {
        barWait{size};   entryWait{size};
        this.size = size;
        for ( i; size )
                insert( entryWait, i );
}
void wait( barrier & this ) with(this) {
        int ticket = remove( entryWait );

        if ( ticket == size - 1 ) {
                for ( i; size - 1 )
                        insert( barWait, i );
                return;
        }
        ticket = remove( barWait );

        if ( size == 1 || ticket == size - 2 ) { // last ?
                for ( i; size )
                        insert( entryWait, i );
        }
}
void flush(barrier & this) with(this) {
        @close( barWait );   close( entryWait );@
}
enum { Threads = 4 };
barrier b{Threads};

thread Thread {};
void main( Thread & this ) {
        @try {@
                for ()
                        wait( b );
        @} catch ( channel_closed * ) {}@
}
int main() {
        Thread t[Threads];
        sleep(10`s);

        flush( b );
} // wait for threads to terminate
\end{cfa}
\end{lrbox}

\subfloat[Go style]{\label{l:go_chan_bar}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[\CFA style]{\label{l:cfa_chan_bar}\usebox\myboxB}
\caption{Channel Barrier Termination}
\label{f:ChannelBarrierTermination}
\end{figure}

\section{Performance}

Given that the base implementation of the \CFA channels is very similar to the Go implementation, this section aims to show that the performance of the two implementations is comparable.
The microbenchmark for the channel comparison is similar to Figure~\ref{f:ChannelTermination}, where the number of threads and processors is set from the command line.
The processors are divided equally between producers and consumers, with one producer or consumer owning each core.
The number of cores is varied to measure how throughput scales.

The results of the benchmark are shown in Figure~\ref{f:chanPerf}.
The performance of Go and \CFA channels on this microbenchmark is comparable.
Note, the performance should decline as the number of cores increases, since the channel operations occur in a critical section; thus, increasing cores results in higher contention with no increase in parallelism.

\begin{figure}
        \centering
        \subfloat[AMD \CFA Channel Benchmark]{
                \resizebox{0.5\textwidth}{!}{\input{figures/nasus_Channel_Contention.pgf}}
                \label{f:chanAMD}
        }
        \subfloat[Intel \CFA Channel Benchmark]{
                \resizebox{0.5\textwidth}{!}{\input{figures/pyke_Channel_Contention.pgf}}
                \label{f:chanIntel}
        }
        \caption{The channel contention benchmark comparing \CFA and Go channel throughput (higher is better).}
        \label{f:chanPerf}
\end{figure}

% Local Variables: %
% tab-width: 4 %
% End: %