% ======================================================================
% ======================================================================
\chapter{Channels}\label{s:channels}
% ======================================================================
% ======================================================================

Most modern concurrent programming languages do not subscribe to just one style of communication among threads and provide features that support multiple approaches.
Channels are a concurrent-language feature used to perform \Newterm{message-passing concurrency}: a model of concurrency where threads communicate by sending data as messages (mostly non\-blocking) and synchronizing by receiving sent messages (blocking).
This model is an alternative to shared-memory concurrency, where threads can communicate directly by changing shared state.

Channels were first introduced by Kahn~\cite{Kahn74} and extended by Hoare~\cite{Hoare78} (CSP).
Both papers present a pseudo (unimplemented) concurrent language where processes communicate using input/output channels to send data.
Both languages are highly restrictive.
Kahn's language restricts a reading process to waiting for data on only a single channel at a time, and different writing processes cannot send data on the same channel.
Hoare's language restricts channels such that both the sender and receiver need to explicitly name the process that is the destination of a channel send or the source of a channel receive.
These channel semantics remove the ability to have an anonymous sender or receiver; additionally, all channel operations in CSP are synchronous (no buffering).
Channels as a programming-language feature have been popularized in recent years by the language Go, which encourages the use of channels as its fundamental concurrency feature.
Go's restrictions are ... \CAP{The only restrictions in Go but not CFA that I can think of are the closing semantics and the functionality of select vs. waituntil. Is that worth mentioning here or should it be discussed later?}
\CFA channels do not have these restrictions.

\section{Producer-Consumer Problem}
A channel is an abstraction for a shared-memory buffer, which turns the implementation of a channel into the producer-consumer problem.
The producer-consumer problem, also known as the bounded-buffer problem, was introduced by Dijkstra~\cite[\S~4.1]{Dijkstra65}.
In the problem, threads interact with a buffer in two ways: producing threads insert values into the buffer and consuming threads remove values from the buffer.
In general, a buffer needs protection to ensure a producer only inserts into a non-full buffer and a consumer only removes from a non-empty buffer (synchronization).
As well, a buffer needs protection from concurrent access when multiple producers or consumers attempt to insert or remove simultaneously (MX).

Channels come in three flavours of buffers:
\begin{enumerate}
\item
Zero-sized implies the communication is synchronous, \ie the producer must wait for the consumer to arrive, or vice versa, for a value to be communicated.
\item
Fixed-sized (bounded) implies the communication is asynchronous, \ie the producer can proceed up to the buffer size, and vice versa for the consumer with respect to removal.
\item
Infinite-sized (unbounded) implies the communication is asynchronous, \ie the producer never waits but the consumer waits when the buffer is empty.
Since memory is finite, all unbounded buffers are ultimately bounded;
this restriction must be part of the buffer's implementation.
\end{enumerate}

In general, the order in which values are processed by the consumer does not affect the correctness of the producer-consumer problem.
For example, the buffer can be LIFO, FIFO, or prioritized with respect to insertion and removal.
However, like MX, a buffer should ensure every value is eventually removed after some reasonable bounded time (no long-term starvation).
The simplest way to prevent starvation is to implement the buffer as a queue, either with a cyclic array or linked nodes.

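To make the channel interface concrete before discussing its implementation, the following is a minimal producer-consumer sketch using the \CFA channel operations @insert@ and @remove@ that appear in the listings later in this chapter; the thread types @Prod@ and @Cons@, the buffer size, and the number of values are chosen purely for illustration.
\begin{cfa}
channel( int ) buf{ 10 };           // bounded FIFO buffer holding up to 10 values

thread Prod {};                     // producer thread type (illustrative)
void main( Prod & this ) {
    for ( i; 100 )
        insert( buf, i );           // blocks while the buffer is full
}
thread Cons {};                     // consumer thread type (illustrative)
void main( Cons & this ) {
    for ( i; 100 )
        remove( buf );              // blocks while the buffer is empty
}
int main() {
    {
        Prod p;                     // threads start when declared
        Cons c;
    }                               // block until both threads terminate
    return 0;
}
\end{cfa}
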
\section{First-Come First-Served}
As pointed out, a bounded buffer requires MX among multiple producers or consumers.
This MX should be fair among threads, independent of the FIFO buffer being fair among values.
Fairness among threads is called \gls{fcfs} and was defined by Lamport~\cite[p.~454]{Lamport74}.
\gls{fcfs} is defined in relation to a doorway~\cite[p.~330]{Lamport86II}, which is the point at which an ordering among threads can be established.
Given this doorway, a CS is said to be \gls{fcfs} if threads access the shared resource in the order they proceed through the doorway.
A consequence of \gls{fcfs} execution is the elimination of \Newterm{barging}, where barging means a thread arrives at a CS with waiting threads, and the MX protecting the CS allows the arriving thread to enter the CS ahead of one or more of the waiting threads.

\gls{fcfs} is a fairness property that prevents unequal access to the shared resource and prevents starvation; however, it comes at a cost.
Implementing an algorithm with \gls{fcfs} can lead to double blocking, where arriving threads block outside the doorway waiting for a thread in the lock entry-protocol and inside the doorway waiting for a thread in the CS.
An analogue is boarding an airplane: first you wait to get through security to the departure gates (short term), and then wait again at the departure gate for the airplane (long term).
As such, algorithms that are not \gls{fcfs} (barging) can be more performant by skipping the wait for the CS and entering directly;
however, this performance gain comes by introducing unfairness with possible starvation for waiting threads.

\section{Channel Implementation}
Currently, only the Go programming language~\cite{Go} provides user-level threading where the primary communication mechanism is channels.
Experiments were conducted that varied the producer-consumer problem algorithm and the lock type used inside the channel.
With the exception of non-\gls{fcfs} algorithms, no algorithm or lock usage in the channel implementation was found to be consistently more performant than Go's choice of algorithm and lock implementation.
Therefore, the low-level channel implementation in \CFA is largely copied from the Go implementation, but adapted to the \CFA type and runtime systems.
As such, the research contributions added by \CFA's channel implementation lie in the realm of safety and productivity features.

\PAB{Discuss the Go channel implementation. Need to tie in FIFO buffer and FCFS locking.}

In this work, all channels are implemented with bounded buffers, so there is no zero-sized buffering.
\CAP{I do have zero size channels implemented, however I don't focus on them since I think they are uninteresting as they are just a thin layer over binary semaphores. Should I mention that I support it but omit discussion or just leave it out?}

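For reference, the classic bounded-buffer structure underlying such a channel can be sketched as a \CFA monitor with condition variables; this sketch shows only the general synchronization pattern and is not the actual \CFA or Go channel implementation.
\begin{cfa}
monitor buffer {                      // monitor provides mutual exclusion
    condition full, empty;            // queues of waiting producers / consumers
    int elems[10];                    // cyclic FIFO buffer (fixed size for illustration)
    int front, back, count;
};
void ?{}( buffer & this ) with( this ) { front = 0; back = 0; count = 0; }

void insert( buffer & mutex this, int elem ) with( this ) {
    if ( count == 10 ) wait( full );  // buffer full => producer waits
    elems[back] = elem;
    back = ( back + 1 ) % 10;
    count += 1;
    signal( empty );                  // wake one waiting consumer, if any
}
int remove( buffer & mutex this ) with( this ) {
    if ( count == 0 ) wait( empty );  // buffer empty => consumer waits
    int elem = elems[front];
    front = ( front + 1 ) % 10;
    count -= 1;
    signal( full );                   // wake one waiting producer, if any
    return elem;
}
\end{cfa}
Assuming \CFA's no-barging monitor semantics, an @if@ around @wait@ suffices in this sketch, whereas a bargeable monitor would require a @while@ loop to recheck the condition.
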
\section{Safety and Productivity}
Channels in \CFA come with safety and productivity features to aid users.
The features include the following.

\begin{itemize}
\item Toggle-able statistic collection on channel behaviour that counts channel and blocking operations.
Tracking blocking operations helps characterize usage and tune the channel size, where the aim is to reduce blocking.
\item Deadlock detection on deallocation of the channel.
If threads are blocked inside the channel when it terminates, this case is detected and the user is informed, as this can cause a deadlock.
\item A @flush@ routine that delivers copies of an element to all waiting consumers, flushing the buffer.
Programmers can use this to easily broadcast data to multiple consumers, as sketched below.
Additionally, the @flush@ routine is more performant than looping around the @insert@ operation since it can deliver the elements without having to reacquire mutual exclusion for each element sent.
\end{itemize}

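As an example, a producer that has finished its work can broadcast a shutdown sentinel to every waiting consumer in a single operation. The @flush@ signature used below, taking the channel and the element to deliver, is an assumption for illustration and may differ from the actual routine; the thread type @Prod@ and the sentinel value are likewise illustrative.
\begin{cfa}
channel( int ) chan{ 64 };

thread Prod {};
void main( Prod & this ) {
    for ( i; 1000 )
        insert( chan, i );      // normal sends
    // assumed signature: deliver -1 to every waiting consumer under one acquisition
    flush( chan, -1 );
}
\end{cfa}
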
The other safety and productivity feature of \CFA channels deals with concurrent termination.
Terminating concurrent programs is often one of the most difficult parts of writing concurrent code, particularly if graceful termination is needed.
The difficulty of graceful termination often arises from the usage of synchronization primitives, which need to be handled carefully during shutdown.
It is easy to deadlock during termination if threads are left behind on synchronization primitives.
Additionally, most synchronization primitives are prone to \gls{toctou} issues, where there is a race between one thread checking the state of a concurrent object and another thread changing the state.
\gls{toctou} issues with synchronization primitives often involve a race between one thread checking the primitive for blocked threads and another thread blocking on it.
Channels are a particularly hard synchronization primitive to terminate since both sending to and receiving from a channel can block.
Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks, as threads trying to perform the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.

% C_TODO: add reference to select chapter, add citation to go channels info
Go channels provide a set of tools to help with concurrent shutdown.
Channels in Go have a @close@ operation and a \Go{select} statement that can both be used to help threads terminate.
The \Go{select} statement will be discussed in \ref{}, where \CFA's @waituntil@ statement will be compared with the Go \Go{select} statement.
The @close@ operation on a channel in Go changes the state of the channel.
When a channel is closed, sends to the channel panic, as do additional calls to @close@.
Receives are handled differently: receivers never block on a closed channel and continue to remove elements from it.
Once a closed channel is empty, receivers can continue to remove elements, but they receive the zero-value version of the element type.
To aid in avoiding unwanted zero-value elements, Go provides the ability to iterate over a closed channel to remove the remaining elements.
These design choices for Go channels enforce a specific interaction style with channels during termination, where careful thought is needed to ensure that additional @close@ calls do not occur and that no sends occur after channels are closed.
These design choices fit Go's paradigm of error management, where users are expected to explicitly check for errors, rather than letting errors occur and catching them.
If errors need to occur in Go, return codes are used to pass error information where it is needed.
Note that panics in Go can be caught, but doing so is not considered an idiomatic way to write Go programs.

While Go's channel closing semantics are powerful enough to perform any concurrent termination needed by a program, their lack of ease of use leaves much to be desired.
Since both sends on a closed channel and repeated @close@ calls panic, a user often has to synchronize the senders of a channel before it can be closed to avoid panics.
However, doing so renders the @close@ operation nearly useless, as the only utilities it provides are the ability to ensure that receivers no longer block on the channel and that they receive zero-valued elements.
This can be useful if the zero-valued element is recognized as a sentinel value, but if another sentinel value is preferred, then @close@ only provides its non-blocking feature.
To avoid \gls{toctou} issues during shutdown, a busy wait with a \Go{select} statement is often used to add or remove elements from a channel.
Due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.

In \CFA, exception handling is an encouraged paradigm and has full language support~\cite{Beach21}.
As such, \CFA uses an exception-based approach to channel shutdown that is symmetric for both producers and consumers, and supports graceful shutdown.
Exceptions in \CFA support both termination and resumption.
Termination exceptions operate in the same way as exceptions seen in many popular programming languages, such as \CC, Python and Java.
Resumption exceptions are a style of exception that, when caught, run the corresponding catch block in the same way that termination exceptions do.
The difference between the exception-handling mechanisms arises after the exception is handled.
In termination handling, the control flow continues into the code following the catch after the exception is handled.
In resumption handling, the control flow returns to the site of the @throw@, allowing the control to continue where it left off.
Note that in resumption, since control can return to the point of error propagation, the stack is not unwound during resumption propagation.
In \CFA, if a resumption is not handled, it is reraised as a termination.
This mechanism can be used to create a flexible and robust termination system for channels.

When a channel in \CFA is closed, all subsequent calls to the channel throw a resumption exception at the caller.
If the resumption is handled, then the caller proceeds to attempt to complete its operation.
If the resumption is not handled, it is then rethrown as a termination exception.
Alternatively, if the resumption is handled, but the subsequent attempt at an operation would block, a termination exception is thrown.
These termination exceptions allow for non-local transfer that can be used to great effect to eagerly and gracefully shut down a thread.
When a channel is closed, if there are any blocked producers or consumers inside the channel, they are woken up and also have a resumption thrown at them.
The resumption exception, @channel_closed@, has two fields to aid in handling the exception.
The exception contains a pointer to the channel it was thrown from, and a pointer to an element.
In exceptions thrown from @remove@, the element pointer is null.
In the case of @insert@, the element pointer points to the element that the thread attempted to insert.
This element pointer allows the handler to know which operation failed and also allows the element to not be lost on a failed insert, since it can be moved elsewhere in the handler.
Furthermore, due to \CFA's powerful exception system, this data can be used to choose handlers based on which channel and operation failed.
Exception handlers in \CFA have an optional predicate after the exception type, which can be used to trigger or skip handlers based on the content of an exception.
It is worth mentioning that the approach of exceptions for termination may incur a larger performance cost during termination than the approach used in Go.
This should not be an issue, since termination is rarely on the fast path of an application, and ensuring that termination can be implemented correctly with ease is the aim of the exception approach.

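For instance, a handler can be restricted to failures on one particular channel by predicating on the channel pointer carried in the exception. The following sketch assumes the field names @chan@ and @elem@ for the two pointers described above, since only their existence is given here; the thread type @Prod@ and channels @A@ and @B@ are likewise illustrative.
\begin{cfa}
channel( int ) A{ 64 };
channel( int ) B{ 64 };

thread Prod {};
void main( Prod & this ) {
    try {
        for ( i; 1000 ) {
            insert( A, i );
            insert( B, i );
        }
    // predicate after the exception type: only handle failures on channel A
    } catch ( channel_closed * e ; e->chan == &A ) {
        // e->elem points at the value that could not be inserted, so it can be rerouted here
    } catch ( channel_closed * e ) {
        // failures on channel B or any other channel
    }
}
\end{cfa}
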
To highlight the differences between \CFA's and Go's close semantics, an example program is presented.
The program is a barrier implemented using two channels, shown in Listings~\ref{l:cfa_chan_bar} and \ref{l:go_chan_bar}.
Both of these examples are implemented using \CFA syntax so that they can be easily compared.
Listing~\ref{l:go_chan_bar} uses Go-style channel close semantics and Listing~\ref{l:cfa_chan_bar} uses \CFA close semantics.
In this problem, it is infeasible to use the Go @close@ call since all tasks are both potentially producers and consumers, causing panics on close to be unavoidable.
As such, to implement a flush routine for the buffer in Listing~\ref{l:go_chan_bar}, a sentinel value of $-1$ has to be used to indicate to threads that they need to leave the barrier.
This sentinel value has to be checked at two points.
Furthermore, an additional flag @done@ is needed to communicate to threads, once they have left the barrier, that they are done.
This use of an additional flag or communication method is common in Go channel-shutdown code, since, to avoid panics on a channel, the shutdown of a channel often has to be communicated to threads before it occurs.
In the \CFA version~\ref{l:cfa_chan_bar}, the barrier shutdown results in an exception being thrown at threads operating on it, which informs the threads that they must terminate.
This avoids the need to use a separate communication method other than the barrier, and avoids extra conditional checks on the fast path of the barrier implementation.
Also note that in the Go version~\ref{l:go_chan_bar}, the size of the barrier channels has to be larger than in the \CFA version to ensure that the main thread does not block when attempting to clear the barrier.

\begin{cfa}[caption={\CFA channel barrier termination},label={l:cfa_chan_bar}]
struct barrier {
    channel( int ) barWait;
    channel( int ) entryWait;
    int size;
};
void ?{}(barrier & this, int size) with(this) {
    barWait{size};
    entryWait{size};
    this.size = size;
    for ( j; size )
        insert( entryWait, j );
}

void flush(barrier & this) with(this) {
    close(barWait);
    close(entryWait);
}
void wait(barrier & this) with(this) {
    int ticket = remove( entryWait );
    if ( ticket == size - 1 ) {
        for ( j; size - 1 )
            insert( barWait, j );
        return;
    }
    ticket = remove( barWait );

    // last one out
    if ( size == 1 || ticket == size - 2 ) {
        for ( j; size )
            insert( entryWait, j );
    }
}
barrier b{Tasks};

// thread main
void main(Task & this) {
    try {
        for ( ;; ) {
            wait( b );
        }
    } catch ( channel_closed * e ) {}
}

int main() {
    {
        Task t[Tasks];

        sleep(10`s);
        flush( b );
    } // wait for tasks to terminate
    return 0;
}
\end{cfa}

\begin{cfa}[caption={Go channel barrier termination},label={l:go_chan_bar}]
struct barrier {
    channel( int ) barWait;
    channel( int ) entryWait;
    int size;
};
void ?{}(barrier & this, int size) with(this) {
    barWait{size + 1};
    entryWait{size + 1};
    this.size = size;
    for ( j; size )
        insert( entryWait, j );
}

void flush(barrier & this) with(this) {
    insert( entryWait, -1 );
    insert( barWait, -1 );
}
void wait(barrier & this) with(this) {
    int ticket = remove( entryWait );
    if ( ticket == -1 ) {
        insert( entryWait, -1 );
        return;
    }
    if ( ticket == size - 1 ) {
        for ( j; size - 1 )
            insert( barWait, j );
        return;
    }
    ticket = remove( barWait );
    if ( ticket == -1 ) {
        insert( barWait, -1 );
        return;
    }

    // last one out
    if ( size == 1 || ticket == size - 2 ) {
        for ( j; size )
            insert( entryWait, j );
    }
}
barrier b{Tasks};

bool done = false;
// thread main
void main(Task & this) {
    for ( ;; ) {
        if ( done ) break;
        wait( b );
    }
}

int main() {
    {
        Task t[Tasks];

        sleep(10`s);
        done = true;

        flush( b );
    } // wait for tasks to terminate
    return 0;
}
\end{cfa}

Listing~\ref{l:cfa_resume} shows an example of channel closing that uses resumption.
This program uses resumption in the @Consumer@ thread main to ensure that all elements in the channel are removed before the consumer thread terminates.
The producer only has a @catch@, so the moment it receives an exception it terminates, whereas the consumer continues to remove from the closed channel by handling resumptions until the buffer is empty, at which point a termination exception is thrown.
If the same program were implemented in Go, it would require explicit synchronization with both producers and consumers by some mechanism outside the channel to ensure that all elements were removed before task termination.

\begin{cfa}[caption={\CFA channel resumption usage},label={l:cfa_resume}]
channel( int ) chan{ 128 };

// Consumer thread main
void main(Consumer & this) {
    size_t runs = 0;
    try {
        for ( ;; ) {
            remove( chan );
        }
    } catchResume ( channel_closed * e ) {}
    catch ( channel_closed * e ) {}
}

// Producer thread main
void main(Producer & this) {
    int j = 0;
    try {
        for ( ;; j++ ) {
            insert( chan, j );
        }
    } catch ( channel_closed * e ) {}
}

int main( int argc, char * argv[] ) {
    {
        Consumer c[4];
        Producer p[4];

        sleep(10`s);

        close( chan );
    }
    return 0;
}
\end{cfa}

\section{Performance}

Given that the base implementation of the \CFA channels is very similar to the Go implementation, this section aims to show that the performance of the two implementations is comparable.
One microbenchmark is conducted to compare Go and \CFA.
The benchmark is a ten-second experiment where producers and consumers operate on a channel in parallel and throughput is measured.
The number of cores is varied to measure how throughput scales.
The cores are divided equally between producers and consumers, with one producer or consumer owning each core.
The results of the benchmark are shown in Figure~\ref{f:chanPerf}.
The performance of Go and \CFA channels on this microbenchmark is comparable.
Note, the performance is expected to decline as the number of cores increases, since the channel operations all occur in a critical section, so an increase in cores results in higher contention with no increase in parallelism.

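The benchmark structure is roughly the following sketch; the channel size, the thread counts, and the omitted per-thread operation counters are illustrative rather than the exact harness used.
\begin{cfa}
channel( int ) chan{ 128 };                 // buffer size is illustrative

thread Prod {};
void main( Prod & this ) {
    try {
        for ( ;; ) insert( chan, 42 );      // each successful insert counts toward throughput
    } catch ( channel_closed * e ) {}       // unblocked and terminated when the channel is closed
}
thread Cons {};
void main( Cons & this ) {
    try {
        for ( ;; ) remove( chan );          // each successful remove counts toward throughput
    } catch ( channel_closed * e ) {}
}
int main() {
    {
        Prod p[4];  Cons c[4];              // cores split evenly between producers and consumers
        sleep(10`s);
        close( chan );                      // end the experiment: wake and terminate all workers
    }                                       // wait for all workers to terminate
    return 0;
}
\end{cfa}
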
\begin{figure}
    \centering
    \subfloat[AMD \CFA Channel Benchmark]{
        \resizebox{0.5\textwidth}{!}{\input{figures/nasus_Channel_Contention.pgf}}
        \label{f:chanAMD}
    }
    \subfloat[Intel \CFA Channel Benchmark]{
        \resizebox{0.5\textwidth}{!}{\input{figures/pyke_Channel_Contention.pgf}}
        \label{f:chanIntel}
    }
    \caption{The channel contention benchmark comparing \CFA and Go channel throughput (higher is better).}
    \label{f:chanPerf}
\end{figure}

% Local Variables: %
% tab-width: 4 %
% End: %