Changeset 9f1beb4 for doc/theses


Timestamp:
May 17, 2023, 11:31:20 AM (19 months ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master
Children:
41639089, a0c746df, f11010e
Parents:
c3e2131
Message:

more proofreading of the channel chapter

File:
1 edited

  • doc/theses/colby_parsons_MMAth/text/channels.tex

    rc3e2131 r9f1beb4  
    77Most modern concurrent programming languages do not subscribe to just one style of communication among threads and provide features that support multiple approaches.
    88Channels are a concurrent-language feature used to perform \Newterm{message-passing concurrency}: a model of concurrency where threads communicate by sending data as messages (mostly non\-blocking) and synchronizing by receiving sent messages (blocking).
    9 This model is an alternative to shared-memory concurrency, where threads can communicate directly by changing shared state.
     9This model is an alternative to shared-memory concurrency, where threads communicate directly by changing shared state.
    1010
    1111Channels were first introduced by Kahn~\cite{Kahn74} and extended by Hoare~\cite{Hoare78} (CSP).
     
    1313Both languages are highly restrictive.
    1414Kahn's language restricts a reading process to only wait for data on a single channel at a time and different writing processes cannot send data on the same channel.
    15 Hoare's language restricts channels such that both the sender and receiver need to explicitly name the process that is destination of a channel send or the source of a channel receive.
    16 These channel semantics remove the ability to have an anonymous sender or receiver and additionally all channel operations in CSP are synchronous (no buffering).
    17 Channels as a programming language feature has been popularized in recent years by the language Go, which encourages the use of channels as its fundamental concurrent feature.
    18 Go's restrictions are ... \CAP{The only restrictions in Go but not CFA that I can think of are the closing semantics and the functionality of select vs. waituntil. Is that worth mentioning here or should it be discussed later?}
    19 \CFA channels do not have these restrictions.
     15Hoare's language restricts both the sender and receiver to explicitly name the process that is the destination of a channel send or the source of a channel receive.
     16These channel semantics remove the ability to have an anonymous sender or receiver.
     17Additionally, all channel operations in CSP are synchronous (no buffering).
     18Advanced channels as a programming-language feature have been popularized in recent years by the language Go~\cite{Go}, which encourages the use of channels as its fundamental concurrent feature.
     19It was the popularity of Go channels that led me to implement them in \CFA.
     20Neither Go nor \CFA channels have the restrictions found in early channel-based concurrent systems.
    2021
    2122\section{Producer-Consumer Problem}
     
    2425In the problem, threads interact with a buffer in two ways: producing threads insert values into the buffer and consuming threads remove values from the buffer.
    2526In general, a buffer needs protection to ensure a producer only inserts into a non-full buffer and a consumer only removes from a non-empty buffer (synchronization).
    26 As well, a buffer needs protection from concurrent access by multiple producers or consumers attempt to insert or remove simultaneously (MX).
    27 
     27As well, a buffer needs protection from concurrent access by multiple producers or consumers attempting to insert or remove simultaneously (MX).
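
For reference, the following Go sketch (the types and names are illustrative and not part of any channel implementation) makes this protection explicit: a mutex provides the MX and two condition variables provide the synchronization.
\begin{lstlisting}
package main

import (
	"fmt"
	"sync"
)

// boundedBuffer sketches the protection a channel needs internally: the mutex
// provides mutual exclusion (MX) and the two condition variables provide the
// synchronization (producers wait while full, consumers wait while empty).
type boundedBuffer struct {
	mu                sync.Mutex
	notFull, notEmpty *sync.Cond
	data              []int
	capacity          int
}

func newBoundedBuffer(capacity int) *boundedBuffer {
	b := &boundedBuffer{capacity: capacity}
	b.notFull, b.notEmpty = sync.NewCond(&b.mu), sync.NewCond(&b.mu)
	return b
}

func (b *boundedBuffer) insert(v int) {
	b.mu.Lock()
	for len(b.data) == b.capacity { // a producer only inserts into a non-full buffer
		b.notFull.Wait()
	}
	b.data = append(b.data, v)
	b.notEmpty.Signal()
	b.mu.Unlock()
}

func (b *boundedBuffer) remove() int {
	b.mu.Lock()
	for len(b.data) == 0 { // a consumer only removes from a non-empty buffer
		b.notEmpty.Wait()
	}
	v := b.data[0]
	b.data = b.data[1:]
	b.notFull.Signal()
	b.mu.Unlock()
	return v
}

func main() {
	b := newBoundedBuffer(4)
	go func() { // producer
		for i := 0; i < 10; i++ {
			b.insert(i)
		}
	}()
	for i := 0; i < 10; i++ { // consumer
		fmt.Println(b.remove())
	}
}
\end{lstlisting}
A channel packages exactly this protection behind its insert and remove operations.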
     28
     29\section{Channel Size}\label{s:ChannelSize}
    2830Channels come in three flavours of buffers:
    2931\begin{enumerate}
     
    5254
    5355\gls{fcfs} is a fairness property that prevents unequal access to the shared resource and prevents starvation; however, it comes at a cost.
    54 Implementing an algorithm with \gls{fcfs} can lead to double blocking, where arriving threads block outside the doorway waiting for a thread in the lock entry-protocol and inside the doorway waiting for a thread in the CS.
     56Implementing an algorithm with \gls{fcfs} can lead to \Newterm{double blocking}, where arriving threads block outside the doorway waiting for a thread in the lock entry-protocol and inside the doorway waiting for a thread in the CS.
    5557An analogue is boarding an airplane: first you wait to get through security to the departure gates (short term), and then wait again at the departure gate for the airplane (long term).
    5658As such, algorithms that are not \gls{fcfs} (barging) can be more performant by skipping the wait for the CS and entering directly;
     
    5860
    5961\section{Channel Implementation}
    60 Currently, only the Go programming language~\cite{Go} provides user-level threading where the primary communication mechanism is channels.
     62Currently, only the Go programming language provides user-level threading where the primary communication mechanism is channels.
    6163Experiments were conducted that varied the producer-consumer problem algorithm and lock type used inside the channel.
    6264With the exception of non-\gls{fcfs} algorithms, no algorithm or lock usage in the channel implementation was found to be consistently more performant than Go's choice of algorithm and lock implementation.
     
    6668\PAB{Discuss the Go channel implementation. Need to tie in FIFO buffer and FCFS locking.}
    6769
    68 In this work, all channels are implemented with bounded buffers, so there is no zero-sized buffering.
    69 \CAP{I do have zero size channels implemented, however I don't focus on them since I think they are uninteresting as they are just a thin layer over binary semaphores. Should I mention that I support it but omit discussion or just leave it out?}
     70In this work, all channel sizes \see{Section~\ref{s:ChannelSize}} are implemented with bounded buffers.
     71However, only non-zero-sized buffers are analysed because of their complexity and higher usage.
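
For comparison, Go makes the size distinction explicit at channel construction; the following fragment (illustrative only) contrasts a zero-sized channel, where a send is a rendezvous with a receiver, with a non-zero-sized channel, where a send blocks only on a full buffer.
\begin{lstlisting}
package main

import "fmt"

func main() {
	unbuffered := make(chan int)  // size 0: a send is a rendezvous with a receiver
	buffered := make(chan int, 2) // size 2: sends block only when the buffer is full

	buffered <- 1 // completes immediately: buffer has space
	buffered <- 2 // completes immediately: buffer is now full

	go func() { unbuffered <- 3 }() // would block forever without a receiver
	fmt.Println(<-unbuffered, <-buffered, <-buffered)
}
\end{lstlisting}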
    7072
    7173\section{Safety and Productivity}
     
    7577\begin{itemize}
    7678\item Toggle-able statistics collection on channel behaviour that counts channel and blocking operations.
    77 Tracking blocking operations helps illustrate usage and then tune the channel size, where the aim is to reduce blocking.
    78 \item Deadlock detection on deallocation of the channel.
    79 If threads are blocked inside the channel when it terminates, this case is detected and the user is informed, as this can cause a deadlock.
     79Tracking blocking operations helps illustrate usage for tuning the channel size, where the aim is to reduce blocking.
     80
     81\item Deadlock detection on channel deallocation.
     82If threads are blocked inside a channel when it terminates, this case is detected and the user is informed, as this can cause a deadlock.
     83
    8084\item A @flush@ routine that delivers copies of an element to all waiting consumers, flushing the buffer.
    81 Programmers can use this to easily to broadcast data to multiple consumers.
     85Programmers use this mechanism to broadcast a sentinel value to multiple consumers.
    8286Additionally, the @flush@ routine is more performant than looping around the @insert@ operation since it can deliver the elements without having to reacquire mutual exclusion for each element sent; a Go comparison follows this list.
    8387\end{itemize}
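
As noted in the list above, Go has no direct equivalent of @flush@: the closest idioms are closing the channel, so receivers see the zero value as a sentinel, or looping one send per consumer, which re-enters the channel's critical section for each element. A sketch of the looping idiom (names are illustrative):
\begin{lstlisting}
package main

import "fmt"

// broadcastSentinel approximates flush in Go by looping one send per
// consumer; each send re-enters the channel's internal critical section,
// which is the per-element cost the CFA flush routine avoids.
func broadcastSentinel(ch chan<- int, consumers, sentinel int) {
	for i := 0; i < consumers; i++ {
		ch <- sentinel
	}
}

func main() {
	ch := make(chan int, 4)
	broadcastSentinel(ch, 4, -1) // -1 as the chosen sentinel value
	fmt.Println(<-ch, <-ch, <-ch, <-ch)
}
\end{lstlisting}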
    8488
    85 The other safety and productivity feature of \CFA channels deals with concurrent termination.
     89\subsection{Toggle-able Statistics}
     90\PAB{Discuss toggle-able statistics.}
     91
     92
     93\subsection{Deadlock Detection}
     94\PAB{Discuss deadlock detection.}
     95
     96\subsection{Program Shutdown}
     97% The other safety and productivity feature of \CFA channels deals with concurrent termination.
    8698Terminating concurrent programs is often one of the most difficult parts of writing concurrent code, particularly if graceful termination is needed.
    87 The difficulty of graceful termination often arises from the usage of synchronization primitives which need to be handled carefully during shutdown.
     99The difficulty of graceful termination often arises from the usage of synchronization primitives that need to be handled carefully during shutdown.
    88100It is easy to deadlock during termination if threads are left behind on synchronization primitives.
    89101Additionally, most synchronization primitives are prone to \gls{toctou} issues where there is a race between one thread checking the state of a concurrent object and another thread changing the state.
    90102\gls{toctou} issues with synchronization primitives often involve a race between one thread checking the primitive for blocked threads and another thread blocking on it.
    91 Channels are a particularly hard synchronization primitive to terminate since both sending and receiving off a channel can block.
     103Channels are a particularly hard synchronization primitive to terminate since both sending and receiving to/from a channel can block.
    92104Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks as threads trying to perform the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.
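
A concrete instance of such a race, sketched in Go with illustrative names: one thread checks a shutdown flag before sending while another thread sets the flag and closes the channel; because the check and the send are not atomic, the send can still land on the closed channel and panic.
\begin{lstlisting}
package main

import "sync/atomic"

var closed atomic.Bool // illustrative shutdown flag

// trySend checks the flag and then sends: a TOCTOU race, because shutdown()
// may run between the check and the send, making the send panic.
func trySend(ch chan int, v int) {
	if !closed.Load() { // check
		ch <- v // act: may panic if shutdown() ran in between
	}
}

func shutdown(ch chan int) {
	closed.Store(true)
	close(ch)
}

func main() {
	ch := make(chan int, 1)
	go shutdown(ch)
	trySend(ch, 42) // may panic: send on closed channel
}
\end{lstlisting}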
    93105
    94106% C_TODO: add reference to select chapter, add citation to go channels info
    95 Go channels provide a set of tools to help with concurrent shutdown.
     107\paragraph{Go channels} provide a set of tools to help with concurrent shutdown.
    96108Channels in Go have a @close@ operation and a \Go{select} statement, both of which can be used to help threads terminate.
    97 The \Go{select} statement will be discussed in \ref{}, where \CFA's @waituntil@ statement will be compared with the Go \Go{select} statement.
     109The \Go{select} statement is discussed in \ref{waituntil}, where \CFA's @waituntil@ statement is compared with the Go \Go{select} statement.
     110
    98111The @close@ operation on a channel in Go changes the state of the channel.
    99 When a channel is closed, sends to the channel will panic and additional calls to @close@ will panic.
    100 Receives are handled differently where receivers will never block on a closed channel and will continue to remove elements from the channel.
    101 Once a channel is empty, receivers can continue to remove elements, but will receive the zero-value version of the element type.
    102 To aid in avoiding unwanted zero-value elements, Go provides the ability to iterate over a closed channel to remove the remaining elements.
    103 These design choices for Go channels enforce a specific interaction style with channels during termination, where careful thought is needed to ensure that additional @close@ calls don't occur and that no sends occur after channels are closed.
     112When a channel is closed, sends to the channel panic, as do additional calls to @close@.
     113Receives are handled differently: receivers never block on a closed channel and continue to remove elements from it.
     114Once a channel is empty, receivers can continue to remove elements, but receive the zero-value version of the element type.
     115To avoid unwanted zero-value elements, Go provides the ability to iterate over a closed channel to remove the remaining elements.
     116These Go design choices enforce a specific interaction style with channels during termination: careful thought is needed to ensure additional @close@ calls do not occur and no sends occur after a channel is closed.
    104117These design choices fit Go's paradigm of error management, where users are expected to explicitly check for errors, rather than letting errors occur and catching them.
    105 If errors need to occur in Go, return codes are used to pass error information where they are needed.
    106 Note that panics in Go can be caught, but it is not considered an idiomatic way to write Go programs.
     118When errors must be reported in Go, return codes are used to pass error information up the call chain.
     119Note, panics in Go can be caught, but doing so is not the idiomatic way to write Go programs.
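
The following Go fragment (illustrative) demonstrates these semantics: buffered elements are still delivered after @close@, iterating drains the remainder, and a drained closed channel returns the zero value without blocking.
\begin{lstlisting}
package main

import "fmt"

func main() {
	ch := make(chan int, 4)
	ch <- 1
	ch <- 2
	close(ch)

	for v := range ch { // drains the remaining buffered elements, then stops
		fmt.Println(v)
	}

	v, ok := <-ch      // closed and empty: never blocks,
	fmt.Println(v, ok) // v is the zero value 0 and ok is false

	// ch <- 3         // would panic: send on closed channel
	// close(ch)       // would panic: close of closed channel
}
\end{lstlisting}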
    107120
    108121While Go's channel closing semantics are powerful enough to perform any concurrent termination needed by a program, their lack of ease of use leaves much to be desired.
    109 Since both closing and sending panic, once a channel is closed, a user often has to synchronize the senders to a channel before the channel can be closed to avoid panics.
    110 However, in doing so it renders the @close@ operation nearly useless, as the only utilities it provides are the ability to ensure that receivers no longer block on the channel, and will receive zero-valued elements.
    111 This can be useful if the zero-typed element is recognized as a sentinel value, but if another sentinel value is preferred, then @close@ only provides its non-blocking feature.
     122Since both closing and sending panic once a channel is closed, a user often has to synchronize the senders of a channel before it can be closed, to avoid panics.
     123However, doing so renders the @close@ operation nearly useless, as the only utility it provides is ensuring receivers no longer block on the channel and instead receive zero-valued elements.
     124This functionality is only useful if the zero-valued element is recognized as a sentinel value, but if another sentinel value is necessary, then @close@ only provides the non-blocking feature.
    112125To avoid \gls{toctou} issues during shutdown, a busy wait with a \Go{select} statement is often used to add or remove elements from a channel.
    113126Due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.
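
For example, the following Go fragment (illustrative) shows the select-with-default idiom: each attempted add or remove never blocks, so looping the attempt forms the busy wait used to avoid blocking on peers that may already have shut down.
\begin{lstlisting}
package main

// tryInsert and tryRemove never block: the default case is taken when the
// operation would block, so callers loop (busy wait) during shutdown instead
// of blocking on a channel whose peers may already have exited.
func tryInsert(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func tryRemove(ch chan int) (int, bool) {
	select {
	case v := <-ch:
		return v, true
	default:
		return 0, false
	}
}

func main() {
	ch := make(chan int, 1)
	for !tryInsert(ch, 7) { // busy wait until the insert succeeds
	}
	for { // busy wait until the remove succeeds
		if _, ok := tryRemove(ch); ok {
			break
		}
	}
}
\end{lstlisting}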
    114127
    115 In \CFA, exception handling is an encouraged paradigm and has full language support \cite{Beach21}.
    116 As such \CFA uses an exception based approach to channel shutdown that is symmetric for both producers and consumers, and supports graceful shutdown.Exceptions in \CFA support both termination and resumption.Termination exceptions operate in the same way as exceptions seen in many popular programming languages such as \CC, Python and Java.
    117 Resumption exceptions are a style of exception that when caught run the corresponding catch block in the same way that termination exceptions do.
    118 The difference between the exception handling mechanisms arises after the exception is handled.
    119 In termination handling, the control flow continues into the code following the catch after the exception is handled.
    120 In resumption handling, the control flow returns to the site of the @throw@, allowing the control to continue where it left off.
    121 Note that in resumption, since control can return to the point of error propagation, the stack is not unwound during resumption propagation.
    122 In \CFA if a resumption is not handled, it is reraised as a termination.
    123 This mechanism can be used to create a flexible and robust termination system for channels.
    124 
    125 When a channel in \CFA is closed, all subsequent calls to the channel will throw a resumption exception at the caller.
    126 If the resumption is handled, then the caller will proceed to attempt to complete their operation.
    127 If the resumption is not handled it is then rethrown as a termination exception.
    128 Or, if the resumption is handled, but the subsequent attempt at an operation would block, a termination exception is thrown.
    129 These termination exceptions allow for non-local transfer that can be used to great effect to eagerly and gracefully shut down a thread.
     128\paragraph{\CFA channels} have access to an extensive exception handling mechanism~\cite{Beach21}.
     129As such, \CFA uses an exception-based approach to channel shutdown that is symmetric for both producers and consumers, and supports graceful shutdown.
     130
     131Exceptions in \CFA support both termination and resumption.
     132\Newterm{Termination exception}s perform a dynamic call that unwinds the stack, preventing the exception handler from returning to the raise point, as in \CC, Python and Java.
     133\Newterm{Resumption exception}s perform a dynamic call that does not unwind the stack, allowing the exception handler to return to the raise point.
     134In \CFA, if a resumption exception is not handled, it is reraised as a termination exception.
     135This mechanism is used to create a flexible and robust termination system for channels.
     136
     137When a channel in \CFA is closed, all subsequent calls to the channel raise a resumption exception at the caller.
     138If the resumption is handled, the caller attempts to complete the channel operation.
     139However, if the channel operation would block, a termination exception is thrown.
     140If the resumption is not handled, the exception is rethrown as a termination.
     141These termination exceptions allow for non-local transfer that is used to great effect to eagerly and gracefully shut down a thread.
    130142When a channel is closed, if there are any blocked producers or consumers inside the channel, they are woken up and also have a resumption thrown at them.
    131143The resumption exception, @channel_closed@, has a couple of fields to aid in handling the exception.
     
    139151This should not be an issue, since termination is rarely a fast path of an application, and ensuring that termination can be implemented correctly and easily is the aim of the exception approach.
    140152
     153\section{\CFA / Go Channel Examples}
    141154To highlight the differences between \CFA's and Go's close semantics, an example program is presented.
    142 The program is a barrier implemented using two channels shown in Listings~\ref{l:cfa_chan_bar} and \ref{l:go_chan_bar}.
     155The program is a barrier implemented using two channels shown in Figure~\ref{f:ChannelBarrierTermination}.
    143156Both of these examples are implemented using \CFA syntax so that they can be easily compared.
    144 Listing~\ref{l:go_chan_bar} uses go-style channel close semantics and Listing~\ref{l:cfa_chan_bar} uses \CFA close semantics.
    145 In this problem it is infeasible to use the Go @close@ call since all tasks are both potentially producers and consumers, causing panics on close to be unavoidable.
    146 As such in Listing~\ref{l:go_chan_bar} to implement a flush routine for the buffer, a sentinel value of $-1$ has to be used to indicate to threads that they need to leave the barrier.
     157Figure~\ref{l:cfa_chan_bar} uses \CFA-style channel close semantics and Figure~\ref{l:go_chan_bar} uses Go-style close semantics.
      158In this problem, it is infeasible to use the Go @close@ call since all threads are potentially both producers and consumers, making panics on close unavoidable.
      159As such, in Figure~\ref{l:go_chan_bar}, to implement a flush routine for the buffer, a sentinel value of @-1@ has to be used to indicate to threads that they need to leave the barrier.
    147160This sentinel value has to be checked at two points.
    148161Furthermore, an additional flag @done@ is needed to communicate to threads, once they have left the barrier, that they are done.
     
    152165Also note that in the Go version~\ref{l:go_chan_bar}, the size of the barrier channels has to be larger than in the \CFA version to ensure that the main thread does not block when attempting to clear the barrier.
    153166
    154 \begin{cfa}[caption={\CFA channel barrier termination},label={l:cfa_chan_bar}]
     167\begin{figure}
     168\centering
     169
     170\begin{lrbox}{\myboxA}
     171\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    155172struct barrier {
    156         channel( int ) barWait;
    157         channel( int ) entryWait;
     173        channel( int ) barWait, entryWait;
    158174        int size;
    159 }
    160 void ?{}(barrier & this, int size) with(this) {
    161         barWait{size};
    162         entryWait{size};
     175};
     176void ?{}( barrier & this, int size ) with(this) {
     177        barWait{size};   entryWait{size};
    163178        this.size = size;
    164         for ( j; size )
    165                 insert( *entryWait, j );
    166 }
    167 
     179        for ( i; size )
     180                insert( entryWait, i );
     181}
     182void wait( barrier & this ) with(this) {
     183        int ticket = remove( entryWait );
     184
     185        if ( ticket == size - 1 ) {
     186                for ( i; size - 1 )
     187                        insert( barWait, i );
     188                return;
     189        }
     190        ticket = remove( barWait );
     191
     192        if ( size == 1 || ticket == size - 2 ) { // last ?
     193                for ( i; size )
     194                        insert( entryWait, i );
     195        }
     196}
    168197void flush(barrier & this) with(this) {
    169         close(barWait);
    170         close(entryWait);
    171 }
    172 void wait(barrier & this) with(this) {
    173         int ticket = remove( *entryWait );
     198        @close( barWait );   close( entryWait );@
     199}
     200enum { Threads = 4 };
     201barrier b{Threads};
     202
     203thread Thread {};
     204void main( Thread & this ) {
     205        @try {@
     206                for ()
     207                        wait( b );
     208        @} catch ( channel_closed * ) {}@
     209}
     210int main() {
     211        Thread t[Threads];
     212        sleep(10`s);
     213
     214        flush( b );
     215} // wait for threads to terminate
     216\end{cfa}
     217\end{lrbox}
     218
     219\begin{lrbox}{\myboxB}
     220\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     221struct barrier {
     222        channel( int ) barWait, entryWait;
     223        int size;
     224};
     225void ?{}( barrier & this, int size ) with(this) {
     226        barWait{size + 1};   entryWait{size + 1};
     227        this.size = size;
     228        for ( i; size )
     229                insert( entryWait, i );
     230}
     231void wait( barrier & this ) with(this) {
     232        int ticket = remove( entryWait );
     233        @if ( ticket == -1 ) { insert( entryWait, -1 ); return; }@
    174234        if ( ticket == size - 1 ) {
    175                 for ( j; size - 1 )
    176                         insert( *barWait, j );
     235                for ( i; size - 1 )
     236                        insert( barWait, i );
    177237                return;
    178238        }
    179         ticket = remove( *barWait );
    180 
    181         // last one out
    182         if ( size == 1 || ticket == size - 2 ) {
    183                 for ( j; size )
    184                         insert( *entryWait, j );
    185         }
    186 }
    187 barrier b{Tasks};
    188 
    189 // thread main
    190 void main(Task & this) {
    191         try {
    192                 for ( ;; ) {
    193                         wait( b );
    194                 }
    195         } catch ( channel_closed * e ) {}
    196 }
    197 
     239        ticket = remove( barWait );
     240        @if ( ticket == -1 ) { insert( barWait, -1 ); return; }@
     241        if ( size == 1 || ticket == size - 2 ) { // last ?
     242                for ( i; size )
     243                        insert( entryWait, i );
     244        }
     245}
     246void flush(barrier & this) with(this) {
     247        @insert( entryWait, -1 );   insert( barWait, -1 );@
     248}
     249enum { Threads = 4 };
     250barrier b{Threads};
     251@bool done = false;@
     252thread Thread {};
     253void main( Thread & this ) {
     254        for () {
     255          @if ( done ) break;@
     256                wait( b );
     257        }
     258}
    198259int main() {
    199         {
    200                 Task t[Tasks];
    201 
    202                 sleep(10`s);
    203                 flush( b );
    204         } // wait for tasks to terminate
    205         return 0;
    206 }
     260        Thread t[Threads];
     261        sleep(10`s);
     262        done = true;
     263        flush( b );
     264} // wait for threads to terminate
    207265\end{cfa}
    208 
    209 \begin{cfa}[caption={Go channel barrier termination},label={l:go_chan_bar}]
    210 
    211 struct barrier {
    212         channel( int ) barWait;
    213         channel( int ) entryWait;
    214         int size;
    215 }
    216 void ?{}(barrier & this, int size) with(this) {
    217         barWait{size + 1};
    218         entryWait{size + 1};
    219         this.size = size;
    220         for ( j; size )
    221                 insert( *entryWait, j );
    222 }
    223 
    224 void flush(barrier & this) with(this) {
    225         insert( *entryWait, -1 );
    226         insert( *barWait, -1 );
    227 }
    228 void wait(barrier & this) with(this) {
    229         int ticket = remove( *entryWait );
    230         if ( ticket == -1 ) {
    231                 insert( *entryWait, -1 );
    232                 return;
    233         }
    234         if ( ticket == size - 1 ) {
    235                 for ( j; size - 1 )
    236                         insert( *barWait, j );
    237                 return;
    238         }
    239         ticket = remove( *barWait );
    240         if ( ticket == -1 ) {
    241                 insert( *barWait, -1 );
    242                 return;
    243         }
    244 
    245         // last one out
    246         if ( size == 1 || ticket == size - 2 ) {
    247                 for ( j; size )
    248                         insert( *entryWait, j );
    249         }
    250 }
    251 barrier b;
    252 
    253 bool done = false;
    254 // thread main
    255 void main(Task & this) {
    256         for ( ;; ) {
    257                 if ( done ) break;
    258                 wait( b );
    259         }
    260 }
    261 
    262 int main() {
    263         {
    264                 Task t[Tasks];
    265 
    266                 sleep(10`s);
    267                 done = true;
    268 
    269                 flush( b );
    270         } // wait for tasks to terminate
    271         return 0;
    272 }
    273 \end{cfa}
    274 
    275 In Listing~\ref{l:cfa_resume} an example of channel closing with resumption is used.
    276 This program uses resumption in the @Consumer@ thread main to ensure that all elements in the channel are removed before the consumer thread terminates.
    277 The producer only has a @catch@ so the moment it receives an exception it terminates, whereas the consumer will continue to remove from the closed channel via handling resumptions until the buffer is empty, which then throws a termination exception.
    278 If the same program was implemented in Go it would require explicit synchronization with both producers and consumers by some mechanism outside the channel to ensure that all elements were removed before task termination.
     266\end{lrbox}
     267
     268\subfloat[\CFA style]{\label{l:cfa_chan_bar}\usebox\myboxA}
     269\hspace*{3pt}
     270\vrule
     271\hspace*{3pt}
     272\subfloat[Go style]{\label{l:go_chan_bar}\usebox\myboxB}
     273\caption{Channel Barrier Termination}
     274\label{f:ChannelBarrierTermination}
     275\end{figure}
     276
     277Listing~\ref{l:cfa_resume} is an example of a channel closing with resumption.
     278The @Producer@ thread-main knows to stop producing when the @insert@ call on a closed channel raises exception @channel_closed@.
     279The @Consumer@ thread-main knows to stop consuming after all elements of a closed channel are removed and the call to @remove@ would block.
     280Hence, the consumer knows the moment the channel closes because a resumption exception is raised, caught, and ignored, and then control returns to @remove@ to return another item from the buffer.
     281Only when the buffer is drained and the call to @remove@ would block is a termination exception raised to stop consuming.
     282The same program in Go would require explicit synchronization among producers and consumers by a mechanism outside the channel to ensure all elements are removed before threads terminate.
    279283
    280284\begin{cfa}[caption={\CFA channel resumption usage},label={l:cfa_resume}]
    281285channel( int ) chan{ 128 };
    282 
    283 // Consumer thread main
    284 void main(Consumer & this) {
     286thread Producer {};
     287void main( Producer & this ) {
     288        @try {@
     289                for ( i; 0~$@$ )
     290                        insert( chan, i );
     291        @} catch( channel_closed * ) {}@                $\C[3in]{// channel closed}$
     292}
     293thread Consumer {};
     294void main( Consumer & this ) {
    285295        size_t runs = 0;
    286         try {
    287                 for ( ;; ) {
    288                         remove( chan );
     296        @try {@
     297                for () {
     298                        int i = remove( chan );
    289299                }
    290         } catchResume ( channel_closed * e ) {}
    291         catch ( channel_closed * e ) {}
    292 }
    293 
    294 // Producer thread main
    295 void main(Producer & this) {
    296         int j = 0;
    297         try {
    298                 for ( ;;j++ ) {
    299                         insert( chan, j );
    300                 }
    301         } catch ( channel_closed * e ) {}
    302 }
    303 
    304 int main( int argc, char * argv[] ) {
    305         {
    306                 Consumers c[4];
    307                 Producer p[4];
    308 
    309                 sleep(10`s);
    310 
    311                 for ( i; Channels )
    312                         close( channels[i] );
    313         }
    314         return 0;
     300        @} catchResume( channel_closed * ) {}@  $\C{// remaining item in buffer \(\Rightarrow\) remove it}$
     301          @catch( channel_closed * ) {}@                $\C{// blocking call to remove \(\Rightarrow\) buffer empty}$
     302}
     303int main() {
     304        enum { Processors = 8 };
      305        processor p[Processors - 1];                    $\C{// one processor per thread, plus the existing processor}$
      306        Consumer c[Processors / 2];                             $\C{// share processors}$
      307        Producer prod[Processors / 2];
     308        sleep( 10`s );
     309        @close( chan );@                                                $\C{// stop producer and consumer}\CRT$
    315310}
    316311\end{cfa}
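
For contrast, the following sketch (thread counts and names are illustrative) shows the same program written with Go channels: because sending on a closed channel panics, the producers must be signalled and joined by mechanisms outside the channel before it is closed; only then can the consumers drain the buffer and terminate.
\begin{lstlisting}
package main

import (
	"sync"
	"time"
)

func main() {
	data := make(chan int, 128)
	stop := make(chan struct{}) // shutdown signal outside the data channel
	var prods, cons sync.WaitGroup

	for p := 0; p < 4; p++ { // producers
		prods.Add(1)
		go func() {
			defer prods.Done()
			for i := 0; ; i++ {
				select {
				case data <- i:
				case <-stop: // told to stop producing
					return
				}
			}
		}()
	}
	for c := 0; c < 4; c++ { // consumers
		cons.Add(1)
		go func() {
			defer cons.Done()
			for range data { // removes elements until data is closed and drained
			}
		}()
	}

	time.Sleep(10 * time.Second)
	close(stop)  // 1. signal the producers to stop
	prods.Wait() // 2. join the producers so no further sends can occur
	close(data)  // 3. closing the channel can no longer panic a sender
	cons.Wait()  // 4. the consumers finish once the buffer is drained
}
\end{lstlisting}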
     
    318313\section{Performance}
    319314
    320 Given that the base implementation of the \CFA channels is very similar to the Go implementation, this section aims to show that the performance of the two implementations are comparable.
    321 One microbenchmark is conducted to compare Go and \CFA.
    322 The benchmark is a ten second experiment where producers and consumers operate on a channel in parallel and throughput is measured.
      315Given that the base implementation of the \CFA channels is very similar to the Go implementation, this section aims to show that the performance of the two implementations is comparable.
      316The microbenchmark for the channel comparison is similar to Listing~\ref{l:cfa_resume}, where the number of threads and processors is set from the command line.
     317The processors are divided equally between producers and consumers, with one producer or consumer owning each core.
    323318The number of cores is varied to measure how throughput scales.
    324 The cores are divided equally between producers and consumers, with one producer or consumer owning each core.
     319
    325320The results of the benchmark are shown in Figure~\ref{f:chanPerf}.
    326321The performance of Go and \CFA channels on this microbenchmark is comparable.
    327 Note, it is expected for the performance to decline as the number of cores increases as the channel operations all occur in a critical section so an increase in cores results in higher contention with no increase in parallelism.
    328 
      322Note, the performance should decline as the number of cores increases because the channel operations occur in a critical section, so more cores result in higher contention with no increase in parallelism.
    329323
    330324\begin{figure}