source: doc/proposals/concurrency/text/basics.tex @ 3364962

Last change on this file since 3364962 was 3364962, checked in by Thierry Delisle <tdelisle@…>, 4 years ago

Updated concurrency draft and added new section for implementation.

% ======================================================================
% ======================================================================
\chapter{Concurrency Basics}\label{basics}
% ======================================================================
% ======================================================================
Before any detailed discussion of concurrency and parallelism in \CFA, it is important to describe the basics of concurrency and how they are expressed in \CFA user code.

\section{Basics of concurrency}
At its core, concurrency is based on having multiple call stacks and scheduling threads of execution across these stacks. Concurrency without parallelism only requires multiple call stacks (or contexts) for a single thread of execution.

Indeed, execution with a single thread and multiple stacks, where the thread schedules itself deterministically across the stacks, is called coroutining; execution with a single thread and multiple stacks, but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks, is called concurrency.

Therefore, a minimal concurrency system can be achieved by creating coroutines that, instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackful coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler, and the whole system now follows a cooperative threading model \cit. The oracle/scheduler can either be a stackless or stackful entity and correspondingly requires one or two context switches to run a different coroutine. In any case, a subset of concurrency-related challenges starts to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Using mutual exclusion or synchronization are ways of limiting the non-determinism in a system. A scheduler introduces uncertainty about the order of execution, while preemption introduces uncertainty about where context switches occur. Now it is important to understand that uncertainty is not undesirable; uncertainty can often be used by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit.

\section{\protect\CFA 's Thread Building Blocks}
One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. Being a system-level language also means programmers expect to choose precisely which features they need and what cost they are willing to pay.

\section{Coroutines: A stepping stone}\label{coroutine}
While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolves around two features: independent call stacks and \code{suspend}/\code{resume}.

A good example of a problem made easier with coroutines is generating the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{fig:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approaches suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence will be used, while the rightmost approach requires the user to hold internal state between calls on behalf of the sequence generator and makes it much harder to handle corner cases like the Fibonacci seed.
\begin{figure}
\label{fig:fibonacci-c}
\begin{center}
\begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c}
\begin{ccode}[tabsize=2]
//Using callbacks
void fibonacci_func(
	int n,
	void (*callback)(int)
) {
	int f1 = 0;
	int f2 = 1;
	int next, i;
	for(i = 0; i < n; i++)
	{
		if(i <= 1)
			next = i;
		else {
			next = f1 + f2;
			f1 = f2;
			f2 = next;
		}
		callback(next);
	}
}
\end{ccode}&\begin{ccode}[tabsize=2]
//Using output array
void fibonacci_array(
	int n,
	int * array
) {
	int f1 = 0; int f2 = 1;
	int next, i;
	for(i = 0; i < n; i++)
	{
		if(i <= 1)
			next = i;
		else {
			next = f1 + f2;
			f1 = f2;
			f2 = next;
		}
		*array = next;
		array++;
	}
}
\end{ccode}&\begin{ccode}[tabsize=2]
//Using external state
typedef struct {
	int f1, f2;
} iterator_t;

int fibonacci_state(
	iterator_t * it
) {
	int f;
	f = it->f1 + it->f2;
	it->f2 = it->f1;
	it->f1 = f;
	return f;
}





\end{ccode}
\end{tabular}
\end{center}
\caption{Different implementations of a Fibonacci sequence generator in C.}
\end{figure}

Figure \ref{fig:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.

\begin{figure}
\label{fig:fibonacci-cfa}
\begin{cfacode}
coroutine Fibonacci {
	int fn; //used for communication
};

void ?{}(Fibonacci & this) { //constructor
	this.fn = 0;
}

//main automatically called on first resume
void main(Fibonacci & this) {
	int fn1, fn2;		//retained between resumes
	this.fn = 0;
	fn1 = this.fn;
	suspend(this);		//return to last resume

	this.fn = 1;
	fn2 = fn1;
	fn1 = this.fn;
	suspend(this);		//return to last resume

	for ( ;; ) {
		this.fn = fn1 + fn2;
		fn2 = fn1;
		fn1 = this.fn;
		suspend(this);	//return to last resume
	}
}

int next(Fibonacci & this) {
	resume(this); //transfer to last suspend
	return this.fn;
}

void main() { //regular program main
	Fibonacci f1, f2;
	for ( int i = 1; i <= 10; i += 1 ) {
		sout | next( f1 ) | next( f2 ) | endl;
	}
}
\end{cfacode}
\caption{Implementation of Fibonacci using coroutines.}
\end{figure}

\subsection{Construction}
One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.

The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users expect both to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As with regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.

Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:

\begin{cfacode}
//async: Runs function asynchronously on another thread
forall(otype T)
extern void async(void (*func)(T*), T* obj);

forall(otype T)
void noop(T *) {}

void bar() {
	int a;
	async(noop, &a);
}
\end{cfacode}

The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:

\begin{ccode}
extern void async(/* omitted */, void (*func)(void *), void *obj);

void noop(/* omitted */, void *obj){}

void bar(){
	int a;
	void _thunk0(int *_p0){
		/* omitted */
		noop(/* omitted */, _p0);
	}
	/* omitted */
	async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
}
\end{ccode}
The problem in this example is a storage-management issue: the function pointer \code{_thunk0} is only valid until the end of the block. This extra challenge limits which solutions are viable because storing the function pointer for too long causes undefined behaviour; i.e., the stack-based thunk is destroyed before it is used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that the routines cannot be passed outside the scope of the functions in which they are declared. The case of coroutines and threads is simply an extension of this problem to multiple call stacks.

\subsection{Alternative: Composition}
One solution to this challenge is to use composition/containment, where users insert a coroutine field that contains the necessary information to manage the coroutine.

\begin{cfacode}
struct Fibonacci {
	int fn; //used for communication
	coroutine c; //composition
};

void ?{}(Fibonacci & this) {
	this.fn = 0;
	(this.c){}; //Call constructor to initialize coroutine
}
\end{cfacode}
There are two downsides to this approach. The first, which is relatively minor, is that the runtime must be made aware of the main routine pointer. This information must either be stored in the coroutine runtime data or in its static type structure. When using composition, all coroutine handles have the same static type structure, which means the pointer to the main routine needs to be part of the runtime data. This requirement means the coroutine data must be made larger to store a value that is actually a compile-time constant (the address of the main routine). The second problem, which is both subtle and significant, is that users can now get the initialization order of coroutines wrong. Indeed, every field of a \CFA struct is constructed, but in declaration order, unless users explicitly write otherwise. This semantics means that users who forget to initialize the coroutine handle may resume the coroutine with an uninitialized object. For coroutines, this is unlikely to be a problem; for threads, however, it is a significant problem. Figure \ref{fig:fmt-line} shows the \code{Format} coroutine, which rearranges text in order to group characters into blocks of fixed size. This is a good example where the control flow is made much simpler by being able to resume the coroutine from the constructor, and it highlights the idea that interesting control flow can occur in the constructor.
\begin{figure}
\label{fig:fmt-line}
\begin{cfacode}[tabsize=3]
//format characters into blocks of 4 and groups of 5 blocks per line
coroutine Format {
	char ch;				//used for communication
	int g, b;				//global because used in destructor
};

void ?{}(Format & fmt) {
	resume( fmt );				//prime (start) coroutine
}

void ^?{}(Format & fmt) with fmt {
	if ( fmt.g != 0 || fmt.b != 0 )
		sout | endl;
}

void main(Format & fmt) with fmt {
	for ( ;; ) {				//for as many characters
		for(g = 0; g < 5; g++) {	//groups of 5 blocks
			for(b = 0; b < 4; b++) {	//blocks of 4 characters
				suspend();
				sout | ch;	//print character
			}
			sout | "  ";		//print block separator
		}
		sout | endl;			//print group separator
	}
}

void prt(Format & fmt, char ch) {
	fmt.ch = ch;
	resume(fmt);
}

int main() {
	Format fmt;
	char ch;
	Eof: for ( ;; ) {			//read until end of file
		sin | ch;			//read one character
		if(eof(sin)) break Eof;		//eof ?
		prt(fmt, ch);			//push character for formatting
	}
}
\end{cfacode}
\caption{Formatting text into lines of 5 blocks of 4 characters.}
\end{figure}


\subsection{Alternative: Reserved keyword}
The next alternative is to use language support to annotate coroutines as follows:

\begin{cfacode}
coroutine Fibonacci {
	int fn; //used for communication
};
\end{cfacode}
This means the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users who want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can both be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.

\subsection{Alternative: Lambda Objects}

For coroutines as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types:
\begin{cfacode}
asymmetric_coroutine<>::pull_type
asymmetric_coroutine<>::push_type
symmetric_coroutine<>::call_type
symmetric_coroutine<>::yield_type
\end{cfacode}
Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well-known examples. The main problem with this approach is that thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.

A variation of this approach is to use a simple function pointer in the same way pthread does for threads:
\begin{cfacode}
void foo( coroutine_t cid, void * arg ) {
	int * value = (int *)arg;
	//Coroutine body
}

int main() {
	int value = 0;
	coroutine_t cid = coroutine_create( &foo, (void*)&value );
	coroutine_resume( &cid );
}
\end{cfacode}
This semantics is more common for thread interfaces than coroutines but would work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.

\subsection{Alternative: Trait-based coroutines}

Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine.

\begin{cfacode}
trait is_coroutine(dtype T) {
	void main(T & this);
	coroutine_desc * get_coroutine(T & this);
};

forall( dtype T | is_coroutine(T) ) void suspend(T &);
forall( dtype T | is_coroutine(T) ) void resume (T &);
\end{cfacode}
This ensures an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.

\begin{center}
\begin{tabular}{c c c}
\begin{cfacode}[tabsize=3]
coroutine MyCoroutine {
	int someValue;
};
\end{cfacode} & == & \begin{cfacode}[tabsize=3]
struct MyCoroutine {
	int someValue;
	coroutine_desc __cor;
};

static inline
coroutine_desc * get_coroutine(
	struct MyCoroutine & this
) {
	return &this.__cor;
}

void main(struct MyCoroutine & this);
\end{cfacode}
\end{tabular}
\end{center}

The combination of these two approaches allows users new to coroutining and concurrency to have an easy and concise specification, while more advanced users have tighter control over memory layout and initialization.

\section{Thread Interface}\label{threads}
The basic building blocks of multithreading in \CFA are \glspl{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallelism mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using a struct declaration \code{thread} as follows:

\begin{cfacode}
	thread foo {};
\end{cfacode}

As for coroutines, the keyword is a thin wrapper around a \CFA trait:

\begin{cfacode}
trait is_thread(dtype T) {
	void ^?{}(T & mutex this);
	void main(T & this);
	thread_desc* get_thread(T & this);
};
\end{cfacode}

Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread). As such, the \code{main} routine of a thread can be defined as
\begin{cfacode}
	thread foo {};

	void main(foo & this) {
		sout | "Hello World!" | endl;
	}
\end{cfacode}

In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously.
\begin{cfacode}
	typedef void (*voidFunc)(int);

	thread FuncRunner {
		voidFunc func;
		int arg;
	};

	void ?{}(FuncRunner & this, voidFunc inFunc, int arg) {
		this.func = inFunc;
		this.arg = arg;
	}

	void main(FuncRunner & this) {
		this.func( this.arg );
	}
\end{cfacode}

A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \acrshort{api}.

Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs.
\begin{cfacode}
thread World;

void main(World & this) {
	sout | "World!" | endl;
}

void main() {
	World w;
	//Thread forks here

	//Printing "Hello " and "World!" are run concurrently
	sout | "Hello " | endl;

	//Implicit join at end of scope
}
\end{cfacode}

This semantics has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot make any programming errors regarding the thread lifetime, and it naturally scales to multiple threads, meaning basic synchronization is very simple.

\begin{cfacode}
thread MyThread {
	//...
};

//main
void main(MyThread & this) {
	//...
}

void foo() {
	MyThread thrds[10];
	//Start 10 threads at the beginning of the scope

	DoStuff();

	//Wait for the 10 threads to finish
}
\end{cfacode}

However, one drawback of this approach is that threads always form a lattice, that is, they are always destroyed in the opposite order of construction because of the block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.

\begin{cfacode}
thread MyThread {
	//...
};

void main(MyThread & this) {
	//...
}

void foo() {
	MyThread * long_lived;
	{
		//Start a thread at the beginning of the scope
		MyThread short_lived;

		//create another thread that will outlive the thread in this scope
		long_lived = new MyThread;

		DoStuff();

		//Wait for the thread short_lived to finish
	}
	DoMoreStuff();

	//Now wait for the long_lived to finish
	delete long_lived;
}
\end{cfacode}