% ======================================================================
% ======================================================================
\chapter{Concurrency Basics}\label{basics}
% ======================================================================
% ======================================================================
Before any detailed discussion of concurrency and parallelism in \CFA, it is important to describe the basics of concurrency and how they are expressed in \CFA user code.

\section{Basics of concurrency}
At its core, concurrency is based on having multiple call-stacks and scheduling threads of execution among these stacks. Concurrency without parallelism only requires having multiple call-stacks (or contexts) for a single thread of execution.

Execution with a single thread and multiple stacks, where the thread self-schedules deterministically across the stacks, is called coroutining. Execution with a single thread and multiple stacks, but where the thread is scheduled across the stacks by an oracle (non-deterministically from the thread's perspective), is called concurrency.

Therefore, a minimal concurrency system can be achieved by creating coroutines which, instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackful coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading model (a.k.a., non-preemptive scheduling). The oracle/scheduler can either be a stackless or stackful entity and correspondingly requires one or two context switches to run a different coroutine. In either case, a subset of concurrency-related challenges starts to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption.

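The following is a minimal sketch of this idea using the coroutine features introduced later in this chapter; the type \code{Worker} and the routine \code{cooperative_demo} are hypothetical names used only for illustration. The loop in \code{cooperative_demo} plays the role of the oracle/scheduler, deciding which stack runs next, while each coroutine cooperates by suspending.
\begin{cfacode}
coroutine Worker {
  int id; //used only for printing
};

void main(Worker & this) {
  for ( ;; ) {
    sout | "worker" | this.id | "runs" | endl;
    suspend(this); //cooperatively hand control back to the scheduler
  }
}

void cooperative_demo() {
  Worker w1, w2;
  w1.id = 1; w2.id = 2; //set before the first resume starts each main
  for ( int i = 0; i < 4; i += 1 ) {
    //trivial round-robin "oracle": decide which stack runs next
    if ( i % 2 == 0 ) resume( w1 );
    else resume( w2 );
  }
}
\end{cfacode}
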
A scheduler introduces order-of-execution uncertainty, while preemption introduces uncertainty about where context switches occur. Mutual exclusion and synchronisation are ways of limiting non-determinism in a concurrent system. It is important to understand that this uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.

\section{\protect\CFA 's Thread Building Blocks}
One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs that take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. Being a system-level language also means programmers expect to choose precisely which features they need and which costs they are willing to pay.

\section{Coroutines: A stepping stone}\label{coroutine}
While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system. Coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolves around two features: independent call-stacks and \code{suspend}/\code{resume}.

\begin{figure}
\begin{center}
\begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c}
\begin{ccode}[tabsize=2]
//Using callbacks
void fibonacci_func(
  int n,
  void (*callback)(int)
) {
  int f1 = 0;
  int f2 = 1;
  int next, i;
  for(i = 0; i < n; i++)
  {
    if(i <= 1)
      next = i;
    else {
      next = f1 + f2;
      f1 = f2;
      f2 = next;
    }
    callback(next);
  }
}

int main() {
  void print_fib(int n) {
    printf("%d\n", n);
  }

  fibonacci_func(
    10, print_fib
  );



}
\end{ccode}&\begin{ccode}[tabsize=2]
//Using output array
void fibonacci_array(
  int n,
  int * array
) {
  int f1 = 0; int f2 = 1;
  int next, i;
  for(i = 0; i < n; i++)
  {
    if(i <= 1)
      next = i;
    else {
      next = f1 + f2;
      f1 = f2;
      f2 = next;
    }
    array[i] = next;
  }
}


int main() {
  int a[10];

  fibonacci_array(
    10, a
  );

  for(int i=0;i<10;i++){
    printf("%d\n", a[i]);
  }

}
\end{ccode}&\begin{ccode}[tabsize=2]
//Using external state
typedef struct {
  int f1, f2;
} Iterator_t;

int fibonacci_state(
  Iterator_t * it
) {
  int f;
  f = it->f1 + it->f2;
  it->f2 = it->f1;
  it->f1 = (f > 1) ? f : 1;
  return f;
}







int main() {
  Iterator_t it={0,0};

  for(int i=0;i<10;i++){
    printf("%d\n",
      fibonacci_state(
        &it
      )
    );
  }

}
\end{ccode}
\end{tabular}
\end{center}
\caption{Different implementations of a Fibonacci sequence generator in C.}
\label{lst:fibonacci-c}
\end{figure}

A good example of a problem made easier with coroutines is a generator, such as the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated from how it is used. Figure \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approaches suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires the caller to hold the generator's internal state between calls and makes it much harder to handle corner cases like the Fibonacci seed.

Figure \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.

\begin{figure}
\begin{cfacode}
coroutine Fibonacci {
  int fn; //used for communication
};

void ?{}(Fibonacci & this) { //constructor
  this.fn = 0;
}

//main automatically called on first resume
void main(Fibonacci & this) with (this) {
  int fn1, fn2;    //retained between resumes
  fn  = 0;
  fn1 = fn;
  suspend(this);   //return to last resume

  fn  = 1;
  fn2 = fn1;
  fn1 = fn;
  suspend(this);   //return to last resume

  for ( ;; ) {
    fn  = fn1 + fn2;
    fn2 = fn1;
    fn1 = fn;
    suspend(this); //return to last resume
  }
}

int next(Fibonacci & this) {
  resume(this); //transfer to last suspend
  return this.fn;
}

void main() { //regular program main
  Fibonacci f1, f2;
  for ( int i = 1; i <= 10; i += 1 ) {
    sout | next( f1 ) | next( f2 ) | endl;
  }
}
\end{cfacode}
\caption{Implementation of Fibonacci using coroutines.}
\label{lst:fibonacci-cfa}
\end{figure}

Figure \ref{lst:fmt-line} shows the \code{Format} coroutine, which rearranges text in order to group characters into blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.

\begin{figure}
\begin{cfacode}[tabsize=3]
//format characters into blocks of 4 and groups of 5 blocks per line
coroutine Format {
  char ch;   //used for communication
  int g, b;  //retained in the coroutine because used in destructor
};

void ?{}(Format & fmt) {
  resume( fmt );  //prime (start) coroutine
}

void ^?{}(Format & fmt) with (fmt) {
  if ( fmt.g != 0 || fmt.b != 0 )
    sout | endl;
}

void main(Format & fmt) with (fmt) {
  for ( ;; ) {                    //for as many characters
    for(g = 0; g < 5; g++) {      //groups of 5 blocks
      for(b = 0; b < 4; b++) {    //blocks of 4 characters
        suspend();
        sout | ch;                //print character
      }
      sout | "  ";                //print block separator
    }
    sout | endl;                  //print group separator
  }
}

void prt(Format & fmt, char ch) {
  fmt.ch = ch;
  resume(fmt);
}

int main() {
  Format fmt;
  char ch;
  Eof: for ( ;; ) {            //read until end of file
    sin | ch;                  //read one character
    if(eof(sin)) break Eof;    //eof ?
    prt(fmt, ch);              //push character for formatting
  }
}
\end{cfacode}
\caption{Formatting text into lines of 5 blocks of 4 characters.}
\label{lst:fmt-line}
\end{figure}

\subsection{Construction}
One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.

The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users expect both to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As with regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.

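As a minimal sketch of the leaking problem, consider a constructor that publishes the coroutine before construction completes; the type \code{Early} and the global \code{shared_handle} are hypothetical names used only for illustration:
\begin{cfacode}
coroutine Early {
  int x;
};

Early * shared_handle; //illustrative global through which the handle escapes

void ?{}(Early & this) {
  shared_handle = &this; //the coroutine escapes here, before the runtime
                         //has finished creating and priming its stack
  this.x = 0;
}

void main(Early & this) {
  //any resume reaching this coroutine through shared_handle before its
  //construction completes would find a coroutine that is not yet ready
  suspend(this);
}
\end{cfacode}
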
Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:

\begin{cfacode}
//async: Runs function asynchronously on another thread
forall(otype T)
extern void async(void (*func)(T*), T* obj);

forall(otype T)
void noop(T*) {}

void bar() {
  int a;
  async(noop, &a); //start thread running noop with argument a
}
\end{cfacode}

The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:

\begin{ccode}
extern void async(/* omitted */, void (*func)(void *), void *obj);

void noop(/* omitted */, void *obj){}

void bar(){
  int a;
  void _thunk0(int *_p0){
    /* omitted */
    noop(/* omitted */, _p0);
  }
  /* omitted */
  async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
}
\end{ccode}
The problem in this example is a storage management issue: the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behaviour; i.e., the stack-based thunk is destroyed before it can be used. This challenge is an extension of the challenges that come with second-class routines. Indeed, GCC nested routines have the same limitation, namely that a nested routine cannot be passed outside of its declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.

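The same storage issue can be illustrated without polymorphism by letting a nested routine escape its declaration scope; the names \code{saved} and \code{capture} below are illustrative only:
\begin{cfacode}
void (*saved)(void); //holds the escaped routine pointer

void capture() {
  int x = 7;
  void show(void) { //nested routine living on capture's stack frame
    sout | x | endl;
  }
  saved = show; //the pointer escapes its declaration scope
}

int main() {
  capture();
  saved(); //undefined behaviour: capture's stack frame is gone
}
\end{cfacode}
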
\subsection{Alternative: Composition}
One solution to this challenge is to use composition/containment, where fields are added to the type to manage the coroutine.

\begin{cfacode}
struct Fibonacci {
  int fn; //used for communication
  coroutine c; //composition
};

void FibMain(void *) {
  //...
}

void ?{}(Fibonacci & this) {
  this.fn = 0;
  //Call constructor to initialize coroutine
  (this.c){FibMain};
}
\end{cfacode}
The downside of this approach is that users need to correctly construct the coroutine handle before using it. As with any other object, users must carefully choose the construction order to prevent usage of unconstructed objects. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, as in the previous example. This opens the door to user errors and requires extra runtime storage to pass information that could be known statically.

\subsection{Alternative: Reserved keyword}
The next alternative is to use language support to annotate coroutines as follows:

\begin{cfacode}
coroutine Fibonacci {
  int fn; //used for communication
};
\end{cfacode}
The \code{coroutine} keyword means the compiler can find and inject code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, being able to implement coroutines without language support also demonstrates the expressiveness of the programming language. While the keyword is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.

\subsection{Alternative: Lambda Objects}

For coroutines as for threads, many implementations are based on routine pointers or function objects\cite{Butenhof97, ANSI14:C++, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types:
\begin{cfacode}
asymmetric_coroutine<>::pull_type
asymmetric_coroutine<>::push_type
symmetric_coroutine<>::call_type
symmetric_coroutine<>::yield_type
\end{cfacode}
Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well-known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, adding support for routine/lambda-based coroutines adds very little.

A variation of this approach is to use a simple function pointer in the same way pthread does for threads:
\begin{cfacode}
void foo( coroutine_t cid, void * arg ) {
  int * value = (int *)arg;
  //Coroutine body
}

int main() {
  int value = 0;
  coroutine_t cid = coroutine_create( &foo, (void*)&value );
  coroutine_resume( &cid );
}
\end{cfacode}
This semantics is more common for thread interfaces than for coroutines, but works equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.

\subsection{Alternative: Trait-based coroutines}

Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine.

\begin{cfacode}
trait is_coroutine(dtype T) {
      void main(T & this);
      coroutine_desc * get_coroutine(T & this);
};

forall( dtype T | is_coroutine(T) ) void suspend(T &);
forall( dtype T | is_coroutine(T) ) void resume (T &);
\end{cfacode}
This ensures an object is not a coroutine until \code{resume} is called on it. Correspondingly, any object passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} simply implements the getter and the forward declarations required so that users only have to implement the main routine.

\begin{center}
\begin{tabular}{c c c}
\begin{cfacode}[tabsize=3]
coroutine MyCoroutine {
  int someValue;
};
\end{cfacode} & == & \begin{cfacode}[tabsize=3]
struct MyCoroutine {
  int someValue;
  coroutine_desc __cor;
};

static inline
coroutine_desc * get_coroutine(
  struct MyCoroutine & this
) {
  return &this.__cor;
}

void main(struct MyCoroutine & this);
\end{cfacode}
\end{tabular}
\end{center}

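As a rough sketch of the flexibility this affords, a coroutine can also be written directly against the trait, without the keyword, for example to control exactly where the descriptor sits inside the object; the type \code{Countdown} below is a hypothetical example rather than part of the \CFA library:
\begin{cfacode}
struct Countdown {
  int count;           //user data placed first, by choice
  coroutine_desc core; //descriptor placed wherever the user prefers
};

coroutine_desc * get_coroutine(Countdown & this) {
  return &this.core;   //satisfies the is_coroutine trait
}

void main(Countdown & this) {
  while ( this.count > 0 ) {
    this.count -= 1;
    suspend(this);     //usable like a keyword-declared coroutine
  }
}
\end{cfacode}
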
The combination of these two approaches allows users new to coroutining and concurrency to have an easy and concise specification, while more advanced users have tighter control on memory layout and initialization.

\section{Thread Interface}\label{threads}
The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallelism mechanism. User threads offer a flexible and lightweight interface. A thread can be declared with a struct-like declaration using the \code{thread} keyword as follows:

\begin{cfacode}
thread foo {};
\end{cfacode}

As for coroutines, the keyword is a thin wrapper around a \CFA trait:

\begin{cfacode}
trait is_thread(dtype T) {
      void ^?{}(T & mutex this);
      void main(T & this);
      thread_desc* get_thread(T & this);
};
\end{cfacode}

Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread). As such, the \code{main} routine of a thread can be defined as
\begin{cfacode}
thread foo {};

void main(foo & this) {
  sout | "Hello World!" | endl;
}
\end{cfacode}

In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With the static semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously.
\begin{cfacode}
typedef void (*voidFunc)(int);

thread FuncRunner {
  voidFunc func;
  int arg;
};

void ?{}(FuncRunner & this, voidFunc inFunc, int arg) {
  this.func = inFunc;
  this.arg  = arg;
}

void main(FuncRunner & this) {
  //thread starts here and runs the function
  this.func( this.arg );
}
\end{cfacode}

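A hypothetical use of this type might look as follows, assuming the brace initialization invokes the matching constructor and relying on the fork/join-at-scope semantics described below; \code{print_val} and \code{run_async} are illustrative names only:
\begin{cfacode}
void print_val(int x) {
  sout | x | endl;
}

void run_async() {
  //constructing the thread object starts print_val(42) asynchronously
  FuncRunner runner = { print_val, 42 };
  //other work can proceed here concurrently
} //implicit join at the end of the scope
\end{cfacode}
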
A consequence of the strongly-typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \acrshort{api}.

Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs.
\begin{cfacode}
thread World;

void main(World & this) {
  sout | "World!" | endl;
}

void main() {
  World w;
  //Thread forks here

  //Printing "Hello " and "World!" are run concurrently
  sout | "Hello " | endl;

  //Implicit join at end of scope
}
\end{cfacode}

This semantic has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot forget to start or stop a thread, and it naturally scales to multiple threads, meaning basic synchronisation is very simple.

\begin{cfacode}
thread MyThread {
  //...
};

//main
void main(MyThread & this) {
  //...
}

void foo() {
  MyThread thrds[10];
  //Start 10 threads at the beginning of the scope

  DoStuff();

  //Wait for the 10 threads to finish
}
\end{cfacode}

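The implicit join also ties back to the earlier point that parameter and result layout is explicit in the \acrshort{api}: inputs and outputs are ordinary fields of the thread type. The following sketch shows one way a result can be returned through such a field; the type \code{Adder} and the routine \code{add_in_thread} are hypothetical names used only for illustration, and the brace initialization is assumed to invoke the matching constructor:
\begin{cfacode}
thread Adder {
  int lhs, rhs; //inputs, explicit in the thread type
  int * sum;    //output location, also explicit in the thread type
};

void ?{}(Adder & this, int lhs, int rhs, int * sum) {
  this.lhs = lhs;
  this.rhs = rhs;
  this.sum = sum;
}

void main(Adder & this) {
  *this.sum = this.lhs + this.rhs; //runs on the new thread's stack
}

int add_in_thread(int a, int b) {
  int sum;
  {
    Adder adder = { a, b, &sum }; //thread forks here
    //other work could overlap with the addition here
  } //implicit join: the addition has finished before sum is read
  return sum;
}
\end{cfacode}
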
However, one of the drawbacks of this approach is that threads now always form a lattice; that is, they are always destroyed in the opposite order of construction because of the block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.

\begin{cfacode}
thread MyThread {
  //...
};

void main(MyThread & this) {
  //...
}

void foo() {
  MyThread * long_lived;
  {
    //Start a thread at the beginning of the scope
    MyThread short_lived;

    //create another thread that will outlive the thread in this scope
    long_lived = new MyThread;

    DoStuff();

    //Wait for the thread short_lived to finish
  }
  DoMoreStuff();

  //Now wait for the long_lived to finish
  delete long_lived;
}
\end{cfacode}