% ======================================================================
% ======================================================================
\chapter{Concurrency Basics}\label{basics}
% ======================================================================
% ======================================================================
Before any detailed discussion of concurrency and parallelism in \CFA, it is important to describe the basics of concurrency and how they are expressed in \CFA user code.

\section{Basics of concurrency}
At its core, concurrency is based on having multiple call stacks and scheduling among the threads of execution using these stacks. Concurrency without parallelism only requires multiple call stacks (or contexts) for a single thread of execution.

Execution with a single thread and multiple stacks, where the thread self-schedules deterministically across the stacks, is called coroutining. Execution with a single thread and multiple stacks, where the thread is scheduled across the stacks by an oracle (i.e., non-deterministically from the thread's perspective), is called concurrency.

Therefore, a minimal concurrency system can be achieved by creating coroutines which, instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackful coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler, and the whole system now follows a cooperative threading model~\cit. The oracle/scheduler can either be a stackless or stackful entity and correspondingly requires one or two context switches to run a different coroutine. In either case, a subset of concurrency-related challenges starts to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Mutual exclusion and synchronisation are ways of limiting the non-determinism in a system. A scheduler introduces uncertainty in the order of execution, while preemption introduces uncertainty about where context switches occur. It is important to understand that this uncertainty is not undesirable; uncertainty can often be exploited by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows~\cit.
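
As an illustration only, the following sketch shows what such a cooperative context switch could look like; the names \code{active_coroutine}, \code{scheduler_next} and \code{context_switch} are hypothetical stand-ins, not the \CFA runtime \acrshort{api}:
\begin{cfacode}
// Sketch of a cooperative yield: instead of choosing its successor,
// the running context asks the oracle/scheduler where to switch next
void yield() {
	coroutine_desc * from = active_coroutine(); // currently running context
	coroutine_desc * to   = scheduler_next();   // the oracle picks the next context
	context_switch( from, to );                 // save one stack, restore the other
}
\end{cfacode}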

\section{\protect\CFA 's Thread Building Blocks}
One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. Furthermore, being a system-level language means programmers expect to choose precisely which features they need and which costs they are willing to pay.

\section{Coroutines: A stepping stone}\label{coroutine}
While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system. Coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolves around two features: independent call stacks and \code{suspend}/\code{resume}.

Here is an example of a solution to the Fibonacci problem using \CFA coroutines:
\begin{cfacode}
	coroutine Fibonacci {
		int fn; // used for communication
	};

	void ?{}(Fibonacci & this) { // constructor
		this.fn = 0;
	}

	// main automatically called on first resume
	void main(Fibonacci & this) {
		int fn1, fn2;		// retained between resumes
		this.fn = 0;
		fn1 = this.fn;
		suspend(this);		// return to last resume

		this.fn = 1;
		fn2 = fn1;
		fn1 = this.fn;
		suspend(this);		// return to last resume

		for ( ;; ) {
			this.fn = fn1 + fn2;
			fn2 = fn1;
			fn1 = this.fn;
			suspend(this);	// return to last resume
		}
	}

	int next(Fibonacci & this) {
		resume(this); // transfer to last suspend
		return this.fn;
	}

	void main() { // regular program main
		Fibonacci f1, f2;
		for ( int i = 1; i <= 10; i += 1 ) {
			sout | next( f1 ) | next( f2 ) | endl;
		}
	}
\end{cfacode}

\subsection{Construction}
One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs, to connect the object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.

The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users expect both to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As with regular objects, constructors can leak references to coroutines before they are fully constructed. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.
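
As an illustration, the following sketch shows the kind of code the runtime must run after the user constructor; the helper \code{coroutine_prepare} is hypothetical and stands in for whatever mechanism allocates the stack and sets the entry point:
\begin{cfacode}
void ?{}(Fibonacci & this) {
	this.fn = 0;                       // user initialisation
	coroutine_prepare( &this, main );  // injected: create the stack and
	                                   // prepare it for the first resume
}
\end{cfacode}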

Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:

\begin{cfacode}
//async: Runs function asynchronously on another thread
forall(otype T)
extern void async(void (*func)(T*), T* obj);

forall(otype T)
void noop(T *) {}

void bar() {
	int a;
	async(noop, &a);
}
\end{cfacode}
The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:

\begin{ccode}
extern void async(/* omitted */, void (*func)(void *), void *obj);

void noop(/* omitted */, void *obj){}

void bar(){
	int a;
	void _thunk0(int *_p0){
		/* omitted */
		noop(/* omitted */, _p0);
	}
	/* omitted */
	async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
}
\end{ccode}
The problem in this example is a storage management issue: the function pointer \code{_thunk0} is only valid until the end of the block. This extra challenge limits which solutions are viable because storing the function pointer for too long causes undefined behaviour, i.e., the stack-based thunk is destroyed before it is used. This challenge is an extension of the challenges that come with second-class routines. Indeed, GCC nested routines have the same limitation: the routines cannot be passed outside the scope of the functions in which they are declared. The case of coroutines and threads is simply an extension of this problem to multiple call stacks.
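
For comparison, the same lifetime problem can be reproduced directly with a GCC nested routine; once the enclosing frame is gone, calling the escaped pointer is undefined behaviour:
\begin{ccode}
void (*escaped)(void);

void bar(void) {
	int a = 0;
	void nested(void) { a += 1; } // GCC extension: nested routine
	escaped = nested;             // pointer to stack-based trampoline escapes
}

int main(void) {
	bar();
	escaped(); // undefined behaviour: bar's frame no longer exists
	return 0;
}
\end{ccode}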

\subsection{Alternative: Composition}
One solution to this challenge is to use composition/containment, where users add a coroutine field that contains the necessary information to manage the coroutine.

\begin{cfacode}
	struct Fibonacci {
		int fn; //used for communication
		coroutine c; //composition
	};

	void ?{}(Fibonacci & this) {
		this.fn = 0;
		(this.c){}; //Call constructor to initialize coroutine
	}
\end{cfacode}
There are two downsides to this approach. The first, which is relatively minor, is that the coroutine handle needs to be made aware of the main routine pointer. This requirement means the coroutine data must be made larger to store a value that is actually a compile-time constant (the address of the main routine). The second problem, which is both subtle and significant, is that users can now get the initialisation order of coroutines wrong. Indeed, every field of a \CFA struct is constructed, but in declaration order, unless users explicitly write otherwise. This semantic means that users who forget to initialize the coroutine handle may resume the coroutine with an uninitialized object. For coroutines this is unlikely to be a problem; for threads, however, it is a significant problem.
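
A sketch of the hazard, assuming a user-written constructor that resumes the object before the embedded handle is constructed:
\begin{cfacode}
void ?{}(Fibonacci & this) {
	this.fn = 0;
	resume(this.c); // BUG: this.c is not constructed until the next line
	(this.c){};
}
\end{cfacode}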

\subsection{Alternative: Reserved keyword}
The next alternative is to use language support to annotate coroutines as follows:

\begin{cfacode}
	coroutine Fibonacci {
		int fn; // used for communication
	};
\end{cfacode}
This means the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users who want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also demonstrates the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can both be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.

\subsection{Alternative: Lambda Objects}

For coroutines, as for threads, many implementations are based on routine pointers or function objects~\cit. For example, Boost implements coroutines in terms of four functor object types:
\begin{cfacode}
asymmetric_coroutine<>::pull_type
asymmetric_coroutine<>::push_type
symmetric_coroutine<>::call_type
symmetric_coroutine<>::yield_type
\end{cfacode}
Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well-known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.
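
For reference, the pthread version of this paradigm looks as follows; the thread body is a generic routine taking an untyped argument, so any typed state must be wrapped and cast by hand:
\begin{ccode}
#include <pthread.h>
#include <stdio.h>

void * body( void * arg ) {
	int * value = (int *)arg; // manual unwrapping of the argument
	printf( "%d\n", *value );
	return NULL;
}

int main(void) {
	pthread_t tid;
	int value = 7;
	pthread_create( &tid, NULL, body, &value );
	pthread_join( tid, NULL );
	return 0;
}
\end{ccode}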

A variation of this approach is to use a simple function pointer in the same way pthread does for threads:
\begin{cfacode}
void foo( coroutine_t cid, void * arg ) {
	int * value = (int *)arg;
	//Coroutine body
}

int main() {
	int value = 0;
	coroutine_t cid = coroutine_create( &foo, (void*)&value );
	coroutine_resume( &cid );
}
\end{cfacode}
This semantic is more common for thread interfaces than coroutines but would work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.

\subsection{Alternative: Trait-based coroutines}

Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine.

\begin{cfacode}
trait is_coroutine(dtype T) {
	void main(T & this);
	coroutine_desc * get_coroutine(T & this);
};

forall( dtype T | is_coroutine(T) ) void suspend(T &);
forall( dtype T | is_coroutine(T) ) void resume (T &);
\end{cfacode}
This ensures that an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} simply has the effect of implementing the getter and the required forward declarations, so that users only have to implement the main routine.

\begin{center}
\begin{tabular}{c c c}
\begin{cfacode}[tabsize=3]
coroutine MyCoroutine {
	int someValue;
};
\end{cfacode} & == & \begin{cfacode}[tabsize=3]
struct MyCoroutine {
	int someValue;
	coroutine_desc __cor;
};

static inline
coroutine_desc * get_coroutine(
	struct MyCoroutine & this
) {
	return &this.__cor;
}

void main(struct MyCoroutine & this);
\end{cfacode}
\end{tabular}
\end{center}

The combination of these two approaches allows users new to coroutining and concurrency to have an easy and concise specification, while more advanced users have tighter control over memory layout and initialization.

\section{Thread Interface}\label{threads}
The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallelism mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using the \code{thread} keyword, analogous to a struct declaration:

\begin{cfacode}
	thread foo {};
\end{cfacode}

As for coroutines, the keyword is a thin wrapper around a \CFA trait:

\begin{cfacode}
trait is_thread(dtype T) {
	void ^?{}(T & mutex this);
	void main(T & this);
	thread_desc * get_thread(T & this);
};
\end{cfacode}

Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread). As such, the \code{main} routine of a thread can be defined as
\begin{cfacode}
	thread foo {};

	void main(foo & this) {
		sout | "Hello World!" | endl;
	}
\end{cfacode}

In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With these semantics, it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously:
\begin{cfacode}
	typedef void (*voidFunc)(int);

	thread FuncRunner {
		voidFunc func;
		int arg;
	};

	void ?{}(FuncRunner & this, voidFunc inFunc, int arg) {
		this.func = inFunc;
		this.arg  = arg;
	}

	void main(FuncRunner & this) {
		this.func( this.arg );
	}
\end{cfacode}
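
A possible use of \code{FuncRunner}, assuming the \acrshort{raii} start/stop semantics presented below (the thread forks once the constructor completes and joins before the destructor runs):
\begin{cfacode}
void hello( int id ) {
	sout | "Hello from" | id | endl;
}

void example() {
	FuncRunner runner = { hello, 42 }; // thread forks here
	// caller continues concurrently with hello(42)
} // implicit join at end of scope
\end{cfacode}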

A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \acrshort{api}.
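
For example, the following sketch passes inputs through the constructor and returns a result through a field, which is safe to read once the thread has joined:
\begin{cfacode}
thread Adder {
	int a, b;   // input parameters, set before the thread forks
	int result; // output, valid after the join
};

void ?{}(Adder & this, int a, int b) {
	this.a = a;
	this.b = b;
}

void main(Adder & this) {
	this.result = this.a + this.b;
}
\end{cfacode}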

Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs.
\begin{cfacode}
thread World;

void main(World & this) {
	sout | "World!" | endl;
}

void main() {
	World w;
	//Thread forks here

	//Printing "Hello " and "World!" are run concurrently
	sout | "Hello " | endl;

	//Implicit join at end of scope
}
\end{cfacode}

This semantic has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot make any programming errors in pairing the two, and it naturally scales to multiple threads, meaning basic synchronisation is very simple.

\begin{cfacode}
thread MyThread {
	//...
};

//main
void main(MyThread & this) {
	//...
}

void foo() {
	MyThread thrds[10];
	//Start 10 threads at the beginning of the scope

	DoStuff();

	//Wait for the 10 threads to finish
}
\end{cfacode}

However, one of the drawbacks of this approach is that threads now always form a lattice, that is, they are always destroyed in the opposite order of construction because of the block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.

\begin{cfacode}
thread MyThread {
	//...
};

void main(MyThread & this) {
	//...
}

void foo() {
	MyThread * long_lived;
	{
		//Start a thread at the beginning of the scope
		MyThread short_lived;

		//Create another thread that will outlive the thread in this scope
		long_lived = new MyThread;

		DoStuff();

		//Wait for the thread short_lived to finish
	}
	DoMoreStuff();

	//Now wait for the long_lived to finish
	delete long_lived;
}
\end{cfacode}