Timestamp:
Nov 23, 2017, 1:31:43 PM (5 years ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
88ef2af
Parents:
07c1e595
Message:

Revised up to chapter three

File:
1 edited

  • doc/proposals/concurrency/text/basics.tex

    r07c1e595 r9f10d1f2  
    1616
    1717\section{\protect\CFA 's Thread Building Blocks}
    18 One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
     18One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
    1919
    2020\section{Coroutines: A stepping stone}\label{coroutine}
     
    6262void fibonacci_array(
    6363        int n,
    64         int * array
     64        int* array
    6565) {
    6666        int f1 = 0; int f2 = 1;
     
    9999
    100100int fibonacci_state(
    101         Iterator_t * it
     101        Iterator_t* it
    102102) {
    103103        int f;
     
    135135A good example of a problem made easier with coroutines is generators, like the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approaches suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed.
    136136
    137 Figure \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.
     137Figure \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.
    138138
    139139\begin{figure}
     
    143143};
    144144
    145 void ?{}(Fibonacci & this) { //constructor
     145void ?{}(Fibonacci& this) { //constructor
    146146        this.fn = 0;
    147147}
    148148
    149149//main automatically called on first resume
    150 void main(Fibonacci & this) with (this) {
     150void main(Fibonacci& this) with (this) {
    151151        int fn1, fn2;           //retained between resumes
    152152        fn  = 0;
     
    167167}
    168168
    169 int next(Fibonacci & this) {
     169int next(Fibonacci& this) {
    170170        resume(this); //transfer to last suspend
    171171        return this.fn;
     
    183183\end{figure}
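For example, the coroutine in Figure \ref{lst:fibonacci-cfa} might be driven as follows (an illustrative usage sketch; the driver routine and the loop bound are assumptions, not part of the figure):
\begin{cfacode}
int main() {
	Fibonacci f1, f2;                    //two independent generators
	for ( int i = 1; i <= 10; i += 1 ) { //print the first 10 numbers of each sequence
		sout | next( f1 ) | next( f2 ) | endl;
	}
}
\end{cfacode}
Each call to \code{next} resumes the coroutine at its last \code{suspend}, so the two generators advance independently, each keeping its state on its own stack.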
    184184
    185 Figure \ref{lst:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
     185Figure \ref{lst:fmt-line} shows the \code{Format} coroutine for restructuring text into groups of character blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
    186186
    187187\begin{figure}
     
    193193};
    194194
    195 void  ?{}(Format & fmt) {
     195void  ?{}(Format& fmt) {
    196196        resume( fmt );                                                  //prime (start) coroutine
    197197}
    198198
    199 void ^?{}(Format & fmt) with fmt {
     199void ^?{}(Format& fmt) with fmt {
    200200        if ( fmt.g != 0 || fmt.b != 0 )
    201201        sout | endl;
    202202}
    203203
    204 void main(Format & fmt) with fmt {
     204void main(Format& fmt) with fmt {
    205205        for ( ;; ) {                                                    //for as many characters
    206206                for(g = 0; g < 5; g++) {                //groups of 5 blocks
     
    235235
    236236\subsection{Construction}
    237 One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
    238 
    239 The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine.
     237One important design challenge for implementing coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
     238
     239The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.
    240240
    241241Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
     
    258258
    259259\begin{ccode}
    260 extern void async(/* omitted */, void (*func)(void *), void *obj);
    261 
    262 void noop(/* omitted */, void *obj){}
     260extern void async(/* omitted */, void (*func)(void*), void* obj);
     261
     262void noop(/* omitted */, void* obj){}
    263263
    264264void bar(){
    265265        int a;
    266         void _thunk0(int *_p0){
     266        void _thunk0(int* _p0){
    267267                /* omitted */
    268268                noop(/* omitted */, _p0);
    269269        }
    270270        /* omitted */
    271         async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
     271        async(/* omitted */, ((void (*)(void*))(&_thunk0)), (&a));
    272272}
    273273\end{ccode}
    274 The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
     274The problem in this example is a storage management issue: the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes Undefined Behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that a nested routine cannot be passed outside of its declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
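For illustration, the same lifetime problem can be reproduced with a plain GCC nested routine (a minimal sketch, not taken from the proposal; the routine names are arbitrary):
\begin{ccode}
void (*saved)(void);                 //pointer that outlives the frame it captures

void outer(void) {
	int x = 0;
	void inner(void) { x += 1; }     //nested routine closes over the enclosing stack frame
	saved = inner;                   //storing the pointer past the frame's lifetime...
}

int main(void) {
	outer();
	saved();                         //...makes this call undefined behavior
	return 0;
}
\end{ccode}
The trampoline GCC builds for \code{inner} lives in the stack frame of \code{outer}, so any call through \code{saved} after \code{outer} returns uses a destroyed object, exactly the storage-management hazard described above for \code{_thunk0}.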
    275275
    276276\subsection{Alternative: Composition}
     
    283283};
    284284
    285 void FibMain(void *) {
     285void FibMain(void*) {
    286286        //...
    287287}
    288288
    289 void ?{}(Fibonacci & this) {
     289void ?{}(Fibonacci& this) {
    290290        this.fn = 0;
    291291        //Call constructor to initialize coroutine
     
    293293}
    294294\end{cfacode}
    295 The downside of this approach is that users need to correctly construct the coroutine handle before using it. Like any other objects, doing so the users carefully choose construction order to prevent usage of objects not yet constructed. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically.
     295The downside of this approach is that users need to correctly construct the coroutine handle before using it. As with any other object, the user must carefully choose the construction order to prevent usage of objects not yet constructed. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically.
    296296
    297297\subsection{Alternative: Reserved keyword}
     
    307307\subsection{Alternative: Lambda Objects}
    308308
    309 For coroutines as for threads, many implementations are based on routine pointers or function objects\cite{Butenhof97, ANSI14:C++, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types:
     309For coroutines as for threads, many implementations are based on routine pointers or function objects~\cite{Butenhof97, ANSI14:C++, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types:
    310310\begin{cfacode}
    311311asymmetric_coroutine<>::pull_type
     
    318318A variation of this would be to use a simple function pointer in the same way pthread does for threads:
    319319\begin{cfacode}
    320 void foo( coroutine_t cid, void * arg ) {
    321         int * value = (int *)arg;
     320void foo( coroutine_t cid, void* arg ) {
     321        int* value = (int*)arg;
    322322        //Coroutine body
    323323}
     
    329329}
    330330\end{cfacode}
    331 This semantics is more common for thread interfaces than coroutines works equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
     331This semantics is more common for thread interfaces, but coroutines work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
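For comparison, the analogous pthread code passes the start routine and its argument in the same untyped fashion (an illustrative sketch using the standard \code{pthread_create} interface, not code from the proposal):
\begin{ccode}
#include <pthread.h>

void* foo( void* arg ) {
	int* value = (int*)arg;
	//Thread body
	return NULL;
}

int main() {
	int value = 42;                            //argument chosen for illustration
	pthread_t tid;
	pthread_create( &tid, NULL, foo, &value ); //same void* argument-passing style
	pthread_join( tid, NULL );
}
\end{ccode}
In both cases the argument type is erased to \code{void*}, so mismatches are only caught at runtime, if at all, which is the loss of expressivity the static approaches of section \ref{threads} avoid.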
    332332
    333333\subsection{Alternative: Trait-based coroutines}
     
    337337\begin{cfacode}
    338338trait is_coroutine(dtype T) {
    339       void main(T & this);
    340       coroutine_desc * get_coroutine(T & this);
    341 };
    342 
    343 forall( dtype T | is_coroutine(T) ) void suspend(T &);
    344 forall( dtype T | is_coroutine(T) ) void resume (T &);
    345 \end{cfacode}
    346 This ensures an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.
     339      void main(T& this);
     340      coroutine_desc* get_coroutine(T& this);
     341};
     342
     343forall( dtype T | is_coroutine(T) ) void suspend(T&);
     344forall( dtype T | is_coroutine(T) ) void resume (T&);
     345\end{cfacode}
     346This ensures an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to implement the main routine.
    347347
    348348\begin{center}
     
    359359
    360360static inline
    361 coroutine_desc * get_coroutine(
    362         struct MyCoroutine & this
     361coroutine_desc* get_coroutine(
     362        struct MyCoroutine& this
    363363) {
    364364        return &this.__cor;
    365365}
    366366
    367 void main(struct MyCoroutine * this);
     367void main(struct MyCoroutine* this);
    368368\end{cfacode}
    369369\end{tabular}
     
    416416        this.func( this.arg );
    417417}
     418
     419void hello(/*unused*/ int) {
     420        sout | "Hello World!" | endl;
     421}
     422
     423int main() {
     424        FuncRunner f = {hello, 42};
     425        return 0;
     426}
    418427\end{cfacode}
    419428
     
    447456
    448457//main
    449 void main(MyThread & this) {
     458void main(MyThread& this) {
    450459        //...
    451460}
     
    461470\end{cfacode}
    462471
    463 However, one of the drawbacks of this approach is that threads now always form a lattice, that is they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
     472However, one of the drawbacks of this approach is that threads always form a lattice, i.e., they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
    464473
    465474\begin{cfacode}
     
    468477};
    469478
    470 void main(MyThread & this) {
     479void main(MyThread& this) {
    471480        //...
    472481}
    473482
    474483void foo() {
    475         MyThread * long_lived;
     484        MyThread* long_lived;
    476485        {
    477486                //Start a thread at the beginning of the scope