Changeset d3a85240


Timestamp:
Jan 11, 2017, 3:30:10 PM (7 years ago)
Author:
Rob Schluntz <rschlunt@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
2162c2c, 2298a7b8
Parents:
dd0c97b (diff), 17e5e2b (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:/u/cforall/software/cfa/cfa-cc

Files:
1 added
11 edited

  • doc/proposals/concurrency/concurrency.tex

    rdd0c97b rd3a85240  
    149149
    150150\begin{lstlisting}
    151         mutex struct counter_t { /*...see section §\ref{data}§...*/ };
     151        mutex struct counter_t { /*...see section §\ref{data}§...*/ };
    152152
    153153        void ?{}(counter_t & nomutex this); //constructor
     
    767767TO BE CONTINUED...
    768768
     769
     770
     771
     772
     773
     774
     775
     776
     777
    769778\newpage
    770779% ######     #    ######     #    #       #       ####### #       ###  #####  #     #
     
    807816As a system-level language, \CFA should offer both performance and flexibility as its primary goals, with simplicity and user-friendliness as secondary concerns. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.
    808817
     818\subsection{Coroutines: A stepping stone}\label{coroutine}
     819While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant underlying aspect of the concurrency system. Indeed, while having nothing to do with parallelism and arguably very little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA.
     820
     821The core API of coroutines revolves around two features: independent stacks and \code{suspend}/\code{resume}.
     822Here is an example of a solution to the Fibonacci problem using \CFA coroutines:
     823\begin{lstlisting}
     824        struct Fibonacci {
     825              int fn; // used for communication
     826              coroutine_descriptor c;
     827        };
     828
     829        void ?{}(Fibonacci* this) {
     830              this->fn = 0;
     831        }
     832
     833        coroutine_descriptor* get_¶coroutine¶(Fibonacci* this) {
     834              return &this->c;
     835        }
     836
     837        void co_main(Fibonacci* this) {
     838                int fn1, fn2;           // retained between resumes
     839                this->fn = 0;
     840                fn1 = this->fn;
     841                suspend(this);          // return to last resume
     842
     843                this->fn = 1;
     844                fn2 = fn1;
     845                fn1 = this->fn;
     846                suspend(this);          // return to last resume
     847
     848                for ( ;; ) {
     849                        this->fn = fn1 + fn2;
     850                        fn2 = fn1;
     851                        fn1 = this->fn;
     852                        suspend(this);  // return to last resume
     853                }
     854        }
     855
     856        int next(Fibonacci* this) {
     857                resume(this); // transfer to last suspend
     858                return this->fn;
     859        }
     860
     861        void main() {
     862                Fibonacci f1, f2;
     863                for ( int i = 1; i <= 10; i += 1 ) {
     864                        sout | next(&f1) | '§\verb+ +§' | next(&f2) | endl;
     865                }
     866        }
     867\end{lstlisting}
     868
     869\subsubsection{Construction}
     870One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run some code after the user-constructor runs. In the case of coroutines this challenge is simpler since there is no loss of determinism brought by preemption or scheduling; however, the underlying challenge remains the same for coroutines and threads.
     871
     872The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once the main is called and to be able to resume the coroutine from the main (obviously, only cases where these two requirements do not conflict are solved). There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.
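As an illustration of the timing question (hypothetical usage code, not taken from the proposal), consider what the very first \code{resume} requires:
\begin{lstlisting}
	Fibonacci f;            // user constructor ?{} runs here
	int x = next(&f);       // first resume: the stack for f's coroutine must
	                        // already exist and be primed to enter co_main,
	                        // and co_main must see a fully constructed f
\end{lstlisting}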
     873
     874Furthermore, \CFA faces an extra challenge: polymorphic routines rely on invisible thunks when cast to non-polymorphic routines, and these thunks have function scope, for example:
     875
     876TODO : Simple case where a thunk would be created.
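A hedged sketch of the kind of case meant here (hypothetical routines, not necessarily the example intended by the TODO above): binding a polymorphic routine to a monomorphic function-pointer parameter forces the compiler to generate a hidden thunk, and that thunk only has function scope.
\begin{lstlisting}
	forall(otype T)
	void apply(T* x, void (*f)(T*)) {       // f is a plain function pointer
		f(x);
	}

	forall(otype T) void noop(T* p) {}      // polymorphic routine

	void bar() {
		int a;
		// binding noop to the void (*)(int*) parameter requires a
		// compiler-generated thunk, roughly: void _thunk(int* x) { noop(x); }
		// and that thunk lives only in the scope of the enclosing function
		apply(&a, noop);
	}
\end{lstlisting}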
     877
     878
     879
     880\subsubsection{Alternative: Inheritance}
     881One solution to this challenge would be to use inheritance:
     882
     883\begin{lstlisting}
     884        struct Fibonacci {
     885              int fn; // used for communication
     886              coroutine c;
     887        };
     888
     889        void ?{}(Fibonacci* this) {
     890              this->fn = 0;
     891                (&this->c){};
     892        }
     893\end{lstlisting}
     894
     895There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer; regardless of whether a parameter or a virtual pointer is used, this means the coroutine data must be made larger to store a value that is actually a compile-time constant (the address of the main routine). The second problem, which is subtle but significant, is that users can now get the initialization order of their coroutines wrong. Indeed, every field of a \CFA struct is constructed, but in declaration order, unless users explicitly write otherwise. This means that users who forget to initialize the coroutine at the right time may resume the coroutine with an uninitialized object. For coroutines this is unlikely to be a problem; for threads, however, it is a significant problem.
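For illustration, a sketch of the ordering mistake, based on the constructor shown above (not code from the proposal):
\begin{lstlisting}
	void ?{}(Fibonacci* this) {
	      this->fn = 0;
	      resume(this);     // mistake: the coroutine base is not constructed yet
	      (&this->c){};     // explicit construction of the base happens too late
	}
\end{lstlisting}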
     896
     897\subsubsection{Alternative: Reserved keyword}
     898The next alternative is to use language support to annotate coroutines as follows:
     899
     900\begin{lstlisting}
     901        coroutine struct Fibonacci {
     902              int fn; // used for communication
     903        };
     904\end{lstlisting}
     905
     906This means the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users who want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also demonstrates the power of \CFA.
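As an illustration (an assumption about what the injected code might look like, mirroring the manual version from section \ref{coroutine}), the keyword could let the compiler add the pieces the user otherwise writes by hand:
\begin{lstlisting}
	// roughly what the compiler might generate for 'coroutine struct Fibonacci'
	struct Fibonacci {
	      int fn;                            // user-declared field
	      coroutine_descriptor c;            // injected descriptor
	};

	coroutine_descriptor* get_coroutine(Fibonacci* this) { // injected accessor
	      return &this->c;
	}
\end{lstlisting}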
     907
     908\subsubsection{Alternative: Lambda Objects}
     909
     910Boost does not use objects...
     911TO BE CONTINUED...
     912
     913\subsubsection{Trait-based coroutines}
     914
     915Finally, the approach chosen, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines, the approach shown in section \ref{coroutine}. This approach defines a coroutine as follows: anything that satisfies the \code{is_coroutine} trait and is used as a coroutine is a coroutine. This entails that an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile.
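For illustration, one possible shape of such a trait, consistent with the \code{co_main} and \code{get_coroutine} routines used in the Fibonacci example above (the exact definition here is an assumption):
\begin{lstlisting}
	trait is_coroutine(dtype T) {
		void co_main(T* this);
		coroutine_descriptor* get_coroutine(T* this);
	};
\end{lstlisting}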
     916
    809917% ####### #     # ######  #######    #    ######   #####
    810918%    #    #     # #     # #         # #   #     # #     #
     
    815923%    #    #     # #     # ####### #     # ######   #####
    816924
    817 \subsection{Thread Interface}
     925\subsection{Thread Interface}\label{threads}
    818926The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread} and, as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a struct declaration with the prefix \code{thread} as follows:
    819927
    820928\begin{lstlisting}
     929        trait is_¶thread¶(dtype T) {
     930                void co_main(T* this);
     931                coroutine* get_coroutine(T* this);
     932        };
     933
    821934        thread struct foo {};
    822935\end{lstlisting}
     
    845958
    846959        //main
    847         void ?main(FuncRunner* this) {
     960        void t_main(FuncRunner* this) {
    848961                this->func();
    849962        }
     
    9271040
    9281041                //Wait for the 10 threads to finish
    929         }
    930 \end{lstlisting}
    931 
    932 \subsection{Coroutines : A stepping stone}\label{coroutine}
    933 While the main focus of this proposal is concurrency and paralellism, it is important to adress coroutines which are actually a significant underlying aspect of the concurrency system. Indeed, while having nothing todo with parallelism and arguably very little to do with concurrency, coroutines need to deal with context-switchs and and other context management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and a first class feature of \CFA.
    934 
    935 The core API of coroutines revolve around two features : independent stacks and suspedn/resume. Much like threads the syntax for declaring a coroutine is declaring a type and a main routine for it to start :
    936 \begin{lstlisting}
    937         coroutine struct MyCoroutine {
    938                 //...
    939         };
    940 
    941         //ctor
    942         void ?{}(MyCoroutine* this) {
    943 
    944         }
    945 
    946         //main
    947         void ?main(MyCoroutine* this) {
    948                 sout | "Hello World!" | endl;
    949         }
    950 \end{lstlisting}
    951 
    952 One a coroutine is created, users can context switch to it using \code{suspend} and come back using \code{resume}. Here is an example of a solution to the fibonnaci problem using coroutines :
    953 \begin{lstlisting}
    954         coroutine struct Fibonacci {
    955                 int fn; // used for communication
    956         };
    957 
    958         void ?main(Fibonacci* this) {
    959                 int fn1, fn2;           // retained between resumes
    960                 this->fn = 0;
    961                 fn1 = this->fn;
    962                 suspend(this);          // return to last resume
    963 
    964                 this->fn = 1;
    965                 fn2 = fn1;
    966                 fn1 = this->fn;
    967                 suspend(this);          // return to last resume
    968 
    969                 for ( ;; ) {
    970                         this->fn = fn1 + fn2;
    971                         fn2 = fn1;
    972                         fn1 = this->fn;
    973                         suspend(this);  // return to last resume
    974                 }
    975         }
    976 
    977         int next(Fibonacci& this) {
    978                 resume(&this); // transfer to last suspend
    979                 return this.fn;
    980         }
    981 
    982         void main() {
    983                 Fibonacci f1, f2;
    984                 for ( int i = 1; i <= 10; i += 1 ) {
    985                         sout | next(f1) | '§\verb+ +§' | next(f2) | endl;
    986                 }
    9871042        }
    9881043\end{lstlisting}
  • doc/proposals/concurrency/version

    rdd0c97b rd3a85240  
    1 0.7.48
     10.7.61
  • src/libcfa/concurrency/threads

    rdd0c97b rd3a85240  
    1515//
    1616
    17 #ifndef __THREADS_H__
    18 #define __THREADS_H__
     17#ifdef __CFORALL__
     18
     19#ifndef THREADS_H
     20#define THREADS_H
    1921
    2022#include "assert"       //
     
    111113}
    112114
    113 #endif //__THREADS_H__
     115#endif //THREADS_H
     116
     117#else
     118#include_next <thread>
     119#endif //__CFORALL__
    114120
    115121// Local Variables: //
  • src/libcfa/containers/vector

    rdd0c97b rd3a85240  
    1414//
    1515
    16 #pragma once
     16#ifdef __CFORALL__
     17
     18#ifndef VECTOR_H
     19#define VECTOR_H
    1720
    1821extern "C" {
     
    165168}
    166169
     170#endif // VECTOR_H
     171
     172#else
     173#include_next <vector>
     174#endif //__CFORALL__
     175
    167176// Local Variables: //
    168177// mode: c //
  • src/libcfa/fstream

    rdd0c97b rd3a85240  
    1313// Update Count     : 88
    1414//
     15
     16#ifdef __CFORALL__
    1517
    1618#ifndef __FSTREAM_H__
     
    6264#endif // __FSTREAM_H__
    6365
     66#else
     67#include_next <fstream>
     68#endif //__CFORALL__
     69
    6470// Local Variables: //
    6571// mode: c //
  • src/libcfa/iostream

    rdd0c97b rd3a85240  
    1313// Update Count     : 93
    1414//
     15
     16#ifdef __CFORALL__
    1517
    1618#ifndef __IOSTREAM_H__
     
    124126forall( dtype istype | istream( istype ) ) istype * ?|?( istype *, _Istream_cstrC );
    125127
    126 #endif // __IOSTREAM_H__
     128#endif // __IOSTREAM_H__
     129
     130#else
     131#include_next <iostream>
     132#endif //__CFORALL__
    127133
    128134// Local Variables: //
  • src/libcfa/iterator

    rdd0c97b rd3a85240  
    1313// Update Count     : 9
    1414//
     15
     16#ifdef __CFORALL__
    1517
    1618#ifndef ITERATOR_H
     
    4648#endif // ITERATOR_H
    4749
     50#else
     51#include_next <iterator>
     52#endif //__CFORALL__
     53
    4854// Local Variables: //
    4955// mode: c //
  • src/libcfa/limits

    rdd0c97b rd3a85240  
    1313// Update Count     : 6
    1414//
     15
     16#ifdef __CFORALL__
     17
     18#ifndef LIMITS_H
     19#define LIMITS_H
    1520
    1621// Integral Constants
     
    107112extern const long _Complex _1_SQRT_2;                                   // 1 / sqrt(2)
    108113
     114#endif // LIMITS_H
     115
     116#else
     117#include_next <limits>
     118#endif //__CFORALL__
     119
    109120// Local Variables: //
    110121// mode: c //
  • src/libcfa/math

    rdd0c97b rd3a85240  
    1414//
    1515
     16#ifdef __CFORALL__
     17
     18#ifndef MATH_H
     19#define MATH_H
     20
    1621extern "C" {
    1722#include <math.h>                                                                               // fpclassify, isfinite, isnormal, isnan, isinf
     
    349354long double scalbln( long double, long int );
    350355
     356#endif // MATH_H
     357
     358#else
     359#include_next <math>
     360#endif //__CFORALL__
     361
    351362// Local Variables: //
    352363// mode: c //
  • src/libcfa/rational

    rdd0c97b rd3a85240  
    1515// Update Count     : 16
    1616//
     17#ifdef __CFORALL__
     18
     19#ifndef RATIONAL_H
     20#define RATIONAL_H
    1721
    1822#include "iostream"
     
    6165forall( dtype ostype | ostream( ostype ) ) ostype * ?|?( ostype *, Rational );
    6266
     67#endif // RATIONAL_H
     68
     69#else
     70#include_next <rational>
     71#endif //__CFORALL__
     72
    6373// Local Variables: //
    6474// mode: c //
  • src/libcfa/stdlib

    rdd0c97b rd3a85240  
    1313// Update Count     : 99
    1414//
     15
     16#ifdef __CFORALL__
     17
     18#ifndef STDLIB_H
     19#define STDLIB_H
    1520
    1621//---------------------------------------
     
    131136void swap( T * t1, T * t2 );
    132137
     138#endif // STDLIB_H
     139
     140#else
     141#include_next <stdlib>
     142#endif //__CFORALL__
     143
    133144// Local Variables: //
    134145// mode: c //