Changeset f7ff3fb
- Timestamp: Jan 9, 2017, 4:35:40 PM (8 years ago)
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: 17e5e2b
- Parents: 66f8528
- Location: doc/proposals/concurrency
- Files: 1 added, 2 edited
Legend: Unmodified, Added, Removed
File: doc/proposals/concurrency/concurrency.tex
r66f8528 → rf7ff3fb

Changed (line 151; the old and new versions render identically here, so the change appears to be whitespace only):

\begin{lstlisting}
mutex struct counter_t { /*...see section §\ref{data}§...*/ };

void ?{}(counter_t & nomutex this); //constructor
\end{lstlisting}

Around the "TO BE CONTINUED..." marker, a few blank lines are added before the \newpage and the comment banner that follow.

Unmodified:

As a system-level language, \CFA should offer both performance and flexibility as its primary goals, with simplicity and user-friendliness being a secondary concern. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.

Added: a new subsection on coroutines.

\subsection{Coroutines : A stepping stone}\label{coroutine}
While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant underlying aspect of the concurrency system. Indeed, while having nothing to do with parallelism and arguably very little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA.

The core API of coroutines revolves around two features: independent stacks and \code{suspend}/\code{resume}. Here is an example of a solution to the Fibonacci problem using \CFA coroutines:
\begin{lstlisting}
struct Fibonacci {
      int fn;                  // used for communication
      coroutine_descriptor c;
};

void ?{}(Fibonacci* this) {
      this->fn = 0;
}

coroutine_descriptor* get_¶coroutine¶(Fibonacci* this) {
      return &this->c;
}

void co_main(Fibonacci* this) {
      int fn1, fn2;            // retained between resumes
      this->fn = 0;
      fn1 = this->fn;
      suspend(this);           // return to last resume

      this->fn = 1;
      fn2 = fn1;
      fn1 = this->fn;
      suspend(this);           // return to last resume

      for ( ;; ) {
            this->fn = fn1 + fn2;
            fn2 = fn1;
            fn1 = this->fn;
            suspend(this);     // return to last resume
      }
}

int next(Fibonacci* this) {
      resume(this);            // transfer to last suspend
      return this->fn;
}

void main() {
      Fibonacci f1, f2;
      for ( int i = 1; i <= 10; i += 1 ) {
            sout | next(&f1) | '§\verb+ +§' | next(&f2) | endl;
      }
}
\end{lstlisting}

\subsubsection{Construction}
One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run some code after the user constructor runs. In the case of coroutines this challenge is simpler, since there is no loss of determinism brought by preemption or scheduling; however, the underlying challenge remains the same for coroutines and threads.

The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial, since users expect both to have fully constructed objects once the main is called and to be able to resume the coroutine from the main (obviously, only cases where these two requirements do not conflict are addressed). There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.

Furthermore, \CFA faces an extra challenge: polymorphic routines rely on invisible thunks when cast to non-polymorphic routines, and these thunks have function scope, for example:

TODO : Simple case where a thunk would be created.
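The TODO above is left as in the source. Purely as a hedged illustration of the kind of case it refers to (the routine names here are invented, not taken from the changeset), a thunk arises when a polymorphic routine is passed where a non-polymorphic routine pointer is expected:

\begin{lstlisting}
// hypothetical illustration, not part of the changeset
forall(otype T) void noop(T val) { }     // polymorphic routine

void apply(void (*f)(int), int x) {      // expects a non-polymorphic routine
      f(x);
}

void caller() {
      // binding noop to void (*)(int) requires the compiler to generate an
      // invisible thunk that fixes T to int; that thunk has function scope,
      // so it must not be used after caller returns
      apply(noop, 7);
}
\end{lstlisting}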
\subsubsection{Alternative: Inheritance}
One solution to this challenge would be to use inheritance:

\begin{lstlisting}
struct Fibonacci {
      int fn;        // used for communication
      coroutine c;
};

void ?{}(Fibonacci* this) {
      this->fn = 0;
      (&this->c){};
}
\end{lstlisting}

There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer; regardless of whether a parameter or a virtual pointer is used, this means the coroutine data must be made larger to store a value that is actually a compile-time constant (the address of the main routine). The second problem, which is subtle but significant, is that users can now get the initialization order of their coroutines wrong. Indeed, every field of a \CFA struct is constructed, but in the order of declaration, unless users explicitly write otherwise. This means that users who forget to initialize the coroutine at the right time may resume the coroutine with an uninitialized object. For coroutines this is unlikely to be a problem; for threads, however, it is a significant problem.

\subsubsection{Alternative: Reserved keyword}
The next alternative is to use language support to annotate coroutines as follows:

\begin{lstlisting}
coroutine struct Fibonacci {
      int fn; // used for communication
};
\end{lstlisting}

This means the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutines a special case in the language: users who want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also displays the power of \CFA.

\subsubsection{Alternative: Lambda Objects}

Boost does not use objects...
TO BE CONTINUED...

\subsubsection{Trait-based coroutines}

Finally, the approach chosen, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines, the approach shown in section \ref{coroutine}. This approach defines a coroutine as anything that satisfies the \code{is_coroutine} trait and is used as a coroutine. This entails that an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine, since it must satisfy the \code{is_coroutine} trait to compile.
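The \code{is_coroutine} trait itself is not spelled out in this hunk. As a hedged sketch only, mirroring the \code{is_thread} trait in the thread-interface hunk below and the \code{co_main}/\code{get_coroutine} routines of the Fibonacci example above, it might require something like the following (the exact member list is an assumption, not taken from the changeset):

\begin{lstlisting}
// hypothetical sketch: mirrors the is_thread trait shown below and the
// routines defined for Fibonacci above; not taken from the changeset
trait is_coroutine(dtype T) {
      void co_main(T* this);                         // coroutine body
      coroutine_descriptor* get_coroutine(T* this);  // handle used by suspend/resume
};
\end{lstlisting}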
A comment banner separates this from the next subsection, whose heading gains a label.

Changed: \subsection{Thread Interface} becomes \subsection{Thread Interface}\label{threads}.

Unmodified:

The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread} and, as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a struct declaration with the prefix \code{thread}, as follows:

Added: the \code{is_thread} trait, placed before the existing \code{thread struct foo} declaration.

\begin{lstlisting}
trait is_¶thread¶(dtype T) {
      void co_main(T* this);
      coroutine* get_coroutine(T* this);
};

thread struct foo {};
\end{lstlisting}

Changed (inside the elided \code{FuncRunner} example): the thread main is renamed from \code{?main} to \code{t_main}.

\begin{lstlisting}
//main
void t_main(FuncRunner* this) {
      this->func();
}
\end{lstlisting}

The rest of this hunk, ending at the comment "//Wait for the 10 threads to finish", is unchanged.

Removed: the previous version of \subsection{Coroutines : A stepping stone}\label{coroutine}, which followed the thread-interface section. Its introductory paragraph and its Fibonacci example reappear, revised, in the new subsection added earlier in the file (the old example declared \code{coroutine struct Fibonacci}, used \code{?main} as the coroutine main, and took \code{Fibonacci&} in \code{next}). The removed text also explained that, much like threads, the syntax for declaring a coroutine is declaring a type and a main routine for it to start, and that once a coroutine is created, users can context switch to it using \code{suspend} and come back using \code{resume}. Its declaration example, which has no direct counterpart in the new subsection, was:

\begin{lstlisting}
coroutine struct MyCoroutine {
      //...
};

//ctor
void ?{}(MyCoroutine* this) {

}

//main
void ?main(MyCoroutine* this) {
      sout | "Hello World!" | endl;
}
\end{lstlisting}
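For illustration only, relating back to the \code{is_thread} trait added above: a type could satisfy that trait by hand, in the same style as the Fibonacci coroutine example. Everything in the following sketch (the type name, its field, and its constructor) is an assumption for illustration, not taken from the changeset, and it deliberately leaves open how such a thread is started:

\begin{lstlisting}
// hypothetical sketch of a type satisfying the is_thread trait above;
// all names are invented for illustration
struct Worker {
      int id;        // used for communication
      coroutine c;   // storage the runtime suspends and resumes on
};

void ?{}(Worker* this, int id) {
      this->id = id;
}

coroutine* get_coroutine(Worker* this) {
      return &this->c;
}

void co_main(Worker* this) {
      sout | "worker " | this->id | " running" | endl;
}
\end{lstlisting}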
File: doc/proposals/concurrency/version
r66f8528 → rf7ff3fb

Changed (line 1): 0.7.48 becomes 0.7.61.