Changeset 687165a for doc


Timestamp:
Nov 23, 2016, 12:03:39 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
df3339a
Parents:
bbd44c5
Message:

Reviewed start of threading and added quick proposal for coroutines

Location:
doc/proposals/concurrency
Files:
4 edited

  • doc/proposals/concurrency/Makefile

    rbbd44c5 r687165a  
    99SOURCES = ${addsuffix .tex, \
    1010concurrency \
     11style \
     12glossary \
    1113}
    1214
  • doc/proposals/concurrency/concurrency.tex

    rbbd44c5 r687165a  
    149149
    150150\begin{lstlisting}
    151         mutex struct counter_t { /*...see section §\ref{data}§...*/ };
     151        mutex struct counter_t { /*...see section §\ref{data}§...*/ };
    152152
    153153        void ?{}(counter_t & nomutex this); //constructor
     
    567567\newpage
    568568\subsection{External scheduling} \label{extsched}
    569 As one might expect, the alternative to Internal scheduling is to use External scheduling instead. This method is somewhat more robust to deadlocks since one of the threads keeps a relatively tight control on scheduling. Indeed, as the following examples will demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring. External scheduling can generally be done either in terms of control flow (ex: \uC) or in terms of data (ex: Go). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control flow semantics where chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The following example shows what a simple use \code{accept} versus \code{wait}/\code{signal} and its advantages.
     569An alternative to internal scheduling is to use external scheduling instead. This method is more constrained and explicit, which may help users tone down the nondeterministic nature of concurrency. Indeed, as the following example demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (ex: \uC) or in terms of data (ex: Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The following example shows a simple use of \code{accept} versus \code{wait}/\code{signal} and its advantages.
    570570
    571571\begin{center}
     
    585585
    586586        public:
    587                 void f();
     587                void f() { /*...*/ }
    588588                void g() { _Accept(f); }
    589589        private:
     
    593593\end{center}
    594594
    595 In the case of internal scheduling, the call to \code{wait} only guarantees that \code{g} was the last routine to access the monitor. This intails that the routine \code{f} may have acquired mutual exclusion several times while routine \code{h} was waiting. On the other hand, external scheduling guarantees that while routine \code{h} was waiting, no routine other than \code{g} could acquire the monitor.
     595In the case of internal scheduling, the call to \code{wait} only guarantees that \code{g} is the last routine to access the monitor. This entails that the routine \code{f} may have acquired mutual exclusion several times while routine \code{h} was waiting. On the other hand, external scheduling guarantees that while routine \code{h} was waiting, no routine other than \code{g} could acquire the monitor.
    596596\\
    597597
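
For orientation, here is a minimal sketch of how this external-scheduling pattern could look in \CFA, assuming an \code{accept} statement on monitor routines analogous to \uC's \code{_Accept} (the precise syntax is part of the design discussion in this section) :
\begin{lstlisting}
	mutex struct A {};

	void f(A & mutex this) { /*...*/ }

	void g(A & mutex this) {
		accept(f);	//block until the next call to f on this monitor
	}
\end{lstlisting}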
     
    776776% #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
    777777\section{Parallelism}
    778 Historically, computer performance was about processor speeds and instructions count. However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is not longer reasonnable to create high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization. The lowest level approach of parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations \glspl{kthread} are now mostly used as an implementation tool rather than a user oriented one. There are several alternatives to solve these issues that all have strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads.
     778Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being a direct consequence of speed increases, parallelism has become the new source of increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance, more specifically throughput and hardware utilization. The lowest-level approach to parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \glspl{kthread} are now mostly used as an implementation tool rather than a user-oriented one. There are several alternatives that address these issues, each with strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs around in order to achieve better performance for certain workloads.
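
As a concrete point of reference, the \gls{kthread} approach with explicit fork/join semantics looks roughly like the following POSIX sketch (plain C, shown only to illustrate the model, not as part of this proposal) :
\begin{lstlisting}
	#include <pthread.h>

	void* unit_of_work(void* arg) {
		/*... do some work ...*/
		return NULL;
	}

	void do_work_in_parallel() {
		pthread_t workers[4];
		for(int i = 0; i < 4; i++) {
			//fork : one kernel thread per unit of work
			pthread_create(&workers[i], NULL, unit_of_work, NULL);
		}
		for(int i = 0; i < 4; i++) {
			//join : wait for each kernel thread to finish
			pthread_join(workers[i], NULL);
		}
	}
\end{lstlisting}
Every call to \code{pthread_create} pays the full cost of a kernel thread, which is precisely the overhead that makes \glspl{kthread} an implementation tool rather than a user-oriented one.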
    779779
    780780\subsection{User-level threads}
    781 A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provide but can be used on a much larger scale. This approach is the most powerfull solution as it allows all the features of multi-threading while removing several of the more expensives costs of using kernel threads. The down side is that almost none of the low-level threading problems are hidden, users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong garantees but the parallelism toolkit offers very little to reduce complexity in itself.
    782 
    783 Examples of languages that support are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
    784 
    785 \subsection{Fibers : user-level threads without preemption}
    786 In the middle of the flexibility versus complexity spectrum lay \glspl{fiber} which offer \glspl{uthread} without the complexity of preemption by using cooperative scheduling. On a single core machine this means users need not worry about concurrency. On multi-core machines, while concurrency is still a concern, it is only a problem for fibers across cores but not while on the same core. This extra guarantee plus the fact that creating and destroying fibers are implicit synchronizing points means preventing mutable shared ressources still leaves many control flow options. However, multi-core processors can still execute fibers in parallel. This means users either need to worry about mutual exclusion, deadlocks and race conditions, or limit themselves to subset of concurrency primitives, raising the complexity in both cases. In this aspect, fibers can be seen as a more powerfull alternative to \glspl{job}.
     781A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provides, but can be used on a much larger scale. This approach is the most powerful solution, as it allows all the features of multi-threading while removing several of the more expensive costs of using kernel threads. The downside is that almost none of the low-level threading problems are hidden: users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit itself offers very little to reduce complexity.
     782
     783Examples of languages that support \glspl{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
     784
     785\subsubsection{Fibers : user-level threads without preemption}
     786A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantic differences with \glspl{uthread}. Advocates of \glspl{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers.
    787787
    788788An example of a language that uses fibers is Go~\cite{Go}.
    789789
    790790\subsection{Jobs and thread pools}
    791 The approach on the opposite end of the spectrum is to base parallelism on \glspl{pool}. Indeed, \glspl{pool} offer limited flexibility but at the benefit of a simpler user interface. In \gls{pool} based systems, users express parallelism as units of work and the dependency graph (either explicit or implicit) that tie them together. This approach means users need not worry about concurrency but significantly limits the interaction that can occur between different jobs. Indeed, any \gls{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably. It can be argued that a solution to this problem is to use more workers than available cores. However, unless the number of jobs and the number of workers are comparable, having a significant amount of blocked jobs will always results in idles cores.
     791The approach on the opposite end of the spectrum is to base parallelism on \glspl{pool}. Indeed, \glspl{pool} offer limited flexibility but at the benefit of a simpler user interface. In \gls{pool} based systems, users express parallelism as units of work and a dependency graph (either explicit or implicit) that ties them together. This approach means users need not worry about concurrency but significantly limits the interaction that can occur among jobs. Indeed, any \gls{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably. It can be argued that a solution to this problem is to use more workers than available cores. However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.
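
To make the model concrete, a job-pool interface could be sketched roughly as follows (these names and signatures are illustrative placeholders, not part of this proposal) :
\begin{lstlisting}
	struct pool;	//opaque set of worker threads

	//enqueue a unit of work and its argument
	void submit(pool* p, void (*job)(void*), void* arg);

	//block until every submitted job has completed
	void wait_all(pool* p);
\end{lstlisting}
A \gls{job} passed to \code{submit} must not block, otherwise it also blocks the worker executing it, which is exactly the utilization problem described above.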
    792792
    793793The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
    794794
    795795\subsection{Paradigm performance}
    796 While the choice between the three paradigms listed above may have significant performance implication, it is difficult to pindown the performance implications of chosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantess that the \gls{pool} based system has the best performance thanks to the lower memory overhead. However, interactions between jobs can easily exacerbate contention. User-level threads may allows fine grain context switching which may result in better resource utilisation but context switches will be more expansive and it is also harder for users to get perfect tunning. As with every example, fibers sit somewhat in the middle of the spectrum. Furthermore, if the units of uninterrupted work are large enough the paradigm choice will be largely amorticised by the actual work done.
     796While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Having a large number of mostly independent units of work to execute almost guarantees that the \gls{pool} based system has the best performance, thanks to the lower memory overhead. However, interactions between jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but context switches will be more expensive and the extra control means users need to tweak more variables to get the desired performance. Furthermore, if the units of uninterrupted work are large enough, the paradigm choice is largely amortised by the actual work done; for example, with purely hypothetical numbers, a 1 microsecond scheduling overhead on a unit of work that runs for 10 milliseconds amounts to only 0.01\% of the execution time.
    797797
    798798%  #####  #######    #          ####### ######  ######
     
    805805
    806806\section{\CFA 's Thread Building Blocks}
    807 As a system level language, \CFA should offer both performance and flexibilty as its primary goals, simplicity and user-friendliness being a secondary concern. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, it is possible to deconstruct the three paradigms details aboved in order to get simple building blocks. Here is a table showing the core caracteristics of the mentionned paradigms :
    808 \begin{center}
    809 \begin{tabular}[t]{| r | c | c |}
    810 \cline{2-3}
    811 \multicolumn{1}{ c| }{} & Has a stack & Preemptive \\
    812 \hline
    813 \Glspl{pool} & X & X \\
    814 \hline
    815 \Glspl{fiber} & \checkmark & X \\
    816 \hline
    817 \Glspl{uthread} & \checkmark & \checkmark \\
    818 \hline
    819 \end{tabular}
    820 \end{center}
    821 
    822 This table is missing several variations (for example jobs on \glspl{uthread} or \glspl{fiber}), but these variation affect mostly performance and do not effect the guarantees as the presented paradigm do.
    823 
    824 As shown in section \ref{cfaparadigms} these different blocks being available in \CFA it is trivial to reproduce any of these paradigm.
     807As a system-level language, \CFA should offer both performance and flexibility as its primary goals, simplicity and user-friendliness being secondary concerns. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing the popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.
    825808
    826809% ####### #     # ######  #######    #    ######   #####
     
    833816
    834817\subsection{Thread Interface}
    835 The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread} and as such offer a flexible and lightweight threading interface (lightweight comparatievely to \glspl{kthread}). A thread can be declared using a struct declaration with prefix \code{thread} as follows :
     818The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread}, and as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a struct declaration with prefix \code{thread} as follows :
    836819
    837820\begin{lstlisting}
     
    839822\end{lstlisting}
    840823
    841 Obviously, for this thread implementation to be usefull it must run some user code. Several other threading interfaces use some function pointer representation as the interface of threads (for example : \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread superseeds this approach. Since the \code{main} routine is definitely a special routine in \CFA, the existing syntax for declaring routines with unordinary name can be extended, i.e. operator overloading. As such the \code{main} routine of a thread can be defined as :
     824Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example : \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), the existing syntax for declaring routine names with special semantics can be extended, i.e. operator overloading. As such, the \code{main} routine of a thread can be defined as :
    842825\begin{lstlisting}
    843826        thread struct foo {};
    844827
    845         void ?main(thread foo* this) {
    846                 /*... Some useful code ...*/
    847         }
    848 \end{lstlisting}
    849 
    850 With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously :
     828        void ?main(foo* this) {
     829                sout | "Hello World!" | endl;
     830        }
     831\end{lstlisting}
     832
     833In this example, threads of type \code{foo} start their execution in the \code{void ?main(foo*)} routine, which in this case prints \code{"Hello World!"}. While this proposal encourages this approach, which enforces strongly typed programming, users may prefer routine-based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously :
    851834\begin{lstlisting}
    852835        typedef void (*voidFunc)(void);
     
    857840
    858841        //ctor
    859         void ?{}(thread FuncRunner* this, voidFunc inFunc) {
     842        void ?{}(FuncRunner* this, voidFunc inFunc) {
    860843                func = inFunc;
    861844        }
    862845
    863846        //main
    864         void ?main(thread FuncRunner* this) {
     847        void ?main(FuncRunner* this) {
    865848                this->func();
    866849        }
    867850\end{lstlisting}
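
For illustration, a possible use of \code{FuncRunner}, assuming the \acrshort{raii} start/stop semantics described next (the routine names here are hypothetical) :
\begin{lstlisting}
	void hello() {
		sout | "Hello!" | endl;
	}

	void caller() {
		FuncRunner runner = {hello};	//hello() starts running asynchronously here
		/*... other work, concurrent with hello() ...*/
	}	//caller blocks here until hello() has completed
\end{lstlisting}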
    868851
    869 % In this example \code{func} is a function pointer stored in \acrfull{tls}, which is \CFA is both easy to use and completly typesafe.
    870 
    871 Of course for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is not needed. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} once the constructor has completed and \code{join} before the destructor runs.
    872 \begin{lstlisting}
    873 thread struct FuncRunner; //FuncRunner declared above
    874 
    875 void world() {
     852Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} once the constructor has completed and \code{join} before the destructor runs.
     853\begin{lstlisting}
     854thread struct World {}; //empty thread type, behaviour in ?main below
     855
     856void ?main(World* this) {
    876857        sout | "World!" | endl;
    877858}
    878859
    879860void main() {
    880         FuncRunner run = {world};
     861        World w;
    881862        //Thread w forks here
    882863
     
    887868}
    888869\end{lstlisting}
    889 This semantic has several advantages over explicit semantics : typesafety is guaranteed, a thread is always started and stopped exaclty once and users cannot make any progamming errors. Furthermore it naturally follows the memory allocation semantics, which means users do not need to learn multiple semantics.
    890 
    891 These semantics also naturally scale to multiple threads meaning basic synchronisation is very simple :
     870This semantic has several advantages over explicit semantics : typesafety is guaranteed, a thread is always started and stopped exactly once, and users cannot make any programming errors. However, one apparent drawback of this system is that threads now always form a lattice, that is, they are always destroyed in the opposite order of construction. While this seems like a significant limitation, existing \CFA semantics solve this problem. Indeed, using dynamic allocation to create threads naturally lets threads outlive the scope in which they were created, much like dynamically allocating memory lets objects outlive the scope in which they were created :
     871
    892872\begin{lstlisting}
    893873        thread struct MyThread {
     
    896876
    897877        //ctor
    898         void ?{}(thread MyThread* this) {}
     878        void ?{}(MyThread* this,
     879                     bool is_special = false) {
     880                //...
     881        }
    899882
    900883        //main
    901         void ?main(thread MyThread* this) {
     884        void ?main(MyThread* this) {
     885                //...
     886        }
     887
     888        void foo() {
     889                MyThread* special_thread;
     890                {
     891                        MyThread thrd = {false};
     892                        //Start a thread at the beginning of the scope
     893
     894                        DoStuff();
     895
     896                        //create another thread that will outlive the thread in this scope
     897                        special_thread = new MyThread{true};
     898
     899                        //Wait for the thread to finish
     900                }
     901                DoMoreStuff();
     902
     903                delete special_thread; //now wait for the special thread to finish
     904        }
     905\end{lstlisting}
     906
     907Another advantage of this semantic is that it naturally scales to multiple threads, meaning basic synchronisation is very simple :
     908
     909\begin{lstlisting}
     910        thread struct MyThread {
     911                //...
     912        };
     913
     914        //ctor
     915        void ?{}(MyThread* this) {}
     916
     917        //main
     918        void ?main(MyThread* this) {
    902919                //...
    903920        }
     
    911928                //Wait for the 10 threads to finish
    912929        }
    913 
    914         void bar() {
    915                 MyThread* thrds = new MyThread[10];
    916                 //Start 10 threads at the beginning of the scope
    917 
    918                 DoStuff();
    919 
    920                 //Wait for the 10 threads to finish
    921                 delete MyThread;
    922         }
    923 \end{lstlisting}
    924 
    925 \newpage
    926 \large{\textbf{WORK IN PROGRESS}}
     930\end{lstlisting}
     931
     932\subsection{Coroutines : A stepping stone}\label{coroutine}
     933While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant underlying aspect of the concurrency system. Indeed, while having nothing to do with parallelism and arguably very little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA.
     934
     935The core API of coroutines revolves around two features : independent stacks and suspend/resume. Much like threads, the syntax for declaring a coroutine is to declare a type and a main routine for it to run :
     936\begin{lstlisting}
     937        coroutine struct MyCoroutine {
     938                //...
     939        };
     940
     941        //ctor
     942        void ?{}(MyCoroutine* this) {}
     943
     944        //main
     945        void ?main(MyCoroutine* this) {
     946                sout | "Hello World!" | endl;
     947        }
     948\end{lstlisting}
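
Once declared, driving such a coroutine is a matter of context switching into it; a minimal sketch (using the \code{resume} routine shown in the next example) :
\begin{lstlisting}
	void use() {
		MyCoroutine c;
		resume(&c);	//transfer to c's main, which prints "Hello World!"
	}
\end{lstlisting}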
     949
     950Once a coroutine is created, users can context switch to it using \code{resume}, and the coroutine returns control using \code{suspend}. Here is an example of a solution to the Fibonacci problem using coroutines :
     951\begin{lstlisting}
     952        coroutine struct Fibonacci {
     953                int fn; // used for communication
     954        };
     955
     956        void ?main(Fibonacci* this) {
     957                int fn1, fn2;           // retained between resumes
     958                this->fn = 0;
     959                fn1 = this->fn;
     960                suspend(this);          // return to last resume
     961
     962                this->fn = 1;
     963                fn2 = fn1;
     964                fn1 = this->fn;
     965                suspend(this);          // return to last resume
     966
     967                for ( ;; ) {
     968                        this->fn = fn1 + fn2;
     969                        fn2 = fn1;
     970                        fn1 = this->fn;
     971                        suspend(this);  // return to last resume
     972                }
     973        }
     974
     975        int next(Fibonacci& this) {
     976                resume(&this); // transfer to last suspend
     977                return this.fn;
     978        }
     979
     980        void main() {
     981                Fibonacci f1, f2;
     982                for ( int i = 1; i <= 10; i += 1 ) {
     983                        sout | next(f1) | '§\verb+ +§' | next(f2) | endl;
     984                }
     985        }
     986\end{lstlisting}
     987
    927988\subsection{The \CFA Kernel : Processors, Clusters and Threads}\label{kernel}
    928989
  • doc/proposals/concurrency/style.tex

    rbbd44c5 r687165a  
    22
    33\lstset{
    4 morekeywords=[2]{nomutex,mutex,thread,wait,wait_release,signal,signal_block,accept,monitor},
     4morekeywords=[2]{nomutex,mutex,thread,wait,wait_release,signal,signal_block,accept,monitor,suspend,resume,coroutine},
    55keywordstyle=[2]\color{blue},                           % second set of keywords for concurency
    66basicstyle=\linespread{0.9}\tt\small,           % reduce line spacing and use typewriter font
  • doc/proposals/concurrency/version

    rbbd44c5 r687165a  
    1 0.6.30
     10.6.45