Changeset 8514fe19


Timestamp:
May 11, 2017, 11:08:07 AM (4 years ago)
Author:
Andrew Beach <ajbeach@…>
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
f2e746d
Parents:
c850687 (diff), 6250a312 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc/proposals/concurrency
Files:
1 added
4 edited

  • doc/proposals/concurrency/Makefile

    rc850687 r8514fe19  
    1010concurrency \
    1111style \
     12cfa-format \
    1213glossary \
    1314}
  • doc/proposals/concurrency/concurrency.tex

    rc850687 r8514fe19  
    99% math escape $...$ (dollar symbol)
    1010
    11 \documentclass[twoside,11pt]{article}
     11\documentclass[letterpaper,12pt,titlepage,oneside,final]{book}
    1212
    1313%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
    2424\usepackage{graphicx}
    2525\usepackage{tabularx}
     26\usepackage{multicol}
    2627\usepackage[acronym]{glossaries}
    27 \usepackage{varioref}                                           % extended references
    28 \usepackage{inconsolata}
     28\usepackage{varioref}   
    2929\usepackage{listings}                                           % format program code
    3030\usepackage[flushmargin]{footmisc}                              % support label/reference in footnote
     
    6262\newcommand{\cit}{\textsuperscript{[Citation Needed]}\xspace}
    6363\newcommand{\code}[1]{\lstinline[language=CFA]{#1}}
    64 \newcommand{\pseudo}[1]{\lstinline[language=Pseudo]{#1}}
     64\newcommand{\pscode}[1]{\lstinline[language=pseudo]{#1}}
     65\newcommand{\TODO}{{\textbf{TODO}}}
    6566
    6667\input{glossary}
     
    99100% ### #     #    #    #     # #######
    100101
    101 \section{Introduction}
     102\chapter{Introduction}
    102103This proposal provides a minimal core concurrency API that is simple and efficient, and can be reused to build higher-level features. The simplest possible concurrency core is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based, message-passing and implicit-threading models.
    103104
     
    112113%  #####  ####### #     #  #####   #####  #     # #     # ####### #     #  #####     #
    113114
    114 \section{Concurrency}
     115\chapter{Concurrency}
    115116Several tools can be used to solve concurrency challenges. Since these challenges always appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call), which in turn means that, to be effective, programmers need to learn two sets of design patterns. This distinction can be hidden away in library code, but effective use of the library still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and objects. At a lower level these can be implemented as locks and atomic operations. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set are currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared memory systems, is the \emph{monitor}. 
Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency construct.
    116117
     
    123124% #     # ####### #     # ###    #    ####### #     #  #####
    124125
    125 \subsection{Monitors}
     126\section{Monitors}
    126127A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with object-oriented languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OOP semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
    127 \begin{lstlisting}
     128\begin{cfacode}
    128129        typedef /*some monitor type*/ monitor;
    129130        int f(monitor & m);
     
    133134                f(m);
    134135        }
    135 \end{lstlisting}
     136\end{cfacode}
    136137
    137138%  #####     #    #       #
     
    143144%  #####  #     # ####### #######
    144145
    145 \subsubsection{Call semantics} \label{call}
     146\subsection{Call semantics} \label{call}
    146147The above monitor example displays some of the intrinsic characteristics. Indeed, it is necessary to use pass-by-reference rather than pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.
    147148
    148149Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual exclusion on entry. Pass-through can occur in generic helper routines (\code{swap}, \code{sort}, etc.) or in specific helper routines like the following, which implements an atomic counter:
    149150
    150 \begin{lstlisting}
    151         mutex struct counter_t { /*...see section §\ref{data}§...*/ };
     151\begin{cfacode}
     152        monitor counter_t { /*...see section $\ref{data}$...*/ };
    152153
    153154        void ?{}(counter_t & nomutex this); //constructor
     
    156157        //need for mutex is platform dependent here
    157158        void ?{}(size_t * this, counter_t & mutex cnt); //conversion
    158 \end{lstlisting}
     159\end{cfacode}
    159160
    160161Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual exclusion when constructing. This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword, depending on whether reading a \code{size_t} is an atomic operation.
    161162
    162 Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without quualifiers \code{void foo(counter_t & this)} then one could argue that it should default to the safest option \code{mutex}. On the other hand, the option of having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. It can be argued that \code{nomutex} is the more "normal" behaviour, the \code{nomutex} keyword effectively stating explicitly that "this routine has nothing special". Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routine \code{void foo(counter_t & this)}. Mandatory keywords would also have the added benefice of being self-documented but at the cost of extra typing. In the end, which solution should be picked is still up for debate. For the reminder of this proposal, the explicit approach is used for clarity.
    163 
    164 The next semantic decision is to establish when mutex/nomutex may be used as a type qualifier. Consider the following declarations:
    165 \begin{lstlisting}
     163Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant, based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, one could argue that it should default to the safest option, \code{mutex}. On the other hand, having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. It can be argued that \code{nomutex} is the more ``normal'' behaviour, the \code{nomutex} keyword effectively stating explicitly that ``this routine has nothing special''. Another alternative is to make exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routine \code{void foo(counter_t & this)}. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. However, since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.
     164
     165
     166The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations:
     167\begin{cfacode}
    166168        int f1(monitor & mutex m);
    167169        int f2(const monitor & mutex m);
     
    169171        int f4(monitor *[] mutex m);
    170172        int f5(graph(monitor*) & mutex m);
    171 \end{lstlisting}
    172 The problem is to indentify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to indentify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C and even then making sure we only acquire objects once becomes also none trivial. This can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To keep everyone as sane as possible~\cite{Chicken}, this projects imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter (ignoring potential qualifiers and indirections). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is be acquired, passing an array to this routine would be type safe and yet result in undefined behavior because only the first element of the array is acquired. However, this ambiguity is part of the C type system with respects to arrays. For this reason, it would also be reasonnable to disallow mutex in the context where arrays may be passed.
     173\end{cfacode}
     174The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes non-trivial. This can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To keep everyone as sane as possible~\cite{Chicken}, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type safe and yet result in undefined behavior because only the first element of the array is acquired. However, this ambiguity is part of the C type system with respect to arrays. For this reason, \code{mutex} is disallowed in the context where arrays may be passed.
     175
     176Finally, for convenience, monitors support multiple acquisition, that is, acquiring a monitor while already holding it does not cause a deadlock. Doing so simply increments an internal counter, which is then used to release the monitor once the number of acquires and releases match.
    173177
    174178% ######     #    #######    #
     
    180184% ######  #     #    #    #     #
    181185
    182 \subsubsection{Data semantics} \label{data}
     186\subsection{Data semantics} \label{data}
    183187Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle, but in most cases monitors contain shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection. For example, here is a complete version of the counter shown in section \ref{call}:
    184 \begin{lstlisting}
    185         mutex struct counter_t {
     188\begin{cfacode}
     189        monitor counter_t {
    186190                int value;
    187191        };
    188192
    189         void ?{}(counter_t & nomutex this) {
     193        void ?{}(counter_t & this) {
    190194                this.cnt = 0;
    191195        }
     
    199203                *this = (int)cnt;
    200204        }
    201 \end{lstlisting}
     205\end{cfacode}
    202206
    203207This simple counter is used as follows:
    204208\begin{center}
    205209\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
    206 \begin{lstlisting}
     210\begin{cfacode}
    207211        //shared counter
    208212        counter_t cnt;
     
    214218          ...
    215219        thread N : cnt++;
    216 \end{lstlisting}
     220\end{cfacode}
    217221\end{tabular}
    218222\end{center}
    219223
    220224Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writing. Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual exclusion, \CFA uses an explicit mechanism to acquire mutual exclusion. A consequence of this approach is that it extends to multi-monitor calls.
    221 \begin{lstlisting}
     225\begin{cfacode}
    222226        int f(MonitorA & mutex a, MonitorB & mutex b);
    223227
     
    225229        MonitorB b;
    226230        f(a,b);
    227 \end{lstlisting}
     231\end{cfacode}
    228232This code acquires both locks before entering the critical section, called \emph{\gls{group-acquire}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use multi-acquisition locks, users can effectively force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
    229 \begin{lstlisting}
     233\begin{cfacode}
    230234        void foo(A & mutex a, B & mutex b) { //acquire a & b
    231235                //...
    232236        }
    233237
    234         void bar(A & mutex a, B & nomutex b) { //acquire a
     238        void bar(A & mutex a, B & /*nomutex*/ b) { //acquire a
    235239                //...
    236240                foo(a, b); //acquire b
     
    238242        }
    239243
    240         void baz(A & nomutex a, B & mutex b) { //acquire b
     244        void baz(A & /*nomutex*/ a, B & mutex b) { //acquire b
    241245                //...
    242246                foo(a, b); //acquire a
    243247                //...
    244248        }
    245 \end{lstlisting}
    246 
    247 The multi-acquisition monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order. such use leads to nested monitor call problems~\cite{Lister77}, which is a specific implementation of the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown on several occasion\cit, solving this problem requires :
     249\end{cfacode}
     250
     251The multi-acquisition monitor lock allows a monitor lock to be acquired by either \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order. Such use leads to nested monitor call problems~\cite{Lister77}, which are a more specific variation of the lock acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown on several occasions\cit, solving this problem requires:
    248252\begin{enumerate}
    249253        \item Dynamically tracking the monitor-call order.
     
    269273%             #       ####### #######    #    #     # ####### #     # #     #
    270274
    271 \subsubsection{Implementation Details: Interaction with polymorphism}
     275\subsection{Implementation Details: Interaction with polymorphism}
    272276At first glance, interaction between monitors and \CFA's concept of polymorphism seems complex to support. However, it is shown that entry-point locking can solve most of the issues.
    273277
     
    279283\CFA & pseudo-code & pseudo-code \\
    280284\hline
    281 \begin{lstlisting}
    282 void foo(monitor & mutex a) {
     285\begin{cfacode}[tabsize=3]
     286void foo(monitor& mutex a){
    283287
    284288
     
    297301
    298302}
    299 \end{lstlisting} &\begin{lstlisting}
     303\end{cfacode} & \begin{pseudo}[tabsize=3]
    300304foo(& a) {
    301305
     
    315319        release(a);
    316320}
    317 \end{lstlisting} &\begin{lstlisting}
     321\end{pseudo} & \begin{pseudo}[tabsize=3]
    318322foo(& a) {
    319323        //called routine
     
    333337
    334338}
    335 \end{lstlisting}
     339\end{pseudo}
    336340\end{tabular}
    337341\end{center}
    338342
    339 First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. \Gls{callsite-locking} would require a significant amount of work, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking} calling a monitor routine becomes exactly the same as calling it from anywhere else.
    340 
     343First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. \Gls{callsite-locking} would require a significant amount of work, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking}, calling a monitor routine becomes exactly the same as calling it from anywhere else. Note that the \code{mutex} keyword relies on the resolver, which means that in cases where generic monitor routines are actually desired, writing a mutex routine is possible with the proper trait.
    341344
    342345
     
    349352% ### #     #    #    ###     #####   #####  #     # ####### ######
    350353
    351 \subsection{Internal scheduling} \label{insched}
    352 
    353 \begin{center}
    354 \begin{tabular}{ c @{\hskip 0.65in} c }
    355 \begin{lstlisting}[language=Pseudo]
     354\section{Internal scheduling} \label{insched}
     355In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronisation. With monitors, this is generally achieved with internal or external scheduling as in~\cit. Since internal scheduling of single monitors is mostly a solved problem, this proposal concentrates on extending internal scheduling to multiple monitors at once. Indeed, like the \gls{group-acquire} semantics, internal scheduling extends to multiple monitors at once in a way that is natural to the user but requires additional complexity on the implementation side.
     356
     357First, here is a simple example of such a technique:
     358
     359\begin{cfacode}
     360        monitor A {
     361                condition e;
     362        }
     363
     364        void foo(A & mutex a) {
     365                // ...
     366                // We need someone else to do something now
     367                wait(a.e);
     368                // ...
     369        }
     370
     371        void bar(A & mutex a) {
     372                // Do the thing foo is waiting on
     373                // ...
     374                // Signal foo it's done
     375                signal(a.e);
     376        }
     377\end{cfacode}
     378
     379Note that in \CFA, \code{condition} variables have no particular need to be stored inside a monitor, beyond any software-engineering reasons. Here, routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. An important aspect to take into account here is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other function waited on the same condition). This guarantee offers the benefit of not having to loop around waits in order to check that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity, but adding barging prevention or avoidance is more involved without language support.
     380
     381Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.
     382
     383\subsection{Internal Scheduling - multi monitor}
     384It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code thought experiments. Note that in the following snippets of pseudo-code, waiting and signalling are done without the use of a condition variable. While \CFA requires condition variables to use signalling, the variable itself only holds the data needed for the implementation of internal scheduling. Some languages like Java\cit simply define an implicit condition variable for every monitor, while other languages like \uC use explicit condition variables. Since the following pseudo-code examples are simple and focused experiments, all condition variables are implicit.
     385
     386\begin{multicols}{2}
     387\begin{pseudo}
    356388acquire A
    357389        wait A
    358390release A
    359 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
     391\end{pseudo}
     392
     393\columnbreak
     394
     395\begin{pseudo}
    360396acquire A
    361397        signal A
    362398release A
    363 \end{lstlisting}
    364 \end{tabular}
    365 \end{center}
    366 
    367 Easy : like uC++
    368 
    369 \begin{center}
    370 \begin{tabular}{ c @{\hskip 0.65in} c }
    371 \begin{lstlisting}[language=Pseudo]
     399\end{pseudo}
     400\end{multicols}
     401
     402The previous example shows the simple case of having two threads (one for each column) and a single monitor A. One thread acquires before waiting and the other acquires before signalling. There are a few important things to note here. First, both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This can be hidden on the user side, but is a logical requirement for barging prevention. Secondly, as stated above, while not all problems regarding single monitors are solved, this proposal concentrates on the challenges of \gls{group-acquire} and considers other problems related to monitors as solved.
     403
     404An important note about this example is that signalling a monitor is a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement.
     405
     406A direct extension of the previous example is the \gls{group-acquire} version:
     407
     408\begin{multicols}{2}
     409\begin{pseudo}
     410acquire A & B
     411        wait A & B
     412release A & B
     413\end{pseudo}
     414
     415\columnbreak
     416
     417\begin{pseudo}
     418acquire A & B
     419        signal A & B
     420release A & B
     421\end{pseudo}
     422\end{multicols}
     423
     424This version uses \gls{group-acquire} (denoted using the \& symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization will happen between the two threads in exactly the same way and order. The only difference is that mutual exclusion will cover more monitors. On the implementation side, handling multiple monitors at once does add a degree of complexity but it is not significant compared to the next few examples.
     425
     426For the sake of completeness, here is another example of the single-monitor case, this time with nesting.
     427
     428\begin{multicols}{2}
     429\begin{pseudo}
    372430acquire A
    373431        acquire B
     
    375433        release B
    376434release A
    377 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
    378 acquire A
    379         acquire B
    380                 signal B
    381         release B
    382 release A
    383 \end{lstlisting}
    384 \end{tabular}
    385 \end{center}
    386 
    387 Also easy : like uC++
    388 
    389 \begin{center}
    390 \begin{tabular}{ c @{\hskip 0.65in} c }
    391 \begin{lstlisting}[language=Pseudo]
    392 acquire A & B
    393         wait A & B
    394 release A & B
    395 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
    396 acquire A & B
    397         signal A & B
    398 release A & B
    399 \end{lstlisting}
    400 \end{tabular}
    401 \end{center}
    402 
    403 Simplest extension : can be made like uC++ by tying B to A
    404 
    405 \begin{center}
    406 \begin{tabular}{ c @{\hskip 0.65in} c }
    407 \begin{lstlisting}[language=Pseudo]
     435\end{pseudo}
     436
     437\columnbreak
     438
     439\begin{pseudo}
     440
     441acquire B
     442        signal B
     443release B
     444
     445\end{pseudo}
     446\end{multicols}
     447
     448While these cases can cause some deadlock issues, we consider such issues to be only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. However, for monitors, as for locks, it is possible to write programs that use nesting without encountering any problems, provided the locks are nested carefully.
     449
     450The next example is where \gls{group-acquire} adds a significant layer of complexity to the internal signalling semantics.
     451
     452\begin{multicols}{2}
     453\begin{pseudo}
    408454acquire A
    409455        // Code Section 1
    410         acquire B
     456        acquire A & B
    411457                // Code Section 2
    412458                wait A & B
    413459                // Code Section 3
    414         release B
     460        release A & B
    415461        // Code Section 4
    416462release A
    417 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
     463\end{pseudo}
     464
     465\columnbreak
     466
     467\begin{pseudo}
    418468acquire A
    419469        // Code Section 5
    420         acquire B
     470        acquire A & B
    421471                // Code Section 6
    422472                signal A & B
    423473                // Code Section 7
    424         release B
     474        release A & B
    425475        // Code Section 8
    426476release A
    427 \end{lstlisting}
    428 \end{tabular}
    429 \end{center}
    430 
    431 Hard extension :
    432 
    433 Incorrect options for the signal :
    434 
    435 \begin{description}
    436  \item[-] Release B and baton pass after Code Section 8 : Passing b without having it
    437  \item[-] Keep B during Code Section 8 : Can lead to deadlocks since we secretly keep a lock longer than specified by the user
    438  \item[-] Instead of release B transfer A and B to waiter then try to reacquire A before running Code Section 8 : This allows barging
    439 \end{description}
    440 
    441 Since we don't want barging we need to pass A \& B and somehow block and get A back.
    442 
    443 \begin{center}
    444 \begin{tabular}{ c @{\hskip 0.65in} c }
    445 \begin{lstlisting}[language=Pseudo]
     477\end{pseudo}
     478\end{multicols}
     479
     480It is particularly important to pay attention to code sections 3 and 8, which are where the existing semantics of internal scheduling are undefined. The root of the problem is that \gls{group-acquire} is used in a context where one of the monitors is already acquired. As mentioned in previous sections, monitors support multiple acquisition, which means that nesting \gls{group-acquire} can be done safely. However, in the context of internal scheduling it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should "release A \& B", it actually only needs to release monitor B. Since the other thread is waiting on monitor B, the signaller thread cannot simply release the monitor into the wild. This would mean that the waiting thread would have to reacquire the monitor and would therefore open the door to barging threads. Since the signalling thread still needs monitor A, simply transferring ownership to the waiting thread is not an option because it would potentially violate mutual exclusion. We are therefore left with three options :
     481
     482\subsubsection{Delaying signals}
     483The first and most obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that the moment when the last lock is no longer needed is the correct time to transfer ownership, since it fits most closely the behaviour of single-monitor scheduling. However, this solution can become much more complicated depending on the content of code section 8. Indeed, nothing prevents a user from signalling monitor A on a different condition variable. In that case, if monitor B is transferred along with monitor A, the system needs to handle threads owning more monitors than expected and decide how to tie monitors together. On the other hand, if the signalling thread only transfers monitor A, then somehow both monitors A and B have to be transferred to the waiting thread from two different threads. While this solution may work, it was not fully explored because there is no apparent upper bound on the complexity of ownership transfer.
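To illustrate the unbounded complexity, consider a hypothetical variant of the signaller's code in which code section 8 also signals monitor A on a second condition variable; the second \pscode{signal} below is an invented extension used only for illustration, not part of the original example :

\begin{pseudo}
acquire A
	acquire A & B
		signal A & B	// waiter 1 expects A & B
	release A & B	// delayed : signaller keeps A and B
	signal A	// hypothetical : waiter 2 expects only A
release A	// ownership of A & B must now be split between two waiters
\end{pseudo}

At the final release, ownership of A and B has to be divided between two different waiting threads, which is exactly the kind of ownership bookkeeping for which no obvious upper bound exists.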
     484
     485\subsubsection{Dependency graphs}
     486In the previous pseudo-code, there is a solution which satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases A, and the waiter then transfers ownership of A back when it releases it, the problem is solved. This is the second solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers, and the number of transfers seems to grow polynomially. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfers and has multiple solutions.
     487
     488\begin{multicols}{2}
     489\begin{pseudo}
    446490acquire A
    447491        acquire B
    448492                acquire C
    449493                        wait A & B & C
    450                 1: release C
    451         2: release B
    452 3: release A
    453 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
     494                release C
     495        release B
     496release A
     497\end{pseudo}
     498
     499\columnbreak
     500
     501\begin{pseudo}
    454502acquire A
    455503        acquire B
    456504                acquire C
    457505                        signal A & B & C
    458                 4: release C
    459         5: release B
    460 6: release A
    461 \end{lstlisting}
    462 \end{tabular}
    463 \end{center}
    464 
    465 To prevent barging :
    466 
    467 \begin{description}
    468  \item[-] When the signaller hits 4 : pass A, B, C to waiter
    469  \item[-] When the waiter hits 2 : pass A, B to signaller
    470  \item[-] When the signaller hits 5 : pass A to waiter
    471 \end{description}
    472 
    473 
    474 \begin{center}
    475 \begin{tabular}{ c @{\hskip 0.65in} c }
    476 \begin{lstlisting}[language=Pseudo]
     506                release C
     507        release B
     508release A
     509\end{pseudo}
     510\end{multicols}
     511
     512\subsubsection{Partial signalling}
     513Finally, the solution that was chosen for \CFA is to use partial signalling. Consider the following case :
     514
     515\begin{multicols}{2}
     516\begin{pseudo}[numbers=left]
    477517acquire A
    478         acquire C
    479                 acquire B
    480                         wait A & B & C
    481                 1: release B
    482         2: release C
    483 3: release A
    484 \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
    485 acquire B
    486         acquire A
    487                 acquire C
    488                         signal A & B & C
    489                 4: release C
    490         5: release A
    491 6: release B
    492 \end{lstlisting}
    493 \end{tabular}
    494 \end{center}
    495 
    496 To prevent barging : When the signaller hits 4 : pass A, B, C to waiter. When the waiter hits 1 it must release B,
    497 
    498 \begin{description}
    499  \item[-]
    500  \item[-] When the waiter hits 1 : pass A, B to signaller
    501  \item[-] When the signaller hits 5 : pass A, B to waiter
    502  \item[-] When the waiter hits 2 : pass A to signaller
    503 \end{description}
     518        acquire A & B
     519                wait A & B
     520        release A & B
     521release A
     522\end{pseudo}
     523
     524\columnbreak
     525
     526\begin{pseudo}[numbers=left, firstnumber=6]
     527acquire A
     528        acquire A & B
     529                signal A & B
     530        release A & B
     531        // ... More code
     532release A
     533\end{pseudo}
     534\end{multicols}
     535
     536The partial signalling solution transfers ownership of monitor B at line 9 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wake up the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released, and conditionally waking threads if all conditions are met. Contrary to the other solutions, this solution quickly hits an upper bound on the complexity of its implementation.
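A minimal sketch of the bookkeeping behind partial signalling could look as follows; the per-thread counter \pscode{pending} and the helper \pscode{wake} are illustrative names, not the actual \CFA runtime interface :

\begin{pseudo}
release of monitor M by thread S :
	if some thread T was signalled on M :
		transfer ownership of M from S to T
		T.pending := T.pending - 1	// one fewer monitor to wait for
		if T.pending == 0 :
			wake T	// T now owns all the monitors it waited on
	else :
		release M normally
\end{pseudo}

This captures the two actions mentioned above: each release either hands the monitor to its next owner or frees it, and a thread is woken only once its last outstanding monitor has been transferred.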
     537
     538% Hard extension :
     539
     540% Incorrect options for the signal :
     541
     542% \begin{description}
     543%  \item[-] Release B and baton pass after Code Section 8 : Passing b without having it
     544%  \item[-] Keep B during Code Section 8 : Can lead to deadlocks since we secretly keep a lock longer than specified by the user
     545%  \item[-] Instead of release B transfer A and B to waiter then try to reacquire A before running Code Section 8 : This allows barging
     546% \end{description}
     547
     548% Since we don't want barging we need to pass A \& B and somehow block and get A back.
     549
     550% \begin{center}
     551% \begin{tabular}{ c @{\hskip 0.65in} c }
     552% \begin{lstlisting}[language=Pseudo]
     553% acquire A
     554%       acquire B
     555%               acquire C
     556%                       wait A & B & C
     557%               1: release C
     558%       2: release B
     559% 3: release A
     560% \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
     561% acquire A
     562%       acquire B
     563%               acquire C
     564%                       signal A & B & C
     565%               4: release C
     566%       5: release B
     567% 6: release A
     568% \end{lstlisting}
     569% \end{tabular}
     570% \end{center}
     571
     572% To prevent barging :
     573
     574% \begin{description}
     575%  \item[-] When the signaller hits 4 : pass A, B, C to waiter
     576%  \item[-] When the waiter hits 2 : pass A, B to signaller
     577%  \item[-] When the signaller hits 5 : pass A to waiter
     578% \end{description}
     579
     580
     581% \begin{center}
     582% \begin{tabular}{ c @{\hskip 0.65in} c }
     583% \begin{lstlisting}[language=Pseudo]
     584% acquire A
     585%       acquire C
     586%               acquire B
     587%                       wait A & B & C
     588%               1: release B
     589%       2: release C
     590% 3: release A
     591% \end{lstlisting}&\begin{lstlisting}[language=Pseudo]
     592% acquire B
     593%       acquire A
     594%               acquire C
     595%                       signal A & B & C
     596%               4: release C
     597%       5: release A
     598% 6: release B
     599% \end{lstlisting}
     600% \end{tabular}
     601% \end{center}
     602
     603% To prevent barging : When the signaller hits 4 : pass A, B, C to waiter. When the waiter hits 1 it must release B,
     604
     605% \begin{description}
     606%  \item[-]
     607%  \item[-] When the waiter hits 1 : pass A, B to signaller
     608%  \item[-] When the signaller hits 5 : pass A, B to waiter
     609%  \item[-] When the waiter hits 2 : pass A to signaller
     610% \end{description}
    504611
    505612% Monitors also need to schedule waiting threads internally as a mean of synchronization. Internal scheduling is one of the simple examples of such a feature. It allows users to declare condition variables and have threads wait and signaled from them. Here is a simple example of such a technique :
     
    9201027% #        #   #     #    ###    #     # #     # #     # #       #     #
    9211028% ####### #     #    #    ###     #####   #####  #     # ####### ######
    922 \newpage
    923 \subsection{External scheduling} \label{extsched}
     1029\section{External scheduling} \label{extsched}
    9241030An alternative to internal scheduling is external scheduling. This method is more constrained and explicit, which may help users tone down the non-deterministic nature of concurrency. Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (ex: \uC) or in terms of data (ex: Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling: loose object definitions and multi-monitor routines. The following example shows a simple use of \code{accept} versus \code{wait}/\code{signal} and its advantages.
    9251031
     
    9591065% ####### ####### #######  #####  #######    ####### ######   #####   #####
    9601066
    961 \subsubsection{Loose object definitions}
     1067\subsection{Loose object definitions}
    9621068In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object-oriented, this becomes both more difficult to implement and less clear for the user :
    9631069
     
    9841090\end{center}
    9851091
    986 For the \pseudo{monitor is free} condition it is easy to implement a check that can evaluate the condition in a few instruction. However, a fast check for \pseudo{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in the following figure :
     1092For the \pscode{monitor is free} condition it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \pscode{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in the following figure :
    9871093
    9881094\begin{center}
     
    10571163% #     #  #####  #######    #    ###    #     # ####### #     #
    10581164
    1059 \subsubsection{Multi-monitor scheduling}
     1165\subsection{Multi-monitor scheduling}
    10601166
    10611167External scheduling, like internal scheduling, becomes orders of magnitude more complex when we start introducing multi-monitor syntax. Even in the simplest possible case some new semantics need to be established :
     
    11161222
    11171223
    1118 \subsubsection{Implementation Details: External scheduling queues}
     1224\subsection{Implementation Details: External scheduling queues}
    11191225Supporting multi-monitor external scheduling means that some kind of entry queue must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues, which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement of a systematic algorithm for disambiguating which queue is relevant regardless of user ordering. The proposed algorithm is to fall back on the monitor lock ordering and to specify that the monitor acquired first is the lock with the relevant entry queue. This assumes that the lock-acquiring order is static for the lifetime of all concerned objects, but that is a reasonable constraint. This algorithm choice has two consequences: the entry queue of the highest priority monitor is no longer a true FIFO queue, and the queue of the lowest priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because instead of simply containing the waiting threads in order of arrival, entries also contain the second mutex. Therefore, another thread with the same highest priority monitor but a different lowest priority monitor may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it may not be known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is probable that half the multi-monitor queues will go unused for the entire duration of the program.
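Assuming every monitor carries a static priority derived from the global lock-acquisition order, the queue-selection rule described above can be sketched as follows; the names used are illustrative only :

\begin{pseudo}
entry for a mutex call on monitors M1 ... Mn :
	sort M1 ... Mn by lock-acquisition order
	Q := entry queue of the highest priority monitor
	append (thread, M1 ... Mn) to Q	// entries remember the full monitor set
	// acceptors match on the full set, so Q is no longer strictly FIFO
\end{pseudo}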
    11201226
    1121 \subsection{Other concurrency tools}
     1227\section{Other concurrency tools}
    11221228TO BE CONTINUED...
    11231229
     
    11311237
    11321238
    1133 \newpage
    11341239% ######     #    ######     #    #       #       ####### #       ###  #####  #     #
    11351240% #     #   # #   #     #   # #   #       #       #       #        #  #     # ##   ##
     
    11391244% #       #     # #    #  #     # #       #       #       #        #  #     # #     #
    11401245% #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
    1141 \section{Parallelism}
     1246\chapter{Parallelism}
    11421247Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being a direct consequence of speed increases, parallelism has become the new source of increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance, more specifically of throughput and hardware utilization. The lowest-level approach to parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \glspl{kthread} are now mostly used as an implementation tool rather than a user-oriented one. There are several alternatives that solve these issues, all with their own strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs around in order to achieve better performance for certain workloads.
    11431248
     1249\section{Paradigm}
    11441250\subsection{User-level threads}
    11451251A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provides but can be used on a much larger scale. This approach is the most powerful solution as it allows all the features of multi-threading, while removing several of the more expensive costs of kernel threads. The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit itself offers very little to reduce complexity.
     
    11471253Examples of languages that support \glspl{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
    11481254
    1149 \subsubsection{Fibers : user-level threads without preemption}
     1255\subsection{Fibers : user-level threads without preemption}
    11501256A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantic differences from \glspl{uthread}. Advocates of \glspl{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers.
    11511257
     
    14941600% #     # #       #
    14951601% #     # ####### #######
    1496 \section{Putting it all together}
    1497 
    1498 
    1499 
    1500 
     1602\chapter{Putting it all together}
     1603
     1604
     1605
     1606
     1607
     1608\chapter{Conclusion}
    15011609
    15021610
     
    15121620% #       #     #    #    #     # #    #  #
    15131621% #        #####     #     #####  #     # ######
    1514 \section{Future work}
     1622\chapter{Future work}
    15151623Concurrency and parallelism are still very active fields that strongly benefit from hardware advances. As such, certain features that are not necessarily mature enough in their current state could become relevant in the lifetime of \CFA.
    15161624\subsection{Transactions}
  • doc/proposals/concurrency/style.tex

    rc850687 r8514fe19  
    11\input{common}                                          % bespoke macros used in the document
     2\input{cfa-format}
    23
    34% \CFADefaultStyle
  • doc/proposals/concurrency/version

    rc850687 r8514fe19  
    1 0.7.141
     10.8.2
Note: See TracChangeset for help on using the changeset viewer.