Changeset 64b272a for doc/proposals


Timestamp:
Nov 2, 2017, 3:47:57 PM (4 years ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
e706bfd
Parents:
bd7f401
Message:

Prereview commit

Location:
doc/proposals/concurrency
Files:
2 added
10 edited

  • doc/proposals/concurrency/.gitignore

    rbd7f401 r64b272a  
    1616build/*.out
    1717build/*.ps
     18build/*.pstex
     19build/*.pstex_t
    1820build/*.tex
    1921build/*.toc
  • doc/proposals/concurrency/Makefile

    rbd7f401 r64b272a  
    1313annex/glossary \
    1414text/intro \
     15text/basics \
    1516text/cforall \
    16 text/basics \
    1717text/concurrency \
    1818text/internals \
    1919text/parallelism \
     20text/results \
    2021text/together \
    2122text/future \
     
    2930}}
    3031
    31 PICTURES = ${addsuffix .pstex, \
    32 }
     32PICTURES = ${addprefix build/, ${addsuffix .pstex, \
     33        system \
     34}}
    3335
    3436PROGRAMS = ${addsuffix .tex, \
     
    6769        build/*.out     \
    6870        build/*.ps      \
     71        build/*.pstex   \
    6972        build/*.pstex_t \
    7073        build/*.tex     \
  • doc/proposals/concurrency/text/cforall.tex

    rbd7f401 r64b272a  
    11% ======================================================================
    22% ======================================================================
    3 \chapter{Cforall crash course}
     3\chapter{Cforall Overview}
    44% ======================================================================
    55% ======================================================================
    66
    7 This thesis presents the design for a set of concurrency features in \CFA. Since it is a new dialect of C, the following is a quick introduction to the language, specifically tailored to the features needed to support concurrency.
     7The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    88
    9 \CFA is a extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object oriented system language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a received (e.g.: this), it does have some notion of objects\footnote{C defines the term objects as : [Where to I get the C11 reference manual?]}, most importantly construction and destruction of objects. Most of the following pieces of code can be found on the \CFA website \cite{www-cfa}
     9\CFA is an extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object-oriented systems language, meaning most of the major abstractions have either no runtime overhead or can be opted out of easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects the memory layouts and calling conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., this), it does have some notion of objects\footnote{C defines the term objects as: [Where do I get the C11 reference manual?]}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website \cite{www-cfa}.
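For example, a minimal sketch of construction and destruction, using the constructor and destructor operator-routines (the \code{counter} type is illustrative only):
\begin{cfacode}
struct counter {
	int value;
};
void ?{}(counter & this) {   //constructor, called on declaration
	this.value = 0;
}
void ^?{}(counter & this) {  //destructor, called at end of scope
	//release resources here
}
int main() {
	counter c;   //implicitly calls ?{}
	return 0;
}   //implicitly calls ^?{} on c
\end{cfacode}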
    1010
    1111\section{References}
  • doc/proposals/concurrency/text/concurrency.tex

    rbd7f401 r64b272a  
    221221% ======================================================================
    222222% ======================================================================
    223 \section{Internal scheduling} \label{insched}
     223\section{Internal scheduling} \label{intsched}
    224224% ======================================================================
    225225% ======================================================================
     
    973973\label{lst:waitfor2}
    974974\end{figure}
     975
     976% ======================================================================
     977% ======================================================================
     978\subsection{Waiting for the destructor}
     979% ======================================================================
     980% ======================================================================
     981An important exception to the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor. However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour. The simplest approach to fix this hole in the semantics would be disallowing \code{waitfor} on destructors. However, a more expressive approach is to flip the ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled.
     982\begin{figure}
     983\begin{cfacode}
     984monitor Executer {};
     985struct  Action;
     986
     987void ^?{}   (Executer & mutex this);
     988void execute(Executer & mutex this, const Action & );
     989void run    (Executer & mutex this) {
     990        while(true) {
     991                   waitfor(execute, this);
     992                or waitfor(^?{}   , this) {
     993                        break;
     994                }
     995        }
     996}
     997\end{cfacode}
     998\caption{Example of an executor which executes actions in series until the destructor is called.}
     999\label{lst:dtor-order}
     1000\end{figure}
     1001Listing \ref{lst:dtor-order} shows an executor with an infinite loop, which waits for the destructor to break out of the loop.
  • doc/proposals/concurrency/text/future.tex

    rbd7f401 r64b272a  
    55% ======================================================================
    66
    7 Concurrency and parallelism is still a very active field that strongly benefits from hardware advances. As such certain features that aren't necessarily mature enough in their current state could become relevant in the lifetime of \CFA.
    8 \section{Non-Blocking IO}
     7\section{Flexible Scheduling} \label{futur:sched}
    98
    109
    11 \section{Other concurrency tools}
     10\section{Non-Blocking IO} \label{futur:nbio}
     11While most of the parallelism tools are aimed at compute-bound workloads,
     12many modern workloads are bound not on computation but on IO operations, a common case being webservers and XaaS (anything as a service). These types of workloads often require significant engineering around amortising the cost of blocking IO operations. While improving the throughput of these operations is outside of what \CFA can do as a language, it can help users make better use of the CPU time otherwise spent waiting on IO operations. The current trend is asynchronous programming using tools like callbacks and/or futures and promises\cit. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear.
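As a rough illustration of the linearity argument, compare a callback-based read with its blocking equivalent; the IO routines here are hypothetical placeholders, not an existing \CFA or C API:
\begin{cfacode}
#include <stddef.h>

//hypothetical IO API, for illustration only
void process(char* buf, size_t len);
void async_read(const char* path, void (*cb)(char*, size_t));
void event_loop();
size_t read_file(const char* path, char* buf, size_t max);

//Callback style: the continuation lives in a separate routine,
//so "read then process" is split across two functions
void on_read(char* buf, size_t len) {
	process(buf, len);
}
void callback_version() {
	async_read("data.txt", on_read);
	event_loop();
}

//Blocking style: trivially linear, reads top to bottom
void blocking_version() {
	char buf[4096];
	size_t len = read_file("data.txt", buf, 4096);
	process(buf, len);
}
\end{cfacode}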
    1213
    1314
    14 \section{Implicit threading}
    15 % Finally, simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the system level.
    16 %
    17 % \begin{center}
    18 % \begin{tabular}[t]{|c|c|c|}
    19 % Sequential & System Parallel & Language Parallel \\
    20 % \begin{lstlisting}
    21 % void big_sum(int* a, int* b,
    22 %                int* out,
    23 %                size_t length)
    24 % {
    25 %       for(int i = 0; i < length; ++i ) {
    26 %               out[i] = a[i] + b[i];
    27 %       }
    28 % }
    29 %
    30 %
    31 %
    32 %
    33 %
    34 % int* a[10000];
    35 % int* b[10000];
    36 % int* c[10000];
    37 % //... fill in a and b ...
    38 % big_sum(a, b, c, 10000);
    39 % \end{lstlisting} &\begin{lstlisting}
    40 % void big_sum(int* a, int* b,
    41 %                int* out,
    42 %                size_t length)
    43 % {
    44 %       range ar(a, a + length);
    45 %       range br(b, b + length);
    46 %       range or(out, out + length);
    47 %       parfor( ai, bi, oi,
    48 %       [](int* ai, int* bi, int* oi) {
    49 %               oi = ai + bi;
    50 %       });
    51 % }
    52 %
    53 % int* a[10000];
    54 % int* b[10000];
    55 % int* c[10000];
    56 % //... fill in a and b ...
    57 % big_sum(a, b, c, 10000);
    58 % \end{lstlisting}&\begin{lstlisting}
    59 % void big_sum(int* a, int* b,
    60 %                int* out,
    61 %                size_t length)
    62 % {
    63 %       for (ai, bi, oi) in (a, b, out) {
    64 %               oi = ai + bi;
    65 %       }
    66 % }
    67 %
    68 %
    69 %
    70 %
    71 %
    72 % int* a[10000];
    73 % int* b[10000];
    74 % int* c[10000];
    75 % //... fill in a and b ...
    76 % big_sum(a, b, c, 10000);
    77 % \end{lstlisting}
    78 % \end{tabular}
    79 % \end{center}
    80 %
     15
     16\section{Other concurrency tools} \label{futur:tools}
    8117
    8218
    83 \section{Multiple Paradigms}
     19\section{Implicit threading} \label{futur:implcit}
     20Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The canonical example of implicit parallelism is the parallel for loop, which is the simplest example of a divide-and-conquer algorithm\cit. Listing \ref{lst:parfor} shows three different code examples that accomplish pointwise sums of large arrays. Note that none of these examples explicitly declares any concurrency or parallelism objects.
     21
     22\begin{figure}
     23\begin{center}
     24\begin{tabular}[t]{|c|c|c|}
     25Sequential & Library Parallel & Language Parallel \\
     26\begin{cfacode}[tabsize=3]
     27void big_sum(
     28        int* a, int* b,
     29        int* o,
     30        size_t len)
     31{
     32        for(
     33                int i = 0;
     34                i < len;
     35                ++i )
     36        {
     37                o[i]=a[i]+b[i];
     38        }
     39}
    8440
    8541
    86 \section{Transactions}
     42
     43
     44
     45int* a[10000];
     46int* b[10000];
     47int* c[10000];
     48//... fill in a & b
     49big_sum(a,b,c,10000);
     50\end{cfacode} &\begin{cfacode}[tabsize=3]
     51void big_sum(
     52        int* a, int* b,
     53        int* o,
     54        size_t len)
     55{
     56        range ar(a, a+len);
     57        range br(b, b+len);
     58        range or(o, o+len);
     59        parfor( ai, bi, oi,
     60        [](     int* ai,
     61                int* bi,
     62                int* oi)
     63        {
     64                oi=ai+bi;
     65        });
     66}
     67
     68
     69int* a[10000];
     70int* b[10000];
     71int* c[10000];
     72//... fill in a & b
     73big_sum(a,b,c,10000);
     74\end{cfacode}&\begin{cfacode}[tabsize=3]
     75void big_sum(
     76        int* a, int* b,
     77        int* o,
     78        size_t len)
     79{
     80        parfor (ai,bi,oi)
     81            in (a, b, o )
     82        {
     83                oi = ai + bi;
     84        }
     85}
     86
     87
     88
     89
     90
     91
     92
     93int* a[10000];
     94int* b[10000];
     95int* c[10000];
     96//... fill in a & b
     97big_sum(a,b,c,10000);
     98\end{cfacode}
     99\end{tabular}
     100\end{center}
     101\caption{For loop to sum numbers: Sequential, using library parallelism and language parallelism.}
     102\label{lst:parfor}
     103\end{figure}
     104
     105Implicit parallelism is a general solution and therefore a prime candidate for future work in \CFA.
     106
     107\section{Multiple Paradigms} \label{futur:paradigms}
     108
     109
     110\section{Transactions} \label{futur:transaction}
     111Concurrency and parallelism are still very active fields that strongly benefit from hardware advances. As such, certain features that are not necessarily mature enough in their current state could become relevant over the lifetime of \CFA.
  • doc/proposals/concurrency/text/internals.tex

    rbd7f401 r64b272a  
    11
    22\chapter{Behind the scene}
    3 
    4 
    5 % ======================================================================
    6 % ======================================================================
    7 \section{Implementation Details: Interaction with polymorphism}
    8 % ======================================================================
    9 % ======================================================================
    10 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be complex to support. However, it is shown that entry-point locking solves most of the issues.
    11 
    12 First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines.
    13 
    14 Before looking into complex control-flow, it is important to present the difference between the two acquiring options : callsite and entry-point locking, i.e. acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine-call. For example:
     3There are several challenges specific to \CFA when implementing concurrency. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid the headaches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs and other dynamic memory allocation schemes. This is to avoid the chicken-and-egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the memory allocator. This extra goal means that memory management is a constant concern in the design of the system.
     4
     5The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrusive\cit to avoid the need for memory allocation, which entails that the queue nodes must carry all the fields needed to keep track of the waiting threads. Since many concurrency operations can use an unbounded amount of memory (depending on \gls{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient. The only variable-sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling; in particular, the GCC extension for variable-length arrays is used extensively.
     6
     7Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. The threads and the condition both allow a fixed amount of memory to be stored, while mutex routines and the actual blocking call allow for an unbounded amount (though the latter is preferable in terms of performance).
     8
     9Note that since the major contributions of this thesis are extending monitor semantics to \gls{bulk-acq} and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered problems that have already been solved and therefore are not discussed further.
     10
     11% ======================================================================
     12% ======================================================================
     13\section{Mutex routines}
     14% ======================================================================
     15% ======================================================================
     16
     17The first step towards the monitor implementation is simple mutex routines using monitors. In the single-monitor case, this is done using the entry/exit procedure highlighted in listing \ref{lst:entry1}. This entry/exit procedure does not actually have to be extended to support multiple monitors; indeed, it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlocks\cit. In \CFA, the ordering of monitors relies on memory ordering. This is sufficient because all objects are guaranteed to have distinct, non-overlapping memory layouts, and mutual exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour. When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values. This array is conserved during the entire duration of the mutual exclusion and its ordering is reused extensively.
    1518\begin{figure}
    16 \label{fig:locking-site}
     19\begin{multicols}{2}
     20Entry
     21\begin{pseudo}
     22if monitor is free
     23        enter
     24elif already own the monitor
     25        continue
     26else
     27        block
     28increment recursions
     29\end{pseudo}
     30\columnbreak
     31Exit
     32\begin{pseudo}
     33decrement recursion
     34if recursion == 0
     35        if entry queue not empty
     36                wake-up thread
     37\end{pseudo}
     38\end{multicols}
     39\caption{Initial entry and exit routine for monitors}
     40\label{lst:entry1}
     41\end{figure}
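To make the aggregation step concrete, the following sketch shows one possible shape for it; the names \code{monitor_desc}, \code{enter_monitor} and \code{lock_all} are illustrative, not the runtime's actual identifiers:
\begin{cfacode}
struct monitor_desc;                       //placeholder for the real monitor type
void enter_monitor(struct monitor_desc*);  //hypothetical single-monitor acquire

void lock_all(struct monitor_desc* mons[], int count) {
	struct monitor_desc* sorted[count];    //GCC variable-length array on the callstack
	for(int i = 0; i < count; i++) sorted[i] = mons[i];
	//insertion sort on pointer values gives a canonical acquiring order
	for(int i = 1; i < count; i++) {
		struct monitor_desc* tmp = sorted[i];
		int j = i;
		for(; j > 0 && sorted[j-1] > tmp; j--) sorted[j] = sorted[j-1];
		sorted[j] = tmp;
	}
	//acquire one-by-one in sorted order to prevent deadlock
	for(int i = 0; i < count; i++) enter_monitor(sorted[i]);
}
\end{cfacode}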
     42
     43\subsection{Details: Interaction with polymorphism}
     44Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, it is shown that entry-point locking solves most of the issues.
     45
     46First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options: callsite and entry-point locking, i.e., acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine call. For example:
     47\begin{figure}[H]
    1748\begin{center}
    18 \setlength\tabcolsep{1.5pt}
    1949\begin{tabular}{|c|c|c|}
    2050Mutex & \gls{callsite-locking} & \gls{entry-point-locking} \\
     
    6797\end{center}
    6898\caption{Callsite vs entry-point locking for mutex calls}
    69 \end{figure}
    70 
    71 
    72 Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait, which is possible because monitors are designed in terms a trait. For example:
     99\label{fig:locking-site}
     100\end{figure}
     101
     102Note that the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait, for example:
    73103\begin{cfacode}
    74104//Incorrect: T is not a monitor
     
    81111\end{cfacode}
    82112
    83 
    84 % ======================================================================
    85 % ======================================================================
    86 \section{Internal scheduling: Implementation} \label{inschedimpl}
    87 % ======================================================================
    88 % ======================================================================
    89 There are several challenges specific to \CFA when implementing internal scheduling. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are to root cause of most design decisions in the implementation of internal scheduling. Furthermore, to avoid the head-aches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is entirely free of mallocs and other dynamic memory allocation scheme. This is to avoid the chicken and egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal, means that memory management is a constant concern in the design of the system.
    90 
    91 The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrinsic\cit to avoid the need memory allocation. This entails that all the fields needed to keep track of all needed information. Since internal scheduling can use an unbound amount of memory (depending on \gls{bulk-acq}) statically defining information information in the intrusive fields of threads is insufficient. The only variable sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling. Particularly the GCC extension variable length arrays which is used extensively.
    92 
    93 Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. In the case of external scheduling, the threads and the condition both allow a fixed amount of memory to be stored, while mutex-routines and the actual blocking call allow for an unbound amount (though adding too much to the mutex routine stack size can become expansive faster).
    94 
    95 The following figure is the traditionnal illustration of a monitor :
     113Both entry-point and callsite locking are valid implementations. The current \CFA implementation uses entry-point locking because it seems to require less work if done using \gls{raii}, effectively transferring the burden of implementation to object construction/destruction. The same could be said of callsite locking, the difference being that the latter does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e., the function body.
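As an illustration of the \gls{raii} approach, the following sketch (with assumed names, not the actual generated code) wraps the function body in a guard whose constructor acquires the monitors and whose destructor releases them:
\begin{cfacode}
struct monitor_desc;
void lock_all  (struct monitor_desc* mons[], int count);  //hypothetical
void unlock_all(struct monitor_desc* mons[], int count);  //hypothetical

struct monitor_guard {
	struct monitor_desc** mons;
	int count;
};
//constructor acquires the monitors, destructor releases them, so the
//mutual-exclusion scope is exactly the guard's scope: the function body
void ?{}(monitor_guard & this, struct monitor_desc* mons[], int count) {
	this.mons  = mons;
	this.count = count;
	lock_all(mons, count);
}
void ^?{}(monitor_guard & this) {
	unlock_all(this.mons, this.count);   //runs on all exit paths
}
\end{cfacode}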
     114
     115% ======================================================================
     116% ======================================================================
     117\section{Threading} \label{impl:thread}
     118% ======================================================================
     119% ======================================================================
     120
     121Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system in regards to concurrency.
     122
     123\begin{figure}
     124\begin{center}
     125{\resizebox{\textwidth}{!}{\input{system.pstex_t}}}
     126\end{center}
     127\caption{Overview of the entire system}
     128\label{fig:system1}
     129\end{figure}
     130
     131\subsection{Context Switching}
     132As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the basic recipe for a context-switch is only to copy all callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch always happens inside the same function. In the case of coroutines, that is the entire story. Threads, however, do not simply context-switch between each other directly; they context-switch to processors, which is where the scheduling happens. This method is called a 2-step context-switch and has the advantage of a clear distinction between user code and the ``kernel'', where scheduling and other system operations happen. Obviously, this has the cost of doubling the context-switch cost because threads must context-switch to an intermediate stack. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield} (see section \ref{results}). Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch to use manually (or as part of monitors). This option is not currently present in \CFA, but the changes required to add it are strictly additive.
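The following sketch illustrates this recipe conceptually; it elides the assembly that actually saves the callee-saved registers and uses illustrative names:
\begin{cfacode}
struct context { void* SP; void* FP; };  //callee-saved registers live on the stack
struct processor   { struct context ctx; };
struct thread_desc { struct context ctx; struct processor* proc; };

//Assumed primitive (implemented in assembly): push callee-saved registers
//onto the current stack, swap stack pointers, pop on the other side.
//No instruction pointer is saved: both sides resume inside CtxSwitch.
extern void CtxSwitch(struct context* from, struct context* to);

void thread_yield(struct thread_desc* this) {
	//2-step switch: first to the processor stack, where scheduling
	//happens; the processor then switches to the next ready thread
	CtxSwitch(&this->ctx, &this->proc->ctx);
}
\end{cfacode}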
     133
     134\subsection{Processors}
     135Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically pthreads in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. However, \glspl{cfathread} are still the main source of concurrency; processors are simply the underlying source of parallelism. Indeed, processor kernel threads simply fetch a user-level thread from the scheduler and run it; they are effectively executors for user threads. The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example kernel-thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics.
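Conceptually, each processor runs a loop along the following lines; the helper routines are assumptions for illustration, not the runtime's actual interface:
\begin{cfacode}
struct thread_desc;
struct processor {
	volatile int terminated;   //set when the processor is destroyed
};
struct thread_desc* next_ready_thread(struct processor*); //hypothetical scheduler call
void run_thread(struct processor*, struct thread_desc*);  //context-switch to the thread
void halt(struct processor*);                             //quiesce until work arrives

void processor_main(struct processor* this) {
	while(!this->terminated) {
		struct thread_desc* t = next_ready_thread(this);
		if(t) run_thread(this, t);  //returns when the user thread blocks or yields
		else  halt(this);
	}
}
\end{cfacode}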
     136
     137\subsection{Stack management}
     138One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all pthreads created also have a stack created with them, which should be used as much as possible. Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the kernel-thread stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e., the initial kernel thread that is given to any program. In order to respect user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor.
     139
     140\subsection{Preemption}
     141Finally, an important aspect for any complete threading system is preemption. As mentioned in chapter \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, preemption is desirable because it adds a degree of isolation between tasks. In a fully cooperative system, any thread that runs in a long loop can starve other threads, while in a preemptive system starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces programmer burden. Obviously, preemption is not optimal for every workload; however, any preemptive system can become a cooperative system by making the time-slices extremely large, which is why \CFA uses a preemptive threading system.
     142
     143Preemption in \CFA is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it sorts these expiration times in a list and sets a kernel timer for the closest one, effectively stepping between preemption events on each signal sent by the timer. These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process. This is important because when delivering signals to a process, the kernel documentation states that the signal can be delivered to any kernel thread for which the signal is not blocked, i.e.:
     144\begin{quote}
     145A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
     146SIGNAL(7) - Linux Programmer's Manual
     147\end{quote}
     148For the sake of simplicity, and in order to prevent the case of two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one. Because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread.
     149
     150Involuntary context-switching is done by sending {\tt SIGUSR1} to the corresponding processor and having the thread yield from inside the signal handler. This effectively context-switches away from the signal handler back to the kernel, and the signal-handler frame is unwound when the thread is scheduled again. This means that a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. This is only a problem if the kernel threads among which a user thread can migrate differ in terms of signal masks. However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel alarm thread can cause deadlocks. For this reason, the alarm thread is on a tight loop around a system call to \code{sigwait}, or more specifically \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g., on thread termination). This is also done using {\tt SIGALRM}, but sent through \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel.
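Putting these pieces together, the alarm thread reduces to a loop along these lines, using standard POSIX calls; \code{handle_alarm} is a placeholder for the timer bookkeeping, not an actual runtime routine:
\begin{cfacode}
#include <pthread.h>
#include <signal.h>

void handle_alarm(siginfo_t* info);  //hypothetical: expire timers, re-arm kernel timer

void* alarm_thread(void* unused) {
	sigset_t mask;
	sigemptyset(&mask);
	sigaddset(&mask, SIGALRM);
	pthread_sigmask(SIG_BLOCK, &mask, NULL);  //delivered only through sigwaitinfo

	siginfo_t info;
	while(sigwaitinfo(&mask, &info) >= 0) {
		//si_code distinguishes kernel timer expirations from signals
		//queued with pthread_sigqueue (e.g., thread-termination wake-ups)
		handle_alarm(&info);
	}
	return NULL;
}
\end{cfacode}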
     151
     152\subsection{Scheduler}
     153Finally, an aspect that has not been mentioned yet is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors. While this is not the highest-performance algorithm, it has the significant advantage of being robust to heterogeneous workloads; being general purpose means robustness is valued over raw speed. This is a very simple scheduling approach, but it is sufficient for the context of this thesis. As mentioned in section \ref{futur:sched}, the scheduler needs to be revisited when cluster support matures: among the most pressing updates is moving from the single ready queue to multiple queues with work stealing, since work sharing can lead to a higher standard deviation in performance.
     169
     170
     171% ======================================================================
     172% ======================================================================
     173\section{Internal scheduling} \label{impl:intsched}
     174% ======================================================================
     175% ======================================================================
     176To ease understanding, monitors, like many other concepts, are generally represented graphically. While non-scheduled monitors are simple enough for a graphical representation to be useful, internal scheduling is complex enough to justify a visual representation. The following figure is the traditional illustration of a monitor:
    96177
    97178\begin{center}
     
    99180\end{center}
    100181
    101 For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it doesn't make sense to tie the condition to a single monitor since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the object waiting on the conditions and AS-stack cannot simply contain the waiting thread since a single thread can potentially wait on multiple monitors. As mentionned in section \ref{inschedimpl}, the handling in multiple monitors is done by partially passing, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields :
     182This picture has several components, the two most important being the entry queue and the AS-stack. The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the AS-stack is a FILO list used for threads that have been signalled or otherwise marked as running next. For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq}, two changes to this picture are required. First, it does not make sense to tie the condition to a single monitor, since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the object waiting on the conditions and AS-stack cannot simply contain the waiting thread, since a single thread can potentially wait on multiple monitors. As mentioned in section \ref{intsched}, the handling of multiple monitors is done by partially passing ownership, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields:
    102183
    103184\begin{center}
     
    105186\end{center}
    106187
    107 \newpage
    108 
    109 This picture and the proper entry and leave algorithms is the fundamental implementation of internal scheduling.
    110 
     188This picture and the proper entry and leave algorithms are the fundamental implementation of internal scheduling (see listing \ref{lst:entry2}).
     189
     190\begin{figure}[b]
    111191\begin{multicols}{2}
    112192Entry
    113 \begin{pseudo}[numbers=left]
     193\begin{pseudo}
    114194if monitor is free
    115195        enter
    116 elif I already own the monitor
     196elif already own the monitor
    117197        continue
    118198else
     
    123203\columnbreak
    124204Exit
    125 \begin{pseudo}[numbers=left, firstnumber=8]
     205\begin{pseudo}
    126206decrement recursion
    127207if recursion == 0
     
    135215\end{pseudo}
    136216\end{multicols}
    137 
    138 Some important things to notice about the exit routine. The solution discussed in \ref{inschedimpl} can be seen on line 11 of the previous pseudo code. Basically, the solution boils down to having a seperate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has trasnferred ownership. This solution is safe as well as preventing any potential barging.
    139 
    140 % ======================================================================
    141 % ======================================================================
    142 \section{Implementation Details: External scheduling queues}
    143 % ======================================================================
    144 % ======================================================================
    145 To support multi-monitor external scheduling means that some kind of entry-queues must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement a systematic algorithm of disambiguating which queue is relavant regardless of user ordering. The proposed algorithm is to fall back on monitors lock ordering and specify that the monitor that is acquired first is the lock with the relevant entry queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. This algorithm choice has two consequences, the entry queue of the highest priority monitor is no longer a true FIFO queue and the queue of the lowest priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because instead of simply containing the waiting threads in order arrival, they also contain the second mutex. Therefore, another thread with the same highest priority monitor but a different lowest priority monitor may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it may not be known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is probable that half the multi-monitor queues will go unused for the entire duration of the program.
    146 
    147 
    148 \section{Internals}
    149 The complete mask can be pushed to any one, we are in a context where we already have full ownership of (at least) every concerned monitor and therefore monitors will refuse all calls no matter what.
     217\caption{Entry and exit routine for monitors with internal scheduling}
     218\label{lst:entry2}
     219\end{figure}
     220
     221There are some important things to notice about the exit routine. The solution discussed in section \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock safe as well as preventing any potential barging.
     222
     223The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the callstack of the \code{wait} and \code{signal_block} routines.
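The resulting layout can be sketched as follows; the type and field names are illustrative rather than the runtime's actual identifiers:
\begin{cfacode}
struct monitor_desc;
struct thread_desc;

struct condition_criterion {
	int ready;                            //has ownership of this monitor been passed?
	struct monitor_desc* target;          //the monitor this criterion concerns
	struct condition_node* owner;         //the node this criterion belongs to
};
struct condition_node {
	struct thread_desc* waiting_thread;   //the single thread waiting on all the monitors
	int count;                            //number of monitors involved in the wait
	struct condition_criterion* criteria; //one per monitor, allocated on the
	                                      //waiter's callstack as a variable-length
	                                      //array inside wait, e.g.:
	                                      //  struct condition_criterion crit[count];
};
\end{cfacode}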
     224
     225% ======================================================================
     226% ======================================================================
     227\section{External scheduling}
     228% ======================================================================
     229% ======================================================================
     230Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that entry queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. This means that some kind of entry queues must be used that are aware of both monitors and which hold threads that are currently waiting to enter the critical section. This challenge is solved for internal scheduling by having the entry queues in conditions no longer be tied to a monitor, effectively allowing conditions to be moved outside of monitors. However, in the case of external scheduling, acceptable routines must be aware of the entry queues, which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement for a systematic algorithm to disambiguate which monitor holds the relevant queue, regardless of user ordering. The proposed algorithm is to fall back on monitor lock ordering and specify that the monitor that is acquired first is the one with the relevant entry queue. This assumes that the lock-acquiring order is static for the lifetime of all concerned objects, but that is a reasonable constraint.
     231
     232This algorithm choice has two consequences: the entry queue of the highest-priority monitor is no longer a true FIFO queue, and the queue of the lowest-priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because, instead of simply containing the waiting threads in order of arrival, entries also contain a set of monitors. Therefore, another thread whose set contains the same highest-priority monitor but different lower-priority monitors may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it is not known at compile time which monitor will be the lowest-priority monitor, every monitor needs to have the correct queues, even though it is probable that some queues will go unused for the entire duration of the program, for example if a monitor is only used in a pair.
     233
     234Therefore, the following modifications need to be made to support external scheduling:
     235\begin{itemize}
     236        \item The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
     237        \item The monitors need to keep a mask of acceptable routines. This mask contains, for each acceptable routine, a routine pointer and an array of monitors to go with it (see the sketch following this list). It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. Note that the complete mask can be pushed to any owned monitor, regardless of \code{when} statements: the \code{waitfor} statement is used in a context where the thread already has full ownership of (at least) every concerned monitor, and therefore the monitors refuse all calls no matter what.
     238        \item The entry/exit routines need to be updated as shown in listing \ref{lst:entry3}.
     239\end{itemize}
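A sketch of that mask, with illustrative names (not the runtime's actual identifiers):
\begin{cfacode}
struct monitor_desc;

struct acceptable {
	void (*func)(void);              //an accepted mutex routine
	struct monitor_desc** monitors;  //the set of monitors that goes with it
	int count;
};
struct waitfor_mask {
	struct acceptable* clauses;      //lives on the stack of the waiting thread
	int size;                        //number of acceptable clauses
	short* accepted;                 //index of the accepted clause, also on the
	                                 //waiter's stack; monitors keep only pointers
};
\end{cfacode}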
     240
     241Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call-order inversion. Indeed, when waiting for the destructors, storage is needed for the waiting context, and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the callstack of the routine itself matches this requirement, but it is no longer the case when waiting for the destructor, since the request is pushed onto the AS-stack for later. The waitfor semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor}.
     242
     243\begin{figure}
     244\begin{multicols}{2}
     245Entry
     246\begin{pseudo}
     247if monitor is free
     248        enter
     249elif already own the monitor
     250        continue
     251elif matches waitfor mask
     252        push waiter to AS-stack
     253        continue
     254else
     255        block
     256increment recursion
     257\end{pseudo}
     258\columnbreak
     259Exit
     260\begin{pseudo}
     261decrement recursion
     262if recursion == 0
     263        if signal_stack not empty
     264                set_owner to thread
     265                if all monitors ready
     266                        wake-up thread
     267
     268        if entry queue not empty
     269                wake-up thread
     270\end{pseudo}
     271\end{multicols}
     272\caption{Entry and exit routine for monitors with internal scheduling and external scheduling}
     273\label{lst:entry3}
     274\end{figure}
     275
     276\begin{figure}
     277\begin{multicols}{2}
     278Destructor Entry
     279\begin{pseudo}
     280if monitor is free
     281        enter
     282elif already own the monitor
     283        increment recursion
     284        return
     285create wait context
     286if matches waitfor mask
     287        reset mask
     288        push self to AS-stack
     289        baton pass
     290else
     291        wait
     292increment recursion
     293\end{pseudo}
     294\columnbreak
     295Waitfor
     296\begin{pseudo}
     297lock all monitors
     298if matching thread is already there
     299        if found destructor
     300                push destructor to AS-stack
     301                unlock all monitors
     302        else
     303                push self to AS-stack
     304                baton pass
     305        return
     306
     307if non-blocking
     308        Unlock all monitors
     309        Return
     310
     311push self to AS-stack
     312set waitfor mask
     313block
     314return
     315\end{pseudo}
     316\end{multicols}
     317\caption{Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors}
     318\label{lst:entry-dtor}
     319\end{figure}
  • doc/proposals/concurrency/text/parallelism.tex

    rbd7f401 r64b272a  
    2828While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees that the \gls{pool}-based system has the best performance, thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortised by the actual work done.
    2929
    30 \TODO
    31 
    3230\section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}
    33 
    3431
    3532\subsection{Future Work: Machine setup}\label{machine}
  • doc/proposals/concurrency/text/together.tex

    rbd7f401 r64b272a  
    3636}
    3737\end{cfacode}
    38 One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on for ever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner :
     38One of the obvious complaints about the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner:
    3939\begin{cfacode}
    4040// Visualization declaration
  • doc/proposals/concurrency/thesis.tex

    rbd7f401 r64b272a  
    3535\usepackage[pagewise]{lineno}
    3636\usepackage{fancyhdr}
     37\usepackage{float}
    3738\renewcommand{\linenumberfont}{\scriptsize\sffamily}
     39\usepackage{siunitx}
     40\sisetup{ binary-units=true }
    3841\input{style}                                                   % bespoke macros used in the document
    3942\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref}
     
    107110\input{together}
    108111
     112\input{results}
     113
    109114\input{future}
    110115
  • doc/proposals/concurrency/version

    rbd7f401 r64b272a  
    1 0.10.212
     10.10.340