Changeset 9f66811


Timestamp:
Jun 28, 2018, 11:53:25 PM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, no_list, persistent-indexer, pthread-emulation, qualifiedEnum
Children:
713926ca, c5283ba
Parents:
944ce47
Message:

more updates

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    r944ce47 r9f66811  
    3232\renewcommand{\linenumberfont}{\scriptsize\sffamily}
    3333
     34\renewcommand{\topfraction}{0.8}                        % float must be greater than X of the page before it is forced onto its own page
     35\renewcommand{\bottomfraction}{0.8}                     % float must be greater than X of the page before it is forced onto its own page
     36\renewcommand{\floatpagefraction}{0.8}          % float must be greater than X of the page before it is forced onto its own page
    3437\renewcommand{\textfraction}{0.0}                       % the entire page may be devoted to floats with no text on the page at all
    3538
     
    132135\makeatother
    133136
    134 \newenvironment{cquote}{%
    135         \list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
    136         \item\relax
    137 }{%
    138         \endlist
    139 }% cquote
     137\newenvironment{cquote}
     138               {\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
     139                \item\relax}
     140               {\endlist}
     141
     142%\newenvironment{cquote}{%
     143%\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
     144%\item\relax%
     145%}{%
     146%\endlist%
     147%}% cquote
    140148
    141149% CFA programming language, based on ANSI C (with some gcc additions)
     
    271279Concurrency tools handle mutual exclusion and synchronization, while parallelism tools handle performance, cost, and resource utilization.
    272280
    273 The proposed concurrency API is implemented in a dialect of C, called \CFA.
     281The proposed concurrency API is implemented in a dialect of C, called \CFA (pronounced C-for-all).
    274282The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantics, and type checking, and the corresponding high-performance runtime-library to implement the concurrent features.
    275283
     
    281289
    282290\CFA is a non-object-oriented extension of ISO-C, and hence, supports all C paradigms.
    283 %It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily.
    284291Like C, the building blocks of \CFA are structures and routines.
    285292Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
    286 While \CFA is not object oriented, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
    287 While some \CFA features are common in object-oriented programming, they are independent capabilities allowing \CFA to adopt them while maintaining a procedural paradigm.
     293While \CFA is not object oriented, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C has a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
     294While some object-oriented features appear in \CFA, they are independent capabilities, allowing \CFA to adopt them while maintaining a procedural paradigm.
    288295
    289296
     
    338345Object-oriented programming languages only provide implicit qualification for the receiver.
    339346
    340 In detail, the @with@ statement has the form:
     347In detail, the @with@-statement syntax is:
    341348\begin{cfa}
    342349$\emph{with-statement}$:
     
    348355(Enumerations are already opened.)
    349356The object is the implicit qualifier for the open structure-fields.
    350 All expressions in the expression list are open in parallel within the compound statement, which is different from Pascal, which nests the openings from left to right.
     357All expressions in the expression list are opened in parallel within the compound statement, which is different from Pascal, which nests the openings from left to right.
    351358
    352359
     
    354361
    355362\CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem.
    356 Both variables and routines may be overloaded, where selection is based on types, and number of returns (as in Ada~\cite{Ada}) and arguments.
     363Both variables and routines may be overloaded, where selection is based on number and types of returns and arguments (as in Ada~\cite{Ada}).
     364\newpage
     365\vspace*{-2\baselineskip}%???
    357366\begin{cquote}
    358 \vspace*{-\baselineskip}%???
     367\begin{cfa}
     368// selection based on type
     369\end{cfa}
    359370\lstDeleteShortInline@%
    360 \begin{cfa}
    361 // selection based on type
    362 \end{cfa}
    363371\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    364372\begin{cfa}
     
    411419Therefore, overloading eliminates long prefixes and other naming conventions to prevent name clashes.
    412420As seen in Section~\ref{s:Concurrency}, routine @main@ is heavily overloaded.
    413 For example, variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
     421As another example, variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
    414422\begin{cfa}
    415423struct S { int `i`; int j; double m; } s;
     
    464472\lstMakeShortInline@%
    465473\end{cquote}
    466 While concurrency does not use operator overloading directly, it provides an introduction for the syntax of constructors.
    467474
    468475
     
    470477
    471478Object lifetime is a challenge in non-managed programming languages.
    472 \CFA responds with \CC-like constructors and destructors:
     479\CFA responds with \CC-like constructors and destructors, using a different operator-overloading syntax.
    473480\begin{cfa}
    474481struct VLA { int len, * data; };                        $\C{// variable length array of integers}$
     
    490497Like \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation.
    491498The object and all its fields are constructed/destructed.
    492 \CFA also provides @new@ and @delete@, which behave like @malloc@ and @free@, in addition to constructing and destructing objects:
     499\CFA also provides @new@ and @delete@ as library routines, which behave like @malloc@ and @free@, in addition to constructing and destructing objects:
    493500\begin{cfa}
    494501{
     
    518525int i = sum( sa, 5 );                                           $\C{// use S's 0 construction and +}$
    519526\end{cfa}
    520 The builtin type @zero_t@ (and @one_t@) overload constant 0 (and 1) for a new types, where both 0 and 1 have special meaning in C.
     527Type variables can be @otype@ or @dtype@.
     528@otype@ refers to a \emph{complete type}, \ie, a type with size, alignment, default constructor, copy constructor, destructor, and assignment operator.
     529@dtype@ refers to an \emph{incomplete type}, \eg, void or a forward-declared type.
     530The builtin types @zero_t@ and @one_t@ overload the constants 0 and 1 for new types, where both 0 and 1 have special meaning in C.
    521531
    522532\CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each routine declaration:
     
    533543\end{cfa}
    534544
    535 Assertions can be @otype@ or @dtype@.
    536 @otype@ refers to a \emph{complete} object, \ie an object has a size, alignment, default constructor, copy constructor, destructor and an assignment operator.
    537 @dtype@ refers to an \emph{incomplete} object, \ie, an object only has a size and alignment.
    538 
    539545Using the return type for overload discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@:
    540546\begin{cfa}
     
    554560In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie the execution path from input to output is fixed and predictable.
    555561A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python} but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
    556 a \newterm{stackfull} coroutine executes on its own stack, allowing full generality.
    557 Only stackfull coroutines are a stepping stone to concurrency.
     562a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
     563Only stackful coroutines are a stepping stone to concurrency.
    558564
    559565The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a \newterm{scheduling oracle}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
     
    561567The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.
    562568
    563 Because the scheduler is special, it can either be a stackless or stackfull coroutine.
     569Because the scheduler is special, it can either be a stackless or stackful coroutine.
    564570For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
    565 For stackfull, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
    566 A stackfull scheduler is often used for simplicity and security, even through there is a slightly higher runtime-cost.
     571For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
     572A stackful scheduler is often used for simplicity and security.
    567573
    568574Regardless of the approach used, a subset of concurrency-related challenges starts to appear.
    569575For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
    570 While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty where context switches occur.
     576While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty about where context switches occur.
    571577Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors.
    572578The reason is that only the runtime has complete knowledge about resources and how to best utilize them.
     
    574580otherwise, it is impossible to write meaningful programs.
    575581Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
    576 
    577 
    578 \subsection{\protect\CFA's Thread Building Blocks}
    579582
    580583An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
     
    594597This capability is accomplished via the coroutine's stack, where suspend/resume context switch among stacks.
    595598Because threading design-challenges are present in coroutines, their design effort is relevant, and this effort can be easily exposed to programmers as a useful new programming paradigm, because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
    596 Therefore, the core \CFA coroutine-API for has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.
     599Therefore, the two fundamental features of the core \CFA coroutine-API are independent call-stacks and @suspend@/@resume@ operations.
    597600
    598601For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers
     
    722725Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, \eg @next@.
    723726Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
    724 The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has the three suspend points, representing the three states in the Fibonacci formula, to context switch back to the caller's @resume@.
     727The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@.
    725728The interface routine @next@ takes a Fibonacci instance and context switches to it using @resume@;
    726729on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
     
    835838The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming routine for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal routines.
    836839However, @resume@ and @suspend@ context switch among existing stack-frames, rather than create new ones, so there is no stack growth.
    837 \newterm{Symmetric (full) coroutine}s have a coroutine call to a resuming routine for another coroutine, and its coroutine main has a call to another resuming routine, which eventually forms a resuming-call cycle.
     840\newterm{Symmetric (full) coroutine}s have a coroutine call to a resuming routine for another coroutine, and its coroutine main calls another resuming routine, which eventually forms a resuming-call cycle.
    838841(The trivial cycle is a coroutine resuming itself.)
    839842This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
     
    926929The call from the consumer to @payment@ introduces the cycle between producer and consumer.
    927930When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
    928 The context switch restarts the producer at the point where it was last context switched, so it continues in @delivery@ after the resume.
     931The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
    929932
    930933@delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
     
    956959For example, for threads if the thread is implicitly started, it must start \emph{after} all constructors, because the thread relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived.
    957960
    958 An alternatively is composition:
     961An alternative is composition:
    959962\begin{cfa}
    960963struct mycoroutine {
     
    10041007Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language.
    10051008Furthermore, implementing coroutines without language support also displays the power of a programming language.
    1006 While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed without using the language support.
    1007 The reserved keyword simply eases use for the common cases.
    1008 
    1009 Part of the mechanism to generalize coroutines is using a \CFA trait, which defines a coroutine as anything satisfying the trait @is_coroutine@, and this trait is used to restrict coroutine-manipulation routines:
     1009While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed without language support.
     1010The reserved keyword simply eases use for the common case.
     1011
     1012Part of the mechanism to generalize coroutines is using a \CFA trait, which defines a coroutine as anything satisfying the trait @is_coroutine@, and this trait restricts the available set of coroutine-manipulation routines:
    10101013\begin{cfa}
    10111014trait is_coroutine( `dtype` T ) {
     
    10191022The routine definitions ensure there is a statically-typed @main@ routine that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle.
    10201023The @main@ routine has no return value or additional parameters because the coroutine type allows an arbitrary number of interface routines with corresponding arbitrary typed input/output values versus fixed ones.
    1021 The generic routines @suspend@ and @resume@ can be redefined, but any object passed to them is a coroutine since it must satisfy the @is_coroutine@ trait to compile.
    10221024The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ routine, and possibly redefining @suspend@ and @resume@.
    10231025The \CFA keyword @coroutine@ implicitly implements the getter and forward declarations required for implementing the coroutine main:
     
    11501152\begin{cfa}
    11511153int main() {
    1152         MyThread * heapLived;
     1154        MyThread * heapLive;
    11531155        {
    1154                 MyThread blockLived;                            $\C{// fork block-based thread}$
    1155                 heapLived = `new`( MyThread );          $\C{// fork heap-based thread}$
     1156                MyThread blockLive;                                     $\C{// fork block-based thread}$
     1157                heapLive = `new`( MyThread );           $\C{// fork heap-based thread}$
    11561158                ...
    11571159        }                                                                               $\C{// join block-based thread}$
    11581160        ...
    1159         `delete`( heapLived );                                  $\C{// join heap-based thread}$
     1161        `delete`( heapLive );                                   $\C{// join heap-based thread}$
    11601162}
    11611163\end{cfa}
    11621164The heap-based approach allows arbitrary thread-creation topologies, with respect to fork/join-style concurrency.
    11631165
    1164 Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequential, after all the row threads have terminated.
     1166Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated.
    11651167The program uses heap-based threads because each thread needs different constructor values.
    11661168(Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.)
     
    12151217However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
    12161218A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
    1217 While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it was rejected as the core paradigm for concurrency in \CFA.
     1219While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it is rejected as the core paradigm for concurrency in \CFA.
    12181220
    12191221One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
     
    12441246higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, \eg message passing, or offering a simpler solution to otherwise involved challenges, \eg barrier lock.
    12451247Often synchronization is used to order access to a critical section, \eg ensuring a reader thread is the next kind of thread to enter a critical section.
    1246 If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, that reader has \newterm{barged}.
     1248If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, that reader \newterm{barged}.
    12471249Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value, preventing the previous value from ever being read (lost computation).
    12481250Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
     
    13141316\end{cfa}
    13151317
    1316 Mandatory monitor qualifiers have the benefit of being self-documenting, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameter is redundant.
     1318The benefit of mandatory monitor qualifiers is self-documentation, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
    13171319Instead, one qualifier's semantics can be the default, and the other required.
    13181320For example, assume the safe @mutex@ qualifier for all monitor parameters because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
     
    13401342
    13411343To make the issue tractable, \CFA only acquires one monitor per parameter with at most one level of indirection.
    1342 However, the C type-system has an ambiguity with respects to arrays.
     1344However, there is an ambiguity in the C type-system with respect to arrays.
    13431345Is the argument for @f2@ a single object or an array of objects?
    13441346If it is an array, only the first element of the array is acquired, which seems unsafe;
     
    13551357
    13561358For object-oriented monitors, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
    1357 \CFA has no receiver, and hence, must use an explicit mechanism to specify which object has mutual exclusion acquired.
     1359\CFA has no receiver, and hence, must use an explicit mechanism to specify which object acquires mutual exclusion.
    13581360A positive consequence of this design decision is the ability to support multi-monitor routines.
    13591361\begin{cfa}
     
    13871389While \CFA provides only a partial solution, it handles many useful cases.
    13881390\begin{cfa}
    1389 monitor Bank { ... };
    1390 void deposit( Bank & `mutex` b, int deposit );
    1391 void transfer( Bank & `mutex` mybank, Bank & `mutex` yourbank, int me2you) {
    1392         deposit( mybank, `-`me2you );                   $\C{// debit}$
    1393         deposit( yourbank, me2you );                    $\C{// credit}$
     1391monitor BankAccount { ... };
     1392void deposit( BankAccount & `mutex` b, int deposit );
     1393void transfer( BankAccount & `mutex` my, BankAccount & `mutex` your, int me2you ) {
     1394        deposit( my, `-`me2you );                               $\C{// debit}$
     1395        deposit( your, me2you );                                $\C{// credit}$
    13941396}
    13951397\end{cfa}
     
    13981400
    13991401
    1400 \subsection{\protect\lstinline|mutex| statement} \label{mutex-stmt}
     1402\subsection{\protect\lstinline@mutex@ statement}
     1403\label{mutex-stmt}
    14011404
    14021405The monitor call-semantics associate all locking semantics with routines.
     
    15251528\end{figure}
    15261529
    1527 Figure~\ref{f:BBExt} shows a \CFA generic bounded-buffer with external scheduling, where producers/consumers detecting a full/empty buffer block and prevent more producers/consumers from entering the monitor until the buffer has a free/empty slot.
     1530Figure~\ref{f:BBExt} shows a \CFA generic bounded-buffer with external scheduling, where producers/consumers that detect a full/empty buffer block and prevent more producers/consumers from entering the monitor until there is a free/empty slot in the buffer.
    15281531External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
    15291532If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
     
    16461649}
    16471650\end{cfa}
    1648 must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the condition queue.
     1651must acquire a set of monitor locks greater than or equal to the number of locks for the waiting thread signalled from the condition queue.
    16491652
    16501653Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
     
    16531656To statically verify the released monitors match the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
    16541657
    1655 When an overloaded routine appears in an @waitfor@ statement, calls to any routine with that name are accepted.
    1656 The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call.
     1658% When an overloaded routine appears in an @waitfor@ statement, calls to any routine with that name are accepted.
     1659% The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call.
    16571660As always, overloaded routines can be disambiguated using a cast:
    16581661\begin{cfa}
    16591662void rtn( M & mutex m );
    16601663`int` rtn( M & mutex m );
    1661 waitfor( (`int` (*)( M & mutex ))rtn, m1, m2 );
     1664waitfor( (`int` (*)( M & mutex ))rtn, m );
    16621665\end{cfa}
    16631666
     
    16811684For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
    16821685Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
    1683 Furthermore, \CFA concurrency has no superious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implict form of barging.
     1686Furthermore, there is no spurious wakeup~\cite[\S~9]{Buhr05a} in \CFA, which eliminates an implicit form of barging.
    16841687
    16851688
     
    17531756
    17541757One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
    1755 However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting is choices when the signaller finishes the inner mutex-statement.
    1756 The singaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
     1758However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting in options when the signaller finishes the inner mutex-statement.
     1759The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
    17571760In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W1 may have waited before W2, so W2 is unaware of it.
    17581761Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
     
    17621765Partial signalling transfers ownership of monitors to the front waiter.
    17631766When the signaller thread exits or waits in the monitor the front waiter is unblocked if all its monitors are released.
    1764 This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
     1767The benefit of this solution is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    17651768
    17661769
     
    18291832        waitfor( f, m2 );                                               $\C{\color{red}// wait for call to f with argument m2}$
    18301833\end{cfa}
    1831 Routine @g@ has acquired both locks, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
     1834Both locks are acquired by routine @g@, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
    18321835This behaviour can be extended to the multi-monitor @waitfor@ statement.
    18331836\begin{cfa}
     
When the buffer is deallocated, the current waiter is unblocked and informed, so it can perform an appropriate action.
However, the basic @waitfor@ semantics do not support this functionality, since using an object after its destructor is called is undefined.
Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor and then the caller is unblocked from the urgent queue to deallocate the object.
Accepting the destructor is an idiomatic way to terminate a thread in \CFA.
     
Threads in \CFA are monitors to allow direct communication among threads, \ie threads can have mutex routines that are called by other threads.
Hence, all monitor features are available when using threads.
The following shows an example of two threads directly calling each other and accepting calls from each other in a cycle.
\begin{cfa}
thread Ping {} pi;
     
int main() {}
\end{cfa}
\vspace{-0.8\baselineskip}
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
     
\end{cfa}
\end{tabular}
\end{cquote}
Note, the ping/pong threads are globally declared, @pi@/@po@, and hence, start (and possibly complete) before the program main starts.
\subsection{Low-level Locks}

For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
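These atomic instructions have direct C11 equivalents. The following C sketch (the wrapper names mirror the \CFA operations above but are hypothetical C spellings) maps each onto the standard @stdatomic.h@ calls.

```c
// Sketch: C11 equivalents of the atomic instructions named above
// (the wrapper names echo the \CFA operations; the calls are standard C11).
#include <stdatomic.h>
#include <stdbool.h>

atomic_int counter = 0;

int  fetch_assign( atomic_int * v, int x ) { return atomic_exchange( v, x ); }        // fetchAssign
int  fetch_add( atomic_int * v, int x )    { return atomic_fetch_add( v, x ); }       // fetchAdd
bool test_set( atomic_flag * f )           { return atomic_flag_test_and_set( f ); }  // testSet
bool compare_set( atomic_int * v, int exp, int x ) {                                  // compareSet
	return atomic_compare_exchange_strong( v, &exp, x );  // true iff *v was exp
}
```

Each wrapper returns the prior value (or success flag for @compareSet@), matching the usual read-modify-write contract.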
\section{Parallelism}

Historically, computer performance was about processor speeds.
However, with heat dissipation being a direct consequence of speed increase, parallelism is the new source for increased performance~\cite{Sutter05, Sutter05b}.
Now, high-performance applications must care about parallelism, which requires concurrency.
The lowest-level approach of parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc.
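This lowest-level approach can be sketched with POSIX threads, where @pthread_create@/@pthread_join@ provide the fork/join semantics (the function names in the sketch are illustrative, not from the paper).

```c
// Sketch: fork/join parallelism at the kernel-thread level, via POSIX
// pthread_create (fork) and pthread_join (join).
#include <pthread.h>

#define N 4

static void * work( void * arg ) {
	long id = (long)arg;
	return (void *)(id * id);                   // each thread computes independently
}

long fork_join_sum( void ) {
	pthread_t t[N];
	long sum = 0;
	for ( long i = 0; i < N; i += 1 )           // fork N kernel threads
		pthread_create( &t[i], NULL, work, (void *)i );
	for ( long i = 0; i < N; i += 1 ) {         // join and combine results
		void * r;
		pthread_join( t[i], &r );
		sum += (long)r;
	}
	return sum;                                 // 0 + 1 + 4 + 9 = 14
}
```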
     
\subsection{Thread Pools}

In contrast to direct threading is indirect \newterm{thread pools}, where small jobs (work units) are inserted into a work pool for execution.
If the jobs are dependent, \ie interact, there is an implicit/explicit dependency graph that ties them together.
While removing direct concurrency, and hence the amount of context switching, thread pools significantly limit the interaction that can occur among jobs.
Indeed, jobs should not block because that also blocks the underlying thread, which effectively means the CPU utilization, and therefore throughput, suffers.
While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases.
As well, concurrency errors return, which thread pools are supposed to mitigate.
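The structure described above can be sketched in a few dozen lines of C (a hypothetical minimal pool, not TBB or the \CFA runtime): a fixed set of worker threads drains a shared queue of independent, non-blocking jobs.

```c
// Sketch (hypothetical API): a minimal fixed-size thread pool -- workers
// pull jobs (work units) from a shared queue until it is drained.
#include <pthread.h>

#define JOBS 32
#define WORKERS 4

static int results[JOBS];
static int next_job = 0;                        // index of next unclaimed job
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void * worker( void * arg ) {
	(void)arg;
	for ( ;; ) {
		pthread_mutex_lock( &qlock );
		int j = next_job < JOBS ? next_job++ : -1;  // claim a job
		pthread_mutex_unlock( &qlock );
		if ( j == -1 ) return NULL;             // pool drained
		results[j] = j * 2;                     // jobs must not block (see text)
	}
}

int pool_run( void ) {                          // returns sum of all job results
	pthread_t w[WORKERS];
	next_job = 0;
	for ( int i = 0; i < WORKERS; i += 1 ) pthread_create( &w[i], NULL, worker, NULL );
	for ( int i = 0; i < WORKERS; i += 1 ) pthread_join( w[i], NULL );
	int sum = 0;
	for ( int j = 0; j < JOBS; j += 1 ) sum += results[j];
	return sum;                                 // 2 * (0+1+...+31) = 992
}
```

Note the jobs here are independent; as the text observes, once jobs interact or block, this simple structure loses throughput.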
     
The user cluster is created to contain the application user-threads.
Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime.
However, because of limitations of the underlying operating system, heterogeneous hardware, or scheduling requirements (real-time), multiple clusters are sometimes necessary.
     
This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks.

In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects have distinct non-overlapping memory layouts, and mutual-exclusion for a monitor is only defined for its lifetime.
When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted.
This array persists for the entire duration of the mutual-exclusion and its ordering is reused extensively.
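The sort-then-acquire idea can be sketched in C with pthread mutexes (hypothetical helper names; the \CFA runtime does this for monitors): because every bulk acquire locks in the same global address order, no cycle of waiters can form.

```c
// Sketch of the bulk-acquire ordering described above: lock pointers are
// sorted by address so every bulk acquire uses the same global order,
// preventing deadlock (the C analogue of \CFA's monitor sorting).
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

static int cmp_ptr( const void * a, const void * b ) {
	uintptr_t x = (uintptr_t)*(pthread_mutex_t * const *)a;
	uintptr_t y = (uintptr_t)*(pthread_mutex_t * const *)b;
	return (x > y) - (x < y);
}

void lock_ordered( pthread_mutex_t * m[], int n ) {
	qsort( m, n, sizeof(m[0]), cmp_ptr );       // sort once; ordering reused on release
	for ( int i = 0; i < n; i += 1 ) pthread_mutex_lock( m[i] );
}

void unlock_ordered( pthread_mutex_t * m[], int n ) {
	for ( int i = n - 1; i >= 0; i -= 1 ) pthread_mutex_unlock( m[i] );
}

int check_sorted( void ) {                      // self-check: array ends up ordered
	pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER, b = PTHREAD_MUTEX_INITIALIZER;
	pthread_mutex_t * ms[2] = { &b, &a };       // deliberately out of order
	lock_ordered( ms, 2 );
	int ok = (uintptr_t)ms[0] < (uintptr_t)ms[1];
	unlock_ordered( ms, 2 );
	return ok;
}
```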
     
This method is a 2-step context-switch and provides a clear distinction between user and kernel code, where scheduling and other system operations happen.
The alternative 1-step context-switch uses the \emph{from} thread's stack to schedule and then context-switches directly to the \emph{to} thread's stack.
Experimental results (not presented) show the performance difference between these two approaches is virtually equivalent, because both approaches are dominated by locking to prevent a race condition.
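The essence of a user-level context switch can be sketched with POSIX @ucontext@ (an illustration of the mechanism, not the \CFA implementation, which uses hand-written assembler): @swapcontext@ saves the \emph{from} stack and resumes the \emph{to} stack directly, as in the 1-step scheme.

```c
// Sketch: a user-level context switch with POSIX ucontext -- swapcontext()
// saves the 'from' stack and resumes the 'to' stack directly, as in the
// 1-step scheme described above.
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;
static char stack[64 * 1024];                   // the second stack
static int steps = 0;

static void coro( void ) {
	steps += 1;                                 // running on the coro stack
	swapcontext( &coro_ctx, &main_ctx );        // switch back to main
}

int run_switch( void ) {
	getcontext( &coro_ctx );
	coro_ctx.uc_stack.ss_sp = stack;
	coro_ctx.uc_stack.ss_size = sizeof(stack);
	coro_ctx.uc_link = &main_ctx;               // resumed if coro ever returns
	makecontext( &coro_ctx, coro, 0 );
	steps = 0;
	swapcontext( &main_ctx, &coro_ctx );        // switch to the coro stack
	steps += 1;                                 // back on the main stack
	return steps;                               // one step on each stack = 2
}
```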

All kernel threads (@pthreads@) are created with a stack.
     
When the timer expires, an interrupt is delivered, and the interrupt handler resets the count-down timer.
If the virtual processor is executing in user code, the signal handler performs a user-level context-switch, or if executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
Multiple signal handlers may be pending.
When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered.
The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler;
therefore, the same signal mask is required for all virtual processors in a cluster.

However, on current UNIX systems:
     
Figure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.
Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
Java scheduling is significantly greater because the benchmark explicitly creates multiple threads in order to prevent the JIT from making the program sequential, \ie removing all locking.

\begin{figure}
     
However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs.
One solution is to offer various tweaking options, allowing the scheduler to be adjusted to the requirements of the workload.
However, to be truly flexible, a pluggable scheduler is necessary.
Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion.
     
At its core, non-blocking I/O is an operating-system level feature for queuing I/O operations, \eg network operations, and registering for notifications instead of waiting for requests to complete.
Current trends use asynchronous programming like callbacks, futures, and/or promises, \eg Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java, and Django~\cite{Django} for Python.
However, these solutions lead to code that is hard to create, read and maintain.
A better approach is to tie non-blocking I/O into the concurrency system to provide ease of use with low overhead, \eg thread-per-connection web-services.
A non-blocking I/O library is currently under development for \CFA.
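The underlying operating-system feature can be sketched in C with a non-blocking pipe and @poll@ (an illustration of the mechanism, not the \CFA library): reads fail with @EAGAIN@ instead of blocking, and readiness arrives as a notification.

```c
// Sketch of the OS-level feature described above: a non-blocking pipe plus
// poll() -- reads fail with EAGAIN instead of blocking, and readiness is
// delivered as a notification.
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int nonblocking_demo( void ) {
	int fds[2];
	if ( pipe( fds ) == -1 ) return -1;
	fcntl( fds[0], F_SETFL, O_NONBLOCK );       // reads no longer block
	char ch;
	if ( read( fds[0], &ch, 1 ) != -1 || errno != EAGAIN ) return -1;  // would block
	write( fds[1], "x", 1 );                    // now make data available
	struct pollfd p = { .fd = fds[0], .events = POLLIN };
	if ( poll( &p, 1, 1000 ) != 1 ) return -1;  // notification: fd is readable
	read( fds[0], &ch, 1 );
	close( fds[0] );  close( fds[1] );
	return ch;                                  // 'x'
}
```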
     
Examples of such tools can include futures and promises~\cite{promises}, executors and actors.
These additional features are useful when monitors offer a level of abstraction that is inadequate for certain tasks.
As well, new \CFA extensions should make it possible to create a uniform interface for virtually all mutual exclusion, including monitors and low-level locks.

\paragraph{Implicit Threading}
     
The canonical example of implicit concurrency is concurrent nested @for@ loops, which are amenable to divide and conquer algorithms~\cite{uC++book}.
The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanisms to solve basic HPC data-concurrency problems.
However, implicit concurrency is a restrictive solution with significant limitations, so it can never replace explicit concurrent programming.