Changeset 9c32e21


Timestamp:
Jun 8, 2018, 3:47:35 PM (6 years ago)
Author:
Aaron Moss <a3moss@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
Children:
f184ca3
Parents:
6eb131c (diff), 7b28e4a (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    r6eb131c r9c32e21  
    580580\subsection{\protect\CFA's Thread Building Blocks}
    581581
    582 An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
     582An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
    583583As such, library support for threading is far from widespread.
    584 At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
     584At the time of writing the paper, neither \protect\lstinline@gcc@ nor \protect\lstinline@clang@ support \protect\lstinline@threads.h@ in their standard libraries.}.
    585585In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
    586586As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
     
    11401140}
    11411141\end{cfa}
    1142 A consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{api}.
     1142A consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{API}.
    11431143\end{comment}
    11441144
     
    14431443\label{s:InternalScheduling}
    14441444
    1445 While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer, Figure~\ref{f:GenericBoundedBuffer}, may be full/empty so produce/consumer threads must block.
     1445While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed.
     1446For example, Figure~\ref{f:GenericBoundedBuffer} shows a bounded buffer that may be full/empty so producer/consumer threads must block.
    14461447Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
    14471448Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
    1448 The synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} is defined as indicating which thread acquires the critical section next.
     1449Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} defines which thread acquires the critical section next.
    14491450\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
    14501451
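To make internal scheduling concrete, the following is a minimal, non-generic sketch of a bounded buffer in the style of Figure~\ref{f:GenericBoundedBuffer}; the condition names, fixed capacity, and routine bodies are illustrative assumptions rather than the figure's exact code.
\begin{cfa}
monitor Buffer {
	condition not_full, not_empty;
	int front, back, count;
	int elements[10];
};
void ?{}( Buffer & buf ) { buf.front = 0;  buf.back = 0;  buf.count = 0; }
void insert( Buffer & mutex buf, int elem ) with( buf ) {
	if ( count == 10 ) wait( not_full );	// full: producer blocks on condition queue inside the monitor
	elements[back] = elem;  back = (back + 1) % 10;  count += 1;
	signal( not_empty );	// unblock one waiting consumer, if any
}
int remove( Buffer & mutex buf ) with( buf ) {
	if ( count == 0 ) wait( not_empty );	// empty: consumer blocks on condition queue inside the monitor
	int elem = elements[front];  front = (front + 1) % 10;  count -= 1;
	signal( not_full );	// unblock one waiting producer, if any
	return elem;
}
\end{cfa}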
     
    15371538External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
    15381539If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
    1539 Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside the monitor.
     1540Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside (internally) of the monitor.
    15401541
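For comparison, a minimal sketch of the same buffer written with external scheduling follows; it reuses the hypothetical buffer fields from the sketch above (no condition variables are needed), and only the placement of the waitfor statements is the point of the example.
\begin{cfa}
int remove( Buffer & mutex buf );	// forward declaration so insert can name remove
void insert( Buffer & mutex buf, int elem ) with( buf ) {
	if ( count == 10 ) waitfor( remove, buf );	// full: only a remove call may acquire the buffer next
	elements[back] = elem;  back = (back + 1) % 10;  count += 1;
}
int remove( Buffer & mutex buf ) with( buf ) {
	if ( count == 0 ) waitfor( insert, buf );	// empty: only an insert call may acquire the buffer next
	int elem = elements[front];  front = (front + 1) % 10;  count -= 1;
	return elem;
}
\end{cfa}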
    15411542Both internal and external scheduling extend to multiple monitors in a natural way.
     1543\begin{cquote}
     1544\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
    15421545\begin{cfa}
    15431546monitor M { `condition e`; ... };
    15441547void foo( M & mutex m1, M & mutex m2 ) {
    1545         ... wait( `e` ); ...                                    $\C{// wait( e, m1, m2 )}$
     1548        ... wait( `e` ); ...   // wait( e, m1, m2 )
    15461549        ... wait( `e, m1` ); ...
    15471550        ... wait( `e, m2` ); ...
    15481551}
    1549 
     1552\end{cfa}
     1553&
     1554\begin{cfa}
    15501555void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
    15511556void rtn$\(_2\)$( M & mutex m1 );
    15521557void bar( M & mutex m1, M & mutex m2 ) {
    1553         ... waitfor( `rtn` ); ...                               $\C{// waitfor( rtn\(_1\), m1, m2 )}$
    1554         ... waitfor( `rtn, m1` ); ...                   $\C{// waitfor( rtn\(_2\), m1 )}$
    1555 }
    1556 \end{cfa}
     1558        ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
     1559        ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
     1560}
     1561\end{cfa}
     1562\end{tabular}
     1563\end{cquote}
    15571564For @wait( e )@, the default semantics is to atomically block the calling thread and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
    15581565To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
    15591566Wait statically verifies the released monitors are among the acquired mutex-parameters, so unconditional release is safe.
    1560 Similarly, for @waitfor( rtn, ... )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
     1567Finally, a signaller,
     1568\begin{cfa}
     1569void baz( M & mutex m1, M & mutex m2 ) {
     1570        ... signal( e ); ...
     1571}
     1572\end{cfa}
     1573must have acquired monitor locks greater than or equal in number to the locks needed by the waiting thread signalled from the front of the condition queue.
     1574In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
     1575
     1576Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
    15611577To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
    15621578Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
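As a sketch of this static verification, reusing the two @rtn@ overloads declared above in a hypothetical routine @bar2@ (the rejected case and the comment wording are illustrative assumptions):
\begin{cfa}
void bar2( M & mutex m1, M & mutex m2 ) {
	... waitfor( rtn, m1 ); ...	// accepted: matches the overload whose only mutex parameter is m1
	... waitfor( rtn, m2 ); ...	// rejected statically: no rtn overload acquires exactly m2
}
\end{cfa}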
     
    15891605The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
    15901606The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
    1591 When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting thread to prevent barging into the outer @mutex@ statement by another thread.
    1592 However, both the signalling and signalled threads still need monitor @m1@.
     1607When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
     1608However, both the signalling thread and waiting thread W1 still need monitor @m1@.
    15931609
    15941610\begin{figure}
     
    15981614monitor M m1, m2;
    15991615condition c;
     1616mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
     1617        ...
     1618        mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
     1619                ... `signal( c )`; ...
     1620                // m1, m2 acquired
     1621        } // $\LstCommentStyle{\color{red}release m2}$
     1622        // m1 acquired
     1623} // release m1
     1624\end{cfa}
     1625\end{lrbox}
     1626
     1627\newbox\myboxB
     1628\begin{lrbox}{\myboxB}
     1629\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1630
     1631
    16001632mutex( m1 ) {
    16011633        ...
     
    16091641\end{lrbox}
    16101642
    1611 \newbox\myboxB
    1612 \begin{lrbox}{\myboxB}
    1613 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1614 
    1615 
    1616 mutex( m1 ) {
    1617         ...
    1618         mutex( m1, m2 ) {
    1619                 ... `signal( c )`; ...
    1620                 // m1, m2 acquired
    1621         } // $\LstCommentStyle{\color{red}release m2}$
    1622         // m1 acquired
    1623 } // release m1
    1624 \end{cfa}
    1625 \end{lrbox}
    1626 
    16271643\newbox\myboxC
    16281644\begin{lrbox}{\myboxC}
     
    16301646
    16311647
    1632 mutex( m1 ) {
     1648mutex( m2 ) {
    16331649        ... `wait( c )`; ...
    1634         // m1 acquired
    1635 } // $\LstCommentStyle{\color{red}release m1}$
     1650        // m2 acquired
     1651} // $\LstCommentStyle{\color{red}release m2}$
    16361652
    16371653
     
    16421658
    16431659\begin{cquote}
    1644 \subfloat[Waiting Thread]{\label{f:WaitingThread}\usebox\myboxA}
     1660\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
    16451661\hspace{2\parindentlnth}
    1646 \subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxB}
     1662\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
    16471663\hspace{2\parindentlnth}
    1648 \subfloat[Other Waiting Thread]{\label{f:SignallingThread}\usebox\myboxC}
     1664\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
    16491665\end{cquote}
    16501666\caption{Barging Prevention}
     
    16521668\end{figure}
    16531669
    1654 The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred.
    1655 It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling.
    1656 This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups.
    1657 This solution releases the monitors once every monitor in a group can be released.
    1658 However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released.
    1659 A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
    1660 
    1661 However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
     1670One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
     1671However, Figure~\ref{f:OtherWaitingThread} shows this solution is complicated by other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
     1672The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
     1673In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1 so it is unaware of W1.
     1674Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
     1675
     1676While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
     1677Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
     1678Partial signalling transfers ownership of monitors to the front waiter.
     1679When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
     1680This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
     1681
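As a sketch of how partial signalling resolves the scenario above, assuming waiter W1 of Figure~\ref{f:WaitingThread} is at the front of the urgent queue, the comments below annotate the signaller's code from Figure~\ref{f:SignallingThread} with the ownership transfers just described; the comments are explanatory, not additional code.
\begin{cfa}
mutex( m1 ) {	// signaller
	...
	mutex( m1, m2 ) {
		... signal( c ); ...	// W1 moves to the urgent queue, needing m1 and m2 to unblock
	}	// inner exit: m2 transferred to W1, which stays blocked because it still needs m1
	...	// signaller continues, still owning m1
}	// outer exit: m1 transferred to W1, which now holds all its monitors and unblocks
\end{cfa}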
     1682\begin{comment}
    16621683Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
    16631684Because the third thread is signalled when secretly holding @B@, the goal  becomes unreachable.
    16641685Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
    16651686
    1666 \begin{comment}
    16671687\paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
    16681688\paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
     
    17551775\subsubsection{Partial Signalling} \label{partial-sig}
    17561776\end{comment}
    1757 
    1758 Finally, the solution that is chosen for \CFA is to use partial signalling.
    1759 Again using listing \ref{f:int-bulk-cfa}, the partial signalling solution transfers ownership of monitor @B@ at lines \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor @A@.
    1760 Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
    1761 This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    1762 This solution has a much simpler implementation than a dependency graph solving algorithms, which is why it was chosen.
    1763 Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
    1764 
    1765 Using partial signalling, listing \ref{f:dependency} can be solved easily:
    1766 \begin{itemize}
    1767         \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
    1768         \item When thread $\gamma$ reaches line \ref{line:release-a}  it transfers monitor @A@ to thread $\beta$  and wakes it up.
    1769         \item When thread $\beta$  reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
    1770 \end{itemize}
    17711777
    17721778