Changeset 332d3c2 for doc/papers


Timestamp:
Jun 9, 2018, 8:20:43 PM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, with_gc
Children:
f184ca3
Parents:
7b28e4a
Message:

more updates

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    r7b28e4a r332d3c2  
    15401540Threads making calls to routines that are currently excluded block outside (external) of the monitor on a calling queue, versus blocking on condition queues inside (internal) of the monitor.
    15411541
    1542 Both internal and external scheduling extend to multiple monitors in a natural way.
    1543 \begin{cquote}
    1544 \begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
    1545 \begin{cfa}
    1546 monitor M { `condition e`; ... };
    1547 void foo( M & mutex m1, M & mutex m2 ) {
    1548         ... wait( `e` ); ...   // wait( e, m1, m2 )
    1549         ... wait( `e, m1` ); ...
    1550         ... wait( `e, m2` ); ...
    1551 }
    1552 \end{cfa}
    1553 &
    1554 \begin{cfa}
    1555 void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
    1556 void rtn$\(_2\)$( M & mutex m1 );
    1557 void bar( M & mutex m1, M & mutex m2 ) {
    1558         ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
    1559         ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
    1560 }
    1561 \end{cfa}
    1562 \end{tabular}
    1563 \end{cquote}
    1564 For @wait( e )@, the default semantics is to atomically block the waiter and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
    1565 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
    1566 Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
    1567 Finally, a signaller,
    1568 \begin{cfa}
    1569 void baz( M & mutex m1, M & mutex m2 ) {
    1570         ... signal( e ); ...
    1571 }
    1572 \end{cfa}
    1573 must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the front of the condition queue.
    1574 In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
    1575 
    1576 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
    1577 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
    1578 Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
    1579 To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
    1580 
    1581 The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
    1582 \begin{cfa}
    1583 void foo( M & mutex m1, M & mutex m2 ) {
    1584 	... wait( `e, m1` ); ... }				$\C{// release m1, keeping m2 acquired}$
    1585 void baz( M & mutex m1, M & mutex m2 ) {	$\C{// must acquire m1 and m2}$
    1586 	... signal( `e` ); ... }
    1587 \end{cfa}
    1588 The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to  enter @baz@ to get to the @signal@.
    1589 While deadlock issues can occur with multiple/nesting acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
    1590 
    1591 Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
    1592 If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
    1593 \begin{quote}
    1594 However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
    1595 It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
    1596 \end{quote}
    1597 \CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
    1598 For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
    1599 Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
    1600 
    1601 
    1602 \subsection{Barging Prevention}
    1603 
    1604 Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
    1605 The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
    1606 The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
    1607 When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
    1608 However, both the signalling and waiting thread W1 still need monitor @m1@.
    1609 
    1610 \begin{figure}
    1611 \newbox\myboxA
    1612 \begin{lrbox}{\myboxA}
    1613 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1614 monitor M m1, m2;
    1615 condition c;
    1616 mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
    1617         ...
    1618         mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
    1619                 ... `signal( c )`; ...
    1620                 // m1, m2 acquired
    1621         } // $\LstCommentStyle{\color{red}release m2}$
    1622         // m1 acquired
    1623 } // release m1
    1624 \end{cfa}
    1625 \end{lrbox}
    1626 
    1627 \newbox\myboxB
    1628 \begin{lrbox}{\myboxB}
    1629 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1630 
    1631 
    1632 mutex( m1 ) {
    1633         ...
    1634         mutex( m1, m2 ) {
    1635                 ... `wait( c )`; // block and release m1, m2
    1636                 // m1, m2 acquired
    1637         } // $\LstCommentStyle{\color{red}release m2}$
    1638         // m1 acquired
    1639 } // release m1
    1640 \end{cfa}
    1641 \end{lrbox}
    1642 
    1643 \newbox\myboxC
    1644 \begin{lrbox}{\myboxC}
    1645 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1646 
    1647 
    1648 mutex( m2 ) {
    1649         ... `wait( c )`; ...
    1650         // m2 acquired
    1651 } // $\LstCommentStyle{\color{red}release m2}$
    1652 
    1653 
    1654 
    1655 
    1656 \end{cfa}
    1657 \end{lrbox}
    1658 
    1659 \begin{cquote}
    1660 \subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
    1661 \hspace{2\parindentlnth}
    1662 \subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
    1663 \hspace{2\parindentlnth}
    1664 \subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
    1665 \end{cquote}
    1666 \caption{Barging Prevention}
    1667 \label{f:BargingPrevention}
    1668 \end{figure}
    1669 
    1670 One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
    1671 However, Figure~\ref{f:OtherWaitingThread} shows this solution is complicated by other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
    1672 The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
    1673 In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1 so it is unaware of W1.
    1674 Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
    1675 
    1676 While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
    1677 Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
    1678 Partial signalling transfers ownership of monitors to the front waiter.
    1679 When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
    1680 This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    1681 
    1682 \begin{comment}
    1683 Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
    1684 Because the third thread is signalled when secretly holding @B@, the goal  becomes unreachable.
    1685 Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
    1686 
    1687 \paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
    1688 \paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
    1689 \\
    1690 
    1691 Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
    1692 However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
    1693 
    1694 In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
    1695 
    1696 
    1697 \subsubsection{Dependency graphs}
    1698 
    1699 \begin{figure}
    1700 \begin{multicols}{3}
    1701 Thread $\alpha$
    1702 \begin{cfa}[numbers=left, firstnumber=1]
    1703 acquire A
    1704         acquire A & B
    1705                 wait A & B
    1706         release A & B
    1707 release A
    1708 \end{cfa}
    1709 \columnbreak
    1710 Thread $\gamma$
    1711 \begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
    1712 acquire A
    1713         acquire A & B
    1714                 |\label{line:signal-ab}|signal A & B
    1715         |\label{line:release-ab}|release A & B
    1716         |\label{line:signal-a}|signal A
    1717 |\label{line:release-a}|release A
    1718 \end{cfa}
    1719 \columnbreak
    1720 Thread $\beta$
    1721 \begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
    1722 acquire A
    1723         wait A
    1724 |\label{line:release-aa}|release A
    1725 \end{cfa}
    1726 \end{multicols}
    1727 \begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
    1728 \end{cfa}
    1729 \begin{center}
    1730 \input{dependency}
    1731 \end{center}
    1732 \caption{Dependency graph of the statements in listing \ref{f:dependency}}
    1733 \label{fig:dependency}
    1734 \end{figure}
    1735 
    1736 In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
    1737 If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and then the waiter transfers ownership of @A@ back to the signaller when it releases it, then the problem is solved (@B@ is no longer in use at this point).
    1738 Dynamically finding the correct order is therefore the second possible solution.
    1739 The problem is effectively resolving a dependency graph of ownership requirements.
    1740 Here even the simplest of code snippets requires two transfers and has a super-linear complexity.
    1741 This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfers, and has multiple solutions.
    1742 Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that acquiring multiple locks in different orders can cause deadlocks.
    1743 \begin{figure}
    1744 \begin{multicols}{2}
    1745 \begin{cfa}
    1746 acquire A
    1747         acquire B
    1748                 acquire C
    1749                         wait A & B & C
    1750                 release C
    1751         release B
    1752 release A
    1753 \end{cfa}
    1754 
    1755 \columnbreak
    1756 
    1757 \begin{cfa}
    1758 acquire A
    1759         acquire B
    1760                 acquire C
    1761                         signal A & B & C
    1762                 release C
    1763         release B
    1764 release A
    1765 \end{cfa}
    1766 \end{multicols}
    1767 \begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
    1768 \end{cfa}
    1769 \end{figure}
    1770 
    1771 Given the three threads example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph, where every node is a statement of one of the three threads, and the arrows show the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
    1772 The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
    1773 Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one.
    1774 
    1775 \subsubsection{Partial Signalling} \label{partial-sig}
    1776 \end{comment}
    1777 
    1778 
    1779 \subsection{Signalling: Now or Later}
     1542For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread;
     1543the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
     1544The waiter unblocks next, takes the state, and exits the monitor.
     1545Blocking signalling is the reverse, where the waiter is providing the cooperation for the signalling thread;
     1546the signaller enters the monitor, detects a waiting thread providing the necessary state, and performs a blocking signal to place itself on the urgent queue and immediately unblock the waiter.
     1547The waiter changes state and exits the monitor, and the signaller then unblocks from the urgent queue to take the state.
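
Both forms can be illustrated with a minimal transfer monitor (an illustrative sketch, not one of the paper's figures; the names are assumptions):
\begin{cfa}
monitor Box { `condition` avail; int x; bool full; };
void put( Box & mutex b, int v ) {		// signaller provides cooperation
	b.x = v;  b.full = true;			// change state for a waiter
	signal( b.avail );					// non-blocking: signaller exits first
}
int get( Box & mutex b ) {				// waiter takes the state
	if ( ! b.full ) wait( b.avail );	// no loop: the monitor is non-barging
	b.full = false;
	return b.x;
}
\end{cfa}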
     1548
     1549Figure~\ref{f:DatingService} shows a dating service demonstrating the two forms of signalling: non-blocking and blocking.
     1550The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
     1551A thread blocks until an appropriate partner arrives.
     1552The complexity is exchanging phone numbers in the monitor:
     1553while the non-barging monitor prevents a caller from stealing a phone number, the monitor mutual-exclusion property constrains where the exchange can occur.
     1554
     1555The dating service is an example of a monitor that cannot be written using external scheduling, because matching depends on the values of the calling arguments (the compatibility codes), which are unavailable when accepting calls by routine name.
     1556
     1557The example in Figure~\ref{f:DatingService} highlights the difference in behaviour.
     1558As mentioned, @signal@ only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed.
     1559To avoid this explicit synchronization, the @condition@ type offers the @signal_block@ routine, which handles the two-way handshake as shown in the example.
     1560This feature removes the need for a second condition variable and simplifies programming.
     1561Like every other monitor semantic, @signal_block@ uses barging prevention: mutual exclusion is baton-passed at both the front and back end of the call, so no other thread can acquire the monitor either before or after the call.
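
The blocking form can be sketched in the same style (illustrative only; the routine and field names are assumptions):
\begin{cfa}
monitor Svc { `condition` c; int request, reply; };
void server( Svc & mutex s ) {			// waiter provides cooperation
	wait( s.c );						// block until a client arrives
	s.reply = compute( s.request );		// supply the state the signaller needs
}
void client( Svc & mutex s, int v ) {
	s.request = v;
	`signal_block`( s.c );				// server runs immediately in the monitor
	// s.reply is valid on return: no thread can barge before or after the call
}
\end{cfa}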
    17801562
    17811563\begin{figure}
     
    18391621\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
    18401622\caption{Dating service. }
    1841 \label{f:Dating service}
     1623\label{f:DatingService}
    18421624\end{figure}
    18431625
    1844 An important note is that, until now, signalling a monitor was a delayed operation.
    1845 The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the @signal@ statement.
    1846 However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the @signal_block@ routine.
    1847 
    1848 The example in table \ref{tbl:datingservice} highlights the difference in behaviour.
    1849 As mentioned, @signal@ only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed.
    1850 To avoid this explicit synchronization, the @condition@ type offers the @signal_block@ routine, which handles the two-way handshake as shown in the example.
    1851 This feature removes the need for a second condition variable and simplifies programming.
    1852 Like every other monitor semantic, @signal_block@ uses barging prevention: mutual exclusion is baton-passed at both the front and back end of the call, so no other thread can acquire the monitor either before or after the call.
    1853 
    1854 % ======================================================================
    1855 % ======================================================================
     1626Both internal and external scheduling extend to multiple monitors in a natural way.
     1627\begin{cquote}
     1628\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     1629\begin{cfa}
     1630monitor M { `condition e`; ... };
     1631void foo( M & mutex m1, M & mutex m2 ) {
     1632        ... wait( `e` ); ...   // wait( e, m1, m2 )
     1633        ... wait( `e, m1` ); ...
     1634        ... wait( `e, m2` ); ...
     1635}
     1636\end{cfa}
     1637&
     1638\begin{cfa}
     1639void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
     1640void rtn$\(_2\)$( M & mutex m1 );
     1641void bar( M & mutex m1, M & mutex m2 ) {
     1642        ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
     1643        ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
     1644}
     1645\end{cfa}
     1646\end{tabular}
     1647\end{cquote}
     1648For @wait( e )@, the default semantics is to atomically block the waiter and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
     1649To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
     1650Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
     1651Finally, a signaller,
     1652\begin{cfa}
     1653void baz( M & mutex m1, M & mutex m2 ) {
     1654        ... signal( e ); ...
     1655}
     1656\end{cfa}
     1657must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the front of the condition queue.
     1658In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
     1659
     1660Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
     1661To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
     1662Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
     1663To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
     1664
     1665The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
     1666\begin{cfa}
     1667void foo( M & mutex m1, M & mutex m2 ) {
     1668	... wait( `e, m1` ); ... }				$\C{// release m1, keeping m2 acquired}$
     1669void baz( M & mutex m1, M & mutex m2 ) {	$\C{// must acquire m1 and m2}$
     1670	... signal( `e` ); ... }
     1671\end{cfa}
     1672The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to  enter @baz@ to get to the @signal@.
     1673While deadlock issues can occur with multiple/nesting acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
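
This particular deadlock disappears if the waiter relies on the default multi-monitor wait, releasing all of its mutex parameters so the signaller can acquire them (a sketch of that alternative, not a general cure for nested-monitor deadlock):
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( `e` ); ...				$\C{// wait( e, m1, m2 ): release both monitors}$
}
void baz( M & mutex m1, M & mutex m2 ) {	$\C{// can now acquire m1 and m2}$
	... signal( `e` ); ...
}
\end{cfa}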
     1674
     1675Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
     1676If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
     1677\begin{quote}
     1678However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
     1679It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
     1680\end{quote}
     1681\CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
     1682For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
     1683Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
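
The difference appears directly in waiting code; the buffer fragment below is illustrative, not taken from the figure:
\begin{cfa}
while ( count == size ) wait( notFull );	// barging monitor: must recheck after waking
if ( count == size ) wait( notFull );		// non-barging monitor: state cannot be stolen
\end{cfa}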
     1684
     1685
     1686\subsection{Barging Prevention}
     1687
     1688Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
     1689The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
     1690The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
     1691When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
     1692However, both the signalling and waiting thread W1 still need monitor @m1@.
     1693
     1694\begin{figure}
     1695\newbox\myboxA
     1696\begin{lrbox}{\myboxA}
     1697\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1698monitor M m1, m2;
     1699condition c;
     1700mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
     1701        ...
     1702        mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
     1703                ... `signal( c )`; ...
     1704                // m1, m2 acquired
     1705        } // $\LstCommentStyle{\color{red}release m2}$
     1706        // m1 acquired
     1707} // release m1
     1708\end{cfa}
     1709\end{lrbox}
     1710
     1711\newbox\myboxB
     1712\begin{lrbox}{\myboxB}
     1713\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1714
     1715
     1716mutex( m1 ) {
     1717        ...
     1718        mutex( m1, m2 ) {
     1719                ... `wait( c )`; // block and release m1, m2
     1720                // m1, m2 acquired
     1721        } // $\LstCommentStyle{\color{red}release m2}$
     1722        // m1 acquired
     1723} // release m1
     1724\end{cfa}
     1725\end{lrbox}
     1726
     1727\newbox\myboxC
     1728\begin{lrbox}{\myboxC}
     1729\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1730
     1731
     1732mutex( m2 ) {
     1733        ... `wait( c )`; ...
     1734        // m2 acquired
     1735} // $\LstCommentStyle{\color{red}release m2}$
     1736
     1737
     1738
     1739
     1740\end{cfa}
     1741\end{lrbox}
     1742
     1743\begin{cquote}
     1744\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
     1745\hspace{2\parindentlnth}
     1746\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
     1747\hspace{2\parindentlnth}
     1748\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
     1749\end{cquote}
     1750\caption{Barging Prevention}
     1751\label{f:BargingPrevention}
     1752\end{figure}
     1753
     1754One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
     1755However, Figure~\ref{f:OtherWaitingThread} shows this solution is complicated by other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
     1756The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
     1757In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1 so it is unaware of W1.
     1758Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
     1759
     1760While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
     1761Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
     1762Partial signalling transfers ownership of monitors to the front waiter.
     1763When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
     1764This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
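
These two actions can be expressed as pseudocode (the names are illustrative, not the \CFA runtime interface):
\begin{cfa}
// on monitor release, at block exit or wait:
void release( monitor * m ) {
	waiter * w = front( urgent );		// front waiter defines its needed monitor set
	transfer( m, w );					// pass ownership rather than unlocking
	if ( owns_all( w ) ) unblock( w );	// wake only when the whole set is held
}
\end{cfa}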
     1765
     1766\begin{comment}
     1767Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
     1768Because the third thread is signalled when secretly holding @B@, the goal  becomes unreachable.
     1769Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
     1770
     1771\paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
     1772\paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
     1773\\
     1774
     1775Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
     1776However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
     1777
     1778In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
     1779
     1780
     1781\subsubsection{Dependency graphs}
     1782
     1783\begin{figure}
     1784\begin{multicols}{3}
     1785Thread $\alpha$
     1786\begin{cfa}[numbers=left, firstnumber=1]
     1787acquire A
     1788        acquire A & B
     1789                wait A & B
     1790        release A & B
     1791release A
     1792\end{cfa}
     1793\columnbreak
     1794Thread $\gamma$
     1795\begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
     1796acquire A
     1797        acquire A & B
     1798                |\label{line:signal-ab}|signal A & B
     1799        |\label{line:release-ab}|release A & B
     1800        |\label{line:signal-a}|signal A
     1801|\label{line:release-a}|release A
     1802\end{cfa}
     1803\columnbreak
     1804Thread $\beta$
     1805\begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
     1806acquire A
     1807        wait A
     1808|\label{line:release-aa}|release A
     1809\end{cfa}
     1810\end{multicols}
     1811\begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
     1812\end{cfa}
     1813\begin{center}
     1814\input{dependency}
     1815\end{center}
     1816\caption{Dependency graph of the statements in listing \ref{f:dependency}}
     1817\label{fig:dependency}
     1818\end{figure}
     1819
     1820In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
     1821If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and then the waiter transfers ownership of @A@ back to the signaller when it releases it, then the problem is solved (@B@ is no longer in use at this point).
     1822Dynamically finding the correct order is therefore the second possible solution.
     1823The problem is effectively resolving a dependency graph of ownership requirements.
     1824Here even the simplest of code snippets requires two transfers and has a super-linear complexity.
     1825This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfers, and has multiple solutions.
     1826Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that acquiring multiple locks in different orders can cause deadlocks.
     1827\begin{figure}
     1828\begin{multicols}{2}
     1829\begin{cfa}
     1830acquire A
     1831        acquire B
     1832                acquire C
     1833                        wait A & B & C
     1834                release C
     1835        release B
     1836release A
     1837\end{cfa}
     1838
     1839\columnbreak
     1840
     1841\begin{cfa}
     1842acquire A
     1843        acquire B
     1844                acquire C
     1845                        signal A & B & C
     1846                release C
     1847        release B
     1848release A
     1849\end{cfa}
     1850\end{multicols}
     1851\begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
     1852\end{cfa}
     1853\end{figure}
     1854
     1855Given the three threads example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph, where every node is a statement of one of the three threads, and the arrows show the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
     1856The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
     1857Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one.
     1858
     1859\subsubsection{Partial Signalling} \label{partial-sig}
     1860\end{comment}
     1861
     1862
    18561863\section{External scheduling} \label{extsched}
    1857 % ======================================================================
    1858 % ======================================================================
     1864
    18591865An alternative to internal scheduling is external scheduling (see Table~\ref{tbl:sched}).
    18601866