Changeset 90cedbdd for doc/papers/concurrency
- Timestamp: Jun 8, 2018, 1:40:40 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
- Children: 7b28e4a
- Parents: 08b5a7e
- File: 1 edited
Legend:
- Unmodified lines have no prefix
- Added lines are prefixed with +
- Removed lines are prefixed with -
- … marks elided unchanged lines
doc/papers/concurrency/Paper.tex
r08b5a7e → r90cedbdd

\subsection{\protect\CFA's Thread Building Blocks}

-An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
+An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
As such, library support for threading is far from widespread.
-At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
+At the time of writing the paper, neither \protect\lstinline@gcc@ nor \protect\lstinline@clang@ support \protect\lstinline@threads.h@ in their standard libraries.}.
In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
…
}
\end{cfa}
-A consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{api}.
+A consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{API}.
\end{comment}

…
\label{s:InternalScheduling}

-While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer, Figure~\ref{f:GenericBoundedBuffer}, may be full/empty so producer/consumer threads must block.
+While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed.
+For example, Figure~\ref{f:GenericBoundedBuffer} shows a bounded buffer that may be full/empty so producer/consumer threads must block.
Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
-The synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} is defined as indicating which thread acquires the critical section next.
+Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} defines which thread acquires the critical section next.
\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.

…
External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
-Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside the monitor.
+Threads making calls to routines that are currently excluded block outside (external) of the monitor on a calling queue, versus blocking on condition queues inside (internal) of the monitor.
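To make the two scheduling styles concrete, here is a sketch of a bounded buffer in the spirit of the (elided) Figure~\ref{f:GenericBoundedBuffer}; the monitor name, condition names, and fixed buffer size are illustrative assumptions rather than the paper's exact figure.
\begin{cfa}
monitor Buffer {                                // shared data accessed under mutual exclusion
	condition notFull, notEmpty;                // queues of blocked producers/consumers
	int front, back, count;
	int elements[10];
};
void ?{}( Buffer & buffer ) {                   // constructor: start empty
	buffer.front = buffer.back = buffer.count = 0;
}
void insert( Buffer & mutex buffer, int elem ) with( buffer ) {
	if ( count == 10 ) wait( notFull );         // internal scheduling: producer blocks when full
	elements[back] = elem;
	back = ( back + 1 ) % 10;
	count += 1;
	signal( notEmpty );                         // unblock a waiting consumer, if any
}
int remove( Buffer & mutex buffer ) with( buffer ) {
	if ( count == 0 ) wait( notEmpty );         // internal scheduling: consumer blocks when empty
	int elem = elements[front];
	front = ( front + 1 ) % 10;
	count -= 1;
	signal( notFull );                          // unblock a waiting producer, if any
	return elem;
}
\end{cfa}
An external-scheduling variant would drop both conditions and instead have @insert@ execute @waitfor( remove, buffer )@ when the buffer is full (and @remove@ execute @waitfor( insert, buffer )@ when empty), blocking excluded callers on the monitor's entry queue rather than on conditions.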

Both internal and external scheduling extend to multiple monitors in a natural way.
+\begin{cquote}
+\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
monitor M { `condition e`; ... };
void foo( M & mutex m1, M & mutex m2 ) {
-	... wait( `e` ); ...	$\C{// wait( e, m1, m2 )}$
+	... wait( `e` ); ...	// wait( e, m1, m2 )
	... wait( `e, m1` ); ...
	... wait( `e, m2` ); ...
}
\end{cfa}
+&
+\begin{cfa}
void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
void rtn$\(_2\)$( M & mutex m1 );
void bar( M & mutex m1, M & mutex m2 ) {
-	... waitfor( `rtn` ); ...	$\C{// waitfor( rtn\(_1\), m1, m2 )}$
+	... waitfor( `rtn` ); ...	// $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
-	... waitfor( `rtn, m1` ); ...	$\C{// waitfor( rtn\(_2\), m1 )}$
+	... waitfor( `rtn, m1` ); ...	// $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
}
\end{cfa}
+\end{tabular}
+\end{cquote}
For @wait( e )@, the default semantics is to atomically block the signaller and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
-Similarly, for @waitfor( rtn, ... )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
+Finally, a signaller,
+\begin{cfa}
+void baz( M & mutex m1, M & mutex m2 ) {
+	... signal( e ); ...
+}
+\end{cfa}
+must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the front of the condition queue.
+In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
+
+Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
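As an illustration of these defaults (not an example from the paper; the routine names and the @ready@ field are assumed), the following sketch pairs a waiter and a signaller over two monitors: the waiter's @wait( c )@ implicitly releases both @m1@ and @m2@, and the signaller holds the same two monitors, so its lock set covers the waiter's as required.
\begin{cfa}
monitor M { int ready; };                       // illustrative monitor with some shared state
condition c;                                    // condition declared alongside the monitors
void consume( M & mutex m1, M & mutex m2 ) {
	if ( ! m1.ready || ! m2.ready ) wait( c ); // default: wait( c, m1, m2 ), releases both monitors
	// ... use the state of m1 and m2 ...
}
void produce( M & mutex m1, M & mutex m2 ) {
	m1.ready = m2.ready = 1;
	signal( c );                               // signaller holds m1 and m2, covering the waiter's set
}
\end{cfa}
If @consume@ instead waited with @wait( c, m1 )@, it would retain @m2@ while blocked, and a signaller holding only @m1@ would presumably suffice.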
…
The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
-When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting thread to prevent barging into the outer @mutex@ statement by another thread.
-However, both the signalling and signalled threads still need monitor @m1@.
+When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
+However, both the signalling and waiting thread W1 still need monitor @m1@.

\begin{figure}
…
monitor M m1, m2;
condition c;
+mutex( m1 ) {	// $\LstCommentStyle{\color{red}outer}$
+	...
+	mutex( m1, m2 ) {	// $\LstCommentStyle{\color{red}inner}$
+		... `signal( c )`; ...
+		// m1, m2 acquired
+	} // $\LstCommentStyle{\color{red}release m2}$
+	// m1 acquired
+} // release m1
+\end{cfa}
+\end{lrbox}
+
+\newbox\myboxB
+\begin{lrbox}{\myboxB}
+\begin{cfa}[aboveskip=0pt,belowskip=0pt]
+
+
mutex( m1 ) {
	...
…
\end{lrbox}

-\newbox\myboxB
-\begin{lrbox}{\myboxB}
-\begin{cfa}[aboveskip=0pt,belowskip=0pt]
-
-
-mutex( m1 ) {
-	...
-	mutex( m1, m2 ) {
-		... `signal( c )`; ...
-		// m1, m2 acquired
-	} // $\LstCommentStyle{\color{red}release m2}$
-	// m1 acquired
-} // release m1
-\end{cfa}
-\end{lrbox}
-
\newbox\myboxC
\begin{lrbox}{\myboxC}
…


-mutex( m1 ) {
+mutex( m2 ) {
	... `wait( c )`; ...
-	// m1 acquired
-} // $\LstCommentStyle{\color{red}release m1}$
+	// m2 acquired
+} // $\LstCommentStyle{\color{red}release m2}$


…

\begin{cquote}
-\subfloat[Waiting Thread]{\label{f:WaitingThread}\usebox\myboxA}
+\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
\hspace{2\parindentlnth}
-\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxB}
+\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
\hspace{2\parindentlnth}
-\subfloat[Other Waiting Thread]{\label{f:SignallingThread}\usebox\myboxC}
+\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
\end{cquote}
\caption{Barging Prevention}
\end{figure}

-The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred.
-It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling.
-This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups.
-This solution releases the monitors once every monitor in a group can be released.
-However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released.
-A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
-
-However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
+One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
+However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
+The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
+In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1, so it is unaware of W1.
+Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
+
+While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
+Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
+Partial signalling transfers ownership of monitors to the front waiter.
+When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
+This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
+
+\begin{comment}
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:

\paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
…
\subsubsection{Partial Signalling} \label{partial-sig}
\end{comment}
-
-Finally, the solution that is chosen for \CFA is to use partial signalling.
-Again using listing \ref{f:int-bulk-cfa}, the partial signalling solution transfers ownership of monitor @B@ at lines \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor @A@.
-Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
-This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
-This solution has a much simpler implementation than a dependency graph solving algorithms, which is why it was chosen.
-Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
-
-Using partial signalling, listing \ref{f:dependency} can be solved easily:
-\begin{itemize}
-\item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
-\item When thread $\gamma$ reaches line \ref{line:release-a} it transfers monitor @A@ to thread $\beta$ and wakes it up.
-\item When thread $\beta$ reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
-\end{itemize}

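To illustrate the partial-signalling behaviour introduced in the diff above, the following sketch restates the Barging Prevention scenario in one place, with comments tracing when monitor ownership moves and when the waiter is actually unblocked (the routine names and the @state@ field are illustrative):
\begin{cfa}
monitor M { int state; };                       // illustrative monitor type
M m1, m2;                                       // two monitor objects
condition c;

void waiter() {                                 // thread W1
	mutex( m1 ) {
		mutex( m1, m2 ) {
			wait( c );                          // blocks and releases m1 and m2
		}
	}
}
void signaller() {
	mutex( m1 ) {                               // outer
		mutex( m1, m2 ) {                       // inner
			signal( c );                        // W1 moves to the urgent queue, needing m1 and m2
		}                                       // inner end: ownership of m2 is passed to W1,
		                                        //   but W1 stays blocked because m1 is still held
	}                                           // outer end: m1 is also passed; W1 now holds its
	                                            //   full monitor set and is unblocked without barging
}
\end{cfa}
A second waiter needing only @m2@ (thread W2 in the figure) is handled by the same two actions: if it is at the front of the urgent queue, the signaller can pass @m2@ to it at the end of the inner statement while retaining @m1@, and W2 unblocks immediately.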