# Changeset 9c32e21

Timestamp:
Jun 8, 2018, 3:47:35 PM (4 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
Children:
f184ca3
Parents:
6eb131c (diff), 7b28e4a (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

File:
1 edited

\subsection{\protect\CFA's Thread Building Blocks}

An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
As such, library support for threading is far from widespread.
At the time of writing the paper, neither \protect\lstinline@gcc@ nor \protect\lstinline@clang@ support \protect\lstinline@threads.h@ in their standard libraries.}.
In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
}
\end{cfa}
A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \textbf{API}.
\end{comment}

\label{s:InternalScheduling}

While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed.
For example, Figure~\ref{f:GenericBoundedBuffer} shows a bounded buffer that may be full/empty so producer/consumer threads must block.
Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} defines which thread acquires the critical section next.
\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside (internally) of the monitor.
Both internal and external scheduling extend to multiple monitors in a natural way.
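The internal-scheduling pattern of the bounded buffer can be approximated in plain C with pthreads; the following is a minimal sketch (the names @Buffer@, @insert@, and @remove_@ are illustrative, not \CFA's API), where condition variables park blocked producers/consumers instead of busy waiting:

```c
#include <pthread.h>

#define N 4

// Bounded buffer in monitor style: one mutex for mutual exclusion,
// two condition variables for internal scheduling.
typedef struct {
	int elems[N];
	int front, back, count;
	pthread_mutex_t lock;
	pthread_cond_t not_full, not_empty;	// producers/consumers park here
} Buffer;

void buffer_init( Buffer * b ) {
	b->front = b->back = b->count = 0;
	pthread_mutex_init( &b->lock, NULL );
	pthread_cond_init( &b->not_full, NULL );
	pthread_cond_init( &b->not_empty, NULL );
}

void insert( Buffer * b, int v ) {
	pthread_mutex_lock( &b->lock );
	while ( b->count == N )			// full => park (no busy waiting)
		pthread_cond_wait( &b->not_full, &b->lock );
	b->elems[b->back] = v;
	b->back = (b->back + 1) % N;
	b->count += 1;
	pthread_cond_signal( &b->not_empty );	// wake one blocked consumer
	pthread_mutex_unlock( &b->lock );
}

int remove_( Buffer * b ) {			// trailing _ avoids clash with stdio remove()
	pthread_mutex_lock( &b->lock );
	while ( b->count == 0 )			// empty => park
		pthread_cond_wait( &b->not_empty, &b->lock );
	int v = b->elems[b->front];
	b->front = (b->front + 1) % N;
	b->count -= 1;
	pthread_cond_signal( &b->not_full );	// wake one blocked producer
	pthread_mutex_unlock( &b->lock );
	return v;
}
```

Note the @while@ loops around the waits: pthread condition variables have no barging prevention, so a woken thread must recheck its predicate, which is one of the problems the \CFA monitor design addresses directly.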
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
monitor M { condition e; ... };
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e ); ...   // wait( e, m1, m2 )
	... wait( e, m1 ); ...
	... wait( e, m2 ); ...
}
\end{cfa}
&
\begin{cfa}
void rtn$$$_1$$$( M & mutex m1, M & mutex m2 );
void rtn$$$_2$$$( M & mutex m1 );
void bar( M & mutex m1, M & mutex m2 ) {
	... waitfor( rtn ); ...   // $\LstCommentStyle{waitfor( rtn$$_1$$, m1, m2 )}$
	... waitfor( rtn, m1 ); ...   // $\LstCommentStyle{waitfor( rtn$$_2$$, m1 )}$
}
\end{cfa}
\end{tabular}
\end{cquote}
For @wait( e )@, the default semantics is to atomically block the calling thread and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
Finally, a signaller,
\begin{cfa}
void baz( M & mutex m1, M & mutex m2 ) {
	... signal( e ); ...
}
\end{cfa}
must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the front of the condition queue.
In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
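For contrast, consider the same situation in pthreads, where @pthread_cond_wait@ releases exactly one mutex: a thread blocking while holding two locks must release the second by hand before waiting. The following sketch (the helper @two_lock_wait@ and flag @ready@ are hypothetical, not part of either API) shows why \CFA's atomic multi-monitor release is valuable; correctness here depends on the signaller updating @*ready@ and signalling only while holding @m1@:

```c
#include <pthread.h>
#include <stdbool.h>

// Wait for *ready while holding two locks. pthread_cond_wait releases
// only m1, so m2 must be released and reacquired manually around the
// wait -- a non-atomic window that CFA's wait( e, m1, m2 ) avoids.
void two_lock_wait( pthread_mutex_t * m1, pthread_mutex_t * m2,
		pthread_cond_t * c, bool * ready ) {
	pthread_mutex_lock( m1 );
	pthread_mutex_lock( m2 );
	while ( ! *ready ) {
		pthread_mutex_unlock( m2 );	// manual release: not atomic with the wait
		pthread_cond_wait( c, m1 );	// releases and reacquires m1 only
		pthread_mutex_lock( m2 );	// reacquire after waking, then recheck
	}
	pthread_mutex_unlock( m2 );
	pthread_mutex_unlock( m1 );
}
```

Because the signaller must hold @m1@ to change @*ready@, no wakeup can be lost in the window between unlocking @m2@ and blocking on @c@; losing that guarantee is exactly the barging hazard the monitor semantics are designed to rule out.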
Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.

The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
However, both the signalling thread and waiting thread W1 still need monitor @m1@.

\begin{figure}
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor M m1, m2;
condition c;
mutex( m1 ) {	// $\LstCommentStyle{\color{red}outer}$
	...
	mutex( m1, m2 ) {	// $\LstCommentStyle{\color{red}inner}$
		... signal( c ); ...
		// m1, m2 acquired
	}	// $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
}	// release m1
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m1 ) {
	...
	mutex( m1, m2 ) {
		... wait( c ); ...
		// m1, m2 acquired
	}	// $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
}	// release m1
\end{cfa}
\end{lrbox}

\newbox\myboxC
\begin{lrbox}{\myboxC}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m1 ) {
	mutex( m2 ) {
		... wait( c ); ...
		// m1 acquired
	}	// $\LstCommentStyle{\color{red}release m1}$
	// m2 acquired
}	// $\LstCommentStyle{\color{red}release m2}$
\end{cfa}
\end{lrbox}

\begin{cquote}
\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
\hspace{2\parindentlnth}
\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
\hspace{2\parindentlnth}
\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
\end{cquote}
\caption{Barging Prevention}
\end{figure}
One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1, so it is unaware of W1.
Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
Partial signalling transfers ownership of monitors to the front waiter.
When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.

\begin{comment}
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:

\begin{comment}
\paragraph{Case 1: thread $\alpha$ goes first.}
In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.}
In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.

\subsubsection{Partial Signalling}
\label{partial-sig}
\end{comment}

Finally, the solution that is chosen for \CFA is to use partial signalling.
Again using listing \ref{f:int-bulk-cfa}, the partial-signalling solution transfers ownership of monitor @B@ at line \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor @A@.
Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen.
Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.

Using partial signalling, listing \ref{f:dependency} can be solved easily:
\begin{itemize}
\item
When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
\item
When thread $\gamma$ reaches line \ref{line:release-a} it transfers monitor @A@ to thread $\beta$ and wakes it up.
\item
When thread $\beta$ reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
\end{itemize}
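The two actions of partial signalling, passing monitors to the next owner and conditionally waking the front waiter, can be modelled without threads. The following toy sketch (monitors as bits in a mask; the @Waiter@ type and @release_monitor@ helper are illustrative, not the \CFA runtime's data structures) captures the bookkeeping:

```c
#include <stdbool.h>

// Toy model of partial signalling: monitors are bits in a mask.
// A signalled waiter sits at the front of the urgent queue and records
// the monitor set it must reacquire before it can run.
typedef struct {
	unsigned needed;	// monitors the waiter must hold to unblock
	unsigned held;		// monitors transferred to it so far
	bool running;
} Waiter;

// Signaller releases monitor mask m: transfer ownership to the front
// waiter and unblock it only when its full set is held.
void release_monitor( Waiter * front, unsigned m ) {
	if ( front->needed & m ) {
		front->held |= m;			// pass monitor to next owner
		if ( front->held == front->needed )
			front->running = true;		// all monitors held: wake waiter
	}
}
```

This mirrors the barging-prevention scenario: the signaller passes @m2@ (bit 2) when it leaves the inner mutex statement and @m1@ (bit 1) when it leaves the outer one, and waiter W1 unblocks only after the second transfer completes its set.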