# Changeset 3fc59bdb

Ignore:
Timestamp:
Jun 11, 2018, 10:49:54 AM (4 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
Children:
934d200, ee163895
Parents:
7bdcac1 (diff), f184ca3 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Files:
4 edited

 r7bdcac1 \subsection{\protect\CFA's Thread Building Blocks}

An important missing feature in C is threading\footnote{While the C11 standard defines a \protect\lstinline@threads.h@ header, it is minimal and defined as optional.
As such, library support for threading is far from widespread.
At the time of writing the paper, neither \protect\lstinline@gcc@ nor \protect\lstinline@clang@ support \protect\lstinline@threads.h@ in their standard libraries.}.
In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.

A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \textbf{API}.

\subsection{Internal Scheduling}
\label{s:InternalScheduling}

While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed.
For example, Figure~\ref{f:GenericBoundedBuffer} shows a bounded buffer that may be full/empty so producer/consumer threads must block.
Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} defines which thread acquires the critical section next.
\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside the monitor.
Both internal and external scheduling extend to multiple monitors in a natural way.
\begin{cfa}
monitor M { condition e; ... };
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e ); ...			$\C{// wait( e, m1, m2 )}$
	... wait( e, m1 ); ...
	... wait( e, m2 ); ...
}
void rtn$_1$( M & mutex m1, M & mutex m2 );
void rtn$_2$( M & mutex m1 );
void bar( M & mutex m1, M & mutex m2 ) {
	... waitfor( rtn ); ...			$\C{// waitfor( rtn$_1$, m1, m2 )}$
	... waitfor( rtn, m1 ); ...		$\C{// waitfor( rtn$_2$, m1 )}$
}
\end{cfa}
For @wait( e )@, the default semantics is to atomically block the signaller and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
Similarly, for @waitfor( rtn, ... )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
To statically verify the released monitors match the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
The ability to release a subset of acquired monitors can result in a \newterm{nested-monitor}~\cite{Lister77} deadlock.
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e, m1 ); ...			$\C{// release m1, keeping m2 acquired}$
}
void baz( M & mutex m1, M & mutex m2 ) {	$\C{// must acquire m1 and m2}$
	... signal( e ); ...
}
\end{cfa}
The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to enter @baz@ to get to the @signal@.
While deadlock can occur with multiple/nested acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
\begin{quote}
However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
\end{quote}
\CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
For example, there are no loops in either bounded-buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.

\subsection{Barging Prevention}

Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting thread to prevent barging into the outer @mutex@ statement by another thread.
However, both the signalling and signalled threads still need monitor @m1@.
\begin{figure}
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor M m1, m2;
condition c;
mutex( m1 ) { ...
	mutex( m1, m2 ) { ...
		wait( c ); // block and release m1, m2
		// m1, m2 acquired
	} // $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
} // release m1
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m1 ) { ...
	mutex( m1, m2 ) { ...
		signal( c ); ...
		// m1, m2 acquired
	} // $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
} // release m1
\end{cfa}
\end{lrbox}

\newbox\myboxC
\begin{lrbox}{\myboxC}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m1 ) { ...
	wait( c ); ...
	// m1 acquired
} // $\LstCommentStyle{\color{red}release m1}$
\end{cfa}
\end{lrbox}

\begin{cquote}
\subfloat[Waiting Thread]{\label{f:WaitingThread}\usebox\myboxA}
\hspace{2\parindentlnth}
\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxB}
\hspace{2\parindentlnth}
\subfloat[Other Waiting Thread]{\label{f:OtherWaitingThread}\usebox\myboxC}
\end{cquote}
\caption{Barging Prevention}
\label{f:BargingPrevention}
\end{figure}

The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred.
It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling.
This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantics viable by simply changing monitors to monitor groups.
This solution releases the monitors once every monitor in a group can be released.
However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released.
A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} lines \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
\begin{comment}
\paragraph{Case 1: thread $\alpha$ goes first.}
In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.}
In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
\\
Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
In both cases, the threads need to be able to distinguish, on a per-monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
\subsubsection{Dependency graphs}

\begin{figure}
\begin{multicols}{3}
Thread $\alpha$
\begin{cfa}[numbers=left, firstnumber=1]
acquire A
acquire A & B
wait A & B
release A & B
release A
\end{cfa}
\columnbreak
Thread $\gamma$
\begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
acquire A
acquire A & B
|\label{line:signal-ab}|signal A & B
|\label{line:release-ab}|release A & B
|\label{line:signal-a}|signal A
|\label{line:release-a}|release A
\end{cfa}
\columnbreak
Thread $\beta$
\begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
acquire A
wait A
|\label{line:release-aa}|release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
\end{cfa}
\begin{center}
\input{dependency}
\end{center}
\caption{Dependency graph of the statements in listing \ref{f:dependency}}
\label{fig:dependency}
\end{figure}

In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and the waiter then transfers ownership of @A@ back to the signaller when it releases it, the problem is solved (@B@ is no longer in use at this point).
Dynamically finding the correct order is therefore the second possible solution.
The problem is effectively resolving a dependency graph of ownership requirements.
Here even the simplest of code snippets requires two transfers and has super-linear complexity.
This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfers, and has multiple solutions.
Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that multiple lock-acquisition orders can cause deadlocks.
\begin{figure}
\begin{multicols}{2}
\begin{cfa}
acquire A
acquire B
acquire C
wait A & B & C
release C
release B
release A
\end{cfa}
\columnbreak
\begin{cfa}
acquire A
acquire B
acquire C
signal A & B & C
release C
release B
release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
\end{cfa}
\end{figure}

Given the three-thread example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph, where every node is a statement of one of the three threads, and the arrows are the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
Because resolving dependency graphs is a complex and expensive endeavour, this solution is not the preferred one.

\subsubsection{Partial Signalling} \label{partial-sig}
\end{comment}

Finally, the solution chosen for \CFA is to use partial signalling.
Again using listing \ref{f:int-bulk-cfa}, the partial-signalling solution transfers ownership of monitor @B@ at line \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor @A@.
Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen.
Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
Using partial signalling, listing \ref{f:dependency} can be solved easily:
\begin{itemize}
\item
When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
\item
When thread $\gamma$ reaches line \ref{line:release-a} it transfers monitor @A@ to thread $\beta$ and wakes it up.
\item
When thread $\beta$ reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
\end{itemize}

\subsection{Signalling: Now or Later}

Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside (internally) of the monitor.
For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread: the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
The waiter unblocks next, takes the state, and exits the monitor.
Blocking signalling is the reverse, where the waiter is providing the cooperation for the signalling thread: the signaller enters the monitor, detects a waiting thread providing the necessary state, and performs a blocking signal to place it on the urgent queue and unblock the waiter.
The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to take the state.

Figure~\ref{f:DatingService} shows a dating service demonstrating the two forms of signalling: non-blocking and blocking.
The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
A thread blocks until an appropriate partner arrives.
The complexity is exchanging the phone numbers in the monitor: while the non-barging monitor prevents a caller from stealing a phone number, the monitor mutual-exclusion property means the two partners cannot both be in the monitor at the same time to perform the exchange.

\begin{figure}
\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
\caption{Dating service.}
\label{f:DatingService}
\end{figure}

An important note is that, until now, signalling a monitor was a delayed operation.
The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the @signal@ statement.
However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the @signal_block@ routine.

The example in table \ref{tbl:datingservice} highlights the difference in behaviour.
As mentioned, @signal@ only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed.
To avoid this explicit synchronization, the @condition@ type offers the @signal_block@ routine, which handles the two-way handshake as shown in the example.
This feature removes the need for a second condition variable and simplifies programming.
Like every other monitor semantic, @signal_block@ uses barging prevention, which means mutual exclusion is baton-passed both on the front end and the back end of the call to @signal_block@, meaning no other thread can acquire the monitor either before or after the call.

% ======================================================================
% ======================================================================

Both internal and external scheduling extend to multiple monitors in a natural way.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
\begin{cfa}
monitor M { condition e; ... };
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e ); ...   // wait( e, m1, m2 )
	... wait( e, m1 ); ...
	... wait( e, m2 ); ...
}
\end{cfa}
&
\begin{cfa}
void rtn$_1$( M & mutex m1, M & mutex m2 );
void rtn$_2$( M & mutex m1 );
void bar( M & mutex m1, M & mutex m2 ) {
	... waitfor( rtn ); ...       // $\LstCommentStyle{waitfor( rtn$_1$, m1, m2 )}$
	... waitfor( rtn, m1 ); ...   // $\LstCommentStyle{waitfor( rtn$_2$, m1 )}$
}
\end{cfa}
\end{tabular}
\end{cquote}
For @wait( e )@, the default semantics is to atomically block the signaller and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
Finally, a signaller,
\begin{cfa}
void baz( M & mutex m1, M & mutex m2 ) {
	... signal( e ); ...
}
\end{cfa}
must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the front of the condition queue.
In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.
Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
To statically verify the released monitors match the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
The ability to release a subset of acquired monitors can result in a \newterm{nested-monitor}~\cite{Lister77} deadlock.
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e, m1 ); ...			$\C{// release m1, keeping m2 acquired}$
}
void baz( M & mutex m1, M & mutex m2 ) {	$\C{// must acquire m1 and m2}$
	... signal( e ); ...
}
\end{cfa}
The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to enter @baz@ to get to the @signal@.
While deadlock can occur with multiple/nested acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.

Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
\begin{quote}
However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
\end{quote}
\CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
For example, there are no loops in either bounded-buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.

\subsection{Barging Prevention}

Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
However, both the signalling thread and waiting thread W1 still need monitor @m1@.

\begin{figure}
\newbox\myboxA
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor M m1, m2;
condition c;
mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
	...
	mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
		... signal( c ); ...
		// m1, m2 acquired
	} // $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
} // release m1
\end{cfa}
\end{lrbox}

\newbox\myboxB
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m1 ) { ...
	mutex( m1, m2 ) { ...
		wait( c ); // block and release m1, m2
		// m1, m2 acquired
	} // $\LstCommentStyle{\color{red}release m2}$
	// m1 acquired
} // release m1
\end{cfa}
\end{lrbox}

\newbox\myboxC
\begin{lrbox}{\myboxC}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
mutex( m2 ) { ...
	wait( c ); ...
	// m2 acquired
} // $\LstCommentStyle{\color{red}release m2}$
\end{cfa}
\end{lrbox}

\begin{cquote}
\subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxA}
\hspace{2\parindentlnth}
\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
\hspace{2\parindentlnth}
\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
\end{cquote}
\caption{Barging Prevention}
\label{f:BargingPrevention}
\end{figure}

One scheduling solution is for the signaller to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
However, Figure~\ref{f:OtherWaitingThread} shows this solution is complex depending on other waiters, resulting in choices when the signaller finishes the inner mutex-statement.
The signaller can retain @m2@ until completion of the outer mutex statement and pass the locks to waiter W1, or it can pass @m2@ to waiter W2 after completing the inner mutex-statement, while continuing to hold @m1@.
In the latter case, waiter W2 must eventually pass @m2@ to waiter W1, which is complex because W2 may have waited before W1, so it is unaware of W1.
Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.

While a number of approaches were examined~\cite[\S~4.3]{Delisle18}, the solution chosen for \CFA is a novel technique called \newterm{partial signalling}.
Signalled threads are moved to an urgent queue and the waiter at the front defines the set of monitors necessary for it to unblock.
Partial signalling transfers ownership of monitors to the front waiter.
When the signaller thread exits or waits in the monitor, the front waiter is unblocked if all its monitors are released.
This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.

\begin{comment}
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} lines \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
\paragraph{Case 1: thread $\alpha$ goes first.}
In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.}
In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
\\
Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
In both cases, the threads need to be able to distinguish, on a per-monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
\subsubsection{Dependency graphs}

\begin{figure}
\begin{multicols}{3}
Thread $\alpha$
\begin{cfa}[numbers=left, firstnumber=1]
acquire A
acquire A & B
wait A & B
release A & B
release A
\end{cfa}
\columnbreak
Thread $\gamma$
\begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
acquire A
acquire A & B
|\label{line:signal-ab}|signal A & B
|\label{line:release-ab}|release A & B
|\label{line:signal-a}|signal A
|\label{line:release-a}|release A
\end{cfa}
\columnbreak
Thread $\beta$
\begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
acquire A
wait A
|\label{line:release-aa}|release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
\end{cfa}
\begin{center}
\input{dependency}
\end{center}
\caption{Dependency graph of the statements in listing \ref{f:dependency}}
\label{fig:dependency}
\end{figure}

In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and the waiter then transfers ownership of @A@ back to the signaller when it releases it, the problem is solved (@B@ is no longer in use at this point).
Dynamically finding the correct order is therefore the second possible solution.
The problem is effectively resolving a dependency graph of ownership requirements.
Here even the simplest of code snippets requires two transfers and has super-linear complexity.
This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfers, and has multiple solutions.
Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that multiple lock-acquisition orders can cause deadlocks.
\begin{figure}
\begin{multicols}{2}
\begin{cfa}
acquire A
acquire B
acquire C
wait A & B & C
release C
release B
release A
\end{cfa}
\columnbreak
\begin{cfa}
acquire A
acquire B
acquire C
signal A & B & C
release C
release B
release A
\end{cfa}
\end{multicols}
\begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
\end{cfa}
\end{figure}

Given the three-thread example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph, where every node is a statement of one of the three threads, and the arrows are the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
Because resolving dependency graphs is a complex and expensive endeavour, this solution is not the preferred one.

\subsubsection{Partial Signalling} \label{partial-sig}
\end{comment}

\section{External scheduling} \label{extsched}
% ======================================================================
% ======================================================================

An alternative to internal scheduling is external scheduling (see Table~\ref{tbl:sched}).
There is also a set of _explicit_ conversions that are only allowed through a
cast expression.
I propose that safe, unsafe, and explicit (cast) conversions be expressed as
constructor variants.
Throughout this article, I will use the following operator names for
constructors and conversion functions from From to To:

    void ?{} ( To&, To );            // copy constructor
    void ?{} ( To&, From );          // explicit constructor
    void ?{explicit} ( To&, From );  // explicit cast conversion
    void ?{safe} ( To&, From );      // implicit safe conversion
    void ?{unsafe} ( To&, From );    // implicit unsafe conversion

It has been suggested that all constructors would define unsafe implicit
conversions; this is elegant, but interacts poorly with tuples.
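For a rough point of comparison (C++, not part of this proposal): C++ already distinguishes two of these variants per constructor, an implicit converting constructor versus an explicit (cast-only) one, though it has no safe/unsafe split:

```cpp
struct From { int v; };

struct To {
    int v;
    To(From f) : v(f.v) {}                   // implicit conversion (cf. ?{safe}/?{unsafe})
};

struct ToExplicit {
    int v;
    explicit ToExplicit(From f) : v(f.v) {}  // cast required (cf. ?{explicit})
};

int useTo(To t) { return t.v; }

// useTo(From{3}) is accepted via the implicit converting constructor;
// a ToExplicit can only be created with an explicit cast, ToExplicit(From{4}).
```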
Essentially, without making this distinction, a constructor like the
following multiplies the space of possible interpretations of all functions:

    void ?{}( Coord& this, int x, int y );

That said, it would certainly be possible to make a multiple-argument
implicit
used infrequently:

    void ?{unsafe}( Coord& this, int x, int y );

An alternate possibility would be to only count two-arg constructors
void ?{} ( To&, From ) as unsafe conversions; under this semantics, safe and
explicit conversions should also have a compiler-enforced restriction to
ensure that they are two-arg functions (this restriction may be valuable
is convertible to To.

If user-defined conversions are not added to the language,
void ?{} ( To&, From ) may be a suitable representation, relying on
conversions on the argument types to account for transitivity.
Since To& should be an exact match on To, this should put all the implicit
conversions on the RHS. On the other hand, under some models (like [1]),
implicit conversions are not allowed in assertion parameters, so another
assertion syntax specific to conversions may be required,
e.g. From -> To.

It has also been suggested that, for programmer control, no implicit
conversions (except, possibly, for polymorphic specialization) should be
allowed in resolution of cast operators.

[1] ../working/assertion_resolution.md

### Constructor Idiom ###

that we can use the full range of Cforall features for conversions,
including polymorphism.
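The interpretation-space concern has a close C++ analogue (an illustration, not CFA semantics): a non-explicit two-argument constructor lets any braced pair of ints convert implicitly at every matching call site, adding an interpretation to overload resolution wherever such an argument appears:

```cpp
struct Coord {
    int x, y;
    Coord(int px, int py) : x(px), y(py) {}  // non-explicit: {a, b} converts implicitly
};

int dist2(Coord c) { return c.x * c.x + c.y * c.y; }

// dist2({3, 4}) type-checks because the braced list is implicitly converted
// to Coord; every such constructor enlarges the space of interpretations
// the resolver must consider, which is the blow-up described above.
```

Marking the constructor `explicit` removes the implicit interpretation, which is the per-constructor knob C++ offers in place of the proposed variant names.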
In an earlier version of this proposal, Glen Ditchfield defines a
_constructor idiom_ that can be used to create chains of safe conversions
without duplicating code; given a type Safe which members of another type
From can be directly converted to, the constructor idiom allows us to write
a conversion for any type To which Safe converts to:

    forall(otype To | { void ?{safe}( To&, Safe ) })
    void ?{safe}( To& this, From that ) {
        Safe tmp = /* some expression involving that */;
        this{ tmp };  // initialize from assertion parameter
    }

unsafe conversions.

Glen's original suggestion said the copy constructor for To should also be
accepted as a resolution for void ?{safe}( To&, Safe ) (Safe == To),
allowing this same code to be used for the single-step conversion as well.
This proposal does come at the cost of an extra copy initialization of the
target value, though.
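The shape of the idiom can be imitated in C++ with a function template (a hypothetical sketch: the assertion parameter becomes a requirement that To be constructible from Safe, and the type names are invented for illustration):

```cpp
struct From { int v; };

// Monomorphic step: From converts directly to Safe.
struct Safe {
    int v;
    Safe(From f) : v(f.v) {}
};

// Transitive step: any To constructible from Safe gains a conversion from
// From, without duplicating the From -> Safe logic.
template<typename To>
To convert(From that) {
    Safe tmp = that;   // uses the monomorphic conversion
    return To(tmp);    // "assertion parameter": To must construct from Safe
}

struct Metres {
    int v;
    Metres(Safe s) : v(s.v) {}
};
```

Note that C++ itself never chains two user-defined conversions implicitly; the template makes the chain explicit, which is exactly the boilerplate the CFA idiom is meant to generate automatically.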
Contrariwise, if a monomorphic conversion from From to Safe is written, e.g.:

    void ?{safe}( Safe& this, From that ) {
        this{ /* some parameters involving that */ };
    }

Then the code for a transitive conversion from From to any To type
convertible from Safe is written:

    forall(otype To | { void ?{safe}( To&, Safe ) })
    void ?{safe}( To& this, From that ) {
        Safe tmp = that;  // uses monomorphic conversion
        this{ tmp };      // initialize from assertion parameter
    }

Given the entirely-boilerplate nature of this code, but the negative
performance implications of the unmodified constructor idiom, it might be
fruitful to have transitive and single-step conversion operators, and let
CFA build the transitive conversions; some possible names:

    void ?{safe}  (To&, From);    void ?{final safe} (To&, From);  // single-step
    void ?{safe*} (To&, From);    void ?{safe}       (To&, From);  // transitive

What selective non-use of the constructor idiom gives us is the ability to
define a conversion that may only be the *last* conversion in a chain of
such.

Constructing a conversion graph able to unambiguously represent the full
hierarchy of implicit conversions in C is provably impossible using only
single-step conversions with no additional information (see Appendix A),
but this mechanism is sufficiently powerful (see [1], though the design
there has some minor bugs; the general idea is to use the constructor idiom
to define two chains of conversions, one among the signed integral types,
another among the unsigned, and to use monomorphic conversions to allow
conversions between signed and unsigned integer types).

One use for this is to solve the problem that explicit conversions were
added to C++ for, that of conversions to bool chaining to become conversions
to any arithmetic type. Another use is to unambiguously represent the full
hierarchy of implicit conversions in C by making sign conversions
non-transitive, allowing the compiler to resolve e.g.
int -> unsigned long as int -> long -> unsigned long over
int -> unsigned int -> unsigned long. See [2] for more details.

[2] ../working/glen_conversions/index.html#usual

### Appendix A: Partial and Total Orders ###

convert from int to unsigned long, so we just put in a direct conversion
and make the compiler smart enough to figure out the costs" - this is the
approach taken by the existing compiler, but given that in a user-defined
conversion proposal the users can build an arbitrary graph of conversions,
this case still needs to be handled.

exists a chain of conversions from a to b (see Appendix A for description
of preorders and related constructs).

This preorder roughly corresponds to a more usual type-theoretic concept of
subtyping ("if I can convert a to b, a is a more specific type than b");
however, since this graph is arbitrary, it may contain cycles, so if

and so is considered to be the nearer type. By transitivity, then, the
conversion from X to Y2 should be cheaper than the conversion from X to W,
but in this case the Y2 and W are incomparable by the conversion preorder,
so the tie is broken by the shorter path from X to W in favour of W,
contradicting the transitivity property
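As a concrete aside on the int -> unsigned long resolution above: the two candidate paths really do compute different values, which is why the choice matters. The sketch below uses fixed-width types standing in for int/long/unsigned long on an LP64 platform (an illustration, not compiler code):

```cpp
#include <cstdint>

// Path 1, int -> long -> unsigned long: sign-extend first, then reinterpret.
uint64_t viaSigned(int32_t x) {
    return static_cast<uint64_t>(static_cast<int64_t>(x));
}

// Path 2, int -> unsigned int -> unsigned long: wrap to unsigned first.
uint64_t viaUnsigned(int32_t x) {
    return static_cast<uint64_t>(static_cast<uint32_t>(x));
}

// For x = -1 the paths disagree: sign-extending first yields all-ones in
// 64 bits (matching C's usual arithmetic conversions), while wrapping to
// 32-bit unsigned first yields only the low 32 bits set.
```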