doc/proposals/concurrency/text/concurrency.tex (modified) (45 diffs)
% ======================================================================
% ======================================================================
Several tools can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or on other paradigms closely related to networking concepts (channels~\cite{CSP,Go}, for example). However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.

Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.
An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Herlihy93}. While this approach is even pursued by system languages like \CC~\cite{Cpp-Transactions}, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems languages, which is why it was rejected as the core paradigm for concurrency in \CFA.

One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency construct.

\section{Basics}
Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access. On the other hand, synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish timing relationships among threads.

\subsection{Mutual-Exclusion}
As mentioned above, mutual-exclusion is the guarantee that only a fixed number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Ease of use comes either by guaranteeing that some problems cannot occur (e.g., deadlock freedom) or by offering a more explicit coupling between data and the corresponding critical section. For example, the \CC \code{std::atomic<T>} offers an easy way to express mutual-exclusion on a restricted set of operations (e.g., reading/writing large types atomically). Another challenge with low-level locks is composability. Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
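To make the composability issue concrete, here is a small illustration in plain C with POSIX mutexes (not \CFA code, and not taken from the original document): two threads that acquire the same pair of low-level locks in opposite order can deadlock, which is exactly the coordination burden higher-level constructs try to take off the user.
\begin{cfacode}
#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void * thread1(void * arg) {
   pthread_mutex_lock(&A);      //acquires A then B
   pthread_mutex_lock(&B);
   /* critical section */
   pthread_mutex_unlock(&B);
   pthread_mutex_unlock(&A);
   return NULL;
}

void * thread2(void * arg) {
   pthread_mutex_lock(&B);      //acquires B then A: may deadlock with thread1
   pthread_mutex_lock(&A);
   /* critical section */
   pthread_mutex_unlock(&A);
   pthread_mutex_unlock(&B);
   return NULL;
}
\end{cfacode}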
\subsection{Synchronization}
As for mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (e.g., message passing) or by offering a simpler solution to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property is called barging: for example, event \textit{X} tries to effect event \textit{Y}, but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete again to acquire the resource. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods: barging avoidance and barging prevention. Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.

% ======================================================================
…
\begin{cfacode}
typedef /*some monitor type*/ monitor;
int f(monitor& m);

int main() {
…
% ======================================================================
% ======================================================================
The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because, at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copy-able objects (\code{dtype}).

Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass-through can occur for generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter:
…
\end{tabular}
\end{center}
Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing, which is similar in usage to the \CC \code{atomic} template.

Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. Furthermore, it allows the implementation greater freedom when it initializes the monitor locking. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword, depending on whether or not reading a \code{size_t} is an atomic operation.
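The counter listing itself is elided in this extract; the following is only a sketch consistent with the description above, with the field name and the exact form of the conversion assumed rather than taken from the original:
\begin{cfacode}
monitor counter_t {
   size_t value;                                  //field name is an assumption
};

void ?{}(counter_t & nomutex this) {              //constructor: object not yet shared
   this.value = 0;
}

size_t ++?(counter_t & mutex this) {              //prefix increment under mutual exclusion
   return ++this.value;
}

void ?{}(size_t * this, counter_t & mutex cnt) {  //conversion to size_t, written here
   *this = cnt.value;                             //as a size_t constructor (assumed form)
}
\end{cfacode}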
For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, listing \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree.
\begin{figure}
\begin{cfacode}[caption={Recursive printing algorithm using \gls{multi-acq}.},label={fig:search}]
monitor printer { ... };
struct tree {
…
}
\end{cfacode}
\end{figure}
Having both \code{mutex} and \code{nomutex} keywords is redundant, based on the meaning of a routine having neither of these keywords. For example, it is reasonable that a routine without qualifiers, \code{void foo(counter_t & this)}, should default to the safest option (\code{mutex}), whereas assuming \code{nomutex} is unsafe and may cause subtle errors. On the other hand, \code{nomutex} is the ``normal'' parameter behaviour; it effectively states explicitly that ``this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}.

The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations:
\begin{cfacode}
int f1(monitor& mutex m);
int f2(const monitor & mutex m);
int f3(monitor** mutex m);
int f4(monitor* mutex m []);
int f5(graph(monitor*) & mutex m);
\end{cfacode}
The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes non-trivial. This problem can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter, and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type safe and yet result in undefined behaviour because only the first element of the array is acquired. However, this ambiguity is part of the C type-system with respect to arrays. For this reason, \code{mutex} is disallowed in the context where arrays may be passed:
\begin{cfacode}
int f1(monitor& mutex m);    //Okay : recommended case
int f2(monitor* mutex m);    //Okay : could be an array but probably not
int f3(monitor mutex m []);  //Not Okay : Array of unknown length
int f4(monitor** mutex m);   //Not Okay : Could be an array
int f5(monitor* mutex m []); //Not Okay : Array of unknown length
\end{cfacode}
Note that not all array functions are actually distinct in the type system. However, even if the code generation could tell the difference, the extra information is still not sufficient to extend the monitor call semantics meaningfully.
…
f(a,b);
\end{cfacode}
While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capability to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors is safe from deadlock when using \gls{bulk-acq}. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
\begin{cfacode}
void foo(A& mutex a, B& mutex b) { //acquire a & b
   ...
}

void bar(A& mutex a, B& /*nomutex*/ b) { //acquire a
   ... foo(a, b); ... //acquire b
}

void baz(A& /*nomutex*/ a, B& mutex b) { //acquire b
   ... foo(a, b); ... //acquire a
}
\end{cfacode}
The \gls{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order.

However, such use leads to the lock acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle difference means that calling these routines concurrently may lead to deadlock and is therefore Undefined Behaviour. As shown~\cite{Lister77}, solving this problem requires:
\begin{enumerate}
   \item Dynamically tracking the monitor-call order.
   \item Implementing rollback semantics.
\end{enumerate}
While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex~\cite{Dice10}. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. While \CFA provides only a partial solution, most systems provide no solution and the \CFA partial solution handles many useful cases.

For example, \gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways:
…
}
\end{cfacode}
This example shows a trivial solution to the bank-account transfer-problem~\cite{BankTransfer}. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires careful engineering.
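The transfer routine is elided above; a minimal sketch of such a solution, assuming an \code{account} monitor with a \code{balance} field (both names are illustrative, not the original listing), could look like:
\begin{cfacode}
monitor account {
   int balance;                                 //field name is an assumption
};

void deposit(account & mutex a, int amount) {
   a.balance += amount;
}

//bulk acquisition locks both accounts at once, in a consistent order, and
//multiple acquisition lets deposit reacquire each monitor internally
void transfer(account & mutex from, account & mutex to, int amount) {
   deposit(from, -amount);
   deposit(to, amount);
}
\end{cfacode}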
\subsection{\code{mutex} statement} \label{mutex-stmt}

The call semantics discussed above have one software engineering issue: only a named routine can acquire the mutual-exclusion of a set of monitors. \CFA offers the \code{mutex} statement to work around the need for unnecessary names, avoiding a major software engineering problem~\cite{2FTwoHardThings}. Table \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitors is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters.

\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
…
\begin{cfacode}[tabsize=3]
monitor M {};
void foo( M & mutex m1, M & mutex m2 ) {
   //critical section
}

void bar( M & m1, M & m2 ) {
   foo( m1, m2 );
}
\end{cfacode}&\begin{cfacode}[tabsize=3]
monitor M {};
void bar( M & m1, M & m2 ) {
   mutex(m1, m2) {
      //critical section
   }
…
\end{tabular}
\caption{Regular call semantics vs. \code{mutex} statement}
\label{lst:mutex-stmt}
\end{table}

% ======================================================================
…
};
\end{cfacode}
Note that the destructor of a monitor must be a \code{mutex} routine to prevent deallocation while a thread is accessing the monitor. As with any object, calls to a monitor, using \code{mutex} or otherwise, are Undefined Behaviour after the destructor has run.
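As a sketch of that rule (the declaration above is elided here, and the field name below is hypothetical), construction can be \code{nomutex} while destruction must be \code{mutex}:
\begin{cfacode}
monitor M {
   int data;                      //hypothetical payload
};

void ?{}(M & nomutex this) {      //construction: the object is not yet shared
   this.data = 0;
}

void ^?{}(M & mutex this) {       //destruction: excludes threads still using the monitor
   /* release any resources */
}
\end{cfacode}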
% ======================================================================
…
% ======================================================================
% ======================================================================
In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronization. With monitors, this capability is generally achieved with internal or external scheduling, as in~\cite{Hoare74}. Since internal scheduling within a single monitor is mostly a solved problem, this thesis concentrates on extending internal scheduling to multiple monitors. Indeed, like the \gls{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side.

First, here is a simple example of internal scheduling:

\begin{cfacode}
…
}

void foo(A& mutex a1, A& mutex a2) {
   ...
   //Wait for cooperation from bar()
   wait(a1.e);
   ...
}

void bar(A& mutex a1, A& mutex a2) {
   //Provide cooperation for foo()
   ...
   //Unblock foo
   signal(a1.e);
}
\end{cfacode}
There are two details to note here. First, the \code{signal} is a delayed operation; it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion, i.e., the signaller and signalled thread cannot be in the monitor simultaneously. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.
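As a sketch of the second point, the \code{condition} can just as well be declared outside the monitor; the names below are illustrative only, and the behaviour is assumed to mirror the member-condition example above:
\begin{cfacode}
monitor M {};
condition e;                     //lives outside the monitor

void waiter(M & mutex m) {
   wait(e);                      //e becomes associated with m on first use
}

void signaller(M & mutex m) {
   signal(e);                    //delayed: takes effect when the critical section ends
}
\end{cfacode}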
An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits to recheck that a condition is met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity, but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.

% ======================================================================
…
% ======================================================================
% ======================================================================
It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition variable as parameter and explicitly name the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a \emph{group} of monitors on first use (called branding), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors. The example below shows the simple case of having two threads (one for each column) and a single monitor A.

\begin{multicols}{2}
…
\end{pseudo}
\end{multicols}
One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling. It is important to note here that both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This semantic is a logical requirement for barging prevention.

A direct extension of the previous example is a \gls{bulk-acq} version:
\begin{multicols}{2}
\begin{pseudo}
acquire A & B
wait A & B
release A & B
\end{pseudo}
\columnbreak
\begin{pseudo}
acquire A & B
…
This version uses \gls{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity, as the next few examples demonstrate.

While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable.
For monitors, a well known deadlock problem is the Nested Monitor Problem~\cite{Lister77}, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem:
\begin{multicols}{2}
\begin{pseudo}
…
\end{pseudo}
\end{multicols}
The \code{wait} only releases monitor \code{B}, so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting release of all acquired monitors at the \code{wait} introduces a different set of problems, such as releasing monitor \code{C}, which has nothing to do with the \code{signal}.

However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the Nested Monitor Problem~\cite{Lister77}.

\begin{multicols}{2}
…
\end{multicols}

This simple refactoring may not be possible, forcing more complex restructuring.

% ======================================================================
% ======================================================================
…
% ======================================================================

A larger example is presented to show complex issues for \gls{bulk-acq} and all the implementation options are analyzed.
Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code to implement the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA code, any method of introducing a monitor is acceptable, e.g., \code{mutex} parameters, global variables, pointer parameters or using locals with the \code{mutex} statement.

\begin{figure}[!t]
\begin{multicols}{2}
Waiting thread
…
release A
\end{pseudo}
\columnbreak
Signalling thread
\begin{pseudo}[numbers=left, firstnumber=10,escapechar=|]
acquire A
//Code Section 5
acquire A & B
//Code Section 6
|\label{line:signal1}|signal A & B
//Code Section 7
release A & B
//Code Section 8
|\label{line:lastRelease}|release A
\end{pseudo}
\end{multicols}
\begin{cfacode}[caption={Internal scheduling with \gls{bulk-acq}},label={lst:int-bulk-pseudo}]
\end{cfacode}
\begin{center}
\begin{cfacode}[xleftmargin=.4\textwidth]
…
}
\end{cfacode}
\columnbreak
Signalling thread
\begin{cfacode}
…
\end{cfacode}
\end{multicols}
\begin{cfacode}[caption={Equivalent \CFA code for listing \ref{lst:int-bulk-pseudo}},label={lst:int-bulk-cfa}]
\end{cfacode}
\begin{multicols}{2}
Waiter
…
\end{pseudo}

Signaller
\begin{pseudo}[numbers=left, firstnumber=6,escapechar=|]
acquire A
acquire A & B
signal A & B
release A & B
|\label{line:secret}|//Secretly keep B here
release A
//Wakeup waiter and transfer A & B
\end{pseudo}
\end{multicols}
\begin{cfacode}[caption={Listing \ref{lst:int-bulk-pseudo}, with delayed signalling comments},label={lst:int-secret}]
\end{cfacode}
\end{figure}

The complexity begins at code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release \code{A & B}'' (listing \ref{lst:int-bulk-pseudo} line \ref{line:signal1}), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership transfer is required in order to prevent barging into \code{B} by another thread, since both the signalling and signalled threads still need monitor \code{A}. There are three options.

\subsubsection{Delaying signals}
The obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantics viable by simply changing monitors to monitor groups. The naive approach to this solution is to only release monitors once every monitor in a group can be released. However, since some monitors are never released (i.e., the monitor of a thread), this interpretation means groups can grow but may never shrink. A more interesting interpretation is to only transfer groups as one but to recreate the groups on every operation, i.e., limit ownership transfer to one per \code{signal}/\code{release}.

However, this solution can become much more complicated depending on what is executed while secretly holding B (listing \ref{lst:int-secret} line \ref{line:secret}). The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, listing \ref{lst:dependency} shows a slightly different example where a third thread is waiting on monitor \code{A}, using a different condition variable. Because the third thread is signalled when secretly holding \code{B}, the goal becomes unreachable.
Depending on the order of signals (listing \ref{lst:dependency} lines \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:

\paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor \code{A} needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
\paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor \code{B} needs to be retained and passed to thread $\alpha$ along with monitor \code{A}, which can be done directly or possibly using thread $\beta$ as an intermediate.
\\

Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{lst:dependency}.

In both cases, the threads need to be able to distinguish, on a per-monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogeneous group and therefore effectively precludes this approach.

\subsubsection{Dependency graphs}

\begin{figure}
\begin{multicols}{3}
…
release A
\end{pseudo}
\columnbreak
Thread $\gamma$
\begin{pseudo}[numbers=left, firstnumber=6, escapechar=|]
acquire A
acquire A & B
|\label{line:signal-ab}|signal A & B
|\label{line:release-ab}|release A & B
|\label{line:signal-a}|signal A
|\label{line:release-a}|release A
\end{pseudo}
\columnbreak
Thread $\beta$
\begin{pseudo}[numbers=left, firstnumber=12, escapechar=|]
acquire A
wait A
|\label{line:release-aa}|release A
\end{pseudo}
\end{multicols}
\begin{cfacode}[caption={Pseudo-code for the three thread example.},label={lst:dependency}]
\end{cfacode}
\begin{center}
\input{dependency}
…
\end{figure}

In the listing \ref{lst:int-bulk-pseudo} pseudo-code, there is a solution that satisfies both barging prevention and mutual exclusion.
If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions: 512 513 \begin{multicols}{2} 514 \begin{pseudo} 515 acquire A 516 acquire B 517 acquire C 518 wait A & B & C 519 release C 520 release B 521 release A 522 \end{pseudo} 523 524 \columnbreak 525 526 \begin{pseudo} 527 acquire A 528 acquire B 529 acquire C 530 signal A & B & C 531 release C 532 release B 533 release A 534 \end{pseudo} 535 \end{multicols} 536 537 \begin{figure} 495 \begin{cfacode}[caption={Pseudo-code for the three thread example.},label={lst:dependency}] 496 \end{cfacode} 538 497 \begin{center} 539 498 \input{dependency} … … 543 502 \end{figure} 544 503 545 Listing \ref{lst:dependency} is the three thread example rewritten for dependency graphs. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graph being a complex and expensive endeavour, this solution is not the preffered one. 504 In the listing \ref{lst:int-bulk-pseudo} pseudo-code, there is a solution that satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} back to the signaller when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem is effectively resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner close to polynomial. This complexity explosion can be seen in listing \ref{lst:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions. Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked; In the same way that multiple lock acquiring order can cause deadlocks. 
505 \begin{figure} 506 \begin{multicols}{2} 507 \begin{pseudo} 508 acquire A 509 acquire B 510 acquire C 511 wait A & B & C 512 release C 513 release B 514 release A 515 \end{pseudo} 516 517 \columnbreak 518 519 \begin{pseudo} 520 acquire A 521 acquire B 522 acquire C 523 signal A & B & C 524 release C 525 release B 526 release A 527 \end{pseudo} 528 \end{multicols} 529 \begin{cfacode}[caption={Extension to three monitors of listing \ref{lst:int-bulk-pseudo}},label={lst:explosion}] 530 \end{cfacode} 531 \end{figure} 532 533 Listing \ref{lst:dependency} is the three threads example used in the delayed signals solution. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one. 546 534 547 535 \subsubsection{Partial signalling} \label{partial-sig} 548 Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor B at lines 10 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be release and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any downsides worth mentionning. 536 Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor \code{B} at lines \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor \code{A}. Only when it reaches line \ref{line:lastRelease} does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm, which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any significant downsides. 537 538 While listing \ref{lst:dependency} is a complicated problem for previous solutions, it can be solved easily with partial signalling : 539 \begin{itemize} 540 \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor \code{B} to thread $\alpha$ and continues to hold monitor \code{A}. 541 \item When thread $\gamma$ reaches line \ref{line:release-a} it transfers monitor \code{A} to thread $\beta$ and wakes it up. 542 \item When thread $\beta$ reaches line \ref{line:release-aa} it transfers monitor \code{A} to thread $\alpha$ and wakes it up. 543 \item Problem solved! 
544 \end{itemize} 549 545 550 546 % ====================================================================== … … 553 549 % ====================================================================== 554 550 % ====================================================================== 555 \begin{figure} 551 \begin{table} 556 552 \begin{tabular}{|c|c|} 557 553 \code{signal} & \code{signal_block} \\ … … 654 650 \end{tabular} 655 651 \caption{Dating service example using \code{signal} and \code{signal_block}.} 656 \label{lst:datingservice} 657 \end{figure} 658 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine \footnote{name to be discussed}. 659 660 The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour requires additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor neither before nor after the call. 652 \label{tbl:datingservice} 653 \end{table} 654 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine. 655 656 The example in table \ref{tbl:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed. To avoid this explicit synchronization, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This feature removes the need for a second condition variable and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual exclusion is baton-passed both on the front end and the back end of the call to \code{signal_block}, meaning no other thread can acquire the monitor either before or after the call. 661 657 662 658 % ====================================================================== … … 727 723 \end{tabular} 728 724 \end{center} 729 This method is more constrained and explicit, which helps users tone down the undeterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring.
External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s. 725 This method is more constrained and explicit, which helps users reduce the non-deterministic nature of concurrency. Indeed, as the following example demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., Ada with \code{accept}, \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines. The previous example shows a simple use of \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s. 730 726 731 727 For the \code{P} member above using internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor, allowing a third routine, say \code{isInUse()}, to acquire mutual exclusion several times while routine \code{P} is waiting. On the other hand, external scheduling guarantees that while routine \code{P} is waiting, no routine other than \code{V} can acquire the monitor. … … 736 732 % ====================================================================== % ====================================================================== 738 In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: 734 In \uC, a monitor class declaration includes an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: 739 735 740 736 \begin{cfacode} … … 752 748 \end{cfacode} 753 749 754 Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Indeed, since there is no hard limit to the number of threads trying to acquire a monitor concurrently, performance is a significant concern. Here is the pseudo code for the entering phase of a monitor: 755 750 Furthermore, external scheduling is an example where implementation constraints become visible from the interface.
Here is the pseudo code for the entering phase of a monitor: 756 751 \begin{center} 757 752 \begin{tabular}{l} … … 768 763 \end{tabular} 769 764 \end{center} 770 771 765 For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \pscode{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in the following figure: 772 766 … … 778 772 \end{figure} 779 773 780 There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type enumerates (declares) all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units. 781 The alternative is to alter the implementeation like this: 774 There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This approach requires a dense, unique ordering of routines with an upper bound, and that ordering must be consistent across translation units. For OO languages these constraints are common, since objects can only add member routines consistently across translation units via inheritance. However, in \CFA users can extend objects with mutex routines that are only visible in certain translation units. This means that establishing a program-wide dense ordering among mutex routines can only be done in the program-linking phase, and could still have issues when using dynamically-shared objects. 775 776 The alternative is to alter the implementation like this: 782 777 783 778 \begin{center} … … 785 780 \end{center} 786 781 787 Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function-pointers replaces the single instruction bitmask compare with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additionnal searches on calls to \code{waitfor} statement to check if a routine is already queued in. 782 Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separately. Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions.
Storing an array of accepted function pointers replaces the single-instruction bitmask comparison with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additional searches for the \code{waitfor} statement to check if a routine is already queued. 788 783 789 784 \begin{figure} 790 \begin{cfacode} 785 \begin{cfacode}[caption={Example of nested external scheduling},label={lst:nest-ext}] 791 786 monitor M {}; 792 787 void foo( M & mutex a ) {} … … 800 795 801 796 \end{cfacode} 802 \caption{Example of nested external scheduling} 803 \label{lst:nest-ext} 804 797 \end{figure} 805 798 806 Note that in the second picture, tasks need to always keep track of which routine they are attempting to acquire the monitor and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details where omitted from the picture for the sake of simplifying the representation. 807 808 At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitivelyhard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA. 799 Note that in the second picture, tasks always need to keep track of the monitors associated with mutex routines, and the routine mask needs to have both a function pointer and a set of monitors, as is discussed in the next section. These details are omitted from the picture for the sake of simplicity. 800 801 At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here, however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA. 809 802 810 803 % ====================================================================== … … 821 814 822 815 void g(M & mutex b, M & mutex c) { 823 waitfor(f); //two monitors M => unkown which to pass to f(M & mutex) 824 } 825 \end{cfacode} 826 816 waitfor(f); //two monitors M => unknown which to pass to f(M & mutex) 817 } 818 \end{cfacode} 827 819 The obvious solution is to specify the correct monitor as follows: 828 820 829 821 \begin{cfacode} … … 833 825 834 826 void g(M & mutex a, M & mutex b) { 835 waitfor( f, b ); 836 } 837 \end{cfacode} 838 839 This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}).
This behavior can be extended to multi-monitor \code{waitfor} statement as follows. 827 //wait for call to f with argument b 828 waitfor(f, b); 829 } 830 \end{cfacode} 831 This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behaviour can be extended to the multi-monitor \code{waitfor} statement as follows. 840 832 841 833 \begin{cfacode} … … 845 837 846 838 void g(M & mutex a, M & mutex b) { 847 waitfor( f, a, b); 839 //wait for call to f with argument a and b 840 waitfor(f, a, b); 848 841 } 849 842 \end{cfacode} 850 843 851 844 Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is Undefined Behaviour. 852 845 853 846 An important behaviour to note is when a set of monitors only matches partially: 854 847 855 848 \begin{cfacode} … … 870 863 871 864 void bar() { 872 f(a2, b); //fufill cooperation 873 } 874 \end{cfacode} 875 876 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting condition. 865 f(a2, b); //fulfill cooperation 866 } 867 \end{cfacode} 868 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake up the waiting thread. It is also important to note that in the case of external scheduling the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting conditions. 877 869 878 870 % ====================================================================== … … 882 874 % ====================================================================== 883 875 884 876 Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitors passed in matches the requirements for a function call.
Listing \ref{lst:waitfor} shows various usages of the \code{waitfor} statement and which ones are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading, but overloading is possible. 885 877 \begin{figure} 886 \begin{cfacode} 878 \begin{cfacode}[caption={Various correct and incorrect uses of the waitfor statement},label={lst:waitfor}] 887 879 monitor A{}; 888 880 monitor B{}; … … 911 903 waitfor(f4, a1); //Incorrect : f4 ambiguous 912 904 913 waitfor(f2, a1, b2); //Undefined Behaviour : b2 may not acquired 914 } 915 \end{cfacode} 916 \caption{Various correct and incorrect uses of the waitfor statement} 917 \label{lst:waitfor} 905 waitfor(f2, a1, b2); //Undefined Behaviour : b2 not mutex 906 } 907 \end{cfacode} 918 908 \end{figure} 919 909 920 Finally, for added flexibility, \CFA supports constructing complex \code{waitfor} mask using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} can be chained together using \code{or}; this chain forms a single statement that uses baton-pass to any one function that fits one of the function+monitor set passed in. To eanble users to tell which accepted function is accepted, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is only check of a matching function call already arrived and return immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the maskbased on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones. 910 Finally, for added flexibility, \CFA supports constructing a complex \code{waitfor} statement using the \code{or}, \code{timeout} and \code{else} clauses. Indeed, multiple \code{waitfor} clauses can be chained together using \code{or}; this chain forms a single statement that uses baton-passing to any one function that fits one of the function-monitor sets passed in. To enable users to tell which accepted function executed, each \code{waitfor} clause is followed by a statement (including the null statement \code{;}) or a compound statement, which is executed after the clause is triggered. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, which checks whether a matching function call has already arrived and otherwise continues. Any and all of these clauses can be preceded by a \code{when} condition to dynamically toggle the accept clauses on or off based on some current state. Listing \ref{lst:waitfor2} demonstrates several complex masks and some incorrect ones.
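As a rough illustration of these clauses, the following is a minimal sketch of a chained \code{waitfor} statement; the routine names, the \code{when} guard, and the type and value of the timeout argument are illustrative only, and listing \ref{lst:waitfor2} remains the authoritative set of examples.
\begin{cfacode}
monitor M {};
void f1(M & mutex m);
void f2(M & mutex m);

//accept f1 only when urgent is true, otherwise accept either routine,
//and give up after some upper bound on the wait
void g(M & mutex m, bool urgent, unsigned delay) {
	when(urgent) waitfor(f1, m) {
		//statement executed if a call to f1 was accepted
	} or waitfor(f2, m) {
		//statement executed if a call to f2 was accepted
	} or timeout(delay) {
		//statement executed if no matching call arrived in time
	}
}
\end{cfacode}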
921 911 922 912 \begin{figure} 923 \begin{cfacode} 913 \begin{cfacode}[caption={Various correct and incorrect uses of the or, else, and timeout clauses around a waitfor statement},label={lst:waitfor2}] 924 914 monitor A{}; 925 915 … … 979 969 } 980 970 \end{cfacode} 981 \caption{Various correct and incorrect uses of the or, else, and timeout clause around a waitfor statement} 982 \label{lst:waitfor2} 983 971 \end{figure} 984 972 … … 990 978 An interesting use for the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor (see section \ref{data}). However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour. The simplest approach is to disallow \code{waitfor} on a destructor. However, a more expressive approach is to flip execution ordering when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled. 991 979 \begin{figure} 992 \begin{cfacode} 980 \begin{cfacode}[caption={Example of an executor which executes actions in series until the destructor is called.},label={lst:dtor-order}] 993 981 monitor Executer {}; 994 982 struct Action; … … 1005 993 } 1006 994 \end{cfacode} 1007 \caption{Example of an executor which executes action in series until the destructor is called.} 1008 \label{lst:dtor-order} 1009 995 \end{figure} 1010 996 For example, listing \ref{lst:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop. Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
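To make the flipped ordering concrete, here is a rough sketch of the accept-the-destructor pattern with illustrative names (the actual executor is given in listing \ref{lst:dtor-order}); it assumes the destructor can be named in a \code{waitfor} clause using \CFA's \code{^?{}} operator syntax, and the loop structure is an assumption rather than the author's exact code.
\begin{cfacode}
monitor M {};
void work(M & mutex this);	//illustrative mutex routine

void loop(M & mutex this) {
	for(;;) {
		waitfor(work, this) {
			//normal case : accept the next call to work
		} or waitfor(^?{}, this) {
			//the destructor was called : break out of the loop;
			//the destructor itself then runs after this mutex routine
			break;
		}
	}
}
\end{cfacode}
With this pattern, letting the monitor go out of scope is enough to terminate the loop, since the pending destructor call is accepted by the second clause and runs once the current mutex routine finishes.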