Changes in / [f51aefb:84118d8]

Location: doc/proposals/concurrency
Files: 1 added, 3 edited
doc/proposals/concurrency/concurrency.tex
% requires tex packages: texlive-base texlive-latex-base tex-common texlive-humanities texlive-latex-extra texlive-fonts-recommended

% inline code ©...© (copyright symbol) emacs: C-q M-)
% red highlighting ®...® (registered trademark symbol) emacs: C-q M-.
% blue highlighting ß...ß (sharp s symbol) emacs: C-q M-_
% green highlighting ¢...¢ (cent symbol) emacs: C-q M-"
% LaTex escape §...§ (section symbol) emacs: C-q M-'
% keyword escape ¶...¶ (pilcrow symbol) emacs: C-q M-^
% math escape $...$ (dollar symbol)
…
% Latex packages used in the document.
\usepackage[T1]{fontenc}                % allow Latin1 (extended ASCII) characters
\usepackage{textcomp}
\usepackage[latin1]{inputenc}
\usepackage{fullpage,times,comment}
\usepackage{epic,eepic}
\usepackage{upquote}                    % switch curled `'" to straight
\usepackage{calc}
\usepackage{xspace}
…
\usepackage{tabularx}
\usepackage[acronym]{glossaries}
\usepackage{varioref}                   % extended references
\usepackage{inconsolata}
\usepackage{listings}                   % format program code
\usepackage[flushmargin]{footmisc}      % support label/reference in footnote
\usepackage{latexsym}                   % \Box glyph
\usepackage{mathptmx}                   % better math font with "times"
\usepackage[usenames]{color}
\usepackage[pagewise]{lineno}
\usepackage{fancyhdr}
\renewcommand{\linenumberfont}{\scriptsize\sffamily}
\input{style}                           % bespoke macros used in the document
\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref}
\usepackage{breakurl}
…
\renewcommand{\UrlFont}{\small\sf}

\setlength{\topmargin}{-0.45in}         % move running title into header
\setlength{\headsep}{0.25in}
…
\title{Concurrency in \CFA}
\author{Thierry Delisle \\
School of Computer Science, University of Waterloo, \\ Waterloo, Ontario, Canada
}
…
\section{Introduction}
This proposal provides a minimal core concurrency API that is both simple and efficient, and that can be reused to build higher-level features. The simplest possible concurrency core is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based parallelism, message passing and implicit threading.

There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency tools and which parallelism tools are made available to users.
While these two concepts are often seen together, they are in fact distinct and require different sorts of tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are more about performance, cost and resource utilization.

…

\section{Concurrency}
Several tools can be used to solve concurrency challenges. Since these challenges always appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts. However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call), which in turn means that, in order to be effective, programmers need to learn two sets of design patterns. This distinction can be hidden away in library code, but effective use of the library still requires taking both paradigms into account.
Approaches based on shared memory are more closely related to non-concurrent paradigms, since they often rely on non-concurrent constructs like routine calls and objects. At a lower level, these can be implemented as locks and atomic operations. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentioning, because it is gaining in popularity, is transactional memory~\cite{Dice10}[Check citation].
While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency construct.

…

\subsection{Monitors}
A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with object-oriented languages like Java~\cite{Java} or \uC~\cite{uC++book}, but does not strictly require OOP semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
\begin{lstlisting}
typedef /*some monitor type*/ monitor;
…
\end{lstlisting}

\subsubsection{Call semantics} \label{call}
The above monitor example displays some of monitors' intrinsic characteristics. Indeed, it is necessary to use pass-by-reference rather than pass-by-value for monitor routines. This semantics is important because, at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.
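As an illustration of this rule, a by-value monitor parameter would imply copying the underlying lock and is therefore rejected; the routine names here are hypothetical:
\begin{lstlisting}
int f(monitor m);          //rejected : pass-by-value would copy the implicit lock
int g(monitor & mutex m);  //accepted : the monitor handle is passed by reference
\end{lstlisting}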
Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor's mutual exclusion on entry. Pass-through can occur both in generic helper routines (\code{swap}, \code{sort}, etc.) and in specific helper routines like the following, which implement an atomic large counter:

\begin{lstlisting}
…
void ?{}(counter_t & nomutex this);
size_t ++?(counter_t & mutex this);

//need for mutex is platform dependent here
void ?{}(size_t * this, counter_t & mutex cnt);
\end{lstlisting}
*semantics of the declaration of \code{mutex struct counter_t} are discussed in detail in section \ref{data}

Here, the constructor (\code{?{}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor's mutual exclusion when constructing. This semantics is justified because an object that is not yet constructed should never be shared, and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the increment from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword, depending on whether or not reading a \code{size_t} is an atomic operation on the underlying platform.

Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant, based on the meaning of a routine having neither of these keywords. If there were a meaning to the routine \code{void foo(counter_t & this)}, then one could argue that it should default to the safest option: \code{mutex}. On the other hand, having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. It can be argued that this is the more ``normal'' behavior, \code{nomutex} effectively stating explicitly that ``this routine has nothing special''. Another alternative is to make having exactly one of these keywords mandatory, which provides the same semantics without the ambiguity of supporting routine \code{void foo(counter_t & this)}. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. In the end, which solution should be picked is still up for debate. For the remainder of this proposal, the explicit approach is used for clarity.
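The three interpretations under debate can be summarized in a short sketch; these declarations are illustrative only, not part of the proposal:
\begin{lstlisting}
void f(counter_t & mutex this);    //always acquires mutual exclusion
void g(counter_t & nomutex this);  //never acquires mutual exclusion
void h(counter_t & this);          //mutex by default? nomutex by default? an error?
\end{lstlisting}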
Regardless of which keyword is kept, it is important to establish when \code{mutex}/\code{nomutex} may be used as a type qualifier. Consider:
\begin{lstlisting}
int f1(monitor & mutex m);
…
\end{lstlisting}

The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2}, it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure each object is acquired only once becomes non-trivial. This can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To keep everyone as sane as possible~\cite{Chicken}, this project imposes the requirement that a routine may only acquire one monitor per parameter, and it must be the type of the parameter (ignoring potential qualifiers and indirections). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type-safe and yet result in undefined behavior. For this reason, it would also be reasonable to disallow \code{mutex} in contexts where arrays may be passed.
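To make the rule concrete, here are hypothetical signatures matching the descriptions of \code{f2} through \code{f5} above (illustrative only; the actual declarations are elided from this hunk):
\begin{lstlisting}
int f1(monitor & mutex m);                       //acquires m
int f2(monitor & mutex m1, monitor & mutex m2);  //acquires m1 and m2
int f3(monitor ** mutex m);                      //acquires **m : indirections are ignored
int f4(monitor * mutex m []);                    //rejected : unknown number of monitors
int f5(graph(monitor*) & mutex m);               //rejected : arbitrary set of monitors
\end{lstlisting}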
…

\subsubsection{Data semantics} \label{data}
…
\begin{lstlisting}
…
size_t ++?(counter_t & mutex this) {
	return ++this.value;
}

//need for mutex is platform dependent here
void ?{}(size_t * this, counter_t & mutex cnt) {
	*this = (size_t)cnt;
}
\end{lstlisting}

This simple counter offers an example of monitor usage. Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writing:
\begin{center}
\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
\begin{lstlisting}
counter_t cnt;

thread 1 : cnt++;
thread 2 : cnt++;
thread 3 : cnt++;
...
thread N : cnt++;
\end{lstlisting}
\end{tabular}
\end{center}

These simple mutual exclusion semantics also naturally expand to multi-monitor calls.
\begin{lstlisting}
…
\end{lstlisting}

This code acquires both locks before entering the critical section (referred to as \gls{group-acquire} from now on). In practice, writing multi-locking routines that cannot lead to deadlocks can be tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use multi-acquiring locks, users can effectively force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
\begin{lstlisting}
void foo(A & mutex a, B & mutex b) {
…
\end{lstlisting}
Such use leads to nested monitor call problems~\cite{Lister77}, which are a specific instance of the lock-acquiring-order problem. In the example above, the user relies on implicit ordering in the case of routine \code{foo}, but on explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlocks, depending on whether the implicit ordering matches the explicit ordering. As shown on several occasions\cit, solving this problem requires:
\begin{enumerate}
	\item Dynamically tracking the monitor call order.
	\item Implementing rollback semantics.
\end{enumerate}

While the first requirement is already a significant constraint on the system, implementing general rollback semantics in a C-like language is prohibitively complex\cit. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time.
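A sketch of how such a deadlock can materialize, filling in plausible bodies for \code{bar} and \code{baz} (the elided originals may differ):
\begin{lstlisting}
void bar(A & mutex a, B & nomutex b) { foo(a, b); } //acquires a, then a & b inside foo
void baz(A & nomutex a, B & mutex b) { foo(a, b); } //acquires b, then a & b inside foo

//thread 1 : bar(a, b);  holds a, blocks inside foo waiting for b
//thread 2 : baz(a, b);  holds b, blocks inside foo waiting for a  => deadlock
\end{lstlisting}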
…

\subsubsection{Implementation Details: Interaction with polymorphism}
At first glance, the interaction between monitors and \CFA's concept of polymorphism seems complex to support. However, it can be reasoned that entry-point locking solves most of the issues that could arise with polymorphism.

First of all, interaction between \code{otype} polymorphism and monitors is impossible, since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, as doing so requires knowledge of the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. \Gls{callsite-locking}\footnotemark would require a significant amount of work, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking}\footnotemark[\value{footnote}], calling a monitor routine becomes exactly the same as calling it from anywhere else.
\footnotetext{See the glossary for definitions of \gls{callsite-locking} and \gls{entry-point-locking}.}
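As a rough sketch (with hypothetical names), entry-point locking means the acquisition is compiled into the monitor routine itself, so no locking logic ever appears at a call site, polymorphic or not:
\begin{lstlisting}
//what the user writes
void push(stack & mutex this, int x);

//roughly what entry-point locking generates
void push(stack & this, int x) {
	//acquire this
	/*...user body...*/
	//release this
}
\end{lstlisting}
A \code{dtype} polymorphic routine can thus call \code{push} like any other routine, without knowing whether \code{stack} is a monitor.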
…

\subsection{Internal scheduling} \label{insched}
Monitors also need to schedule waiting threads within them as a means of synchronization. Internal scheduling is one of the simplest examples of such a feature. It allows users to declare condition variables, and to have threads wait on them and be signalled from them. Here is a simple example of such a technique:
\begin{lstlisting}
…
\end{lstlisting}
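In outline, such an example looks like the following sketch (identifiers are illustrative; the actual listing is elided above):
\begin{lstlisting}
condition e;

void foo(monitor & mutex m) {
	//...
	wait(e);    //block until signalled
	//...
}

void bar(monitor & mutex m) {
	signal(e);  //wake the thread blocked in foo
}
\end{lstlisting}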
Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. This semantic can easily be extended to multi-monitor calls by offering the same guarantee.
\begin{center}
\begin{tabular}{ c @{\hskip 0.65in} c }
\begin{lstlisting}
void foo(monitor & mutex a,
         monitor & mutex b) {
	//...
	wait(a.e);
	//...
}
\end{lstlisting} &\begin{lstlisting}
void bar(monitor & mutex a,
         monitor & mutex b) {
	signal(a.e);
}
\end{lstlisting}
\end{tabular}
\end{center}

A direct extension of the single-monitor semantics is to release all locks when waiting and to transfer ownership of all locks when signalling. However, for the purposes of synchronization, it may be useful to release only some of the locks while keeping others. It is possible to support internal scheduling and \gls{group-acquire} without any extra syntax, by relying on the order of acquisition. Here is an example of the different contexts in which internal scheduling can be used (note that the use of helper routines is irrelevant here; only routines that acquire mutual exclusion have an impact on internal scheduling):

\begin{center}
\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
Context 1 & Context 2 & Context 3 \\
\begin{lstlisting}
condition e;

void foo(monitor & mutex a,
         monitor & mutex b) {
	wait(e);
}
\end{lstlisting} &\begin{lstlisting}
condition e;

void bar(monitor & mutex a,
         monitor & nomutex b) {
	foo(a,b);
}

void foo(monitor & mutex a,
         monitor & mutex b) {
	wait(e);
}
\end{lstlisting} &\begin{lstlisting}
condition e;

void bar(monitor & mutex a,
         monitor & nomutex b) {
	baz(a,b);
}

void baz(monitor & nomutex a,
         monitor & mutex b) {
	wait(e);
}
\end{lstlisting}
\end{tabular}
\end{center}

Note that in \CFA, \code{condition} variables have no particular need to be stored inside a monitor, beyond software-engineering considerations. Context 1 is the simplest way of acquiring more than one monitor (\gls{group-acquire}): a routine with multiple parameters that have the \code{mutex} keyword. Context 2 also uses \gls{group-acquire}, in routine \code{foo}. However, the routine is called by routine \code{bar}, which only acquires monitor \code{a}. Since monitors can be acquired multiple times, this does not cause a deadlock by itself, but it does force the acquiring order to be \code{a} then \code{b}. Context 3 also forces the acquiring order to be \code{a} then \code{b}, but does not use \gls{group-acquire}. These examples illustrate the semantics that must be established to support releasing monitors in a \code{wait} statement. In all cases, the behavior of the \code{wait} statement is to release all the locks that were acquired by the inner-most monitor call: \code{a & b} in Contexts 1 and 2, and \code{b} only in Context 3. Here are a few other examples of this behavior.

\begin{center}
\begin{tabular}{|c|c|c|}
\begin{lstlisting}
condition e;

//acquire b
void foo(monitor & nomutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire a
void bar(monitor & mutex a,
         monitor & nomutex b) {

	//release a
	//keep b
	wait(e);
}

foo(a, b);
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire b
void bar(monitor & nomutex a,
         monitor & mutex b) {

	//release b
	//keep a
	wait(e);
}

foo(a, b);
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire none
void bar(monitor & nomutex a,
         monitor & nomutex b) {

	//release a & b
	//keep none
	wait(e);
}

foo(a, b);
\end{lstlisting}
\end{tabular}
\end{center}

Note the right-most example, where \code{bar} is a pure helper routine and therefore plays no role in determining which monitors are released.

These semantics imply that, in order to release a subset of the monitors currently held, users must write (and name) a routine that acquires only the desired subset and simply calls \code{wait}. While users can use this method, \CFA also offers \code{wait_release}\footnote{Not sure if an overload of \code{wait} would work...}, which releases only the specified monitors.
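A sketch of the intended usage, assuming \code{wait_release} takes the condition followed by the monitors to release (the exact signature, as the footnote indicates, is still tentative):
\begin{lstlisting}
void foo(monitor & mutex a, monitor & mutex b) {
	//...
	wait_release(e, a);  //block, releasing only a and keeping b
	//...
}
\end{lstlisting}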
Regardless of the context in which the \code{wait} statement is used, \code{signal} must be used while holding the same set of monitors. In all cases, \code{signal} only needs a single parameter: the condition variable that needs to be signalled. But \code{signal} needs to be called from the same monitor(s) as the call to \code{wait}. Otherwise, mutual exclusion cannot be properly transferred back to the waiting monitor.

Finally, an additional semantic which can be very useful is the \code{signal_block} routine. This routine behaves like \code{signal} for all of the semantics discussed above, but with the subtlety that mutual exclusion is transferred to the waiting task immediately, rather than waiting for the end of the critical section.
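The difference between the two forms can be sketched as follows (illustrative only):
\begin{lstlisting}
void donate(monitor & mutex m) {
	signal(e);        //waiter is marked ready, but only runs once
	/*...*/           //this critical section finishes
}

void donate_now(monitor & mutex m) {
	signal_block(e);  //mutual exclusion moves to the waiter immediately;
	                  //this thread blocks here until the monitor comes back
	/*...*/
}
\end{lstlisting}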
…

\newpage
\subsection{External scheduling} \label{extsched}
As one might expect, the alternative to internal scheduling is external scheduling. This method is somewhat more robust to deadlocks, since one of the threads keeps relatively tight control over scheduling. Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without concern for unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., \uC) or in terms of data (e.g., Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen, to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling: loose object definitions and multi-monitor routines. The following example shows a simple use of \code{accept} versus \code{wait}/\code{signal}, and its advantages.
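While the full example is beyond this hunk, the control-flow style can be sketched as follows (hypothetical syntax, in the spirit of \uC's \code{_Accept}):
\begin{lstlisting}
mutex struct buffer { /*...*/ };

void put(buffer & mutex this, int x) { /*...*/ }

int get(buffer & mutex this) {
	if( /*buffer empty*/ ) accept(put);  //block until some thread calls put()
	/*...remove and return an element...*/
}
\end{lstlisting}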
…

\section{Parallelism}
Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being an ever-growing challenge, parallelism has become the new source of greater performance~\cite{Sutter05, Sutter05b}. In this decade, it is no longer reasonable to create high-performance applications without caring about parallelism. Indeed, parallelism is an important aspect of performance, more specifically throughput and hardware utilization. The lowest-level approach to parallelism is to use \glspl{kthread}. However, since \glspl{kthread} have significant costs and limitations, they are now mostly used as an implementation tool rather than a user-oriented one. There are several alternatives that solve these issues, each with its own strengths and weaknesses.

\subsection{User-level threads}
A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features as operating-system threads, but can be used on a much larger scale. This is the most powerful solution, as it allows all the features of multi-threading while removing several of the more expensive costs of kernel threads. The downside is that almost none of the low-level threading complexities are hidden: users still have to think about data races, deadlocks and synchronization issues. This can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit by itself offers very little to reduce complexity.

Examples of languages that support \glspl{uthread} are Java~\cite{Java}, Haskell~\cite{Haskell} and \uC~\cite{uC++book}.

\subsection{Jobs and thread pools}
The approach on the opposite end of the spectrum is to base parallelism on \glspl{job}. Indeed, \glspl{job} offer limited flexibility, but have the benefit of a simpler user interface. In \gls{job}-based systems, users express parallelism as units of work and the dependency graph (either explicit or implicit) that ties them together. This means users need not worry about concurrency, but it significantly limits the interaction that can occur between different jobs. Indeed, any \gls{job} that blocks also blocks the underlying \gls{kthread}, which effectively means CPU utilization, and therefore throughput, suffers noticeably. The gold standard for this approach is Intel's TBB library~\cite{TBB}.
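In outline, the \gls{job} model looks like the following sketch (an entirely hypothetical API, shown only to contrast with explicit threading):
\begin{lstlisting}
pool p;                     //fixed set of underlying kernel threads

job j1 = submit(p, work1);  //units of work, not threads
job j2 = submit(p, work2);

join(j1); join(j2);         //express the dependency graph explicitly;
                            //blocking inside a job would stall a kernel thread
\end{lstlisting}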
\subsection{Fibers : user-level threads without preemption}
Finally, in the middle of the flexibility-versus-complexity spectrum lie \glspl{fiber}, which offer \glspl{uthread} without the complexity of preemption. This means users do not have to worry about other \glspl{fiber} suddenly executing between two instructions, which significantly reduces complexity. However, any call to IO or other concurrency primitives can lead to context switches. Furthermore, users can also block \glspl{fiber} in the middle of their execution without blocking a full processor core. This means users still have to worry about mutual exclusion, deadlocks and race conditions in their code, raising the complexity significantly. An example of a language that uses fibers is Go~\cite{Go}.

\subsection{Paradigm performance}
…

Obviously, for this thread implementation to be useful, it must run some user code. Several other threading interfaces use some form of function pointer as the thread interface (for example, \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, we consider that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is definitely a special routine in \CFA, we can reuse the existing syntax for declaring routines with unusual names, i.e., operator overloading. As such, the \code{main} routine of a thread can be defined as:
\begin{lstlisting}
thread struct foo {};
…
\end{lstlisting}
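The listing is truncated here; presumably it continues by overloading \code{main} for the new thread type, along these lines (a sketch, with a hypothetical spelling of the overloaded routine):
\begin{lstlisting}
void ?main(foo & this) {            //hypothetical spelling of the overloaded main
	sout | "Hello World!" | endl;   //body executed by each foo thread
}
\end{lstlisting}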
doc/proposals/concurrency/glossary.tex
\makeglossaries

\longnewglossaryentry{callsite-locking}
{name={callsite-locking}}
{
Locking done by the calling routine. With this technique, a routine calling a monitor routine acquires the monitor \emph{before} making the call to the actual routine.
}

\longnewglossaryentry{entry-point-locking}
{name={entry-point-locking}}
{
Locking done by the called routine. With this technique, a monitor routine called by another routine acquires the monitor \emph{after} entering the routine body, but prior to any other code.
}

\longnewglossaryentry{group-acquire}
{name={grouped acquiring}}
{
Implicitly acquiring several monitors when entering a monitor.
}

\longnewglossaryentry{uthread}
{name={user-level thread}}
…
doc/proposals/concurrency/version
0.4.99 → 0.5.146