Changeset 1f44196 for doc/proposals/concurrency/concurrency.tex
- Timestamp:
- Nov 29, 2016, 3:30:59 PM
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children:
- 8e5724e
- Parents:
- 3a2128f, 9129a84
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
- File:
- 1 edited
doc/proposals/concurrency/concurrency.tex
% Latex packages used in the document.
\usepackage[T1]{fontenc}	% allow Latin1 (extended ASCII) characters
\usepackage{textcomp}
\usepackage[latin1]{inputenc}
\usepackage{fullpage,times,comment}
\usepackage{epic,eepic}
\usepackage{upquote}	% switch curled `'" to straight
\usepackage{calc}
\usepackage{xspace}
…
\usepackage{tabularx}
\usepackage[acronym]{glossaries}
\usepackage{varioref}	% extended references
\usepackage{inconsolata}
\usepackage{listings}	% format program code
\usepackage[flushmargin]{footmisc}	% support label/reference in footnote
\usepackage{latexsym}	% \Box glyph
\usepackage{mathptmx}	% better math font with "times"
\usepackage[usenames]{color}
\usepackage[pagewise]{lineno}
\usepackage{fancyhdr}
\renewcommand{\linenumberfont}{\scriptsize\sffamily}
\input{style}	% bespoke macros used in the document
\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,breaklinks=true]{hyperref}
\usepackage{breakurl}
…
\renewcommand{\UrlFont}{\small\sf}

\setlength{\topmargin}{-0.45in}	% move running title into header
\setlength{\headsep}{0.25in}
…
\title{Concurrency in \CFA}
\author{Thierry Delisle \\
School of Computer Science, University of Waterloo, \\ Waterloo, Ontario, Canada
}
…
\section{Introduction}
This proposal provides a minimal core concurrency API that is simple and efficient, and that can be reused to build higher-level features. The simplest possible concurrency core is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based parallelism, message passing and implicit threading.
There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency tools and which parallelism tools are available to the users. While these two concepts are often seen together, they are in fact distinct concepts that require different sorts of tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are more about performance, cost and resource utilization.

\section{Concurrency}
Several tools can be used to solve concurrency challenges. Since these challenges always appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call), which in turn means that, in order to be effective, programmers need to learn two sets of design patterns. This distinction can be hidden away in library code, but effective use of the library still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms, since they often rely on basic constructs like routine calls and objects; at a lower level these can be implemented as locks and atomic operations. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors.
For these reasons, this project proposes monitors as the core concurrency construct.

\subsection{Monitors}
A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with object-oriented languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OOP semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
\begin{lstlisting}
typedef /*some monitor type*/ monitor;
…
\end{lstlisting}

\subsubsection{Call semantics} \label{call}
The above monitor example displays some of the intrinsic characteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because, at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.

Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual exclusion on entry. Pass-through can occur both in generic helper routines (\code{swap}, \code{sort}, etc.) and in specific helper routines like the following, which implement an atomic counter:

\begin{lstlisting}
mutex struct counter_t { /*...see section §\ref{data}§...*/ };

void ?{}(counter_t & nomutex this); //constructor
size_t ++?(counter_t & mutex this); //increment

//need for mutex is platform dependent here
void ?{}(size_t * this, counter_t & mutex cnt); //conversion
\end{lstlisting}

Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual exclusion when constructing. This semantics is justified because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword, depending on whether or not reading a \code{size_t} is an atomic operation.

Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant, based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers, \code{void foo(counter_t & this)}, one could argue that it should default to the safest option, \code{mutex}. On the other hand, having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. It can be argued that \code{nomutex} is the more "normal" behaviour, the \code{nomutex} keyword effectively stating explicitly that "this routine has nothing special".
Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routine \code{void foo(counter_t & this)}. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. In the end, which solution should be picked is still up for debate. For the remainder of this proposal, the explicit approach is used for clarity.

The next semantic decision is to establish when \code{mutex}/\code{nomutex} may be used as a type qualifier.
Consider the following declarations:
\begin{lstlisting}
int f1(monitor & mutex m);
…
int f5(graph(monitor*) & mutex m);
\end{lstlisting}
The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2}, it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are acquired only once becomes non-trivial. This can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To keep everyone as sane as possible~\cite{Chicken}, this project imposes the requirement that a routine may only acquire one monitor per parameter, and it must be the type of the parameter (ignoring potential qualifiers and indirections). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type safe and yet result in undefined behaviour, because only the first element of the array is acquired. However, this ambiguity is part of the C type system with respect to arrays. For this reason, it would also be reasonable to disallow \code{mutex} in contexts where arrays may be passed.

\subsubsection{Data semantics} \label{data}
Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle, but in most cases monitors contain shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection.
For example, here is a complete version of the counter shown in section \ref{call}:
\begin{lstlisting}
mutex struct counter_t {
…

int ++?(counter_t & mutex this) {
	return ++this.value;
}

//need for mutex is platform dependent here
void ?{}(int * this, counter_t & mutex cnt) {
	*this = (int)cnt;
}
\end{lstlisting}

This simple counter is used as follows:
\begin{center}
\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
\begin{lstlisting}
//shared counter
counter_t cnt;

//multiple threads access counter
thread 1 : cnt++;
thread 2 : cnt++;
thread 3 : cnt++;
...
thread N : cnt++;
\end{lstlisting}
\end{tabular}
\end{center}

Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing. Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual exclusion, \CFA uses an explicit mechanism to acquire mutual exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.
\begin{lstlisting}
int f(MonitorA & mutex a, MonitorB & mutex b);

MonitorA a;
MonitorB b;
f(a,b);
\end{lstlisting}
This code acquires both locks before entering the critical section, called \emph{\gls{group-acquire}}.
In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use multi-acquisition locks, users can effectively force the acquiring order. For example, note which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
\begin{lstlisting}
void foo(A & mutex a, B & mutex b) { //acquire a & b
	//...
}

void bar(A & mutex a, B & nomutex b) { //acquire a
	//...
	foo(a, b); //acquire b
	//...
}

void baz(A & nomutex a, B & mutex b) { //acquire b
	//...
	foo(a, b); //acquire a
	//...
}
\end{lstlisting}

The multi-acquisition monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz}, the monitors are acquired in opposite order. Such use leads to nested monitor call problems~\cite{Lister77}, which are a specific instance of the lock acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behaviour. As shown on several occasions\cit, solving this problem requires:
\begin{enumerate}
	\item Dynamically tracking the monitor-call order.
	\item Implementing rollback semantics.
\end{enumerate}

While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex\cit. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time.

\subsubsection{Implementation Details: Interaction with polymorphism}
At first glance, interaction between monitors and \CFA's concept of polymorphism seems complex to support.
However, it is shown that entry-point locking can solve most of the issues.

Before looking into complex control flow, it is important to present the difference between the two acquiring options, \gls{callsite-locking} and \gls{entry-point-locking}, i.e., acquiring the monitors before making a mutex call or as the first instruction of the mutex call. For example:

\begin{center}
\begin{tabular}{|c|c|c|}
Code & \gls{callsite-locking} & \gls{entry-point-locking} \\
\CFA & pseudo-code & pseudo-code \\
\hline
\begin{lstlisting}
void foo(monitor & mutex a) {



	//Do Work
	//...

}

void main() {
	monitor a;



	foo(a);

}
\end{lstlisting} &\begin{lstlisting}
foo(& a) {



	//Do Work
	//...

}

main() {
	monitor a;
	//calling routine
	//handles concurrency
	acquire(a);
	foo(a);
	release(a);
}
\end{lstlisting} &\begin{lstlisting}
foo(& a) {
	//called routine
	//handles concurrency
	acquire(a);
	//Do Work
	//...
	release(a);
}

main() {
	monitor a;



	foo(a);

}
\end{lstlisting}
\end{tabular}
\end{center}

First of all, interaction between \code{otype} polymorphism and monitors is impossible, since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, because the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. \Gls{callsite-locking} would require a significant amount of work, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking}, calling a monitor routine becomes exactly the same as calling it from anywhere else.

\subsection{Internal scheduling} \label{insched}
Monitors also need to schedule waiting threads internally as a means of synchronization. Internal scheduling is one of the simplest examples of such a feature. It allows users to declare condition variables and have threads wait on them and be signalled from them. Here is a simple example of such a technique:

\begin{lstlisting}
…
\end{lstlisting}

Note that in \CFA, \code{condition} variables have no particular need to be stored inside a monitor, beyond any software-engineering reasons.
Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. This semantic can easily be extended to multi-monitor calls by offering the same guarantee.
\begin{center}
\begin{tabular}{ c @{\hskip 0.65in} c }
…
\begin{lstlisting}
void foo(monitor & mutex a,
         monitor & mutex b) {
	//...
	wait(a.e);
…
\end{lstlisting} &\begin{lstlisting}
void bar(monitor & mutex a,
         monitor & mutex b) {
	signal(a.e);
}
…
\end{tabular}
\end{center}
A direct extension of the single-monitor semantics is to release all locks when waiting and to transfer ownership of all locks when signalling. However, for the purpose of synchronization, it may be useful to only release some of the locks but keep others. It is possible to support internal scheduling and \gls{group-acquire} without any extra syntax by relying on the order of acquisition. Here is an example of the different contexts in which internal scheduling can be used. (Note that here the use of helper routines is irrelevant; only routines that acquire mutual exclusion have an impact on internal scheduling):

\begin{center}
\begin{tabular}{|c|c|c|}
…
\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {

	wait(e); //release a & b
}
…
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a
void bar(monitor & mutex a,
         monitor & nomutex b) {
	foo(a,b);
}

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	wait(e); //release a & b
}
…
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a
void bar(monitor & mutex a,
         monitor & nomutex b) {
	baz(a,b);
}

//acquire b
void baz(monitor & nomutex a,
         monitor & mutex b) {
	wait(e); //release b
}
…
\end{lstlisting}
\end{tabular}
\end{center}
Context 1 is the simplest way of acquiring more than one monitor (\gls{group-acquire}), using a routine with multiple parameters having the \code{mutex} keyword. Context 2 also uses \gls{group-acquire}, in routine \code{foo}. However, the routine is called by routine \code{bar}, which only acquires monitor \code{a}. Since monitors can be acquired multiple times, this does not cause a deadlock by itself, but it does force the acquiring order to \code{a} then \code{b}. Context 3 also forces the acquiring order to be \code{a} then \code{b} but does not use \gls{group-acquire}. The previous example tries to illustrate the semantics that must be established to support releasing monitors in a \code{wait} statement. In all cases, the behaviour of the wait statement is to release all the locks that were acquired by the inner-most monitor call; that is, \code{a & b} in contexts 1 and 2, and \code{b} only in context 3. Here are a few other examples of this behaviour.

\begin{center}
\begin{tabular}{|c|c|c|}
\begin{lstlisting}
condition e;

//acquire b
void foo(monitor & nomutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire a
void bar(monitor & mutex a,
         monitor & nomutex b) {

	wait(e); //release a
	         //keep b
}

foo(a, b);
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire b
void bar(monitor & nomutex a,
         monitor & mutex b) {

	wait(e); //release b
	         //keep a
}

foo(a, b);
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire none
void bar(monitor & nomutex a,
         monitor & nomutex b) {

	wait(e); //release a & b
	         //keep none
}

foo(a, b);
\end{lstlisting}
\end{tabular}
\end{center}
Note the right-most example is actually a trick pulled on the reader. Monitor state information is stored in thread-local storage rather than in the routine context, which means that helper routines and other \code{nomutex} routines are invisible to the runtime system with regard to concurrency. This means that in the right-most example, the routine parameters are completely unnecessary. However, calling this routine from outside a valid monitor context is undefined.

These semantics imply that, in order to release a subset of the monitors currently held, users must write (and name) a routine that only acquires the desired subset and simply calls \code{wait}. While users can use this method, \CFA offers the \code{wait_release}\footnote{Not sure if an overload of \code{wait} would work...} routine, which releases only the specified monitors. In the previous examples, the code in the center uses the \code{bar} routine to only release monitor \code{b}. Using the \code{wait_release} helper, this can be rewritten without having to name two routines:
\begin{center}
\begin{tabular}{ c c c }
\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	bar(a,b);
}

//acquire b
void bar(monitor & nomutex a,
         monitor & mutex b) {

	wait(e); //release b
	         //keep a
}

foo(a, b);
\end{lstlisting} &\begin{lstlisting}
=>
\end{lstlisting} &\begin{lstlisting}
condition e;

//acquire a & b
void foo(monitor & mutex a,
         monitor & mutex b) {
	wait_release(e,b); //release b
	                   //keep a
}

foo(a, b);
\end{lstlisting}
\end{tabular}
\end{center}
Regardless of the context in which the \code{wait} statement is used, \code{signal} must be called holding the same set of monitors. In all cases, \code{signal} only needs a single parameter, the condition variable that needs to be signalled. But \code{signal} needs to be called from the same monitor(s) as the call to \code{wait}. Otherwise, mutual exclusion cannot be properly transferred back to the waiting monitor.

Finally, an additional semantic which can be very useful is the \code{signal_block} routine. This routine behaves like \code{signal} for all of the semantics discussed above, but with the subtlety that mutual exclusion is transferred to the waiting task immediately, rather than waiting for the end of the critical section.
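To make the difference concrete, here is a minimal sketch of a hand-off protocol using \code{signal_block}; the \code{handoff_t} monitor and the \code{give}/\code{take} routines are invented for illustration, and the \code{signal_block} signature is assumed to mirror \code{signal}:

\begin{lstlisting}
mutex struct handoff_t {
	int data;
	condition ready;
};

//block until a value is published, then consume it
int take(handoff_t & mutex this) {
	wait(this.ready);  //release the monitor and block
	return this.data;  //runs before give() resumes
}

//publish a value and immediately run the waiter
void give(handoff_t & mutex this, int v) {
	this.data = v;
	signal_block(this.ready); //transfer mutual exclusion now,
	                          //not at the end of the critical section
	//when control returns here, the waiter has already consumed v
}
\end{lstlisting}

With plain \code{signal}, the waiting task would only resume after \code{give} exits the critical section, so \code{give} could not assume the hand-off has completed before its next statement.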
\newpage
\subsection{External scheduling} \label{extsched}
An alternative to internal scheduling is to use external scheduling instead. This method is more constrained and explicit, which may help users tone down the non-deterministic nature of concurrency. Indeed, as the following example demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., \uC) or in terms of data (e.g., Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The following example shows a simple use of \code{accept} versus \code{wait}/\code{signal} and its advantages.

\begin{center}
…
	public:
		void f() { /*...*/ }
		void g() { _Accept(f); }
	private:
…
\end{center}

In the case of internal scheduling, the call to \code{wait} only guarantees that \code{g} is the last routine to access the monitor. This entails that the routine \code{f} may have acquired mutual exclusion several times while routine \code{h} was waiting. On the other hand, external scheduling guarantees that while routine \code{h} was waiting, no routine other than \code{g} could acquire the monitor.

…

\section{Parallelism}
Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being a direct consequence of speed increases, parallelism has become the new source of increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance, and more specifically of throughput and hardware utilization. The lowest-level approach to parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \glspl{kthread} are now mostly used as an implementation tool rather than a user-oriented one. There are several alternatives to solve these issues, all of which have strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads.

\subsection{User-level threads}
A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provides, but can be used on a much larger scale. This approach is the most powerful solution, as it allows all the features of multi-threading while removing several of the more expensive costs of using kernel threads. The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself.

Examples of languages that support \glspl{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.

\subsubsection{Fibers : user-level threads without preemption}
A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantic differences with \glspl{uthread}.
Advocates of \glspl{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore, this proposal largely ignores fibers.

An example of a language that uses fibers is Go~\cite{Go}.

\subsection{Jobs and thread pools}
The approach on the opposite end of the spectrum is to base parallelism on \glspl{pool}. Indeed, \glspl{pool} offer limited flexibility but with the benefit of a simpler user interface. In \gls{pool} based systems, users express parallelism as units of work and a dependency graph (either explicit or implicit) that ties them together. This approach means users need not worry about concurrency, but it significantly limits the interactions that can occur among jobs. Indeed, any \gls{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably, as shown in the sketch below. It can be argued that a solution to this problem is to use more workers than available cores. However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.

The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
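To make the blocking pitfall concrete, consider the following minimal sketch; the \code{pool_submit} and \code{pool_wait} routines, as well as \code{block_on_io} and \code{compute}, are hypothetical names invented for illustration and do not correspond to any specific \gls{pool} API:

\begin{lstlisting}
//hypothetical job-pool interface, for illustration only
void pool_submit( void (*job)(void) ); //enqueue a unit of work
void pool_wait( void );                //block until all jobs complete

void io_job( void ) {
	//a blocking call here blocks the worker thread itself,
	//idling one core for the duration of the wait
	block_on_io();
}

void cpu_job( void ) {
	//pure computation keeps its worker fully utilized
	compute();
}

int main() {
	pool_submit( io_job );  //risks idling a worker
	pool_submit( cpu_job ); //ideal pool workload
	pool_wait();
}
\end{lstlisting}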
\subsection{Paradigm performance}
While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees that the \gls{pool} based system has the best performance, thanks to the lower memory overhead. However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grained context switching, which results in better resource utilization, but context switches are more expensive, and the extra control means users need to tweak more variables to get the desired performance. Furthermore, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.

\section{\CFA 's Thread Building Blocks}
As a system-level language, \CFA should offer both performance and flexibility as its primary goals, with simplicity and user-friendliness being a secondary concern. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.

\subsection{Thread Interface}
The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread}, and as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}).
% ======================================================================
\subsection{Thread Interface}
The basic building blocks of \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread}, and as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a struct declaration with the prefix \code{thread} as follows:

\begin{lstlisting}
thread struct foo {};
\end{lstlisting}

Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example: \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), the existing syntax for declaring routines with special semantics can be reused, i.e. operator overloading. As such, the \code{main} routine of a thread can be defined as:
\begin{lstlisting}
thread struct foo {};

void ?main(foo* this) {
	sout | "Hello World!" | endl;
}
\end{lstlisting}

In this example, threads of type \code{foo} start execution in the \code{void ?main(foo*)} routine, which in this case prints \code{"Hello World!"}. While this proposal encourages this approach, since it enforces strongly-typed programming, users may prefer routine-based thread semantics for the sake of simplicity. With the above semantics, it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously:
\begin{lstlisting}
typedef void (*voidFunc)(void);

thread struct FuncRunner {
	voidFunc func;
};

//ctor
void ?{}(FuncRunner* this, voidFunc inFunc) {
	this->func = inFunc;
}

//main
void ?main(FuncRunner* this) {
	this->func();
}
\end{lstlisting}
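For completeness, here is a hypothetical use of \code{FuncRunner}, relying on the fork-on-construction and join-on-destruction semantics introduced below; \code{hello} is an arbitrary example routine:
\begin{lstlisting}
void hello() {
	sout | "Hello from another thread!" | endl;
}

void main() {
	FuncRunner runner = {hello};
	//hello() now runs asynchronously on runner's stack

	//... other work, concurrent with hello() ...

}	//implicit join before runner is destroyed
\end{lstlisting}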
Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} once the constructor has completed and \code{join} before the destructor runs.
\begin{lstlisting}
thread struct World {};

void ?main(World* this) {
	sout | "World!" | endl;
}

void main() {
	World w;
	//Thread w forks here

	//Printing "Hello " and "World!" now occurs concurrently
	sout | "Hello " | endl;

	//Implicit join at end of scope
}
\end{lstlisting}

This semantic has several advantages over explicit semantics: type safety is guaranteed, a thread is always started and stopped exactly once, and users cannot forget to start or wait for a thread. However, one apparent drawback of this system is that threads always form a lattice, that is, they are always destroyed in the opposite order of construction. While this seems like a significant limitation, existing \CFA semantics solve the problem. Indeed, using dynamic allocation to create a thread naturally lets the thread outlive the scope in which it was created, much like dynamically allocated memory lets objects outlive the scope in which they were created:

\begin{lstlisting}
thread struct MyThread {
	//...
};

//ctor
void ?{}(MyThread* this,
         bool is_special = false) {
	//...
}

//main
void ?main(MyThread* this) {
	//...
}

void foo() {
	MyThread* special_thread;
	{
		MyThread thrd = {false};
		//Start a thread at the beginning of the scope

		DoStuff();

		//Create another thread, which outlives this scope
		special_thread = new MyThread{true};

		//Wait for the local thread to finish
	}
	DoMoreStuff();

	//Now wait for the special thread to finish
	delete special_thread;
}
\end{lstlisting}

Another advantage of this semantic is that it scales naturally to multiple threads, making basic synchronization very simple:

\begin{lstlisting}
thread struct MyThread {
	//...
};

//ctor
void ?{}(MyThread* this) {}

//main
void ?main(MyThread* this) {
	//...
}

void foo() {
	MyThread thrds[10];
	//Start 10 threads at the beginning of the scope

	DoStuff();

	//Wait for the 10 threads to finish
}
\end{lstlisting}
\subsection{Coroutines: A stepping stone}\label{coroutine}
While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant underlying aspect of the concurrency system. Indeed, while having nothing to do with parallelism and arguably very little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA.

The core API of coroutines revolves around two features: independent stacks and \code{suspend}/\code{resume}. Much like threads, the syntax for declaring a coroutine is to declare a type and a \code{main} routine for it to start:
\begin{lstlisting}
coroutine struct MyCoroutine {
	//...
};

//ctor
void ?{}(MyCoroutine* this) {}

//main
void ?main(MyCoroutine* this) {
	sout | "Hello World!" | endl;
}
\end{lstlisting}

Once a coroutine is created, users can context switch to it using \code{resume} and come back using \code{suspend}. Here is an example of a solution to the Fibonacci problem using coroutines:
\begin{lstlisting}
coroutine struct Fibonacci {
	int fn; // used for communication
};

void ?main(Fibonacci* this) {
	int fn1, fn2; // retained between resumes
	this->fn = 0;
	fn1 = this->fn;
	suspend(this); // return to last resume

	this->fn = 1;
	fn2 = fn1;
	fn1 = this->fn;
	suspend(this); // return to last resume

	for ( ;; ) {
		this->fn = fn1 + fn2;
		fn2 = fn1;
		fn1 = this->fn;
		suspend(this); // return to last resume
	}
}

int next(Fibonacci& this) {
	resume(&this); // transfer to last suspend
	return this.fn;
}

void main() {
	Fibonacci f1, f2;
	for ( int i = 1; i <= 10; i += 1 ) {
		sout | next(f1) | '§\verb+ +§' | next(f2) | endl;
	}
}
\end{lstlisting}
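Finally, to illustrate why coroutines are a stepping stone toward threads, the following purely hypothetical sketch shows the relationship: a user-level thread runtime is, at its core, a loop that resumes ready coroutines until they suspend again. The \code{Scheduler} and \code{Task} types and the \code{next_ready} routine are assumptions for illustration, not proposed API:
\begin{lstlisting}
struct Scheduler;	//hypothetical scheduler state

coroutine struct Task {
	//... hypothetical unit of execution
};

void run(Scheduler* this) {
	for(;;) {
		Task* t = next_ready(this);	//hypothetical: pick a ready task
		if(t == 0) break;		//no ready work left

		resume(t);	//runs until the task calls suspend
	}
}
\end{lstlisting}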
\newpage
\textbf{WORK IN PROGRESS}
\subsection{The \CFA Kernel: Processors, Clusters and Threads}\label{kernel}