Changeset 21a1efb
- Timestamp: Sep 12, 2017, 4:06:56 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: b2e2e34
- Parents: 416cc86
- Location: doc/proposals/concurrency
- Files: 2 added, 7 edited
Legend:
- Unmodified lines have no prefix
- Added lines are prefixed with +
- Removed lines are prefixed with -
- Elided unmodified regions are marked with …
doc/proposals/concurrency/Makefile
monitor \
ext_monitor \
+ int_monitor \
}}
doc/proposals/concurrency/annex/local.bib
title = {Intel Thread Building Blocks},
}
+
+ @manual{www-cfa,
+ 	keywords = {Cforall},
+ 	title = {Cforall Programming Language},
+ 	address = {https://plg.uwaterloo.ca/~cforall/}
+ }
+
+ @article{rob-thesis,
+ 	keywords = {Constructors, Destructors, Tuples},
+ 	author = {Rob Schluntz},
+ 	title = {Resource Management and Tuples in Cforall},
+ 	year = 2017
+ }
doc/proposals/concurrency/style/cfa-format.tex
xleftmargin=\parindentlnth, % indent code to paragraph indentation
moredelim=[is][\color{red}\bfseries]{**R**}{**R**}, % red highlighting
- morekeywords=[2]{accept, signal, signal_block, wait},
+ morekeywords=[2]{accept, signal, signal_block, wait, waitfor},
}
doc/proposals/concurrency/text/basics.tex
% ======================================================================
% ======================================================================
- \chapter{Basics}
+ \chapter{Basics}\label{basics}
% ======================================================================
% ======================================================================
…

\section{Coroutines: A stepping stone}\label{coroutine}
- While the main focus of this proposal is concurrency and parallelism, … coroutines need to deal with context-switchs andand other context-management operations. …
+ While the main focus of this proposal is concurrency and parallelism, as mentioned above it is important to address coroutines, which are actually a significant underlying aspect of a concurrency system. Indeed, while having nothing to do with parallelism and arguably little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core API of coroutines revolves around two features: independent call stacks and \code{suspend}/\code{resume}.
Here is an example of a solution to the Fibonacci problem using \CFA coroutines:
…
};

- void ?{}(Fibonacci *this) { // constructor
- 	this->fn = 0;
+ void ?{}(Fibonacci & this) { // constructor
+ 	this.fn = 0;
}

// main automatically called on first resume
- void main(Fibonacci *this) {
+ void main(Fibonacci & this) {
	int fn1, fn2; // retained between resumes
- 	this->fn = 0;
- 	fn1 = this->fn;
+ 	this.fn = 0;
+ 	fn1 = this.fn;
	suspend(this); // return to last resume

- 	this->fn = 1;
+ 	this.fn = 1;
	fn2 = fn1;
- 	fn1 = this->fn;
+ 	fn1 = this.fn;
	suspend(this); // return to last resume

	for ( ;; ) {
- 		this->fn = fn1 + fn2;
+ 		this.fn = fn1 + fn2;
		fn2 = fn1;
- 		fn1 = this->fn;
+ 		fn1 = this.fn;
		suspend(this); // return to last resume
	}
}

- int next(Fibonacci *this) {
+ int next(Fibonacci & this) {
	resume(this); // transfer to last suspend
	return this.fn;
…
	Fibonacci f1, f2;
	for ( int i = 1; i <= 10; i += 1 ) {
- 		sout | next(&f1) | next(&f2) | endl;
+ 		sout | next( f1 ) | next( f2 ) | endl;
	}
}
…
};

- void ?{}(Fibonacci *this) {
- 	this->fn = 0;
- 	(&this->c){};
+ void ?{}(Fibonacci & this) {
+ 	this.fn = 0;
+ 	(this.c){};
}
\end{cfacode}
…
\subsection{Alternative: Lambda Objects}

For coroutines, as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types:
\begin{cfacode}
asymmetric_coroutine<>::pull_type
…
symmetric_coroutine<>::call_type
symmetric_coroutine<>::yield_type
\end{cfacode}
Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well-known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.

A variation of this would be to use a simple function pointer in the same way pthread does for threads:
\begin{cfacode}
void foo( coroutine_t cid, void * arg ) {
…

\subsection{Alternative: Trait-based coroutines}

- Finally the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine is a coroutine.
+ Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine.

\begin{cfacode}
trait is_coroutine(dtype T) {
- 	void main(T *this);
- 	coroutine_desc * get_coroutine(T *this);
+ 	void main(T & this);
+ 	coroutine_desc * get_coroutine(T & this);
};
\end{cfacode}
This ensures an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory footprint of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.

\begin{center}
…
};

static inline
coroutine_desc * get_coroutine(
- 	struct MyCoroutine * this
+ 	struct MyCoroutine & this
) {
- 	return &this->__cor;
+ 	return &this.__cor;
}
…
\end{center}

The combination of these two approaches allows users new to concurrency to have an easy and concise method, while more advanced users can expose themselves to otherwise hidden pitfalls at the benefit of tighter control on memory layout and initialization.
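Since the effect of the \code{coroutine} keyword is described here only in prose, a hedged sketch of the boilerplate it stands in for may help. It is extrapolated from the \code{MyCoroutine} lines visible above; the exact generated form is an assumption:

\begin{cfacode}
// what a user writes
coroutine MyCoroutine {
	int count; // user fields
};

// roughly what the keyword expands to (sketch)
struct MyCoroutine {
	int count;
	coroutine_desc __cor; // hidden coroutine descriptor
};

static inline coroutine_desc * get_coroutine(MyCoroutine & this) {
	return &this.__cor; // satisfies the is_coroutine trait
}

void main(MyCoroutine & this); // user still supplies this body
\end{cfacode}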
\section{Thread Interface}\label{threads}
- The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both use and kernel threads are supported …
+ The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallel mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using a struct declaration \code{thread} as follows:

\begin{cfacode}
…
\begin{cfacode}
trait is_thread(dtype T) {
- 	void ^?{}(T *mutex this);
- 	void main(T *this);
- 	thread_desc* get_thread(T *this);
+ 	void ^?{}(T & mutex this);
+ 	void main(T & this);
+ 	thread_desc* get_thread(T & this);
};
\end{cfacode}
…
thread foo {};

- void main(foo *this) {
+ void main(foo & this) {
	sout | "Hello World!" | endl;
}
…

//ctor
- void ?{}(FuncRunner *this, voidFunc inFunc) {
- 	func = inFunc;
+ void ?{}(FuncRunner & this, voidFunc inFunc) {
+ 	this.func = inFunc;
}

//main
- void main(FuncRunner *this) {
- 	this->func();
+ void main(FuncRunner & this) {
+ 	this.func();
}
\end{cfacode}
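A hedged usage sketch for the \code{FuncRunner} pattern, assuming the start-on-construction and join-at-end-of-scope lifetime described below (\code{hello} is a hypothetical function introduced for the example):

\begin{cfacode}
void hello( void ) {
	sout | "Hello World!" | endl;
}

void some_code( void ) {
	FuncRunner runner = { hello }; // thread starts after construction
	// ... concurrent work here ...
} // implicit join: block until runner's main has finished
\end{cfacode}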
…
thread World;

- void main( thread World*this) {
+ void main(World & this) {
	sout | "World!" | endl;
}
…

\begin{cfacode}
thread MyThread {
	//...
};

//main
- void main(MyThread* this) {
+ void main(MyThread & this) {
	//...
}

void foo() {
	MyThread thrds[10];
	//Start 10 threads at the beginning of the scope

	DoStuff();

	//Wait for the 10 threads to finish
}
\end{cfacode}

- However, one of the apparent drawbacks of this system is that threads now always form a lattice, … However, storage allocation os not limited to blocks; …
+ However, one of the apparent drawbacks of this system is that threads now always form a lattice, that is, they are always destroyed in the opposite order of construction because of block structure. However, storage allocation is not limited to blocks; dynamic allocation can create threads that outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.

\begin{cfacode}
thread MyThread {
	//...
};

//main
- void main(MyThread* this) {
+ void main(MyThread & this) {
	//...
}

void foo() {
- 	MyThread* long_lived;
+ 	MyThread * long_lived;
	{
		MyThread short_lived;
		//Start a thread at the beginning of the scope

		DoStuff();

		//create another thread that will outlive the thread in this scope
		long_lived = new MyThread;

		//Wait for the thread short_lived to finish
	}
	DoMoreStuff();

	//Now wait for the long_lived to finish
	delete long_lived;
}
\end{cfacode}
doc/proposals/concurrency/text/cforall.tex
% ======================================================================
% ======================================================================
+
+ As mentioned in the introduction, this document presents the design for the concurrency features in \CFA. Since it is a new language, here is a quick review of it, specifically tailored to the features needed to support concurrency.
+
+ \CFA is an extension of ISO C and therefore supports much of the same paradigms as C. It is a non-object-oriented system-level language, meaning most of the major abstractions have either no runtime cost or can be opted out of easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over assembly. The vast majority of the code produced by a \CFA compiler respects the memory layouts and calling conventions laid out by C. However, while \CFA is not an object-oriented language according to a strict definition, it does have some notion of objects, most importantly construction and destruction of objects. Most of the following pieces of code can be found as-is on the \CFA website~\cite{www-cfa}.
+
+ \section{References}
+
+ Like \CC, \CFA introduces references as an alternative to pointers. With regard to concurrency, the semantic differences between pointers and references are not particularly relevant, but since this document mostly uses references, here is a quick overview of their semantics:
+ \begin{cfacode}
+ int x, *p1 = &x, **p2 = &p1, ***p3 = &p2,
+ 	&r1 = x, &&r2 = r1, &&&r3 = r2;
+ ***p3 = 3;   // change x
+ r3 = 3;      // change x, ***r3
+ **p3 = ...;  // change p1
+ &r3 = ...;   // change r1, (&*)**r3
+ *p3 = ...;   // change p2
+ &&r3 = ...;  // change r2, (&(&*)*)*r3
+ &&&r3 = p3;  // change r3 to p3, (&(&(&*)*)*)r3
+ int y, z, & ar[3] = { x, y, z };  // initialize array of references
+ &ar[1] = &z;          // change reference array element
+ typeof( ar[1] ) p;    // is int, i.e., the type of referenced object
+ typeof( &ar[1] ) q;   // is int &, i.e., the type of reference
+ sizeof( ar[1] ) == sizeof( int );     // is true, i.e., the size of referenced object
+ sizeof( &ar[1] ) == sizeof( int * );  // is true, i.e., the size of a reference
+ \end{cfacode}
+ The important thing to take away from this code snippet is that references offer a handle to an object, much like pointers, but one that is automatically dereferenced when convenient.
+
+ \section{Overloading}
+
+ Another important feature \CFA has in common with \CC is function overloading:
+ \begin{cfacode}
+ // selection based on type and number of parameters
+ void f( void );         // (1)
+ void f( char );         // (2)
+ void f( int, double );  // (3)
+ f();                    // select (1)
+ f( 'a' );               // select (2)
+ f( 3, 5.2 );            // select (3)
+
+ // selection based on type and number of returns
+ char f( int );          // (1)
+ double f( int );        // (2)
+ [ int, double ] f( int );   // (3)
+ char c = f( 3 );        // select (1)
+ double d = f( 4 );      // select (2)
+ [ int, double ] t = f( 5 ); // select (3)
+ \end{cfacode}
+ This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent clashes. As seen in chapter \ref{basics}, the \code{main} routine is an example of a routine that benefits from overloading when concurrency is introduced.
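As a hedged illustration of that last point, overload resolution on the parameter type is what lets every coroutine and thread type define its own \code{main}; the two types below are assumptions made for the example:

\begin{cfacode}
coroutine Fib { int fn; };
thread Worker {};

void main(Fib & this)    { /* coroutine body */ }
void main(Worker & this) { /* thread body */ }
// same routine name; the argument type selects the overload
\end{cfacode}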
+ \section{Operators}
+ Overloading also extends to operators. The syntax for denoting operator overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation would be, like so:
+ \begin{cfacode}
+ int ++?( int op );             // unary prefix increment
+ int ?++( int op );             // unary postfix increment
+ int ?+?( int op1, int op2 );   // binary plus
+ int ?<=?( int op1, int op2 );  // binary less than
+ int ?=?( int & op1, int op2 );  // binary assignment
+ int ?+=?( int & op1, int op2 ); // binary plus-assignment
+
+ struct S { int i, j; };
+ S ?+?( S op1, S op2 ) { // add two structures
+ 	return (S){ op1.i + op2.i, op1.j + op2.j };
+ }
+ S s1 = { 1, 2 }, s2 = { 2, 3 }, s3;
+ s3 = s1 + s2; // compute sum: s3 == { 3, 5 }
+ \end{cfacode}
+
+ Since concurrency does not use operator overloading, this feature is more important as an introduction to the syntax of constructors.
+
+ \section{Constructors/Destructors}
+ \CFA uses the following syntax for constructors and destructors:
+ \begin{cfacode}
+ struct S {
+ 	size_t size;
+ 	int * ia;
+ };
+ void ?{}( S & s, int asize ) with s { // constructor operator
+ 	size = asize; // initialize fields
+ 	ia = calloc( size, sizeof( int ) );
+ }
+ void ^?{}( S & s ) with s { // destructor operator
+ 	free( ia ); // de-initialize fields
+ }
+ int main() {
+ 	S x = { 10 }, y = { 100 }; // implicit calls: ?{}( x, 10 ), ?{}( y, 100 )
+ 	... // use x and y
+ 	^x{}; ^y{}; // explicit calls to de-initialize
+ 	x{ 20 }; y{ 200 }; // explicit calls to reinitialize
+ 	... // reuse x and y
+ } // implicit calls: ^?{}( y ), ^?{}( x )
+ \end{cfacode}
+ The language guarantees that every object and all of its fields are constructed. Like \CC, construction is automatically done on declaration, and destruction is done when the declared variable reaches the end of its scope.
+
+ For more information see \cite{cforall-ug,rob-thesis,www-cfa}.
doc/proposals/concurrency/text/concurrency.tex
% ======================================================================
% ======================================================================
- It easier to understand the problem of multi-monitor scheduling using a series of pseudo-code. …
+ It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that, for simplicity, in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java's built-in monitors.

\begin{multicols}{2}
…
\end{center}

- It is particularly important to pay attention to code sections 8 and 3, …
+ It is particularly important to pay attention to code sections 8 and 4, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{group-acquire} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should "release A \& B" (line 16), it must actually transfer ownership of monitor B to the waiting thread. This ownership transfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option because it would violate mutual exclusion. There are three options:

\subsubsection{Delaying signals}
…
Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line 15 before line 11 and get the reverse effect.

- In both cases however, the threads need to be able to distinguish on a per monitor basis which ones need to be released and which ones need to be transferred. Which means monitors cannot be handled as a single homogenous group.
+ In both cases, the threads need to be able to distinguish, on a per-monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogeneous group.

\subsubsection{Dependency graphs}
…
Resolving the dependency graph being a complex and expensive endeavour, this solution is not the preferred one.

- \subsubsection{Partial signalling}
+ \subsubsection{Partial signalling} \label{partial-sig}
Finally, the solution that is chosen for \CFA is to use partial signalling.
Consider the following case:
…

% ======================================================================
% ======================================================================
- \subsection{Internal scheduling: Implementation} \label{insched-impl}
+ \subsection{Internal scheduling: Implementation} \label{inschedimpl}
% ======================================================================
% ======================================================================
- \TODO
+ There are several challenges specific to \CFA when implementing internal scheduling. These challenges are direct results of \gls{group-acquire} and loose object definitions. These two constraints are the root cause of most design decisions in the implementation of internal scheduling. Furthermore, to avoid the headaches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is entirely free of mallocs and other dynamic memory-allocation schemes. This is to avoid the chicken-and-egg problem of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal means that memory management is a constant concern in the design of the system.
+
+ The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrusive\cit to avoid the need for memory allocation, which entails that all the information needed to keep track of blocked threads must live in predeclared fields. Since internal scheduling can use an unbound amount of memory (depending on \gls{group-acquire}), statically defining information in the intrusive fields of threads is insufficient. The only variable-sized container that does not require memory allocation is the call stack, which is heavily used in the implementation of internal scheduling; in particular, the GCC variable-length array extension is used extensively.
+
+ Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. In the case of internal scheduling, the threads and the condition both allow a fixed amount of memory to be stored, while mutex routines and the actual blocking call allow for an unbound amount (though adding too much to the mutex-routine stack size can become expensive faster).
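For illustration, a hedged sketch of how a variable-length array can hold per-monitor queue nodes on the blocking routine's stack; every name here is hypothetical, not the actual runtime layout:

\begin{cfacode}
void block_on( monitor_desc * monitors[], int count ) {
	// one intrusive node per monitor, allocated in this stack frame;
	// the frame lives until the thread unblocks, so no heap allocation is needed
	wait_node nodes[count]; // GCC variable-length array
	for ( int i = 0; i < count; i += 1 ) {
		enqueue( monitors[i], &nodes[i] ); // hypothetical helper
	}
	// ... park the thread ...
}
\end{cfacode}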
+ The following figure is the traditional illustration of a monitor:
+
+ \begin{center}
+ {\resizebox{0.4\textwidth}{!}{\input{monitor}}}
+ \end{center}
+
+ For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{group-acquire}, two changes to this picture are required. First, it does not make sense to tie the condition to a single monitor, since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the objects waiting on the condition and AS-stack cannot simply contain the waiting thread, since a single thread can potentially wait on multiple monitors. As mentioned in section \ref{partial-sig}, the handling of multiple monitors is done by partial signalling, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields:
+
+ \begin{center}
+ {\resizebox{0.8\textwidth}{!}{\input{int_monitor}}}
+ \end{center}
+
+ \newpage
+
+ This picture and the proper entry and leave algorithms are the fundamental implementation of internal scheduling.
+
+ \begin{multicols}{2}
+ Entry
+ \begin{pseudo}[numbers=left]
+ if monitor is free
+ 	enter
+ elif I already own the monitor
+ 	continue
+ else
+ 	block
+ increment recursion
+ \end{pseudo}
+ \columnbreak
+ Exit
+ \begin{pseudo}[numbers=left, firstnumber=8]
+ decrement recursion
+ if recursion == 0
+ 	if signal_stack not empty
+ 		set_owner to thread
+ 		if all monitors ready
+ 			wake-up thread
+
+ 	if entry queue not empty
+ 		wake-up thread
+ \end{pseudo}
+ \end{multicols}
+
+ There are some important things to notice about the exit routine. The solution discussed in section \ref{partial-sig} can be seen on line 11 of the previous pseudo-code. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is safe and prevents any potential barging.
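For illustration, a hedged \CFA-style sketch of this partial transfer; the monitor type, routine names, and global declarations are illustrative, since the real runtime operates on its internal structures:

\begin{cfacode}
mutex struct M {};
M a, b;
condition c;

void waiter(M & mutex a, M & mutex b) {
	wait(c); // atomically releases a & b and blocks;
	         // only runs again once it owns both monitors
}

void inner(M & mutex a, M & mutex b) {
	signal(c); // wake-up is deferred
} // exiting transfers b directly to the waiter (no barging);
  // the waiter still lacks a, so it stays blocked

void signaller(M & mutex a) {
	inner(a, b); // nested group-acquire of a & b
	// monitor a can still be used here
} // exiting transfers a; the waiter now owns a & b and wakes up
\end{cfacode}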
% ======================================================================
…
	inUse = true;
}
- void g() {
+ void V() {
	inUse = false;
…
\end{tabular}
\end{center}

- … The following example shows a simple use \code{accept} versus \code{wait}/\code{signal} and its advantages.
-
- In the case of internal scheduling, the call to \code{wait} only guarantees that \code{g} is the last routine to access the monitor. This entails that the routine \code{f} may have acquired mutual exclusion several times while routine \code{h} was waiting. On the other hand, external scheduling guarantees that while routine \code{h} was waiting, no routine other than \code{g} could acquire the monitor.
+ This method is more constrained and explicit, which may help users tone down the non-deterministic nature of concurrency. Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., \uC) or in terms of data (e.g., Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use of \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept} as the core external-scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket APIs.
+
+ In the case of internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor. This entails that the routine \code{V} may have acquired mutual exclusion several times while routine \code{P} was waiting. On the other hand, external scheduling guarantees that while routine \code{P} was waiting, no routine other than \code{V} could acquire the monitor.
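For concreteness, a hedged sketch of a \code{waitfor}-based version of such a P/V exchange; the \code{Semaphore} monitor and the routine bodies are assumptions made for the example, not the elided original:

\begin{cfacode}
mutex struct Semaphore {
	bool inUse;
};

void V(Semaphore & mutex s) {
	s.inUse = false;
}

void P(Semaphore & mutex s) {
	if( s.inUse ) waitfor(V, s); // block until one call to V(s) has run;
	                             // no other mutex routine can run in between
	s.inUse = true;
}
\end{cfacode}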
% ======================================================================
…
void f(A & mutex a);
- void g(A & mutex a) { accept(f); }
- \end{cfacode}
+ void f(int a, float b);
+ void g(A & mutex a) {
+ 	waitfor(f); // Less obvious which f() to wait for
+ }
+ \end{cfacode}

- However, external scheduling is an example where implementation constraints become visible from the interface. …
+ Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Indeed, since there is no hard limit to the number of threads trying to acquire a monitor concurrently, performance is a significant concern. Here is the pseudo-code for the entering phase of a monitor:

\begin{center}
…
if monitor is free
	enter
+ elif I already own the monitor
+ 	continue
elif monitor accepts me
	enter
…
\end{center}

- For the \pscode{monitor is free} condition it is easy to implement a check that can evaluate the condition in a few instructions. …
+ For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \pscode{monitor accepts me} is much harder to implement, depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue, as in the following figure:

\begin{center}
…
\end{center}

- … However, this relies on the fact that all the acceptable routines are declared with the monitor type. …
+ There are other alternatives to these pictures, but in the case of this picture implementing a fast accept check is relatively easy. Indeed, simply updating a bitmask when the acceptor queue changes is enough to have a check that executes in a single instruction, even with a fairly large number (e.g., 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type declares all the acceptable routines. For OO languages this does not compromise much, since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper bound, and that ordering must be consistent across translation units.
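For illustration, a hedged sketch of the kind of constant-time mask check implied here, assuming a dense numbering of at most 128 mutex members; all names are hypothetical:

\begin{cfacode}
typedef struct {
	unsigned long long bits[2]; // one bit per mutex member
} accept_mask;

// updated whenever the acceptor queue changes
static inline void mark_accepted( accept_mask * m, unsigned int id ) {
	m->bits[ id / 64 ] |= 1ULL << ( id % 64 );
}

// fast path on entry: is the routine numbered `id` currently accepted?
static inline bool is_accepted( accept_mask * m, unsigned int id ) {
	return ( m->bits[ id / 64 ] >> ( id % 64 ) ) & 1;
}
\end{cfacode}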
The alternative would be to have a picture more like this one:

\begin{center}
…
\end{center}

- … supporting nested external scheduling may now require additionnal searches on calls to accept to check if a routine is already queued in.
+ Not storing the queues inside the monitor means that the storage can vary between routines, allowing for more flexibility and extensions. Storing an array of function pointers would solve the issue of uniquely identifying acceptable routines. However, the single-instruction bitmask compare has been replaced by dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling may now require additional searches on calls to \code{waitfor}, to check if a routine is already queued in.

- … Here however, the cost of flexibility cannot be trivially removed.
+ At this point we must make a decision between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance; for example, polymorphic routines add significant flexibility, but inlining them means the optimizer can easily remove any runtime cost. Here, however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen, since it allows users to write programs that would otherwise be prohibitively hard to write. This is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.

- In either cases here are a few alternatives for the different syntaxes this syntax : \\
- \begin{center}
- {\renewcommand{\arraystretch}{1.5}
- \begin{tabular}[t]{l @{\hskip 0.35in} l}
- \hline
- \multicolumn{2}{ c }{\code{accept} on type}\\
- \hline
- Alternative 1 & Alternative 2 \\
- \begin{lstlisting}
- mutex struct A
- 	accept( void f(A & mutex a) )
- {};
- \end{lstlisting} &\begin{lstlisting}
- mutex struct A {}
- 	accept( void f(A & mutex a) );
- \end{lstlisting} \\
- Alternative 3 & Alternative 4 \\
- \begin{lstlisting}
- mutex struct A {
- 	accept( void f(A & mutex a) )
- };
- \end{lstlisting} &\begin{lstlisting}
- mutex struct A {
- 	accept :
- 		void f(A & mutex a) );
- };
- \end{lstlisting}\\
- \hline
- \multicolumn{2}{ c }{\code{accept} on routine}\\
- \hline
- \begin{lstlisting}
- mutex struct A {};
-
- void f(A & mutex a)
-
- accept( void f(A & mutex a) )
- void g(A & mutex a) {
- 	/*...*/
- }
- \end{lstlisting}&\\
- \end{tabular}
- }
- \end{center}

- … it is assumed that multiple overloads of the same routine should be scheduled regardless of the overload used. …
+ Another aspect to consider is what happens if multiple overloads of the same routine are used. For the time being, it is assumed that multiple overloads of the same routine are considered distinct routines. However, this could easily be extended in the future.

% ======================================================================
…
External scheduling, like internal scheduling, becomes orders of magnitude more complex when we start introducing multi-monitor syntax. Even in the simplest possible case, some new semantics need to be established:
\begin{cfacode}
- accept( void f(mutex struct A & mutex this) )
mutex struct A {};

mutex struct B {};

void g(A & mutex a, B & mutex b) {
- 	accept(f); //ambiguous, which monitor
+ 	waitfor(f); //ambiguous, which monitor
}
\end{cfacode}
…
\begin{cfacode}
- accept( void f(mutex struct A & mutex this) )
mutex struct A {};

mutex struct B {};

void g(A & mutex a, B & mutex b) {
- 	accept( b, f );
+ 	waitfor( f, b );
}
\end{cfacode}

- This is unambiguous. Both locks will be acquired and kept, when routine \code{f} is called the lock for monitor \code{a} will be temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{b}). This behavior can be extended to multi-monitor accept statment as follows.
+ This is unambiguous. Both locks will be acquired and kept; when routine \code{f} is called, the lock for monitor \code{b} will be temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behaviour can be extended to the multi-monitor \code{waitfor} statement as follows.

\begin{cfacode}
- accept( void f(mutex struct A & mutex, mutex struct A & mutex) )
mutex struct A {};

mutex struct B {};

void g(A & mutex a, B & mutex b) {
- 	accept( b, a, f);
+ 	waitfor( f, a, b);
}
\end{cfacode}
- Note that the set of monitors passed to the \code{accept} statement must be entirely contained in the set of monitor already acquired in the routine. \code{accept} used in any other context is Undefined Behaviour.
+ Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is Undefined Behaviour.
+
+ An important behaviour to note is what happens when the set of monitors only matches partially:
+
+ \begin{cfacode}
+ mutex struct A {};
+
+ mutex struct B {};
+
+ void g(A & mutex a, B & mutex b) {
+ 	waitfor(f, a, b);
+ }
+
+ A a1, a2;
+ B b;
+
+ void foo() {
+ 	g(a1, b);
+ }
+
+ void bar() {
+ 	f(a2, b);
+ }
+ \end{cfacode}
+
+ While the equivalent can happen when using internal scheduling, the fact that conditions are branded on first use means that users have to use two different condition variables. In both cases, partially matching monitor sets will not wake up the waiting thread. It is also important to note that, in the case of external scheduling, as for routine calls, the order of parameters is important; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are two distinct waiting conditions.
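A hedged sketch of that last point, using two monitors of the same type so that both orders type-check; the declarations are illustrative:

\begin{cfacode}
mutex struct M {};
void f(M & mutex m1, M & mutex m2);

void g(M & mutex m1, M & mutex m2) {
	waitfor(f, m1, m2); // accepts calls of the form f(m1, m2)
	// waitfor(f, m2, m1) would be a distinct condition,
	// matching calls of the form f(m2, m1) instead
}
\end{cfacode}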
doc/proposals/concurrency/version
- 0.9.122
+ 0.9.180