Changes in / [816d61c:f13ee31]

Files: 6 added, 5 deleted, 12 edited

Legend: lines prefixed "-" were removed, lines prefixed "+" were added, unprefixed lines are unchanged context, and "..." marks elided unchanged regions.
.gitignore
  libcfa/Makefile
  src/Makefile
- version
+ /version

  # genereted by premake
doc/proposals/concurrency/Makefile
  annex/glossary \
  text/intro \
+ text/cforall \
  text/basics \
  text/concurrency \
doc/proposals/concurrency/build/bump_ver.sh
  #!/bin/bash
- if [ ! -f build/version ]; then
-     echo "0.0.0" > build/version
+ if [ ! -f version ]; then
+     echo "0.0.0" > version
  fi

- sed -r 's/([0-9]+\.[0-9]+.)([0-9]+)/echo "\1\$((\2+1))" > version/ge' build/version > /dev/null
+ sed -r 's/([0-9]+\.[0-9]+.)([0-9]+)/echo "\1\$((\2+1))" > version/ge' version > /dev/null
doc/proposals/concurrency/text/basics.tex
  \section{Basics of concurrency}
- At its core, concurrency is based on having multiple call stacks and potentially multiple threads of execution for these stacks. Concurrency alone without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution and switching between these call stacks on a regular basis. A minimal concurrency product can be achieved by creating coroutines which instead of context switching between each other, always ask an oracle where to context switch next. While coroutines do not technically require a stack, stackfull coroutines are the closest abstraction to a practical "naked" call stack. When writing concurrency in terms of coroutines, the oracle effectively becomes a scheduler and the whole system now follows a cooperative threading model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine but in any case a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to be present, the only feature missing is preemption. Indeed, concurrency challenges appear with the lack of determinism. Guaranteeing mutual-exclusion or synchronisation are simply ways of limiting the lack of determinism in the system. A scheduler introduces order of execution uncertainty while preemption introduces incertainty about when context-switches occur. Now it is important to understand that uncertainty is not necessarily undesireable, uncertainty can often be used by systems to significantly increase performance and is often the basis of giving the user the illusion that hundred of tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as little determinism as correctness will allow\cit.
+ At its core, concurrency is based on having call-stacks and potentially multiple threads of execution for these stacks. Concurrency without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution, and switching between these call stacks on a regular basis. A minimal concurrency product can be achieved by creating coroutines, which instead of context switching between each other, always ask an oracle where to context switch next. While coroutines do not technically require a stack, stackfull coroutines are the closest abstraction to a practical "naked" call stack. When writing concurrency in terms of coroutines, the oracle effectively becomes a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Guaranteeing mutual-exclusion or synchronisation are simply ways of limiting the lack of determinism in a system. A scheduler introduces order of execution uncertainty, while preemption introduces incertainty about where context-switches occur. Now it is important to understand that uncertainty is not necessarily undesireable; uncertainty can often be used by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel.
+ Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit.

  \section{\protect\CFA 's Thread Building Blocks}
- % As a system-level language, \CFA should offer both performance and flexibilty as its primary goals, simplicity and user-friendliness being a secondary concern. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.
- One of the important features that is missing to C is threading. On modern architectures, the lack of threading is becoming less and less forgivable\cite{Sutter05, Sutter05b} and therefore any modern programming language should have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts an a way that is as natural as possible to programmers used to imperative languages. And being a system level language means programmers will expect to be able to choose precisely which features they need and which cost they are willing to pay.
+ One of the important features that is missing in C is threading. On modern architectures, a lack of threading is becoming less and less forgivable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers used to imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.

- \section{Coroutines A stepping stone}\label{coroutine}
+ \section{Coroutines: A stepping stone}\label{coroutine}
- While the main focus of this proposal is concurrency and parallelism, as mentionned above it is important to adress coroutines which are actually a significant underlying aspect of the concurrency system. Indeed, while having nothing todo with parallelism and arguably little to do with concurrency, coroutines need to deal with context-switchs and and other context management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core API of coroutines revolve around two features independent call stacks and \code{suspend}/\code{resume}.
+ While the main focus of this proposal is concurrency and parallelism, as mentionned above it is important to adress coroutines, which are actually a significant underlying aspect of a concurrency system. Indeed, while having nothing todo with parallelism and arguably little to do with concurrency, coroutines need to deal with context-switchs and and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant.
+ The core API of coroutines revolve around two features: independent call stacks and \code{suspend}/\code{resume}.

  Here is an example of a solution to the fibonnaci problem using \CFA coroutines:
  ...
  }

+ // main automacically called on first resume
  void main(Fibonacci* this) {
      int fn1, fn2; // retained between resumes
  ...

  \subsection{Construction}
- One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run some code after the user-constructor runs. In the case of the coroutines this challenge is simpler since there is no loss of determinism brough by preemption or scheduling, however, the underlying challenge remains the same for coroutines and threads.
+ One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.

- The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor (Obviously we only solve cases where these two statements don't conflict). There are several solutions to this problem but the chosen options effectively forces the design of the coroutine.
+ The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. Like for regular objects, constructors can still leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine.

- Furthermore, \CFA faces an extra challenge which is that polymorphique routines rely on invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
+ Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:

  \begin{cfacode}
  ...
  }
  \end{cfacode}
- Indeed, the generated C code\footnote{Code trimmed down for brevity} shows that a local thunk is created in order to hold type information:
+ The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:

  \begin{ccode}
  ...
  }
  \end{ccode}
- The problem in the this example is that there is a race condition between the start of the execution of \code{noop} on the other thread and the stack frame of \code{bar} being destroyed. This extra challenge limits which solutions are viable because storing the function pointer for too long only increases the chances that the race will end in undefined behavior; i.e. the stack based thunk being destroyed before it was used.
+ The problem in this example is a race condition between the start of the execution of \code{noop} on the other thread and the stack frame of \code{bar} being destroyed. This extra challenge limits which solutions are viable because storing the function pointer for too long only increases the chances that the race will end in undefined behavior; i.e. the stack based thunk being destroyed before it was used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that the routines cannot be passed outside of the scope of the functions these were declared in. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
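Editor's note: the lifetime race described in this hunk is easy to reproduce in plain C++ whenever a callable referring to a stack frame can outlive that frame. The sketch below is illustrative only and is not part of the changeset; noop, bar, and the timing are hypothetical stand-ins for the generated-thunk scenario.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Stand-in for the generated thunk: it reads state owned by bar()'s frame.
    static void noop(int *frameLocal) {
        std::printf("%d\n", *frameLocal); // may read a destroyed stack frame
    }

    static void bar() {
        int local = 42;
        // The detached thread races with bar() returning; if bar()'s frame is
        // destroyed first, noop() dereferences a dangling pointer -- the same
        // failure mode as the stack-based thunk described above.
        std::thread(noop, &local).detach();
    }

    int main() {
        bar();
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }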
  \subsection{Alternative: Composition}
- One solution to this challenge would be to use inheritence,
+ One solution to this challenge would be to use composition/containement,

  \begin{cfacode}
  struct Fibonacci {
      int fn; // used for communication
-     coroutine c;
+     coroutine c; //composition
  };
  ...
  }
  \end{cfacode}

- There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer, regardless of whether we use a parameter or a virtual pointer, this means the coroutine data must be made larger to store a value that is actually a compile time constant (The address of the main routine). The second problem which is both subtle but significant, is that now users can get the initialisation order of there coroutines wrong. Indeed, every field of a \CFA struct will be constructed but in the order of declaration, unless users explicitly write otherwise. This means that users who forget to initialize a the coroutine at the right time may resume the coroutine with an uninitilized object. For coroutines, this is unlikely to be a problem, for threads however, this is a significant problem.
+ There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer, regardless of whether a parameter or a virtual pointer is used, this means the coroutine data must be made larger to store a value that is actually a compile time constant (address of the main routine). The second problem, which is both subtle and significant, is that now users can get the initialisation order of there coroutines wrong. Indeed, every field of a \CFA struct is constructed but in declaration order, unless users explicitly write otherwise. This semantics means that users who forget to initialize a the coroutine may resume the coroutine with an uninitilized object. For coroutines, this is unlikely to be a problem, for threads however, this is a significant problem.

  \subsection{Alternative: Reserved keyword}
  ...
  };
  \end{cfacode}
  This mean the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users who would want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of \CFA.
  While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can both be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.
  ...
  \subsection{Alternative: Lamda Objects}

- For coroutines as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types \code{asymmetric_coroutine<>::pull_type}, \code{asymmetric_coroutine<>::push_type}, \code{symmetric_coroutine<>::call_type}, \code{symmetric_coroutine<>::yield_type}. Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well known example. The main problem of these approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write and \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
-
- \subsection{Trait based coroutines}
-
- Finally the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as \say{anything that \say{satisfies the trait \code{is_coroutine} and is used as a coroutine} is a coroutine}.
+ For coroutines as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types:
+ \begin{cfacode}
+ asymmetric_coroutine<>::pull_type
+ asymmetric_coroutine<>::push_type
+ symmetric_coroutine<>::call_type
+ symmetric_coroutine<>::yield_type
+ \end{cfacode}
+ Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
+
+ A variation of this would be to use an simple function pointer in the same way pthread does for threads :
+ \begin{cfacode}
+ void foo( coroutine_t cid, void * arg ) {
+     int * value = (int *)arg;
+     //Coroutine body
+ }
+
+ int main() {
+     int value = 0;
+     coroutine_t cid = coroutine_create( &foo, (void*)&value );
+     coroutine_resume( &cid );
+ }
+ \end{cfacode}
+ This semantic is more common for thread interfaces than coroutines but would work equally well. As discussed in section \ref{threads}, this approach is superseeded by static approaches in terms of expressivity.
+
+ \subsection{Alternative: Trait-based coroutines}
+
+ Finally the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} and is used as a coroutine is a coroutine.

  \begin{cfacode}
  ...
  };
  \end{cfacode}
- This entails that an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory foot print of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.
+ This ensures an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory foot print of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.
+
+ \begin{center}
+ \begin{tabular}{c c c}
+ \begin{cfacode}[tabsize=3]
+ coroutine MyCoroutine {
+     int someValue;
+ };
+ \end{cfacode} & == & \begin{cfacode}[tabsize=3]
+ struct MyCoroutine {
+     int someValue;
+     coroutine_desc __cor;
+ };
+
+ static inline
+ coroutine_desc * get_coroutine(
+     struct MyCoroutine * this
+ ) {
+     return &this->__cor;
+ }
+
+ void main(struct MyCoroutine * this);
+ \end{cfacode}
+ \end{tabular}
+ \end{center}

  \section{Thread Interface}\label{threads}
- The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread}, and as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a SUE declaration \code{thread} as follows:
+ The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both use and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallel mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using a struct declaration \code{thread} as follows:

  \begin{cfacode}
  ...
  \end{cfacode}

- Like for coroutines, the keyword is a thin wrapper arount a \CFA trait:
+ As for coroutines, the keyword is a thin wrapper arount a \CFA trait:

  \begin{cfacode}
  ...
  \end{cfacode}

- In this example, threads of type \code{foo} will start there execution in the \code{void main(foo*)} routine which in this case prints \code{"Hello World!"}. While this proposoal encourages this approach which enforces strongly type programming, users may prefer to use the routine based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously
+ In this example, threads of type \code{foo} start execution in the \code{void main(foo*)} routine which prints \code{"Hello World!"}. While this proposoal encourages this approach to enforce strongly-typed programming, users may prefer to use the routine based thread semantics for the sake of simplicity.
+ With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously
  \begin{cfacode}
  typedef void (*voidFunc)(void);
  ...
  void main() {
      World w;
-     //Thread run here
+     //Thread forks here

-     //Printing "Hello " and "World!" will be run concurrently
+     //Printing "Hello " and "World!" are run concurrently
      sout | "Hello " | endl;
  ...
  \end{cfacode}

- This semantic has several advantages over explicit semantics typesafety is guaranteed, a thread is always started and stopped exaclty once and users cannot make any progamming errors. However, one of the apparent drawbacks of this system is that threads now always form a lattice, that is they are always destroyed in opposite order of construction. While this seems like a significant limitation, existing \CFA semantics can solve this problem. Indeed, using dynamic allocation to create threads will naturally let threads outlive the scope in which the thread was created much like dynamically allocating memory will let objects outlive the scope in which thy were created
+ This semantic has several advantages over explicit semantics typesafety is guaranteed, a thread is always started and stopped exaclty once and users cannot make any progamming errors. Another advantage of this semantic is that it naturally scale to multiple threads meaning basic synchronisation is very simple
+
+ \begin{cfacode}
+ thread MyThread {
+     //...
+ };
+
+ //main
+ void main(MyThread* this) {
+     //...
+ }
+
+ void foo() {
+     MyThread thrds[10];
+     //Start 10 threads at the beginning of the scope
+
+     DoStuff();
+
+     //Wait for the 10 threads to finish
+ }
+ \end{cfacode}
+
+ However, one of the apparent drawbacks of this system is that threads now always form a lattice, that is they are always destroyed in opposite order of construction because of block structure. However, storage allocation os not limited to blocks; dynamic allocation can create threads that outlive the scope in which the thread is created much like dynamically allocating memory lets objects outlive the scope in which they are created

  \begin{cfacode}
  ...
  }
  \end{cfacode}
-
- Another advantage of this semantic is that it naturally scale to multiple threads meaning basic synchronisation is very simple
-
- \begin{cfacode}
- thread MyThread {
-     //...
- };
-
- //main
- void main(MyThread* this) {
-     //...
- }
-
- void foo() {
-     MyThread thrds[10];
-     //Start 10 threads at the beginning of the scope
-
-     DoStuff();
-
-     //Wait for the 10 threads to finish
- }
- \end{cfacode}
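Editor's note: the scoped fork/join semantics in the new text map naturally onto RAII. The following C++ analogue is illustrative only (\CFA threads are a language feature, not a library type): a wrapper that forks in its constructor and joins in its destructor reproduces the start-at-declaration, join-at-scope-exit behaviour of the MyThread example above, including the thrds[10] pattern.

    #include <iostream>
    #include <thread>

    // Illustrative C++ analogue of implicit fork/join: the thread forks in the
    // constructor and joins in the destructor, so lifetimes follow block structure.
    class MyThread {
        std::thread t;
        static void main_() { std::cout << "doing stuff\n"; } // the "thread main"
    public:
        MyThread() : t(&MyThread::main_) {}   // thread forks here
        MyThread(const MyThread &) = delete;  // like monitors/threads, non-copyable
        ~MyThread() { t.join(); }             // implicit join at end of scope
    };

    int main() {
        MyThread thrds[10]; // start 10 threads at the beginning of the scope
        // DoStuff();
    }                       // wait for the 10 threads to finish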
doc/proposals/concurrency/text/concurrency.tex
  % ======================================================================
  % ======================================================================
- Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. This distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At a lower level, non-concurrent paradigms are often implemented as locks and atomic operations. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desireable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentionning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct.
+ Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example).
+ However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account.
+
+ Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desireable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.
+
+ An approach that is worth mentionning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for general purpose language, which is why it was rejected as the core paradigm for concurrency in \CFA.
+
+ One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct.

  \section{Basics}
- The basic features that concurrency tools neet to offer is support for mutual-exclusion and synchronisation. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is the group of instructions on an associated portion of data that requires the limited access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools are used to guarantee that event \textit{X} always happens before \textit{Y}.
+ Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronisation. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools numerous mechanisms to establish timing relationships among threads.

  \subsection{Mutual-Exclusion}
- As mentionned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solution exists for mutual exclusion which vary in terms of performance, flexibility and ease of use. Methods range from low level locks, which are fast and flexible but require significant attention to be correct, to higher level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Often by either guaranteeing some problems cannot occur (e.g. being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic<T>} which offer an easy way to express mutual-exclusion on a restricted set of features (.e.g: reading/writing large types atomically). Another challenge with low level locks is composability. Locks are said to not be composable because it takes careful organising for multiple locks to be used and once while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
+ As mentionned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solution exists for mutual exclusion which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Ease of use comes by either guaranteeing some problems cannot occur (e.g., being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic<T>} which offer an easy way to express mutual-exclusion on a restricted set of operations (.e.g: reading/writing large types atomically). Another challenge with low-level locks is composability. Locks are not composable because it takes careful organising for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
  \subsection{Synchronization}
- As for mutual-exclusion, low level synchronisation primitive often offer great performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, for example message passing, or offering simple solution to otherwise involved challenges. An example of this is barging. As mentionned above synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time synchronisation happens around a critical section, where threads most acquire said critical section in a certain order. However, it may also be desired to be able to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. This is called barging, where event \textit{X} tries to effect event \textit{Y} but anoter thread races to grab the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
+ As for mutual-exclusion, low level synchronisation primitive often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, .eg., message passing, or offering simple solution to otherwise involved challenges.
+ An example of this is barging. As mentionned above synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time synchronisation happens around a critical section, where threads most acquire said critical section in a certain order. However, it may also be desired to be able to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. This is called barging, where event \textit{X} tries to effect event \textit{Y} but anoter thread races to grab the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.

  % ======================================================================
  ...
  % ======================================================================
  % ======================================================================
- A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OOP semantics. The only requirements is the ability to declare a handle to a shared object and a set of routines that act on it :
+ A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OO semantics. The only requirements is the ability to declare a handle to a shared object and a set of routines that act on it :
  \begin{cfacode}
  typedef /*some monitor type*/ monitor;
  ...

  % ======================================================================
  % ======================================================================
- The above monitor example displays some of the intrinsic characteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.
+ The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable objects.

- Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can be both generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter :
+ Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can occur for generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter :

  \begin{cfacode}
  ...
  size_t ++?(counter_t & mutex this); //increment

- //need for mutex is platform dependent here
+ //need for mutex is platform dependent
  void ?{}(size_t * this, counter_t & mutex cnt); //conversion
  \end{cfacode}

  Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading an \code{size_t} is an atomic operation.

- Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)} then it is reasonable that it should default to the safest option \code{mutex}. On the other hand, the option of having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. In fact \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without a doubt wheter or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.
+ Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, then it is reasonable that it should default to the safest option \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without a doubt wheter or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.

  \begin{cfacode}
  ...
  int f2(const monitor & mutex m);
  int f3(monitor ** mutex m);
- int f4(monitor * [] mutex m);
+ int f4(monitor * mutex m []);
  int f5(graph(monitor*) & mutex m);
  \end{cfacode}
  ...
  int f1(monitor & mutex m); //Okay : recommanded case
  int f2(monitor * mutex m); //Okay : could be an array but probably not
- int f3(monitor [] mutex m); //Not Okay : Array of unkown length
+ int f3(monitor mutex m []); //Not Okay : Array of unkown length
  int f4(monitor ** mutex m); //Not Okay : Could be an array
- int f5(monitor * [] mutex m); //Not Okay : Array of unkown length
+ int f5(monitor * mutex m []); //Not Okay : Array of unkown length
  \end{cfacode}
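Editor's note: for comparison with the counter_t monitor above, here is a library-level C++ sketch (illustrative only; this is not how \CFA implements monitors) where the lock is coupled to the data and every "mutex" routine acquires it on entry:

    #include <cstddef>
    #include <mutex>

    // Sketch of counter_t as a monitor-like class: the lock is part of the
    // object, and each mutual-exclusion routine acquires it on entry.
    class counter_t {
        mutable std::mutex m;     // the implicit monitor lock
        std::size_t value = 0;
    public:
        counter_t() = default;                  // construction: object not yet shared ("nomutex")
        counter_t(const counter_t &) = delete;  // monitors are non-copyable

        std::size_t increment() {               // "mutex" routine
            std::lock_guard<std::mutex> g(m);
            return ++value;
        }
        operator std::size_t() const {          // conversion; lock need is platform dependent
            std::lock_guard<std::mutex> g(m);
            return value;
        }
    };

    int main() {
        counter_t c;
        c.increment();
        return static_cast<std::size_t>(c) == 1 ? 0 : 1;
    }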
doc/proposals/concurrency/text/intro.tex
- This proposal provides a minimal concurrency API that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency core is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of the concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading.
+ This proposal provides a minimal concurrency API that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of the concurrency, in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading. Therefore a high-level approach is adapted in \CFA

- There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency tools are available to the users and which parallelism tools are available. While these two concepts are often seen together, they are in fact distinct concepts that require different sorts of tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
+ There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency and which parallelism tools are available to the users. While these two concepts are often combined, they are in fact distinct concepts that require different tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
doc/proposals/concurrency/thesis.tex
  \fancyhf{}
  \cfoot{\thepage}
- \rfoot{v\input{build/version}}
+ \rfoot{v\input{version}}
  ...
  \input{intro}
+
+ \input{cforall}

  \input{basics}
doc/working/exception/translate.c
  __throw_resume(exception_instance);

+ // Rethrows (inside matching handlers):
+ "Cforall"
+
+ throw;
+
+ resume;
+
+ "C"
+
+ __rethrow_terminate();
+
+ return false;

  ...
  }
  void finally1() {
-     // (Finally, because of timing, also work for resume.)
+     // Finally, because of timing, also works for resume.
+     // However this might not actually be better in any way.
      __try_resume_cleanup();
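Editor's note: the rethrow forms added here parallel standard C++ semantics, where a bare throw; inside a handler re-raises the exception currently being handled. A minimal standalone example of that behaviour (not part of the changeset):

    #include <iostream>
    #include <stdexcept>

    static void inner() {
        try {
            throw std::runtime_error("boom");
        } catch (const std::exception &) {
            std::cout << "inner handler, rethrowing\n";
            throw; // rethrow the exception currently being handled
        }
    }

    int main() {
        try {
            inner();
        } catch (const std::exception &e) {
            std::cout << "outer handler: " << e.what() << "\n";
        }
    }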
src/Common/PassVisitor.h
  virtual void visit( BranchStmt *branchStmt ) override final;
  virtual void visit( ReturnStmt *returnStmt ) override final;
+ virtual void visit( ThrowStmt *throwStmt ) override final;
  virtual void visit( TryStmt *tryStmt ) override final;
  virtual void visit( CatchStmt *catchStmt ) override final;
  ...
  virtual Statement* mutate( BranchStmt *branchStmt ) override final;
  virtual Statement* mutate( ReturnStmt *returnStmt ) override final;
+ virtual Statement* mutate( ThrowStmt *throwStmt ) override final;
  virtual Statement* mutate( TryStmt *returnStmt ) override final;
  virtual Statement* mutate( CatchStmt *catchStmt ) override final;
  ...
  std::list< Statement* > * get_afterStmts () { return stmtsToAddAfter_impl ( pass, 0); }
  bool visit_children() { bool* skip = skip_children_impl(pass, 0); return ! (skip && *skip); }
+ void reset_visit() { bool* skip = skip_children_impl(pass, 0); if(skip) *skip = false; }
+
+ guard_value_impl init_guard() {
+     guard_value_impl guard;
+     auto at_cleanup = at_cleanup_impl(pass, 0);
+     if( at_cleanup ) {
+         *at_cleanup = [&guard]( cleanup_func_t && func, void* val ) {
+             guard.push( std::move( func ), val );
+         };
+     }
+     return guard;
+ }
  };

+ template<typename pass_type, typename T>
+ void GuardValue( pass_type * pass, T& val ) {
+     pass->at_cleanup( [ val ]( void * newVal ) {
+         * static_cast< T * >( newVal ) = val;
+     }, static_cast< void * >( & val ) );
+ }
+
  #include "PassVisitor.impl.h"
src/Common/PassVisitor.impl.h
  #pragma once

- #define VISIT_START( node ) \
-     call_previsit( node ); \
-     if( visit_children() ) { \
-
- #define VISIT_END( node ) \
-     } \
-     return call_postvisit( node ); \
-
- #define MUTATE_START( node ) \
-     call_premutate( node ); \
-     if( visit_children() ) { \
+ #define VISIT_START( node ) \
+     __attribute__((unused)) \
+     const auto & guard = init_guard(); \
+     call_previsit( node ); \
+     if( visit_children() ) { \
+         reset_visit(); \
+
+ #define VISIT_END( node ) \
+     } \
+     call_postvisit( node ); \
+
+ #define MUTATE_START( node ) \
+     __attribute__((unused)) \
+     const auto & guard = init_guard(); \
+     call_premutate( node ); \
+     if( visit_children() ) { \
+         reset_visit(); \

  #define MUTATE_END( type, node ) \
  ...
  #define VISIT_BODY( node ) \
      VISIT_START( node ); \
      Visitor::visit( node ); \
      VISIT_END( node ); \
  ...

  //--------------------------------------------------------------------------
+ // ThrowStmt
+
+ template< typename pass_type >
+ void PassVisitor< pass_type >::visit( ThrowStmt * node ) {
+     VISIT_BODY( node );
+ }
+
+ template< typename pass_type >
+ Statement * PassVisitor< pass_type >::mutate( ThrowStmt * node ) {
+     MUTATE_BODY( Statement, node );
+ }
+
+ //--------------------------------------------------------------------------
  // TryStmt
  template< typename pass_type >
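Editor's note: for reference, after this change VISIT_BODY( node ) expands roughly as follows (simply concatenating the macros above). It shows that the guard is constructed before previsit runs and that the skip flag is cleared before recursing into children:

    // Approximate expansion of VISIT_BODY( node ) after this patch:
    __attribute__((unused))
    const auto & guard = init_guard(); // guard lives until the end of the visit
    call_previsit( node );             // may set skip_children or register cleanups
    if( visit_children() ) {           // honour skip_children set by previsit
        reset_visit();                 // clear the flag so nested visits are unaffected
        Visitor::visit( node );        // recurse into children
    }
    call_postvisit( node );            // registered cleanups run when `guard` is destroyed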
src/Common/PassVisitor.proto.h
  #pragma once
+
+ typedef std::function<void( void * )> cleanup_func_t;
+
+ class guard_value_impl {
+ public:
+     guard_value_impl() = default;
+
+     ~guard_value_impl() {
+         while( !cleanups.empty() ) {
+             auto& cleanup = cleanups.top();
+             cleanup.func( cleanup.val );
+             cleanups.pop();
+         }
+     }
+
+     void push( cleanup_func_t && func, void* val ) {
+         cleanups.emplace( std::move(func), val );
+     }
+
+ private:
+     struct cleanup_t {
+         cleanup_func_t func;
+         void * val;
+
+         cleanup_t( cleanup_func_t&& func, void * val ) : func(func), val(val) {}
+     };
+
+     std::stack< cleanup_t > cleanups;
+ };
+
+ typedef std::function< void( cleanup_func_t, void * ) > at_cleanup_t;

  //-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  ...
  #define FIELD_PTR( type, name ) \
  template<typename pass_type> \
- static inline auto name##_impl( pass_type& pass, __attribute__((unused)) int unused ) -> decltype( &pass.name ) { return &pass.name; }
+ static inline auto name##_impl( pass_type& pass, __attribute__((unused)) int unused ) -> decltype( &pass.name ) { return &pass.name; } \
  \
  template<typename pass_type> \
  ...
  FIELD_PTR( std::list< Statement* >, stmtsToAddAfter )
  FIELD_PTR( bool, skip_children )
+ FIELD_PTR( at_cleanup_t, at_cleanup )
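Editor's note: taken on its own, guard_value_impl is just a LIFO cleanup stack. This small standalone program (illustrative usage only; the class body is copied from the hunk above, with the <functional>/<stack> includes added since the hunk relies on them being available elsewhere) shows a saved value being restored when the guard leaves scope:

    #include <functional>
    #include <iostream>
    #include <stack>
    #include <utility>

    typedef std::function<void( void * )> cleanup_func_t;

    class guard_value_impl {
    public:
        guard_value_impl() = default;
        ~guard_value_impl() {
            while( !cleanups.empty() ) {        // run cleanups in LIFO order
                auto& cleanup = cleanups.top();
                cleanup.func( cleanup.val );
                cleanups.pop();
            }
        }
        void push( cleanup_func_t && func, void* val ) {
            cleanups.emplace( std::move(func), val );
        }
    private:
        struct cleanup_t {
            cleanup_func_t func;
            void * val;
            cleanup_t( cleanup_func_t&& func, void * val ) : func(func), val(val) {}
        };
        std::stack< cleanup_t > cleanups;
    };

    int main() {
        int depth = 0;
        {
            guard_value_impl guard;
            int old = depth; // save, mutate, let the guard restore
            guard.push( [old]( void * p ) { *static_cast<int*>(p) = old; }, &depth );
            depth = 1;
            std::cout << depth << "\n"; // prints 1
        }
        std::cout << depth << "\n";     // prints 0: restored on guard destruction
    }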
src/InitTweak/GenInit.cc
  #include <stack>
  #include <list>
+
+ #include "InitTweak.h"
  #include "GenInit.h"
- #include "InitTweak.h"
+
+ #include "Common/PassVisitor.h"
+
  #include "SynTree/Declaration.h"
- #include "SynTree/Type.h"
  #include "SynTree/Expression.h"
- #include "SynTree/Statement.h"
  #include "SynTree/Initializer.h"
  #include "SynTree/Mutator.h"
+ #include "SynTree/Statement.h"
+ #include "SynTree/Type.h"
+
  #include "SymTab/Autogen.h"
  #include "SymTab/Mangler.h"
+
+ #include "GenPoly/DeclMutator.h"
  #include "GenPoly/PolyMutator.h"
- #include "GenPoly/DeclMutator.h"
  #include "GenPoly/ScopedSet.h"
+
  #include "ResolvExpr/typeops.h"
  ...
  }

- class ReturnFixer final : public GenPoly::PolyMutator {
+ class ReturnFixer {
  public:
      /// consistently allocates a temporary variable for the return value
  ...
      static void makeReturnTemp( std::list< Declaration * > &translationUnit );

-     typedef GenPoly::PolyMutator Parent;
-     using Parent::mutate;
-     virtual DeclarationWithType * mutate( FunctionDecl *functionDecl ) override;
-     virtual Statement * mutate( ReturnStmt * returnStmt ) override;
+     void premutate( FunctionDecl *functionDecl );
+     void premutate( ReturnStmt * returnStmt );
+
+     at_cleanup_t at_cleanup;
+     std::list< Statement * > stmtsToAddBefore;

  protected:
  ...

  void ReturnFixer::makeReturnTemp( std::list< Declaration * > & translationUnit ) {
-     ReturnFixer fixer;
+     PassVisitor<ReturnFixer> fixer;
      mutateAll( translationUnit, fixer );
  }

- Statement *ReturnFixer::mutate( ReturnStmt *returnStmt ) {
+ void ReturnFixer::premutate( ReturnStmt *returnStmt ) {
      std::list< DeclarationWithType * > & returnVals = ftype->get_returnVals();
      assert( returnVals.size() == 0 || returnVals.size() == 1 );
  ...
      construct->get_args().push_back( new AddressExpr( new VariableExpr( returnVals.front() ) ) );
      construct->get_args().push_back( returnStmt->get_expr() );
-     stmtsToAdd.push_back(new ExprStmt(noLabels, construct));
+     stmtsToAddBefore.push_back(new ExprStmt(noLabels, construct));

      // return the retVal object
      returnStmt->set_expr( new VariableExpr( returnVals.front() ) );
      } // if
-     return returnStmt;
  }

- DeclarationWithType* ReturnFixer::mutate( FunctionDecl *functionDecl ) {
-     ValueGuard< FunctionType * > oldFtype( ftype );
-     ValueGuard< std::string > oldFuncName( funcName );
+ void ReturnFixer::premutate( FunctionDecl *functionDecl ) {
+     GuardValue( this, ftype );
+     GuardValue( this, funcName );

      ftype = functionDecl->get_functionType();
      funcName = functionDecl->get_name();
-     return Parent::mutate( functionDecl );
  }
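Editor's note: the converted ReturnFixer shows the new pass shape, a plain struct with premutate overloads and pass-local state, wrapped in PassVisitor. The toy below is a hypothetical miniature (the mini-AST, walk() driver, and ReturnPrinter are all illustrative; the real driver is the PassVisitor template from src/Common/PassVisitor.h):

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical mini-AST standing in for the real SynTree nodes.
    struct ReturnStmt { std::string expr; };
    struct FunctionDecl { std::string name; std::vector<ReturnStmt> body; };

    // A pass in the new style: no inheritance, just premutate overloads
    // plus pass-local state, like ReturnFixer::funcName.
    struct ReturnPrinter {
        std::string funcName;
        void premutate(FunctionDecl &f) { funcName = f.name; } // runs before children
        void premutate(ReturnStmt &r) {
            std::cout << funcName << " returns " << r.expr << "\n";
        }
    };

    // Stand-in for PassVisitor: call premutate on a node, then on its children.
    template <typename Pass>
    void walk(FunctionDecl &f, Pass &p) {
        p.premutate(f);
        for (ReturnStmt &r : f.body) p.premutate(r);
    }

    int main() {
        FunctionDecl f{"makeReturnTemp", {{"retVal"}, {"0"}}};
        ReturnPrinter pass;
        walk(f, pass);
    }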