Changeset 8220e50
- Timestamp: Jun 18, 2019, 9:30:33 AM
- Branches: ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 07ca4dd
- Parents: 54b3fde
- File: doc/papers/concurrency/Paper.tex (1 edited)
Legend:
- Unmodified (two-space prefix)
- Added (+)
- Removed (-)
doc/papers/concurrency/Paper.tex
--- doc/papers/concurrency/Paper.tex (r54b3fde)
+++ doc/papers/concurrency/Paper.tex (r8220e50)

@@ -311,5 +311,5 @@
  Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads.
  As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
- From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,BoostThreads}, including putting green threads back into Java~\cite{Quasar}.
+ From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}.
  The main argument for user-level threading is that they are lighter weight than kernel threads (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work-units to facilitate load balancing by the runtime~\cite{Verch12}.
  As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{vonBehren03}.

@@ -327,9 +327,9 @@

  Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
- Two concurrency violations of this philosophy are \emph{spurious wakeup} and \emph{barging}, i.e., random wakeup~\cite[\S~8]{Buhr05a} and signals-as-hints~\cite[\S~8]{Buhr05a}, where one is a consequence of the other, i.e., once there is spurious wakeup, signals-as-hints follows.
+ Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging} (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, i.e., once there is spurious wakeup, signals-as-hints follows.
  However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}, it is a performance design choice.
  Similarly, signals-as-hints is often a performance decision.
  We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation.
- (Authors experience teaching concurrency is that students are highly confused by these semantics.)
+ (Author experience teaching concurrency is that students are highly confused by these semantics.)
  Clawing back performance, when local non-determinism is unimportant, should be an option not the default.
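The spurious-wakeup and signals-as-hints semantics discussed in the hunk above are directly visible in POSIX condition variables, where a wait must be guarded by a loop because POSIX permits a waiter to wake without a matching signal. A minimal C sketch, assuming pthreads (not taken from the paper):

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t c = PTHREAD_COND_INITIALIZER;
    int ready = 0;

    void wait_ready( void ) {
        pthread_mutex_lock( &m );
        while ( ! ready )                    // "if" is unsafe: POSIX allows spurious wakeup
            pthread_cond_wait( &c, &m );     // may return without any matching signal
        pthread_mutex_unlock( &m );
    }

    void make_ready( void ) {
        pthread_mutex_lock( &m );
        ready = 1;
        pthread_cond_signal( &c );           // a hint: a barging thread can acquire m first
        pthread_mutex_unlock( &m );
    }

Here pthread_cond_signal is exactly the signals-as-hints semantic the paper critiques: the signalled thread must recheck the predicate because another thread may barge in and consume the state first.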
@@ -367,6 +367,5 @@
  \section{Stateful Function}

- The generator/coroutine provides a stateful function, which is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}.
- A stateful function allows execution to be temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine.
+ The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine.
  Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
  This capability is accomplished by retaining a data/execution \emph{closure} between invocations.

@@ -543,5 +542,5 @@
  \end{figure}

- For generators, coroutines, and threads, many designs are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
+ Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
  For example, Python presents generators as a function object:
  \begin{python}

@@ -587,10 +586,10 @@

  Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
- This kind of generator is an \emph{output generator}, producing a new result on each resumption.
+ This generator is an \emph{output generator}, producing a new result on each resumption.
  To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
- An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining state in global variables is insufficient;
+ An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining one state in global variables is insufficient;
  hence, state is retained in a closure between calls.
  Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators.
- The C version only has the middle execution state because the top execution state becomes declaration initialization.
+ The C version only has the middle execution state because the top execution state is declaration initialization.
  Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type.
  This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{

@@ -668,5 +667,5 @@
  As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variable that must be retained between calls and are manually hoisted into the generator type.
  % Manually, detecting and hoisting local-state variables is easy when the number is small.
- Finally, the execution state is large, with one @resume@ and seven @suspend@s.
+ In contrast, the execution state is large, with one @resume@ and seven @suspend@s.
  Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach.
  Because FSMs can be complex and occur frequently in important domains, direct support of the generator is crucial in a systems programming-language.
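The manual-closure approach these hunks describe (structure Fib, with the top execution state folded into initialization) can be sketched in plain C as follows. This is an illustrative reconstruction of the idea, not the paper's Figure f:CFibonacci verbatim, and the names Fib/next are assumptions:

    #include <stdio.h>

    typedef struct { int fn1, fn; } Fib;     // data closure: the two retained values
    #define FibCtor { 1, 0 }                 // "top" state folded into initialization

    int next( Fib * f ) {                    // each call resumes at the one "middle" state
        int fn = f->fn + f->fn1;
        f->fn1 = f->fn;  f->fn = fn;
        return fn;                           // 1, 1, 2, 3, 5, 8, ...
    }

    int main() {
        Fib f1 = FibCtor, f2 = FibCtor;      // arbitrary number of independent generators
        for ( int i = 0; i < 10; i += 1 )
            printf( "%d %d\n", next( &f1 ), next( &f2 ) );
    }

Because the state lives in a structure rather than global variables, multiple generators coexist, which is precisely the closure requirement the changed lines spell out.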
@@ -780,5 +779,5 @@
  Figure~\ref{f:CFAPingPongGen} shows a symmetric generator, where the generator resumes another generator, forming a resume/resume cycle.
  (The trivial cycle is a generator resuming itself.)
- This control flow is similar to recursion for functions, but without stack growth.
+ This control flow is similar to recursion for functions but without stack growth.
  The steps for symmetric control-flow are creating, executing, and terminating the cycle.
  Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope.

@@ -789,5 +788,5 @@
  Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
  The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
- Also, since local variables are not retained in the generator function, it does not contain an objects with destructors that must be called, so the cost is the same as a function return.
+ Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the cost is the same as a function return.
  Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.

@@ -1220,5 +1219,5 @@
  Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter.
  Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
- For other scenarios, it is always possible to devise a solution with additional programming effort.
+ For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination.

  The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.

@@ -1290,5 +1289,5 @@
  The function definitions ensures there is a statically-typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle.
  The @main@ function has no return value or additional parameters because the coroutine type allows an arbitrary number of interface functions with corresponding arbitrary typed input/output values versus fixed ones.
- The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining @suspend@ and @resume@.
+ The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining \textsf{suspend} and @resume@.

  The \CFA custom-type @coroutine@ implicitly implements the getter and forward declarations for the coroutine main.
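The symmetric resume/resume cycle touched by the first two hunks above can be approximated in plain C with a trampoline, which exhibits the two properties the text emphasizes: the cycle is closed only after both partners exist (definition-before-use), and each hand-off returns to the starting frame rather than growing the stack. A hypothetical sketch with invented names (Gen, resume), not the paper's \CFA ping/pong figure:

    #include <stdio.h>

    typedef struct Gen { const char * name; int count; struct Gen * partner; } Gen;

    Gen * resume( Gen * g ) {                // one step of the cycle
        if ( g->count-- == 0 ) return NULL;  // termination: back to the starting frame
        printf( "%s %d\n", g->name, g->count );
        return g->partner;                   // hand off to partner: no stack growth
    }

    int main() {
        Gen ping = { "ping", 5, NULL }, pong = { "pong", 5, &ping };
        ping.partner = &pong;                // close the cycle once both generators exist
        for ( Gen * g = &ping; g != NULL; )  // trampoline stands in for resume/resume
            g = resume( g );
    }

The trampoline loop in main plays the role of the starting stack frame: every step returns to it, so terminating the cycle costs no more than a function return.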
@@ -1338,5 +1337,5 @@
  Once allocated, a VLS is fixed sized.}
  on the allocating stack, provided the allocating stack is large enough.
- For a VLS stack allocation, allocation/deallocation is an inexpensive adjustment of the stack point, modulo any stack constructor costs (\eg initial frame setup).
+ For a VLS stack allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup).
  For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs.
  With heap stack allocation, it is also possible to use a split (segmented) stack calling-convention, available with gcc and clang, so the stack is variable sized.

@@ -1363,7 +1362,7 @@
  However, coroutines are a stepping stone towards concurrency.

- The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a} \cite{Adya02}.
+ The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a}.
  Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}.
- The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.
+ The resulting execution system now follows a cooperative threading-model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}.
  Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions often based on a timer interrupt, called \newterm{preemptive scheduling}.
  While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches.

@@ -1424,5 +1423,5 @@
  This semantic ensures a thread is started and stopped exactly once, eliminating some programming error, and scales to multiple threads for basic (termination) synchronization.
  For block allocation to arbitrary depth, including recursion, threads are created/destroyed in a lattice structure (tree with top and bottom).
- Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamically allocating.
+ Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation.
  \begin{cfa}
  MyTask * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$
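The anew( N ) factory in the hunk above has a rough pthreads analogue, sketched below under stated assumptions: task and factory are invented names, and, unlike \CFA's implicit join on deallocation, the pthreads caller must later pthread_join each thread and free the array explicitly:

    #include <pthread.h>
    #include <stdlib.h>

    void * task( void * arg ) { return arg; }    // stand-in for the thread body

    pthread_t * factory( int N ) {               // rough analogue of anew( N )
        pthread_t * t = malloc( N * sizeof( pthread_t ) );
        for ( int i = 0; i < N; i += 1 )
            pthread_create( &t[i], NULL, task, NULL );  // threads start immediately
        return t;                                // threads outlive the declaration scope
    }

Because the handles are heap allocated, the created threads escape the factory's block scope, giving the arbitrary topologies the changed line describes.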
@@ -1525,8 +1524,7 @@
  \subsection{Mutual Exclusion}

- A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
- The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
+ A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a \newterm{critical section}~\cite{Dijkstra65}, which is enforced by \newterm{simple mutual-exclusion}.
+ The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session use the resource simultaneously and different sessions are segregated, which is enforced by \newterm{complex mutual-exclusion} providing the correct kind and number of threads using a group critical-section.
  The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers share a session but writers have a unique session.
- \newterm{Mutual exclusion} enforces the correct kind and number of threads using a critical section.

  However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.

@@ -1548,5 +1546,5 @@
  Preventing or detecting barging is an involved challenge with low-level locks, which is made easier through higher-level constructs.
  This challenge is often split into two different approaches: barging avoidance and prevention.
- Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait.
+ Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait;
  algorithms that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.
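The simple versus group mutual-exclusion distinction in the first hunk above corresponds roughly to a POSIX mutex versus a readers/writer lock, where the rwlock admits an entire reader session at once. A brief sketch, assuming pthreads (reader/writer are invented names):

    #include <pthread.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared = 0;

    int reader( void ) {
        pthread_rwlock_rdlock( &rw );        // readers share one session
        int v = shared;
        pthread_rwlock_unlock( &rw );
        return v;
    }

    void writer( int v ) {
        pthread_rwlock_wrlock( &rw );        // each writer is a unique session
        shared = v;
        pthread_rwlock_unlock( &rw );
    }

POSIX leaves reader/writer scheduling preference unspecified, so controlling barging between waiting readers and writers is left to the lock implementation, which is exactly the avoidance-versus-prevention choice discussed in the final hunk.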