This paper provides a minimal concurrency \newterm{Application Programming Interface} (API) that is simple and efficient, and can be used to build other concurrency features.
259 | | While the simplest concurrency system is a thread and a lock, this low-level approach is hard to master. |
260 | | An easier approach for programmers is to support higher-level constructs as the basis of concurrency. |
261 | | Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{Hochstein05}. |
262 | | Examples of high-level approaches are task (work) based~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}. |
263 | | |
264 | | The following terminology is used. |
265 | | A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state. |
266 | | Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure safe communication and access to shared data. |
267 | | % Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced. |
\newterm{Locking}, and by extension \newterm{locks}, are mechanisms that prevent the progress of some threads in order to provide safety.
269 | | \newterm{Parallelism} is running multiple threads simultaneously. |
Parallelism implies \emph{actual} simultaneous execution, whereas concurrency only requires \emph{apparent} simultaneous execution.
271 | | As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime. |
272 | | |
273 | | Hence, there are two problems to be solved: concurrency and parallelism. |
274 | | While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}. |
275 | | Concurrency tools handle synchronization and mutual exclusion, while parallelism tools handle performance, cost and resource utilization. |
276 | | |
277 | | The proposed concurrency API is implemented in a dialect of C, called \CFA. |
The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic analysis, and type checking, along with the corresponding high-performance runtime library implementing the concurrency features.
279 | | |
280 | | |
281 | | \section{\CFA Overview} |
282 | | |
283 | | The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency. |
284 | | Extended versions and explanation of the following code examples are available at the \CFA website~\cite{Cforall} or in Moss~\etal~\cite{Moss18}. |
285 | | |
286 | | \CFA is an extension of ISO-C, and hence, supports all C paradigms. |
287 | | %It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily. |
288 | | Like C, the basics of \CFA revolve around structures and routines. |
289 | | Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions. |
290 | | While \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}. |
291 | | While some \CFA features are common in object-oriented programming-languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm. |
292 | | |
293 | | |
294 | | \subsection{References} |
295 | | |
296 | | \CFA provides multi-level rebindable references, as an alternative to pointers, which significantly reduces syntactic noise. |
297 | | \begin{cfa} |
298 | | int x = 1, y = 2, z = 3; |
299 | | int * p1 = &x, ** p2 = &p1, *** p3 = &p2, $\C{// pointers to x}$ |
300 | | `&` r1 = x, `&&` r2 = r1, `&&&` r3 = r2; $\C{// references to x}$ |
301 | | int * p4 = &z, `&` r4 = z; |
302 | | |
303 | | *p1 = 3; **p2 = 3; ***p3 = 3; // change x |
304 | | r1 = 3; r2 = 3; r3 = 3; // change x: implicit dereferences *r1, **r2, ***r3 |
305 | | **p3 = &y; *p3 = &p4; // change p1, p2 |
306 | | `&`r3 = &y; `&&`r3 = &`&`r4; // change r1, r2: cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4 |
307 | | \end{cfa} |
308 | | A reference is a handle to an object, like a pointer, but is automatically dereferenced by the specified number of levels. |
Referencing (address-of @&@) a reference variable cancels one of the implicit dereferences, until there are no more implicit dereferences, after which normal expression behaviour applies.
310 | | |
311 | | |
312 | | \subsection{\texorpdfstring{\protect\lstinline{with} Statement}{with Statement}} |
313 | | \label{s:WithStatement} |
314 | | |
315 | | Heterogeneous data is aggregated into a structure/union. |
316 | | To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers. |
317 | | \begin{cquote} |
318 | | \vspace*{-\baselineskip}%??? |
319 | | \lstDeleteShortInline@% |
320 | | \begin{cfa} |
321 | | struct S { char c; int i; double d; }; |
322 | | struct T { double m, n; }; |
323 | | // multiple aggregate parameters |
324 | | \end{cfa} |
325 | | \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}} |
326 | | \begin{cfa} |
327 | | void f( S & s, T & t ) { |
328 | | `s.`c; `s.`i; `s.`d; |
329 | | `t.`m; `t.`n; |
330 | | } |
331 | | \end{cfa} |
332 | | & |
333 | | \begin{cfa} |
334 | | void f( S & s, T & t ) `with ( s, t )` { |
335 | | c; i; d; // no qualification |
336 | | m; n; |
337 | | } |
338 | | \end{cfa} |
339 | | \end{tabular} |
340 | | \lstMakeShortInline@% |
341 | | \end{cquote} |
342 | | Object-oriented programming languages only provide implicit qualification for the receiver. |
343 | | |
344 | | In detail, the @with@ statement has the form: |
345 | | \begin{cfa} |
346 | | $\emph{with-statement}$: |
347 | | 'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$ |
348 | | \end{cfa} |
349 | | and may appear as the body of a routine or nested within a routine body. |
350 | | Each expression in the expression-list provides a type and object. |
351 | | The type must be an aggregate type. |
352 | | (Enumerations are already opened.) |
353 | | The object is the implicit qualifier for the open structure-fields. |
All expressions in the expression list are open in parallel within the compound statement, unlike Pascal, which nests the openings from left to right.
355 | | |
356 | | |
357 | | \subsection{Overloading} |
358 | | |
359 | | \CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem. |
Both variables and routines may be overloaded, where selection is based on type, number of arguments, and number of returns (as in Ada~\cite{Ada}).
361 | | \begin{cquote} |
362 | | \vspace*{-\baselineskip}%??? |
363 | | \lstDeleteShortInline@% |
364 | | \begin{cfa} |
365 | | // selection based on type |
366 | | \end{cfa} |
367 | | \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}} |
368 | | \begin{cfa} |
369 | | const short int `MIN` = -32768; |
370 | | const int `MIN` = -2147483648; |
371 | | const long int `MIN` = -9223372036854775808L; |
372 | | \end{cfa} |
373 | | & |
374 | | \begin{cfa} |
375 | | short int si = `MIN`; |
376 | | int i = `MIN`; |
377 | | long int li = `MIN`; |
378 | | \end{cfa} |
379 | | \end{tabular} |
380 | | \begin{cfa} |
381 | | // selection based on type and number of parameters |
382 | | \end{cfa} |
383 | | \begin{tabular}{@{}l@{\hspace{2.7\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}} |
384 | | \begin{cfa} |
385 | | void `f`( void ); |
386 | | void `f`( char ); |
387 | | void `f`( int, double ); |
388 | | \end{cfa} |
389 | | & |
390 | | \begin{cfa} |
391 | | `f`(); |
392 | | `f`( 'a' ); |
393 | | `f`( 3, 5.2 ); |
394 | | \end{cfa} |
395 | | \end{tabular} |
396 | | \begin{cfa} |
397 | | // selection based on type and number of returns |
398 | | \end{cfa} |
399 | | \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}} |
400 | | \begin{cfa} |
401 | | char `f`( int ); |
402 | | double `f`( int ); |
403 | | [char, double] `f`( int ); |
404 | | \end{cfa} |
405 | | & |
406 | | \begin{cfa} |
407 | | char c = `f`( 3 ); |
408 | | double d = `f`( 3 ); |
409 | | [d, c] = `f`( 3 ); |
410 | | \end{cfa} |
411 | | \end{tabular} |
412 | | \lstMakeShortInline@% |
413 | | \end{cquote} |
414 | | Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects. |
Therefore, overloading eliminates the need for long prefixes and other naming conventions that prevent name clashes.
416 | | As seen in Section~\ref{basics}, routine @main@ is heavily overloaded. |
417 | | |
418 | | Variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name: |
419 | | \begin{cfa} |
420 | | struct S { int `i`; int j; double m; } s; |
421 | | struct T { int `i`; int k; int m; } t; |
422 | | with ( s, t ) { |
423 | | j + k; $\C{// unambiguous, s.j + t.k}$ |
424 | | m = 5.0; $\C{// unambiguous, s.m = 5.0}$ |
425 | | m = 1; $\C{// unambiguous, t.m = 1}$ |
426 | | int a = m; $\C{// unambiguous, a = t.m }$ |
427 | | double b = m; $\C{// unambiguous, b = s.m}$ |
428 | | int c = `s.i` + `t.i`; $\C{// unambiguous, qualification}$ |
429 | | (double)m; $\C{// unambiguous, cast s.m}$ |
430 | | } |
431 | | \end{cfa} |
For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
433 | | |
434 | | |
435 | | \subsection{Operators} |
436 | | |
437 | | Overloading also extends to operators. |
438 | | Operator-overloading syntax creates a routine name with an operator symbol and question marks for the operands: |
439 | | \begin{cquote} |
440 | | \lstDeleteShortInline@% |
441 | | \begin{tabular}{@{}ll@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}} |
442 | | \begin{cfa} |
443 | | int ++? (int op); |
444 | | int ?++ (int op); |
445 | | int `?+?` (int op1, int op2); |
446 | | int ?<=?(int op1, int op2); |
447 | | int ?=? (int & op1, int op2); |
448 | | int ?+=?(int & op1, int op2); |
449 | | \end{cfa} |
450 | | & |
451 | | \begin{cfa} |
452 | | // unary prefix increment |
453 | | // unary postfix increment |
454 | | // binary plus |
455 | | // binary less than |
456 | | // binary assignment |
457 | | // binary plus-assignment |
458 | | \end{cfa} |
459 | | & |
460 | | \begin{cfa} |
461 | | struct S { int i, j; }; |
462 | | S `?+?`( S op1, S op2) { // add two structures |
463 | | return (S){op1.i + op2.i, op1.j + op2.j}; |
464 | | } |
465 | | S s1 = {1, 2}, s2 = {2, 3}, s3; |
466 | | s3 = s1 `+` s2; // compute sum: s3 == {2, 5} |
467 | | \end{cfa} |
468 | | \end{tabular} |
469 | | \lstMakeShortInline@% |
470 | | \end{cquote} |
While concurrency does not use operator overloading directly, it introduces the syntax used for constructors and destructors.
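For example, the following sketch (illustrative, not drawn from the later sections) uses this syntax to declare a constructor and destructor for the structure @S@ above, where @?{}@ names a constructor and @^?{}@ a destructor (detailed in the Constructors / Destructors subsection):
\begin{cfa}
void ?{}( S & s, int i, int j ) { s.i = i; s.j = j; } $\C{// constructor}$
void ^?{}( S & s ) {} $\C{// destructor, nothing to release}$
S s = { 1, 2 }; $\C{// implicit constructor call}$
\end{cfa}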
472 | | |
473 | | |
474 | | \subsection{Parametric Polymorphism} |
475 | | \label{s:ParametricPolymorphism} |
476 | | |
The signature feature of \CFA is parametric-polymorphic routines, where routines are generalized using a @forall@ clause (giving the language its name), allowing separately compiled routines to support generic usage over multiple types.
For example, the following sum routine works for any type that supports construction from 0 and addition, using the constructor syntax sketched above:
479 | | \begin{cfa} |
480 | | forall( otype T | { void `?{}`( T *, zero_t ); T `?+?`( T, T ); } ) // constraint type, 0 and + |
481 | | T sum( T a[$\,$], size_t size ) { |
482 | | `T` total = { `0` }; $\C{// initialize by 0 constructor}$ |
483 | | for ( size_t i = 0; i < size; i += 1 ) |
484 | | total = total `+` a[i]; $\C{// select appropriate +}$ |
485 | | return total; |
486 | | } |
487 | | S sa[5]; |
488 | | int i = sum( sa, 5 ); $\C{// use S's 0 construction and +}$ |
489 | | \end{cfa} |
490 | | |
491 | | \CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each routine declaration: |
492 | | \begin{cfa} |
493 | | trait `sumable`( otype T ) { |
494 | | void `?{}`( T &, zero_t ); $\C{// 0 literal constructor}$ |
495 | | T `?+?`( T, T ); $\C{// assortment of additions}$ |
496 | | T ?+=?( T &, T ); |
497 | | T ++?( T & ); |
498 | | T ?++( T & ); |
499 | | }; |
500 | | forall( otype T `| sumable( T )` ) $\C{// use trait}$ |
501 | | T sum( T a[$\,$], size_t size ); |
502 | | \end{cfa} |
503 | | |
504 | | Assertions can be @otype@ or @dtype@. |
505 | | @otype@ refers to a ``complete'' object, \ie an object has a size, default constructor, copy constructor, destructor and an assignment operator. |
506 | | @dtype@ only guarantees an object has a size and alignment. |
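For example (a sketch, with @swap@ as an illustrative routine), a generic @swap@ relies on the implicit @otype@ operations, copy construction and assignment, whereas the @alloc@ below needs only @dtype@ plus the @sized@ assertion:
\begin{cfa}
forall( otype T ) void swap( T & a, T & b ) {
	T tmp = a; $\C{// copy construction from otype}$
	a = b; b = tmp; $\C{// assignment from otype}$
}
\end{cfa}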
507 | | |
508 | | Using the return type for discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@: |
509 | | \begin{cfa} |
510 | | forall( dtype T | sized(T) ) T * alloc( void ) { return (T *)malloc( sizeof(T) ); } |
511 | | int * ip = alloc(); $\C{// select type and size from left-hand side}$ |
512 | | double * dp = alloc(); |
513 | | struct S {...} * sp = alloc(); |
514 | | \end{cfa} |
515 | | where the return type supplies the type/size of the allocation, which is impossible in most type systems. |
516 | | |
517 | | |
518 | | \subsection{Constructors / Destructors} |
519 | | |
520 | | Object lifetime is a challenge in non-managed programming languages. |
521 | | \CFA responds with \CC-like constructors and destructors: |
522 | | \begin{cfa} |
523 | | struct VLA { int len, * data; }; $\C{// variable length array of integers}$ |
524 | | void ?{}( VLA & vla ) with ( vla ) { len = 10; data = alloc( len ); } $\C{// default constructor}$ |
void ?{}( VLA & vla, int size, char fill ) with ( vla ) { len = size; data = alloc( len, fill ); } $\C{// initialization}$
526 | | void ?{}( VLA & vla, VLA other ) { vla.len = other.len; vla.data = other.data; } $\C{// copy, shallow}$ |
527 | | void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$ |
528 | | { |
529 | | VLA x, y = { 20, 0x01 }, z = y; $\C{// z points to y}$ |
// x{}; y{ 20, 0x01 }; z{ y };
531 | | ^x{}; $\C{// deallocate x}$ |
532 | | x{}; $\C{// reallocate x}$ |
533 | | z{ 5, 0xff }; $\C{// reallocate z, not pointing to y}$ |
534 | | ^y{}; $\C{// deallocate y}$ |
535 | | y{ x }; $\C{// reallocate y, points to x}$ |
536 | | x{}; $\C{// reallocate x, not pointing to y}$ |
537 | | // ^z{}; ^y{}; ^x{}; |
538 | | } |
539 | | \end{cfa} |
540 | | Like \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation. |
An object and all its fields are constructed/destructed.
542 | | \CFA also provides @new@ and @delete@, which behave like @malloc@ and @free@, in addition to constructing and destructing objects: |
543 | | \begin{cfa} |
544 | | { struct S s = {10}; $\C{// allocation, call constructor}$ |
545 | | ... |
546 | | } $\C{// deallocation, call destructor}$ |
547 | | struct S * s = new(); $\C{// allocation, call constructor}$ |
548 | | ... |
549 | | delete( s ); $\C{// deallocation, call destructor}$ |
550 | | \end{cfa} |
551 | | \CFA concurrency uses object lifetime as a means of synchronization and/or mutual exclusion. |
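For example, the following sketch previews the @thread@ construct presented later, where a thread is forked at construction and joined at destruction (the type @Worker@ is illustrative):
\begin{cfa}
thread Worker {};
void main( Worker & ) { ... } $\C{// thread body}$
{
	Worker w; $\C{// fork at construction}$
	... $\C{// program main and w run concurrently}$
} $\C{// join at destruction (end of scope)}$
\end{cfa}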
552 | | |
553 | | |
554 | | \section{Concurrency Basics}\label{basics} |
555 | | |
556 | | At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks. |
557 | | Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}. |
558 | | In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie given fixed inputs, the execution path to the outputs is fixed and predictable. |
A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python}, but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
Only stackful coroutines are a stepping-stone to concurrency.
562 | | |
563 | | The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a scheduling oracle, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}. |
564 | | Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next. |
565 | | The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}. |
566 | | |
The scheduler may be stackless or stackful.
For stackless, scheduling is performed on the stack of the current coroutine, which then switches directly to the next coroutine, so there is one context switch.
For stackful, the current coroutine switches to the scheduler, which performs the scheduling decision on its own stack and then switches to the next coroutine, so there are two context switches.
A stackful scheduler is often used for simplicity and security, even though it incurs the cost of an extra context switch per scheduling decision.
571 | | |
Regardless of the approach used, a subset of concurrency-related challenges starts to appear.
For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty about where context switches occur.
575 | | Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors. |
The reason is that only the runtime has complete knowledge about resources and how best to utilize them.
577 | | However, the introduction of unrestricted non-determinism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} to restrict non-determinism for correctness; |
578 | | otherwise, it is impossible to write meaningful programs. |
579 | | Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows. |
580 | | |
581 | | |
582 | | \subsection{\protect\CFA's Thread Building Blocks} |
583 | | |
584 | | An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional. |
585 | | As such, library support for threading is far from widespread. |
586 | | At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}. |
587 | | In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism. |
588 | | As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. |
589 | | Furthermore, because C is a system-level language, programmers expect to choose precisely which features they need and which cost they are willing to pay. |
590 | | Hence, concurrent programs should be written using high-level mechanisms, and only step down to lower-level mechanisms when performance bottlenecks are encountered. |
591 | | |
592 | | |
593 | | \subsection{Coroutines: A Stepping Stone}\label{coroutine} |
594 | | |
595 | | While the focus of this discussion is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system. |
Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
597 | | Hence, unlike a normal routine, a coroutine may not terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension. |
This capability is accomplished via the coroutine's stack, where the suspend/resume operations context switch among stacks.
Because threading design-challenges are present in coroutines, their design effort is relevant.
Furthermore, this effort can easily be exposed to programmers as a useful new programming paradigm, because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
Therefore, the core \CFA coroutine API has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.
601 | | |
602 | | For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers, where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C. |
603 | | \begin{displaymath} |
604 | | \mathsf{fib}(n) = \left \{ |
605 | | \begin{array}{ll} |
606 | | 0 & n = 0 \\ |
607 | | 1 & n = 1 \\ |
608 | | \mathsf{fib}(n-1) + \mathsf{fib}(n-2) & n \ge 2 \\ |
609 | | \end{array} |
610 | | \right. |
611 | | \end{displaymath} |
612 | | Figure~\ref{f:GlobalVariables} illustrates the following problems: |
613 | | unique unencapsulated global variables necessary to retain state between calls; |
614 | | only one Fibonacci generator; |
execution state must be retained via explicit state variables.
616 | | Figure~\ref{f:ExternalState} addresses these issues: |
617 | | unencapsulated program global variables become encapsulated structure variables; |
618 | | unique global variables are replaced by multiple Fibonacci objects; |
619 | | explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$. |
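The following sketch is consistent with the approach in Figure~\ref{f:ExternalState}:
\begin{cfa}
#define FibCtor { 0, 1 } $\C{// precomputed fib(0) and fib(1)}$
typedef struct { int fn1, fn; } Fib; $\C{// encapsulated state, multiple instances}$
int fib( Fib * f ) {
	int ret = f->fn1; $\C{// return precomputed fib(n-2)}$
	f->fn1 = f->fn; f->fn = ret + f->fn; $\C{// advance the sequence}$
	return ret;
}
Fib f1 = FibCtor, f2 = FibCtor; $\C{// multiple independent generators}$
\end{cfa}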
| 304 | This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18,Cforall} and its runtime, which is written entirely in \CFA. |
| 305 | \CFA is a modern, polymorphic, non-object-oriented\footnote{ |
| 306 | \CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance. |
| 307 | However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and set of functions (member/method) implemented by an implicit \lstinline@this@ (receiver) parameter.}, |
| 308 | backwards-compatible extension of the C programming language. |
| 309 | In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language allowing immediate dissemination. |
| 310 | Within the \CFA framework, new control-flow features are created from scratch because ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping features are concurrency~\cite[\S~7.26]{C11}. |
| 311 | However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}, and \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and mutex/condition locks, which is low-level and error-prone; |
| 312 | no high-level language concurrency features are defined. |
Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-9 nor msvc-19 (most recent versions) support the \Celeven include @threads.h@, indicating little interest in the C11 concurrency approach (possibly because of the effort required to add concurrency to \CC).
| 314 | Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}. |
| 315 | |
| 316 | In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages. |
| 317 | As multi-core hardware became available in the 1980/90s, both user and kernel threading were examined. |
| 318 | Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}. |
| 319 | Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads. |
| 320 | As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms. |
| 321 | From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}. |
| 322 | The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work units to facilitate load balancing by the runtime~\cite{Verch12}. |
| 323 | As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}. |
| 324 | Finally, performant user-threading implementations (both time and space) meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety. |
| 325 | |
| 326 | A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \ie some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}. |
| 327 | The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler. |
| 328 | One solution is low-level qualifiers and functions (\eg @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs. |
| 329 | A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety. |
| 330 | This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{ |
| 331 | \CFA exception handling will be presented in a separate paper. |
The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.
| 333 | } and coroutines. |
| 334 | Finally, language solutions allow matching constructs with language paradigm, \ie imperative and functional languages often have different presentations of the same concept to fit their programming model. |
| 335 | |
| 336 | Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary. |
| 337 | Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging}\footnote{ |
| 338 | The notion of competitive succession instead of direct handoff, \ie a lock owner releases the lock and an arriving thread acquires it ahead of preexisting waiter threads. |
| 339 | } (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, \ie once there is spurious wakeup, signals-as-hints follow. |
However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}; it is a performance design choice.
| 341 | Similarly, signals-as-hints are often a performance decision. |
We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation.
| 343 | (Author experience teaching concurrency is that students are highly confused by these semantics.) |
Clawing back performance, when local non-determinism is unimportant, should be an option, not the default.
| 345 | |
| 346 | \begin{comment} |
Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and, for those with managed memory, are not systems languages.
| 348 | As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten. |
While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing, complex, and interdependent feature-set (\eg objects, inheritance, templates, etc.) means idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC.
| 350 | Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base. |
| 351 | \CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems. |
| 352 | \end{comment} |
| 353 | |
| 354 | \CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default. |
| 355 | We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms. |
| 356 | The main contributions of this work are: |
| 357 | \begin{itemize}[topsep=3pt,itemsep=1pt] |
| 358 | \item |
| 359 | language-level generators, coroutines and user-level threading, which respect the expectations of C programmers. |
| 360 | \item |
monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
| 362 | \item |
| 363 | providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features. |
| 364 | % \item |
| 365 | % library extensions for executors, futures, and actors built on the basic mechanisms. |
| 366 | \item |
| 367 | a runtime system with no spurious wakeup. |
| 368 | \item |
| 369 | a dynamic partitioning mechanism to segregate the execution environment for specialized requirements. |
| 370 | % \item |
| 371 | % a non-blocking I/O library |
| 372 | \item |
| 373 | experimental results showing comparable performance of the new features with similar mechanisms in other programming languages. |
| 374 | \end{itemize} |
| 375 | |
| 376 | Section~\ref{s:StatefulFunction} begins advanced control by introducing sequential functions that retain data and execution state between calls, which produces constructs @generator@ and @coroutine@. |
| 377 | Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, which produces the @thread@ construct. |
Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms to restrict nondeterminism when controlling shared access to resources (mutual exclusion) and timing relationships among threads (synchronization).
| 379 | Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs. |
| 380 | Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure (cluster) threads and virtual processors (kernel threads). |
| 381 | Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0. |
| 382 | |
| 383 | |
| 384 | \section{Stateful Function} |
| 385 | \label{s:StatefulFunction} |
| 386 | |
| 387 | The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine. |
| 388 | Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension. |
| 389 | This capability is accomplished by retaining a data/execution \emph{closure} between invocations. |
| 390 | If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited. |
If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the names imply, it is often implemented with a separate stack and has no programming restrictions.
Hence, refactoring code in a generator (stackless) may require changing it to a coroutine (stackful).
| 393 | A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack. |
| 394 | As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles). |
| 395 | A fixed closure activated by modified call/return is faster than a variable closure activated by context switching. |
| 396 | Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance. |
| 397 | Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general. |
| 398 | Note, creation cost is amortized across usage, so activation cost is usually the dominant factor. |
730 | | Using a coroutine, it is possible to express the Fibonacci formula directly without any of the C problems. |
731 | | Figure~\ref{f:Coroutine3States} creates a @coroutine@ type: |
732 | | \begin{cfa} |
733 | | `coroutine` Fib { int fn; }; |
734 | | \end{cfa} |
which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines, \eg @next@.
Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has three suspend points, representing the three states in the Fibonacci formula, which context switch back to the caller's @resume@.
The interface routine @next@ takes a Fibonacci instance and context switches to it using @resume@;
739 | | on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned. |
740 | | The first @resume@ is special because it cocalls the coroutine at its coroutine main and allocates the stack; |
741 | | when the coroutine main returns, its stack is deallocated. |
742 | | Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes. |
743 | | Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}. |
744 | | Coroutine generators are called \newterm{output coroutines} because values are only returned. |
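A sketch of the coroutine main and interface routine consistent with this description:
\begin{cfa}
void main( Fib & fib ) with( fib ) { $\C{// coroutine main, runs on coroutine stack}$
	int f1, f2; $\C{// retained on this stack between suspends}$
	fn = 0; f1 = fn; suspend; $\C{// first state}$
	fn = 1; f2 = f1; f1 = fn; suspend; $\C{// second state}$
	for ( ;; ) { fn = f1 + f2; f2 = f1; f1 = fn; suspend; } $\C{// third state}$
}
int next( Fib & fib ) { resume( fib ); return fib.fn; } $\C{// restart coroutine, return next value}$
\end{cfa}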
745 | | |
746 | | Figure~\ref{f:CFAFmt} shows an \newterm{input coroutine}, @Format@, for restructuring text into groups of characters of fixed-size blocks. |
For example, the input on the left is reformatted into the output on the right.
| 562 | Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}. |
| 563 | For example, Python presents generators as a function object: |
| 564 | \begin{python} |
| 565 | def Gen(): |
| 566 | ... `yield val` ... |
| 567 | gen = Gen() |
| 568 | for i in range( 10 ): |
| 569 | print( next( gen ) ) |
| 570 | \end{python} |
| 571 | Boost presents coroutines in terms of four functor object-types: |
| 572 | \begin{cfa} |
| 573 | asymmetric_coroutine<>::pull_type |
| 574 | asymmetric_coroutine<>::push_type |
| 575 | symmetric_coroutine<>::call_type |
| 576 | symmetric_coroutine<>::yield_type |
| 577 | \end{cfa} |
| 578 | and many languages present threading using function pointers, @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}, \eg pthreads: |
| 579 | \begin{cfa} |
| 580 | void * rtn( void * arg ) { ... } |
| 581 | int i = 3, rc; |
| 582 | pthread_t t; $\C{// thread id}$ |
`rc = pthread_create( &t, NULL, rtn, (void *)i );` $\C{// create and initialize task, type-unsafe input parameter}$
| 584 | \end{cfa} |
| 585 | % void mycor( pthread_t cid, void * arg ) { |
| 586 | % int * value = (int *)arg; $\C{// type unsafe, pointer-size only}$ |
| 587 | % // thread body |
| 588 | % } |
| 589 | % int main() { |
| 590 | % int input = 0, output; |
| 591 | % coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$ |
| 592 | % coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$ |
| 593 | % } |
| 594 | \CFA's preferred presentation model for generators/coroutines/threads is a hybrid of objects and functions, with an object-oriented flavour. |
| 595 | Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type. |
| 596 | The custom type solves several issues, while accessing the underlying mechanisms used by the custom types is still allowed. |
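A sketch of this coupling, using an illustrative counter generator and the @generator@/@suspend@/@resume@ mechanisms detailed next:
\begin{cfa}
generator Count { int i; }; $\C{// custom type holds the retained state (closure)}$
void main( Count & c ) with( c ) { $\C{// coupled function: the generator main}$
	for ( i = 0;; i += 1 ) suspend; $\C{// restart here on next resume}$
}
int main() { $\C{// program main (overloaded name)}$
	Count c;
	resume( c ); $\C{// first resume starts at the top of the generator main}$
	sout | c.i; $\C{// print 0}$
}
\end{cfa}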
| 597 | |
| 598 | |
| 599 | \subsection{Generator} |
| 600 | |
| 601 | Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution. |
| 602 | The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity. |
| 603 | A series of different kinds of generators and their implementation demonstrate how this goal is accomplished. |
| 604 | |
| 605 | Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version. |
| 606 | This generator is an \emph{output generator}, producing a new result on each resumption. |
| 607 | To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle. |
| 608 | An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining one state in global variables is insufficient; |
| 609 | hence, state is retained in a closure between calls. |
| 610 | Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators. |
| 611 | The C version only has the middle execution state because the top execution state is declaration initialization. |
| 612 | Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type. |
| 613 | This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{ |
| 614 | The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution. |
| 615 | Hence, overloading this name for other starting points (generator/coroutine/thread) is a logical extension.} |
called a \emph{generator main}, which takes as its only parameter a reference to the generator type.
| 617 | The generator main contains @suspend@ statements that suspend execution without ending the generator versus @return@. |
| 618 | For the Fibonacci generator-main,\footnote{ |
| 619 | The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics. |
| 620 | Multiple aggregates may be opened.} |
| 621 | the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@. |
| 622 | Any local variables in @main@ \emph{are not retained} between calls; |
| 623 | hence local variables are only for temporary computations \emph{between} suspends. |
| 624 | All retained state \emph{must} appear in the generator's type. |
| 625 | As well, generator code containing a @suspend@ cannot be refactored into a helper function called by the generator, because @suspend@ is implemented via @return@, so a return from the helper function goes back to the current generator not the resumer. |
| 626 | The generator is started by calling function @resume@ with a generator instance, which begins execution at the top of the generator main, and subsequent @resume@ calls restart the generator at its point of last suspension. |
| 627 | Resuming an ended (returned) generator is undefined. |
| 628 | Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance. |
Figure~\ref{f:CFibonacciSim} shows that the C implementation of the \CFA generator needs only one additional field, @next@, to handle retention of execution state.
| 630 | The computed @goto@ at the start of the generator main, which branches after the previous suspend, adds very little cost to the resume call. |
| 631 | Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.\footnote{ |
| 632 | The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|++?|, \lstinline|?++|, and \lstinline|?+?|, in addition \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is \CC function call \lstinline|operator()|. |
| 633 | }% |
| 634 | \begin{cfa} |
| 635 | int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$ |
| 636 | int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// use function-call interface to skip N values}$ |
| 637 | double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}\CRT$ |
| 638 | sout | (int)f1() | (double)f1() | f2( 2 ); // alternative interface, cast selects call based on return type, step 2 values |
| 639 | \end{cfa} |
| 640 | Now, the generator can be a separately compiled opaque-type only accessed through its interface functions. |
| 641 | For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type, and hence only has a single interface, but an implicit closure. |
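Returning to the C simulation in Figure~\ref{f:CFibonacciSim}, the following minimal sketch shows the computed-@goto@ pattern (assuming GCC label addresses; names and details are illustrative rather than the figure's exact code):
\begin{cfa}
typedef struct { int fn1, fn; void * next; } Fib; $\C{// closure: data state plus execution state}$
#define FibCtor { 1, 0, NULL } $\C{// seeds so first resume yields fib(0)}$
void fib_main( Fib * f ) {
	if ( f->next ) goto *f->next; $\C{// computed goto branches after previous suspend}$
	for ( ;; ) {
		f->next = &&s1; return; $\C{// suspend: save restart point, return to resumer}$
	  s1: ; $\C{// resume restarts here}$
		int fn = f->fn1 + f->fn; f->fn1 = f->fn; f->fn = fn; $\C{// next value}$
	}
}
Fib * fib_resume( Fib * f ) { fib_main( f ); return f; } $\C{// cascadable resume; read fn afterwards}$
\end{cfa}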
| 642 | |
| 643 | Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden. |
| 644 | (This restriction is removed by the coroutine in Section~\ref{s:Coroutine}.) |
| 645 | This requirement follows from the generality of variable-size local-state, \eg local state with a variable-length array requires dynamic allocation because the array size is unknown at compile time. |
| 646 | However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming. |
| 647 | But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated. |
| 648 | With respect to safety, we believe static analysis can discriminate local state from temporary variables in a generator, \ie variable usage spanning @suspend@, and generate a compile-time error. |
| 649 | Finally, our current experience is that most generator problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small. |
| 650 | As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators. |
| 651 | |
| 652 | Figure~\ref{f:CFAFormatGen} shows an asymmetric \newterm{input generator}, @Fmt@, for restructuring text into groups of characters of fixed-size blocks, \ie the input on the left is reformatted into the output on the right, where newlines are ignored. |
| 653 | \begin{center} |
794 | | Format fmt; |
795 | | eof: for ( ;; ) { |
796 | | sin | fmt.ch; |
797 | | if ( eof( sin ) ) break eof; |
798 | | format( fmt ); |
| 841 | enum { N = 5 }; |
| 842 | PingPong ping = {"ping",N,0}, pong = {"pong",N,0}; |
| 843 | &ping.partner = &pong; &pong.partner = &ping; |
| 844 | `resume( ping );` |
| 845 | } |
| 846 | \end{cfa} |
| 847 | \end{lrbox} |
| 848 | |
| 849 | \begin{lrbox}{\myboxB} |
| 850 | \begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt] |
| 851 | typedef struct PingPong { |
| 852 | const char * name; |
| 853 | int N, i; |
| 854 | struct PingPong * partner; |
| 855 | void * next; |
| 856 | } PingPong; |
| 857 | #define PPCtor(name, N) {name,N,0,NULL,NULL} |
| 858 | void comain( PingPong * pp ) { |
| 859 | if ( pp->next ) goto *pp->next; |
| 860 | pp->next = &&cycle; |
| 861 | for ( ; pp->i < pp->N; pp->i += 1 ) { |
| 862 | printf( "%s %d\n", pp->name, pp->i ); |
| 863 | asm( "mov %0,%%rdi" : "=m" (pp->partner) ); |
| 864 | asm( "mov %rdi,%rax" ); |
| 865 | asm( "popq %rbx" ); |
| 866 | asm( "jmp comain" ); |
| 867 | cycle: ; |
| 868 | } |
| 869 | } |
| 870 | \end{cfa} |
| 871 | \end{lrbox} |
| 872 | |
| 873 | \subfloat[\CFA symmetric generator]{\label{f:CFAPingPongGen}\usebox\myboxA} |
| 874 | \hspace{3pt} |
| 875 | \vrule |
| 876 | \hspace{3pt} |
| 877 | \subfloat[C generator simulation]{\label{f:CPingPongSim}\usebox\myboxB} |
| 878 | \hspace{3pt} |
| 879 | \caption{Ping-Pong symmetric generator} |
| 880 | \label{f:PingPongSymmetricGenerator} |
| 881 | \end{figure} |
| 882 | |
| 883 | Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines). |
| 884 | Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators. |
| 885 | An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax: |
| 886 | \begin{cfa} |
| 887 | ... suspend`{ ... }`; |
| 888 | ... resume( C )`{ ... }` ... |
| 889 | \end{cfa} |
| 890 | Since the current generator's stack is released before calling the compound statement, the compound statement can only reference variables in the generator's type. |
| 891 | This feature is useful when a generator is used in a concurrent context to ensure it is stopped before releasing a lock in the compound statement, which might immediately allow another thread to resume the generator. |
| 892 | Hence, this mechanism provides a general and safe handoff of the generator among competing threads. |
| 893 | |
| 894 | |
| 895 | \subsection{Coroutine} |
| 896 | \label{s:Coroutine} |
| 897 | |
| 898 | Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main. |
| 899 | A coroutine is specified by replacing @generator@ with @coroutine@ for the type. |
Coroutine generality results in higher cost for creation (dynamic stack allocation), execution (context switching among stacks), and termination (possible stack unwinding and dynamic stack deallocation).
| 901 | A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators. |
| 902 | |
| 903 | First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main. |
| 904 | \begin{description} |
| 905 | \item[Fibonacci] |
| 906 | Move the declaration of @fn1@ to the start of coroutine main. |
| 907 | \begin{cfa}[xleftmargin=0pt] |
| 908 | void main( Fib & fib ) with(fib) { |
| 909 | `int fn1;` |
| 910 | \end{cfa} |
| 911 | \item[Formatter] |
| 912 | Move the declaration of @g@ and @b@ to the for loops in the coroutine main. |
| 913 | \begin{cfa}[xleftmargin=0pt] |
| 914 | for ( `g`; 5 ) { |
| 915 | for ( `b`; 4 ) { |
| 916 | \end{cfa} |
| 917 | \item[Device Driver] |
| 918 | Move the declaration of @lnth@ and @sum@ to their points of initialization. |
| 919 | \begin{cfa}[xleftmargin=0pt] |
| 920 | status = CONT; |
| 921 | `unsigned int lnth = 0, sum = 0;` |
| 922 | ... |
| 923 | `unsigned short int crc = byte << 8;` |
| 924 | \end{cfa} |
| 925 | \item[PingPong] |
| 926 | Move the declaration of @i@ to the for loop in the coroutine main. |
| 927 | \begin{cfa}[xleftmargin=0pt] |
| 928 | void main( PingPong & pp ) with(pp) { |
| 929 | for ( `i`; N ) { |
| 930 | \end{cfa} |
| 931 | \end{description} |
| 932 | It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver. |
| 933 | \begin{cfa} |
| 934 | unsigned int Crc() { |
| 935 | `suspend;` |
| 936 | unsigned short int crc = byte << 8; |
| 937 | `suspend;` |
| 938 | status = (crc | byte) == sum ? MSG : ECRC; |
| 939 | return crc; |
| 940 | } |
| 941 | \end{cfa} |
| 942 | A call to this function is placed at the end of the driver's coroutine-main. |
| 943 | For complex finite-state machines, refactoring is part of normal program abstraction, especially when code is used in multiple places. |
| 944 | Again, this complexity is usually associated with execution state rather than data state. |
| 945 | |
| 946 | \begin{comment} |
| 947 | Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @next@. |
| 948 | Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main. |
| 949 | The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@. |
| 950 | The interface function @next@, takes a Fibonacci instance and context switches to it using @resume@; |
| 951 | on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned. |
| 952 | The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack; |
| 953 | when the coroutine main returns, its stack is deallocated. |
| 954 | Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes. |
| 955 | Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}. |
| 956 | Coroutine generators are called \newterm{output coroutines} because values are only returned. |
| 957 | |
| 958 | \begin{figure} |
| 959 | \centering |
| 960 | \newbox\myboxA |
| 961 | % \begin{lrbox}{\myboxA} |
| 962 | % \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
| 963 | % `int fn1, fn2, state = 1;` // single global variables |
| 964 | % int fib() { |
| 965 | % int fn; |
| 966 | % `switch ( state )` { // explicit execution state |
| 967 | % case 1: fn = 0; fn1 = fn; state = 2; break; |
| 968 | % case 2: fn = 1; fn2 = fn1; fn1 = fn; state = 3; break; |
| 969 | % case 3: fn = fn1 + fn2; fn2 = fn1; fn1 = fn; break; |
| 970 | % } |
| 971 | % return fn; |
| 972 | % } |
| 973 | % int main() { |
| 974 | % |
| 975 | % for ( int i = 0; i < 10; i += 1 ) { |
| 976 | % printf( "%d\n", fib() ); |
| 977 | % } |
| 978 | % } |
| 979 | % \end{cfa} |
| 980 | % \end{lrbox} |
| 981 | \begin{lrbox}{\myboxA} |
| 982 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
| 983 | #define FibCtor { 0, 1 } |
| 984 | typedef struct { int fn1, fn; } Fib; |
| 985 | int fib( Fib * f ) { |
| 986 | |
| 987 | int ret = f->fn1; |
| 988 | f->fn1 = f->fn; |
| 989 | f->fn = ret + f->fn; |
| 990 | return ret; |
| 991 | } |
| 992 | |
| 993 | |
| 994 | |
| 995 | int main() { |
| 996 | Fib f1 = FibCtor, f2 = FibCtor; |
| 997 | for ( int i = 0; i < 10; i += 1 ) { |
| 998 | printf( "%d %d\n", |
| 999 | fib( &f1 ), fib( &f2 ) ); |
947 | | |
948 | | After iterating $N$ times, the producer calls @stop@. |
949 | | The @done@ flag is set to stop the consumer's execution and a resume is executed. |
950 | | The context switch restarts @cons@ in @payment@ and it returns with the last receipt. |
951 | | The consumer terminates its loops because @done@ is true, its @main@ terminates, so @cons@ transitions from a coroutine back to an object, and @prod@ reactivates after the resume in @stop@. |
952 | | @stop@ returns and @prod@'s coroutine main terminates. |
953 | | The program main restarts after the resume in @start@. |
954 | | @start@ returns and the program main terminates. |
955 | | |
956 | | |
957 | | \subsection{Coroutine Implementation} |
958 | | |
A significant implementation challenge for coroutines (and threads, see Section~\ref{threads}) is adding extra fields and executing code before/after the coroutine constructor/destructor and coroutine main, in order to create/initialize/de-initialize/destroy the extra fields and the stack.
960 | | There are several solutions to this problem and the chosen option forced the \CFA coroutine design. |
961 | | |
962 | | Object-oriented inheritance provides extra fields and code in a restricted context, but it requires programmers to explicitly perform the inheritance: |
963 | | \begin{cfa} |
964 | | struct mycoroutine $\textbf{\textsf{inherits}}$ baseCoroutine { ... } |
965 | | \end{cfa} |
966 | | and the programming language (and possibly its tool set, \eg debugger) may need to understand @baseCoroutine@ because of the stack. |
Furthermore, the execution of constructors/destructors is in the wrong order for certain operations, \eg threads:
if a thread is implicitly started, it must start \emph{after} all constructors, because the thread relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived one.
969 | | |
970 | | An alternatively is composition: |
971 | | \begin{cfa} |
972 | | struct mycoroutine { |
973 | | ... // declarations |
| 1207 | Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling. |
| 1208 | |
| 1209 | \begin{figure} |
| 1210 | \begin{center} |
| 1211 | \input{FullProdConsStack.pstex_t} |
| 1212 | \end{center} |
| 1213 | \vspace*{-10pt} |
| 1214 | \caption{Producer / consumer runtime stacks} |
| 1215 | \label{f:ProdConsRuntimeStacks} |
| 1216 | |
| 1217 | \medskip |
| 1218 | |
| 1219 | \begin{center} |
| 1220 | \input{FullCoroutinePhases.pstex_t} |
| 1221 | \end{center} |
| 1222 | \vspace*{-10pt} |
| 1223 | \caption{Ping / Pong coroutine steps} |
| 1224 | \label{f:PingPongFullCoroutineSteps} |
| 1225 | \end{figure} |
| 1226 | |
Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
Furthermore, each deallocated coroutine must guarantee all destructors are run for objects allocated in the coroutine type \emph{and} for objects allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
When a coroutine's main ends normally, its stack is already unwound, so any stack-allocated objects with destructors have been finalized.
| 1230 | The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator. |
| 1231 | However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent. |
| 1232 | Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem). |
| 1233 | Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines. |
| 1234 | |
| 1235 | Our solution is to context switch back to the first resumer (starter) once the coroutine ends. |
| 1236 | This semantics works well for the most common asymmetric and symmetric coroutine usage patterns. |
| 1237 | For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer. |
| 1238 | All previous generators converted to coroutines have this property. |
| 1239 | For symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle. |
| 1240 | Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter. |
| 1241 | Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end. |
| 1242 | For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination. |
| 1243 | |
| 1244 | The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first. |
| 1245 | Assume generator @PingPong@ is converted to a coroutine. |
| 1246 | Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version. |
| 1247 | The program main creates (declares) coroutine instances @ping@ and @pong@. |
| 1248 | Next, program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter. |
| 1249 | Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times. |
| 1250 | By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first. |
If @pong@ ends first, it resumes its starter @ping@ in its coroutine main, then @ping@ ends and resumes its starter, the program main, in function @start@.
If @ping@ ends first, it resumes its starter, the program main, in function @start@.
| 1253 | Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point. |
| 1254 | Once back at the program main, coroutines @ping@ and @pong@ are deallocated. |
| 1255 | For generators, deallocation runs the destructors for all objects in the generator type. |
| 1256 | For coroutines, deallocation deals with objects in the coroutine type and must also run the destructors for any objects pending on the coroutine's stack for any unterminated coroutine. |
Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it, so the cancellation exception propagates to the root of the coroutine's stack, destroying all local variables on the stack.
Thus, the \CFA semantics for generators and coroutines ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
| 1259 | Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop. |
| 1260 | |
| 1261 | Finally, there is an interesting effect for @suspend@ with symmetric coroutines. |
| 1262 | A coroutine must retain its last resumer to suspend back because the resumer is on a different stack. |
| 1263 | These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases. |
| 1264 | However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer. |
| 1265 | To prevent losing this information, a self-resume does not overwrite the last resumer. |
| 1266 | |
| 1267 | |
| 1268 | \subsection{Generator / Coroutine Implementation} |
| 1269 | |
| 1270 | A significant implementation challenge for generators/coroutines (and threads in Section~\ref{s:threads}) is adding extra fields to the custom types and related functions, \eg inserting code after/before the coroutine constructor/destructor and @main@ to create/initialize/de-initialize/destroy any extra fields, \eg stack. |
There are several solutions to this problem, which follow from the object-oriented flavour of adopting custom types.
| 1272 | |
| 1273 | For object-oriented languages, inheritance is used to provide extra fields and code via explicit inheritance: |
| 1274 | \begin{cfa}[morekeywords={class,inherits}] |
| 1275 | class myCoroutine inherits baseCoroutine { ... } |
| 1276 | \end{cfa} |
| 1277 | % The problem is that the programming language and its tool chain, \eg debugger, @valgrind@, need to understand @baseCoroutine@ because it infers special property, so type @baseCoroutine@ becomes a de facto keyword and all types inheriting from it are implicitly custom types. |
The problem is that some special properties are not handled by existing language semantics, \eg the execution order of constructors/destructors is wrong for implicitly starting threads: the thread must start \emph{after} all constructors, because it relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived.
Alternatives, such as explicitly starting threads as in Java, are repetitive, and forgetting to call @start@ is a common source of errors.
| 1280 | An alternative is composition: |
| 1281 | \begin{cfa} |
| 1282 | struct myCoroutine { |
	... // declaration/communication variables
	baseCoroutine dummy; // composition, last declaration
};
\end{cfa}
Composition, however, requires the programmer to remember the extra declaration and place it last, so the communication variables are initialized before the composed fields.
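
\CFA instead adopts the custom-type approach combined with a @trait@ that any coroutine-like type must satisfy.
A minimal sketch of this trait follows the \CFA runtime naming (@is_coroutine@, @coroutine_desc@), though the exact declarations here are illustrative:
\begin{cfa}
trait is_coroutine( dtype T ) {
	void main( T & );	// coroutine main, started by first resume
	coroutine_desc * get_coroutine( T & );	// access implicit runtime descriptor
};
forall( dtype T | is_coroutine( T ) ) void resume( T & );
\end{cfa}
Declaring a type with the @coroutine@ keyword implicitly generates the descriptor field and @get_coroutine@, so in the common case the programmer only supplies @main@.
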
| 1352 | The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization. |
| 1353 | |
| 1354 | Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar). |
The coroutine handle is the @coroutine@ instance containing the programmer-specified global/communication variables shared across its interface functions.
| 1356 | The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate. |
| 1357 | The coroutine stack can appear in a number of locations and be fixed or variable sized. |
| 1358 | Hence, the coroutine's stack could be a VLS\footnote{ |
| 1359 | We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays. |
| 1360 | Once allocated, a VLS is fixed sized.} |
| 1361 | on the allocating stack, provided the allocating stack is large enough. |
For a VLS stack, allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup).
| 1363 | For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs. |
| 1364 | With heap stack allocation, it is also possible to use a split (segmented) stack calling convention, available with gcc and clang, so the stack is variable sized. |
| 1365 | Currently, \CFA supports stack/heap allocated descriptors but only fixed-sized heap allocated stacks. |
In \CFA debug-mode, the fixed-sized stack is terminated with a write-protected page, which catches most stack overflows.
| 1367 | Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students. |
| 1368 | Split-stack allocation is under development but requires recompilation of legacy code, which may be impossible. |
| 1369 | |
| 1370 | \begin{figure} |
| 1371 | \centering |
| 1372 | \input{corlayout.pstex_t} |
| 1373 | \caption{Coroutine memory layout} |
| 1374 | \label{f:CoroutineMemoryLayout} |
| 1375 | \end{figure} |
| 1376 | |
| 1377 | |
| 1378 | \section{Concurrency} |
| 1379 | \label{s:Concurrency} |
| 1380 | |
| 1381 | Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack. |
| 1382 | A single thread with multiple call stacks, \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}. |
| 1383 | In coroutining, coroutines self-schedule the thread across stacks so execution is deterministic. |
| 1384 | (It is \emph{impossible} to generate a concurrency error when coroutining.) |
| 1385 | However, coroutines are a stepping stone towards concurrency. |
| 1386 | |
The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
| 1388 | Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}. |
| 1389 | The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}. |
Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
| 1391 | While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches. |
| 1392 | Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors. |
The reason is that the scheduler/runtime has complete knowledge about resources and how best to utilize them.
| 1394 | However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}, which restrict nondeterminism for correctness; |
| 1395 | otherwise, it is impossible to write meaningful concurrent programs. |
| 1396 | Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow. |
| 1397 | |
A scheduler can be either stackless or stackful.
| 1399 | For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch. |
| 1400 | For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches. |
| 1401 | The \CFA runtime uses a stackful scheduler for uniformity and security. |
| 1402 | |
| 1403 | |
| 1404 | \subsection{Thread} |
| 1405 | \label{s:threads} |
| 1406 | |
| 1407 | Threading needs the ability to start a thread and wait for its completion. |
| 1408 | A common API for this ability is @fork@ and @join@. |
| 1409 | \begin{cquote} |
| 1410 | \begin{tabular}{@{}lll@{}} |
| 1411 | \multicolumn{1}{c}{\textbf{Java}} & \multicolumn{1}{c}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\ |
| 1412 | \begin{cfa} |
| 1413 | class MyTask extends Thread {...} |
MyTask t = new MyTask(...);
| 1415 | `t.start();` // start |
| 1416 | // concurrency |
| 1417 | `t.join();` // wait |
| 1418 | \end{cfa} |
| 1419 | & |
| 1420 | \begin{cfa} |
class MyTask { ... }; // functor
| 1422 | MyTask mytask; |
| 1423 | `thread t( mytask, ... );` // start |
| 1424 | // concurrency |
| 1425 | `t.join();` // wait |
| 1426 | \end{cfa} |
| 1427 | & |
| 1428 | \begin{cfa} |
| 1429 | void * rtn( void * arg ) {...} |
| 1430 | pthread_t t; int i = 3; |
`pthread_create( &t, NULL, rtn, (void *)i );` // start
| 1432 | // concurrency |
| 1433 | `pthread_join( t, NULL );` // wait |
| 1434 | \end{cfa} |
| 1435 | \end{tabular} |
| 1436 | \end{cquote} |
| 1437 | \CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics (allocation/deallocation), where threads implicitly @fork@ after construction and @join@ before destruction. |
| 1438 | \begin{cfa} |
| 1439 | thread MyTask {}; |
| 1440 | void main( MyTask & this ) { ... } |
| 1441 | int main() { |
| 1442 | MyTask team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$ |
| 1443 | // concurrency |
| 1444 | } $\C{// deallocate stack-based threads, implicit joins before destruction}$ |
| 1445 | \end{cfa} |
This semantic ensures a thread is started and stopped exactly once, eliminating some programming errors, and scales to multiple threads for basic (termination) synchronization.
| 1447 | For block allocation to arbitrary depth, including recursion, threads are created/destroyed in a lattice structure (tree with top and bottom). |
| 1448 | Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation. |
| 1449 | \begin{cfa} |
| 1450 | MyTask * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$ |
| 1451 | int main() { |
| 1452 | MyTask * team = factory( 10 ); |
| 1453 | // concurrency |
| 1454 | `delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$ |
| 1455 | } |
| 1456 | \end{cfa} |
| 1457 | |
| 1458 | Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated. |
| 1459 | The program uses heap-based threads because each thread needs different constructor values. |
| 1460 | (Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.) |
| 1461 | The allocation/deallocation pattern appears unusual because allocated objects are immediately deallocated without any intervening code. |
| 1462 | However, for threads, the deletion provides implicit synchronization, which is the intervening code. |
| 1463 | % While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), and so any inhibited concurrency is very small as totalling the subtotals is trivial. |
| 1464 | |
| 1465 | \begin{figure} |
| 1466 | \begin{cfa} |
| 1467 | `thread` Adder { int * row, cols, & subtotal; } $\C{// communication variables}$ |
| 1468 | void ?{}( Adder & adder, int row[], int cols, int & subtotal ) { |
| 1469 | adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ]; |
| 1470 | } |
| 1471 | void main( Adder & adder ) with( adder ) { |
| 1472 | subtotal = 0; |
| 1473 | for ( c; cols ) { subtotal += row[c]; } |
| 1474 | } |
| 1475 | int main() { |
| 1476 | const int rows = 10, cols = 1000; |
| 1477 | int matrix[rows][cols], subtotals[rows], total = 0; |
| 1478 | // read matrix |
| 1479 | Adder * adders[rows]; |
	for ( r; rows ) { $\C{// start threads to sum rows}$
| 1481 | adders[r] = `new( matrix[r], cols, &subtotals[r] );` |
| 1482 | } |
| 1483 | for ( r; rows ) { $\C{// wait for threads to finish}$ |
| 1484 | `delete( adders[r] );` $\C{// termination join}$ |
| 1485 | total += subtotals[r]; $\C{// total subtotal}$ |
| 1486 | } |
| 1487 | sout | total; |
| 1488 | } |
| 1489 | \end{cfa} |
| 1490 | \caption{Concurrent matrix summation} |
| 1491 | \label{s:ConcurrentMatrixSummation} |
| 1492 | \end{figure} |
| 1493 | |
| 1494 | |
| 1495 | \subsection{Thread Implementation} |
| 1496 | |
Threads in \CFA are user level, run by runtime kernel-threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism.
| 1498 | Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the task-interface functions. |
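
Along the same lines as the coroutine trait, a sketch of the thread trait is (names follow the \CFA runtime, but the exact declarations are illustrative):
\begin{cfa}
trait is_thread( dtype T ) {
	void main( T & );	// thread main, implicitly forked after construction
	thread_desc * get_thread( T & );	// access implicit runtime descriptor
};
\end{cfa}
As for coroutines, declaring a type with the @thread@ keyword implicitly generates the descriptor field and @get_thread@, so the programmer normally only supplies @main@.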

\subsection{Monitor}

\CFA provides mutual exclusion via a @monitor@ aggregate type, where a function declared with a @mutex@ parameter implicitly acquires the monitor's lock for the duration of the call.
Because the acquired object is specified by a parameter rather than a receiver, several forms of @mutex@ parameter are conceivable for a monitor type @M@:
\begin{cfa}
| 1666 | int f1( M & mutex m ); $\C{// single parameter object}$ |
| 1667 | int f2( M * mutex m ); $\C{// single or multiple parameter object}$ |
int f3( M * mutex m[$\,$] ); $\C{// multiple parameter objects}$
int f4( stack( M * ) & mutex m ); $\C{// multiple parameter objects}$
| 1670 | \end{cfa} |
| 1671 | Function @f1@ has a single parameter object, while @f2@'s indirection could be a single or multi-element array, where static array size is often unknown in C. |
| 1672 | Function @f3@ has a multiple object matrix, and @f4@ a multiple object data structure. |
| 1673 | While shown shortly, multiple object acquisition is possible, but the number of objects must be statically known. |
| 1674 | Therefore, \CFA only acquires one monitor per parameter with at most one level of indirection, excluding pointers as it is impossible to statically determine the size. |
| 1675 | |
| 1676 | For object-oriented monitors, \eg Java, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@. |
| 1677 | \CFA has no receiver, and hence, the explicit @mutex@ qualifier is used to specify which objects acquire mutual exclusion. |
| 1678 | A positive consequence of this design decision is the ability to support multi-monitor functions,\footnote{ |
| 1679 | While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.} |
| 1680 | called \newterm{bulk acquire}. |
| 1681 | \CFA guarantees acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors is safe from deadlock. |
| 1682 | Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking. |
| 1683 | A \CFA programmer only has to manage when to acquire mutual exclusion; |
| 1684 | a \CC programmer must select the correct lock and acquisition mechanism from a panoply of locking options. |
| 1685 | Making good choices for common cases in \CFA simplifies the programming experience and enhances safety. |
| 1686 | |
| 1687 | \begin{figure} |
| 1688 | \centering |
| 1689 | \begin{lrbox}{\myboxA} |
| 1690 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
| 1691 | monitor BankAccount { |
| 1692 | |
| 1693 | int balance; |
| 1694 | } b1 = { 0 }, b2 = { 0 }; |
| 1695 | void deposit( BankAccount & `mutex` b, |
| 1696 | int deposit ) with(b) { |
| 1697 | balance += deposit; |
| 1698 | } |
| 1699 | void transfer( BankAccount & `mutex` my, |
| 1700 | BankAccount & `mutex` your, int me2you ) { |
| 1701 | |
| 1702 | deposit( my, -me2you ); // debit |
| 1703 | deposit( your, me2you ); // credit |
| 1704 | } |
| 1705 | `thread` Person { BankAccount & b1, & b2; }; |
| 1706 | void main( Person & person ) with(person) { |
| 1707 | for ( 10_000_000 ) { |
| 1708 | if ( random() % 3 ) deposit( b1, 3 ); |
| 1709 | if ( random() % 3 ) transfer( b1, b2, 7 ); |
| 1710 | } |
| 1711 | } |
| 1712 | int main() { |
| 1713 | `Person p1 = { b1, b2 }, p2 = { b2, b1 };` |
| 1714 | |
| 1715 | } // wait for threads to complete |
| 1716 | \end{cfa} |
| 1717 | \end{lrbox} |
| 1718 | |
| 1719 | \begin{lrbox}{\myboxB} |
| 1720 | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
| 1721 | struct BankAccount { |
| 1722 | `recursive_mutex m;` |
| 1723 | int balance = 0; |
| 1724 | } b1, b2; |
| 1725 | void deposit( BankAccount & b, int deposit ) { |
| 1726 | `scoped_lock lock( b.m );` |
| 1727 | b.balance += deposit; |
| 1728 | } |
| 1729 | void transfer( BankAccount & my, |
| 1730 | BankAccount & your, int me2you ) { |
| 1731 | `scoped_lock lock( my.m, your.m );` |
| 1732 | deposit( my, -me2you ); // debit |
| 1733 | deposit( your, me2you ); // credit |
| 1734 | } |
| 1735 | |
| 1736 | void person( BankAccount & b1, BankAccount & b2 ) { |
| 1737 | for ( int i = 0; i < 10$'$000$'$000; i += 1 ) { |
| 1738 | if ( random() % 3 ) deposit( b1, 3 ); |
| 1739 | if ( random() % 3 ) transfer( b1, b2, 7 ); |
| 1740 | } |
| 1741 | } |
| 1742 | int main() { |
| 1743 | `thread p1(person, ref(b1), ref(b2)), p2(person, ref(b2), ref(b1));` |
| 1744 | `p1.join(); p2.join();` |
| 1745 | } |
| 1746 | \end{cfa} |
| 1747 | \end{lrbox} |
| 1748 | |
| 1749 | \subfloat[\CFA]{\label{f:CFABank}\usebox\myboxA} |
| 1750 | \hspace{3pt} |
| 1751 | \vrule |
| 1752 | \hspace{3pt} |
| 1753 | \subfloat[\CC]{\label{f:C++Bank}\usebox\myboxB} |
| 1754 | \hspace{3pt} |
| 1755 | \caption{Bank transfer problem} |
| 1756 | \label{f:BankTransfer} |
| 1757 | \end{figure} |
| 1758 | |
| 1759 | Users can still force the acquiring order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@. |
| 1760 | \begin{cfa} |
void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$
void bar( M & mutex m1, M & nomutex m2 ) { $\C{// acquire m1}$
	... foo( m1, m2 ); ... $\C{// also acquire m2}$
}
\end{cfa}
Here, @bar@ only acquires @m1@, and its nested call to @foo@ then acquires @m2@, so the acquisition order is @m1@ followed by @m2@.

| 1814 | Finally, \CFA monitors do not allow calling threads to barge ahead of signalled threads, which simplifies synchronization among threads in the monitor and increases correctness. |
| 1815 | If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring additional flags and multiple unblock/block cycles. |
In fact, the signal-as-hint semantics is completely opposite to the semantics proposed by Hoare in his seminal paper on monitors~\cite[p.~550]{Hoare74}.
| 1817 | % \begin{cquote} |
| 1818 | % However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program. |
| 1819 | % It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74} |
| 1820 | % \end{cquote} |
| 1821 | Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging. |
Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, a pattern otherwise required when barging is possible and one that can cause thread starvation.
| 1823 | |
| 1824 | Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}). |
External calling threads block on the calling queue if the monitor is occupied; otherwise, they enter in FIFO order.
| 1826 | Internal threads block on condition queues via @wait@ and reenter from the condition in FIFO order. |
Alternatively, internal threads block on urgent via @signal_block@ or @waitfor@, and reenter implicitly when the monitor becomes empty, \ie the thread in the monitor exits or waits.
| 1828 | |
| 1829 | There are three signalling mechanisms to unblock waiting threads to enter the monitor. |
| 1830 | Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion, so either the signaller or signallee can proceed. |
| 1831 | For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line). |
| 1832 | Multiple signals move multiple signallees to urgent until the condition is empty. |
| 1833 | When the signaller exits or waits, a thread blocked on urgent is processed before calling threads to prevent barging. |
| 1834 | (Java conceptually moves the signalled thread to the calling queue, and hence, allows barging.) |
| 1835 | The alternative unblock is in the opposite order using @signal_block@, where the signaller is moved to urgent and the signallee continues (dashed line), and is implicitly unblocked from urgent when the signallee exits or waits. |
| 1836 | |
| 1837 | For external scheduling, the condition queues are not used; |
| 1838 | instead threads are unblocked directly from the calling queue using @waitfor@ based on function names requesting mutual exclusion. |
| 1839 | (The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.) |
The @waitfor@ has the same semantics as @signal_block@, where the accepted thread executes before the acceptor, which waits on urgent.
| 1841 | Executing multiple @waitfor@s from different signalled functions causes the calling threads to move to urgent. |
| 1842 | External scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited. |
| 1843 | Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling. |
| 1844 | If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread. |
| 1845 | We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next. |
| 1846 | Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling. |
| 1847 | |
| 1848 | \begin{figure} |
| 1849 | \centering |
| 1850 | % \subfloat[Scheduling Statements] { |
| 1851 | % \label{fig:SchedulingStatements} |
| 1852 | % {\resizebox{0.45\textwidth}{!}{\input{CondSigWait.pstex_t}}} |
| 1853 | \input{CondSigWait.pstex_t} |
| 1854 | % }% subfloat |
| 1855 | % \quad |
| 1856 | % \subfloat[Bulk acquire monitor] { |
| 1857 | % \label{fig:BulkMonitor} |
| 1858 | % {\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}} |
| 1859 | % }% subfloat |
| 1860 | \caption{Monitor Scheduling} |
| 1861 | \label{f:MonitorScheduling} |
| 1862 | \end{figure} |
| 1863 | |
| 1864 | Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@. |
| 1865 | The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list. |
| 1866 | The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer. |
| 1867 | Signalling is unconditional, because signalling an empty condition variable does nothing. |
| 1868 | It is common to declare condition variables as monitor fields to prevent shared access, hence no locking is required for access as the conditions are protected by the monitor lock. |
| 1869 | In \CFA, a condition variable can be created/stored independently. |
| 1870 | % To still prevent expensive locking on access, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, resulting in a low-cost boolean test to detect sharing from other monitors. |
| 1871 | |
| 1872 | % Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means: |
| 1873 | % \begin{enumerate} |
| 1874 | % \item |
| 1875 | % The signalling thread returns immediately and the signalled thread continues. |
| 1876 | % \item |
| 1877 | % The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait). |
| 1878 | % \item |
| 1879 | % The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues. |
| 1880 | % \end{enumerate} |
| 1881 | % The first approach is too restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}). |
| 1882 | % \CFA supports the next two semantics as both are useful. |
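
The following is a sketch consistent with this description, with a non-generic element type and a fixed buffer size for brevity (names are illustrative, not the figure's verbatim code);
note the @if@ rather than a @while@ around each @wait@, which is safe because there is no barging:
\begin{cfa}
monitor Buffer {
	condition full, empty;	// waiting producers / consumers
	int front, back, count;
	int elements[10];
};
void ?{}( Buffer & buf ) { buf.front = buf.back = buf.count = 0; }
void insert( Buffer & mutex buf, int elem ) with( buf ) {
	if ( count == 10 ) wait( full );	// buffer full ? block producer
	elements[back] = elem;  back = ( back + 1 ) % 10;  count += 1;
	signal( empty );	// unblock a consumer, if any
}
int remove( Buffer & mutex buf ) with( buf ) {
	if ( count == 0 ) wait( empty );	// buffer empty ? block consumer
	int elem = elements[front];  front = ( front + 1 ) % 10;  count -= 1;
	signal( full );	// unblock a producer, if any
	return elem;
}
\end{cfa}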
1542 | | |
1543 | | Both internal and external scheduling extend to multiple monitors in a natural way. |
1544 | | \begin{cfa} |
1545 | | monitor M { `condition e`; ... }; |
1546 | | void foo( M & mutex m1, M & mutex m2 ) { |
1547 | | ... wait( `e` ); ... $\C{// wait( e, m1, m2 )}$ |
1548 | | ... wait( `e, m1` ); ... |
1549 | | ... wait( `e, m2` ); ... |
1550 | | } |
1551 | | |
1552 | | void rtn$\(_1\)$( M & mutex m1, M & mutex m2 ); |
1553 | | void rtn$\(_2\)$( M & mutex m1 ); |
1554 | | void bar( M & mutex m1, M & mutex m2 ) { |
1555 | | ... waitfor( `rtn` ); ... $\C{// waitfor( rtn\(_1\), m1, m2 )}$ |
1556 | | ... waitfor( `rtn, m1` ); ... $\C{// waitfor( rtn\(_2\), m1 )}$ |
1557 | | } |
1558 | | \end{cfa} |
For @wait( e )@, the default semantics is to atomically block the calling thread and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
Similarly, for @waitfor( rtn, ... )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@, and specific mutex parameter(s) can be specified to override the implicit multi-monitor wait, \eg @waitfor( rtn, m1 )@.
The static-verification rules for the released monitors are discussed below.
1566 | | |
The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( `e, m1` ); ... $\C{// release m1, keeping m2 acquired}$
}
void baz( M & mutex m1, M & mutex m2 ) { $\C{// must acquire m1 and m2}$
	... signal( `e` ); ...
}
\end{cfa}
The @wait@ only releases @m1@, so the signalling thread cannot acquire both @m1@ and @m2@ to enter @baz@ and get to the @signal@.
While deadlock can occur with multiple/nested acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
1576 | | |
1585 | | Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency. |
1586 | | |
1587 | | |
1588 | | \subsection{Barging Prevention} |
1589 | | |
1590 | | Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics. |
1591 | | The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors. |
1592 | | The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired. |
1593 | | When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting thread to prevent barging into the outer @mutex@ statement by another thread. |
1594 | | However, both the signalling and signalled threads still need monitor @m1@. |
1595 | | |
1596 | | \begin{figure} |
1597 | | \newbox\myboxA |
1598 | | \begin{lrbox}{\myboxA} |
1599 | | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
1600 | | monitor M m1, m2; |
1601 | | condition c; |
1602 | | mutex( m1 ) { |
1603 | | ... |
1604 | | mutex( m1, m2 ) { |
1605 | | ... `wait( c )`; // block and release m1, m2 |
1606 | | // m1, m2 acquired |
1607 | | } // $\LstCommentStyle{\color{red}release m2}$ |
1608 | | // m1 acquired |
1609 | | } // release m1 |
1610 | | \end{cfa} |
1611 | | \end{lrbox} |
1612 | | |
1613 | | \newbox\myboxB |
1614 | | \begin{lrbox}{\myboxB} |
1615 | | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
1616 | | |
1617 | | |
1618 | | mutex( m1 ) { |
1619 | | ... |
1620 | | mutex( m1, m2 ) { |
1621 | | ... `signal( c )`; ... |
1622 | | // m1, m2 acquired |
1623 | | } // $\LstCommentStyle{\color{red}release m2}$ |
1624 | | // m1 acquired |
1625 | | } // release m1 |
1626 | | \end{cfa} |
1627 | | \end{lrbox} |
1628 | | |
1629 | | \newbox\myboxC |
1630 | | \begin{lrbox}{\myboxC} |
1631 | | \begin{cfa}[aboveskip=0pt,belowskip=0pt] |
1632 | | |
1633 | | |
1634 | | mutex( m1 ) { |
1635 | | ... `wait( c )`; ... |
1636 | | // m1 acquired |
1637 | | } // $\LstCommentStyle{\color{red}release m1}$ |
1638 | | |
1639 | | |
1640 | | |
1641 | | |
1642 | | \end{cfa} |
1643 | | \end{lrbox} |
1644 | | |
1645 | | \begin{cquote} |
1646 | | \subfloat[Waiting Thread]{\label{f:WaitingThread}\usebox\myboxA} |
1647 | | \hspace{2\parindentlnth} |
1648 | | \subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxB} |
1649 | | \hspace{2\parindentlnth} |
\subfloat[Other Waiting Thread]{\label{f:OtherWaitingThread}\usebox\myboxC}
1651 | | \end{cquote} |
1652 | | \caption{Barging Prevention} |
1653 | | \label{f:BargingPrevention} |
1654 | | \end{figure} |
1655 | | |
1656 | | The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. |
Arguably, that moment is when the last lock is no longer needed, because this semantics fits most closely with the behaviour of single-monitor scheduling.
1658 | | This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups. |
1659 | | This solution releases the monitors once every monitor in a group can be released. |
1660 | | However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released. |
1661 | | A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks. |
1662 | | |
\begin{comment}
However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:

1669 | | \paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it. |
1670 | | \paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate. |
1671 | | \\ |
1672 | | |
1673 | | Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. |
1674 | | However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}. |
1675 | | |
1676 | | In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach. |
1677 | | |
1678 | | |
1679 | | \subsubsection{Dependency graphs} |
1680 | | |
1681 | | \begin{figure} |
1682 | | \begin{multicols}{3} |
1683 | | Thread $\alpha$ |
1684 | | \begin{cfa}[numbers=left, firstnumber=1] |
1685 | | acquire A |
1686 | | acquire A & B |
1687 | | wait A & B |
1688 | | release A & B |
1689 | | release A |
1690 | | \end{cfa} |
1691 | | \columnbreak |
1692 | | Thread $\gamma$ |
1693 | | \begin{cfa}[numbers=left, firstnumber=6, escapechar=|] |
1694 | | acquire A |
1695 | | acquire A & B |
1696 | | |\label{line:signal-ab}|signal A & B |
1697 | | |\label{line:release-ab}|release A & B |
1698 | | |\label{line:signal-a}|signal A |
1699 | | |\label{line:release-a}|release A |
1700 | | \end{cfa} |
1701 | | \columnbreak |
1702 | | Thread $\beta$ |
1703 | | \begin{cfa}[numbers=left, firstnumber=12, escapechar=|] |
1704 | | acquire A |
1705 | | wait A |
1706 | | |\label{line:release-aa}|release A |
1707 | | \end{cfa} |
1708 | | \end{multicols} |
1709 | | \begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}] |
1710 | | \end{cfa} |
1711 | | \begin{center} |
1712 | | \input{dependency} |
1713 | | \end{center} |
1714 | | \caption{Dependency graph of the statements in listing \ref{f:dependency}} |
1715 | | \label{fig:dependency} |
1716 | | \end{figure} |
1717 | | |
1718 | | In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion. |
1719 | | If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@ and then the waiter transfers back ownership of @A@ back to the signaller when it releases it, then the problem is solved (@B@ is no longer in use at this point). |
1720 | | Dynamically finding the correct order is therefore the second possible solution. |
1721 | | The problem is effectively resolving a dependency graph of ownership requirements. |
1722 | | Here even the simplest of code snippets requires two transfers and has a super-linear complexity. |
1723 | | This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions. |
1724 | | Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked; In the same way that multiple lock acquiring order can cause deadlocks. |
1725 | | \begin{figure} |
1726 | | \begin{multicols}{2} |
1727 | | \begin{cfa} |
1728 | | acquire A |
1729 | | acquire B |
1730 | | acquire C |
1731 | | wait A & B & C |
1732 | | release C |
1733 | | release B |
1734 | | release A |
1735 | | \end{cfa} |
1736 | | |
1737 | | \columnbreak |
1738 | | |
1739 | | \begin{cfa} |
1740 | | acquire A |
1741 | | acquire B |
1742 | | acquire C |
1743 | | signal A & B & C |
1744 | | release C |
1745 | | release B |
1746 | | release A |
1747 | | \end{cfa} |
1748 | | \end{multicols} |
1749 | | \begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}] |
1750 | | \end{cfa} |
1751 | | \end{figure} |
1752 | | |
1753 | | Given the three threads example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (\eg $\alpha1$ must happen before $\alpha2$). |
1754 | | The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold. |
1755 | | Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one. |
1756 | | |
1757 | | \subsubsection{Partial Signalling} \label{partial-sig} |
1758 | | \end{comment} |
1759 | | |
1760 | | Finally, the solution that is chosen for \CFA is to use partial signalling. |
Again using the example in Figure~\ref{f:BargingPrevention}, the partial-signalling solution transfers ownership of monitor @m2@ to the waiter when the signaller releases it at the end of the inner @mutex@ statement, but does not wake the waiting thread since it is still using monitor @m1@.
Only when the signaller releases @m1@ at the end of the outer @mutex@ statement does it actually wake up the waiting thread.
1763 | | This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. |
This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen.
1765 | | Furthermore, after being fully implemented, this solution does not appear to have any significant downsides. |
1766 | | |
Using partial signalling, more complex scenarios also resolve naturally: each time a thread releases a monitor whose ownership has been signalled away, the monitor transfers immediately to the waiting thread, but the waiter only wakes once it has received ownership of \emph{all} the monitors it is waiting on.
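
As a sketch in the pseudo-code style used earlier, the following trace shows both actions for the signalling thread of Figure~\ref{f:BargingPrevention}, where the waiter blocked in @wait( c )@ needs @m1@ and @m2@:
\begin{cfa}
// signalling thread
acquire m1
acquire m1 & m2
signal c		// mark waiter as signalled
release m2	// transfer m2 to waiter; waiter stays blocked, still needs m1
release m1	// transfer m1 to waiter, which now wakes up
\end{cfa}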
1773 | | |
1774 | | |
\subsection{External Scheduling}
| 1988 | Threads calling excluded functions block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor. |
| 1989 | Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@. |
| 1990 | The writer does a similar action for each reader or writer using the resource. |
Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.
External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on conditions in the monitor.
The mechanism can be expressed in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels.
| 1994 | While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features. |
| 1995 | % Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor functions (see Section~\ref{s:Multi-MonitorScheduling}). |
| 1996 | |
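Although Figure~\ref{f:RWExt} is not reproduced here, the following sketch captures the structure just described (field names are illustrative):
\begin{cfa}
monitor ReadersWriter { int rcnt, wcnt; };	// readers / writer using resource
void ?{}( ReadersWriter & rw ) { rw.rcnt = rw.wcnt = 0; }
void EndRead( ReadersWriter & mutex rw ) with( rw ) { rcnt -= 1; }
void EndWrite( ReadersWriter & mutex rw ) with( rw ) { wcnt = 0; }
void StartRead( ReadersWriter & mutex rw ) with( rw ) {
	if ( wcnt > 0 ) waitfor( EndWrite, rw );	// writer active ? wait for it to finish
	rcnt += 1;
}
void StartWrite( ReadersWriter & mutex rw ) with( rw ) {
	if ( wcnt > 0 ) waitfor( EndWrite, rw );	// writer active ? wait for it to finish
	else while ( rcnt > 0 ) waitfor( EndRead, rw );	// drain active readers
	wcnt = 1;
}
\end{cfa}
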
| 1997 | Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling. |
| 1998 | The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers. |
| 1999 | A thread blocks until an appropriate partner arrives. |
| 2000 | The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property. |
| 2001 | For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner. |
For signal-block scheduling, the implicit urgent queue replaces the explicit @exchange@ condition, and @signal_block@ puts the finding thread on urgent and unblocks the matcher.
| 2003 | The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable; |
| 2004 | as well, an arriving thread may not find a partner and must wait, which requires a condition variable, and condition variables imply internal scheduling. |
| 2005 | Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number. |
| 2006 | Putting loops around the @wait@s does not correct the problem; |
| 2007 | the simple solution must be restructured to account for barging. |
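
A sketch of the girl-side interface under signal scheduling conveys the structure (the boy side is symmetric; the emptiness test @is_empty@ and the field names are assumptions for illustration):
\begin{cfa}
enum { CCodes = 20 };	// number of compatibility codes
monitor DatingService {
	condition girls[CCodes], boys[CCodes], exchange;
	int girlPhNo, boyPhNo;	// communication variables
};
int girl( DatingService & mutex ds, int phNo, int ccode ) with( ds ) {
	if ( is_empty( boys[ccode] ) ) {	// no compatible boy waiting ?
		wait( girls[ccode] );	// wait for a matching boy
		girlPhNo = phNo;	// post phone number
		signal( exchange );	// unblock boy waiting for the exchange
	} else {
		girlPhNo = phNo;	// post phone number
		signal( boys[ccode] );	// unblock matching boy
		wait( exchange );	// wait for boy to complete the exchange
	}
	return boyPhNo;
}
\end{cfa}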
1922 | | |
External scheduling is more constrained and explicit than internal scheduling, which helps programmers reduce the nondeterministic nature of concurrency.
Note that while other languages often use @accept@/@select@ as the core external-scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket APIs.
For a semaphore-like @P@ routine using internal scheduling, a call to @wait@ only guarantees that @V@ is the last routine to access the monitor, allowing a third routine, say @isInUse()@, to acquire mutual exclusion several times while @P@ is waiting.
External scheduling, on the other hand, guarantees that while @P@ is waiting, no routine other than @V@ can acquire the monitor.
Two challenges specific to \CFA arise when adding external scheduling: loose object definitions and multiple-monitor routines.
1933 | | |
1934 | | % ====================================================================== |
1935 | | % ====================================================================== |
1936 | | \subsection{Loose Object Definitions} |
1937 | | % ====================================================================== |
1938 | | % ====================================================================== |
1939 | | In \uC, a monitor class declaration includes an exhaustive list of monitor operations. |
1940 | | Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: |
1941 | | |
1942 | | \begin{cfa} |
1943 | | monitor A {}; |
1944 | | |
1945 | | void f(A & mutex a); |
1946 | | void g(A & mutex a) { |
1947 | | waitfor(f); // Obvious which f() to wait for |
1948 | | } |
1949 | | |
void f(A & mutex a, int); // New, different f added in scope
1951 | | void h(A & mutex a) { |
1952 | | waitfor(f); // Less obvious which f() to wait for |
1953 | | } |
1954 | | \end{cfa} |
1955 | | |
1956 | | Furthermore, external scheduling is an example where implementation constraints become visible from the interface. |
Here is the pseudo-code for the entering phase of a monitor:
1958 | | \begin{center} |
1959 | | \begin{tabular}{l} |
1960 | | \begin{cfa} |
1961 | | if monitor is free |
1962 | | enter |
1963 | | elif already own the monitor |
1964 | | continue |
1965 | | elif monitor accepts me |
1966 | | enter |
1967 | | else |
1968 | | block |
1969 | | \end{cfa} |
1970 | | \end{tabular} |
1971 | | \end{center} |
1972 | | For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions. |
1973 | | However, a fast check for @monitor accepts me@ is much harder to implement depending on the constraints put on the monitors. |
1974 | | Indeed, monitors are often expressed as an entry queue and some acceptor queue as in Figure~\ref{fig:ClassicalMonitor}. |
For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@.
| 2102 | To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@. |
@wait@ cannot statically verify that the released monitors are the acquired mutex-parameters without disallowing separately compiled helper functions calling @wait@.
| 2104 | While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric. |
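For example, a waiter may use either form; this sketch reuses the monitors and condition from the surrounding discussion:
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( e ); ...	// implicit: wait( e, m1, m2 ), release both monitors
	... wait( e, m1 ); ...	// explicit: release only m1, keep m2 acquired
}
\end{cfa}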
| 2105 | Finally, a signaller, |
| 2106 | \begin{cfa} |
| 2107 | void baz( M & mutex m1, M & mutex m2 ) { |
| 2108 | ... signal( e ); ... |
| 2109 | } |
| 2110 | \end{cfa} |
| 2111 | must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging. |
| 2112 | |
| 2113 | Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@. |
| 2114 | To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@. |
| 2115 | @waitfor@ does statically verify the monitor types passed are the same as the acquired mutex-parameters of the given function or function pointer, hence the function (pointer) prototype must be accessible. |
| 2116 | % When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted. |
| 2117 | % The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call. |
| 2118 | Overloaded functions can be disambiguated using a cast |
| 2119 | \begin{cfa} |
| 2120 | void rtn( M & mutex m ); |
| 2121 | `int` rtn( M & mutex m ); |
| 2122 | waitfor( (`int` (*)( M & mutex ))rtn, m ); |
| 2123 | \end{cfa} |
| 2124 | |
| 2125 | The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock. |
| 2126 | \begin{cfa} |
void foo( M & mutex m1, M & mutex m2 ) {
	... wait( `e, m1` ); ...	$\C{// release m1, keeping m2 acquired}$
}
void bar( M & mutex m1, M & mutex m2 ) {	$\C{// must acquire m1 and m2}$
	... signal( `e` ); ...
}
\end{cfa}
| 2132 | The @wait@ only releases @m1@ so the signalling thread cannot acquire @m1@ and @m2@ to enter @bar@ and @signal@ the condition. |
| 2133 | While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitors, not being perfectly composable. |
| 2134 | |
| 2135 | |
| 2136 | |
| 2137 | \subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}} |
| 2138 | |
| 2139 | Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes. |
| 2140 | For a @waitfor@ clause to be executed, its @when@ must be true and an outstanding call to its corresponding member(s) must exist. |
| 2141 | The \emph{conditional-expression} of a @when@ may call a function, but the function must not block or context switch. |
| 2142 | If there are multiple acceptable mutex calls, selection occurs top-to-bottom (prioritized) among the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept nondeterministically for this case, \eg Go \lstinline[morekeywords=select]@select@. |
| 2143 | If some accept guards are true and there are no outstanding calls to these members, the acceptor is blocked until a call to one of these members is made. |
| 2144 | If there is a @timeout@ clause, it provides an upper bound on waiting. |
| 2145 | If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead. |
| 2146 | Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking. |
If both @timeout@ and @else@ clauses are present, the @else@ must be conditional, or the @timeout@ is never triggered.
There is also a traditional future wait-queue (not shown), \eg Microsoft's @WaitForMultipleObjects@, to wait for a specified number of future elements in the queue.
// Correct : block only if b == true; if b == false, don't even make the call
2184 | | when(b) waitfor(f1, a); |
2185 | | |
// Correct : block only if b == true; if b == false, make a non-blocking call
2187 | | waitfor(f1, a); or when(!b) else; |
2188 | | |
// Correct : block only if t > 1
2190 | | waitfor(f1, a); or when(t > 1) timeout(t); or else; |
2191 | | |
2192 | | // Incorrect : timeout clause is dead code |
2193 | | waitfor(f1, a); or timeout(t); or else; |
2194 | | |
2195 | | // Incorrect : order must be waitfor [or waitfor... [or timeout] [or else]] |
2196 | | timeout(t); or waitfor(f1, a); or else; |
2197 | | } |
2198 | | \end{cfa} |
\caption{Correct and incorrect uses of the or, else, and timeout clauses around a waitfor statement}
2200 | | \label{f:waitfor2} |
2201 | | \end{figure} |
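As an illustration of the prioritized selection, the following is a minimal sketch for a hypothetical bounded buffer (@buffer@, @insert@, @remove@, @count@, and @size@ are assumed names); because selection is top-to-bottom, producers are preferred when both guards are true:
\begin{cfa}
// inside a mutex member of the hypothetical buffer monitor
when(count != size) waitfor(insert, buffer);	// accept a producer while space remains
or when(count != 0) waitfor(remove, buffer);	// otherwise accept a consumer
\end{cfa}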
2202 | | |
2203 | | % ====================================================================== |
2204 | | % ====================================================================== |
2205 | | \subsection{Waiting For The Destructor} |
2206 | | % ====================================================================== |
2207 | | % ====================================================================== |
2208 | | An interesting use for the @waitfor@ statement is destructor semantics. |
2209 | | Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}). |
2210 | | However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour. |
2211 | | The simplest approach is to disallow @waitfor@ on a destructor. |
However, a more expressive approach is to flip the ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current @mutex@ routine, similarly to how a condition is signalled.
2213 | | \begin{figure} |
\begin{cfa}[caption={Example of an executor which executes actions in series until the destructor is called.},label={f:dtor-order}]
monitor Executor {};
struct Action;

void ^?{} (Executor & mutex this);
void execute(Executor & mutex this, const Action & );
void run (Executor & mutex this) {
2221 | | while(true) { |
2222 | | waitfor(execute, this); |
2223 | | or waitfor(^?{} , this) { |
2224 | | break; |
2225 | | } |
2226 | | } |
2227 | | } |
2228 | | \end{cfa} |
2229 | | \end{figure} |
Listing \ref{f:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop.
2231 | | Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction. |
2232 | | |
2233 | | |
2234 | | % ###### # ###### # # # ####### # ### ##### # # |
2235 | | % # # # # # # # # # # # # # # # ## ## |
2236 | | % # # # # # # # # # # # # # # # # # # |
2237 | | % ###### # # ###### # # # # ##### # # ##### # # # |
2238 | | % # ####### # # ####### # # # # # # # # |
2239 | | % # # # # # # # # # # # # # # # # |
2240 | | % # # # # # # # ####### ####### ####### ####### ### ##### # # |
2241 | | \section{Parallelism} |
2242 | | Historically, computer performance was about processor speeds and instruction counts. |
2243 | | However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. |
2244 | | In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism. |
2245 | | Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization. |
The lowest-level approach to parallelism is to use \textbf{kthread} in combination with operations like @fork@, @join@, \etc.
However, since these have significant costs and limitations, \textbf{kthread} are now mostly used as an implementation tool rather than a user-oriented one.
2248 | | There are several alternatives to solve these issues that all have strengths and weaknesses. |
While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads.
2250 | | |
2251 | | \section{Paradigms} |
2252 | | \subsection{User-Level Threads} |
2253 | | A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}. |
2254 | | These threads offer most of the same features that the operating system already provides but can be used on a much larger scale. |
2255 | | This approach is the most powerful solution as it allows all the features of multithreading, while removing several of the more expensive costs of kernel threads. |
2256 | | The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. |
2257 | | These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself. |
2258 | | |
2259 | | Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}. |
2260 | | |
2261 | | \subsection{Fibers : User-Level Threads Without Preemption} \label{fibers} |
2262 | | A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}. |
However, \textbf{fiber} do not present meaningful semantic differences from \textbf{uthread}.
2264 | | The significant difference between \textbf{uthread} and \textbf{fiber} is the lack of \textbf{preemption} in the latter. |
2265 | | Advocates of \textbf{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \textbf{uthread} and \textbf{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. |
Therefore, this paper largely ignores fibers.
2267 | | |
An example of a language that uses fibers is Go~\cite{Go}.
2269 | | |
2270 | | \subsection{Jobs and Thread Pools} |
2271 | | An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}. |
2272 | | Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface. |
2273 | | In \textbf{pool} based systems, users express parallelism as units of work, called jobs, and a dependency graph (either explicit or implicit) that ties them together. |
This approach means users need not worry about concurrency but significantly limits the interaction that can occur among jobs.
Indeed, any \textbf{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably.
It can be argued that a solution to this problem is to use more workers than available cores.
However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.
2278 | | |
2279 | | The gold standard of this implementation is Intel's TBB library~\cite{TBB}. |
2280 | | |
2281 | | \subsection{Paradigm Performance} |
2282 | | While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. |
2283 | | Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. |
Having a large number of mostly independent units of work to execute almost guarantees equivalent performance across paradigms, with the \textbf{pool}-based system having the best efficiency thanks to the lower memory overhead (\ie no thread stack per job).
2285 | | However, interactions among jobs can easily exacerbate contention. |
2286 | | User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. |
Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.
2288 | | |
2289 | | \section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel} |
A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation.
\textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}.
It is important that \textbf{kthread} belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues.
2292 | | A \textbf{cfacluster} also offers a pluggable scheduler that can optimize the workload generated by the \textbf{uthread}. |
2293 | | |
2294 | | \textbf{cfacluster} have not been fully implemented in the context of this paper. |
2295 | | Currently \CFA only supports one \textbf{cfacluster}, the initial one. |
2296 | | |
2297 | | \subsection{Future Work: Machine Setup}\label{machine} |
2298 | | While this was not done in the context of this paper, another important aspect of clusters is affinity. |
2299 | | While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups. |
2300 | | For example, a system using \textbf{numa} configurations may benefit from users being able to tie clusters and/or kernel threads to certain CPU cores. |
2301 | | OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity. |
2302 | | |
2303 | | \subsection{Paradigms}\label{cfaparadigms} |
2304 | | Given these building blocks, it is possible to reproduce all three of the popular paradigms. |
2305 | | Indeed, \textbf{uthread} is the default paradigm in \CFA. |
2306 | | However, disabling \textbf{preemption} on a cluster means threads effectively become fibers. |
Since several \textbf{cfacluster} with different scheduling policies can coexist in the same application, this allows \textbf{fiber} and \textbf{uthread} to coexist in the runtime of an application.
2308 | | Finally, it is possible to build executors for thread pools from \textbf{uthread} or \textbf{fiber}, which includes specialized jobs like actors~\cite{Actors}. |
2309 | | |
2310 | | |
2311 | | |
2312 | | \section{Behind the Scenes} |
2313 | | There are several challenges specific to \CFA when implementing concurrency. |
2314 | | These challenges are a direct result of bulk acquire and loose object definitions. |
2315 | | These two constraints are the root cause of most design decisions in the implementation. |
2316 | | Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs. |
This approach avoids the chicken-and-egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the memory allocator.
2318 | | This extra goal means that memory management is a constant concern in the design of the system. |
2319 | | |
2320 | | The main memory concern for concurrency is queues. |
2321 | | All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation. |
Since several concurrency operations can use an unbounded amount of memory (depending on bulk acquire), statically defining information in the intrusive fields of threads is insufficient.
The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers.
Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly through variable-length arrays.
2324 | | Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array. |
The threads and the condition both have a fixed amount of memory, while @mutex@ routines and blocking calls allow for an unbounded amount, within the stack size.
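As an illustration, the following is a minimal sketch of an intrusive queue node; the field names are hypothetical, not the actual runtime layout:
\begin{cfa}
// every thread descriptor carries pre-allocated link fields,
// so parking it on a queue never allocates memory
struct thread_desc {
	// ... thread state ...
	struct thread_desc * next;	// intrusive link for the ready/entry queue
};
\end{cfa}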
2326 | | |
Note that, since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered solved problems and therefore not discussed.
2328 | | |
2329 | | % ====================================================================== |
2330 | | % ====================================================================== |
2331 | | \section{Mutex Routines} |
2332 | | % ====================================================================== |
2333 | | % ====================================================================== |
2334 | | |
2335 | | The first step towards the monitor implementation is simple @mutex@ routines. |
2336 | | In the single monitor case, mutual-exclusion is done using the entry/exit procedure in listing \ref{f:entry1}. |
2337 | | The entry/exit procedures do not have to be extended to support multiple monitors. |
Indeed, it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}.
2339 | | In \CFA, ordering of monitor acquisition relies on memory ordering. |
2340 | | This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour. |
2341 | | When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values. |
This array persists for the entire duration of the mutual exclusion and its ordering is reused extensively.
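The following is a minimal sketch of this entry code; the helpers @get_monitor@, @sort_by_address@, @enter@, and @leave@ are hypothetical names used only to illustrate the mechanism:
\begin{cfa}
void rtn( M & mutex m1, M & mutex m2 ) {
	// conceptually, the generated entry code is:
	monitor_desc * mons[] = { get_monitor( m1 ), get_monitor( m2 ) };	// array on the call stack
	sort_by_address( mons, 2 );	// global order = increasing address
	for ( int i = 0; i < 2; i += 1 ) enter( mons[i] );	// acquire in order, preventing deadlock
	// ... body of the mutex routine ...
	for ( int i = 1; i >= 0; i -= 1 ) leave( mons[i] );
}
\end{cfa}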
2343 | | \begin{figure} |
2344 | | \begin{multicols}{2} |
2345 | | Entry |
2346 | | \begin{cfa} |
2347 | | if monitor is free |
2348 | | enter |
2349 | | elif already own the monitor |
2350 | | continue |
2351 | | else |
2352 | | block |
	increment recursion
2354 | | \end{cfa} |
2355 | | \columnbreak |
2356 | | Exit |
2357 | | \begin{cfa} |
2358 | | decrement recursion |
2359 | | if recursion == 0 |
2360 | | if entry queue not empty |
2361 | | wake-up thread |
2362 | | \end{cfa} |
2363 | | \end{multicols} |
2364 | | \begin{cfa}[caption={Initial entry and exit routine for monitors},label={f:entry1}] |
2365 | | \end{cfa} |
2366 | | \end{figure} |
2367 | | |
\subsection{Details: Interaction with Polymorphism}
2369 | | Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. |
2370 | | However, it is shown that entry-point locking solves most of the issues. |
2371 | | |
2372 | | First of all, interaction between @otype@ polymorphism (see Section~\ref{s:ParametricPolymorphism}) and monitors is impossible since monitors do not support copying. |
2373 | | Therefore, the main question is how to support @dtype@ polymorphism. |
2374 | | It is important to present the difference between the two acquiring options: \textbf{callsite-locking} and entry-point locking, \ie acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call. |
For example, Table~\ref{tbl:locking-site} shows a mutex call and the equivalent call-site and entry-point locking forms.
| 2497 | |
| 2498 | |
| 2499 | \subsection{Execution Properties} |
| 2500 | |
| 2501 | Table~\ref{t:ObjectPropertyComposition} shows how the \CFA high-level constructs cover 3 fundamental execution properties: thread, stateful function, and mutual exclusion. |
| 2502 | Case 1 is a basic object, with none of the new execution properties. |
| 2503 | Case 2 allows @mutex@ calls to Case 1 to protect shared data. |
| 2504 | Case 3 allows stateful functions to suspend/resume but restricts operations because the state is stackless. |
| 2505 | Case 4 allows @mutex@ calls to Case 3 to protect shared data. |
| 2506 | Cases 5 and 6 are the same as 3 and 4 without restriction because the state is stackful. |
| 2507 | Cases 7 and 8 are rejected because a thread cannot execute without a stackful state in a preemptive environment when context switching from the signal handler. |
| 2508 | Cases 9 and 10 have a stackful thread without and with @mutex@ calls. |
| 2509 | For situations where threads do not require direct communication, case 9 provides faster creation/destruction by eliminating @mutex@ setup. |
| 2510 | |
\begin{table}
\begin{center}
\begin{tabular}{c|c|c}
Mutex call & Call-site locking & Entry-point locking \\
\hline
\begin{cfa}[tabsize=3]
2383 | | void foo(monitor& mutex a){ |
2384 | | |
2385 | | // Do Work |
2386 | | //... |
2387 | | |
2388 | | } |
2389 | | |
2390 | | void main() { |
2391 | | monitor a; |
2392 | | |
2393 | | foo(a); |
2394 | | |
2395 | | } |
2396 | | \end{cfa} & \begin{cfa}[tabsize=3] |
2397 | | foo(& a) { |
2398 | | |
2399 | | // Do Work |
2400 | | //... |
2401 | | |
2402 | | } |
2403 | | |
2404 | | main() { |
2405 | | monitor a; |
2406 | | acquire(a); |
2407 | | foo(a); |
2408 | | release(a); |
2409 | | } |
2410 | | \end{cfa} & \begin{cfa}[tabsize=3] |
2411 | | foo(& a) { |
2412 | | acquire(a); |
2413 | | // Do Work |
2414 | | //... |
2415 | | release(a); |
2416 | | } |
2417 | | |
2418 | | main() { |
2419 | | monitor a; |
2420 | | |
2421 | | foo(a); |
2422 | | |
2423 | | } |
2424 | | \end{cfa} |
2425 | | \end{tabular} |
2426 | | \end{center} |
2427 | | \caption{Call-site vs entry-point locking for mutex calls} |
2428 | | \label{tbl:locking-site} |
2429 | | \end{table} |
2430 | | |
2431 | | Note the @mutex@ keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, \eg: |
2432 | | \begin{cfa} |
2433 | | // Incorrect: T may not be monitor |
2434 | | forall(dtype T) |
2435 | | void foo(T * mutex t); |
2436 | | |
2437 | | // Correct: this routine only works on monitors (any monitor) |
2438 | | forall(dtype T | is_monitor(T)) |
void bar(T * mutex t);
2440 | | \end{cfa} |
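A plausible definition of this trait, sketched from the requirements the runtime places on monitors (the exact member set is an assumption), is:
\begin{cfa}
trait is_monitor(dtype T) {
	monitor_desc * get_monitor( T & );	// access the underlying monitor descriptor
	void ^?{}( T & mutex );	// monitors must have a mutex destructor
};
\end{cfa}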
2441 | | |
Both entry-point locking and \textbf{callsite-locking} are feasible implementations.
2443 | | The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction. |
2444 | | It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, \ie the routine body. |
2445 | | For example, the monitor call can appear in the middle of an expression. |
2446 | | Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times but there is only one entry point for many call sites. |
2447 | | |
2448 | | % ====================================================================== |
2449 | | % ====================================================================== |
2450 | | \section{Threading} \label{impl:thread} |
2451 | | % ====================================================================== |
2452 | | % ====================================================================== |
2453 | | |
Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with respect to concurrency.
Each component of the picture is explained in detail in the following sections.
2456 | | |
2457 | | \begin{figure} |
2458 | | \begin{center} |
2459 | | {\resizebox{\textwidth}{!}{\input{system.pstex_t}}} |
2460 | | \end{center} |
2461 | | \caption{Overview of the entire system} |
2462 | | \label{fig:system1} |
2463 | | \end{figure} |
2464 | | |
2465 | | \subsection{Processors} |
Parallelism in \CFA is built around using processors to specify how much parallelism is desired.
\CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA.
Indeed, any parallelism must go through operating-system libraries.
However, \textbf{uthread} are still the main source of concurrency; processors are simply the underlying source of parallelism.
Indeed, processor \textbf{kthread} simply fetch a \textbf{uthread} from the scheduler and run it; they are effectively executors for user threads.
2470 | | The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling. |
2471 | | Processors internally use coroutines to take advantage of the existing context-switching semantics. |
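For example, additional parallelism is obtained by simply declaring more processors; this is a minimal sketch, assuming the default constructor attaches a processor to the current cluster:
\begin{cfa}
{
	processor procs[2];	// create two more kernel threads to execute user threads
	// concurrent user threads can now run in parallel
}	// procs destroyed: parallelism scales back down
\end{cfa}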
2472 | | |
2473 | | \subsection{Stack Management} |
2474 | | One of the challenges of this system is to reduce the footprint as much as possible. |
2475 | | Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible. |
Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the \textbf{kthread} stack, effectively stealing the processor stack.
2477 | | The exception to this rule is the Main Processor, \ie the initial \textbf{kthread} that is given to any program. |
In order to respect C user expectations, the stack of the initial kernel thread, \ie the main stack of the program, which can grow very large, is used by the main user thread rather than the main processor.
2479 | | |
2480 | | \subsection{Context Switching} |
2481 | | As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. |
2482 | | To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call. |
2483 | | This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread. |
Note that the instruction pointer can be left untouched since the context-switch is always inside the same routine.
2485 | | Threads, however, do not context-switch between each other directly. |
2486 | | They context-switch to the scheduler. |
2487 | | This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operations happen. |
2488 | | Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack. |
2489 | | The alternative 1-step context-switch uses the stack of the ``from'' thread to schedule and then context-switches directly to the ``to'' thread. |
2490 | | However, the performance of the 2-step context-switch is still superior to a @pthread_yield@ (see section \ref{results}). |
Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft @SwitchToFiber@~\cite{switchToWindows} routine).
2492 | | This option is not currently present in \CFA, but the changes required to add it are strictly additive. |
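The following is a minimal sketch of the 2-step switch; @CtxSwitch@, @this_thread@, and @this_processor@ are hypothetical names for illustration only:
\begin{cfa}
void yield() {
	// step 1: save callee-saved registers and switch to the processor's scheduler stack
	CtxSwitch( &this_thread->context, &this_processor->sched_context );
	// the scheduler picks the next ready thread and performs step 2:
	//   CtxSwitch( &this_processor->sched_context, &next_thread->context );
	// execution resumes here when this thread is eventually rescheduled
}
\end{cfa}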
2493 | | |
2494 | | \subsection{Preemption} \label{preemption} |
2495 | | Finally, an important aspect for any complete threading system is preemption. |
2496 | | As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. |
2497 | | Indeed, preemption is desirable because it adds a degree of isolation among threads. |
In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces the programmer's burden.
2499 | | Obviously, preemption is not optimal for every workload. |
However, any preemptive system can become a cooperative system by making the time slices extremely large.
2501 | | Therefore, \CFA uses a preemptive threading system. |
2502 | | |
Preemption in \CFA\footnote{Note that the implementation of preemption is strongly tied to the underlying threading system.
For this reason, only the Linux implementation is covered; \CFA does not run on Windows at the time of writing.} is based on kernel timers, which are used to run a discrete-event simulation.
2505 | | Every processor keeps track of the current time and registers an expiration time with the preemption system. |
When the preemption system receives a new expiration time, it inserts the time in sorted order and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer.
2507 | | These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread. |
2508 | | This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked, \ie: |
2509 | | \begin{quote} |
2510 | | A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. |
2511 | | If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal. |
2512 | | SIGNAL(7) - Linux Programmer's Manual |
2513 | | \end{quote} |
2514 | | For the sake of simplicity, and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one. |
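A minimal sketch of this masking, using the standard POSIX calls (not the exact runtime code), is:
\begin{cfa}
sigset_t mask;
sigemptyset( &mask );
sigaddset( &mask, SIGALRM );
// in every processor kernel-thread: block SIGALRM so it is never delivered there
pthread_sigmask( SIG_BLOCK, &mask, NULL );
// in the dedicated alarm kernel-thread: wait synchronously for SIGALRM
siginfo_t info;
sigwaitinfo( &mask, &info );
\end{cfa}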
2515 | | |
2516 | | Now because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread. |
2517 | | Hence, involuntary context-switching is done by sending signal {\tt SIGUSR1} to the corresponding proces\-sor and having the thread yield from inside the signal handler. |
2518 | | This approach effectively context-switches away from the signal handler back to the kernel and the signal handler frame is eventually unwound when the thread is scheduled again. |
2519 | | As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). |
2520 | | It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. |
This behaviour is only a problem if kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' routines from other routines.}.
2522 | | However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks. |
2523 | | For this reason, the alarm thread is in a tight loop around a system call to @sigwaitinfo@, requiring very little CPU time for preemption. |
2524 | | One final detail about the alarm thread is how to wake it when additional communication is required (\eg on thread termination). |
This unblocking is also done using {\tt SIGALRM}, but sent through @pthread_sigqueue@.
Indeed, @sigwaitinfo@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel.
2527 | | |
2528 | | \subsection{Scheduler} |
Finally, an aspect not yet mentioned is the scheduling algorithm.
2530 | | Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling. |
Further discussion on scheduling is presented in section \ref{futur:sched}.
2532 | | |
2533 | | % ====================================================================== |
2534 | | % ====================================================================== |
2535 | | \section{Internal Scheduling} \label{impl:intsched} |
2536 | | % ====================================================================== |
2537 | | % ====================================================================== |
2538 | | The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience): |
2539 | | |
2540 | | \begin{figure} |
2541 | | \begin{center} |
2542 | | {\resizebox{0.4\textwidth}{!}{\input{monitor}}} |
2543 | | \end{center} |
2544 | | \caption{Traditional illustration of a monitor} |
2545 | | \end{figure} |
2546 | | |
2547 | | This picture has several components, the two most important being the entry queue and the AS-stack. |
2548 | | The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor/signaller (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next. |
2549 | | |
2550 | | For \CFA, this picture does not have support for blocking multiple monitors on a single condition. |
2551 | | To support bulk acquire two changes to this picture are required. |
2552 | | First, it is no longer helpful to attach the condition to \emph{a single} monitor. |
Secondly, the thread waiting on the condition has to be separated across multiple monitors, as seen in figure \ref{fig:monitor_cfa}.
2554 | | |
2555 | | \begin{figure} |
2556 | | \begin{center} |
2557 | | {\resizebox{0.8\textwidth}{!}{\input{int_monitor}}} |
2558 | | \end{center} |
2559 | | \caption{Illustration of \CFA Monitor} |
2560 | | \label{fig:monitor_cfa} |
2561 | | \end{figure} |
2562 | | |
This picture and the proper entry and leave algorithms (see listing \ref{f:entry2}) are the fundamental implementation of internal scheduling.
2564 | | Note that when a thread is moved from the condition to the AS-stack, it is conceptually split into N pieces, where N is the number of monitors specified in the parameter list. |
The thread is woken up when all the pieces have been popped from the AS-stacks and made active.
2566 | | In this picture, the threads are split into halves but this is only because there are two monitors. |
2567 | | For a specific signalling operation every monitor needs a piece of thread on its AS-stack. |
2568 | | |
2569 | | \begin{figure} |
2570 | | \begin{multicols}{2} |
2571 | | Entry |
2572 | | \begin{cfa} |
2573 | | if monitor is free |
2574 | | enter |
2575 | | elif already own the monitor |
2576 | | continue |
2577 | | else |
2578 | | block |
2579 | | increment recursion |
2580 | | |
2581 | | \end{cfa} |
2582 | | \columnbreak |
2583 | | Exit |
2584 | | \begin{cfa} |
2585 | | decrement recursion |
2586 | | if recursion == 0 |
2587 | | if signal_stack not empty |
2588 | | set_owner to thread |
2589 | | if all monitors ready |
2590 | | wake-up thread |
2591 | | |
2592 | | if entry queue not empty |
2593 | | wake-up thread |
2594 | | \end{cfa} |
2595 | | \end{multicols} |
2596 | | \begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling},label={f:entry2}] |
2597 | | \end{cfa} |
2598 | | \end{figure} |
2599 | | |
2600 | | The solution discussed in \ref{s:InternalScheduling} can be seen in the exit routine of listing \ref{f:entry2}. |
2601 | | Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. |
2602 | | This solution is deadlock safe as well as preventing any potential barging. |
2603 | | The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the @wait@ and @signal_block@ routines. |
2604 | | |
2605 | | \begin{figure} |
2606 | | \begin{center} |
2607 | | {\resizebox{0.8\textwidth}{!}{\input{monitor_structs.pstex_t}}} |
2608 | | \end{center} |
2609 | | \caption{Data structures involved in internal/external scheduling} |
2610 | | \label{fig:structs} |
2611 | | \end{figure} |
2612 | | |
2613 | | Figure \ref{fig:structs} shows a high-level representation of these data structures. |
The main idea behind them is that a thread cannot contain an arbitrary number of intrusive ``next'' pointers for linking onto monitors.
2615 | | The @condition node@ is the data structure that is queued onto a condition variable and, when signalled, the condition queue is popped and each @condition criterion@ is moved to the AS-stack. |
2616 | | Once all the criteria have been popped from their respective AS-stacks, the thread is woken up, which is what is shown in listing \ref{f:entry2}. |
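The following is a minimal sketch of these data structures; the field names are hypothetical and only illustrate the relationships described above:
\begin{cfa}
struct condition_node;
struct condition_criterion {
	monitor_desc * target;	// monitor this criterion is queued onto
	struct condition_node * owner;	// waiting thread's node this criterion belongs to
	bool ready;	// set once popped from the target's AS-stack
};
struct condition_node {
	thread_desc * waiting_thread;	// thread blocked on the condition
	struct condition_criterion * criteria;	// one criterion per monitor, stack-allocated
	int criterion_count;	// number of monitors involved in the wait
};
\end{cfa}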
2617 | | |
2618 | | % ====================================================================== |
2619 | | % ====================================================================== |
2620 | | \section{External Scheduling} |
2621 | | % ====================================================================== |
2622 | | % ====================================================================== |
2623 | | Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. |
2624 | | For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (\ie no signal statement uses multiple conditions). |
2625 | | However, in the case of external scheduling, there is no equivalent object which is associated with @waitfor@ statements. |
This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired.
These monitors are the only objects that have sufficient lifetime and are available on both sides of the @waitfor@ statement.
2628 | | This requires an algorithm to choose which monitor holds the relevant queue. |
2629 | | It is also important that said algorithm be independent of the order in which users list parameters. |
2630 | | The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue. |
2631 | | This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. |
2632 | | |
2633 | | This algorithm choice has two consequences: |
2634 | | \begin{itemize} |
2635 | | \item The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue. |
2636 | | These queues need to contain a set of monitors for each of the waiting threads. |
2637 | | Therefore, another thread whose set contains the same lowest address monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing. |
2638 | | \item The queue of the lowest priority monitor is both required and potentially unused. |
2639 | | Indeed, since it is not known at compile time which monitor is the monitor which has the lowest address, every monitor needs to have the correct queues even though it is possible that some queues go unused for the entire duration of the program, for example if a monitor is only used in a specific pair. |
2640 | | \end{itemize} |
2641 | | Therefore, the following modifications need to be made to support external scheduling: |
2642 | | \begin{itemize} |
2643 | | \item The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and using which set of monitors. |
2644 | | The @mutex@ routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information. |
2645 | | \item The monitors need to keep a mask of acceptable routines. |
2646 | | This mask contains for each acceptable routine, a routine pointer and an array of monitors to go with it. |
It also needs storage to keep track of which routine was accepted; a sketch of this mask follows the list.
2648 | | Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. |
2649 | | Note that if a thread has acquired two monitors but executes a @waitfor@ with only one monitor as a parameter, setting the mask of acceptable routines to both monitors will not cause any problems since the extra monitor will not change ownership regardless. |
2650 | | This becomes relevant when @when@ clauses affect the number of monitors passed to a @waitfor@ statement. |
2651 | | \item The entry/exit routines need to be updated as shown in listing \ref{f:entry3}. |
2652 | | \end{itemize} |
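The following is a minimal sketch of the acceptable-routine mask mentioned above; the field names are hypothetical:
\begin{cfa}
struct waitfor_mask {
	void (** routines)();	// acceptable mutex routines
	monitor_desc ** monitors;	// monitor set paired with each routine
	short * accepted;	// records the accepted routine, pointing into the waiting thread's stack
};
\end{cfa}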
2653 | | |
2654 | | \subsection{External Scheduling - Destructors} |
2655 | | Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. |
2656 | | This routine is needed because of the storage requirements of the call order inversion. |
2657 | | Indeed, when waiting for the destructors, storage is needed for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for. |
2658 | | For regular @waitfor@ statements, the call stack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed on to the AS-stack for later. |
The @waitfor@ semantics can then be adjusted correspondingly, as seen in listing \ref{f:entry-dtor}.
2660 | | |
2661 | | \begin{figure} |
2662 | | \begin{multicols}{2} |
2663 | | Entry |
2664 | | \begin{cfa} |
2665 | | if monitor is free |
2666 | | enter |
2667 | | elif already own the monitor |
2668 | | continue |
2669 | | elif matches waitfor mask |
2670 | | push criteria to AS-stack |
2671 | | continue |
2672 | | else |
2673 | | block |
2674 | | increment recursion |
2675 | | \end{cfa} |
2676 | | \columnbreak |
2677 | | Exit |
2678 | | \begin{cfa} |
2679 | | decrement recursion |
2680 | | if recursion == 0 |
2681 | | if signal_stack not empty |
2682 | | set_owner to thread |
2683 | | if all monitors ready |
2684 | | wake-up thread |
2685 | | endif |
2686 | | endif |
2687 | | |
2688 | | if entry queue not empty |
2689 | | wake-up thread |
2690 | | endif |
2691 | | \end{cfa} |
2692 | | \end{multicols} |
2693 | | \begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling and external scheduling},label={f:entry3}] |
2694 | | \end{cfa} |
2695 | | \end{figure} |
2696 | | |
2697 | | \begin{figure} |
2698 | | \begin{multicols}{2} |
2699 | | Destructor Entry |
2700 | | \begin{cfa} |
2701 | | if monitor is free |
2702 | | enter |
2703 | | elif already own the monitor |
2704 | | increment recursion |
2705 | | return |
2706 | | create wait context |
2707 | | if matches waitfor mask |
2708 | | reset mask |
2709 | | push self to AS-stack |
2710 | | baton pass |
2711 | | else |
2712 | | wait |
2713 | | increment recursion |
2714 | | \end{cfa} |
2715 | | \columnbreak |
2716 | | Waitfor |
2717 | | \begin{cfa} |
2718 | | if matching thread is already there |
2719 | | if found destructor |
2720 | | push destructor to AS-stack |
2721 | | unlock all monitors |
2722 | | else |
2723 | | push self to AS-stack |
2724 | | baton pass |
2725 | | endif |
2726 | | return |
2727 | | endif |
2728 | | if non-blocking |
2729 | | Unlock all monitors |
2730 | | Return |
2731 | | endif |
2732 | | |
2733 | | push self to AS-stack |
2734 | | set waitfor mask |
2735 | | block |
2736 | | return |
2737 | | \end{cfa} |
2738 | | \end{multicols} |
2739 | | \begin{cfa}[caption={Pseudo code for the \protect\lstinline|waitfor| routine and the \protect\lstinline|mutex| entry routine for destructors},label={f:entry-dtor}] |
2740 | | \end{cfa} |
2741 | | \end{figure} |
2742 | | |
2743 | | |
2744 | | % ====================================================================== |
2745 | | % ====================================================================== |
2746 | | \section{Putting It All Together} |
2747 | | % ====================================================================== |
2748 | | % ====================================================================== |
2749 | | |
2750 | | |
2751 | | \section{Threads As Monitors} |
As subtly alluded to in section \ref{threads}, @thread@s in \CFA are in fact monitors, which means that all monitor features are available when using threads.
For example, here is a very simple two-thread pipeline that could be used for a simulator of a game engine:
2754 | | \begin{figure} |
2755 | | \begin{cfa}[caption={Toy simulator using \protect\lstinline|thread|s and \protect\lstinline|monitor|s.},label={f:engine-v1}] |
// Simulation declaration
thread Simulator {} simulator;
Frame * simulate( Simulator & this );

// Visualization declaration
thread Renderer {} renderer;
void render( Renderer & this );
2763 | | |
2764 | | // Blocking call used as communication |
2765 | | void draw( Renderer & mutex this, Frame * frame ); |
2766 | | |
2767 | | // Simulation loop |
2768 | | void main( Simulator & this ) { |
2769 | | while( true ) { |
2770 | | Frame * frame = simulate( this ); |
2771 | | draw( renderer, frame ); |
2772 | | } |
2773 | | } |
2774 | | |
2775 | | // Rendering loop |
2776 | | void main( Renderer & this ) { |
2777 | | while( true ) { |
2778 | | waitfor( draw, this ); |
2779 | | render( this ); |
2780 | | } |
2781 | | } |
2782 | | \end{cfa} |
2783 | | \end{figure} |
2784 | | One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. |
2785 | | Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner: |
2786 | | \begin{figure} |
2787 | | \begin{cfa}[caption={Same toy simulator with proper termination condition.},label={f:engine-v2}] |
// Simulation declaration
thread Simulator {} simulator;
Frame * simulate( Simulator & this );

// Visualization declaration
thread Renderer {} renderer;
void render( Renderer & this );
2795 | | |
2796 | | // Blocking call used as communication |
2797 | | void draw( Renderer & mutex this, Frame * frame ); |
2798 | | |
2799 | | // Simulation loop |
2800 | | void main( Simulator & this ) { |
2801 | | while( true ) { |
2802 | | Frame * frame = simulate( this ); |
2803 | | draw( renderer, frame ); |
2804 | | |
2805 | | // Exit main loop after the last frame |
2806 | | if( frame->is_last ) break; |
2807 | | } |
2808 | | } |
2809 | | |
2810 | | // Rendering loop |
2811 | | void main( Renderer & this ) { |
2812 | | while( true ) { |
2813 | | waitfor( draw, this ); |
2814 | | or waitfor( ^?{}, this ) { |
2815 | | // Add an exit condition |
2816 | | break; |
2817 | | } |
2818 | | |
2819 | | render( this ); |
2820 | | } |
2821 | | } |
2822 | | |
2823 | | // Call destructor for simulator once simulator finishes |
2824 | | // Call destructor for renderer to signify shutdown |
2825 | | \end{cfa} |
2826 | | \end{figure} |
2827 | | |
2828 | | \section{Fibers \& Threads} |
2829 | | As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand. |
Currently, using fibers is done by adding the following line of code to the program:
2831 | | \begin{cfa} |
2832 | | unsigned int default_preemption() { |
2833 | | return 0; |
2834 | | } |
2835 | | \end{cfa} |
2836 | | This routine is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, \ie no preemption. |
However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread} in the same system, as in listing \ref{f:fiber-uthread}.
2838 | | \begin{figure} |
2839 | | \lstset{language=CFA,deletedelim=**[is][]{`}{`}} |
2840 | | \begin{cfa}[caption={Using fibers and \textbf{uthread} side-by-side in \CFA},label={f:fiber-uthread}] |
2841 | | // Cluster forward declaration |
2842 | | struct cluster; |
2843 | | |
2844 | | // Processor forward declaration |
2845 | | struct processor; |
2846 | | |
2847 | | // Construct clusters with a preemption rate |
2848 | | void ?{}(cluster& this, unsigned int rate); |
2849 | | // Construct processor and add it to cluster |
2850 | | void ?{}(processor& this, cluster& cluster); |
2851 | | // Construct thread and schedule it on cluster |
2852 | | void ?{}(thread& this, cluster& cluster); |
2853 | | |
2854 | | // Declare two clusters |
2855 | | cluster thread_cluster = { 10`ms }; // Preempt every 10 ms |
2856 | | cluster fibers_cluster = { 0 }; // Never preempt |
2857 | | |
2858 | | // Construct 4 processors |
processor processors[4] = {
	// 2 for the thread cluster
	thread_cluster,
	thread_cluster,
	// 2 for the fibers cluster
	fibers_cluster,
	fibers_cluster
};
2867 | | |
2868 | | // Declares thread |
2869 | | thread UThread {}; |
2870 | | void ?{}(UThread& this) { |
2871 | | // Construct underlying thread to automatically |
2872 | | // be scheduled on the thread cluster |
	(this){ thread_cluster };
2874 | | } |
2875 | | |
2876 | | void main(UThread & this); |
2877 | | |
2878 | | // Declares fibers |
2879 | | thread Fiber {}; |
2880 | | void ?{}(Fiber& this) { |
2881 | | // Construct underlying thread to automatically |
2882 | | // be scheduled on the fiber cluster |
	(this.__thread){ fibers_cluster };
2884 | | } |
2885 | | |
2886 | | void main(Fiber & this); |
2887 | | \end{cfa} |
2888 | | \end{figure} |
2889 | | |
2890 | | |
2891 | | % ====================================================================== |
2892 | | % ====================================================================== |
2893 | | \section{Performance Results} \label{results} |
2894 | | % ====================================================================== |
2895 | | % ====================================================================== |
2896 | | \section{Machine Setup} |
Table \ref{tab:machine} shows the characteristics of the machine used to run the benchmarks.
All tests were run on this machine.
2899 | | \begin{table} |
2900 | | \begin{center} |
2901 | | \begin{tabular}{| l | r | l | r |} |
2902 | | \hline |
2903 | | Architecture & x86\_64 & NUMA node(s) & 8 \\ |
2904 | | \hline |
2905 | | CPU op-mode(s) & 32-bit, 64-bit & Model name & AMD Opteron\texttrademark Processor 6380 \\ |
2906 | | \hline |
2907 | | Byte Order & Little Endian & CPU Freq & 2.5\si{\giga\hertz} \\ |
2908 | | \hline |
2909 | | CPU(s) & 64 & L1d cache & \SI{16}{\kibi\byte} \\ |
2910 | | \hline |
2911 | | Thread(s) per core & 2 & L1i cache & \SI{64}{\kibi\byte} \\ |
2912 | | \hline |
2913 | | Core(s) per socket & 8 & L2 cache & \SI{2048}{\kibi\byte} \\ |
2914 | | \hline |
2915 | | Socket(s) & 4 & L3 cache & \SI{6144}{\kibi\byte} \\ |
\hline
\end{tabular}
\end{center}
\caption{Characteristics of the machine used for the benchmarks}
\label{tab:machine}
\end{table}

\begin{comment}
\begin{figure}
| 2578 | \centering |
| 2579 | \begin{tabular}{@{}l|l@{}} |
| 2580 | \begin{cfa} |
| 2581 | struct Adder { |
| 2582 | int * row, cols; |
| 2583 | }; |
| 2584 | int operator()() { |
| 2585 | subtotal = 0; |
| 2586 | for ( int c = 0; c < cols; c += 1 ) |
| 2587 | subtotal += row[c]; |
| 2588 | return subtotal; |
| 2589 | } |
| 2590 | void ?{}( Adder * adder, int row[$\,$], int cols, int & subtotal ) { |
| 2591 | adder.[rows, cols, subtotal] = [rows, cols, subtotal]; |
| 2592 | } |
| 2593 | |
| 2594 | |
| 2595 | |
| 2596 | |
| 2597 | \end{cfa} |
| 2598 | & |
| 2599 | \begin{cfa} |
| 2600 | int main() { |
| 2601 | const int rows = 10, cols = 10; |
| 2602 | int matrix[rows][cols], subtotals[rows], total = 0; |
| 2603 | // read matrix |
| 2604 | Executor executor( 4 ); // kernel threads |
| 2605 | Adder * adders[rows]; |
| 2606 | for ( r; rows ) { // send off work for executor |
| 2607 | adders[r] = new( matrix[r], cols, &subtotal[r] ); |
| 2608 | executor.send( *adders[r] ); |
| 2609 | } |
| 2610 | for ( r; rows ) { // wait for results |
| 2611 | delete( adders[r] ); |
| 2612 | total += subtotals[r]; |
| 2613 | } |
| 2614 | sout | total; |
| 2615 | } |
| 2616 | \end{cfa} |
| 2617 | \end{tabular} |
| 2618 | \caption{Executor} |
| 2619 | \end{figure} |
| 2620 | \end{comment} |
| 2621 | |
| 2622 | |
| 2623 | \section{Runtime Structure} |
| 2624 | \label{s:CFARuntimeStructure} |
| 2625 | |
| 2626 | Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program. |
| 2627 | In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution: cluster and (virtual) processor. |
| 2628 | An executing thread is illustrated by its containment in a processor. |
| 2629 | |
| 2630 | \begin{figure} |
| 2631 | \centering |
| 2632 | \input{RunTimeStructure} |
| 2633 | \caption{\CFA Runtime structure} |
| 2634 | \label{f:RunTimeStructure} |
| 2635 | \end{figure} |
| 2636 | |
| 2637 | |
| 2638 | \subsection{Cluster} |
| 2639 | \label{s:RuntimeStructureCluster} |
| 2640 | |
| 2641 | A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads). |
| 2642 | The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults. |
| 2643 | The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors. |
| 2644 | However, the design allows changing the scheduler, \eg multi-queue multi-server with work-stealing/sharing across the virtual processors. |
If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
| 2646 | No automatic load balancing among clusters is performed by \CFA. |
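
As a sketch (the @cluster@ and @processor@ types are provided by the \CFA runtime, but the constructor arguments shown here are illustrative), a program can create an additional cluster and attach virtual processors to it:
\begin{cfa}
cluster rt;						// additional cluster, e.g., for real-time threads
processor p1( rt ), p2( rt );	// two virtual processors executing rt's ready threads
\end{cfa}
User threads placed on @rt@ are then executed only by @p1@ and @p2@.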
| 2647 | |
When a \CFA program begins execution, it creates a user cluster with a single processor, plus a special processor, which does not execute user threads, to handle preemption.
| 2649 | The user cluster is created to contain the application user-threads. |
| 2650 | Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime. |
However, multiple clusters are sometimes necessary because of scheduling requirements (\eg real-time), NUMA architecture, heterogeneous hardware, or issues with the underlying operating system.
| 2652 | |
| 2653 | |
| 2654 | \subsection{Virtual Processor} |
| 2655 | \label{s:RuntimeStructureProcessor} |
| 2656 | |
A virtual processor is implemented by a kernel thread (\eg UNIX process), which is scheduled for execution on a hardware processor by the underlying operating system.
| 2658 | Programs may use more virtual processors than hardware processors. |
| 2659 | On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel. |
| 2660 | (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.) |
| 2661 | The \CFA runtime attempts to block unused processors and unblock processors as the system load increases; |
balancing the workload with processors is difficult because it requires future knowledge, \ie what the application workload will do next.
| 2663 | Preemption occurs on virtual processors rather than user threads, via operating-system interrupts. |
Thus, a virtual processor executes user threads, but the preemption interval applies to the processor itself, so preemption occurs quasi-randomly across the user threads it executes.
| 2665 | Turning off preemption transforms user threads into fibres. |
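
For example, on Linux, the kernel thread underlying a virtual processor can be pinned with the standard @pthread@ affinity calls; the following sketch uses the raw pthread API directly, not a \CFA-specific interface:
\begin{cfa}
#define _GNU_SOURCE
#include <pthread.h>
void pin_to( int hw_proc ) {	// pin calling kernel thread to one hardware processor
	cpu_set_t set;
	CPU_ZERO( &set );
	CPU_SET( hw_proc, &set );
	pthread_setaffinity_np( pthread_self(), sizeof( set ), &set );
}
\end{cfa}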
| 2666 | |
| 2667 | |
| 2668 | \begin{comment} |
| 2669 | \section{Implementation} |
| 2670 | \label{s:Implementation} |
| 2671 | |
| 2672 | A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is (almost) free of allocations. |
All blocking operations are made by parking threads onto queues; therefore, all queues are designed with intrusive nodes, where each node has preallocated link fields for chaining.
| 2674 | Furthermore, several bulk-acquire operations need a variable amount of memory. |
| 2675 | This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks. |
| 2676 | |
| 2677 | In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects have distinct non-overlapping memory layouts, and mutual-exclusion for a monitor is only defined for its lifetime. |
| 2678 | When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted. |
| 2679 | This array persists for the entire duration of the mutual exclusion and is used extensively for synchronization operations. |
| 2680 | |
| 2681 | To improve performance and simplicity, context switching occurs inside a function call, so only callee-saved registers are copied onto the stack and then the stack register is switched; |
| 2682 | the corresponding registers are then restored for the other context. |
| 2683 | Note, the instruction pointer is untouched since the context switch is always inside the same function. |
| 2684 | Experimental results (not presented) for a stackless or stackful scheduler (1 versus 2 context switches) (see Section~\ref{s:Concurrency}) show the performance is virtually equivalent, because both approaches are dominated by locking to prevent a race condition. |
| 2685 | |
All kernel threads (@pthreads@) are created with a stack.
| 2687 | Each \CFA virtual processor is implemented as a coroutine and these coroutines run directly on the kernel-thread stack, effectively stealing this stack. |
| 2688 | The exception to this rule is the program main, \ie the initial kernel thread that is given to any program. |
| 2689 | In order to respect C expectations, the stack of the initial kernel thread is used by program main rather than the main processor, allowing it to grow dynamically as in a normal C program. |
| 2690 | \end{comment} |
| 2691 | |
| 2692 | |
| 2693 | \subsection{Preemption} |
| 2694 | |
Nondeterministic preemption provides fairness in the presence of long-running threads, and forces concurrent programmers to write more robust programs rather than relying on the code between cooperative-scheduling points being atomic.
| 2696 | This atomic reliance can fail on multi-core machines, because execution across cores is nondeterministic. |
Conversely, a reason for not supporting preemption is that it significantly complicates the runtime system; \eg the Microsoft runtime does not support interrupts, and on Linux systems, interrupts are complex (see below).
| 2698 | Preemption is normally handled by setting a countdown timer on each virtual processor. |
When the timer expires, an interrupt is delivered, and the interrupt handler resets the countdown timer.
If the virtual processor is executing user code, the signal handler performs a user-level context-switch; if it is executing in the language runtime-kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context-switches back to user code.
| 2700 | Multiple signal handlers may be pending. |
When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may occur on a different kernel thread than the one on which the signal was delivered.
| 2702 | The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler; |
| 2703 | therefore, the same signal mask is required for all virtual processors in a cluster. |
Because the preemption interval is usually long (1 millisecond), the performance cost is negligible.
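
In outline, each virtual processor's signal handler behaves as follows (a sketch only; the helpers @rearm_timer@, @in_runtime_kernel@, and @defer_preemption@ are illustrative names, not the runtime's actual identifiers):
\begin{cfa}
void preempt_handler( int sig ) {
	rearm_timer();				// reset countdown for next preemption interval
	if ( in_runtime_kernel() )
		defer_preemption();		// roll forward to runtime-kernel exit
	else
		yield();				// user-level context-switch
}
\end{cfa}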
| 2705 | |
| 2706 | Linux switched a decade ago from specific to arbitrary process signal-delivery for applications with multiple kernel threads. |
| 2707 | \begin{cquote} |
| 2708 | A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. |
| 2709 | If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which it will deliver the signal. |
| 2710 | SIGNAL(7) - Linux Programmer's Manual |
| 2711 | \end{cquote} |
| 2712 | Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads). |
| 2713 | To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events. |
Virtual processors register an expiration time with the discrete-event simulator, where it is inserted in sorted order.
| 2715 | The simulation sets the countdown timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed. |
| 2716 | Processing a preemption event sends an \emph{internal} @SIGUSR1@ signal to the registered virtual processor, which is always delivered to that processor. |
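
In outline, the event loop on the special virtual processor looks as follows (a sketch: the event-list layout and the helpers @arm_timer@ and @wait_expiry@ are illustrative):
\begin{cfa}
struct Event { Time when; pthread_t kernel_thread; Event * next; };	// intrusive, sorted by when
Event * events;					// head of sorted event list
void timer_loop() {
	for () {					// infinite loop
		arm_timer( events->when );	// countdown to earliest expiration
		wait_expiry();
		while ( events != 0p && events->when <= getTimeNsec() ) {
			pthread_kill( events->kernel_thread, SIGUSR1 );	// internal preemption signal
			events = events->next;
		}
	}
}
\end{cfa}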
| 2717 | |
| 2718 | |
| 2719 | \subsection{Debug Kernel} |
| 2720 | |
| 2721 | There are two versions of the \CFA runtime kernel: debug and non-debug. |
The debugging version has many runtime checks and internal assertions, \eg a (non-writable) stack guard-page, and checks for stack overflow whenever context switches occur among coroutines and threads, catching most stack overflows.
| 2723 | After a program is debugged, the non-debugging version can be used to significantly decrease space and increase performance. |
| 2724 | |
| 2725 | |
| 2726 | \section{Performance} |
| 2727 | \label{s:Performance} |
| 2728 | |
To verify the implementation of the \CFA runtime, a series of microbenchmarks is performed comparing \CFA with pthreads, Java OpenJDK-9, Go 1.12.6, and \uC 7.0.0.
For a fair comparison, a package must provide multi-processor (M:N) user threading, which excludes libdill/libmil~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
The benchmark computer is a 64-core, 8-socket AMD Opteron\texttrademark\ 6380 NUMA machine running at 2.5 GHz under Ubuntu 16.04.6 LTS, and \CFA/\uC are compiled with gcc 6.5.
| 2732 | |
| 2733 | All benchmarks are run using the following harness. (The Java harness is augmented to circumvent JIT issues.) |
| 2734 | \begin{cfa} |
| 2735 | unsigned int N = 10_000_000; |
| 2736 | #define BENCH( `run` ) Time before = getTimeNsec(); `run;` Duration result = (getTimeNsec() - before) / N; |
| 2737 | \end{cfa} |
| 2738 | The method used to get time is @clock_gettime( CLOCK_REALTIME )@. |
| 2739 | Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark; |
| 2740 | the total time is divided by @N@ to obtain the average time for a benchmark. |
| 2741 | Each benchmark experiment is run 31 times. |
| 2742 | All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallBenchMarks}. |
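
For example, a benchmark body is dropped into the harness as follows, where the loop body is a placeholder for the operation under test:
\begin{cfa}
int main() {
	BENCH( for ( i; N ) { /* operation under test */ } )
	sout | result;				// average time per operation
}
\end{cfa}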
| 2743 | % tar --exclude=.deps --exclude=Makefile --exclude=Makefile.in --exclude=c.c --exclude=cxx.cpp --exclude=fetch_add.c -cvhf benchmark.tar benchmark |
| 2744 | |
| 2745 | \paragraph{Object Creation} |
| 2746 | |
| 2747 | Object creation is measured by creating/deleting the specific kind of concurrent object. |
| 2748 | Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}. |
The only note here is that the call stacks of \CFA coroutines are lazily created; therefore, unless the coroutine is primed to force stack creation, the creation cost is artificially low.
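
For example, priming amounts to one extra @resume@ before any timing (a sketch; the empty coroutine @C@ is hypothetical):
\begin{cfa}
coroutine C {};
void main( C & c ) {}			// empty coroutine main
int main() {
	C c;
	resume( c );				// force lazy stack creation before measurement
}
\end{cfa}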
| 2750 | |