Changeset 016b1eb


Timestamp:
Jun 16, 2020, 9:09:53 PM (4 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
c1ee231
Parents:
9019b14
Message:

final changes for round 2 of the SP&E concurrency paper

Location:
doc/papers/concurrency
Files:
4 edited

  • doc/papers/concurrency/Paper.tex

    r9019b14 r016b1eb  
    292292
    293293\CFA~\cite{Moss18,Cforall} is a modern, polymorphic, non-object-oriented\footnote{
    294 \CFA has object-oriented features, such as constructors, destructors, virtuals and simple trait/interface inheritance.
     294\CFA has object-oriented features, such as constructors, destructors, and simple trait/interface inheritance.
    295295% Go interfaces, Rust traits, Swift Protocols, Haskell Type Classes and Java Interfaces.
    296296% "Trait inheritance" works for me. "Interface inheritance" might also be a good choice, and distinguish clearly from implementation inheritance.
    297 % You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code. Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type".
    298 However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and set of functions implemented by an implicit \lstinline@this@ (receiver) parameter.},
     297% You'll want to be a little bit careful with terms like "structural" and "nominal" inheritance as well. CFA has structural inheritance (I think Go as well) -- it's inferred based on the structure of the code.
     298% Java, Rust, and Haskell (not sure about Swift) have nominal inheritance, where there needs to be a specific statement that "this type inherits from this type".
     299However, functions \emph{cannot} be nested in structures and there is no mechanism to designate a function parameter as the receiver (\lstinline@this@) parameter.},
    299300backwards-compatible extension of the C programming language.
    300301In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a vehicle for new typing and control-flow capabilities on top of a highly popular programming language\footnote{
     
    317318Coroutines are only a stepping stone towards concurrency where the commonality is that coroutines and threads retain state between calls.
    318319
    319 \Celeven/\CCeleven define concurrency~\cite[\S~7.26]{C11}, but it is largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork and join in a function and mutex or condition locks, which is low-level and error-prone}
     320\Celeven and \CCeleven define concurrency~\cite[\S~7.26]{C11}, but these definitions are largely wrappers for a subset of the pthreads library~\cite{Pthreads}.\footnote{Pthreads concurrency is based on simple thread fork and join in a function and mutex or condition locks, which is low-level and error-prone.}
    320321Interestingly, almost a decade after the \Celeven standard, the most recent versions of gcc, clang, and msvc do not support the \Celeven include @threads.h@, indicating no interest in the C11 concurrency approach (possibly because of the recent effort to add concurrency to \CC).
    321322While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC.
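To make concrete what "low-level and error-prone" means here, a minimal C/pthreads sketch (our illustration, not from the paper) of explicit fork/join with a mutex-protected counter; omitting either lock call still compiles but silently races:

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void * worker( void * arg ) {               // thread "fork" target
      (void)arg;
      for ( int i = 0; i < 100000; i += 1 ) {
          pthread_mutex_lock( &lock );               // forgetting this races silently
          counter += 1;
          pthread_mutex_unlock( &lock );
      }
      return NULL;
  }

  int main( void ) {
      pthread_t t[4];
      for ( int i = 0; i < 4; i += 1 ) pthread_create( &t[i], NULL, worker, NULL );
      for ( int i = 0; i < 4; i += 1 ) pthread_join( t[i], NULL );    // explicit join
      printf( "%ld\n", counter );                    // 400000 only if locking is correct
  }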
     
    392393\label{s:FundamentalExecutionProperties}
    393394
    394 The features in a programming language should be composed from a set of fundamental properties rather than an ad hoc collection chosen by the designers.
     395The features in a programming language should be composed of a set of fundamental properties rather than an ad hoc collection chosen by the designers.
    395396To this end, the control-flow features created for \CFA are based on the fundamental properties of any language with function-stack control-flow (see also \uC~\cite[pp.~140-142]{uC++}).
    396 The fundamental properties are execution state, thread, and mutual-exclusion/synchronization (MES).
     397The fundamental properties are execution state, thread, and mutual-exclusion/synchronization.
    397398These independent properties can be used to compose different language features, forming a compositional hierarchy, where the combination of all three is the most advanced feature, called a thread.
    398399While it is possible for a language to provide only threads for composing programs~\cite{Hermes90}, this unnecessarily complicates solutions to certain classes of problems and makes them inefficient.
    399400As is shown, each of the non-rejected composed language features solves a particular set of problems, and hence, has a defensible position in a programming language.
    400 If a compositional feature is missing, a programmer has too few fundamental properties resulting in a complex and/or is inefficient solution.
     401If a compositional feature is missing, a programmer has too few fundamental properties resulting in a complex and/or inefficient solution.
    401402
    402403In detail, the fundamental properties are:
    403404\begin{description}[leftmargin=\parindent,topsep=3pt,parsep=0pt]
    404405\item[\newterm{execution state}:]
    405 is the state information needed by a control-flow feature to initialize, manage compute data and execution location(s), and de-initialize, \eg calling a function initializes a stack frame including contained objects with constructors, manages local data in blocks and return locations during calls, and de-initializes the frame by running any object destructors and management operations.
     406is the state information needed by a control-flow feature to initialize and manage both compute data and execution location(s), and de-initialize.
     407For example, calling a function initializes a stack frame including contained objects with constructors, manages local data in blocks and return locations during calls, and de-initializes the frame by running any object destructors and management operations.
    406408State is retained in fixed-sized aggregate structures (objects) and dynamic-sized stack(s), often allocated in the heap(s) managed by the runtime system.
    407409The lifetime of state varies with the control-flow feature, where longer life-time and dynamic size provide greater power but also increase usage complexity and cost.
     
    414416Multiple threads provide \emph{concurrent execution};
    415417concurrent execution becomes parallel when run on multiple processing units, \eg hyper-threading, cores, or sockets.
    416 There must be language mechanisms to create, block and unblock, and join with a thread, even if the mechanism is indirect.
    417 
    418 \item[\newterm{MES}:]
    419 is the concurrency mechanisms to perform an action without interruption and establish timing relationships among multiple threads.
     418A programmer needs mechanisms to create, block and unblock, and join with a thread, even if these basic mechanisms are supplied indirectly through high-level features.
     419
     420\item[\newterm{mutual-exclusion / synchronization (MES)}:]
     421is the concurrency mechanism to perform an action without interruption and establish timing relationships among multiple threads.
    420422We contend these two properties are independent, \ie mutual exclusion cannot provide synchronization and vice versa without introducing additional threads~\cite[\S~4]{Buhr05a}.
    421 Limiting MES, \eg no access to shared data, results in contrived solutions and inefficiency on multi-core von Neumann computers where shared memory is a foundational aspect of its design.
     423Limiting MES functionality results in contrived solutions and inefficiency on multi-core von Neumann computers, where shared memory is a foundational aspect of their design.
    422424\end{description}
    423 These properties are fundamental because they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features, concurrency, or provide MES without atomic hardware mechanisms.
     425These properties are fundamental as they cannot be built from existing language features, \eg a basic programming language like C99~\cite{C99} cannot create new control-flow features, concurrency, or provide MES without (atomic) hardware mechanisms.
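As an illustration of the last point, a small C11 sketch (ours, not the paper's) showing MES built directly on an atomic hardware mechanism: the test-and-set in <stdatomic.h> is enough to construct a spin lock, which ordinary C99 loads and stores cannot express portably:

  #include <stdatomic.h>

  static atomic_flag busy = ATOMIC_FLAG_INIT;

  void spin_lock( void ) {
      // atomic read-modify-write supplied by the hardware;
      // not expressible with plain (non-atomic) loads and stores
      while ( atomic_flag_test_and_set_explicit( &busy, memory_order_acquire ) )
          ;                                          // busy wait
  }
  void spin_unlock( void ) {
      atomic_flag_clear_explicit( &busy, memory_order_release );
  }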
    424426
    425427
     
    443445\renewcommand{\arraystretch}{1.25}
    444446%\setlength{\tabcolsep}{5pt}
     447\vspace*{-5pt}
    445448\begin{tabular}{c|c||l|l}
    446449\multicolumn{2}{c||}{execution properties} & \multicolumn{2}{c}{mutual exclusion / synchronization} \\
     
    461464Yes (stackful)          & Yes           & \textbf{11}\ \ \ @thread@                             & \textbf{12}\ \ @mutex@ @thread@               \\
    462465\end{tabular}
     466\vspace*{-8pt}
    463467\end{table}
    464468
     
    468472A @mutex@ structure, often called a \newterm{monitor}, provides a high-level interface for race-free access of shared data in concurrent programming-languages.
    469473Case 3 is case 1 where the structure can implicitly retain execution state and access functions use this execution state to resume/suspend across \emph{callers}, but resume/suspend does not retain a function's local state.
    470 A stackless structure, often called a \newterm{generator} or \emph{iterator}, is \newterm{stackless} because it still borrow the caller's stack and thread, but the stack is used only to preserve state across its callees not callers.
     474A stackless structure, often called a \newterm{generator} or \emph{iterator}, is \newterm{stackless} because it still borrows the caller's stack and thread, but the stack is used only to preserve state across its callees, not callers.
    471475Generators provide the first step toward directly solving problems like finite-state machines that retain data and execution state between calls, whereas normal functions restart on each call.
    472476Case 4 is cases 2 and 3 with thread safety during execution of the generator's access functions.
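For readers who want the finite-state-machine point made concrete, the following hand-written C sketch (our illustration; it is not the paper's \CFA generator syntax) keeps the execution state in a structure so each call resumes where the previous one left off:

  #include <stdio.h>

  typedef struct { int state, i; } Countdown;        // retained execution state

  int next( Countdown * g, int from ) {              // "resume": restart at saved location
      switch ( g->state ) {
        case 0:
          for ( g->i = from; g->i > 0; g->i -= 1 ) {
              g->state = 1;  return g->i;            // "suspend": save location, return
        case 1: ;                                    // re-enter the loop body here
          }
          g->state = 2;
          /* fall through */
        case 2:
          return -1;                                 // finished
      }
      return -1;
  }

  int main( void ) {
      Countdown g = { 0, 0 };
      for ( int v; (v = next( &g, 3 )) != -1; ) printf( "%d\n", v );   // prints 3 2 1
  }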
     
    475479A stackful generator, often called a \newterm{coroutine}, is \newterm{stackful} because resume/suspend now context switch to/from the caller's and coroutine's stack.
    476480A coroutine extends the state retained between calls beyond the generator's structure to arbitrary call depth in the access functions.
    477 Cases 7 and 8 are rejected because a new thread must have its own stack, where the thread begins and stack frames are stored for calls, \ie it is unrealistic for a thread to borrow a stack.
    478 Cases 9 and 10 are rejected because a thread needs a growable stack to accept calls, make calls, block, or be preempted, all of which compound to require an unknown amount of execution state.
    479 If this kind of thread exists, it must execute to completion, \ie computation only, which severely restricts runtime management.
     481Cases 7, 8, 9 and 10 are rejected because a new thread must have its own stack, where the thread begins and stack frames are stored for calls, \ie it is unrealistic for a thread to borrow a stack.
     482For cases 9 and 10, the stackless frame is not growable, precluding accepting nested calls, making calls, blocking (which requires calls), or preemption (which requires pushing an interrupt frame), all of which require an unknown amount of execution state.
     483Hence, if this kind of uninterruptible thread exists, it must execute to completion, \ie computation only, which severely restricts runtime management.
    480484Cases 11 and 12 are a stackful thread with and without safe access to shared state.
    481485A thread is the language mechanism to start another thread of control in a program with growable execution state for call/return execution.
     
    13961400The call to @start@ is the first @resume@ of @prod@, which remembers the program main as the starter and creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
    13971401@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer's @deliver@ function to transfer the values, and printing the status returned from the consumer.
    1398 The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
     1402The producer's call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
    13991403Similarly on the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
    14001404The symmetric coroutine cycle forms when the consumer calls the producer's @payment@ function, which resumes the producer in the consumer's delivery function.
    14011405When the producer calls @delivery@ again, it resumes the consumer in the @payment@ function.
    1402 Both interface function than return to the their corresponding coroutine-main functions for the next cycle.
     1406Both interface functions then return to their corresponding coroutine-main functions for the next cycle.
    14031407Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling.
    14041408As a consequence of a coroutine retaining its last resumer for suspending back, these reverse pointers allow @suspend@ to cycle \emph{backwards} around a symmetric coroutine cycle.
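The stack-to-stack switching described above can be approximated in plain C with POSIX ucontext; the sketch below is ours and is not how the \CFA runtime is implemented, but it shows resume/suspend as context switches between a producer stack and a consumer stack:

  #include <stdio.h>
  #include <ucontext.h>

  static ucontext_t main_ctx, prod_ctx, cons_ctx;

  static void producer( void ) {
      for ( int i = 0; i < 3; i += 1 ) {
          printf( "produce %d\n", i );
          swapcontext( &prod_ctx, &cons_ctx );       // "resume" the consumer's stack
      }
      swapcontext( &prod_ctx, &main_ctx );           // cycle done: back to program main
  }

  static void consumer( void ) {
      for ( ;; ) {
          printf( "consume\n" );
          swapcontext( &cons_ctx, &prod_ctx );       // "resume" the producer's stack
      }
  }

  int main( void ) {
      static char pstk[64 * 1024], cstk[64 * 1024];  // each coroutine has its own stack
      getcontext( &prod_ctx );
      prod_ctx.uc_stack.ss_sp = pstk;  prod_ctx.uc_stack.ss_size = sizeof pstk;
      prod_ctx.uc_link = &main_ctx;
      makecontext( &prod_ctx, producer, 0 );
      getcontext( &cons_ctx );
      cons_ctx.uc_stack.ss_sp = cstk;  cons_ctx.uc_stack.ss_size = sizeof cstk;
      cons_ctx.uc_link = &main_ctx;
      makecontext( &cons_ctx, consumer, 0 );
      swapcontext( &main_ctx, &prod_ctx );           // start the producer
      printf( "done\n" );
  }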
     
    14141418
    14151419Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shutdown the program, whereas generators started by the program main run on its stack.
    1416 Furthermore, each deallocated coroutine must execute all destructors for object allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
     1420Furthermore, each deallocated coroutine must execute all destructors for objects allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
    14171421In the example, termination begins with the producer's loop stopping after N iterations and calling the consumer's @stop@ function, which sets the @done@ flag and resumes the consumer in function @payment@, terminating both the call and the consumer's loop in its coroutine main.
    14181422% (Not shown is having @prod@ raise a nonlocal @stop@ exception at @cons@ after it finishes generating values and suspend back to @cons@, which catches the @stop@ exception to terminate its loop.)
     
    14381442if @ping@ ends first, it resumes its starter, the program main, on return.
    14391443Regardless of the cycle complexity, the starter structure always leads back to the program main, but the path can be entered at an arbitrary point.
    1440 Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, runnning any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack.
    1441 Hence, the \CFA termination semantics for the generator and coroutine ensure correct deallocation semnatics, regardless of the coroutine's state (terminated or active), like any other aggregate object.
     1444Once back at the program main (creator), coroutines @ping@ and @pong@ are deallocated, running any destructors for objects within the coroutine and possibly deallocating any coroutine stacks for non-terminated coroutines, where stack deallocation implies stack unwinding to find destructors for allocated objects on the stack.
     1445Hence, the \CFA termination semantics for the generator and coroutine ensure correct deallocation semantics, regardless of the coroutine's state (terminated or active), like any other aggregate object.
    14421446
    14431447
     
    14451449
    14461450A significant implementation challenge for generators and coroutines (and threads in Section~\ref{s:threads}) is adding extra fields to the custom types and related functions, \eg inserting code after/before the coroutine constructor/destructor and @main@ to create/initialize/de-initialize/destroy any extra fields, \eg the coroutine stack.
    1447 There are several solutions to these problem, which follow from the object-oriented flavour of adopting custom types.
     1451There are several solutions to this problem, which follow from the object-oriented flavour of adopting custom types.
    14481452
    14491453For object-oriented languages, inheritance is used to provide extra fields and code via explicit inheritance:
     
    14801484forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & );
    14811485\end{cfa}
    1482 Note, copying generators, coroutines, and threads is undefined because muliple objects cannot execute on a shared stack and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy.
     1486Note, copying generators, coroutines, and threads is undefined because multiple objects cannot execute on a shared stack and stack copying does not work in unmanaged languages (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy.
    14831487The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference or pointer.
    14841488The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to read the coroutine descriptor from its handle.
     
    16251629        MyThread * team = factory( 10 );
    16261630        // concurrency
    1627         `delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$
     1631        `adelete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$
    16281632}
    16291633\end{cfa}
     
    17021706Unrestricted nondeterminism is meaningless as there is no way to know when a result is completed and safe to access.
    17031707To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
    1704 The shared data protected by mutual exlusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple, only 1 thread, or complex, only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71} problems.
     1708The shared data protected by mutual exclusion is called a \newterm{critical section}~\cite{Dijkstra65}, and the protection can be simple, only 1 thread, or complex, only N kinds of threads, \eg group~\cite{Joung00} or readers/writer~\cite{Courtois71} problems.
    17051709Without synchronization control in a critical section, an arriving thread can barge ahead of preexisting waiter threads resulting in short/long-term starvation, staleness and freshness problems, and incorrect transfer of data.
    17061710Preventing or detecting barging is a challenge with low-level locks, but made easier through higher-level constructs.
     
    18261830\end{cquote}
    18271831The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference or pointer.
    1828 Similarly, the function definitions ensures there is a mechanism to read the monitor descriptor from its handle, and a special destructor to prevent deallocation if a thread is using the shared data.
     1832Similarly, the function definitions ensure there is a mechanism to read the monitor descriptor from its handle, and a special destructor to prevent deallocation if a thread is using the shared data.
    18291833The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics.
    18301834\CFA relies heavily on traits as an abstraction mechanism, so the @mutex@ qualifier prevents coincidentally matching of a monitor trait with a type that is not a monitor, similar to coincidental inheritance where a shape and playing card can both be drawable.
     
    24792483
    24802484One scheduling solution is for the signaller S to keep ownership of all locks until the last lock is ready to be transferred, because this semantics fits most closely to the behaviour of single-monitor scheduling.
    2481 However, this solution is inefficient if W2 waited first and can be immediate passed @m2@ when released, while S retains @m1@ until completion of the outer mutex statement.
     2485However, this solution is inefficient if W2 waited first and can be immediately passed @m2@ when it is released, while S retains @m1@ until completion of the outer mutex statement.
    24822486If W1 waited first, the signaller must retain @m1@ and @m2@ until completion of the outer mutex statement and then pass both to W1.
    24832487% Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
    2484 To support this efficient semantics and prevent barging, the implementation maintains a list of monitors acquired for each blocked thread.
     2488To support these efficient semantics and prevent barging, the implementation maintains a list of monitors acquired for each blocked thread.
    24852489When a signaller exits or waits in a mutex function or statement, the front waiter on urgent is unblocked if all its monitors are released.
    2486 Implementing a fast subset check for the necessary released monitors is important and discussed in the following sections.
     2490Implementing a fast subset check for the necessarily released monitors is important and discussed in the following sections.
    24872491% The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
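One plausible way to picture such a subset check (our sketch; the actual \CFA runtime representation may differ) is a bitmask per thread, one bit per monitor:

  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t monitor_set;                      // hypothetical: one bit per monitor

  bool can_unblock( monitor_set waiter_holds, monitor_set released ) {
      // the front waiter on urgent runs only if every monitor it acquired is being released
      return ( waiter_holds & ~released ) == 0;
  }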
    24882492
     
    25432547Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array.
    25442548Then, the same implementation approach used for the urgent stack (see Section~\ref{s:Scheduling}) is used for the calling queue.
    2545 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a short linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers.
     2549Each caller has a list of monitors acquired, and the @waitfor@ statement performs a short linear search matching functions in the @waitfor@ list with called functions, and then verifies that the associated mutex locks can be transferred.
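Schematically, the matching step might look like the following C fragment (ours, with hypothetical names), comparing the caller's function pointer against the functions named by the @waitfor@ clauses:

  typedef void (*mutex_fn)( void * );                // hypothetical mutex-function pointer

  // accepted[] holds the functions named by the waitfor clauses
  int match_waitfor( mutex_fn called, mutex_fn accepted[], int n ) {
      for ( int i = 0; i < n; i += 1 )
          if ( accepted[i] == called ) return i;     // index of the accepting clause
      return -1;                                     // no clause accepts: the caller blocks
  }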
    25462550
    25472551
     
    27782782The \CFA program @main@ uses the call/return paradigm to directly communicate with the @GoRtn main@, whereas Go switches to the unbuffered channel paradigm to indirectly communicate with the goroutine.
    27792783Communication by multiple threads is safe for the @gortn@ thread via mutex calls in \CFA or channel assignment in Go.
    2780 The different between call and channel send occurs for buffered channels making the send asynchronous.
    2781 In \CFA, asynchronous call and multiple buffers is provided using an administrator and worker threads~\cite{Gentleman81} and/or futures (not discussed).
     2784The difference between call and channel send occurs for buffered channels making the send asynchronous.
     2785In \CFA, asynchronous call and multiple buffers are provided using an administrator and worker threads~\cite{Gentleman81} and/or futures (not discussed).
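Since futures are mentioned but not discussed, a minimal single-shot future in C with pthreads (our sketch, not \CFA's future type) shows how an asynchronous result can be buffered until the client asks for it:

  #include <pthread.h>

  typedef struct {
      pthread_mutex_t lock;
      pthread_cond_t  filled;
      int ready, value;
  } future;

  void future_init( future * f ) {
      pthread_mutex_init( &f->lock, NULL );
      pthread_cond_init( &f->filled, NULL );
      f->ready = 0;
  }
  void future_fulfil( future * f, int v ) {          // called by the worker thread
      pthread_mutex_lock( &f->lock );
      f->value = v;  f->ready = 1;
      pthread_cond_signal( &f->filled );
      pthread_mutex_unlock( &f->lock );
  }
  int future_get( future * f ) {                     // called by the client thread
      pthread_mutex_lock( &f->lock );
      while ( ! f->ready ) pthread_cond_wait( &f->filled, &f->lock );
      int v = f->value;
      pthread_mutex_unlock( &f->lock );
      return v;
  }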
    27822786
    27832787Figure~\ref{f:DirectCommunicationDatingService} shows the dating-service problem in Figure~\ref{f:DatingServiceMonitor} extended from indirect monitor communication to direct thread communication.
    2784 When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted functions into the thread main so it does an much work as possible.
     2788When converting a monitor to a thread (server), the coding pattern is to move as much code as possible from the accepted functions into the thread main so it does as much work as possible.
    27852789Notice, the dating server is postponing requests for an unspecified time while continuing to accept new requests.
    27862790For complex servers, \eg web-servers, there can be hundreds of lines of code in the thread main and safe interaction with clients can be complex.
     
    27902794
    27912795For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
    2792 Some of these low-level mechanism are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible.
     2796Some of these low-level mechanisms are used to build the \CFA runtime, but we always advocate using high-level mechanisms whenever possible.
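For readers mapping these names onto standard C, the C11 <stdatomic.h> operations below are likely analogues (our guess at the correspondence; the \CFA names belong to its own library):

  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_flag f = ATOMIC_FLAG_INIT;

  void atomic_examples( atomic_int * x ) {
      int  old  = atomic_exchange( x, 42 );                       // ~ fetchAssign
      int  prev = atomic_fetch_add( x, 1 );                       // ~ fetchAdd
      bool was  = atomic_flag_test_and_set( &f );                 // ~ testSet
      int  expd = 0;
      bool done = atomic_compare_exchange_strong( x, &expd, 1 );  // ~ compareSet
      (void)old; (void)prev; (void)was; (void)done;
  }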
    27932797
    27942798
     
    29802984
    29812985To test the performance of the \CFA runtime, a series of microbenchmarks are used to compare \CFA with pthreads, Java 11.0.6, Go 1.12.6, Rust 1.37.0, Python 3.7.6, Node.js 12.14.1, and \uC 7.0.0.
    2982 For comparison, the package must be multi-processor (M:N), which excludes libdil and /libmil~\cite{libdill} (M:1)), and use a shared-memory programming model, \eg not message passing.
     2986For comparison, the package must be multi-processor (M:N), which excludes libdill and libmill~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
    29832987The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and pthreads/\CFA/\uC are compiled with gcc 9.2.1.
    29842988
     
    30493053Figure~\ref{f:schedint} shows the code for \CFA, with results in Table~\ref{t:schedint}.
    30503054Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
    3051 Java scheduling is significantly greater because the benchmark explicitly creates multiple thread in order to prevent the JIT from making the program sequential, \ie removing all locking.
     3055Java scheduling is significantly greater because the benchmark explicitly creates multiple threads in order to prevent the JIT from making the program sequential, \ie removing all locking.
    30523056
    30533057\begin{multicols}{2}
     
    33083312This type of concurrency can be achieved both at the language level and at the library level.
    33093313The canonical example of implicit concurrency is concurrent nested @for@ loops, which are amenable to divide and conquer algorithms~\cite{uC++book}.
    3310 The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanism to solve basic HPC data-concurrency problems.
     3314The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanisms to solve basic HPC data-concurrency problems.
    33113315However, implicit concurrency is a restrictive solution with significant limitations, so it can never replace explicit concurrent programming.
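A typical instance of such implicit data concurrency, written in C with OpenMP as our own illustration (\CFA's eventual mechanism may differ), divides the nested loops among threads with a single annotation:

  // compile with: cc -fopenmp matsum.c
  #include <stdio.h>

  #define N 1024
  static double A[N][N], B[N][N], C[N][N];

  int main( void ) {
      #pragma omp parallel for collapse(2)           // both loop nests split across threads
      for ( int i = 0; i < N; i += 1 )
          for ( int j = 0; j < N; j += 1 )
              C[i][j] = A[i][j] + B[i][j];
      printf( "%g\n", C[0][0] );
  }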
    33123316
  • doc/papers/concurrency/figures/RunTimeStructure.fig

    r9019b14 r016b1eb  
    [xfig figure source diff omitted: every object in RunTimeStructure.fig is translated by (-600, -300) units, and the label "genrator/coroutine" is corrected to "generator/coroutine". The figure's text labels are: Processors, Blocked Tasks, Ready Tasks, Other Cluster(s), User Cluster, Discrete-event Manager, preemption, generator/coroutine, task, cluster, processor, monitor.]
  • doc/papers/concurrency/mail2

    r9019b14 r016b1eb  
    934934Page 18, line 17: is using
    935935
     936
     937
     938Date: Tue, 16 Jun 2020 13:45:03 +0000
     939From: Aaron Thomas <onbehalfof@manuscriptcentral.com>
     940Reply-To: speoffice@wiley.com
     941To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
     942Subject: SPE-19-0219.R2 successfully submitted
     943
     94416-Jun-2020
     945
     946Dear Dr Buhr,
     947
     948Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience.
     949
     950Your manuscript number is SPE-19-0219.R2.  Please mention this number in all future correspondence regarding this submission.
     951
     952You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe.  If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site.
     953
     954
     955Thank you for submitting your manuscript to Software: Practice and Experience.
     956
     957Sincerely,
     958
     959Software: Practice and Experience Editorial Office
     960
  • doc/papers/concurrency/response2

    r9019b14 r016b1eb  
    2727      thread creation and destruction?
    2828
    29 The best description of Smalltalk concurrency I can find is in J. Hunt,
    30 Smalltalk and Object Orientation, Springer-Verlag London Limited, 1997, Chapter
    31 31 Concurrency in Smalltalk. It states on page 332:
    32 
    33   For a process to be spawned from the current process there must be some way
    34   of creating a new process. This is done using one of four messages to a
    35   block. These messages are:
    36 
    37     aBlock fork: This creates and schedules a process which will execute the
    38     block. The priority of this process is inherited from the parent process.
    39     ...
    40 
    41   The Semaphore class provides facilities for achieving simple synchronization,
    42   it is simple because it only allows for two forms of communication signal and
    43   wait.
    44 
    45 Hence, "aBlock fork" creates, "Semaphore" blocks/unblocks (as does message send
    46 to an aBlock object), and garbage collection of an aBlock joins with its
    47 thread. The fact that a programmer *implicitly* does "fork", "block"/"unblock",
    48 and "join", does not change their fundamental requirement.
     29Fixed, changed sentence to:
     30
     31 A programmer needs mechanisms to create, block and unblock, and join with a
     32 thread, even if these basic mechanisms are supplied indirectly through
     33 high-level features.
    4934
    5035
     
    10388storage, too, which is a single instance across all generator instances of that
    10489type, as for static storage in an object type. All the kinds of storage are
    105 at play with semantics that is virtually the same as in other languages.
     90at play with semantics that is the same as in other languages.
    10691
    10792
     
    118103Just-in-Time Compiler. We modified our test programs based on his advice, and
    119104he validated our programs as correctly measuring the specified language
    120 feature. Hence, we have taken into account all issues related to performing
    121 benchmarks in JITTED languages.  Dave's help is recognized in the
    122 Acknowledgment section. Also, all the benchmark programs are publicly available
    123 for independent verification.
    124 
    125 Similarly, we verified our Node.js programs with Gregor Richards, an expert in
    126 just-in-time compilation for dynamic typing.
     105feature. Dave's help is recognized in the Acknowledgment section.  Similarly,
     106we verified our Node.js programs with Gregor Richards, an expert in
     107just-in-time compilation for dynamic typing.  Hence, we have taken into account
     108all issues related to performing benchmarks in JITTED languages.  Also, all the
     109benchmark programs are publicly available for independent verification.
    127110
    128111
     
    155138Since many aspects of Cforall are not OO, the rest of the paper *does* depend
    156139on Cforall being identified as non-OO, otherwise readers would have
    157 significantly different expectations for the design. We believe your definition
    158 of OO is too broad, such as including C. Just because a programming language
    159 can support aspects of the OO programming style, does not make it OO. (Just
    160 because a duck can swim does not make it a fish.)
    161 
    162 Our definition of non-OO follows directly from the Wikipedia entry:
    163 
    164   Object-oriented programming (OOP) is a programming paradigm based on the
    165   concept of "objects", which can contain data, in the form of fields (often
    166   known as attributes or properties), and code, in the form of procedures
    167   (often known as methods). A feature of objects is an object's procedures that
    168   can access and often modify the data fields of the object with which they are
    169   associated (objects have a notion of "this" or "self").
    170   https://en.wikipedia.org/wiki/Object-oriented_programming
    171 
    172 Cforall fails this definition as code cannot appear in an "object" and there is
    173 no implicit receiver. As well, Cforall, Go, and Rust do not have nominal
    174 inheritance and they not considered OO languages, e.g.:
    175 
    176  "**Is Go an object-oriented language?** Yes and no. Although Go has types and
    177  methods and allows an object-oriented style of programming, there is no type
    178  hierarchy. The concept of "interface" in Go provides a different approach
    179  that we believe is easy to use and in some ways more general. There are also
    180  ways to embed types in other types to provide something analogous-but not
    181  identical-to subclassing. Moreover, methods in Go are more general than in
    182  C++ or Java: they can be defined for any sort of data, even built-in types
    183  such as plain, "unboxed" integers. They are not restricted to structs (classes).
    184  https://golang.org/doc/faq#Is_Go_an_object-oriented_language
     140significantly different expectations for the design. We did not mean to suggest
     141that languages that support function pointers with structures support an OO
     142style. We revised the footnote to avoid this interpretation. Finally, Golang
     143does not identify itself as an OO language.
     144https://golang.org/doc/faq#Is_Go_an_object-oriented_language
    185145
    186146
     
    219179Whereas, the coroutine only needs the array allocated when needed. Now a
    220180coroutine has a stack which occupies storage, but the maximum stack size only
    221 needs to be the call chain allocating the most storage, where as the generator
     181needs to be the call chain allocating the most storage, whereas the generator
    222182has a maximum size of all variables that could be created.
    223183
     
    314274
    315275 \item[\newterm{execution state}:] is the state information needed by a
    316  control-flow feature to initialize, manage compute data and execution
    317  location(s), and de-initialize, \eg calling a function initializes a stack
    318  frame including contained objects with constructors, manages local data in
    319  blocks and return locations during calls, and de-initializes the frame by
     276 control-flow feature to initialize and manage both compute data and execution
     277 location(s), and de-initialize. For example, calling a function initializes a
     278 stack frame including contained objects with constructors, manages local data
     279 in blocks and return locations during calls, and de-initializes the frame by
    320280 running any object destructors and management operations.
    321281
     
    330290appropriate word?
    331291
     292
    332293    "computation only" as opposed to what?
    333294
     
    335296i.e., the operation starts with everything it needs to compute its result and
    336297runs to completion, blocking only when it is done and returns its result.
     298Computation only occurs in "embarrassingly parallel" problems.
    337299
    338300
     
    596558    * coroutines/generators/threads: here there is some discussion, but it can
    597559      be improved.
    598     * interal/external scheduling: I didn't find any direct comparison between
     560    * internal/external scheduling: I didn't find any direct comparison between
    599561      these features, except by way of example.
    600562
     
    672634  }
    673635
    674 Additonal text has been added to the start of Section 3.2 address this comment.
     636Additional text has been added to the start of Section 3.2 addressing this
     637comment.
    675638
    676639
     
    787750and its shorthand form (not shown in the paper)
    788751
    789   waitfor( remove, remove2 : t );
    790 
    791 A call to one these remove functions satisfies the waitfor (exact selection
    792 details are discussed in Section 6.4).
     752  waitfor( remove, remove2 : buffer );
     753
     754A call to either of these remove functions satisfies the waitfor (exact
     755selection details are discussed in Section 6.4).
    793756
    794757
     
    835798signal or signal_block.
    836799
     800
    837801    I believe that one difference between the Go program and the Cforall
    838802    equivalent is that the Goroutine has an associated queue, so that
     
    842806Actually, the buffer length is 0 for the Cforall call and the Go unbuffered
    843807send so both are synchronous communication.
     808
    844809
    845810    I think this should be stated explicitly. (Presumably, one could modify the
     
    985950      sout | "join";
    986951  }
     952
    987953  int main() {
    988954      T t[3]; // threads start and delay