source: doc/papers/concurrency/mail2 @ 6b6b9ba

Last change on this file since 6b6b9ba was 6b6b9ba, checked in by Peter A. Buhr <pabuhr@…>, 4 years ago

3rd round referee replies

Date: Wed, 26 Jun 2019 20:12:38 +0000
From: Aaron Thomas <>
Subject: SPE-19-0219 successfully submitted

Dear Dr Buhr,

Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been received by Software: Practice and Experience. It will be given full consideration for publication in the journal.

Your manuscript number is SPE-19-0219. Please mention this number in all future correspondence regarding this submission.

You can view the status of your manuscript at any time by checking your Author Center after logging into  If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site.

Thank you for submitting your manuscript to Software: Practice and Experience.

Software: Practice and Experience Editorial Office
Date: Tue, 12 Nov 2019 22:25:17 +0000
From: Richard Jones <>
Subject: Software: Practice and Experience - Decision on Manuscript ID SPE-19-0219

Dear Dr Buhr,

Many thanks for submitting SPE-19-0219 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter.
The decision on this paper is that it requires substantial further work. The referees have a number of substantial concerns. All the reviewers found the submission very hard to read; two of the reviewers state that it needs very substantial restructuring. These concerns must be addressed before your submission can be considered further.
A revised version of your manuscript that takes into account the comments of the referees will be reconsidered for publication.

Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision will be subject to re-review by the referees before a decision is rendered.

You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request an extension.

You can upload your revised manuscript and submit it through your Author Center. Log into  and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions".

When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided. You can use this space to document any changes you make to the original manuscript.

If you feel that your paper could benefit from English language polishing, you may wish to consider having your paper professionally edited for English language by a service such as Wiley's at Please note that while this service will greatly improve the readability of your paper, it does not guarantee acceptance of your paper by the journal.

Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision.

Prof. Richard Jones
Software: Practice and Experience
Referee(s)' Comments to Author:

Reviewing: 1

Comments to the Author
This article presents the design and rationale behind the various threading and synchronization mechanisms of C-forall, a new low-level programming language. This paper is very similar to a companion paper which I have also received: as the papers are similar, so will these reviews be --- in particular any general comments from the other review apply to this paper also.

As far as I can tell, the article contains three main ideas: an asynchronous execution / threading model; a model for monitors to provide mutual exclusion; and an implementation. The first two ideas are drawn together in Table 1: unfortunately this is on page 25 of 30 pages of text. Implementation choices and descriptions are scattered throughout the paper - and the sectioning of the paper seems almost
The article is about its contributions. Simply adding feature X to language Y isn't by itself a contribution (when feature X isn't already a contribution). The contribution can be in the design: the motivation, the space of potential design options, the particular design chosen and the rationale for that choice, or the resulting performance. For example: why support two kinds of generators as well as user-level threads? Why support both low- and high-level synchronization constructs? Similarly, I would have found the article easier to follow if it were written top down: presenting the design principles, then the space of language features, then the justification for the chosen features (and the rationale for those excluded), and then the implementation and performance.
Then the writing of the article is often hard to follow, to say the least. Two examples: section 3 "stateful functions" - I've some idea what that is (a function with Algol's "own" or C's "static" variables?) but in fact the paper has a rather more specific idea than that. The top of page 3 throws a whole lot of definitions at the reader - "generator", "coroutine", "stackful", "stackless", "symmetric", "asymmetric" - without ever stopping to define each one --- but then footnote "C" takes the time to explain what C's "main" function is? I cannot imagine a reader of this paper who doesn't know what "main" is in C, especially if they understand the other concepts already presented in the paper. The start of section 3 then does the same thing: putting up a whole lot of definitions, making distinctions and comparisons, even talking about some runtime details, but the critical definition of a monitor doesn't appear until three pages later, at the start of section 5. On p15, lines 29-34 are a good, clear description of what a monitor actually is. That needs to come first, rather than being buried after two sections of comparisons, discussions, implementations, and options that are ungrounded because they haven't told the reader what they are actually talking about. First tell the reader what something is, then how they might use it (as programmers: what are the rules and restrictions) and only then start comparison with other things, other approaches, other languages, or
The description of the implementation is similarly lost in the trees without ever really seeing the wood. Figure 19 is crucial here, but it's pretty much at the end of the paper, and comments about implementations are threaded throughout the paper without the context (fig 19) to understand what's going on. The protocol for performance testing may just about suffice for C (although is N constantly ten million, or does it vary for each benchmark?) but such evaluation isn't appropriate for garbage-collected or JITted languages like Java or Go.
Other comments, working through the paper - these are mostly low level and are certainly not comprehensive.
p1 only a subset of C-forall extensions?

p1 "has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance." There's no need to quibble about this. Once a language has inheritance, it's hard to claim it's not object-oriented.

p2 barging? signals-as-hints?

p3 start your discussion of generators with a simple example of a C-forall generator. Fig 1(b) might do: but put it inline instead of the python example - and explain the key rules and restrictions on the construct. Then don't even start to compare with coroutines until you've presented, described and explained your coroutines...
p3 I'd probably leave out the various "C" versions unless there are key points to make you can't make in C-forall. All the alternatives are just confusing.

p4 but what's that "with" in Fig 1(B)?

p5 start with the high level features of C-forall generators...

p5 why is the paper explaining networking protocols?

p7 lines 1-9 (transforming generator to coroutine) - why would I do any of this? Why would I want one instead of the other (do not use "stack" in your answer!)

p10 last para "A coroutine must retain its last resumer to suspend back because the resumer is on a different stack. These reverse pointers allow suspend to cycle backwards." I've no idea what is going on here. Why should I care? Shouldn't I just be using threads instead? Why not?

p16 for the same reasons - what reasons?

p17 if the multiple-monitor entry procedure really is novel, write a paper about that, and only about that.

p23 "Loose Object Definitions" - no idea what that means. In that section: you can't leave out JS-style dynamic properties. Even in OOLs that (one way or another) allow separate definitions of methods (like Objective-C, Swift, Ruby, C#), at any time a runtime class has a fixed definition. Quite why the detail about bit mask implementation is here anyway, I've no idea.

p25 this cluster isn't a CLU cluster then?

* conclusion should conclude the paper, not the related work.
Reviewing: 2

Comments to the Author
This paper describes the concurrency features of an extension of C (whose name I will write as "C\/" here, for convenience), including much design-level discussion of the coroutine- and monitor-based features and some microbenchmarks exploring the current implementation's performance. The key message of the latter is that the system's concurrency abstractions are much lighter-weight than the threading found in mainstream C or Java implementations.

There is much description of the system and its details, but nothing about (non-artificial) uses of it. Although the microbenchmark data is encouraging, arguably not enough practical experience with the system has been reported here to say much about either its usability advantages or its performance.

As such, the main contribution of the paper seems to be to document the existence of the described system and to provide a detailed design rationale and (partial) tutorial. I believe that could be of interest to some readers, so an acceptable manuscript is lurking in here somewhere.

Unfortunately, at present the writing style is somewhere between unclear and infuriating. It omits to define terms; it uses needlessly many terms for what are apparently (but not clearly) the same things; it interrupts itself rather than deliver the natural consequent of whatever it has just said; and so on. Section 5 is particularly bad in these regards -- see my detailed comments below. Fairly major additional efforts will be needed to turn the present text into a digestible design-and-tutorial document. I suspect that a shorter paper could do this job better than the present manuscript, which is overwrought in parts.
p2: lines 4--9 are a little sloppy. It is not the languages but their popular implementations which "adopt" the 1:1 kernel threading model.

line 10: "medium work" -- "medium-sized work"?

line 18: "is all sequential to the compiler" -- not true in modern compilers, and in 2004 H-J Boehm wrote a tech report describing exactly why ("Threads cannot be implemented as a library", HP Labs).

line 20: "knows the optimization boundaries" -- I found this vague. What's an example?

line 31: this paragraph has made a lot of claims. Perhaps forward-reference to the parts of the paper that discuss each one.

line 33: "so the reader can judge if" -- this reads rather passive-aggressively. Perhaps better: "... to support our argument that..."

line 41: "a dynamic partitioning mechanism" -- I couldn't tell what this meant
p3. Presenting the concept of a "stateful function" as a new language feature seems odd. In C, functions often have local state thanks to static local variables (or globals, indeed). Of course, that has several limitations. Can you perhaps present your contributions by enumerating these limitations? See also my suggestion below about a possible framing centred on a strawman.

line 2: "an old idea that is new again" -- this is too oblique

lines 2--15: I found this to be a word/concept soup. Stacks, closures, generators, stackless, stackful, coroutine, symmetric, asymmetric, resume/suspend versus resume/resume... there needs to be a more gradual and structured way to introduce all this, and ideally one that minimises redundancy. Maybe present it as a series of "definitions" each with its own heading, e.g. "A closure is stackless if its local state has statically known fixed size"; "A generator simply means a stackless closure." And so on. Perhaps also strongly introduce the word "activate" as a direct contrast with resume and suspend. These are just a flavour of the sort of changes that might make this paragraph into something readable.

Continuing the thought: I found it confusing that by these definitions, a stackful closure is not a stack, even though logically the stack *is* a kind of closure (it is a representation of the current thread's continuation).
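One concrete way to ground the stackless/stackful distinction the review asks to have defined (this example is mine, in Python for brevity, and is not from the paper): a stackless closure suspends only its own frame, so suspension cannot happen inside a nested call without explicit delegation, whereas a stackful coroutine can suspend from arbitrary call depth.

```python
# Python generators are stackless: only the generator's own frame is
# suspended, so a called helper cannot suspend on the generator's behalf.
def helper():
    yield 1          # makes helper itself a generator; calling it does NOT
                     # suspend the caller

def stackless_gen():
    yield 0
    # Without the explicit 'yield from', helper()'s values never reach our
    # consumer -- every frame in the chain must cooperate in the delegation.
    yield from helper()

print(list(stackless_gen()))  # [0, 1]
```

A stackful coroutine needs no such per-frame delegation: because it owns a whole stack, a suspend buried three calls deep simply works.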
lines 24--27: without explaining what the boost functor types mean, I don't think the point here comes across.

line 34: "semantically coupled" -- I wasn't sure what this meant

p4: the point of Figure 1 (C) was not immediately clear. It seems to be showing how one might "compile down" Figure 1 (B). Or is that Figure 1 (A)?

It's right that the incidental language features of the system are not front-and-centre, but I'd appreciate some brief glossing of non-C language features as they appear. Examples are the square bracket notation, the pipe notation and the constructor syntax. These explanations could go in the caption of the figure which first uses them, perhaps. Overall I found the figure captions to be terse, and a missed opportunity to explain clearly what was going on.
p5 line 23: "This restriction is removed..." -- give us some up-front summary of your contributions and the elements of the language design that will be talked about, so that this isn't an aside. This will reduce the "twisty passages" feeling that characterises much of the paper.

line 40: "a killer asymmetric generator" -- this is stylistically odd, and the sentence about failures doesn't convincingly argue that C\/ will help with them. Have you any experience writing device drivers using C\/? Or any argument that the kinds of failures can be traced to the "stack-ripping" style that one is forced to use without coroutines? Also, a typo on line 41: "device drives". And saying "Windows/Linux" is sloppy... what does the cited paper actually say?

p6 lines 13--23: this paragraph is difficult to understand. It seems to be talking about a control-flow pattern roughly equivalent to tail recursion. What is the high-level point, other than that this is possible?

line 34: "which they call coroutines" -- a better way to make this point is presumably that the C++20 proposal only provides a specialised kind of coroutine, namely generators, despite its use of the more general word.

line 47: "... due to dynamic stack allocation, execution..." -- this sentence doesn't scan. I suggest adding "and for" in the relevant places where currently there are only commas.

p8 / Figure 5 (B) -- the GNU C extension of unary "&&" needs to be explained. The whole figure needs a better explanation, in fact.

p9, lines 1--10: I wasn't sure this stepping-through really added much value. What are the truly important points to note about this code?

p10: similarly, lines 3--27 again are somewhere between tedious and confusing. I'm sure the motivation and details of "starter semantics" can both be stated much more pithily.

line 32: "a self-resume does not overwrite the last resumer" -- is this a hack or a defensible principled decision?

p11: "a common source of errors" -- among beginners or in production code? Presumably the former.

line 23: "with builtin and library" -- not sure what this means

lines 31--36: these can be much briefer. The only important point here seems to be that coroutines cannot be copied.

p12: line 1: what is a "task"? Does it matter?

line 7: calling it "heap stack" seems to be a recipe for confusion. "Stack-and-heap" might be better, and contrast with "stack-and-VLS" perhaps. When "VLS" is glossed, suggest actually expanding its initials: say "length" not "size".

line 21: are you saying "cooperative threading" is the same as "non-preemptive scheduling", or that one is a special case (kind) of the other? Both are defensible, but be clear.

line 27: "mutual exclusion and synchronization" -- the former is a kind of the latter, so I suggest "and other forms of synchronization".

line 30: "can either be a stackless or stackful" -- stray "a", but also, this seems to be switching from generic/background terminology to C\/-specific terminology.
An expositional idea occurs: start the paper with a strawman naive/limited realisation of coroutines -- say, Simon Tatham's popular "Coroutines in C" web page -- and identify point by point what the limitations are and how C\/ overcomes them. Currently the presentation is often flat (lacking motivating contrasts) and backwards (stating solutions before problems). The foregoing approach might fix both of these.
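To make the suggested strawman concrete: the defining limitation of a naive, language-unsupported coroutine is that all local variables and the resume point must be hoisted by hand into an explicit object. A minimal sketch (in Python for brevity; Tatham's version achieves the equivalent with C macros around a switch statement, and the names here are illustrative, not from the paper):

```python
# A "coroutine" without language support: local state and the resume point
# live in an explicit object, maintained by hand on every resume.
class NaiveCounter:
    def __init__(self, limit):
        self.i = 0           # hoisted local variable
        self.limit = limit
        self.done = False

    def resume(self):
        # The "resume point" is encoded in object state, not in control flow.
        if self.i >= self.limit:
            self.done = True
            return None
        value = self.i
        self.i += 1
        return value

c = NaiveCounter(3)
out = []
while (v := c.resume()) is not None:
    out.append(v)
print(out)  # [0, 1, 2]
```

A language-level generator lets ordinary control flow (loops, locals) carry this state implicitly, which is exactly the contrast a strawman-first presentation could exploit.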
page 13: line 23: it seems a distraction to mention the Python feature here.

p14 line 5: it seems odd to describe these as "stateless" just because they lack shared mutable state. It means the code itself is even more stateful. Maybe the "stack ripping" argument could usefully be given here.

line 16: "too restrictive" -- would be good to have a reference to justify this, or at least give a sense of what the state-of-the-art performance in transactional memory systems is (both software and hardware)

line 22: "simulate monitors" -- what about just *implementing* monitors? isn't that what these systems do? or is the point more about refining them somehow into something more specialised?

p15: sections 4.1 and 4.2 seem adrift and misplaced. Split them into basic parts (which go earlier) and more advanced parts (e.g. barging, which can be explained later).

line 31: "acquire/release" -- misses an opportunity to contrast the monitor's "enter/exit" abstraction with the less structured acquire/release of locks.

p16 line 12: the "implicit" versus "explicit" point is unclear. Is it perhaps about the contract between an opt-in *discipline* and a language-enforced *guarantee*?

line 28: no need to spend ages dithering about which one is the default and which one is the explicit qualifier. Tell us what you decided, briefly justify it, and move on.

p17: Figure 11: since the main point seems to be to highlight bulk acquire, include a comment which identifies the line where this is happening.

line 2: "impossible to statically..." -- or dynamically. Doing it dynamically would be perfectly acceptable (locking is a dynamic operation after all)

"guarantees acquisition order is consistent" -- assuming it's done in a single bulk acquire.

p18: section 5.3: the text here is a mess. The explanations of "internal" versus "external" scheduling are unclear, and "signals as hints" is not explained. "... can cause thread starvation" -- means including a while loop, or not doing so? "There are three signalling mechanisms..." but the text does not follow that by telling us what they are. My own scribbled attempt at unpicking the internal/external thing: "threads already in the monitor, albeit waiting, have priority over those trying to enter".

p19: line 3: "empty condition" -- explain that condition variables don't store anything. So being "empty" means that the queue of waiting threads (threads waiting to be signalled that the condition has become true) is empty.

line 6: "... can be transformed into external scheduling..." -- OK, but give some motivation.

p20: line 6: "mechnaism"

lines 16--20: this is dense and can probably only be made clear with an example

p21 line 21: clarify that nested monitor deadlock was described earlier (in 5.2). (Is the repetition necessary?)

line 27: "locks, and by extension monitors" -- this is true but the "by extension" argument is faulty. It is perfectly possible to use locks as a primitive and build a compositional mechanism out of them, e.g. transactions.

p22 line 2: should say "restructured"

line 33: "Implementing a fast subset check..." -- make clear that the following section explains how to do this. Restructuring the sections themselves could do this, or noting it in the text.

p23: line 3: "dynamic member adding, eg, JavaScript" -- needs to say "as permitted in JavaScript", and "dynamically adding members" is stylistically better

p23: line 18: "urgent stack" -- back-reference to where this was explained before

p24 line 7: I did not understand what was more "direct" about "direct communication". Also, what is a "passive monitor" -- just a monitor, given that monitors are passive by design?

line 14 / section 5.9: this table was useful and it (or something like it) could be used much earlier on to set the structure of the rest of the paper. The explanation at present is too brief, e.g. I did not really understand the point about cases 7 and 8.

p25 line 2: instead of casually dropping in a terse explanation for the newly introduced term "virtual processor", introduce it properly. Presumably the point is to give a less ambiguous meaning to "thread" by reserving it only for C\/'s green threads.

Table 1: what does "No / Yes" mean?

p26 line 15: "transforms user threads into fibres" -- a reference is needed to explain what "fibres" means... guessing it's in the sense of Adya et al.

line 20: "Microsoft runtime" -- means Windows?

lines 21--26: don't say "interrupt" to mean "signal", especially not without clear introduction. You can use "POSIX signal" to disambiguate from condition variables' "signal".

p27 line 3: "frequency is usually long" -- that's a "time period" or "interval", not a frequency

line 5: the lengthy quotation is not really necessary; just paraphrase the first sentence and move on.

line 20: "to verify the implementation" -- I don't think that means what is intended

Tables in section 7 -- too many significant figures. How many overall runs are described? What is N in each case?

p29 line 2: "to eliminate this cost" -- arguably confusing since nowadays on commodity CPUs most of the benefits of inlining are not to do with call overheads, but from later optimizations enabled as a consequence of the inlining

line 41: "a hierarchy" -- are they a hierarchy? If so, this could be explained earlier. Also, to say these make up "an integrated set... of control-flow features" verges on the tautologous.

p30 line 15: "a common case being web servers and XaaS" -- that's two cases
Reviewing: 3

Comments to the Author

# Cforall review

Overall, I quite enjoyed reading the paper. Cforall has some very interesting ideas. I did have some suggestions that I think would be helpful before final publication. I also left notes on various parts of the paper that I found confusing when reading, in hopes that they may be useful to you.
## Summary

* Expand on the motivations for including both generators and coroutines, vs trying to build one atop the other
* Expand on the motivations for having both symmetric and asymmetric coroutines
* Comparison to the async-await model adopted by other languages
    * C#, JS
    * Rust and its async/await model
* Consider performance comparisons against node.js and Rust frameworks
* Discuss performance of monitors vs finer-grained memory models and atomic operations found in other languages
* Why both internal/external scheduling for synchronization?
## Generator/coroutines

In general, this section was clear, but I thought it would be useful to provide a somewhat deeper look into why Cforall opted for the particular combination of features that it offers. I see three main differences from other languages:

* Generators are not exposed as a "function" that returns a generator object, but rather as a kind of struct, with communication happening via mutable state instead of "return values". That is, the generator must be manually resumed and (if I understood) it is expected to store values that can then later be read (perhaps via methods), instead of having a `yield <Expr>` statement that yields up a value explicitly.
* Both "symmetric" and "asymmetric" generators are supported, instead of only asymmetric.
* Coroutines (multi-frame generators) are an explicit mechanism.

In most other languages, coroutines are rather built by layering single-frame generators atop one another (e.g., using a mechanism like async-await), and symmetric coroutines are basically not supported. I'd like to see a bit more justification for Cforall including all the above mechanisms -- it seemed like symmetric coroutines were a useful building block for some of the user-space threading and custom scheduler mechanisms that were briefly mentioned later in the paper.
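The two generator styles contrasted above can be sketched in a few lines. This sketch is in Python; the struct-style class below mimics my reading of the description of Cforall's design and is not Cforall syntax (all names are illustrative):

```python
# Style 1: yield-based generator -- values are communicated as "return values"
# of each resumption.
def squares(n):
    for i in range(n):
        yield i * i

# Style 2: struct-style generator -- manually resumed; values are read from a
# mutable field after each resume (the style the review attributes to Cforall).
class Squares:
    def __init__(self, n):
        self.n = n
        self.i = 0
        self.value = None    # communication happens through this field

    def resume(self):
        self.value = self.i * self.i
        self.i += 1

g = Squares(3)
results = []
for _ in range(3):
    g.resume()
    results.append(g.value)
print(results)           # [0, 1, 4]
print(list(squares(3)))  # [0, 1, 4]
```

Both produce the same sequence; the difference is purely in whether the language or the programmer routes the communicated value.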
In the discussion of coroutines, I would have expected a bit more of a comparison to the async-await mechanism offered in other languages. Certainly the semantics of async-await in JavaScript implies significantly more overhead (because each async fn is a distinct heap object). [Rust's approach avoids this overhead][zc], however, and might be worthy of a comparison (see the Performance section).
## Locks and threading

### Comparison to atomics overlooks performance

There are several sections in the paper that compare against atomics -- for example, on page 15, the paper shows a simple monitor that encapsulates an integer and compares that to C++ atomics. Later, the paper compares the simplicity of monitors against the `volatile` qualifier from Java. The conclusion in section 8 also revisits this point.

While I agree that monitors are simpler, they are obviously also significantly different from a performance perspective -- the paper doesn't seem to address this at all. It's plausible that (e.g.) the `Aint` monitor type described in the paper can be compiled and mapped to the specialized instructions offered by hardware, but I didn't see any mention of how this would be done. There is also no mention of the more nuanced memory-ordering relations offered by C++11 and how one might achieve similar performance characteristics in Cforall (perhaps the answer is that one simply doesn't need to; I think that's defensible, but worth stating explicitly).
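The performance gap alluded to here is between a lock-protected update and a single hardware atomic instruction. A sketch of the monitor-style interface only (in Python with a plain lock; this illustrates the encapsulation being compared, not Cforall's `Aint`, and Python exposes no user-level fetch-and-add):

```python
import threading

# Monitor-style encapsulated integer: every operation enters and exits a lock.
# A hardware atomic (e.g. x86 LOCK XADD, or C++ fetch_add) performs the same
# read-modify-write in one instruction -- the performance contrast at issue.
class MonitorInt:
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def add(self, n):
        with self._lock:          # acquire/release on every call
            self._value += n
            return self._value

    def get(self):
        with self._lock:
            return self._value

m = MonitorInt()
workers = [threading.Thread(target=lambda: [m.add(1) for _ in range(1000)])
           for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
print(m.get())  # 4000
```

The monitor version is safe under any interleaving, but pays lock traffic on every update, which is why mapping simple monitors onto hardware atomics (where possible) matters.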
### Justification for external scheduling feels lacking

Cforall includes both internal and external scheduling; I found the explanation for the external scheduling mechanism to be lacking in justification. Why include both mechanisms when most languages seem to make do with only internal scheduling? It would be useful to show some scenarios where external scheduling is truly more powerful.

I would have liked to see some more discussion of external scheduling and how it interacts with software engineering best practices. It seems somewhat similar to AOP in certain regards. It seems to add a bit of "extra semantics" to monitor methods, in that any method may now also become a kind of synchronization point. The "open-ended" nature of this feels like it could easily lead to subtle bugs, particularly when code refactoring occurs (which may e.g. split an existing method into two). This seems particularly true if external scheduling can occur across compilation units -- the paper suggested that this is true, but I wasn't entirely clear.

I would have also appreciated a few more details on how external scheduling is implemented. It seems to me that there must be some sort of "hooks" on mutex methods so that they can detect whether some other function is waiting on them and awaken those blocked threads. I'm not sure how such hooks are inserted, particularly across compilation units. The material in Section 5.6 didn't quite clarify the matter for me. For example, it left me somewhat confused about whether the `f` and `g` functions declared were meant to be local to a translation unit, or shared with other units.
### Presentation of monitors is somewhat confusing

I found myself confused fairly often in the section on monitors. I'm just going to leave some notes here on places where I got confused, in hopes that they could be useful to you as feedback on writing that might want to be clarified.

To start, I did not realize that the `mutex_opt` notation was a keyword; I thought it was a type annotation. I think this could be called out more explicitly.

Later, in section 5.2, the paper discusses `nomutex` annotations, which initially threw me, as they had not been introduced (now I realize that this paragraph is there to justify why there is no such keyword). The paragraph might be rearranged to make that clearer, perhaps by leading with the choice that Cforall made.

On page 17, the paper states that "acquiring multiple monitors is safe from deadlock", but this could be stated a bit more precisely: acquiring multiple monitors in a bulk-acquire is safe from deadlock (deadlock can still result from nested acquires).

On page 18, the paper states that wait statements do not have to be enclosed in loops, as there is no concern of barging. This seems true, but there are also other reasons to use loops (e.g., if there are multiple reasons to notify on the same condition). Thus the statement initially surprised me, as barging is only one of many reasons that I typically employ loops around waits.
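The point about loops is the standard idiom: a wait should sit in a loop re-checking its predicate, because one condition variable may be signalled for several distinct reasons (and, in barging-prone systems, because the predicate may no longer hold on wakeup). A sketch where "item available" and "shutdown" share one condition (Python threading; names are illustrative):

```python
import threading

class ClosableQueue:
    def __init__(self):
        self.items = []
        self.closed = False
        self.cond = threading.Condition()

    def put(self, x):
        with self.cond:
            self.items.append(x)
            self.cond.notify()

    def close(self):
        with self.cond:
            self.closed = True
            self.cond.notify_all()

    def get(self):
        with self.cond:
            # Loop, not 'if': a wakeup may mean "item arrived" OR "queue
            # closed", so the predicate must be re-checked even in a system
            # with no barging at all.
            while not self.items and not self.closed:
                self.cond.wait()
            return self.items.pop(0) if self.items else None

q = ClosableQueue()
q.put("a")
print(q.get())  # a
q.close()
print(q.get())  # None
```

So even where a language's no-barging guarantee makes the loop unnecessary for correctness of a single-reason signal, multiple wakeup reasons independently motivate it.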
I did not understand the diagram in Figure 12 for some time. Initially, I thought that it was generic to all monitors, and I could not understand the state space. It was only later that I realized it was specific to your example. Updating the caption from "Monitor scheduling" to "Monitor scheduling in the example from Fig. 13" might have helped me quite a bit.

I spent quite some time reading the boy/girl dating example (\*) and I admit I found it somewhat confusing. For example, I couldn't tell whether there were supposed to be many "girl" threads executing at once, or if there was only supposed to be one girl and one boy thread executing in a loop. Are the girl/boy threads supposed to invoke the girl/boy methods or vice versa? Surely there is some easier way to set this up? I believe that when reading the paper I convinced myself of how it was supposed to be working, but I'm writing this review some days later, and I find myself confused all over again and not able to easily figure it out.

(\*) as an aside, I would consider modifying the example to some other form of matching, like customers and support personnel.
## Related work

The paper offered a number of comparisons to Go, C#, Scala, and so forth, but seems to have overlooked another recent language, Rust. In many ways, Rust seems to be closest in philosophy to Cforall, so it seems like an odd omission. I already mentioned above that Rust is in the process of shipping [async-await syntax][aa], which is definitely an alternative to the generator/coroutine approach in Cforall (though one with clear pros/cons).

## Performance

In the performance section in particular, you might consider comparing against some of the Rust web servers and threading systems. For example, actix is top of the [single query TechEmpower Framework benchmarks], and tokio is near the top of the [plaintext benchmarks][pt] (hyper, the top, is more of an HTTP framework, though it is also written in Rust). It would seem worth trying to compare their "context switching" costs as well -- I believe both actix and tokio have a notion of threads that could be readily compared.

Another addition that might be worth considering is to compare against node.js promises, although I think the comparison to process creation is not as clean.

That said, I think that the performance comparison is not a big focus of the paper, so it may not be necessary to add anything to it.
## Authorship of this review

I'm going to sign this review. This review was authored by Nicholas D. Matsakis. In the interest of full disclosure, I'm heavily involved in the Rust project, although I don't think that influenced this review in particular. Feel free to reach out to me for clarifying questions.

## Links

Subject: Re: manuscript SPE-19-0219
To: "Peter A. Buhr" <>
From: Richard Jones <>
Date: Tue, 12 Nov 2019 22:43:55 +0000

Dear Dr Buhr

You should have received a decision letter on this today. I am sorry that this
has taken so long. Unfortunately SP&E receives a lot of submissions and getting
reviewers is a perennial problem.

Peter A. Buhr wrote on 11/11/2019 13:10:
>     26-Jun-2019
>     Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall"
>     has been received by Software: Practice and Experience. It will be given
>     full consideration for publication in the journal.
>
> Hi, it has been over 4 months since submission of our manuscript SPE-19-0219
> with no response.
>
> Currently, I am refereeing a paper for IEEE that already cites our prior SP&E
> paper and the Master's thesis forming the basis of the SP&E paper under
> review. Hence our work is apropos and we want to get it disseminated as soon as
> possible.
>
> [3] A. Moss, R. Schluntz, and P. A. Buhr, "Cforall: Adding modern programming
>      language features to C," Software - Practice and Experience, vol. 48,
>      no. 12, pp. 2111-2146, 2018.
>
> [4] T. Delisle, "Concurrency in C for all," Master's thesis, University of
>      Waterloo, 2018.  [Online].  Available:

Date: Mon, 13 Jan 2020 05:33:15 +0000
From: Richard Jones <>
Subject: Revision reminder - SPE-19-0219

Dear Dr Buhr

This is a reminder that your opportunity to revise and re-submit your
manuscript will expire 28 days from now. If you require more time please
contact me directly and I may grant an extension to this deadline; otherwise
the option to submit a revision online will not be available.

I look forward to receiving your revision.

Prof. Richard Jones
Editor, Software: Practice and Experience

Date: Wed, 5 Feb 2020 04:22:18 +0000
From: Aaron Thomas <>
Subject: SPE-19-0219.R1 successfully submitted

Dear Dr Buhr,

Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has
been successfully submitted online and is presently being given full
consideration for publication in Software: Practice and Experience.

Your manuscript number is SPE-19-0219.R1.  Please mention this number in all
future correspondence regarding this submission.

You can view the status of your manuscript at any time by checking your Author
Center after logging into  If you have
difficulty using this site, please click the 'Get Help Now' link at the top
right corner of the site.

Thank you for submitting your manuscript to Software: Practice and Experience.

Software: Practice and Experience Editorial Office

Date: Sat, 18 Apr 2020 10:42:13 +0000
From: Richard Jones <>
Subject: Software: Practice and Experience - Decision on Manuscript ID
 SPE-19-0219.R1

Dear Dr Buhr,

Many thanks for submitting SPE-19-0219.R1 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter.

I believe that we are making progress here towards a paper that can be published in Software: Practice and Experience.  However the referees still have significant concerns about the paper. The journal's focus is on practice and experience, and one of the reviewers' concerns remains that your submission should focus the narrative more on the perspective of the programmer than the language designer. I agree that this would strengthen your submission, and I ask you to address this as well as the referees' other comments.

A revised version of your manuscript that takes into account the comments of the referee(s) will be reconsidered for publication.

Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision may be subject to re-review by the referees before a decision is rendered.

You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension.

You can upload your revised manuscript and submit it through your Author Center. Log into  and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions".

When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided.  You can use this space to document any changes you make to the original manuscript.

If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at

Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision.

Prof. Richard Jones
Software: Practice and Experience

Referee(s)' Comments to Author:

Reviewing: 1

Comments to the Author
(A relatively short second review)

I thank the authors for their revisions and comprehensive response to
reviewers' comments --- many of my comments have been successfully
addressed by the revisions.  Here I'll structure my comments around
the main salient points in that response which I consider would
benefit from further explanation.

>  Table 1 is moved to the start and explained in detail.

I consider this change makes a significant improvement to the paper,
laying out the landscape of language features at the start, and thus
addresses my main concerns about the paper.

I still have a couple of issues --- perhaps the largest is that it's
still not clear at this point in the paper what some of these options
are, or crucially how they would be used. I don't know if it's
possible to give high-level examples or use cases to be clear about
these up front - or if that would duplicate too much information from
later in the paper - either way expanding out the discussion - even if
just a couple of sentences for each row - would help me more.  The
point is not just to define these categories but to ensure the
readers' understanding of these definitions agrees with that used in
the paper.

in a little more detail:

 * 1st para section 2 begs the question: why not support each
   dimension independently, and let the programmer or library designer
   combine features?

 * "execution state" seems a relatively low-level description here.
   I don't think of e.g. the lambda calculus that way. Perhaps it's as
   good a term as any.

 * Why must there "be language mechanisms to create, block/unblock,
   and join with a thread"?  There aren't in Smalltalk (although there
   are in the runtime).  Especially given in Cforall those mechanisms
   are *implicit* on thread creation and destruction?

 * "Case 1 is a function that borrows storage for its state (stack
   frame/activation) and a thread from its invoker"

   this much makes perfect sense to me, but I don't understand how a
   non-stateful, non-threaded function can then retain

   "this state across callees, ie, function local-variables are
   retained on the stack across calls."

   how can it retain function-local values *across calls* when it
   doesn't have any function-local state?

   I'm not sure if I see two separate cases here - roughly equivalent
   to C functions without static storage, and then C functions *with*
   static storage. I assumed that was the distinction between cases 1
   & 3; but perhaps the actual distinction is that 3 has a
   suspend/resume point, and so the "state" in figure 1 is this
   component of execution state (viz figs 1 & 2), not the state
   representing the cross-call variables?

>    but such evaluation isn't appropriate for garbage-collected or JITTed
>    languages like Java or Go.

For JITTed languages in particular, reporting peak performance needs
to "warm up" the JIT with a number of iterations before beginning
measurement. Actually for JITs it's even worse: see Edd Barrett et al.,
OOPSLA 2017.

minor issues:

 * footnote A - I've looked at various other papers & the website to
   try to understand how "object-oriented" Cforall is - I'm still not
   sure.  This footnote says Cforall has "virtuals" - presumably
   virtual functions, i.e. dynamic dispatch - and inheritance: that
   really is OO as far as I (and most OO people) are concerned.  For
   example Haskell doesn't have inheritance, so it's not OO; while
   CLOS (the Common Lisp *Object* System) or things like Cecil and
   Dylan are considered OO even though they have "multiple function
   parameters as receivers", lack "lexical binding between a structure
   and set of functions", and don't have explicit receiver invocation
   syntax.  Python has receiver syntax, but unlike Java or Smalltalk
   or C++, method declarations still need to have an explicit "self"
   receiver parameter.  Seems to me that Go, for example, is
   more-or-less OO with interfaces, methods, and dynamic dispatch (yes,
   also an explicit receiver syntax, but that's not
   determinative); while Rust lacks built-in dynamic dispatch.  C is
   not OO as a language, but as you say, given it supports function
   pointers with structures, it does support an OO programming style.

   This is why I again recommend just not buying into this fight: not
   making any claims about whether Cforall is OO or is not - because
   as I see it, the rest of the paper doesn't depend on whether
   Cforall is OO or not.  That said: this is just a recommendation,
   and I won't quibble over this any further.

 * is a "monitor function" the same as a "mutex function"?
   if so the paper should pick one term; if not, make the distinction clear.

 * "As stated on line 1 because state declarations from the generator
    type can be moved out of the coroutine type into the coroutine main"

    OK sure, but again: *why* would a programmer want to do that?
    (Other than, I guess, to show the difference between coroutines &
    generators?)  Perhaps another way to put this is that the first
    para of 3.2 gives the disadvantages of coroutines vis-à-vis
    generators, briefly describes the extended semantics, but never
    actually says why a programmer may want those extended semantics,
    or how they would benefit.  I don't mean to belabour the point,
    but (generalist?) readers like me would generally benefit from
    those kinds of discussions about each feature throughout the
    paper: why might a programmer want to use them?

> p17 if the multiple-monitor entry procedure really is novel, write a paper
> about that, and only about that.
>
> We do not believe this is a practical suggestion.

 * I'm honestly not trying to be snide here: I'm not an expert on
   monitor or concurrent implementations. Brinch Hansen's original
   monitors were single acquire; this draft does not cite any other
   previous work that I could see. I'm not suggesting that the brief
   mention of this mechanism necessarily be removed from this paper,
   but if this is novel (and a clear advance over a classical OO
   monitor a-la Java, which only acquires the distinguished receiver)
   then that would be worth another paper in itself.

> * conclusion should conclude the paper, not the related.
> We do not understand this comment.

My typo: the paper's conclusion should come at the end, after the
future work section.

To encourage accountability, I'm signing my reviews in 2020.
For the record, I am James Noble,

Reviewing: 2

Comments to the Author
I thank the authors for their detailed response. To respond to a couple of points raised in response to my review (number 2):

- on the Boehm paper and whether code is "all sequential to the compiler": I now understand the authors' position better and suspect we are in violent agreement, except for whether it's appropriate to use the rather breezy phrase "all sequential to the compiler". It would be straightforward to clarify that code not using the atomics features is optimized *as if* it were sequential, i.e. on the assumption of a lack of data races.

- on the distinction between "mutual exclusion" and "synchronization": the added citation does help, in that it makes a coherent case for the definition the authors prefer. However, the text could usefully clarify that this is a matter of definition, not of fact, given especially that in my assessment the authors' preferred definition is not the most common one. (Although the mention of Hoare's apparent use of this definition is one data point, countervailing ones are found in many contemporaneous or later papers, e.g. Habermann's 1972 "Synchronization of Communicating Processes" (CACM 15(3)), Reed & Kanodia's 1979 "Synchronization with eventcounts and sequencers" (CACM 22(2)), and so on.)

I am glad to see that the authors have taken on board most of the straightforward improvements I suggested.

However, a recurring problem of unclear writing still remains through many parts of the paper, including much of sections 2, 3 and 6. To highlight a couple of problem patches (by no means exhaustive):

- section 2 (an expanded version of what was previously section 5.9) lacks examples and is generally obscure and allusive ("the most advanced feature" -- name it! "in triplets" -- there is only one triplet!; what are "execution locations"? "initialize" and "de-initialize" what? "borrowed from the invoker" is a concept in need of explaining or at least a fully explained example -- in what sense does a plain function "borrow" its stack frame? "computation only" as opposed to what? in 2.2, in what way is a "request" fundamental to "synchronization"? and the "implicitly" versus "explicitly" point needs stating as elsewhere, with a concrete example e.g. Java built-in mutexes versus java.util.concurrent).

- section 6: 6.2 omits the most important facts in preference for otherwise inscrutable detail: "identify the kind of parameter" (first say *that there are* kinds of parameter, and what "kinds" means!); "mutex parameters are documentation" is misleading (they are also semantically significant!) and fails to say *what* they mean; the most important thing is surely that 'mutex' is a language feature for performing lock/unlock operations at function entry/exit. So say it! The meanings of examples f3 and f4 remain unclear. Meanwhile in 6.3, "urgent" is not introduced (we are supposed to infer its meaning from Figure 12, but that Figure is incomprehensible to me), and we are told of "external scheduling"'s long history in Ada but not clearly what it actually means; 6.4's description of "waitfor" tells us it is different from an if-else chain but tries to use two *different* inputs to tell us that the behavior is different; tell us an instance where *the same* values of C1 and C2 give different behavior (I even wrote out a truth table and still don't see the semantic difference).

The authors frequently use bracketed phrases, and sometimes slashes "/", in ways that are confusing and/or detrimental to readability. Page 13 line 2's "forward (backward)" is one particularly egregious example. In general I would recommend that the authors try to limit their use of parentheses and slashes as a means of forcing a clearer wording to emerge. Also, the use of "eg." is often cursory and does not explain the examples given, which are frequently a one- or two-word phrase of unclear referent.

Considering the revision more broadly, none of the more extensive or creative rewrites I suggested in my previous review have been attempted, nor any equivalent efforts to improve its readability. The hoisting of the former section 5.9 is a good idea, but the newly added material accompanying it (around Table 1) suffers fresh deficiencies in clarity. Overall the paper is longer than before, even though (as my previous review stated) I believe a shorter paper is required in order to serve the likely purpose of publication. (Indeed, the authors' letter implies that a key goal of publication is to build community and gain external users.)

Given this trajectory, I no longer see a path to an acceptable revision of the present submission. Instead I suggest the authors consider splitting the paper in two: one half about coroutines and stack management, the other about mutexes, monitors and the runtime. (A briefer presentation of the runtime may be helpful in the first paper also, and a brief recap of the generator and coroutine support is obviously needed in the second too.) Both of these new papers would need to be written with a strong emphasis on clarity, paying great care to issues of structure, wording, choices of example, and restraint (saying what's important, not everything that could be said). I am confident the authors could benefit from getting early feedback from others at their institution. For the performance experiments, of course these do not split evenly -- most (but not all) belong in the second of these two hypothetical papers. But the first of them would still have plenty of meat to it; for me, a clear and thorough study of the design space around coroutines is the most interesting and tantalizing prospect.

I do not buy the authors' defense of the limited practical experience or "non-micro" benchmarking presented. Yes, gaining external users is hard and I am sympathetic on that point. But building something at least *somewhat* substantial with your own system should be within reach, and without it the "practice and experience" aspects of the work have not been explored. Clearly C\/ is the product of a lot of work over an extended period, so it is a surprise that no such experience is readily available for inclusion.

Some smaller points:

It does not seem right to state that a stack is essential to Von Neumann architectures -- since the earliest Von Neumann machines (and indeed early Fortran) did not use one.

To elaborate on something another reviewer commented on: it is a surprise to find a "Future work" section *after* the "Conclusion" section. A "Conclusions and future work" section often works well.

Reviewing: 3

Comments to the Author
This is the second round of reviewing.

As in the first review, I found that the paper (and Cforall) contains
a lot of really interesting ideas, but it remains really difficult to
have a good sense of which idea I should use and when. This applies in
different ways to different features from the language:

* coroutines/generators/threads: here there is
  some discussion, but it can be improved.
* internal/external scheduling: I didn't find any direct comparison
  between these features, except by way of example.

I requested similar things in my previous review and I see that
content was added in response to those requests. Unfortunately, I'm
not sure that I can say it improved the paper's overall read. I think
in some sense the additions were "too much" -- I would have preferred
something more like a table or a few paragraphs highlighting the key
reasons one would pick one construct or the other.

In general, I do wonder if the paper is just trying to do too much.
The discussion of clusters and pre-emption in particular feels quite

## Summary

I make a number of suggestions below but the two most important
I think are:

* Recommend to shorten the comparison on coroutine/generator/threads
  in Section 2 to a paragraph with a few examples, or possibly a table
  explaining the trade-offs between the constructs
* Recommend to clarify the relationship between internal/external
  scheduling -- is one more general but more error-prone or low-level?

## Coroutines/generators/threads

There is obviously a lot of overlap between these features, and in
particular between coroutines and generators. As noted in the previous
review, many languages have chosen to offer *only* generators, and to
build coroutines by stacks of generators invoking one another.

I believe the newly introduced Section 2 of the paper is trying to
motivate why each of these constructs exists, but I did not find it
effective. It was dense and difficult to understand. I think the
problem is that Section 2 seems to be trying to derive "from first
principles" why each construct exists, but I think that a more "top
down" approach would be easier to understand.

In fact, the end of Section 2.1 (on page 5) contains a particular
paragraph that embodies this "top down" approach. It starts,
"programmers can now answer three basic questions", and thus gives
some practical advice for which construct you should use and when. I
think that this paragraph, combined with some examples of specific
applications where each construct was needed, would be a better
approach.

I don't think this comparison needs to be very long. It seems clear
enough that one would

* prefer generators for simple computations that yield up many values,
* prefer coroutines for more complex processes that have significant
  internal structure,
* prefer threads for cases where parallel execution is desired or
  needed.

I did appreciate the comparison in Section 2.3 between async-await in
JS/Java and generators/coroutines. I agree with its premise that those
mechanisms are a poor replacement for generators (and, indeed, JS has
a distinct generator mechanism, for example, in part for this reason).
I believe I may have asked for this in a previous review, but having
read it, I wonder if it is really necessary, since those mechanisms
are so different in purpose.

## Internal vs external scheduling

I find the motivation for supporting both internal and external
scheduling to be fairly implicit. After several reads through the
section, I came to the conclusion that internal scheduling is more
expressive than external scheduling, but sometimes less convenient or
clear. Is this correct? If not, it'd be useful to clarify where
external scheduling is more expressive.

The same is true, I think, of the `signal_block` function, which I
have not encountered before; it seems like its behavior can be modeled
with multiple condition variables, but that's clearly more complex.

One question I had about `signal_block`: what happens if one signals
but no other thread is waiting? Does it block until some other thread
waits? Or is that user error?

I would find it very interesting to try and capture some of the
properties that make internal vs external scheduling the better
choice.

For example, it seems to me that external scheduling works well if
there are only a few "key" operations, but that internal scheduling
might be better otherwise, simply because it would be useful to have
the ability to name a signal that can be referenced by many
methods. Consider the bounded buffer from Figure 13: if it had
multiple methods for removing elements, and not just `remove`, then
the `waitfor(remove)` call in `insert` might not be sufficient.

## Comparison of external scheduling to messaging

I did enjoy the section comparing external scheduling to Go's
messaging mechanism, which I believe is a new addition.

I believe that one difference between the Go program and the Cforall
equivalent is that the Goroutine has an associated queue, so that
multiple messages could be enqueued, whereas the Cforall equivalent is
effectively a "bounded buffer" of length 1. Is that correct? I think
this should be stated explicitly. (Presumably, one could modify the
Cforall program to include an explicit vector of queued messages if
desired, but you would also be reimplementing the channel
abstraction.)

Also, in Figure 20, I believe that there is a missing `mutex` keyword.
The figure states:

```
void main(GoRtn & gortn) with(gortn) {
```

but I think it should probably be as follows:

```
void main(GoRtn & mutex gortn) with(gortn) {
```

Unless there is some implicit `mutex` associated with being a main
function for a `monitor thread`.

## Atomic operations and race freedom

I was glad to see that the paper acknowledged that Cforall still had
low-level atomic operations, even if their use is discouraged in favor
of higher-level alternatives.

However, I still feel that the conclusion overstates the value of the
contribution here when it says that "Cforall high-level race-free
monitors and threads provide the core mechanisms for mutual exclusion
and synchronization, without the need for volatile and atomics". I
feel confident that Java programmers, for example, would be advised to
stick with synchronized methods whenever possible, and it seems to me
that they offer similar advantages -- but they sometimes wind up using
volatiles for performance reasons.

I was also confused by the term "race-free" in that sentence. In
particular, I don't think that Cforall has any mechanisms for
preventing *data races*, and it clearly doesn't prevent "race
conditions" (which would bar all sorts of useful programs). I suppose
that "race free" here might be referring to improvements such as
removing barging behavior.

## Performance comparisons

In my previous review, I requested comparisons against Rust and
node.js, and I see that the new version of the paper includes both,
which is a good addition.

One note on the Rust results: I believe that the results are comparing
against the threads found in Rust's standard library, which are
essentially a shallow wrapper around pthreads, and hence the
performance is quite close to pthread performance (as one would
expect). It would perhaps be more interesting to see a comparison
built using [tokio] or [async-std], two of the more prominent
user-space threading libraries that build on Rust's async-await
feature (which operates quite differently than Javascript's
async-await, in that it doesn't cause every async function call to
schedule a distinct task).

That said, I am satisfied with the performance results as they are in
the current revision.

## Minor notes and typos

Several figures used the `with` keyword. I deduced that `with(foo)`
permits one to write `bar` instead of `foo.bar`. It seems worth
introducing. Apologies if this is stated in the paper; if so, I missed
it.

On page 20, section 6.3, "external scheduling and vice versus" should be
"external scheduling and vice versa".

On page 5, section 2.3, the paper states "we content" but it should be
"we contend".

Reviewing: Editor

A few small comments in addition to those of the referees.

Page 1. I don't believe that it is fair to imply that Scala is a "research vehicle" as it is used by major players, Twitter being the most prominent example.

Page 15. Must Cforall threads start after construction (e.g. see your example on page 15, line 21)? I can think of examples where it is not desirable that threads start immediately after construction, e.g. a game with N players, each of whom is expensive to create, but all of whom should be started at the same time.

Page 18, line 17: is using

Date: Tue, 16 Jun 2020 13:45:03 +0000
From: Aaron Thomas <>
Subject: SPE-19-0219.R2 successfully submitted

Dear Dr Buhr,

Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience.

Your manuscript number is SPE-19-0219.R2.  Please mention this number in all future correspondence regarding this submission.

You can view the status of your manuscript at any time by checking your Author Center after logging into  If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site.

Thank you for submitting your manuscript to Software: Practice and Experience.

Software: Practice and Experience Editorial Office

963Date: Wed, 2 Sep 2020 20:55:34 +0000
964From: Richard Jones <>
967Subject: Software: Practice and Experience - Decision on Manuscript ID
968 SPE-19-0219.R2
972Dear Dr Buhr,
974Many thanks for submitting SPE-19-0219.R2 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter. I apologise for the length of time it has taken to get these.
976Both reviewers consider this paper to be close to acceptance. However, before I can accept this paper, I would like you address the comments of Reviewer 2, particularly with regard to the description of the adaptation Java harness to deal with warmup. I would expect to see a convincing argument that the computation has reached a steady state. I would also like you to provide the values for N for each benchmark run. This should be very straightforward for you to do. There are a couple of papers on steady state that you may wish to consult (though I am certainly not pushing my own work).
9781) Barrett, Edd; Bolz-Tereick, Carl Friedrich; Killick, Rebecca; Mount, Sarah and Tratt, Laurence. Virtual Machine Warmup Blows Hot and Cold. OOPSLA 2017.
979Virtual Machines (VMs) with Just-In-Time (JIT) compilers are traditionally thought to execute programs in two phases: the initial warmup phase determines which parts of a program would most benefit from dynamic compilation, before JIT compiling those parts into machine code; subsequently the program is said to be at a steady state of peak performance. Measurement methodologies almost always discard data collected during the warmup phase such that reported measurements focus entirely on peak performance. We introduce a fully automated statistical approach, based on changepoint analysis, which allows us to determine if a program has reached a steady state and, if so, whether that represents peak performance or not. Using this, we show that even when run in the most controlled of circumstances, small, deterministic, widely studied microbenchmarks often fail to reach a steady state of peak performance on a variety of common VMs. Repeating our experiment on 3 different machines, we found that at most 43.5% of pairs consistently reach a steady state of peak performance.
9812) Kalibera, Tomas and Jones, Richard. Rigorous Benchmarking in Reasonable Time. ISMM  2013.
982Experimental evaluation is key to systems research. Because modern systems are complex and non-deterministic, good experimental methodology demands that researchers account for uncertainty. To obtain valid results, they are expected to run many iterations of benchmarks, invoke virtual machines (VMs) several times, or even rebuild VM or benchmark binaries more than once. All this repetition costs time to complete experiments. Currently, many evaluations give up on sufficient repetition or rigorous statistical methods, or even run benchmarks only in training sizes. The results reported often lack proper variation estimates and, when a small difference between two systems is reported, some are simply unreliable.In contrast, we provide a statistically rigorous methodology for repetition and summarising results that makes efficient use of experimentation time. Time efficiency comes from two key observations. First, a given benchmark on a given platform is typically prone to much less non-determinism than the common worst-case of published corner-case studies. Second, repetition is most needed where most uncertainty arises (whether between builds, between executions or between iterations). We capture experimentation cost with a novel mathematical model, which we use to identify the number of repetitions at each level of an experiment necessary and sufficient to obtain a given level of precision.We present our methodology as a cookbook that guides researchers on the number of repetitions they should run to obtain reliable results. We also show how to present results with an effect size confidence interval. As an example, we show how to use our methodology to conduct throughput experiments with the DaCapo and SPEC CPU benchmarks on three recent platforms.
You have 42 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension.
You can upload your revised manuscript and submit it through your Author Center. Log into and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions".
When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided. You can use this space to document any changes you make to the original manuscript.
If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at
Once again, thank you for submitting your manuscript to Software: Practice and Experience. I look forward to receiving your revision.
Prof. Richard Jones
Editor, Software: Practice and Experience
Referee(s)' Comments to Author:
Reviewing: 1
Comments to the Author
Overall, I felt that this draft was an improvement on previous drafts and I don't have further changes to request.
I appreciated the new language to clarify the relationship of external and internal scheduling, for example, as well as the new measurements of Rust tokio. Also, while I still believe that the choice between thread/generator/coroutine and so forth could be made crisper and clearer, the current draft of Section 2 did seem adequate to me in terms of specifying the considerations that users would have to take into account to make the choice.
Reviewing: 2
Comments to the Author
First: let me apologise for the delay on this review. I'll blame the global pandemic combined with my institution's senior management's counterproductive decisions for taking up most of my time and all of my energy.
At this point, reading the responses, I think we've been around the course enough times that further iteration is unlikely to improve the paper any further, so I'm happy to recommend acceptance. My main comments are that there were some good points in the responses to *all* the reviews, and I strongly encourage the authors to incorporate those discursive responses into the final paper so they may benefit readers as well as reviewers. I agree with the recommendation of reviewer #2 that the paper could usefully be split into two, which I think I also made on a previous revision, but I'm happy to leave that decision to the Editor.
Finally, the paper needs to describe how the Java harness was adapted to deal with warmup, and why the computation is considered to have warmed up and reached a steady state; similarly for js and Python. The tables should also give the "N" chosen for each benchmark run.
minor points
* don't start sentences with "However"
* most downloaded isn't an "Award"
Date: Thu, 1 Oct 2020 05:34:29 +0000
From: Richard Jones <>
Subject: Revision reminder - SPE-19-0219.R2
Dear Dr Buhr
This is a reminder that your opportunity to revise and re-submit your manuscript will expire 14 days from now. If you require more time, please contact me directly and I may grant an extension to this deadline; otherwise the option to submit a revision online will not be available.
If your article is of potential interest to the general public (which means it must be timely, groundbreaking, interesting and have an impact on everyday society), then please e-mail explaining the public interest side of the research. Wiley will then investigate the potential for undertaking a global press campaign on the article.
I look forward to receiving your revision.
Prof. Richard Jones
Editor, Software: Practice and Experience