Date: Wed, 26 Jun 2019 20:12:38 +0000 From: Aaron Thomas Reply-To: speoffice@wiley.com To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: SPE-19-0219 successfully submitted 26-Jun-2019 Dear Dr Buhr, Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been received by Software: Practice and Experience. It will be given full consideration for publication in the journal. Your manuscript number is SPE-19-0219. Please mention this number in all future correspondence regarding this submission. You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe. If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site. Thank you for submitting your manuscript to Software: Practice and Experience. Sincerely, Software: Practice and Experience Editorial Office Date: Tue, 12 Nov 2019 22:25:17 +0000 From: Richard Jones Reply-To: R.E.Jones@kent.ac.uk To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: Software: Practice and Experience - Decision on Manuscript ID SPE-19-0219 12-Nov-2019 Dear Dr Buhr, Many thanks for submitting SPE-19-0219 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter. The decision on this paper is that it requires substantial further work. The referees have a number of substantial concerns. All the reviewers found the submission very hard to read; two of the reviewers state that it needs very substantial restructuring. These concerns must be addressed before your submission can be considered further. A revised version of your manuscript that takes into account the comments of the referees will be reconsidered for publication. Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision will be subject to re-review by the referees before a decision is rendered. You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request an extension. You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions". When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided. You can use this space to document any changes you make to the original manuscript. If you feel that your paper could benefit from English language polishing, you may wish to consider having your paper professionally edited for English language by a service such as Wiley's at http://wileyeditingservices.com. Please note that while this service will greatly improve the readability of your paper, it does not guarantee acceptance of your paper by the journal. Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision. Sincerely, Prof. Richard Jones Software: Practice and Experience R.E.Jones@kent.ac.uk Referee(s)' Comments to Author: Reviewing: 1 Comments to the Author This article presents the design and rationale behind the various threading and synchronization mechanisms of C-forall, a new low-level programming language.
This paper is very similar to a companion paper which I have also received: as the papers are similar, so will these reviews be --- in particular any general comments from the other review apply to this paper also. As far as I can tell, the article contains three main ideas: an asynchronous execution / threading model; a model for monitors to provide mutual exclusion; and an implementation. The first two ideas are drawn together in Table 1: unfortunately this is on page 25 of 30 pages of text. Implementation choices and descriptions are scattered throughout the paper - and the sectioning of the paper seems almost arbitrary. The article is about its contributions. Simply adding feature X to language Y isn't by itself a contribution, (when feature X isn't already a contribution). The contribution can be in the design: the motivation, the space of potential design options, the particular design chosen and the rationale for that choice, or the resulting performance. For example: why support two kinds of generators as well as user-level threads? Why support both low and high level synchronization constructs? Similarly I would have found the article easier to follow if it was written top down, presenting the design principles, present the space of language features, justify chosen language features (and rationale) and those excluded, and then present implementation, and performance. Then the writing of the article is often hard to follow, to say the least. Two examples: section 3 "stateful functions" - I've some idea what that is (a function with Algol's "own" or C's "static" variables? but in fact the paper has a rather more specific idea than that. The top of page 3 throws a whole lot of defintions at the reader "generator" "coroutine" "stackful" "stackless" "symmetric" "asymmetric" without every stopping to define each one --- but then in footnote "C" takes the time to explain what C's "main" function is? I cannot imagine a reader of this paper who doesn't know what "main" is in C; especially if they understand the other concepts already presented in the paper. The start of section 3 then does the same thing: putting up a whole lot of definitions, making distinctions and comparisons, even talking about some runtime details, but the critical definition of a monitor doesn't appear until three pages later, at the start of section 5 on p15, lines 29-34 are a good, clear, description of what a monitor actually is. That needs to come first, rather than being buried again after two sections of comparisons, discussions, implementations, and options that are ungrounded because they haven't told the reader what they are actually talking about. First tell the reader what something is, then how they might use it (as programmers: what are the rules and restrictions) and only then start comparison with other things, other approaches, other languages, or implementations. The description of the implementation is similarly lost in the trees without ever really seeing the wood. Figure 19 is crucial here, but it's pretty much at the end of the paper, and comments about implementations are threaded throughout the paper without the context (fig 19) to understand what's going on. The protocol for performance testing may just about suffice for C (although is N constantly ten million, or does it vary for each benchmark) but such evaluation isn't appropriate for garbage-collected or JITTed languages like Java or Go. 
other comments working through the paper - these are mostly low level and are certainly not comprehensive. p1 only a subset of C-forall extensions? p1 "has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance." There's no need to quibble about this. Once a language has inheritance, it's hard to claim it's not object-oriented. p2 barging? signals-as-hints? p3 start your discussion of generations with a simple example of a C-forall generator. Fig 1(b) might do: but put it inline instead of the python example - and explain the key rules and restrictions on the construct. Then don't even start to compare with coroutines until you've presented, described and explained your coroutines... p3 I'd probably leave out the various "C" versions unless there are key points to make you can't make in C-forall. All the alternatives are just confusing. p4 but what's that "with" in Fig 1(B) p5 start with the high level features of C-forall generators... p5 why is the paper explaining networking protocols? p7 lines 1-9 (transforming generator to coroutine - why would I do any of this? Why would I want one instead of the other (do not use "stack" in your answer!) p10 last para "A coroutine must retain its last resumer to suspend back because the resumer is on a different stack. These reverse pointers allow suspend to cycle backwards, " I've no idea what is going on here? why should I care? Shouldn't I just be using threads instead? why not? p16 for the same reasons - what reasons? p17 if the multiple-monitor entry procedure really is novel, write a paper about that, and only about that. p23 "Loose Object Definitions" - no idea what that means. in that section: you can't leave out JS-style dynamic properties. Even in OOLs that (one way or another) allow separate definitions of methods (like Objective-C, Swift, Ruby, C#) at any time a runtime class has a fixed definition. Quite why the detail about bit mask implementation is here anyway, I've no idea. p25 this cluster isn't a CLU cluster then? * conclusion should conclude the paper, not the related. Reviewing: 2 Comments to the Author This paper describes the concurrency features of an extension of C (whose name I will write as "C\/" here, for convenience), including much design-level discussion of the coroutine- and monitor-based features and some microbenchmarks exploring the current implementation's performance. The key message of the latter is that the system's concurrency abstractions are much lighter-weight than the threading found in mainstream C or Java implementations. There is much description of the system and its details, but nothing about (non-artificial) uses of it. Although the microbenchmark data is encouraging, arguably not enough practical experience with the system has been reported here to say much about either its usability advantages or its performance. As such, the main contribution of the paper seem to be to document the existence of the described system and to provide a detailed design rationale and (partial) tutorial. I believe that could be of interest to some readers, so an acceptable manuscript is lurking in here somewhere. Unfortunately, at present the writing style is somewhere between unclear and infuriating. It omits to define terms; it uses needlessly many terms for what are apparently (but not clearly) the same things; it interrupts itself rather than deliver the natural consequent of whatever it has just said; and so on. 
Section 5 is particularly bad in these regards -- see my detailed comments below. Fairly major additional efforts will be needed to turn the present text into a digestible design-and-tutorial document. I suspect that a shorter paper could do this job better than the present manuscript, which is overwrought in parts. p2: lines 4--9 are a little sloppy. It is not the languages but their popular implementations which "adopt" the 1:1 kernel threading model. line 10: "medium work" -- "medium-sized work"? line 18: "is all sequential to the compiler" -- not true in modern compilers, and in 2004 H-J Boehm wrote a tech report describing exactly why ("Threads cannot be implemented as a library", HP Labs). line 20: "knows the optimization boundaries" -- I found this vague. What's an example? line 31: this paragraph has made a lot of claims. Perhaps forward-reference to the parts of the paper that discuss each one. line 33: "so the reader can judge if" -- this reads rather passive-aggressively. Perhaps better: "... to support our argument that..." line 41: "a dynamic partitioning mechanism" -- I couldn't tell what this meant p3. Presenting the concept of a "stateful function" as a new language feature seems odd. In C, functions often have local state thanks to static local variables (or globals, indeed). Of course, that has several limitations. Can you perhaps present your contributions by enumerating these limitations? See also my suggestion below about a possible framing centred on a strawman. line 2: "an old idea that is new again" -- this is too oblique lines 2--15: I found this to be a word/concept soup. Stacks, closures, generators, stackless, stackful, coroutine, symmetric, asymmetric, resume/suspend versus resume/resume... there needs to be a more gradual and structured way to introduce all this, and ideally one that minimises redundancy. Maybe present it as a series of "definitions" each with its own heading, e.g. "A closure is stackless if its local state has statically known fixed size"; "A generator simply means a stackless closure." And so on. Perhaps also strongly introduce the word "activate" as a direct contrast with resume and suspend. These are just a flavour of the sort of changes that might make this paragraph into something readable. Continuing the thought: I found it confusing that by these definitions, a stackful closure is not a stack, even though logically the stack *is* a kind of closure (it is a representation of the current thread's continuation). lines 24--27: without explaining what the boost functor types mean, I don't think the point here comes across. line 34: "semantically coupled" -- I wasn't sure what this meant p4: the point of Figure 1 (C) was not immediately clear. It seems to be showing how one might "compile down" Figure 1 (B). Or is that Figure 1 (A)? It's right that the incidental language features of the system are not front-and-centre, but I'd appreciate some brief glossing of non-C language features as they appear. Examples are the square bracket notation, the pipe notation and the constructor syntax. These explanations could go in the caption of the figure which first uses them, perhaps. Overall I found the figure captions to be terse, and a missed opportunity to explain clearly what was going on. p5 line 23: "This restriction is removed..." -- give us some up-front summary of your contributions and the elements of the language design that will be talked about, so that this isn't an aside.
This will reduce the "twisty passages" feeling that characterises much of the paper. line 40: "a killer asymmetric generator" -- this is stylistically odd, and the sentence about failures doesn't convincigly argue that C\/ will help with them. Have you any experience writing device drivers using C\/? Or any argument that the kinds of failures can be traced to the "stack-ripping" style that one is forced to use without coroutines? Also, a typo on line 41: "device drives". And saying "Windows/Linux" is sloppy... what does the cited paper actually say? p6 lines 13--23: this paragraph is difficult to understand. It seems to be talking about a control-flow pattern roughly equivalent to tail recursion. What is the high-level point, other than that this is possible? line 34: "which they call coroutines" -- a better way to make this point is presumably that the C++20 proposal only provides a specialised kind of coroutine, namely generators, despite its use of the more general word. line 47: "... due to dynamic stack allocation, execution..." -- this sentence doesn't scan. I suggest adding "and for" in the relevant places where currently there are only commas. p8 / Figure 5 (B) -- the GNU C extension of unary "&&" needs to be explained. The whole figure needs a better explanation, in fact. p9, lines 1--10: I wasn't sure this stepping-through really added much value. What are the truly important points to note about this code? p10: similarly, lines 3--27 again are somewhere between tedious and confusing. I'm sure the motivation and details of "starter semantics" can both be stated much more pithily. line 32: "a self-resume does not overwrite the last resumer" -- is this a hack or a defensible principled decision? p11: "a common source of errors" -- among beginners or among production code? Presumably the former. line 23: "with builtin and library" -- not sure what this means lines 31--36: these can be much briefer. The only important point here seems to be that coroutines cannot be copied. p12: line 1: what is a "task"? Does it matter? line 7: calling it "heap stack" seems to be a recipe for confusion. "Stack-and-heap" might be better, and contrast with "stack-and-VLS" perhaps. When "VLS" is glossed, suggest actually expanding its initials: say "length" not "size". line 21: are you saying "cooperative threading" is the same as "non-preemptive scheduling", or that one is a special case (kind) of the other? Both are defensible, but be clear. line 27: "mutual exclusion and synchronization" -- the former is a kind of the latter, so I suggest "and other forms of synchronization". line 30: "can either be a stackless or stackful" -- stray "a", but also, this seems to be switching from generic/background terminology to C\/-specific terminology. An expositional idea occurs: start the paper with a strawman naive/limited realisation of coroutines -- say, Simon Tatham's popular "Coroutines in C" web page -- and identify point by point what the limitations are and how C\/ overcomes them. Currently the presentation is often flat (lacking motivating contrasts) and backwards (stating solutions before problems). The foregoing approach might fix both of these. page 13: line 23: it seems a distraction to mention the Python feature here. p14 line 5: it seems odd to describe these as "stateless" just because they lack shared mutable state. It means the code itself is even more stateful. Maybe the "stack ripping" argument could usefully be given here. 
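For concreteness, the kind of C-only strawman suggested above might look like the following sketch (illustrative names only; persistent state in static locals, the resume point encoded with a switch, in the style of Tatham's "Coroutines in C"). Its limitations -- a single shared instance, state that outlives nothing but the statics, and no way to suspend from a nested call, i.e. the "stack ripping" just mentioned -- are exactly the ones a paper could enumerate before introducing its own constructs.

```c
#include <stdio.h>

/* Strawman "stateful function": a Fibonacci generator in plain C.
   Cross-call state lives in static locals, and the resume point is a
   static integer driven by a switch -- roughly Tatham's coroutine trick. */
int fib_next( void ) {
    static int resume_point = 0;          /* which "suspend" to restart after */
    static int fn1, fn;                   /* persistent cross-call state */
    switch ( resume_point ) {
      case 0:
        fn1 = 1; fn = 0;
        for ( ;; ) {
            resume_point = 1; return fn;  /* "suspend", yielding fn */
      case 1:;
            int next = fn + fn1;
            fn1 = fn; fn = next;
        }
    }
    return -1;                            /* unreachable */
}

int main( void ) {
    for ( int i = 0; i < 10; i += 1 )
        printf( "%d ", fib_next() );      /* 0 1 1 2 3 5 8 13 21 34 */
    printf( "\n" );
    /* Limitations: only one generator instance (static state), the stack
       frame is lost at every return, and a routine called from inside
       fib_next cannot suspend -- the "stack ripping" problem. */
}
```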
line 16: "too restrictive" -- would be good to have a reference to justify this, or at least give a sense of what the state-of-the-art performance in transactional memory systems is (both software and hardware) line 22: "simulate monitors" -- what about just *implementing* monitors? isn't that what these systems do? or is the point more about refining them somehow into something more specialised? p15: sections 4.1 and 4.2 seem adrift and misplaced. Split them into basic parts (which go earlier) and more advanced parts (e.g. barging, which can be explained later). line 31: "acquire/release" -- misses an opportunity to contrast the monitor's "enter/exit" abstraction with the less structured acquire/release of locks. p16 line 12: the "implicit" versus "explicit" point is unclear. Is it perhaps about the contract between an opt-in *discipline* and a language-enforced *guarantee*? line 28: no need to spend ages dithering about which one is default and which one is the explicit qualifier. Tell us what you decided, briefly justify it, and move on. p17: Figure 11: since the main point seems to be to highlight bulk acquire, include a comment which identifies the line where this is happening. line 2: "impossible to statically..." -- or dynamically. Doing it dynamically would be perfectly acceptable (locking is a dynamic operation after all) "guarantees acquisition order is consistent" -- assuming it's done in a single bulk acquire. p18: section 5.3: the text here is a mess. The explanations of "internal" versus "external" scheduling are unclear, and "signals as hints" is not explained. "... can cause thread starvation" -- means including a while loop, or not doing so? "There are three signalling mechanisms.." but the text does not follow that by telling us what they are. My own scribbled attempt at unpicking the internal/external thing: "threads already in the monitor, albeit waiting, have priority over those trying to enter". p19: line 3: "empty condition" -- explain that condition variables don't store anything. So being "empty" means that the queue of waiting threads (threads waiting to be signalled that the condition has become true) is empty. line 6: "... can be transformed into external scheduling..." -- OK, but give some motivation. p20: line 6: "mechnaism" lines 16--20: this is dense and can probably only be made clear with an example p21 line 21: clarify that nested monitor deadlock was describe earlier (in 5.2). (Is the repetition necessary?) line 27: "locks, and by extension monitors" -- this is true but the "by extension" argument is faulty. It is perfectly possible to use locks as a primitive and build a compositional mechanism out of them, e.g. transactions. p22 line 2: should say "restructured" line 33: "Implementing a fast subset check..." -- make clear that the following section explains how to do this. Restructuring the sections themselves could do this, or noting in the text. p23: line 3: "dynamic member adding, eg, JavaScript" -- needs to say "as permitted in JavaScript", and "dynamically adding members" is stylistically better p23: line 18: "urgent stack" -- back-reference to where this was explained before p24 line 7: I did not understand what was more "direct" about "direct communication". Also, what is a "passive monitor" -- just a monitor, given that monitors are passive by design? line 14 / section 5.9: this table was useful and it (or something like it) could be used much earlier on to set the structure of the rest of the paper. 
The explanation at present is too brief, e.g. I did not really understand the point about cases 7 and 8. p25 line 2: instead of casually dropping in a terse explanation for the newly introduced term "virtual processor", introduce it properly. Presumably the point is to give a less ambiguous meaning to "thread" by reserving it only for C\/'s green threads. Table 1: what does "No / Yes" mean? p26 line 15: "transforms user threads into fibres" -- a reference is needed to explain what "fibres" means... guessing it's in the sense of Adya et al. line 20: "Microsoft runtime" -- means Windows? lines 21--26: don't say "interrupt" to mean "signal", especially not without clear introduction. You can use "POSIX signal" to disambiguate from condition variables' "signal". p27 line 3: "frequency is usually long" -- that's a "time period" or "interval", not a frequency line 5: the lengthy quotation is not really necessary; just paraphrase the first sentence and move on. line 20: "to verify the implementation" -- I don't think that means what is intended Tables in section 7 -- too many significant figures. How many overall runs are described? What is N in each case? p29 line 2: "to eliminate this cost" -- arguably confusing since nowadays on commodity CPUs most of the benefits of inlining are not to do with call overheads, but from later optimizations enabled as a consequence of the inlining line 41: "a hierarchy" -- are they a hierarchy? If so, this could be explained earlier. Also, to say these make up "an integrated set... of control-flow features" verges on the tautologous. p30 line 15: "a common case being web servers and XaaS" -- that's two cases Reviewing: 3 Comments to the Author # Cforall review Overall, I quite enjoyed reading the paper. Cforall has some very interesting ideas. I did have some suggestions that I think would be helpful before final publication. I also left notes on various parts of the paper that I found confusing while reading, in hopes that it may be useful to you. ## Summary * Expand on the motivations for including both generators and coroutines, vs trying to build one atop the other * Expand on the motivations for having both symmetric and asymmetric coroutines * Comparison to the async-await model adopted by other languages * C#, JS * Rust and its async/await model * Consider performance comparisons against node.js and Rust frameworks * Discuss performance of monitors vs finer-grained memory models and atomic operations found in other languages * Why both internal/external scheduling for synchronization? ## Generator/coroutines In general, this section was clear, but I thought it would be useful to provide a somewhat deeper look into why Cforall opted for the particular combination of features that it offers. I see three main differences from other languages: * Generators are not exposed as a "function" that returns a generator object, but rather as a kind of struct, with communication happening via mutable state instead of "return values". That is, the generator must be manually resumed and (if I understood) it is expected to store values that can then later be read (perhaps via methods), instead of having a `yield` statement that yields up a value explicitly.
In most other languages, coroutines are rather built by layering single-frame generators atop one another (e.g., using a mechanism like async-await), and symmetric coroutines are basically not supported. I'd like to see a bit more justification for Cforall including all the above mechanisms -- it seemed like symmetric coroutines were a useful building block for some of the user-space threading and custom scheduler mechanisms that were briefly mentioned later in the paper. In the discussion of coroutines, I would have expected a bit more of a comparison to the async-await mechanism offered in other languages. Certainly the semantics of async-await in JavaScript implies significantly more overhead (because each async fn is a distinct heap object). [Rust's approach avoids this overhead][zc], however, and might be worthy of a comparison (see the Performance section). ## Locks and threading ### Comparison to atomics overlooks performance There are several sections in the paper that compare against atomics -- for example, on page 15, the paper shows a simple monitor that encapsulates an integer and compares that to C++ atomics. Later, the paper compares the simplicity of monitors against the `volatile` quantifier from Java. The conclusion in section 8 also revisits this point. While I agree that monitors are simpler, they are obviously also significantly different from a performance perspective -- the paper doesn't seem to address this at all. It's plausible that (e.g.) the `Aint` monitor type described in the paper can be compiled and mapped to the specialized instructions offered by hardware, but I didn't see any mention of how this would be done. There is also no mention of the more nuanced memory ordering relations offered by C++11 and how one might achieve similar performance characteristics in Cforall (perhaps the answer is that one simply doesn't need to; I think that's defensible, but worth stating explicitly). ### Justification for external scheduling feels lacking Cforall includes both internal and external scheduling; I found the explanation for the external scheduling mechanism to be lacking in justification. Why include both mechanisms when most languages seem to make do with only internal scheduling? It would be useful to show some scenarios where external scheduling is truly more powerful. I would have liked to see some more discussion of external scheduling and how it interacts with software engineering best practices. It seems somewhat similar to AOP in certain regards. It seems to add a bit of "extra semantics" to monitor methods, in that any method may now also become a kind of synchronization point. The "open-ended" nature of this feels like it could easily lead to subtle bugs, particularly when code refactoring occurs (which may e.g. split an existing method into two). This seems particularly true if external scheduling can occur across compilation units -- the paper suggested that this is true, but I wasn't entirely clear. I would have also appreciated a few more details on how external scheduling is implemented. It seems to me that there must be some sort of "hooks" on mutex methods so that they can detect whether some other function is waiting on them and awaken those blocked threads. I'm not sure how such hooks are inserted, particularly across compilation units. The material in Section 5.6 didn't quite clarify the matter for me. 
For example, it left me somewhat confused about whether the `f` and `g` functions declared were meant to be local to a translation unit, or shared with other units. ### Presentation of monitors is somewhat confusing I found myself confused fairly often in the section on monitors. I'm just going to leave some notes here on places where I got confused, in hopes that it could be useful to you as feedback on writing that might want to be clarified. To start, I did not realize that the `mutex_opt` notation was a keyword; I thought it was a type annotation. I think this could be called out more explicitly. Later, in section 5.2, the paper discusses `nomutex` annotations, which initially threw me, as they had not been introduced (now I realize that this paragraph is there to justify why there is no such keyword). The paragraph might be rearranged to make that clearer, perhaps by leading with the choice that Cforall made. On page 17, the paper states that "acquiring multiple monitors is safe from deadlock", but this could be stated a bit more precisely: acquiring multiple monitors in a bulk-acquire is safe from deadlock (deadlock can still result from nested acquires). On page 18, the paper states that wait states do not have to be enclosed in loops, as there is no concern of barging. This seems true but there are also other reasons to use loops (e.g., if there are multiple reasons to notify on the same condition). Thus the statement initially surprised me, as barging is only one of many reasons that I typically employ loops around waits. I did not understand the diagram in Figure 12 for some time. Initially, I thought that it was generic to all monitors, and I could not understand the state space. It was only later that I realized it was specific to your example. Updating the caption from "Monitor scheduling" to "Monitor scheduling in the example from Fig 13" might have helped me quite a bit. I spent quite some time reading the boy/girl dating example (\*) and I admit I found it somewhat confusing. For example, I couldn't tell whether there were supposed to be many "girl" threads executing at once, or if there was only supposed to be one girl and one boy thread executing in a loop. Are the girl/boy threads supposed to invoke the girl/boy methods or vice versa? Surely there is some easier way to set this up? I believe that when reading the paper I convinced myself of how it was supposed to be working, but I'm writing this review some days later, and I find myself confused all over again and not able to easily figure it out. (\*) as an aside, I would consider modifying the example to some other form of matching, like customers and support personnel. ## Related work The paper offered a number of comparisons to Go, C#, Scala, and so forth, but seems to have overlooked another recent language, Rust. In many ways, Rust seems to be closest in philosophy to Cforall, so it seems like an odd omission. I already mentioned above that Rust is in the process of shipping [async-await syntax][aa], which is definitely an alternative to the generator/coroutine approach in Cforall (though one with clear pros/cons). ## Performance In the performance section in particular, you might consider comparing against some of the Rust web servers and threading systems. For example, actix is top of the [single query TechEmpower Framework benchmarks][sq], and tokio is near the top of the [plaintext benchmarks][pt] (hyper, the top, is more of an HTTP framework, though it is also written in Rust).
It would seem worth trying to compare their "context switching" costs as well -- I believe both actix and tokio have a notion of threads that could be readily compared. Another addition that might be worth considering is to compare against node.js promises, although I think the comparison to process creation is not as clean. That said, I think that the performance comparison is not a big focus of the paper, so it may not be necessary to add anything to it. ## Authorship of this review I'm going to sign this review. This review was authored by Nicholas D. Matsakis. In the interest of full disclosure, I'm heavily involved in the Rust project, although I don't think that influenced this review in particular. Feel free to reach out to me for clarifying questions. ## Links [aa]: https://blog.rust-lang.org/2019/09/30/Async-await-hits-beta.html [zc]: https://aturon.github.io/blog/2016/08/11/futures/ [sq]: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=db [pt]: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=plaintext Subject: Re: manuscript SPE-19-0219 To: "Peter A. Buhr" From: Richard Jones Date: Tue, 12 Nov 2019 22:43:55 +0000 Dear Dr Buhr You should have received a decision letter on this today. I am sorry that this has taken so long. Unfortunately SP&E receives a lot of submissions and getting reviewers is a perennial problem. Regards Richard Peter A. Buhr wrote on 11/11/2019 13:10: > 26-Jun-2019 > Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" > has been received by Software: Practice and Experience. It will be given > full consideration for publication in the journal. > > Hi, it has been over 4 months since submission of our manuscript SPE-19-0219 > with no response. > > Currently, I am refereeing a paper for IEEE that already cites our prior SP&E > paper and the Master's thesis forming the basis of the SP&E paper under > review. Hence our work is apropos and we want to get it disseminated as soon as > possible. > > [3] A. Moss, R. Schluntz, and P. A. Buhr, "Cforall: Adding modern programming > language features to C," Software - Practice and Experience, vol. 48, > no. 12, pp. 2111-2146, 2018. > > [4] T. Delisle, "Concurrency in C for all," Master's thesis, University of > Waterloo, 2018. [Online]. Available: > https://uwspace.uwaterloo.ca/bitstream/handle/10012/12888 Date: Mon, 13 Jan 2020 05:33:15 +0000 From: Richard Jones Reply-To: R.E.Jones@kent.ac.uk To: pabuhr@uwaterloo.ca Subject: Revision reminder - SPE-19-0219 13-Jan-2020 Dear Dr Buhr SPE-19-0219 This is a reminder that your opportunity to revise and re-submit your manuscript will expire 28 days from now. If you require more time please contact me directly and I may grant an extension to this deadline, otherwise the option to submit a revision online will not be available. I look forward to receiving your revision. Sincerely, Prof. Richard Jones Editor, Software: Practice and Experience https://mc.manuscriptcentral.com/spe Date: Wed, 5 Feb 2020 04:22:18 +0000 From: Aaron Thomas Reply-To: speoffice@wiley.com To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: SPE-19-0219.R1 successfully submitted 04-Feb-2020 Dear Dr Buhr, Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience. Your manuscript number is SPE-19-0219.R1. Please mention this number in all future correspondence regarding this submission.
You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe. If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site. Thank you for submitting your manuscript to Software: Practice and Experience. Sincerely, Software: Practice and Experience Editorial Office Date: Sat, 18 Apr 2020 10:42:13 +0000 From: Richard Jones Reply-To: R.E.Jones@kent.ac.uk To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: Software: Practice and Experience - Decision on Manuscript ID SPE-19-0219.R1 18-Apr-2020 Dear Dr Buhr, Many thanks for submitting SPE-19-0219.R1 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter. I believe that we are making progress here towards a paper that can be published in Software: Practice and Experience. However the referees still have significant concerns about the paper. The journal's focus is on practice and experience, and one of the the reviewers' concerns remains that your submission should focus the narrative more on the perspective of the programmer than the language designer. I agree that this would strengthen your submission, and I ask you to address this as well as the referees' other comments. A revised version of your manuscript that takes into account the comments of the referee(s) will be reconsidered for publication. Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision may be subject to re-review by the referees before a decision is rendered. You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension. You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions". When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided. You can use this space to document any changes you make to the original manuscript. If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at www.wileyauthors.com/eeo/preparation. You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at www.wileyauthors.com/eeo/prepresources. Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision. Sincerely, Richard Prof. Richard Jones Software: Practice and Experience R.E.Jones@kent.ac.uk Referee(s)' Comments to Author: Reviewing: 1 Comments to the Author (A relatively short second review) I thank the authors for their revisions and comprehensive response to reviewers' comments --- many of my comments have been successfully addressed by the revisions. Here I'll structure my comments around the main salient points in that response which I consider would benefit from further explanation. > Table 1 is moved to the start and explained in detail. 
I consider this change makes a significant improvement to the paper, laying out the landscape of language features at the start, and thus addresses my main concerns about the paper. I still have a couple of issues --- perhaps the largest is that it's still not clear at this point in the paper what some of these options are, or crucially how they would be used. I don't know if it's possible to give high-level examples or use cases to be clear about these up front - or if that would duplicate too much information from later in the paper - either way expanding out the discussion - even if just a couple of sentences for each row - would help me more. The point is not just to define these categories but to ensure the readers' understanding of these definitions agrees with that used in the paper. In a little more detail: * 1st para section 2 begs the question: why not support each dimension independently, and let the programmer or library designer combine features? * "execution state" seems a relatively low-level description here. I don't think of e.g. the lambda calculus that way. Perhaps it's as good a term as any. * Why must there "be language mechanisms to create, block/unblock, and join with a thread"? There aren't in Smalltalk (although there are in the runtime). Especially given that in Cforall those mechanisms are *implicit* on thread creation and destruction? * "Case 1 is a function that borrows storage for its state (stack frame/activation) and a thread from its invoker" this much makes perfect sense to me, but I don't understand how a non-stateful, non-threaded function can then retain "this state across callees, ie, function local-variables are retained on the stack across calls." How can it retain function-local values *across calls* when it doesn't have any function-local state? I'm not sure if I see two separate cases here - roughly equivalent to C functions without static storage, and then C functions *with* static storage. I assumed that was the distinction between cases 1 & 3; but perhaps the actual distinction is that 3 has a suspend/resume point, and so the "state" in figure 1 is this component of execution state (viz figs 1 & 2), not the state representing the cross-call variables? > but such evaluation isn't appropriate for garbage-collected or JITTed languages like Java or Go. For JITTed languages in particular, reporting peak performance needs to "warm up" the JIT with a number of iterations before beginning measurement. Actually for JITs it's even worse: see Edd Barrett et al OOPSLA 2017. minor issues: * footnote A - I've looked at various other papers & the website to try to understand how "object-oriented" Cforall is - I'm still not sure. This footnote says Cforall has "virtuals" - presumably virtual functions, i.e. dynamic dispatch - and inheritance: that really is OO as far as I (and most OO people) are concerned. For example Haskell doesn't have inheritance, so it's not OO; while CLOS (the Common Lisp *Object* System) or things like Cecil and Dylan are considered OO even though they have "multiple function parameters as receivers", lack "lexical binding between a structure and set of functions", and don't have explicit receiver invocation syntax. Python has receiver syntax, but unlike Java or Smalltalk or C++, method declarations still need to have an explicit "self" receiver parameter.
Seems to me that Go, for example, is more-or-less OO with interfaces, methods, and dynamic dispatch (yes, also an explicit receiver syntax, but that's not determinative); while Rust lacks dynamic dispatch built-in. C is not OO as a language, but as you say, given it supports function pointers with structures, it does support an OO programming style. This is why I again recommend just not buying into this fight: not making any claims about whether Cforall is OO or is not - because as I see it, the rest of the paper doesn't depend on whether Cforall is OO or not. That said: this is just a recommendation, and I won't quibble over this any further. * is a "monitor function" the same as a "mutex function"? if so the paper should pick one term; if not, make the distinction clear. * "As stated on line 1 because state declarations from the generator type can be moved out of the coroutine type into the coroutine main" OK sure, but again: *why* would a programmer want to do that? (Other than, I guess, to show the difference between coroutines & generators?) Perhaps another way to put this is that the first para of 3.2 gives the disadvantages of coroutines vis-a-vis generators, briefly describes the extended semantics, but never actually says why a programmer may want those extended semantics, or how they would benefit. I don't mean to belabour the point, but (generalist?) readers like me would generally benefit from those kinds of discussions about each feature throughout the paper: why might a programmer want to use them? > p17 if the multiple-monitor entry procedure really is novel, write a paper > about that, and only about that. > We do not believe this is a practical suggestion. * I'm honestly not trying to be snide here: I'm not an expert on monitor or concurrent implementations. Brinch Hansen's original monitors were single acquire; this draft does not cite any other previous work that I could see. I'm not suggesting that the brief mention of this mechanism necessarily be removed from this paper, but if this is novel (and a clear advance over a classical OO monitor a-la Java which only acquires the distinguished receiver) then that would be worth another paper in itself. > * conclusion should conclude the paper, not the related. > We do not understand this comment. My typo: the paper's conclusion should come at the end, after the future work section. To encourage accountability, I'm signing my reviews in 2020. For the record, I am James Noble, kjx@ecs.vuw.ac.nz. Reviewing: 2 Comments to the Author I thank the authors for their detailed response. To respond to a couple of points raised in response to my review (number 2): - on the Boehm paper and whether code is "all sequential to the compiler": I now understand the authors' position better and suspect we are in violent agreement, except for whether it's appropriate to use the rather breezy phrase "all sequential to the compiler". It would be straightforward to clarify that code not using the atomics features is optimized *as if* it were sequential, i.e. on the assumption of a lack of data races. - on the distinction between "mutual exclusion" and "synchronization": the added citation does help, in that it makes a coherent case for the definition the authors prefer. However, the text could usefully clarify that this is a matter of definition not of fact, given especially that in my assessment the authors' preferred definition is not the most common one.
(Although the mention of Hoare's apparent use of this definition is one data point, countervailing ones are found in many contemporaneous or later papers, e.g. Habermann's 1972 "Synchronization of Communicating Processes" (CACM 15(3)), Reed & Kanodia's 1979 "Synchronization with eventcounts and sequencers" (CACM (22(2)) and so on.) I am glad to see that the authors have taken on board most of the straightforward improvements I suggested. However, a recurring problem of unclear writing still remains through many parts of the paper, including much of sections 2, 3 and 6. To highlight a couple of problem patches (by no means exhaustive): - section 2 (an expanded version of what was previously section 5.9) lacks examples and is generally obscure and allusory ("the most advanced feature" -- name it! "in triplets" -- there is only one triplet!; what are "execution locations"? "initialize" and "de-initialize" what? "borrowed from the invoker" is a concept in need of explaining or at least a fully explained example -- in what sense does a plain function borrow" its stack frame? "computation only" as opposed to what? in 2.2, in what way is a "request" fundamental to "synchronization"? and the "implicitly" versus "explicitly" point needs stating as elsewhere, with a concrete example e.g. Java built-in mutexes versus java.util.concurrent). - section 6: 6.2 omits the most important facts in preference for otherwise inscrutable detail: "identify the kind of parameter" (first say *that there are* kinds of parameter, and what "kinds" means!); "mutex parameters are documentation" is misleading (they are also semantically significant!) and fails to say *what* they mean; the most important thing is surely that 'mutex' is a language feature for performing lock/unlock operations at function entry/exit. So say it! The meanings of examples f3 and f4 remain unclear. Meanwhile in 6.3, "urgent" is not introduced (we are supposed to infer its meaning from Figure 12, but that Figure is incomprehensible to me), and we are told of "external scheduling"'s long history in Ada but not clearly what it actually means; 6.4's description of "waitfor" tells us it is different from an if-else chain but tries to use two *different* inputs to tell us that the behavior is different; tell us an instance where *the same* values of C1 and C2 give different behavior (I even wrote out a truth table and still don't see the semantic difference) The authors frequently use bracketed phrases, and sometimes slashes "/", in ways that are confusing and/or detrimental to readability. Page 13 line 2's "forward (backward)" is one particularly egregious example. In general I would recommend the the authors try to limit their use of parentheses and slashes as a means of forcing a clearer wording to emerge. Also, the use of "eg." is often cursory and does not explain the examples given, which are frequently a one- or two-word phrase of unclear referent. Considering the revision more broadly, none of the more extensive or creative rewrites I suggested in my previous review have been attempted, nor any equivalent efforts to improve its readability. The hoisting of the former section 5.9 is a good idea, but the newly added material accompanying it (around Table 1) suffers fresh deficiencies in clarity. Overall the paper is longer than before, even though (as my previous review stated), I believe a shorter paper is required in order to serve the likely purpose of publication. 
(Indeed, the authors' letter implies that a key goal of publication is to build community and gain external users.) Given this trajectory, I no longer see a path to an acceptable revision of the present submission. Instead I suggest the authors consider splitting the paper in two: one half about coroutines and stack management, the other about mutexes, monitors and the runtime. (A briefer presentation of the runtime may be helpful in the first paper also, and a brief recap of the generator and coroutine support is obviously needed in the second too.) Both of these new papers would need to be written with a strong emphasis on clarity, paying great care to issues of structure, wording, choices of example, and restraint (saying what's important, not everything that could be said). I am confident the authors could benefit from getting early feedback from others at their institution. For the performance experiments, of course these do not split evenly -- most (but not all) belong in the second of these two hypothetical papers. But the first of them would still have plenty of meat to it; for me, a clear and thorough study of the design space around coroutines is the most interesting and tantalizing prospect. I do not buy the authors' defense of the limited practical experience or "non-micro" benchmarking presented. Yes, gaining external users is hard and I am sympathetic on that point. But building something at least *somewhat* substantial with your own system should be within reach, and without it the "practice and experience" aspects of the work have not been explored. Clearly C\/ is the product of a lot of work over an extended period, so it is a surprise that no such experience is readily available for inclusion. Some smaller points: It does not seem right to state that a stack is essential to Von Neumann architectures -- since the earliest Von Neumann machines (and indeed early Fortran) did not use one. To elaborate on something another reviewer commented on: it is a surprise to find a "Future work" section *after* the "Conclusion" section. A "Conclusions and future work" section often works well. Reviewing: 3 Comments to the Author This is the second round of reviewing. As in the first review, I found that the paper (and Cforall) contains a lot of really interesting ideas, but it remains really difficult to have a good sense of which idea I should use and when. This applies in different ways to different features from the language: * coroutines/generators/threads: here there is some discussion, but it can be improved. * interal/external scheduling: I didn't find any direct comparison between these features, except by way of example. I requested similar things in my previous review and I see that content was added in response to those requests. Unfortunately, I'm not sure that I can say it improved the paper's overall read. I think in some sense the additions were "too much" -- I would have preferred something more like a table or a few paragraphs highlighting the key reasons one would pick one construct or the other. In general, I do wonder if the paper is just trying to do too much. The discussion of clusters and pre-emption in particular feels quite rushed. 
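For reference, the conventional baseline the reviews keep contrasting against -- "internal scheduling" written directly with a pthread mutex and condition variables, including the guard loop that is needed when barging or spurious wakeups are possible -- might look like the following sketch. This is plain C with illustrative names, not Cforall code; it is only meant to fix terminology for the comparison requested below.

```c
#include <pthread.h>

/* A bounded buffer protected by one lock, with waiting expressed by
   condition variables "inside" the abstraction (internal scheduling). */
#define SIZE 10
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;     /* named conditions */
    int elems[SIZE], front, back, count;
} Buffer;

Buffer buf = { PTHREAD_MUTEX_INITIALIZER,
               PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER };

void insert( Buffer * b, int elem ) {
    pthread_mutex_lock( &b->lock );          /* "monitor" entry */
    while ( b->count == SIZE )               /* loop: waiters may be barged past */
        pthread_cond_wait( &b->not_full, &b->lock );
    b->elems[b->back] = elem;
    b->back = (b->back + 1) % SIZE;
    b->count += 1;
    pthread_cond_signal( &b->not_empty );    /* a hint: wake one waiting consumer */
    pthread_mutex_unlock( &b->lock );        /* "monitor" exit */
}

int remove_elem( Buffer * b ) {              /* renamed to avoid stdio's remove */
    pthread_mutex_lock( &b->lock );
    while ( b->count == 0 )
        pthread_cond_wait( &b->not_empty, &b->lock );
    int elem = b->elems[b->front];
    b->front = (b->front + 1) % SIZE;
    b->count -= 1;
    pthread_cond_signal( &b->not_full );
    pthread_mutex_unlock( &b->lock );
    return elem;
}
```

As the reviews above describe it, a Cforall mutex function performs the lock/unlock implicitly at entry/exit, its non-barging signalling is what lets the paper drop the guard loops, and external scheduling (waitfor) instead names which mutex function may run next rather than a condition.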
## Summary I make a number of suggestions below but the two most important I think are: * Recommend to shorten the comparison on coroutine/generator/threads in Section 2 to a paragraph with a few examples, or possibly a table explaining the trade-offs between the constructs * Recommend to clarify the relationship between internal/external scheduling -- is one more general but more error-prone or low-level? ## Coroutines/generators/threads There is obviously a lot of overlap between these features, and in particular between coroutines and generators. As noted in the previous review, many languages have chosen to offer *only* generators, and to build coroutines by stacks of generators invoking one another. I believe the newly introduced Section 2 of the paper is trying to motivate why each of these constructs exist, but I did not find it effective. It was dense and difficult to understand. I think the problem is that Section 2 seems to be trying to derive "from first principles" why each construct exists, but I think that a more "top down" approach would be easier to understand. In fact, the end of Section 2.1 (on page 5) contains a particular paragraph that embodies this "top down" approach. It starts, "programmers can now answer three basic questions", and thus gives some practical advice for which construct you should use and when. I think giving some examples of specific applications that this paragraph, combined with some examples of cases where each construct was needed, would be a better approach. I don't think this compariosn needs to be very long. It seems clear enough that one would * prefer generators for simple computations that yield up many values, * prefer coroutines for more complex processes that have significant internal structure, * prefer threads for cases where parallel execution is desired or needed. I did appreciate the comparison in Section 2.3 between async-await in JS/Java and generators/coroutines. I agree with its premise that those mechanisms are a poor replacement for generators (and, indeed, JS has a distinct generator mechanism, for example, in part for this reason). I believe I may have asked for this in a previous review, but having read it, I wonder if it is really necessary, since those mechanisms are so different in purpose. ## Internal vs external scheduling I find the motivation for supporting both internal and external scheduling to be fairly implicit. After several reads through the section, I came to the conclusion that internal scheduling is more expressive than external scheduling, but sometimes less convenient or clear. Is this correct? If not, it'd be useful to clarify where external scheduling is more expressive. The same is true, I think, of the `signal_block` function, which I have not encountered before; it seems like its behavior can be modeled with multiple condition variables, but that's clearly more complex. One question I had about `signal_block`: what happens if one signals but no other thread is waiting? Does it block until some other thread waits? Or is that user error? I would find it very interesting to try and capture some of the properties that make internal vs external scheduling the better choice. For example, it seems to me that external scheduling works well if there are only a few "key" operations, but that internal scheduling might be better otherwise, simply because it would be useful to have the ability to name a signal that can be referenced by many methods. 
Consider the bounded buffer from Figure 13: if it had multiple methods for removing elements, and not just `remove`, then the `waitfor(remove)` call in `insert` might not be sufficient. ## Comparison of external scheduling to messaging I did enjoy the section comparing external scheduling to Go's messaging mechanism, which I believe is a new addition. I believe that one difference between the Go program and the Cforall equivalent is that the Goroutine has an associated queue, so that multiple messages could be enqueued, whereas the Cforall equivalent is effectively a "bounded buffer" of length 1. Is that correct? I think this should be stated explicitly. (Presumably, one could modify the Cforall program to include an explicit vector of queued messages if desired, but you would also be reimplementing the channel abstraction.) Also, in Figure 20, I believe that there is a missing `mutex` keyword. The fiugre states: ``` void main(GoRtn & gortn) with(gortn) { ``` but I think it should probably be as follows: ``` void main(GoRtn & mutex gortn) with(gortn) { ``` Unless there is some implicit `mutex` associated with being a main function for a `monitor thread`. ## Atomic operations and race freedom I was glad to see that the paper acknowledged that Cforall still had low-level atomic operations, even if their use is discouraged in favor of higher-level alternatives. However, I still feel that the conclusion overstates the value of the contribution here when it says that "Cforall high-level race-free monitors and threads provide the core mechanisms for mutual exclusion and synchronization, without the need for volatile and atomics". I feel confident that Java programmers, for example, would be advised to stick with synchronized methods whenever possible, and it seems to me that they offer similar advantages -- but they sometimes wind up using volatiles for performance reasons. I was also confused by the term "race-free" in that sentence. In particular, I don't think that Cforall has any mechanisms for preventing *data races*, and it clearly doesn't prevent "race conditions" (which would bar all sorts of useful programs). I suppose that "race free" here might be referring to the improvements such as removing barging behavior. ## Performance comparisons In my previous review, I requested comparisons against Rust and node.js, and I see that the new version of the paper includes both, which is a good addition. One note on the Rust results: I believe that the results are comparing against the threads found in Rust's standard library, which are essentially a shallow wrapper around pthreads, and hence the performance is quite close to pthread performance (as one would expect). It would perhaps be more interesting to see a comparison built using [tokio] or [async-std], two of the more prominent user-space threading libraries that build on Rust's async-await feature (which operates quite differently than Javascript's async-await, in that it doesn't cause every aync function call to schedule a distinct task). [tokio]: https://tokio.rs/ [async-std]: https://async.rs/ That said, I am satisfied with the performance results as they are in the current revision. ## Minor notes and typos Several figures used the `with` keyword. I deduced that `with(foo)` permits one to write `bar` instead of `foo.bar`. It seems worth introducing. Apologies if this is stated in the paper, if so I missed it. On page 20, section 6.3, "external scheduling and vice versus" should be "external scheduling and vice versa". 
On page 5, section 2.3, the paper states "we content" but it should be "we contend".

Reviewing: Editor A few small comments in addition to those of the referees. Page 1. I don't believe that it is fair to imply that Scala is a "research vehicle", as it is used by major players, Twitter being the most prominent example. Page 15. Must Cforall threads start after construction (e.g. see your example on page 15, line 21)? I can think of examples where it is not desirable that threads start immediately after construction, e.g. a game with N players, each of whom is expensive to create, but all of whom should be started at the same time. Page 18, line 17: is using

Date: Tue, 16 Jun 2020 13:45:03 +0000 From: Aaron Thomas Reply-To: speoffice@wiley.com To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: SPE-19-0219.R2 successfully submitted 16-Jun-2020 Dear Dr Buhr, Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience. Your manuscript number is SPE-19-0219.R2. Please mention this number in all future correspondence regarding this submission. You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe. If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site. Thank you for submitting your manuscript to Software: Practice and Experience. Sincerely, Software: Practice and Experience Editorial Office

Date: Wed, 2 Sep 2020 20:55:34 +0000 From: Richard Jones Reply-To: R.E.Jones@kent.ac.uk To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: Software: Practice and Experience - Decision on Manuscript ID SPE-19-0219.R2 02-Sep-2020 Dear Dr Buhr, Many thanks for submitting SPE-19-0219.R2 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter. I apologise for the length of time it has taken to get these. Both reviewers consider this paper to be close to acceptance. However, before I can accept this paper, I would like you to address the comments of Reviewer 2, particularly with regard to the description of the adaptation of the Java harness to deal with warmup. I would expect to see a convincing argument that the computation has reached a steady state. I would also like you to provide the values for N for each benchmark run. This should be very straightforward for you to do. There are a couple of papers on steady state that you may wish to consult (though I am certainly not pushing my own work).

1) Barrett, Edd; Bolz-Tereick, Carl Friedrich; Killick, Rebecca; Mount, Sarah and Tratt, Laurence. Virtual Machine Warmup Blows Hot and Cold. OOPSLA 2017. https://doi.org/10.1145/3133876 Virtual Machines (VMs) with Just-In-Time (JIT) compilers are traditionally thought to execute programs in two phases: the initial warmup phase determines which parts of a program would most benefit from dynamic compilation, before JIT compiling those parts into machine code; subsequently the program is said to be at a steady state of peak performance. Measurement methodologies almost always discard data collected during the warmup phase such that reported measurements focus entirely on peak performance.
We introduce a fully automated statistical approach, based on changepoint analysis, which allows us to determine if a program has reached a steady state and, if so, whether that represents peak performance or not. Using this, we show that even when run in the most controlled of circumstances, small, deterministic, widely studied microbenchmarks often fail to reach a steady state of peak performance on a variety of common VMs. Repeating our experiment on 3 different machines, we found that at most 43.5% of pairs consistently reach a steady state of peak performance.

2) Kalibera, Tomas and Jones, Richard. Rigorous Benchmarking in Reasonable Time. ISMM 2013. https://doi.org/10.1145/2555670.2464160 Experimental evaluation is key to systems research. Because modern systems are complex and non-deterministic, good experimental methodology demands that researchers account for uncertainty. To obtain valid results, they are expected to run many iterations of benchmarks, invoke virtual machines (VMs) several times, or even rebuild VM or benchmark binaries more than once. All this repetition costs time to complete experiments. Currently, many evaluations give up on sufficient repetition or rigorous statistical methods, or even run benchmarks only in training sizes. The results reported often lack proper variation estimates and, when a small difference between two systems is reported, some are simply unreliable. In contrast, we provide a statistically rigorous methodology for repetition and summarising results that makes efficient use of experimentation time. Time efficiency comes from two key observations. First, a given benchmark on a given platform is typically prone to much less non-determinism than the common worst-case of published corner-case studies. Second, repetition is most needed where most uncertainty arises (whether between builds, between executions or between iterations). We capture experimentation cost with a novel mathematical model, which we use to identify the number of repetitions at each level of an experiment necessary and sufficient to obtain a given level of precision. We present our methodology as a cookbook that guides researchers on the number of repetitions they should run to obtain reliable results. We also show how to present results with an effect size confidence interval. As an example, we show how to use our methodology to conduct throughput experiments with the DaCapo and SPEC CPU benchmarks on three recent platforms.

You have 42 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension. You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions". When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided. You can use this space to document any changes you make to the original manuscript. If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at www.wileyauthors.com/eeo/preparation.
You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at www.wileyauthors.com/eeo/prepresources. Once again, thank you for submitting your manuscript to Software: Practice and Experience. I look forward to receiving your revision. Sincerely, Richard Prof. Richard Jones Editor, Software: Practice and Experience R.E.Jones@kent.ac.uk

Referee(s)' Comments to Author:

Reviewing: 1 Comments to the Author Overall, I felt that this draft was an improvement on previous drafts and I don't have further changes to request. I appreciated the new language to clarify the relationship of external and internal scheduling, for example, as well as the new measurements of Rust tokio. Also, while I still believe that the choice between thread/generator/coroutine and so forth could be made crisper and clearer, the current draft of Section 2 did seem adequate to me in terms of specifying the considerations that users would have to take into account to make the choice.

Reviewing: 2 Comments to the Author First: let me apologise for the delay on this review. I'll blame the global pandemic combined with my institution's senior management's counterproductive decisions for taking up most of my time and all of my energy. At this point, reading the responses, I think we've been around the course enough times that further iteration is unlikely to really improve the paper any further, so I'm happy to recommend acceptance. My main comment is that there were some good points in the responses to *all* the reviews, and I strongly encourage the authors to incorporate those discursive responses into the final paper so they may benefit readers as well as reviewers. I agree with the recommendation of reviewer #2 that the paper could usefully be split into two, a recommendation I think I also made on a previous revision, but I'm happy to leave that decision to the Editor. Finally, the paper needs to describe how the Java harness was adapted to deal with warmup, and why the computation has warmed up and reached a steady state -- similarly for JS and Python. The tables should also give the "N" chosen for each benchmark run.

Minor points:
* don't start sentences with "However"
* "most downloaded" isn't an "Award"

Date: Thu, 1 Oct 2020 05:34:29 +0000 From: Richard Jones Reply-To: R.E.Jones@kent.ac.uk To: pabuhr@uwaterloo.ca Subject: Revision reminder - SPE-19-0219.R2 01-Oct-2020 Dear Dr Buhr SPE-19-0219.R2 This is a reminder that your opportunity to revise and re-submit your manuscript will expire 14 days from now. If you require more time, please contact me directly and I may grant an extension to this deadline; otherwise the option to submit a revision online will not be available. If your article is of potential interest to the general public (which means it must be timely, groundbreaking, interesting, and have an impact on everyday society), then please e-mail ejp@wiley.co.uk explaining the public interest side of the research. Wiley will then investigate the potential for undertaking a global press campaign on the article. I look forward to receiving your revision. Sincerely, Prof.
Richard Jones Editor, Software: Practice and Experience https://mc.manuscriptcentral.com/spe Date: Tue, 6 Oct 2020 15:29:41 +0000 From: Mayank Roy Chowdhury Reply-To: speoffice@wiley.com To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca Subject: SPE-19-0219.R3 successfully submitted 06-Oct-2020 Dear Dr Buhr, Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience. Your manuscript number is SPE-19-0219.R3. Please mention this number in all future correspondence regarding this submission. You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe. If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site. Thank you for submitting your manuscript to Software: Practice and Experience. Sincerely, Software: Practice and Experience Editorial Office