source: doc/papers/concurrency/mail2 @ c7806122

Last change on this file since c7806122 was 016b1eb, checked in by Peter A. Buhr, 4 years ago: final changes for round 2 of the SP&E concurrency paper.
1Date: Wed, 26 Jun 2019 20:12:38 +0000
2From: Aaron Thomas <onbehalfof@manuscriptcentral.com>
3Reply-To: speoffice@wiley.com
4To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
5Subject: SPE-19-0219 successfully submitted
6
726-Jun-2019
8
9Dear Dr Buhr,
10
11Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been received by Software: Practice and Experience. It will be given full consideration for publication in the journal.
12
13Your manuscript number is SPE-19-0219.  Please mention this number in all future correspondence regarding this submission.
14
15You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe.  If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site.
16
17
18Thank you for submitting your manuscript to Software: Practice and Experience.
19
20Sincerely,
21
22Software: Practice and Experience Editorial Office
23
24
25
26Date: Tue, 12 Nov 2019 22:25:17 +0000
27From: Richard Jones <onbehalfof@manuscriptcentral.com>
28Reply-To: R.E.Jones@kent.ac.uk
29To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
30Subject: Software: Practice and Experience - Decision on Manuscript ID
31 SPE-19-0219
32
3312-Nov-2019
34
35Dear Dr Buhr,
36
37Many thanks for submitting SPE-19-0219 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter.
38
The decision on this paper is that it requires substantial further work. The referees have a number of substantial concerns. All the reviewers found the submission very hard to read; two of the reviewers state that it needs very substantial restructuring. These concerns must be addressed before your submission can be considered further.
40
41A revised version of your manuscript that takes into account the comments of the referees will be reconsidered for publication.
42
43Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision will be subject to re-review by the referees before a decision is rendered.
44
45You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request an extension.
46
47You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe  and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions".
48
49When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided.  You can use this space to document any changes you make to the original manuscript.
50
51If you feel that your paper could benefit from English language polishing, you may wish to consider having your paper professionally edited for English language by a service such as Wiley's at http://wileyeditingservices.com. Please note that while this service will greatly improve the readability of your paper, it does not guarantee acceptance of your paper by the journal.
52 
53Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision.
54
55
56Sincerely,
57
58Prof. Richard Jones
59Software: Practice and Experience
60R.E.Jones@kent.ac.uk
61
62
63Referee(s)' Comments to Author:
64
65Reviewing: 1
66
67Comments to the Author
68This article presents the design and rationale behind the various
69threading and synchronization mechanisms of C-forall, a new low-level
70programming language.  This paper is very similar to a companion paper
71which I have also received: as the papers are similar, so will these
72reviews be --- in particular any general comments from the other
73review apply to this paper also.
74
75As far as I can tell, the article contains three main ideas: an
76asynchronous execution / threading model; a model for monitors to
77provide mutual exclusion; and an implementation.  The first two ideas
78are drawn together in Table 1: unfortunately this is on page 25 of 30
79pages of text. Implementation choices and descriptions are scattered
80throughout the paper - and the sectioning of the paper seems almost
81arbitrary.
82
83The article is about its contributions.  Simply adding feature X to
language Y isn't by itself a contribution (when feature X isn't
85already a contribution).  The contribution can be in the design: the
86motivation, the space of potential design options, the particular
87design chosen and the rationale for that choice, or the resulting
88performance.  For example: why support two kinds of generators as well
89as user-level threads?  Why support both low and high level
90synchronization constructs?  Similarly I would have found the article
easier to follow if it was written top down: presenting the design
principles, then the space of language features, justifying the chosen
language features (and their rationale) and those excluded, and then
presenting the implementation and performance.
95
96Then the writing of the article is often hard to follow, to say the
97least. Two examples: section 3 "stateful functions" - I've some idea
what that is (a function with Algol's "own" or C's "static" variables?),
but in fact the paper has a rather more specific idea than that. The
top of page 3 throws a whole lot of definitions at the reader --
"generator", "coroutine", "stackful", "stackless", "symmetric",
"asymmetric" -- without ever stopping to define each one --- but then in
103footnote "C" takes the time to explain what C's "main" function is?  I
104cannot imagine a reader of this paper who doesn't know what "main" is
105in C; especially if they understand the other concepts already
106presented in the paper.  The start of section 3 then does the same
107thing: putting up a whole lot of definitions, making distinctions and
108comparisons, even talking about some runtime details, but the critical
109definition of a monitor doesn't appear until three pages later, at the
start of section 5 on p15; lines 29-34 are a good, clear description
111of what a monitor actually is.  That needs to come first, rather than
112being buried again after two sections of comparisons, discussions,
113implementations, and options that are ungrounded because they haven't
114told the reader what they are actually talking about.  First tell the
115reader what something is, then how they might use it (as programmers:
116what are the rules and restrictions) and only then start comparison
117with other things, other approaches, other languages, or
118implementations.
119
120The description of the implementation is similarly lost in the trees
121without ever really seeing the wood. Figure 19 is crucial here, but
122it's pretty much at the end of the paper, and comments about
123implementations are threaded throughout the paper without the context
124(fig 19) to understand what's going on.   The protocol for performance
testing may just about suffice for C (although is N constantly ten
million, or does it vary for each benchmark?) but such evaluation isn't
127appropriate for garbage-collected or JITTed languages like Java or Go.
128
129other comments working through the paper - these are mostly low level
130and are certainly not comprehensive.
131
132p1 only a subset of C-forall extensions?
133
134p1 "has features often associated with object-oriented programming
135languages, such as constructors, destructors, virtuals and simple
136inheritance."   There's no need to quibble about this. Once a language
137has inheritance, it's hard to claim it's not object-oriented.
138
139
140p2 barging? signals-as-hints?
141
p3 start your discussion of generators with a simple example of a
143C-forall generator.  Fig 1(b) might do: but put it inline instead of
144the python example - and explain the key rules and restrictions on the
145construct.  Then don't even start to compare with coroutines until
146you've presented, described and explained your coroutines...
147p3 I'd probably leave out the various "C" versions unless there are
148key points to make you can't make in C-forall. All the alternatives
149are just confusing.
150
151
152p4 but what's that "with" in Fig 1(B)
153
154p5 start with the high level features of C-forall generators...
155
156p5 why is the paper explaining networking protocols?
157
p7 lines 1-9 (transforming generator to coroutine) - why would I do any
159of this? Why would I want one instead of the other (do not use "stack"
160in your answer!)
161
162p10 last para "A coroutine must retain its last resumer to suspend
163back because the resumer is on a different stack. These reverse
164pointers allow suspend to cycle backwards, "  I've no idea what is
165going on here?  why should I care?  Shouldn't I just be using threads
166instead?  why not?
167
168p16 for the same reasons - what reasons?
169
170p17 if the multiple-monitor entry procedure really is novel, write a
171paper about that, and only about that.
172
p23 "Loose Object Definitions" - no idea what that means.  In that
174section: you can't leave out JS-style dynamic properties.  Even in
175OOLs that (one way or another) allow separate definitions of methods
(like Objective-C, Swift, Ruby, C#), at any given time a runtime class has a
177fixed definition.  Quite why the detail about bit mask implementation
178is here anyway, I've no idea.
179
180p25 this cluster isn't a CLU cluster then?
181
182* conclusion should conclude the paper, not the related.
183
184
185Reviewing: 2
186
187Comments to the Author
188This paper describes the concurrency features of an extension of C (whose name I will write as "C\/" here, for convenience), including much design-level discussion of the coroutine- and monitor-based features and some microbenchmarks exploring the current implementation's performance. The key message of the latter is that the system's concurrency abstractions are much lighter-weight than the threading found in mainstream C or Java implementations.
189
190There is much description of the system and its details, but nothing about (non-artificial) uses of it. Although the microbenchmark data is encouraging, arguably not enough practical experience with the system has been reported here to say much about either its usability advantages or its performance.
191
As such, the main contribution of the paper seems to be to document the existence of the described system and to provide a detailed design rationale and (partial) tutorial. I believe that could be of interest to some readers, so an acceptable manuscript is lurking in here somewhere.
193
194Unfortunately, at present the writing style is somewhere between unclear and infuriating. It omits to define terms; it uses needlessly many terms for what are apparently (but not clearly) the same things; it interrupts itself rather than deliver the natural consequent of whatever it has just said; and so on. Section 5 is particularly bad in these regards -- see my detailed comments below. Fairly major additional efforts will be needed to turn the present text into a digestible design-and-tutorial document. I suspect that a shorter paper could do this job better than the present manuscript, which is overwrought in parts.
195
196p2: lines 4--9 are a little sloppy. It is not the languages but their popular implementations which "adopt" the 1:1 kernel threading model.
197
198line 10: "medium work" -- "medium-sized work"?
199
200line 18: "is all sequential to the compiler" -- not true in modern compilers, and in 2004 H-J Boehm wrote a tech report describing exactly why ("Threads cannot be implemented as a library", HP Labs).
201
202line 20: "knows the optimization boundaries" -- I found this vague. What's an example?
203
204line 31: this paragraph has made a lot of claims. Perhaps forward-reference to the parts of the paper that discuss each one.
205
206line 33: "so the reader can judge if" -- this reads rather passive-aggressively. Perhaps better: "... to support our argument that..."
207
208line 41: "a dynamic partitioning mechanism" -- I couldn't tell what this meant
209
210p3. Presenting concept of a "stateful function" as a new language feature seems odd. In C, functions often have local state thanks to static local variables (or globals, indeed). Of course, that has several limitations. Can you perhaps present your contributions by enumerating these limitations? See also my suggestion below about a possible framing centred on a strawman.
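
For concreteness, a minimal C sketch (mine, not from the paper) of the kind of static-local "stateful function" meant here, together with its main limitations: there is a single shared instance of the state, the function is not re-entrant, and execution always restarts from the top because there is no suspend/resume point.

```
#include <stdio.h>

/* a C "stateful function": state is retained across calls via static locals */
int fib(void) {
    static int f1 = 0, f2 = 1;          /* one instance of the state, ever */
    int f = f1;
    f1 = f2;  f2 = f + f2;
    return f;
}

int main(void) {
    for (int i = 0; i < 10; i++)
        printf("%d ", fib());           /* 0 1 1 2 3 5 8 13 21 34 */
    printf("\n");
}
```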
211
212line 2: "an old idea that is new again" -- this is too oblique
213
214lines 2--15: I found this to be a word/concept soup. Stacks, closures, generators, stackless stackful, coroutine, symmetric, asymmetric, resume/suspend versus resume/resume... there needs to be a more gradual and structured way to introduce all this, and ideally one that minimises redundancy. Maybe present it as a series of "definitions" each with its own heading, e.g. "A closure is stackless if its local state has statically known fixed size"; "A generator simply means a stackless closure." And so on. Perhaps also strongly introduce the word "activate" as a direct contrast with resume and suspend. These are just a flavour of the sort of changes that might make this paragraph into something readable.
215
Continuing the thought: I found it confusing that by these definitions, a stackful closure is not a stack, even though logically the stack *is* a kind of closure (it is a representation of the current thread's continuation).
217
218lines 24--27: without explaining what the boost functor types mean, I don't think the point here comes across.
219
line 34: "semantically coupled" -- I wasn't sure what this meant
221
p4: the point of Figure 1 (C) was not immediately clear. It seems to be showing how one might "compile down" Figure 1 (B). Or is that Figure 1 (A)?
223
It's right that the incidental language features of the system are not front-and-centre, but I'd appreciate some brief glossing of non-C language features as they appear. Examples are the square bracket notation, the pipe notation and the constructor syntax. These explanations could go in the caption of the figure which first uses them, perhaps. Overall I found the figure captions to be terse, and a missed opportunity to explain clearly what was going on.
225
226p5 line 23: "This restriction is removed..." -- give us some up-front summary of your contributions and the elements of the language design that will be talked about, so that this isn't an aside. This will reduce the "twisty passages" feeling that characterises much of the paper.
227
line 40: "a killer asymmetric generator" -- this is stylistically odd, and the sentence about failures doesn't convincingly argue that C\/ will help with them. Have you any experience writing device drivers using C\/? Or any argument that the kinds of failures can be traced to the "stack-ripping" style that one is forced to use without coroutines? Also, a typo on line 41: "device drives". And saying "Windows/Linux" is sloppy... what does the cited paper actually say?
229
230p6 lines 13--23: this paragraph is difficult to understand. It seems to be talking about a control-flow pattern roughly equivalent to tail recursion. What is the high-level point, other than that this is possible?
231
232line 34: "which they call coroutines" -- a better way to make this point is presumably that the C++20 proposal only provides a specialised kind of coroutine, namely generators, despite its use of the more general word.
233
234line 47: "... due to dynamic stack allocation, execution..." -- this sentence doesn't scan. I suggest adding "and for" in the relevant places where currently there are only commas.
235
236p8 / Figure 5 (B) -- the GNU C extension of unary "&&" needs to be explained. The whole figure needs a better explanation, in fact.
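
For reference, a minimal sketch of the GNU C extension in question: unary "&&" takes the address of a label, and "goto *" jumps to it (the labels and control flow here are illustrative only, not taken from the figure).

```
#include <stdio.h>

int main(void) {
    void *next = &&first;     /* GNU C: &&label yields the label's address */
    goto *next;               /* GNU C: computed goto */
first:
    printf("first\n");
    next = &&second;
    goto *next;
second:
    printf("second\n");
    return 0;
}
```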
237
238p9, lines 1--10: I wasn't sure this stepping-through really added much value. What are the truly important points to note about this code?
239
240p10: similarly, lines 3--27 again are somewhere between tedious and confusing. I'm sure the motivation and details of "starter semantics" can both be stated much more pithily.
241
242line 32: "a self-resume does not overwrite the last resumer" -- is this a hack or a defensible principled decision?
243
244p11: "a common source of errors" -- among beginners or among production code? Presumably the former.
245
246line 23: "with builtin and library" -- not sure what this means
247
248lines 31--36: these can be much briefer. The only important point here seems to be that coroutines cannot be copied.
249
250p12: line 1: what is a "task"? Does it matter?
251
252line 7: calling it "heap stack" seems to be a recipe for confusion. "Stack-and-heap" might be better, and contrast with "stack-and-VLS" perhaps. When "VLS" is glossed, suggest actually expanding its initials: say "length" not "size".
253
254line 21: are you saying "cooperative threading" is the same as "non-preemptive scheduling", or that one is a special case (kind) of the other? Both are defensible, but be clear.
255
256line 27: "mutual exclusion and synchronization" -- the former is a kind of the latter, so I suggest "and other forms of synchronization".
257
258line 30: "can either be a stackless or stackful" -- stray "a", but also, this seems to be switching from generic/background terminology to C\/-specific terminology.
259
260An expositional idea occurs: start the paper with a strawman naive/limited realisation of coroutines -- say, Simon Tatham's popular "Coroutines in C" web page -- and identify point by point what the limitations are and how C\/ overcomes them. Currently the presentation is often flat (lacking motivating contrasts) and backwards (stating solutions before problems). The foregoing approach might fix both of these.
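
To make the suggested strawman concrete, here is a minimal sketch in the style of Tatham's page (not from the paper): a switch on a saved state number simulates the suspend point. Its limitations are exactly the kind a language feature removes: locals must be made static (so only one instance can run), and the body cannot suspend from inside a helper call.

```
#include <stdio.h>

/* strawman "coroutine in C": suspend/resume via a switch on saved state */
int next_value(void) {
    static int state = 0, i;
    switch (state) {
    case 0:
        for (i = 0; i < 3; i++) {
            state = 1;
            return i;                 /* "suspend", yielding i */
    case 1:;                          /* "resume" re-enters the loop body here */
        }
    }
    state = 0;
    return -1;                        /* exhausted */
}

int main(void) {
    for (int v = next_value(); v >= 0; v = next_value())
        printf("%d\n", v);            /* prints 0 1 2 */
}
```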
261
262page 13: line 23: it seems a distraction to mention the Python feature here.
263
264p14 line 5: it seems odd to describe these as "stateless" just because they lack shared mutable state. It means the code itself is even more stateful. Maybe the "stack ripping" argument could usefully be given here.
265
266line 16: "too restrictive" -- would be good to have a reference to justify this, or at least give a sense of what the state-of-the-art performance in transactional memory systems is (both software and hardware)
267
268line 22: "simulate monitors" -- what about just *implementing* monitors? isn't that what these systems do? or is the point more about refining them somehow into something more specialised?
269
270p15: sections 4.1 and 4.2 seem adrift and misplaced. Split them into basic parts (which go earlier) and more advanced parts (e.g. barging, which can be explained later).
271
272line 31: "acquire/release" -- misses an opportunity to contrast the monitor's "enter/exit" abstraction with the less structured acquire/release of locks.
273
274p16 line 12: the "implicit" versus "explicit" point is unclear. Is it perhaps about the contract between an opt-in *discipline* and a language-enforced *guarantee*?
275
276line 28: no need to spend ages dithering about which one is default and which one is the explicit qualifier. Tell us what you decided, briefly justify it, and move on.
277
278p17: Figure 11: since the main point seems to be to highlight bulk acquire, include a comment which identifies the line where this is happening.
279
280line 2: "impossible to statically..." -- or dynamically. Doing it dynamically would be perfectly acceptable (locking is a dynamic operation after all)
281
282"guarantees acquisition order is consistent" -- assuming it's done in a single bulk acquire.
283
284p18: section 5.3: the text here is a mess. The explanations of "internal" versus "external" scheduling are unclear, and "signals as hints" is not explained. "... can cause thread starvation" -- means including a while loop, or not doing so? "There are three signalling mechanisms.." but the text does not follow that by telling us what they are. My own scribbled attempt at unpicking the internal/external thing: "threads already in the monitor, albeit waiting, have priority over those trying to enter".
285
286p19: line 3: "empty condition" -- explain that condition variables don't store anything. So being "empty" means that the queue of waiting threads (threads waiting to be signalled that the condition has become true) is empty.
287
288line 6: "... can be transformed into external scheduling..." -- OK, but give some motivation.
289
290p20: line 6: "mechnaism"
291
292lines 16--20: this is dense and can probably only be made clear with an example
293
p21 line 21: clarify that nested monitor deadlock was described earlier (in 5.2). (Is the repetition necessary?)
295
296line 27: "locks, and by extension monitors" -- this is true but the "by extension" argument is faulty. It is perfectly possible to use locks as a primitive and build a compositional mechanism out of them, e.g. transactions.
297
298p22 line 2: should say "restructured"
299
300line 33: "Implementing a fast subset check..." -- make clear that the following section explains how to do this. Restructuring the sections themselves could do this, or noting in the text.
301
302p23: line 3: "dynamic member adding, eg, JavaScript" -- needs to say "as permitted in JavaScript", and "dynamically adding members" is stylistically better
303
304p23: line 18: "urgent stack" -- back-reference to where this was explained before
305
306p24 line 7: I did not understand what was more "direct" about "direct communication". Also, what is a "passive monitor" -- just a monitor, given that monitors are passive by design?
307
308line 14 / section 5.9: this table was useful and it (or something like it) could be used much earlier on to set the structure of the rest of the paper. The explanation at present is too brief, e.g. I did not really understand the point about cases 7 and 8.
309
p25 line 2: instead of casually dropping in a terse explanation for the newly introduced term "virtual processor", introduce it properly. Presumably the point is to give a less ambiguous meaning to "thread" by reserving it only for C\/'s green threads.
311
312Table 1: what does "No / Yes" mean?
313
314p26 line 15: "transforms user threads into fibres" -- a reference is needed to explain what "fibres" means... guessing it's in the sense of Adya et al.
315
316line 20: "Microsoft runtime" -- means Windows?
317
318lines 21--26: don't say "interrupt" to mean "signal", especially not without clear introduction. You can use "POSIX signal" to disambiguate from condition variables' "signal".
319
320p27 line 3: "frequency is usually long" -- that's a "time period" or "interval", not a frequency
321
322line 5: the lengthy quotation is not really necessary; just paraphrase the first sentence and move on.
323
324line 20: "to verify the implementation" -- I don't think that means what is intended
325
326Tables in section 7 -- too many significant figures. How many overall runs are described? What is N in each case?
327
328p29 line 2: "to eliminate this cost" -- arguably confusing since nowadays on commodity CPUs most of the benefits of inlining are not to do with call overheads, but from later optimizations enabled as a consequence of the inlining
329
330line 41: "a hierarchy" -- are they a hierarchy? If so, this could be explained earlier. Also, to say these make up "an integrated set... of control-flow features" verges on the tautologous.
331
332p30 line 15: "a common case being web servers and XaaS" -- that's two cases
333
334
335Reviewing: 3
336
337Comments to the Author
338# Cforall review
339
Overall, I quite enjoyed reading the paper. Cforall has some very interesting ideas. I did have some suggestions that I think would be helpful before final publication. I also left notes on various parts of the paper that I found confusing when reading, in the hope that they may be useful to you.
341
342## Summary
343
* Expand on the motivations for including both generators and coroutines, vs trying to build one atop the other
* Expand on the motivations for having both symmetric and asymmetric coroutines
346* Comparison to async-await model adopted by other languages
347    * C#, JS
348    * Rust and its async/await model
349* Consider performance comparisons against node.js and Rust frameworks
350* Discuss performance of monitors vs finer-grained memory models and atomic operations found in other languages
351* Why both internal/external scheduling for synchronization?
352
353## Generator/coroutines
354
355In general, this section was clear, but I thought it would be useful to provide a somewhat deeper look into why Cforall opted for the particular combination of features that it offers. I see three main differences from other languages:
356
* Generators are not exposed as a "function" that returns a generator object, but rather as a kind of struct, with communication happening via mutable state instead of "return values". That is, the generator must be manually resumed and (if I understood correctly) it is expected to store values that can then later be read (perhaps via methods), instead of having a `yield <Expr>` statement that yields up a value explicitly (see the C sketch after this list).
358* Both "symmetric" and "asymmetric" generators are supported, instead of only asymmetric.
359* Coroutines (multi-frame generators) are an explicit mechanism.
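
A hypothetical C rendering of the first point above, purely to illustrate the style being described (the names are invented, and this is plain C, not Cforall syntax): the generator is a value, resumed explicitly, and communicates through mutable fields rather than a yielded return value.

```
#include <stdio.h>

typedef struct { int f1, f2; } fib_gen;        /* generator as a struct */

void fib_init(fib_gen *g)   { g->f1 = 0; g->f2 = 1; }
void fib_resume(fib_gen *g) {                  /* explicit resume, no yield statement */
    int f = g->f1 + g->f2;
    g->f1 = g->f2;  g->f2 = f;
}

int main(void) {
    fib_gen g;  fib_init(&g);
    for (int i = 0; i < 10; i++) {
        printf("%d ", g.f1);                   /* value read from mutable state */
        fib_resume(&g);
    }
    printf("\n");
}
```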
360
361In most other languages, coroutines are rather built by layering single-frame generators atop one another (e.g., using a mechanism like async-await), and symmetric coroutines are basically not supported. I'd like to see a bit more justification for Cforall including all the above mechanisms -- it seemed like symmetric coroutines were a useful building block for some of the user-space threading and custom scheduler mechanisms that were briefly mentioned later in the paper.
362
363In the discussion of coroutines, I would have expected a bit more of a comparison to the async-await mechanism offered in other languages. Certainly the semantics of async-await in JavaScript implies significantly more overhead (because each async fn is a distinct heap object). [Rust's approach avoids this overhead][zc], however, and might be worthy of a comparison (see the Performance section).
364
365## Locks and threading
366
367### Comparison to atomics overlooks performance
368
There are several sections in the paper that compare against atomics -- for example, on page 15, the paper shows a simple monitor that encapsulates an integer and compares that to C++ atomics. Later, the paper compares the simplicity of monitors against the `volatile` qualifier from Java. The conclusion in section 8 also revisits this point.
370
371While I agree that monitors are simpler, they are obviously also significantly different from a performance perspective -- the paper doesn't seem to address this at all. It's plausible that (e.g.) the `Aint` monitor type described in the paper can be compiled and mapped to the specialized instructions offered by hardware, but I didn't see any mention of how this would be done. There is also no mention of the more nuanced memory ordering relations offered by C++11 and how one might achieve similar performance characteristics in Cforall (perhaps the answer is that one simply doesn't need to; I think that's defensible, but worth stating explicitly).
372
373### Justification for external scheduling feels lacking
374
375Cforall includes both internal and external scheduling; I found the explanation for the external scheduling mechanism to be lacking in justification. Why include both mechanisms when most languages seem to make do with only internal scheduling? It would be useful to show some scenarios where external scheduling is truly more powerful.
376
377I would have liked to see some more discussion of external scheduling and how it  interacts with software engineering best practices. It seems somewhat similar to AOP in certain regards. It seems to add a bit of "extra semantics" to monitor methods, in that any method may now also become a kind of synchronization point. The "open-ended" nature of this feels like it could easily lead to subtle bugs, particularly when code refactoring occurs (which may e.g. split an existing method into two). This seems particularly true if external scheduling can occur across compilation units -- the paper suggested that this is true, but I wasn't entirely clear.
378
I would have also appreciated a few more details on how external scheduling is implemented. It seems to me that there must be some sort of "hooks" on mutex methods so that they can detect whether some other function is waiting on them and awaken those blocked threads. I'm not sure how such hooks are inserted, particularly across compilation units. The material in Section 5.6 didn't quite clarify the matter for me. For example, it left me somewhat confused about whether the `f` and `g` functions declared were meant to be local to a translation unit, or shared with other units.
380
381### Presentation of monitors is somewhat confusing
382
I found myself confused fairly often in the section on monitors. I'm just going to leave some notes here on places where I got confused, in the hope that they are useful to you as feedback on writing that might want to be clarified.
384
385To start, I did not realize that the `mutex_opt` notation was a keyword, I thought it was a type annotation. I think this could be called out more explicitly.
386
387Later, in section 5.2, the paper discusses `nomutex` annotations, which initially threw me, as they had not been introduced (now I realize that this paragraph is there to justify why there is no such keyword). The paragraph might be rearranged to make that clearer, perhaps by leading with the choice that Cforall made.
388
389On page 17, the paper states that "acquiring multiple monitors is safe from deadlock", but this could be stated a bit more precisely: acquiring multiple monitors in a bulk-acquire is safe from deadlock (deadlock can still result from nested acquires).
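
A small pthreads sketch (illustrative, not from the paper) of the distinction being drawn: nested acquires in inconsistent order can deadlock, which is exactly what taking both locks in a single bulk acquire (or imposing a global lock order) rules out.

```
#include <pthread.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);      /* nested acquire: a then b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return arg;
}

void *t2(void *arg) {
    pthread_mutex_lock(&b);
    pthread_mutex_lock(&a);      /* nested acquire: b then a -- can deadlock with t1 */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return arg;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, 0, t1, 0);
    pthread_create(&y, 0, t2, 0);
    pthread_join(x, 0);          /* may never return under an unlucky interleaving */
    pthread_join(y, 0);
}
```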
390
391On page 18, the paper states that wait states do not have to be enclosed in loops, as there is no concern of barging. This seems true but there are also other reasons to use loops (e.g., if there are multiple reasons to notify on the same condition). Thus the statement initially surprised me, as barging is only one of many reasons that I typically employ loops around waits.
392
I did not understand the diagram in Figure 12 for some time. Initially, I thought that it was generic to all monitors, and I could not understand the state space. It was only later that I realized it was specific to your example. Updating the caption from "Monitor scheduling" to "Monitor scheduling in the example from Fig 13" might have helped me quite a bit.
394
395I spent quite some time reading the boy/girl dating example (\*) and I admit I found it somewhat confusing. For example, I couldn't tell whether there were supposed to be many "girl" threads executing at once, or if there was only supposed to be one girl and one boy thread executing in a loop. Are the girl/boy threads supposed to invoke the girl/boy methods or vice versa? Surely there is some easier way to set this up? I believe that when reading the paper I convinced myself of how it was supposed to be working, but I'm writing this review some days later, and I find myself confused all over again and not able to easily figure it out.
396
397(\*) as an aside, I would consider modifying the example to some other form of matching, like customers and support personnel.
398
399## Related work
400
401The paper offered a number of comparisons to Go, C#, Scala, and so forth, but seems to have overlooked another recent language, Rust. In many ways, Rust seems to be closest in philosophy to Cforall, so it seems like an odd omission. I already mentioned above that Rust is in the process of shipping [async-await syntax][aa], which is definitely an alternative to the generator/coroutine approach in Cforall (though one with clear pros/cons).
402
403## Performance
404
In the performance section in particular, you might consider comparing against some of the Rust web servers and threading systems. For example, actix is top of the [single query TechEmpower Framework benchmarks][sq], and tokio is near the top of the [plaintext benchmarks][pt] (hyper, the top, is more of an HTTP framework, though it is also written in Rust). It would seem worth trying to compare their "context switching" costs as well -- I believe both actix and tokio have a notion of threads that could be readily compared.
406
407Another addition that might be worth considering is to compare against node.js promises, although I think the comparison to process creation is not as clean.
408
409That said, I think that the performance comparison is not a big focus of the paper, so it may not be necessary to add anything to it.
410
411## Authorship of this review
412
I'm going to sign this review. This review was authored by Nicholas D. Matsakis. In the interest of full disclosure, I'm heavily involved in the Rust project, although I don't think that influenced this review in particular. Feel free to reach out to me for clarifying questions.
414
415## Links
416
417[aa]: https://blog.rust-lang.org/2019/09/30/Async-await-hits-beta.html
418[zc]: https://aturon.github.io/blog/2016/08/11/futures/
419[sq]: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=db
420[pt]: https://www.techempower.com/benchmarks/#section=data-r18&hw=ph&test=plaintext
421
422
423
424Subject: Re: manuscript SPE-19-0219
425To: "Peter A. Buhr" <pabuhr@uwaterloo.ca>
426From: Richard Jones <R.E.Jones@kent.ac.uk>
427Date: Tue, 12 Nov 2019 22:43:55 +0000
428
429Dear Dr Buhr
430
You should have received a decision letter on this today. I am sorry that this
432has taken so long. Unfortunately SP&E receives a lot of submissions and getting
433reviewers is a perennial problem.
434
435Regards
436Richard
437
438Peter A. Buhr wrote on 11/11/2019 13:10:
439>     26-Jun-2019
440>     Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall"
441>     has been received by Software: Practice and Experience. It will be given
442>     full consideration for publication in the journal.
443>
444> Hi, it has been over 4 months since submission of our manuscript SPE-19-0219
445> with no response.
446>
447> Currently, I am refereeing a paper for IEEE that already cites our prior SP&E
> paper and the Master's thesis forming the basis of the SP&E paper under
> review. Hence our work is apropos and we want to get it disseminated as soon as
450> possible.
451>
452> [3] A. Moss, R. Schluntz, and P. A. Buhr, "Cforall: Adding modern programming
453>      language features to C," Software - Practice and Experience, vol. 48,
454>      no. 12, pp. 2111-2146, 2018.
455>
456> [4] T. Delisle, "Concurrency in C for all," Master's thesis, University of
457>      Waterloo, 2018.  [Online].  Available:
458>      https://uwspace.uwaterloo.ca/bitstream/handle/10012/12888
459
460
461
462Date: Mon, 13 Jan 2020 05:33:15 +0000
463From: Richard Jones <onbehalfof@manuscriptcentral.com>
464Reply-To: R.E.Jones@kent.ac.uk
465To: pabuhr@uwaterloo.ca
466Subject: Revision reminder - SPE-19-0219
467
46813-Jan-2020
469Dear Dr Buhr
470SPE-19-0219
471
472This is a reminder that your opportunity to revise and re-submit your
473manuscript will expire 28 days from now. If you require more time please
contact me directly and I may grant an extension to this deadline; otherwise
the option to submit a revision online will not be available.
476
477I look forward to receiving your revision.
478
479Sincerely,
480
481Prof. Richard Jones
482Editor, Software: Practice and Experience
483https://mc.manuscriptcentral.com/spe
484
485
486
487Date: Wed, 5 Feb 2020 04:22:18 +0000
488From: Aaron Thomas <onbehalfof@manuscriptcentral.com>
489Reply-To: speoffice@wiley.com
490To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
491Subject: SPE-19-0219.R1 successfully submitted
492
49304-Feb-2020
494
495Dear Dr Buhr,
496
497Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has
498been successfully submitted online and is presently being given full
499consideration for publication in Software: Practice and Experience.
500
501Your manuscript number is SPE-19-0219.R1.  Please mention this number in all
502future correspondence regarding this submission.
503
504You can view the status of your manuscript at any time by checking your Author
505Center after logging into https://mc.manuscriptcentral.com/spe.  If you have
506difficulty using this site, please click the 'Get Help Now' link at the top
507right corner of the site.
508
509Thank you for submitting your manuscript to Software: Practice and Experience.
510
511Sincerely,
512Software: Practice and Experience Editorial Office
513
514
515
516Date: Sat, 18 Apr 2020 10:42:13 +0000
517From: Richard Jones <onbehalfof@manuscriptcentral.com>
518Reply-To: R.E.Jones@kent.ac.uk
519To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
520Subject: Software: Practice and Experience - Decision on Manuscript ID
521 SPE-19-0219.R1
522
52318-Apr-2020
524
525Dear Dr Buhr,
526
527Many thanks for submitting SPE-19-0219.R1 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter.
528
I believe that we are making progress here towards a paper that can be published in Software: Practice and Experience.  However, the referees still have significant concerns about the paper. The journal's focus is on practice and experience, and one of the reviewers' concerns remains that your submission should focus the narrative more on the perspective of the programmer than the language designer. I agree that this would strengthen your submission, and I ask you to address this as well as the referees' other comments.
530
531A revised version of your manuscript that takes into account the comments of the referee(s) will be reconsidered for publication.
532
533Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision may be subject to re-review by the referees before a decision is rendered.
534
535You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension.
536
537You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe  and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions".
538
539When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided.  You can use this space to document any changes you make to the original manuscript.
540
541If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at www.wileyauthors.com/eeo/preparation. You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at www.wileyauthors.com/eeo/prepresources.
542
543Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision.
544
545Sincerely,
546Richard
547
548Prof. Richard Jones
549Software: Practice and Experience
550R.E.Jones@kent.ac.uk
551
552
553Referee(s)' Comments to Author:
554
555Reviewing: 1
556
557Comments to the Author
558(A relatively short second review)
559
560I thank the authors for their revisions and comprehensive response to
561reviewers' comments --- many of my comments have been successfully
562addressed by the revisions.  Here I'll structure my comments around
563the main salient points in that response which I consider would
564benefit from further explanation.
565
566>  Table 1 is moved to the start and explained in detail.
567
568I consider this change makes a significant improvement to the paper,
569laying out the landscape of language features at the start, and thus
570addresses my main concerns about the paper.
571
572I still have a couple of issues --- perhaps the largest is that it's
573still not clear at this point in the paper what some of these options
574are, or crucially how they would be used. I don't know if it's
575possbile to give high-level examples or use cases to be clear about
576these up front - or if that would duplicate too much information from
577later in the paper - either way expanding out the discussion - even if
578just two a couple of sentences for each row - would help me more.  The
579point is not just to define these categories but to ensure the
580readers' understanding of these definitons agrees with that used in
581the paper.
582
583in a little more detail:
584
585 * 1st para section 2 begs the question: why not support each
586   dimension independently, and let the programmer or library designer
   combine features?
588
589 * "execution state" seems a relatively low-level description here.
590  I don't think of e.g. the lambda calculus that way. Perhaps it's as
591  good a term as any.
592
593 * Why must there "be language mechanisms to create, block/unblock,
594   and join with a thread"?  There aren't in Smalltalk (although there
   are in the runtime).  Especially given that in Cforall those mechanisms
596   are *implicit* on thread creation and destruction?
597
598 * "Case 1 is a function that borrows storage for its state (stack
599   frame/activation) and a thread from its invoker"
600
601   this much makes perfect sense to me, but I don't understand how a
   non-stateful, non-threaded function can then retain
603
604   "this state across callees, ie, function local-variables are
605   retained on the stack across calls."
606
607   how can it retain function-local values *across calls* when it
   doesn't have any function-local state?
609
   I'm not sure if I see two separate cases here - roughly equivalent
611   to C functions without static storage, and then C functions *with*
612   static storage. I assumed that was the distinction between cases 1
   & 3; but perhaps the actual distinction is that 3 has a
614   suspend/resume point, and so the "state" in figure 1 is this
615   component of execution state (viz figs 1 & 2), not the state
616   representing the cross-call variables?
617
618>    but such evaluation isn't appropriate for garbage-collected or JITTed
619   languages like Java or Go.
620
621For JITTed languages in particular, reporting peak performance needs
to "warm up" the JIT with a number of iterations before beginning
measurement. Actually for JITs it's even worse: see Edd Barrett et al
624OOPSLA 2017.
625   
626
627
628minor issues:
629
630 * footnote A - I've looked at various other papers & the website to
631   try to understand how "object-oriented" Cforall is - I'm still not
632   sure.  This footnote says Cforall has "virtuals" - presumably
633   virtual functions, i.e. dynamic dispatch - and inheritance: that
634   really is OO as far as I (and most OO people) are concerned.  For
635   example Haskell doesn't have inheritance, so it's not OO; while
636   CLOS (the Common Lisp *Object* System) or things like Cecil and
637   Dylan are considered OO even though they have "multiple function
638   parameters as receivers", lack "lexical binding between a structure
639   and set of functions", and don't have explicit receiver invocation
640   syntax.  Python has receiver syntax, but unlike Java or Smalltalk
641   or C++, method declarations still need to have an explicit "self"
642   receiver parameter.  Seems to me that Go, for example, is
643   more-or-less OO with interfaces, methods, and dynamic dispatch (yes
   also an explicit receiver syntax but that's not
   determinative); while Rust lacks dynamic dispatch built-in.  C is
   not OO as a language, but as you say given it supports function
   pointers with structures, it does support an OO programming style.
648 
649   This is why I again recommend just not buying into this fight: not
650   making any claims about whether Cforall is OO or is not - because
651   as I see it, the rest of the paper doesn't depend on whether
652   Cforall is OO or not.  That said: this is just a recommendation,
653   and I won't quibble over this any further.
654
655 * is a "monitor function" the same as a "mutex function"?
656   if so the paper should pick one term; if not, make the distinction clear.
657
658
659 * "As stated on line 1 because state declarations from the generator
660    type can be moved out of the coroutine type into the coroutine main"
661
662    OK sure, but again: *why* would a programmer want to do that?
663    (Other than, I guess, to show the difference between coroutines &
664    generators?)  Perhaps another way to put this is that the first
    para of 3.2 gives the disadvantages of coroutines vis-a-vis
    generators, briefly describes the extended semantics, but never
    actually says why a programmer may want those extended semantics,
668    or how they would benefit.  I don't mean to belabour the point,
669    but (generalist?) readers like me would generally benefit from
670    those kinds of discussions about each feature throughout the
671    paper: why might a programmer want to use them?
672   
673
674> p17 if the multiple-monitor entry procedure really is novel, write a paper
675> about that, and only about that.
676
677> We do not believe this is a practical suggestion.
678
679 * I'm honestly not trying to be snide here: I'm not an expert on
680   monitor or concurrent implementations. Brinch Hansen's original
681   monitors were single acquire; this draft does not cite any other
682   previous work that I could see. I'm not suggesting that the brief
683   mention of this mechanism necessarily be removed from this paper,
684   but if this is novel (and a clear advance over a classical OO
   monitor a-la Java which only acquires the distinguished receiver)
686   then that would be worth another paper in itself.
687 
688> * conclusion should conclude the paper, not the related.
> We do not understand this comment.
690
691My typo: the paper's conclusion should come at the end, after the
692future work section.
693
694
695
696
697To encourage accountability, I'm signing my reviews in 2020.
698For the record, I am James Noble, kjx@ecs.vuw.ac.nz.
699
700
701Reviewing: 2
702
703Comments to the Author
704I thank the authors for their detailed response. To respond to a couple of points raised  in response to my review (number 2):
705
706- on the Boehm paper and whether code is "all sequential to the compiler": I now understand the authors' position better and suspect we are in violent agreement, except for whether it's appropriate to use the rather breezy phrase "all sequential to the compiler". It would be straightforward to clarify that code not using the atomics features is optimized *as if* it were sequential, i.e. on the assumption of a lack of data races.
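
A tiny C sketch of that clarification (mine, not the authors'): with a plain variable the compiler may assume there is no concurrent writer and hoist the load out of the loop; with a C11 atomic it may not.

```
#include <stdatomic.h>

int        done  = 0;          /* plain: compiler may assume no data race       */
atomic_int done2 = 0;          /* atomic: every load must actually be performed */

void spin_plain(void)  { while (!done) { } }                /* may become: if (!done) for (;;); */
void spin_atomic(void) { while (!atomic_load(&done2)) { } } /* load re-executed each iteration  */
```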
707
- on the distinction between "mutual exclusion" and "synchronization": the added citation does help, in that it makes a coherent case for the definition the authors prefer. However, the text could usefully clarify that this is a matter of definition not of fact, given especially that in my assessment the authors' preferred definition is not the most common one. (Although the mention of Hoare's apparent use of this definition is one data point, countervailing ones are found in many contemporaneous or later papers, e.g. Habermann's 1972 "Synchronization of Communicating Processes" (CACM 15(3)), Reed & Kanodia's 1979 "Synchronization with eventcounts and sequencers" (CACM 22(2)), and so on.)
709
710I am glad to see that the authors have taken on board most of the straightforward improvements I suggested.
711
712However, a recurring problem of unclear writing still remains through many parts of the paper, including much of sections 2, 3 and 6. To highlight a couple of problem patches (by no means exhaustive):
713
- section 2 (an expanded version of what was previously section 5.9) lacks examples and is generally obscure and allusory ("the most advanced feature" -- name it! "in triplets" -- there is only one triplet!; what are "execution locations"? "initialize" and "de-initialize" what? "borrowed from the invoker" is a concept in need of explaining or at least a fully explained example -- in what sense does a plain function "borrow" its stack frame? "computation only" as opposed to what? in 2.2, in what way is a "request" fundamental to "synchronization"? and the "implicitly" versus "explicitly" point needs stating as elsewhere, with a concrete example e.g. Java built-in mutexes versus java.util.concurrent).
715
716- section 6: 6.2 omits the most important facts in preference for otherwise inscrutable detail: "identify the kind of parameter" (first say *that there are* kinds of parameter, and what "kinds" means!); "mutex parameters are documentation" is misleading (they are also semantically significant!) and fails to say *what* they mean; the most important thing is surely that 'mutex' is a language feature for performing lock/unlock operations at function entry/exit. So say it! The meanings of examples f3 and f4 remain unclear. Meanwhile in 6.3, "urgent" is not introduced (we are supposed to infer its meaning from Figure 12, but that Figure is incomprehensible to me), and we are told of "external scheduling"'s long history in Ada but not clearly what it actually means; 6.4's description of "waitfor" tells us it is different from an if-else chain but tries to use two *different* inputs to tell us that the behavior is different; tell us an instance where *the same* values of C1 and C2 give different behavior (I even wrote out a truth table and still don't see the semantic difference)
717
The authors frequently use bracketed phrases, and sometimes slashes "/", in ways that are confusing and/or detrimental to readability. Page 13 line 2's "forward (backward)" is one particularly egregious example. In general I would recommend the authors try to limit their use of parentheses and slashes as a means of forcing a clearer wording to emerge. Also, the use of "eg." is often cursory and does not explain the examples given, which are frequently a one- or two-word phrase of unclear referent.
719
720Considering the revision more broadly, none of the more extensive or creative rewrites I suggested in my previous review have been attempted, nor any equivalent efforts to improve its readability. The hoisting of the former section 5.9 is a good idea, but the newly added material accompanying it (around Table 1) suffers fresh deficiencies in clarity. Overall the paper is longer than before, even though (as my previous review stated), I believe a shorter paper is required in order to serve the likely purpose of publication. (Indeed, the authors' letter implies that a key goal of publication is to build community and gain external users.)
721
722Given this trajectory, I no longer see a path to an acceptable revision of the present submission. Instead I suggest the authors consider splitting the paper in two: one half about coroutines and stack management, the other about mutexes, monitors and the runtime. (A briefer presentation of the runtime may be helpful in the first paper also, and a brief recap of the generator and coroutine support is obviously needed in the second too.) Both of these new papers would need to be written with a strong emphasis on clarity, paying great care to issues of structure, wording, choices of example, and restraint (saying what's important, not everything that could be said). I am confident the authors could benefit from getting early feedback from others at their institution. For the performance experiments, of course these do not split evenly -- most (but not all) belong in the second of these two hypothetical papers. But the first of them would still have plenty of meat to it; for me, a clear and thorough study of the design space around coroutines is the most interesting and tantalizing prospect.
723
724I do not buy the authors' defense of the limited practical experience or "non-micro" benchmarking presented. Yes, gaining external users is hard and I am sympathetic on that point. But building something at least *somewhat* substantial with your own system should be within reach, and without it the "practice and experience" aspects of the work have not been explored. Clearly C\/ is the product of a lot of work over an extended period, so it is a surprise that no such experience is readily available for inclusion.
725
726Some smaller points:
727
728It does not seem right to state that a stack is essential to Von Neumann architectures -- since the earliest Von Neumann machines (and indeed early Fortran) did not use one.
729
730To elaborate on something another reviewer commented on: it is a surprise to find a "Future work" section *after* the "Conclusion" section. A "Conclusions and future work" section often works well.
731
732
733Reviewing: 3
734
735Comments to the Author
736This is the second round of reviewing.
737
738As in the first review, I found that the paper (and Cforall) contains
739a lot of really interesting ideas, but it remains really difficult to
740have a good sense of which idea I should use and when. This applies in
741different ways to different features from the language:
742
743* coroutines/generators/threads: here there is
744  some discussion, but it can be improved.
* internal/external scheduling: I didn't find any direct comparison
746  between these features, except by way of example.
747
748I requested similar things in my previous review and I see that
749content was added in response to those requests. Unfortunately, I'm
750not sure that I can say it improved the paper's overall read. I think
751in some sense the additions were "too much" -- I would have preferred
752something more like a table or a few paragraphs highlighting the key
753reasons one would pick one construct or the other.

In general, I do wonder if the paper is just trying to do too much.
The discussion of clusters and pre-emption in particular feels quite
rushed.

## Summary

I make a number of suggestions below, but the two most important, I
think, are:

* Recommend shortening the comparison of coroutines/generators/threads
  in Section 2 to a paragraph with a few examples, or possibly a table
  explaining the trade-offs between the constructs.
* Recommend clarifying the relationship between internal/external
  scheduling -- is one more general but more error-prone or low-level?

## Coroutines/generators/threads

There is obviously a lot of overlap between these features, and in
particular between coroutines and generators. As noted in the previous
review, many languages have chosen to offer *only* generators, and to
build coroutines as stacks of generators invoking one another.

I believe the newly introduced Section 2 of the paper is trying to
motivate why each of these constructs exists, but I did not find it
effective. It was dense and difficult to understand. I think the
problem is that Section 2 tries to derive "from first principles" why
each construct exists; a more "top down" approach would be easier to
understand.

In fact, the end of Section 2.1 (on page 5) contains a particular
paragraph that embodies this "top down" approach. It starts,
"programmers can now answer three basic questions", and thus gives
some practical advice for which construct you should use and when.
Leading with this paragraph, combined with some examples of specific
applications where each construct is needed, would be a better
approach.

I don't think this comparison needs to be very long. It seems clear
enough that one would

* prefer generators for simple computations that yield up many values
  (a small sketch of this case follows the list),
* prefer coroutines for more complex processes that have significant
  internal structure,
* prefer threads for cases where parallel execution is desired or
  needed.
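
As an illustration of the first case, something like the classic
Fibonacci generator seems to be the intended sweet spot. The sketch
below is my own approximation of the paper's style -- the type and
routine names are mine and the syntax is reproduced from memory, so
details may be off:

```
generator Fib { int fn1, fn; };     // no private stack; suspend/resume is cheap
void main( Fib & f ) with( f ) {    // distinguished main, driven by resume
    fn1 = 0;  fn = 1;
    for () {                        // infinite loop
        suspend;                    // return control to the resumer, which reads f.fn
        int next = fn1 + fn;  fn1 = fn;  fn = next;
    }
}
// usage sketch: Fib f;  resume( f );  ...then read f.fn after each resume
```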

I did appreciate the comparison in Section 2.3 between async-await in
JS/Java and generators/coroutines. I agree with its premise that those
mechanisms are a poor replacement for generators (and, indeed, JS has
a distinct generator mechanism, in part for this reason). I believe I
may have asked for this in a previous review, but having read it, I
wonder if it is really necessary, since those mechanisms are so
different in purpose.

## Internal vs external scheduling

I find the motivation for supporting both internal and external
scheduling to be fairly implicit. After several readings of the
section, I came to the conclusion that internal scheduling is more
expressive than external scheduling, but sometimes less convenient or
clear. Is this correct? If not, it would be useful to clarify where
external scheduling is more expressive.

The same is true, I think, of the `signal_block` function, which I
have not encountered before; it seems like its behavior can be modeled
with multiple condition variables, but that is clearly more complex.
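
To make concrete what I mean by "modeled with multiple condition
variables", here is a rough, untested sketch (the monitor, routine,
and condition names are all invented); the point is only that the
two-condition version is noticeably more complex than a single
`signal_block`:

```
monitor Exchange {
    condition avail, done;
    int value;
};
void give( Exchange & mutex ex, int v ) with( ex ) {
    value = v;
    signal( avail );    // mark a waiting taker to run
    wait( done );       // block the "signaller" until that taker has finished,
                        // approximating what I understand signal_block to do
}
int take( Exchange & mutex ex ) with( ex ) {
    wait( avail );      // wait for a value to be deposited
    int v = value;
    signal( done );     // release the blocked giver
    return v;
}
```

(In this emulation, a `give` with no waiting `take` blocks forever on
`done`, which is related to my question about `signal_block` below.)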

One question I had about `signal_block`: what happens if one signals
but no other thread is waiting? Does it block until some other thread
waits? Or is that user error?

I would find it very interesting to try to capture some of the
properties that make internal vs external scheduling the better
choice.

For example, it seems to me that external scheduling works well if
there are only a few "key" operations, but that internal scheduling
might be better otherwise, simply because it would be useful to have
the ability to name a signal that can be referenced by many
methods. Consider the bounded buffer from Figure 13: if it had
multiple methods for removing elements, and not just `remove`, then
the `waitfor(remove)` call in `insert` might not be sufficient.
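
For concreteness, the shape I have in mind is roughly the following --
an untested sketch approximating Figure 13, with invented field names
and possibly inexact `waitfor` syntax:

```
monitor Buffer { int front, back, count; int elems[10]; };
void ?{}( Buffer & buf ) with( buf ) { front = back = count = 0; }

void insert( Buffer & mutex buf, int elem ) with( buf ) {
    if ( count == 10 ) waitfor( remove : buf );   // wait for a remover to make space
    elems[back] = elem;  back = (back + 1) % 10;  count += 1;
}
int remove( Buffer & mutex buf ) with( buf ) {
    if ( count == 0 ) waitfor( insert : buf );    // wait for an inserter to add an element
    int elem = elems[front];  front = (front + 1) % 10;  count -= 1;
    return elem;
}
```

With a second removal routine, say a hypothetical `remove_batch`, the
wait in `insert` would have to enumerate every remover, e.g.
`waitfor( remove : buf ) or waitfor( remove_batch : buf )`, whereas
with internal scheduling all removers could simply signal one named
condition.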

## Comparison of external scheduling to messaging

I did enjoy the section comparing external scheduling to Go's
messaging mechanism, which I believe is a new addition.

I believe that one difference between the Go program and the Cforall
equivalent is that the goroutine's channel has an associated queue, so
that multiple messages could be enqueued, whereas the Cforall
equivalent is effectively a "bounded buffer" of length 1. Is that
correct? I think this should be stated explicitly. (Presumably, one
could modify the Cforall program to include an explicit vector of
queued messages if desired, but you would also be reimplementing the
channel abstraction.)

Also, in Figure 20, I believe that there is a missing `mutex` keyword.
The figure states:

```
void main(GoRtn & gortn) with(gortn) {
```

but I think it should probably be as follows:

```
void main(GoRtn & mutex gortn) with(gortn) {
```

Unless there is some implicit `mutex` associated with being the main
function of a `monitor thread`.

## Atomic operations and race freedom

I was glad to see that the paper acknowledged that Cforall still has
low-level atomic operations, even if their use is discouraged in favor
of higher-level alternatives.

However, I still feel that the conclusion overstates the value of the
contribution here when it says that "Cforall high-level race-free
monitors and threads provide the core mechanisms for mutual exclusion
and synchronization, without the need for volatile and atomics". I
feel confident that Java programmers, for example, would be advised to
stick with synchronized methods whenever possible, and it seems to me
that they offer similar advantages -- but they sometimes wind up using
volatiles for performance reasons.

I was also confused by the term "race-free" in that sentence. In
particular, I don't think that Cforall has any mechanisms for
preventing *data races*, and it clearly doesn't prevent "race
conditions" (which would bar all sorts of useful programs). I suppose
that "race free" here might be referring to improvements such as the
removal of barging behavior.

## Performance comparisons

In my previous review, I requested comparisons against Rust and
Node.js, and I see that the new version of the paper includes both,
which is a good addition.

One note on the Rust results: I believe that the results are comparing
against the threads found in Rust's standard library, which are
essentially a shallow wrapper around pthreads, and hence the
performance is quite close to pthread performance (as one would
expect). It would perhaps be more interesting to see a comparison
built using [tokio] or [async-std], two of the more prominent
user-space threading libraries that build on Rust's async-await
feature (which operates quite differently from JavaScript's
async-await, in that it doesn't cause every async function call to
schedule a distinct task).

[tokio]: https://tokio.rs/
[async-std]: https://async.rs/

That said, I am satisfied with the performance results as they are in
the current revision.

## Minor notes and typos

Several figures used the `with` keyword. I deduced that `with(foo)`
permits one to write `bar` instead of `foo.bar`. It seems worth
introducing. Apologies if this is stated in the paper; if so, I missed
it.
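
For readers in my position, a one-line illustration would suffice,
something like the following (the struct and field names are invented
for illustration):

```
struct Point { int x, y; };
void shift( Point & p ) with( p ) {   // open p's fields in this scope
    x += 1;  y += 1;                  // equivalent to p.x += 1;  p.y += 1;
}
```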

On page 20, section 6.3, "external scheduling and vice versus" should be
"external scheduling and vice versa".

On page 5, section 2.3, the paper states "we content" but it should be
"we contend".

Reviewing: Editor

A few small comments in addition to those of the referees.

Page 1. I don't believe that it is fair to imply that Scala is a "research vehicle", as it is used by major players, Twitter being the most prominent example.

Page 15. Must Cforall threads start after construction (e.g. see your example on page 15, line 21)? I can think of examples where it is not desirable that threads start immediately after construction, e.g. a game with N players, each of whom is expensive to create, but all of whom should be started at the same time.

Page 18, line 17: is using


Date: Tue, 16 Jun 2020 13:45:03 +0000
From: Aaron Thomas <onbehalfof@manuscriptcentral.com>
Reply-To: speoffice@wiley.com
To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
Subject: SPE-19-0219.R2 successfully submitted

16-Jun-2020

Dear Dr Buhr,

Your manuscript entitled "Advanced Control-flow and Concurrency in Cforall" has been successfully submitted online and is presently being given full consideration for publication in Software: Practice and Experience.

Your manuscript number is SPE-19-0219.R2.  Please mention this number in all future correspondence regarding this submission.

You can view the status of your manuscript at any time by checking your Author Center after logging into https://mc.manuscriptcentral.com/spe.  If you have difficulty using this site, please click the 'Get Help Now' link at the top right corner of the site.


Thank you for submitting your manuscript to Software: Practice and Experience.

Sincerely,

Software: Practice and Experience Editorial Office