# Changeset 04b4a71 for doc/papers

Ignore:
Timestamp:
Jun 3, 2020, 2:35:13 PM
Branches:
arm-eh, jacob/cs343-translation, master, new-ast, new-ast-unique-expr
Children:
fe9cf9e
Parents:
4e7c0fc0
Message:

update concurrency paper with referee changes and generate a response to the referee's report

Location:
doc/papers
Files:
3 edited

Software: Practice and Experience Editorial Office
Date: Sat, 18 Apr 2020 10:42:13 +0000
From: Richard Jones
Reply-To: R.E.Jones@kent.ac.uk
To: tdelisle@uwaterloo.ca, pabuhr@uwaterloo.ca
Subject: Software: Practice and Experience - Decision on Manuscript ID SPE-19-0219.R1

18-Apr-2020

Dear Dr Buhr,

Many thanks for submitting SPE-19-0219.R1 entitled "Advanced Control-flow and Concurrency in Cforall" to Software: Practice and Experience. The paper has now been reviewed and the comments of the referees are included at the bottom of this letter.

I believe that we are making progress here towards a paper that can be published in Software: Practice and Experience. However, the referees still have significant concerns about the paper. The journal's focus is on practice and experience, and one of the reviewers' concerns remains that your submission should focus the narrative more on the perspective of the programmer than the language designer. I agree that this would strengthen your submission, and I ask you to address this as well as the referees' other comments.

A revised version of your manuscript that takes into account the comments of the referee(s) will be reconsidered for publication. Please note that submitting a revision of your manuscript does not guarantee eventual acceptance, and that your revision may be subject to re-review by the referees before a decision is rendered.

You have 90 days from the date of this email to submit your revision. If you are unable to complete the revision within this time, please contact me to request a short extension.

You can upload your revised manuscript and submit it through your Author Center. Log into https://mc.manuscriptcentral.com/spe and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions". When submitting your revised manuscript, you will be able to respond to the comments made by the referee(s) in the space provided.
You can use this space to document any changes you make to the original manuscript. If you would like help with English language editing, or other article preparation support, Wiley Editing Services offers expert help with English Language Editing, as well as translation, manuscript formatting, and figure formatting at www.wileyauthors.com/eeo/preparation. You can also check out our resources for Preparing Your Article for general guidance about writing and preparing your manuscript at www.wileyauthors.com/eeo/prepresources.

Once again, thank you for submitting your manuscript to Software: Practice and Experience and I look forward to receiving your revision.

Sincerely,
Richard

Prof. Richard Jones
Software: Practice and Experience
R.E.Jones@kent.ac.uk

Referee(s)' Comments to Author:

Reviewing: 1

Comments to the Author

(A relatively short second review)

I thank the authors for their revisions and comprehensive response to reviewers' comments --- many of my comments have been successfully addressed by the revisions. Here I'll structure my comments around the main salient points in that response which I consider would benefit from further explanation.

> Table 1 is moved to the start and explained in detail.

I consider this change makes a significant improvement to the paper, laying out the landscape of language features at the start, and thus addresses my main concerns about the paper. I still have a couple of issues --- perhaps the largest is that it's still not clear at this point in the paper what some of these options are, or crucially how they would be used. I don't know if it's possible to give high-level examples or use cases to be clear about these up front - or if that would duplicate too much information from later in the paper - either way, expanding out the discussion - even if just a couple of sentences for each row - would help me more.
The point is not just to define these categories but to ensure the readers' understanding of these definitions agrees with that used in the paper. In a little more detail:

* The 1st paragraph of section 2 begs the question: why not support each dimension independently, and let the programmer or library designer combine features?
* "execution state" seems a relatively low-level description here. I don't think of e.g. the lambda calculus that way. Perhaps it's as good a term as any.
* Why must there "be language mechanisms to create, block/unblock, and join with a thread"? There aren't in Smalltalk (although there are in the runtime). Especially given that in Cforall those mechanisms are *implicit* on thread creation and destruction?
* "Case 1 is a function that borrows storage for its state (stack frame/activation) and a thread from its invoker" - this much makes perfect sense to me, but I don't understand how a non-stateful, non-threaded function can then retain "this state across callees, i.e., function local-variables are retained on the stack across calls." How can it retain function-local values *across calls* when it doesn't have any function-local state? I'm not sure if I see two separate cases here - roughly equivalent to C functions without static storage, and then C functions *with* static storage. I assumed that was the distinction between cases 1 & 3; but perhaps the actual distinction is that 3 has a suspend/resume point, and so the "state" in figure 1 is this component of execution state (viz figs 1 & 2), not the state representing the cross-call variables?

> but such evaluation isn't appropriate for garbage-collected or JITted languages like Java or Go. For JITted languages in particular, reporting peak performance needs to "warm up" the JIT with a number of iterations before beginning measurement.

Actually for JITs it's even worse: see Edd Barrett et al., OOPSLA 2017.
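The warm-up-then-measure pattern under discussion can be sketched as follows. This is a minimal, hypothetical Go illustration (Go is AOT-compiled, so it only demonstrates the measurement discipline of discarding initial iterations, not JIT behaviour; `work` is an invented stand-in kernel, not code from the paper):

```go
package main

import (
	"fmt"
	"time"
)

// work is a hypothetical stand-in for the operation being benchmarked.
func work() int {
	s := 0
	for i := 0; i < 1000; i++ {
		s += i
	}
	return s
}

func main() {
	const warmup, measured = 1000, 10000

	// Discard warm-up iterations so caches (and, in a JITted runtime,
	// the compiled code itself) reach a steady state before timing.
	for i := 0; i < warmup; i++ {
		work()
	}

	start := time.Now()
	for i := 0; i < measured; i++ {
		work()
	}
	elapsed := time.Since(start)
	fmt.Printf("%.0f ns/op\n", float64(elapsed.Nanoseconds())/measured)
}
```

As Barrett et al. show, even this discipline can be insufficient for JITs, since some benchmarks never reach a steady state at all.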
Minor issues:

* Footnote A - I've looked at various other papers & the website to try to understand how "object-oriented" Cforall is - I'm still not sure. This footnote says Cforall has "virtuals" - presumably virtual functions, i.e. dynamic dispatch - and inheritance: that really is OO as far as I (and most OO people) are concerned. For example, Haskell doesn't have inheritance, so it's not OO; while CLOS (the Common Lisp *Object* System) or things like Cecil and Dylan are considered OO even though they have "multiple function parameters as receivers", lack "lexical binding between a structure and set of functions", and don't have explicit receiver invocation syntax. Python has receiver syntax, but unlike Java or Smalltalk or C++, method declarations still need to have an explicit "self" receiver parameter. Seems to me that Go, for example, is more-or-less OO with interfaces, methods, and dynamic dispatch (yes, also an explicit receiver syntax, but that's not determinative); while Rust lacks dynamic dispatch built-in. C is not OO as a language, but as you say, given it supports function pointers with structures, it does support an OO programming style. This is why I again recommend just not buying into this fight: not making any claims about whether Cforall is OO or is not - because as I see it, the rest of the paper doesn't depend on whether Cforall is OO or not. That said: this is just a recommendation, and I won't quibble over this any further.
* Is a "monitor function" the same as a "mutex function"? If so, the paper should pick one term; if not, make the distinction clear.
* "As stated on line 1 because state declarations from the generator type can be moved out of the coroutine type into the coroutine main" - OK sure, but again: *why* would a programmer want to do that? (Other than, I guess, to show the difference between coroutines & generators?)
Perhaps another way to put this is that the first paragraph of 3.2 gives the disadvantages of coroutines vis-a-vis generators, briefly describes the extended semantics, but never actually says why a programmer may want those extended semantics, or how they would benefit. I don't mean to belabour the point, but (generalist?) readers like me would generally benefit from those kinds of discussions about each feature throughout the paper: why might a programmer want to use them?

> p17 if the multiple-monitor entry procedure really is novel, write a paper
> about that, and only about that.
> We do not believe this is a practical suggestion.

* I'm honestly not trying to be snide here: I'm not an expert on monitor or concurrent implementations. Brinch Hansen's original monitors were single acquire; this draft does not cite any other previous work that I could see. I'm not suggesting that the brief mention of this mechanism necessarily be removed from this paper, but if this is novel (and a clear advance over a classical OO monitor a la Java, which only acquires the distinguished receiver) then that would be worth another paper in itself.

> * conclusion should conclude the paper, not the related.
> We do not understand this comment.

My typo: the paper's conclusion should come at the end, after the future work section.

To encourage accountability, I'm signing my reviews in 2020. For the record, I am James Noble, kjx@ecs.vuw.ac.nz.

Reviewing: 2

Comments to the Author

I thank the authors for their detailed response. To respond to a couple of points raised in response to my review (number 2):

- On the Boehm paper and whether code is "all sequential to the compiler": I now understand the authors' position better and suspect we are in violent agreement, except for whether it's appropriate to use the rather breezy phrase "all sequential to the compiler". It would be straightforward to clarify that code not using the atomics features is optimized *as if* it were sequential, i.e.
on the assumption of a lack of data races.

- On the distinction between "mutual exclusion" and "synchronization": the added citation does help, in that it makes a coherent case for the definition the authors prefer. However, the text could usefully clarify that this is a matter of definition, not of fact, given especially that in my assessment the authors' preferred definition is not the most common one. (Although the mention of Hoare's apparent use of this definition is one data point, countervailing ones are found in many contemporaneous or later papers, e.g. Habermann's 1972 "Synchronization of Communicating Processes" (CACM 15(3)), Reed & Kanodia's 1979 "Synchronization with eventcounts and sequencers" (CACM 22(2)), and so on.)

I am glad to see that the authors have taken on board most of the straightforward improvements I suggested. However, a recurring problem of unclear writing still remains through many parts of the paper, including much of sections 2, 3 and 6. To highlight a couple of problem patches (by no means exhaustive):

- Section 2 (an expanded version of what was previously section 5.9) lacks examples and is generally obscure and allusive ("the most advanced feature" -- name it! "in triplets" -- there is only one triplet! What are "execution locations"? "Initialize" and "de-initialize" what? "Borrowed from the invoker" is a concept in need of explaining, or at least a fully explained example -- in what sense does a plain function "borrow" its stack frame? "Computation only" as opposed to what? In 2.2, in what way is a "request" fundamental to "synchronization"? And the "implicitly" versus "explicitly" point needs stating as elsewhere, with a concrete example, e.g. Java built-in mutexes versus java.util.concurrent).
- Section 6: 6.2 omits the most important facts in preference for otherwise inscrutable detail: "identify the kind of parameter" (first say *that there are* kinds of parameter, and what "kinds" means!); "mutex parameters are documentation" is misleading (they are also semantically significant!) and fails to say *what* they mean; the most important thing is surely that 'mutex' is a language feature for performing lock/unlock operations at function entry/exit. So say it! The meanings of examples f3 and f4 remain unclear. Meanwhile in 6.3, "urgent" is not introduced (we are supposed to infer its meaning from Figure 12, but that Figure is incomprehensible to me), and we are told of "external scheduling"'s long history in Ada but not clearly what it actually means; 6.4's description of "waitfor" tells us it is different from an if-else chain but tries to use two *different* inputs to tell us that the behavior is different; tell us an instance where *the same* values of C1 and C2 give different behavior (I even wrote out a truth table and still don't see the semantic difference).

The authors frequently use bracketed phrases, and sometimes slashes "/", in ways that are confusing and/or detrimental to readability. Page 13 line 2's "forward (backward)" is one particularly egregious example. In general I would recommend the authors try to limit their use of parentheses and slashes as a means of forcing a clearer wording to emerge. Also, the use of "e.g." is often cursory and does not explain the examples given, which are frequently a one- or two-word phrase of unclear referent.

Considering the revision more broadly, none of the more extensive or creative rewrites I suggested in my previous review have been attempted, nor any equivalent efforts to improve its readability. The hoisting of the former section 5.9 is a good idea, but the newly added material accompanying it (around Table 1) suffers fresh deficiencies in clarity.
Overall the paper is longer than before, even though (as my previous review stated) I believe a shorter paper is required in order to serve the likely purpose of publication. (Indeed, the authors' letter implies that a key goal of publication is to build community and gain external users.) Given this trajectory, I no longer see a path to an acceptable revision of the present submission. Instead I suggest the authors consider splitting the paper in two: one half about coroutines and stack management, the other about mutexes, monitors and the runtime. (A briefer presentation of the runtime may be helpful in the first paper also, and a brief recap of the generator and coroutine support is obviously needed in the second too.) Both of these new papers would need to be written with a strong emphasis on clarity, paying great care to issues of structure, wording, choices of example, and restraint (saying what's important, not everything that could be said). I am confident the authors could benefit from getting early feedback from others at their institution.

For the performance experiments, of course these do not split evenly -- most (but not all) belong in the second of these two hypothetical papers. But the first of them would still have plenty of meat to it; for me, a clear and thorough study of the design space around coroutines is the most interesting and tantalizing prospect.

I do not buy the authors' defense of the limited practical experience or "non-micro" benchmarking presented. Yes, gaining external users is hard, and I am sympathetic on that point. But building something at least *somewhat* substantial with your own system should be within reach, and without it the "practice and experience" aspects of the work have not been explored. Clearly Cforall is the product of a lot of work over an extended period, so it is a surprise that no such experience is readily available for inclusion.
Some smaller points:

It does not seem right to state that a stack is essential to von Neumann architectures -- since the earliest von Neumann machines (and indeed early Fortran) did not use one.

To elaborate on something another reviewer commented on: it is a surprise to find a "Future work" section *after* the "Conclusion" section. A "Conclusions and future work" section often works well.

Reviewing: 3

Comments to the Author

This is the second round of reviewing. As in the first review, I found that the paper (and Cforall) contains a lot of really interesting ideas, but it remains really difficult to have a good sense of which idea I should use and when. This applies in different ways to different features from the language:

* coroutines/generators/threads: here there is some discussion, but it can be improved.
* internal/external scheduling: I didn't find any direct comparison between these features, except by way of example.

I requested similar things in my previous review and I see that content was added in response to those requests. Unfortunately, I'm not sure that I can say it improved the paper's overall read. I think in some sense the additions were "too much" -- I would have preferred something more like a table or a few paragraphs highlighting the key reasons one would pick one construct or the other. In general, I do wonder if the paper is just trying to do too much. The discussion of clusters and pre-emption in particular feels quite rushed.

## Summary

I make a number of suggestions below, but the two most important I think are:

* Recommend to shorten the comparison on coroutines/generators/threads in Section 2 to a paragraph with a few examples, or possibly a table explaining the trade-offs between the constructs.
* Recommend to clarify the relationship between internal/external scheduling -- is one more general but more error-prone or low-level?
## Coroutines/generators/threads

There is obviously a lot of overlap between these features, and in particular between coroutines and generators. As noted in the previous review, many languages have chosen to offer *only* generators, and to build coroutines by stacks of generators invoking one another.

I believe the newly introduced Section 2 of the paper is trying to motivate why each of these constructs exists, but I did not find it effective. It was dense and difficult to understand. I think the problem is that Section 2 seems to be trying to derive "from first principles" why each construct exists, but I think that a more "top down" approach would be easier to understand. In fact, the end of Section 2.1 (on page 5) contains a particular paragraph that embodies this "top down" approach. It starts, "programmers can now answer three basic questions", and thus gives some practical advice for which construct you should use and when. I think combining this paragraph with some examples of cases where each construct was needed would be a better approach. I don't think this comparison needs to be very long. It seems clear enough that one would:

* prefer generators for simple computations that yield up many values,
* prefer coroutines for more complex processes that have significant internal structure,
* prefer threads for cases where parallel execution is desired or needed.

I did appreciate the comparison in Section 2.3 between async-await in JS/Java and generators/coroutines. I agree with its premise that those mechanisms are a poor replacement for generators (and, indeed, JS has a distinct generator mechanism, for example, in part for this reason). I believe I may have asked for this in a previous review, but having read it, I wonder if it is really necessary, since those mechanisms are so different in purpose.
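The first of those preferences — a generator as a suspendable computation yielding many values on demand — can be illustrated with a minimal Go sketch (Go has no built-in generators, so this models the yield/suspend cycle with a goroutine feeding an unbuffered channel; `fibGen` is an invented example, not code from the paper):

```go
package main

import "fmt"

// fibGen plays the role of a generator: each send on the channel is a
// "yield", after which the producer suspends until the consumer reads.
func fibGen(n int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		a, b := 0, 1
		for i := 0; i < n; i++ {
			out <- a // yield a, then suspend
			a, b = b, a+b
		}
	}()
	return out
}

func main() {
	// Generator style: a simple computation yielding many values.
	for v := range fibGen(6) {
		fmt.Print(v, " ")
	}
	fmt.Println() // prints 0 1 1 2 3 5
}
```

A coroutine would add significant internal structure (multiple suspend points across nested calls), and a thread would run the producer genuinely in parallel — which is exactly the three-way trade-off the list above captures.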
## Internal vs external scheduling

I find the motivation for supporting both internal and external scheduling to be fairly implicit. After several reads through the section, I came to the conclusion that internal scheduling is more expressive than external scheduling, but sometimes less convenient or clear. Is this correct? If not, it'd be useful to clarify where external scheduling is more expressive. The same is true, I think, of the signal_block function, which I have not encountered before; it seems like its behavior can be modeled with multiple condition variables, but that's clearly more complex. One question I had about signal_block: what happens if one signals but no other thread is waiting? Does it block until some other thread waits? Or is that user error?

I would find it very interesting to try and capture some of the properties that make internal vs external scheduling the better choice. For example, it seems to me that external scheduling works well if there are only a few "key" operations, but that internal scheduling might be better otherwise, simply because it would be useful to have the ability to name a signal that can be referenced by many methods. Consider the bounded buffer from Figure 13: if it had multiple methods for removing elements, and not just remove, then the waitfor(remove) call in insert might not be sufficient.

## Comparison of external scheduling to messaging

I did enjoy the section comparing external scheduling to Go's messaging mechanism, which I believe is a new addition. I believe that one difference between the Go program and the Cforall equivalent is that the goroutine has an associated queue, so that multiple messages could be enqueued, whereas the Cforall equivalent is effectively a "bounded buffer" of length 1. Is that correct? I think this should be stated explicitly.
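The queueing difference raised here can be made concrete in Go's own terms (a minimal sketch, independent of the Cforall code in the paper): a buffered channel gives the receiving goroutine an associated queue, while an unbuffered channel forces a rendezvous closer to a waitfor-style accept.

```go
package main

import "fmt"

func main() {
	// A buffered channel gives the goroutine an associated queue:
	// up to cap(ch) messages can be enqueued with no receiver ready.
	buffered := make(chan string, 3)
	buffered <- "a"
	buffered <- "b"
	buffered <- "c" // none of these sends block
	fmt.Println(<-buffered, <-buffered, <-buffered) // a b c

	// An unbuffered channel is a rendezvous: a send blocks until a
	// receiver arrives, like a monitor accepting one call at a time.
	unbuffered := make(chan string)
	go func() { unbuffered <- "hello" }() // blocks until the receive below
	fmt.Println(<-unbuffered) // hello
}
```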
(Presumably, one could modify the Cforall program to include an explicit vector of queued messages if desired, but you would also be reimplementing the channel abstraction.)

Also, in Figure 20, I believe that there is a missing mutex keyword. The figure states:

    void main(GoRtn & gortn) with(gortn) {

but I think it should probably be as follows:

    void main(GoRtn & mutex gortn) with(gortn) {

Unless there is some implicit mutex associated with being a main function for a monitor thread.

## Atomic operations and race freedom

I was glad to see that the paper acknowledged that Cforall still has low-level atomic operations, even if their use is discouraged in favor of higher-level alternatives. However, I still feel that the conclusion overstates the value of the contribution here when it says that "Cforall high-level race-free monitors and threads provide the core mechanisms for mutual exclusion and synchronization, without the need for volatile and atomics". I feel confident that Java programmers, for example, would be advised to stick with synchronized methods whenever possible, and it seems to me that they offer similar advantages -- but they sometimes wind up using volatiles for performance reasons.

I was also confused by the term "race-free" in that sentence. In particular, I don't think that Cforall has any mechanisms for preventing *data races*, and it clearly doesn't prevent "race conditions" (which would bar all sorts of useful programs). I suppose that "race free" here might be referring to improvements such as removing barging behavior.

## Performance comparisons

In my previous review, I requested comparisons against Rust and node.js, and I see that the new version of the paper includes both, which is a good addition.
One note on the Rust results: I believe that the results are comparing against the threads found in Rust's standard library, which are essentially a shallow wrapper around pthreads, and hence the performance is quite close to pthread performance (as one would expect). It would perhaps be more interesting to see a comparison built using [tokio] or [async-std], two of the more prominent user-space threading libraries that build on Rust's async-await feature (which operates quite differently than JavaScript's async-await, in that it doesn't cause every async function call to schedule a distinct task).

[tokio]: https://tokio.rs/
[async-std]: https://async.rs/

That said, I am satisfied with the performance results as they are in the current revision.

## Minor notes and typos

Several figures used the with keyword. I deduced that with(foo) permits one to write bar instead of foo.bar. It seems worth introducing. Apologies if this is stated in the paper; if so, I missed it.

On page 20, section 6.3, "external scheduling and vice versus" should be "external scheduling and vice versa".

On page 5, section 2.3, the paper states "we content" but it should be "we contend".

Reviewing: Editor

A few small comments in addition to those of the referees.

Page 1. I don't believe that it is fair to imply that Scala is a "research vehicle", as it is used by major players, Twitter being the most prominent example.

Page 15. Must Cforall threads start after construction (e.g. see your example on page 15, line 21)? I can think of examples where it is not desirable that threads start immediately after construction, e.g. a game with N players, each of whom is expensive to create, but all of whom should be started at the same time.

Page 18, line 17: is using