responses for 3rd round of refereeing

   I would like you to address the comments of Reviewer 2, particularly with
   regard to the description of the adaptation of the Java harness to deal
   with warmup. I would expect to see a convincing argument that the
   computation has reached a steady state.

We understand Referee 2's and your concern about the JIT experiments, which is
why we verified our experiments with two experts in JIT development for both
Java and Node.js before submitting the paper. We also read the supplied papers,
but most of the information is not applicable to our work for the following
reasons.

1. SPEC benchmarks are medium to large. In contrast, our benchmarks are 5-15
   lines in length for each programming language (see the code for the Cforall
   tests in the paper). Hence, there are no significant computations, complex
   control flow, or use of memory. Each benchmark tests one specific language
   feature (context switch, mutex call, etc.) in isolation, over and over
   again. These language features have a fixed cost (e.g., acquiring and
   releasing a lock). Therefore, unless the feature can be removed, there is
   nothing to optimize at runtime; but these features cannot be removed without
   changing the meaning of the benchmark, and removing the feature makes the
   timing result 0. In fact, it was difficult to prevent the JIT from
   completely eliding some benchmarks because they have no side-effects (see
   the benchmark sketch after point 4).

2. All of our benchmark results correlate across programming languages with and
   without a JIT, indicating the JIT has completed any runtime optimizations
   (we added this sentence to Section 8.1). Any large differences are explained
   by how a language implements a feature, not by how the compiler/JIT
   processes that feature. Section 8.1 discusses these points in detail.

3. We also added a sentence stating that all JIT-based programming-language
   experiments were run for 30 minutes and there was no statistical difference:
   the median/average/standard deviation correlated with the short-run
   experiments, which seems a convincing argument that the benchmarks have
   reached a steady state (a sketch of this check appears after point 4). If
   the JIT takes longer than 30 minutes to achieve its optimization goals, it
   is unlikely to be useful.

4. The purpose of the performance section is not to draw conclusions about
   improvements; it is to contrast programming-language implementation
   approaches. Section 8.1 discusses the ramifications of certain design and
   implementation decisions with respect to overall performance. The only
   conclusion we draw about performance is:

     Performance comparisons with other concurrent systems and languages show
     the Cforall approach is competitive across all basic operations, which
     translates directly into good performance in well-written applications
     with advanced control-flow.
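
To make point 1 concrete, the following is a minimal sketch, in Java, of the
shape of our microbenchmarks: a tight loop exercising one fixed-cost feature
(here, acquiring and releasing a lock) with a volatile sink so the JIT cannot
elide the loop. The class name and iteration count are illustrative only, not
the exact harness from the paper.

   import java.util.concurrent.locks.ReentrantLock;

   public class MutexBench {
       static volatile long sink;              // observable side-effect so the JIT cannot elide the loop

       public static void main( String[] args ) {
           final long N = 10_000_000;          // illustrative iteration count
           ReentrantLock lock = new ReentrantLock();
           long counter = 0;

           long start = System.nanoTime();
           for ( long i = 0; i < N; i += 1 ) { // exercise one fixed-cost feature repeatedly
               lock.lock();
               counter += 1;                   // trivial body; the measured cost is the mutex call
               lock.unlock();
           }
           long elapsed = System.nanoTime() - start;

           sink = counter;                     // keep the loop's result observable
           System.out.println( (double)elapsed / N + " ns per lock/unlock pair" );
       }
   }

Because the only way to speed up such a loop is to remove the lock/unlock pair
itself, which would change the meaning of the benchmark, the JIT has nothing
left to optimize once the loop is compiled.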
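
Similarly, the steady-state check in point 3 amounts to comparing summary
statistics of early and late timing windows. The sketch below (the class name,
window sizes, and command-line interface are illustrative, not the scripts we
actually used) splits per-trial timings into a first and last window and prints
the median/average/standard deviation of each for comparison.

   import java.util.Arrays;

   // Compare median/average/standard deviation of the first and last quarter of
   // a sequence of per-trial timings (given on the command line) as a
   // steady-state sanity check.
   public class SteadyState {
       static double[] stats( double[] w ) {
           double[] s = w.clone();
           Arrays.sort( s );
           double avg = Arrays.stream( w ).average().orElse( 0.0 );
           double var = Arrays.stream( w ).map( x -> (x - avg) * (x - avg) ).average().orElse( 0.0 );
           return new double[]{ s[s.length / 2], avg, Math.sqrt( var ) };  // median, average, std
       }

       public static void main( String[] args ) {
           double[] t = Arrays.stream( args ).mapToDouble( Double::parseDouble ).toArray();
           if ( t.length < 8 ) { System.err.println( "usage: SteadyState t1 t2 ... tn" ); return; }
           int q = t.length / 4;
           double[] early = stats( Arrays.copyOfRange( t, 0, q ) );
           double[] late  = stats( Arrays.copyOfRange( t, t.length - q, t.length ) );
           System.out.printf( "early med/avg/std: %.2f %.2f %.2f%n", early[0], early[1], early[2] );
           System.out.printf( "late  med/avg/std: %.2f %.2f %.2f%n", late[0], late[1], late[2] );
       }
   }

If the two windows are statistically indistinguishable, the JIT has settled and
the benchmark is in a steady state.
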
   I would also like you to provide the values for N for each benchmark run.

Done.


Referee 2 suggested

   * don't start sentences with "However"

However, there are numerous grammar sites on the web indicating that "however"
(a conjunctive adverb) at the start of a sentence is acceptable, e.g.:

 https://www.merriam-webster.com/words-at-play/can-you-start-a-sentence-with-however
 This is a stylistic choice, more than anything else, as we have a
 considerable body of evidence of writers using however to begin sentences,
 frequently with the meaning of "nevertheless."