\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups, evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
All of these systems have a \gls{uthrding} model.
Note, all tests in each system are functionally identical and available online~\cite{SchedulingBenchmarks}.

\section{Benchmark Environment}\label{microenv}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket, with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket, with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures L2 cache misses that are served by an L3 cache on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.


\section{Cycling Latency}

The most basic evaluation of any ready queue is the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a @yield@ operation, many systems use this operation as the fundamental benchmark.
However, yielding can be treated as a special case by optimizing it away, since the number of ready \ats does not change.
Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before parking itself.
Unparking the next \at pushes that \at onto the ready queue, while the ensuing park leads to a \at being popped from the ready queue.

\begin{figure}
        \centering
        \input{cycle.pstex_t}
        \caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before parking itself.}
        \label{fig:cycle}
\end{figure}

Therefore, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking.
That is, the runtime cannot anticipate that the current \at immediately parks.
As well, the size of the cycle is also chosen based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt at silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.

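To make this concrete, the following is a minimal sketch of one ring in Go, one of the compared runtimes.
The capacity-1 channels play the role of the park/unpark operations (a buffered send is a V that does not block), and termination is simplified to a fixed per-thread iteration count instead of the @must_stop()@ check; all names and constants are illustrative, not the exact harness used in the experiments.

\begin{cfa}
package main

import (
        "fmt"
        "sync"
        "sync/atomic"
)

const (
        ringSize = 100   // ats per ring: large enough for each at to fully park
        rounds   = 10000 // fixed iteration count, standing in for must_stop()
)

func main() {
        var global int64
        // One capacity-1 channel per at: a send is an unpark (V), a receive
        // is a park (P). The buffer slot means an unpark delivered before the
        // matching park does not block, as described above.
        parked := make([]chan struct{}, ringSize)
        for i := range parked {
                parked[i] = make(chan struct{}, 1)
        }
        var wg sync.WaitGroup
        wg.Add(ringSize)
        for i := 0; i < ringSize; i++ {
                go func(me int) {
                        defer wg.Done()
                        count := int64(0)
                        for r := 0; r < rounds; r++ {
                                parked[(me+1)%ringSize] <- struct{}{} // unpark next at
                                <-parked[me]                          // park self
                                count++
                        }
                        atomic.AddInt64(&global, count) // global.count += count
                }(i)
        }
        wg.Wait()
        fmt.Println("total operations:", global)
}
\end{cfa}
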
\begin{figure}
\begin{cfa}
Thread.main() {
        count := 0
        for {
                @this.next.wake()@
                @wait()@
                count ++
                if must_stop() { break }
        }
        global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
%\end{figure}

\bigskip

%\begin{figure}
        \subfloat[][Throughput, 100 cycles per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.jax.ops.pstex_t}
                }
                \label{fig:cycle:jax:ops}
        }
        \subfloat[][Throughput, 1 cycle per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.low.jax.ops.pstex_t}
                }
                \label{fig:cycle:jax:low:ops}
        }

        \subfloat[][Scalability, 100 cycles per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.jax.ns.pstex_t}
                }
                \label{fig:cycle:jax:ns}
        }
        \subfloat[][Scalability, 1 cycle per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.low.jax.ns.pstex_t}
                }
                \label{fig:cycle:jax:low:ns}
        }
        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:cycle:jax}
\end{figure}

\subsection{Results}

Figures~\ref{fig:cycle:jax} and~\ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
Scalability uses the same data as throughput, but the Y axis is calculated as the number of \procs over the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg if doubling the number of \procs doubles the throughput, then the relation stays the same.
The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
The right column shows results for 1 cycle per \proc, where the ready queues are expected to be near empty most of the time.
The distinction between 100 and 1 cycles is meaningful because the idle-sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.

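Concretely, the scalability curves plot the definition just given: if $T(P)$ is the measured throughput with $P$ \procs, the quantity shown is
\[
S(P) = \frac{P}{T(P)},
\]
\ie the average time a \proc spends per operation.
If doubling $P$ doubles $T(P)$, then $S(P)$ is unchanged, which yields the horizontal line of perfect scalability.
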
\begin{figure}
        \subfloat[][Throughput, 100 cycles per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.nasus.ops.pstex_t}
                }
                \label{fig:cycle:nasus:ops}
        }
        \subfloat[][Throughput, 1 cycle per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.low.nasus.ops.pstex_t}
                }
                \label{fig:cycle:nasus:low:ops}
        }

        \subfloat[][Scalability, 100 cycles per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.nasus.ns.pstex_t}
                }
                \label{fig:cycle:nasus:ns}
        }
        \subfloat[][Scalability, 1 cycle per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.cycle.low.nasus.ns.pstex_t}
                }
                \label{fig:cycle:nasus:low:ns}
        }
        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:cycle:nasus}
\end{figure}

The experiment ran 15 times for each series and processor count.
Each series has a solid and two dotted lines representing the median, maximum and minimum results respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
An alternative display is to use error bars with min/max as the bottom/top for the bar.
However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
This graph presentation offers an overview of the distribution of the results for each series.

The experimental setup uses @taskset@ to limit the placement of \glspl{kthrd} by the operating system.
As mentioned in Section~\ref{microenv}, the experiment is set up to prioritize running on two \glspl{hthrd} per core before running on multiple sockets.
For the Intel machine, this means that from 1 to 24 \procs one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs still only one socket is used but \emph{with} hyperthreading.
This pattern is repeated between 49 and 96 \procs, between 97 and 144, and between 145 and 192.
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{
Hyperthreading normally refers specifically to the technique used by Intel; however, it is often used generically to refer to any equivalent feature.}
is used when the \proc count reaches 65 and 193.

The performance goal of \CFA is to obtain equivalent performance to other, less fair schedulers.
Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
Libfibre is slightly behind in this case but still scales decently.
As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer smaller performance improvements for all runtimes.
As expected, this pattern repeats between \proc counts 72 and 96.
Hence, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.

When running only a single cycle, the story is slightly different.
\CFA and Tokio obtain very similar results overall, but Tokio shows notably more variation in the results.
While \CFA, Go and Tokio achieve equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
This difference in throughput and scalability is due to the idle-sleep mechanism.
With very few cycles, stealing or helping can cause a cascade of \at migrations and trick a \proc into very short idle sleeps, which negatively affects performance.

An interesting and unusual result is that libfibre achieves better performance with 1 cycle.
This suggests that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
However, I did not investigate this result any deeper.

Figure~\ref{fig:cycle:nasus} shows a story on AMD similar to the one on Intel.
The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc, with some variation in the results.

In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
However, unlike on Intel, Tokio achieves the same performance as Go rather than \CFA.
This leaves \CFA trailing behind in this particular case, but only at high core counts.
For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
Essentially, \CFA's fairness must result in slower performance in some workloads.
Fortunately, this effect is only problematic in pathological cases, \eg with 1 \at per \proc, which seldom occurs in most workloads.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
This result shows that the \CFA scheduler has achieved the goal of obtaining equivalent performance to other, less fair schedulers.

\section{Yield}
For completeness, the classic yield benchmark is included.
Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty, which again puts a strain on the idle-sleep handling.

\begin{figure}
\begin{cfa}
Thread.main() {
        count := 0
        for {
                @yield()@
                count ++
                if must_stop() { break }
        }
        global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
%\end{figure}
\bigskip
%\begin{figure}
        \subfloat[][Throughput, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.yield.jax.ops.pstex_t}
                }
                \label{fig:yield:jax:ops}
        }
        \subfloat[][Throughput, 1 \at per \proc]{
                \resizebox{0.5\linewidth}{!}{
                \input{result.yield.low.jax.ops.pstex_t}
                }
                \label{fig:yield:jax:low:ops}
        }

        \subfloat[][Scalability, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                \input{result.yield.jax.ns.pstex_t}
                }
                \label{fig:yield:jax:ns}
        }
        \subfloat[][Scalability, 1 \at per \proc]{
                \resizebox{0.5\linewidth}{!}{
                \input{result.yield.low.jax.ns.pstex_t}
                }
                \label{fig:yield:jax:low:ns}
        }
        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, for 100 and 1 \ats per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:yield:jax}
\end{figure}

\subsection{Results}

Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the same throughput graphs as @cycle@ on Intel and AMD, respectively.
Note, the Y-axis on the yield graphs for Intel is twice as large as on the Intel cycle graphs.
A visual comparison of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.

For the Intel architecture, Figure~\ref{fig:yield:jax}:
\begin{itemize}
\item
\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
Hence, the @yield@ throughput and scalability graphs for both 100 and 1 cycles/tasks per processor have similar shapes to the corresponding @cycle@ graphs.
The only difference is slightly better performance for @yield@ because of less synchronization.
As for @cycle@, the cost of idle sleep also comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
\item
libfibre has special handling for @yield@, using the fact that the number of ready fibres does not change and therefore bypassing the idle-sleep mechanism entirely.
Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance for 100 cycles/tasks and an 8 times increase for 1 cycle/task.
\item
Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority (see the sketch after this list).
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@ for both 100 and 1 cycles/tasks.
\item
Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
\end{itemize}

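As a concrete reference for the Go item above, the sketch below expresses the tight @yield@ loop with @runtime.Gosched()@, Go's cooperative yield; the \proc and iteration counts are illustrative.

\begin{cfa}
package main

import (
        "fmt"
        "runtime"
)

func main() {
        procs := 4 // number of procs; illustrative value
        runtime.GOMAXPROCS(procs)
        done := make(chan int)
        for i := 0; i < procs; i++ {
                go func() {
                        count := 0
                        for r := 0; r < 1000000; r++ {
                                runtime.Gosched() // yield back to the Go scheduler
                                count++
                        }
                        done <- count
                }()
        }
        total := 0
        for i := 0; i < procs; i++ {
                total += <-done
        }
        fmt.Println("total yields:", total)
}
\end{cfa}
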
\begin{figure}
        \subfloat[][Throughput, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.yield.nasus.ops.pstex_t}
                }
                \label{fig:yield:nasus:ops}
        }
        \subfloat[][Throughput, 1 \at per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.yield.low.nasus.ops.pstex_t}
                }
                \label{fig:yield:nasus:low:ops}
        }

        \subfloat[][Scalability, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.yield.nasus.ns.pstex_t}
                }
                \label{fig:yield:nasus:ns}
        }
        \subfloat[][Scalability, 1 \at per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.yield.low.nasus.ns.pstex_t}
                }
                \label{fig:yield:nasus:low:ns}
        }
        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, for 100 and 1 \ats per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:yield:nasus}
\end{figure}

For the AMD, Figure~\ref{fig:yield:nasus}, the results show the same story as on the Intel, with slightly increased jitter.
Also, some transition points on the X-axis differ because of the architectures, like at 16 versus 24 processors.

It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.


\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.

The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked and the \proc that subsequently unblocks it.
With processor-specific ready-queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
In either case, this benchmark aims to measure how well each scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
        count := 0
        for {
                r := random() % len(spots)
                @spots[r].V()@
                @spots[r].P()@
                count ++
                if must_stop() { break }
        }
        global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
%\end{figure}
\bigskip
%\begin{figure}
        \subfloat[][Throughput, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.jax.ops.pstex_t}
                }
                \label{fig:churn:jax:ops}
        }
        \subfloat[][Throughput, 2 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.low.jax.ops.pstex_t}
                }
                \label{fig:churn:jax:low:ops}
        }

        \subfloat[][Latency, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.jax.ns.pstex_t}
                }
                \label{fig:churn:jax:ns}
        }
        \subfloat[][Latency, 2 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.low.jax.ns.pstex_t}
                }
                \label{fig:churn:jax:low:ns}
        }
        \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:churn:jax}
\end{figure}

\subsection{Results}
Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
They use the same representation as the previous benchmarks: 15 runs, where the dotted lines show the extremes and the solid line the median.
The performance cost of crossing the cache boundaries is still visible at the same \proc counts.
However, this benchmark has performance dominated by cache traffic, as \procs constantly access each other's data.
Scalability is notably worse than in the previous benchmarks since there is inherently more communication between processors.
Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
An interesting aspect to note here is that the runtimes differ in how they handle this situation.
Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc that previously ran the \at.
\CFA, Tokio and Go all use the approach of unparking to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the benchmark's inherent chaos, in addition to its small memory footprint, means neither approach wins over the other.

\begin{figure}
        \subfloat[][Throughput, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.nasus.ops.pstex_t}
                }
                \label{fig:churn:nasus:ops}
        }
        \subfloat[][Throughput, 2 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.low.nasus.ops.pstex_t}
                }
                \label{fig:churn:nasus:low:ops}
        }

        \subfloat[][Latency, 100 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.nasus.ns.pstex_t}
                }
                \label{fig:churn:nasus:ns}
        }
        \subfloat[][Latency, 2 \ats per \proc]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.churn.low.nasus.ns.pstex_t}
                }
                \label{fig:churn:nasus:low:ns}
        }
        \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine.
        For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:churn:nasus}
\end{figure}

Like for the cycle benchmark, here all runtimes achieve fairly similar performance.
Performance improves as long as all \procs fit on a single socket.
Beyond that, performance starts to suffer from increased caching costs.

Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 2 and 100 \ats per \proc, \CFA, libfibre, Go and Tokio achieve effectively equivalent performance for most \proc counts.

However, Figure~\ref{fig:churn:nasus} again shows a somewhat different story on AMD.
While \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc counts, Go starts with better scaling at very low \proc counts, but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
This performance difference is visible at both high and low \at counts.

One possible explanation for this difference is that since Go has very few available concurrency primitives, a channel was used instead of a semaphore.
On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed along, equivalent performance could be expected.
However, in practice there can be implementation differences between the two.
This is especially true if the semaphore count can get somewhat high.
Note that this replacement is also made in the cycle benchmark; however, in that context it did not seem to have a notable impact.

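As an illustration of this substitution, the following sketch builds a counting semaphore from a buffered channel in Go, in the spirit of the replacement described above; the capacity constant is illustrative and this is not necessarily the exact construction used in the benchmark.

\begin{cfa}
package main

import "fmt"

// Semaphore emulates a counting semaphore with a buffered channel:
// V sends a token, P receives one. Buffered tokens mean P completes
// without blocking, mirroring a semaphore count above 0.
type Semaphore struct {
        tokens chan struct{}
}

func NewSemaphore(capacity int) Semaphore {
        // capacity bounds the count; a real semaphore has no such bound,
        // one of the implementation differences noted above
        return Semaphore{tokens: make(chan struct{}, capacity)}
}

func (s Semaphore) V() { s.tokens <- struct{}{} }
func (s Semaphore) P() { <-s.tokens }

func main() {
        sem := NewSemaphore(16)
        done := make(chan struct{})
        go func() {
                sem.P() // blocks until the V below delivers a token
                close(done)
        }()
        sem.V()
        <-done
        fmt.Println("P completed after V")
}
\end{cfa}
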
A second possible explanation is that Go may sometimes use the heap when allocating variables, based on the result of escape analysis of the code.
It is possible that variables that should be placed on the stack are instead placed on the heap.
This could cause extra pointer chasing in the benchmark, heightening locality effects.
Depending on how the heap is structured, this could also lead to false sharing.

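The following minimal example, an illustration rather than code from the benchmark, shows the kind of allocation decision escape analysis makes; Go's diagnostic @go build -gcflags=-m@ reports which variables escape.

\begin{cfa}
package main

// stays on the stack: the value never outlives the function
func onStack() int {
        x := 42
        return x
}

// escapes to the heap: the returned pointer outlives the frame,
// so escape analysis must heap-allocate x
func onHeap() *int {
        x := 42
        return &x
}

func main() {
        _ = onStack()
        _ = onHeap()
}
\end{cfa}
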
The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance.

\section{Locality}

\begin{figure}
\newsavebox{\myboxA}
\newsavebox{\myboxB}

\begin{lrbox}{\myboxA}
\begin{cfa}[tabsize=3]
Thread.main() {
        count := 0
        for {
                r := random() % len(spots)
                // go through the array
                @work( a )@

                spots[r].V()
                spots[r].P()
                count ++
                if must_stop() { break }
        }
        global.count += count
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[tabsize=3]
Thread.main() {
        count := 0
        for {
                r := random() % len(spots)
                // go through the array
                @work( a )@
                // pass array to next thread
                spots[r].V( @a@ )
                @a = @spots[r].P()
                count ++
                if must_stop() { break }
        }
        global.count += count
}
\end{cfa}
\end{lrbox}

\subfloat[@noshare@ variation]{\label{fig:locality:code:noshare}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[@share@ variation]{\label{fig:locality:code:share}\usebox\myboxB}

\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
\label{fig:locality:code}
\end{figure}

As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or the remote ready-queue.\footnote{
It is also possible to unpark to a third, unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
The locality experiment includes two variations of the churn benchmark, where an array of data is added.
In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.

The objective here is to highlight the different decisions made by the runtime when unparking.
Since each \at @V@s a random semaphore, it is unlikely that a \at is unparked from the \proc it last ran on.
In the @share@ version, this means that unparking the \at on the local \proc is appropriate, since the data was last modified on that \proc.
In the @noshare@ version, unparking the \at on the remote \proc is the appropriate approach.

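To make the @share@ variation concrete, here is a sketch in Go of the ownership hand-off, where the semaphore's shadow-queue is modeled as a buffered channel carrying the array itself, so the woken thread receives the data the waker just touched; all names, counts and sizes are illustrative assumptions, not the exact benchmark code.

\begin{cfa}
package main

import (
        "fmt"
        "math/rand"
)

const (
        arrayLen = 4096 // cells per array
        nspots   = 8
        nthreads = 16 // ats >= semaphores + procs, per the churn setup
        rounds   = 10000
)

// spot models a semaphore with a shadow queue: each token carries
// ownership of an array, as in the share variation.
type spot chan []uint64

func work(a []uint64) {
        for i := 0; i < 128; i++ { // increment random cells
                a[rand.Intn(arrayLen)]++
        }
}

func main() {
        spots := make([]spot, nspots)
        for i := range spots {
                // capacity nthreads so a V never blocks, like a counting semaphore
                spots[i] = make(spot, nthreads)
        }
        done := make(chan struct{})
        for t := 0; t < nthreads; t++ {
                go func() {
                        a := make([]uint64, arrayLen) // initially private array
                        for r := 0; r < rounds; r++ {
                                work(a)
                                s := spots[rand.Intn(nspots)]
                                s <- a  // V: hand the array to the shadow queue
                                a = <-s // P: wake with ownership of some array
                        }
                        done <- struct{}{}
                }()
        }
        for t := 0; t < nthreads; t++ {
                <-done
        }
        fmt.Println("locality/share sketch finished")
}
\end{cfa}
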
The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation that matches their unparking policy.
This should lead to \CFA, Go and Tokio achieving better performance in @share@, while libfibre achieves better performance in @noshare@.
Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre's default policy is to unpark \ats wherever they last ran.

\subsection{Results}

\begin{figure}
        \subfloat[][Throughput share]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.share.jax.ops.pstex_t}
                }
                \label{fig:locality:jax:share:ops}
        }
        \subfloat[][Throughput noshare]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.noshare.jax.ops.pstex_t}
                }
                \label{fig:locality:jax:noshare:ops}
        }

        \subfloat[][Scalability share]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.share.jax.ns.pstex_t}
                }
                \label{fig:locality:jax:share:ns}
        }
        \subfloat[][Scalability noshare]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.noshare.jax.ns.pstex_t}
                }
                \label{fig:locality:jax:noshare:ns}
        }
        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:locality:jax}
\end{figure}
\begin{figure}
        \subfloat[][Throughput share]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.share.nasus.ops.pstex_t}
                }
                \label{fig:locality:nasus:share:ops}
        }
        \subfloat[][Throughput noshare]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.noshare.nasus.ops.pstex_t}
                }
                \label{fig:locality:nasus:noshare:ops}
        }

        \subfloat[][Scalability share]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.share.nasus.ns.pstex_t}
                }
                \label{fig:locality:nasus:share:ns}
        }
        \subfloat[][Scalability noshare]{
                \resizebox{0.5\linewidth}{!}{
                        \input{result.locality.noshare.nasus.ns.pstex_t}
                }
                \label{fig:locality:nasus:noshare:ns}
        }
        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
        \label{fig:locality:nasus}
\end{figure}

Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results on Intel and AMD respectively.
In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for the @noshare@ variation.

On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
The left of the figure shows the results for the @share@ variation, where \CFA and Tokio slightly outperform libfibre, as expected.
Correspondingly, the right shows the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
Presumably, the reasons Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.

Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \ats is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two flavours for the non-leader \ats:
once they have acknowledged the leader, they either block on a semaphore or spin yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned over the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers since \procs eventually run out of work.
In the yielding flavour, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by
$\frac{CSL + SL}{NP - 1}$, where $CSL$ is the context-switch latency, $SL$ is the cost of enqueueing and dequeuing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.
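
For a sense of scale, substituting assumed (not measured) costs of $CSL + SL = 1~\mu s$ into this expression gives
\[
\frac{1~\mu s}{2 - 1} = 1~\mu s \mbox{ at } NP = 2
\qquad\mbox{and}\qquad
\frac{1~\mu s}{192 - 1} \approx 5~ns \mbox{ at } NP = 192,
\]
so under strictly FIFO scheduling the delay between leaders should shrink as \procs are added; measured delays far above this ideal point to unfairness rather than raw scheduling cost.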

While this is a fairly artificial scenario, it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.

\begin{figure}
\begin{cfa}
Thread.lead() {
        this.idx_seen = ++lead_idx
        if lead_idx > stop_idx {
                done := true
                return
        }
        // Wait for everyone to acknowledge my leadership
        start := timeNow()
        for t in threads {
                while t.idx_seen != lead_idx {
                        asm pause
                        if (timeNow() - start) > 5 seconds { error() }
                }
        }
        // pick next leader
        leader := threads[ prng() % len(threads) ]
        // wake everyone
        if ! exhaust {
                for t in threads {
                        if t != me { t.wake() }
                }
        }
}
Thread.wait() {
        this.idx_seen = lead_idx
        if exhaust { wait() }
        else { yield() }
}
Thread.main() {
        while !done {
                if leader == me { this.lead() }
                else { this.wait() }
        }
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}

\subsection{Results}
\begin{figure}
\begin{centering}
\begin{tabular}{r | c c c c | c c c c }
Machine   &                     \multicolumn{4}{c |}{Intel}                &          \multicolumn{4}{c}{AMD}                    \\
Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
\procs    &      2      &      192   &      2      &      192      &      2      &      256   &      2      &      256    \\
\hline
\CFA      & 106 $\mu$s  & ~19.9 ms   & 68.4 $\mu$s & ~1.2 ms       & 174 $\mu$s  & ~28.4 ms   & 78.8~~$\mu$s& ~~1.21 ms   \\
libfibre  & 127 $\mu$s  & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s  & ~36.7 ms   & DNC         & DNC         \\
Go        & 106 $\mu$s  & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s  & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
Tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that 5 seconds after a new leader was decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{figure}

Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and with all \procs, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrency primitives begin to matter.
As such, data points that are on the same order of magnitude as each other should be considered basically equal.
The takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment was only run for the extremes of the number of cores, since the scaling per core behaves like the previous experiments.
This experiment clearly demonstrates that while the other runtimes achieve similar performance in the previous benchmarks, here \CFA achieves significantly better fairness.
The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment never terminates.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly costly, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, this is likely because fairness is equivalent in both, while the cost of the semaphores and idle-sleep is removed.