\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups for evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
All of these systems have a \gls{uthrding} model.
The goal of this chapter is to show, through the different experiments, that the \CFA scheduler obtains performance equivalent to that of other schedulers with weaker fairness guarantees.
Note that only the code of the \CFA tests is shown;
all tests in the other systems are functionally identical and available both online~\cite{GITHUB:SchedulingBenchmarks} and submitted to UWSpace with the thesis itself.
\section{Benchmark Environment}\label{microenv}

All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for a total of 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
For the Intel machine, this means that from 1 to 24 \procs one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs still only one socket is used but \emph{with} hyperthreading.
This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{
Hyperthreading normally refers specifically to the technique used by Intel, however, it is often used generically to refer to any equivalent feature.}
is used when the \proc count reaches 65 and 193.
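This placement rule is simple enough to express compactly.
The following sketch, in the same Go-style pseudo code as the benchmarks below, illustrates the allocation for the Intel machine; the @place@ function and its 0-indexed numbering are illustrative assumptions, as the actual CPU identifiers passed to @taskset@ depend on the machine's topology enumeration.
\begin{cfa}
place( proc ) {	// proc is 0-indexed
	socket := proc / 48	// fill an entire socket before the next
	idx := proc % 48
	core := idx % 24	// 24 cores per socket
	hthrd := idx / 24	// 0 => first hardware thread of a core, 1 => hyperthread sibling
	return socket, core, hthrd
}
\end{cfa}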

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
On both architectures, an L2 cache miss served by the L3 cache of a different CPU incurs significant latency; however, on the AMD machine, a miss served by a different L3 instance on the \emph{same} CPU also incurs high latency.

\section{Experimental Setup}

Each experiment is run 15 times, varying the number of \procs over a range that depends on the machine.
All experiments gather throughput data and secondary data for scalability or latency.
The data is graphed using a solid, a dashed, and a dotted line, representing the median, maximum and minimum results respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
An alternative display is to use error bars with min/max as the bottom/top for the bar.
However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
This graph presentation offers an overview of the distribution of the results for each experiment.

For each experiment, four graphs are generated showing traditional throughput on the top row and \newterm{scalability} or \newterm{latency} on the bottom row (peek ahead to Figure~\ref{fig:cycle:jax}).
Scalability uses the same data as throughput but the Y-axis is calculated as the number of \procs over the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
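Concretely, if $T(P)$ is the measured throughput with $P$ \procs, the scalability graphs plot $S(P) = P / T(P)$, which can be read as the aggregate processor time consumed per operation.
Under perfect scaling, $T(P) = c \times P$ for some constant $c$, so $S(P) = 1/c$ is a horizontal line; the $T$ and $S$ notation is introduced here only for explanation and does not appear in the graphs.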

The left column shows results for hundreds of \ats per \proc, enough to always keep every \proc busy.
The right column shows results for very few \ats per \proc, where the ready queues are expected to be near empty most of the time.
The distinction between many and few \ats is meaningful because the idle-sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
\section{Cycle}

The most basic evaluation of any ready queue is the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a @yield@ operation, many systems use this operation as the fundamental benchmark.
However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change.
Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I designed a different push/pop benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges several \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before \glslink{atblock}{parking} itself.
Unparking the next \at pushes that \at onto the ready queue while the ensuing \park leads to a \at being popped from the ready queue.

\begin{figure}
	\centering
	\input{cycle.pstex_t}
	\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before \glslink{atblock}{parking} itself.}
	\label{fig:cycle}
\end{figure}

Therefore, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at \glslink{atsched}{unparking} and the current \at \glslink{atblock}{parking}.
That is, the runtime cannot anticipate that the current \at immediately parks.
As well, the size of the cycle must account for this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
(Note, an \unpark is like a V on a semaphore, so the subsequent \park (P) may not block.)
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent the runtime from silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully \park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark, where each cycle has 5 \ats.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw \park/\unpark, and careful ordering of the @P@ and @V@ with respect to the loop condition.
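As an illustration, the following sketch shows one plausible shape for the termination handling using a binary semaphore per \at; it is an assumption-laden simplification, not the exact benchmark code, which is available online~\cite{GITHUB:SchedulingBenchmarks}.
\begin{cfa}
Thread.main() {
	count := 0
	for {
		this.next.sem.V()	// wake next thread; extra V's saturate the binary semaphore
		this.sem.P()	// may not block if already V'ed
		count ++
		if must_stop() { break }
	}
	this.next.sem.V()	// extra V so the successor cannot hang on its final P
	global.count += count
}
\end{cfa}
Because the @V@ precedes the loop-condition check, a \at that observes termination has already unparked its successor, and the extra @V@ after the loop ensures the successor is never left blocked.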

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@this.next.wake()@
		@wait()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark: Pseudo Code]{Cycle Benchmark: Pseudo Code}
\label{fig:cycle:code}
\bigskip
	\subfloat[][Throughput, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:ops}
	}
	\subfloat[][Throughput, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:low:ops}
	}

	\subfloat[][Scalability, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:ns}
	}
	\subfloat[][Scalability, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:low:ns}
	}
	\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:cycle:jax}
\end{figure}

\begin{figure}
	\subfloat[][Throughput, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.nasus.ops.pstex_t}
		}
		\label{fig:cycle:nasus:ops}
	}
	\subfloat[][Throughput, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.nasus.ops.pstex_t}
		}
		\label{fig:cycle:nasus:low:ops}
	}

	\subfloat[][Scalability, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.nasus.ns.pstex_t}
		}
		\label{fig:cycle:nasus:ns}
	}
	\subfloat[][Scalability, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.nasus.ns.pstex_t}
		}
		\label{fig:cycle:nasus:low:ns}
	}
	\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:cycle:nasus}
\end{figure}

\subsection{Results}

Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment on Intel and AMD, respectively.
Looking at the left column on Intel, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show the results for 100 cycles of 5 \ats for each \proc.
\CFA, Go and Tokio all obtain effectively the same throughput performance.
Libfibre is slightly behind in this case but still scales decently.
As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement for all runtimes, which can be seen as a flattening of the line.
This effect even causes a decrease in throughput in libfibre's case.
As expected, this pattern repeats between \proc counts 72 and 96.

Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for 1 cycle of 5 \ats for each \proc.
\CFA and Tokio obtain very similar results overall, but Tokio shows more variation in the results.
Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
The overhead can come either from \procs actually running out of work or simply from tracking whether or not work is available.
Indeed, unlike in the left column, the ready queue is likely transiently empty, which triggers additional synchronization steps.
Interestingly, libfibre achieves better performance with 1 cycle.

Looking now at the results for the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results are overall similar to the Intel results, but with close to double the performance, slightly increased variation, and some differences in the details.
Note that the Y-axis maximums on Intel and AMD differ significantly.
Looking at the left column on AMD, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, all 4 runtimes achieve very similar throughput and scalability.
However, as the number of \procs grows higher, the results on AMD show notably more variability than on Intel.
The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
Looking next at the right column on AMD, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
This result is different from Intel, where Tokio behaved like \CFA rather than behaving like Go.
Again, the same performance increase for libfibre is visible when running fewer \ats.
I did not investigate the libfibre performance boost for 1 cycle in this experiment.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
Clearly, the pathological case with 1 cycle per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
In this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
For this experiment, the \CFA scheduler achieves the goal of obtaining performance equivalent to other schedulers with weaker fairness guarantees.

\section{Yield}

For completeness, the classic yield benchmark is included.
Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where a ratio close to 1 means the ready queue(s) can be empty, which again puts a strain on the idle-sleep handling.
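For reference, the Go version of this loop is nearly identical to the pseudo code, with @runtime.Gosched()@ serving as the yield; the @must_stop()@ and @global_count@ scaffolding below is a hedged approximation of the shared harness, not the exact benchmark code.
\begin{cfa}
func (t *Thread) main() {
	count := 0
	for {
		runtime.Gosched()	// Go's cooperative yield
		count++
		if must_stop() { break }
	}
	atomic.AddUint64( &global_count, uint64(count) )
}
\end{cfa}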

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark: Pseudo Code]{Yield Benchmark: Pseudo Code}
\label{fig:yield:code}
%\end{figure}
\bigskip
%\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:ns}
	}
	\subfloat[][Scalability, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:low:ns}
	}
	\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:yield:jax}
\end{figure}

\subsection{Results}

Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment on Intel and AMD, respectively.
Looking at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for 100 \ats for each \proc.
Note that the Y-axis on this graph is twice as large as for the Intel cycle graph.
A glance at the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
Hence, the @yield@ throughput and scalability graphs have similar shapes to the corresponding @cycle@ graphs.
The only difference is slightly better performance for @yield@ because of less synchronization.
Libfibre has special handling for @yield@, exploiting the fact that the number of ready fibres does not change and thereby bypassing the idle-sleep mechanism entirely.
Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance on the left column.
Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it a lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@.
Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
Note that since there is no communication among \ats, locality problems are much less likely than for the cycle benchmark.
This lack of communication is probably why the plateaus due to topology are not present.

Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for 1 \at for each \proc.
As for @cycle@, \CFA's cost of idle sleep comes into play in a significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
This result is to be expected since fewer \ats mean \procs are more likely to run out of work.
On the other hand, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
This results in libfibre outperforming the other runtimes by an even larger margin, achieving 8 times the throughput of @cycle@.
Finally, Go and Tokio's performance collapse is still the same with fewer \ats.
The only exception is Tokio running on 24 \procs, deepening the mystery of its yielding mechanism further.

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.nasus.ops.pstex_t}
		}
		\label{fig:yield:nasus:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.nasus.ops.pstex_t}
		}
		\label{fig:yield:nasus:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.nasus.ns.pstex_t}
		}
		\label{fig:yield:nasus:ns}
	}
	\subfloat[][Scalability, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.nasus.ns.pstex_t}
		}
		\label{fig:yield:nasus:low:ns}
	}
	\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:yield:nasus}
\end{figure}

Looking now at the results for the AMD architecture, Figure~\ref{fig:yield:nasus}, the results again show a story that is overall similar to the results on Intel, with increased variation and some differences in the details.
Note that the Y-axis maximums on Intel and AMD differ less in @yield@ than in @cycle@.
Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves throughput and scaling very similar to its Intel results.
Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
This anomaly suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine is not, once hyperthreading is needed.
Go and Tokio still display the same performance collapse as on Intel.
Looking next at the right column on AMD, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime systems effectively behave the same as they did on the Intel machine.
At the high \at count, the only difference is libfibre's scaling, and this difference disappears on the right column.
This behaviour suggests whatever communication issue it encountered on the left is completely circumvented on the right.

It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.

\section{Churn}

The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.

The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
With processor-specific ready-queues, when a \at is unblocked by a \proc other than the one it last ran on, the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
Hence, the performance of this benchmark is dominated by cache traffic, as \procs constantly access each others' data.
In either case, this benchmark aims to measure how well a scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore, potentially blocking itself.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
\eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and now there are no \ats to unblock (V) them.
Note that the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@
		@spots[r].P()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark: Pseudo Code]{Churn Benchmark: Pseudo Code}
\label{fig:churn:code}
%\end{figure}
\bigskip
%\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:ops}
	}
	\subfloat[][Throughput, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:ns}
	}
	\subfloat[][Scalability, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:low:ns}
	}
	\caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:churn:jax}
\end{figure}

\subsection{Results}

Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the results for the churn experiment on Intel and AMD, respectively.
Looking at the left column on Intel, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show the results for 100 \ats for each \proc, and all runtimes obtain fairly similar throughput for most \proc counts.
\CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
As expected, it scales decently up to 48 \procs, drops from 48 to 72 \procs, and then plateaus.
Tokio achieves very similar performance to \CFA: the same starting boost, decent scaling until 48 \procs, a drop from 48 to 72 \procs, and then renewed growth up to 192 \procs.
Libfibre obtains effectively the same results as Tokio with slightly less scaling, \ie the scaling curve is the same but with slightly lower values.
Finally, Go gets the most peculiar results, scaling worse than the other runtimes until 48 \procs.
At 72 \procs, the results of the Go runtime vary significantly, sometimes scaling, sometimes plateauing.
However, beyond this point Go keeps this level of variation but does not scale further in any of the runs.

Throughput and scalability are notably worse for all runtimes than in the previous benchmarks, since there is inherently more communication between processors.
Indeed, none of the runtimes reach 40 million operations per second, while in the cycle benchmark all but libfibre reached 400 million operations per second.
Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.

An interesting aspect to note here is that the runtimes differ in how they handle this situation.
Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
\CFA, Tokio and Go all use the approach of \glslink{atsched}{unparking} to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.

Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns} show the results for 2 \ats for each \proc, and many of the differences between the runtimes disappear.
\CFA outperforms the other runtimes by a minuscule margin.
Libfibre follows very closely behind with basically the same performance and scaling.
Tokio maintains effectively the same curve shapes as \CFA and libfibre, but it incurs extra costs for all \proc counts.
While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts, inexplicably resulting in super-linear scaling for some runs, \ie scalability curves with a negative slope.

Interestingly, unlike the cycle benchmark, running with fewer \ats does not produce drastically different results.
In fact, the overall throughput stays almost exactly the same on the left and right columns.

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.nasus.ops.pstex_t}
		}
		\label{fig:churn:nasus:ops}
	}
	\subfloat[][Throughput, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.nasus.ops.pstex_t}
		}
		\label{fig:churn:nasus:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.nasus.ns.pstex_t}
		}
		\label{fig:churn:nasus:ns}
	}
	\subfloat[][Scalability, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.nasus.ns.pstex_t}
		}
		\label{fig:churn:nasus:low:ns}
	}
	\caption[Churn Benchmark on AMD]{Churn Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:churn:nasus}
\end{figure}


Looking now at the results for the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, libfibre and Tokio all produce decent scalability.
\CFA suffers particularly from larger variations at higher \proc counts, but largely outperforms the other runtimes.
Go still produces intriguing results in this case, and even more intriguingly, the results have fairly low variation.

One possible explanation for Go's difference is that it has very few available concurrency primitives, so a channel is substituted for a semaphore.
On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
However, in practice, there are implementation differences between the two, \eg when the semaphore count gets somewhat high, objects accumulate in the channel.
Note that this substitution is also made in the cycle benchmark;
however, in that context, it did not have a notable impact.
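To make the substitution concrete, the following Go sketch shows the standard idiom of building a counting semaphore from a buffered channel; the @Semaphore@ type, its methods, and the capacity are hypothetical, not the exact benchmark code.
\begin{cfa}
type Semaphore chan struct{}	// zero-sized objects pass through the channel

func (s Semaphore) V() { s <- struct{}{} }	// blocks if the buffer is full, unlike a true V
func (s Semaphore) P() { <-s }	// blocks while the channel is empty

// usage: sem := make(Semaphore, 100) bounds the count at 100
\end{cfa}
The bounded buffer is one of the implementation differences alluded to above: a true counting semaphore only updates a counter, whereas every @V@/@P@ pair moves an object through the channel's buffer.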

A second possible explanation is that Go may use the heap when allocating variables, based on the result of its escape analysis of the code.
It is possible for variables that could be placed on the stack to instead be placed on the heap.
This placement could cause extra pointer chasing in the benchmark, heightening locality effects.
Depending on how the heap is structured, this could also lead to false sharing.
I did not investigate what causes these unusual results.

Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, as for Intel, all runtimes obtain overall similar throughput between the left and right columns.
\CFA, libfibre and Tokio all have very close results.
Go still suffers from poor scalability but is now unusual in a different way.
While it obtains effectively constant performance regardless of \proc count, this ``sequential'' performance is higher than the other runtimes' up to 32 \procs, after which the other runtimes manage to outscale Go.

In conclusion, the objective of this benchmark is to demonstrate that \glslink{atsched}{unparking} \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that most runtimes achieve some scaling between various \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance compared to the other runtimes.

\section{Locality}

As mentioned in the churn benchmark, when \glslink{atsched}{unparking} a \at, it is possible to either \unpark to the local or remote sub-queue.\footnote{
It is also possible to \unpark to a third unrelated sub-queue, but without additional knowledge about the situation, doing so is likely to degrade performance.}
The locality experiment includes two variations of the churn benchmark, where a data array is added.
In both variations, before @V@ing the semaphore, each \at calls a @work@ function which increments random cells inside the data array.
In the noshare variation, the array is not passed on and each \at continuously accesses its private array.
In the share variation, the array is passed to another \at via the semaphore's shadow queue (each blocking \at can save a word of user data in its blocking node), transferring ownership of the array to the woken \at.
Figure~\ref{fig:locality:code} shows the pseudo code for this benchmark.

The objective here is to highlight the different decisions made by the runtime when \glslink{atsched}{unparking}.
Since each \at @V@s a random semaphore, it is unlikely that a \at is unparked by the \proc it last ran on.
In the noshare variation, \glslink{atsched}{unparking} the \at on the \proc where it last ran is an appropriate choice, since its private array was last modified on that \proc.
In the share variation, \glslink{atsched}{unparking} the \at on the unblocking \proc is an appropriate choice, since that \proc just finished modifying the transferred array.

The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation that matches their \glslink{atsched}{unparking} policy.
This decision should lead to \CFA, Go and Tokio achieving better performance in the share variation, while libfibre achieves better performance in noshare.
Indeed, \CFA, Go and Tokio have the default policy of \glslink{atsched}{unparking} \ats on the local \proc, whereas libfibre has the default policy of \glslink{atsched}{unparking} \ats wherever they last ran.
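
The shadow-queue data passing can be pictured as a channel that carries the array pointer alongside the wakeup.
In the following hedged Go sketch, the @DataSem@ type, the @WorkSize@ constant, and the method names are hypothetical stand-ins for the benchmark's semaphore.
\begin{cfa}
const WorkSize = 1024	// hypothetical array size

type DataSem chan *[WorkSize]uint64

// V publishes the array pointer with the wakeup, transferring ownership
func (s DataSem) V( a *[WorkSize]uint64 ) { s <- a }

// P blocks until woken and receives ownership of a new array
func (s DataSem) P() *[WorkSize]uint64 { return <-s }
\end{cfa}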

\begin{figure}
\newsavebox{\myboxA}
\newsavebox{\myboxB}

\begin{lrbox}{\myboxA}
\begin{cfa}[tabsize=3]
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@

		spots[r].V()
		spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[tabsize=3]
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@
		// pass array to next thread
		spots[r].V( @a@ )
		@a = @spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\end{lrbox}

\subfloat[Noshare]{\label{fig:locality:code:T1}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[Share]{\label{fig:locality:code:T2}\usebox\myboxB}

\caption[Locality Benchmark: Pseudo Code]{Locality Benchmark: Pseudo Code}
\label{fig:locality:code}
\end{figure}

\subsection{Results}

Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment on Intel and AMD, respectively.
In both cases, the graphs in the left column show the results for the share variation and the graphs in the right column show the results for noshare.
Looking at the left column on Intel, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns} show the results for the share variation.
\CFA and Tokio slightly outperform libfibre, as expected, based on their \at placement approach.
\CFA and Tokio both \unpark locally and do not suffer cache misses on the transferred array.
Libfibre, on the other hand, unparks remotely, and as such the unparked \at is likely to miss on the shared data.
Go trails behind in this experiment, presumably for the same reasons observed in the churn benchmark.
Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
As in most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary, and climb again from 96 to 192 \procs.

\begin{figure}
	\subfloat[][Throughput share]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.share.jax.ops.pstex_t}
		}
		\label{fig:locality:jax:share:ops}
	}
	\subfloat[][Throughput noshare]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.noshare.jax.ops.pstex_t}
		}
		\label{fig:locality:jax:noshare:ops}
	}

	\subfloat[][Scalability share]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.share.jax.ns.pstex_t}
		}
		\label{fig:locality:jax:share:ns}
	}
	\subfloat[][Scalability noshare]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.noshare.jax.ns.pstex_t}
		}
		\label{fig:locality:jax:noshare:ns}
	}
	\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:locality:jax}
\end{figure}

\begin{figure}
	\subfloat[][Throughput share]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.share.nasus.ops.pstex_t}
		}
		\label{fig:locality:nasus:share:ops}
	}
	\subfloat[][Throughput noshare]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.noshare.nasus.ops.pstex_t}
		}
		\label{fig:locality:nasus:noshare:ops}
	}

	\subfloat[][Scalability share]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.share.nasus.ns.pstex_t}
		}
		\label{fig:locality:nasus:share:ns}
	}
	\subfloat[][Scalability noshare]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.locality.noshare.nasus.ns.pstex_t}
		}
		\label{fig:locality:nasus:noshare:ns}
	}
	\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
	For throughput, higher is better, for scalability, lower is better.
	Each series represents 15 independent runs.
	The dashed lines are the maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
	\label{fig:locality:nasus}
\end{figure}

Looking at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns} show the results for the noshare variation.
The graphs show the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array, which leaves the \at data structure and the remote queue as the only likely sources of cache misses.
The results show both are amortized fairly well in this case.
\CFA and Tokio both \unpark locally and, as a result, suffer a marginal performance degradation from the cache miss on the array.

Looking at the results for the AMD architecture, Figure~\ref{fig:locality:nasus} shows results similar to Intel.
Again, the overall performance is higher and slightly more variation is visible.
Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
This advantage is expected on the AMD server, whose smaller and more partitioned caches magnify the cost of processing the array.
Go still has the same poor performance as on Intel.

Finally, looking at the right column, Figures~\ref{fig:locality:nasus:noshare:ops} and \ref{fig:locality:nasus:noshare:ns}, like on Intel, the same performance inversion is present between libfibre and \CFA/Tokio.
Go still has the same poor performance.

Overall, this benchmark mostly demonstrates the two options available when \glslink{atsched}{unparking} a \at.
Depending on the workload, either of these options can be the appropriate one.
Since it is prohibitively difficult to dynamically detect which approach is appropriate, all runtimes must choose one of the two and live with the consequences.

Once again, these experiments demonstrate that \CFA achieves equivalent performance to the other runtimes, in this case matching the faster Tokio rather than Go, which is trailing behind.

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one \at is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two variations for the non-leader \ats:
once they have acknowledged the leader, they either block on a semaphore or spin while yielding.
Figure~\ref{fig:transfer:code} shows pseudo code for this benchmark.

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done = true
		return
	}
	// Wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}
	// pick next leader
	leader = threads[ prng() % len(threads) ]
	// wake everyone
	if ! exhaust {
		for t in threads {
			if t != me { t.wake() }
		}
	}
}
Thread.wait() {
	this.idx_seen = lead_idx
	if exhaust { wait() }
	else { yield() }
}
Thread.main() {
	while !done {
		if leader == me { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark: Pseudo Code]{Transfer Benchmark: Pseudo Code}
\label{fig:transfer:code}
\end{figure}

The experiment is designed to evaluate the short-term load balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned across the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore variation, the number of runnable \ats eventually dwindles to only the leader.
This scenario is a simpler case to handle for schedulers, since \procs eventually run out of work.
In the yielding variation, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note that runtimes with preemption circumvent this problem by forcing the spinner to yield.
In \CFA, preemption is disabled for this experiment because it only obfuscates the results.
I am not aware of a method to disable preemption in Go.

In both variations, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every \at runs once after the synchronization, and therefore the delay between leaders is given by $NT \times (CSL + SL) / (NP - 1)$,
where $CSL$ is the context-switch latency, $SL$ is the cost of enqueueing and dequeueing a \at, $NT$ is the number of \ats, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats can run once, this delay increases.
The semaphore variation is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding variation, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.
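As a rough, illustrative sanity check of this formula (since $CSL$ and $SL$ are not measured directly): at 2 \procs with 100 \ats per \proc, $NT = 200$ and $NP = 2$, so a combined cost of $CSL + SL \approx 0.5~\mu$s predicts a delay of $200 \times 0.5 / 1 = 100~\mu$s between leaders, the same order of magnitude as the Park measurements at 2 \procs in Table~\ref{fig:transfer:res}.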

While this is an artificial scenario, it requires only a few simple pieces to arise in real life.
The yielding variation simply creates a scenario where a \at runs uninterrupted in a saturated system, and the starvation has an easily measured impact.
Hence, \emph{any} \at that runs uninterrupted for a significant time in a saturated system could lead to this kind of starvation.

\subsection{Results}

\begin{table}
\setlength{\extrarowheight}{2pt}
\setlength{\tabcolsep}{5pt}
\begin{centering}
\begin{tabular}{r | c | c | c | c | c | c | c | c}
Machine   &                     \multicolumn{4}{c |}{Intel}                &          \multicolumn{4}{c}{AMD}             \\
\cline{2-9}
Variation & \multicolumn{2}{c|}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c|}{Park} & \multicolumn{2}{c}{Yield} \\
\cline{2-9}
\procs    &      2      &      192   &      2      &      192      &      2      &      256   &      2      &      256    \\
\hline
\CFA      & 106 $\mu$s & ~19.9 ms   & 68.4 $\mu$s & ~1.2 ms       & 174 $\mu$s & ~28.4 ms   & 78.8~~$\mu$s& ~~1.21 ms   \\
libfibre  & 127 $\mu$s & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s & ~36.7 ms   & DNC         & DNC         \\
Go        & 106 $\mu$s & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
Tokio     & 289 $\mu$s & 180.6 ms   & DNC         & DNC           & 157 $\mu$s & 111.0 ms   & DNC         & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
For each runtime, the average is calculated over 100'000 transfers, except for Go, which only has 1'000 transfers (due to the difference in transfer time).
DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{table}

Table~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and all \procs on the computer, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences between the runtimes and concurrency primitives begin to matter.
As such, data points within the same order of magnitude are considered equal.
That is, the takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment is only run for few and many \procs, since scaling is not the focus of this experiment.

The first two columns show the results for the semaphore variation on Intel.
While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve fairly close results.
Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered equal.

Looking at the next two columns, the results for the yield variation on Intel, the story is very different.
\CFA achieves better latencies, presumably due to the lack of synchronization with the yield.
Go does complete the experiment, but with drastically higher latency:
latency at 2 \procs is $350\times$ higher than \CFA's and $70\times$ higher at 192 \procs.
This difference is because Go has a classic work-stealing scheduler, but adds coarse-grain preemption, which interrupts the spinning leader after a period.
Neither libfibre nor Tokio complete the experiment.
Both runtimes also use classical work-stealing scheduling without preemption, and therefore, none of the work queues are ever emptied, so no load balancing occurs.

Looking now at the results for the AMD architecture, the results show effectively the same story.
The first two columns show all runtimes obtaining results well within $10\times$ of each other.
The next two columns again show \CFA producing low latencies, while Go still has notably higher latency, but the difference is less drastic at 2 \procs, where it produces a $15\times$ difference, as opposed to a $100\times$ difference at 256 \procs.
Neither libfibre nor Tokio complete the experiment.

This experiment clearly demonstrates that \CFA achieves a stronger fairness guarantee.
The semaphore variation serves as a control, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Table~\ref{fig:transfer:res} shows that while Go and Tokio are slower using the semaphore, all runtimes achieve decent latency.

However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment cannot terminate.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly infrequent, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding variation than in the semaphore variation; however, this is likely because fairness is equivalent while the costs of the semaphores and idle sleep are removed.