\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test out small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}\label{microenv}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
In total, the machine has 4 MB, 64 MB and 512 MB of L1, L2 and L3 cache, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
In total, the machine has 3 MB, 96 MB and 132 MB of L1, L2 and L3 cache, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, on both architectures, an L2 cache miss served by an L3 cache on a different CPU incurs significant latency; on the AMD, however, cache misses served by a different L3 instance on the same CPU also incur high latency.


\section{Cycling latency}
\begin{figure}
	\centering
	\input{cycle.pstex_t}
	\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before parking itself.}
	\label{fig:cycle}
\end{figure}
The most basic evaluation of any ready queue is the latency needed to push and pop one element from it.
Since these two operations also describe a @yield@ operation, many systems use this operation as the most basic benchmark.
However, yielding can be treated as a special case and some aspects of the scheduler can be optimized away since the number of ready \ats does not change.
Not all systems perform this type of optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before parking itself.
Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.

Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
In fact, the total number of \ats waiting on the ready queue is expected to vary because of the delay between the next \at unparking and the current \at parking.
That is, the runtime cannot anticipate that the current task will immediately park.
The size of the cycle is also chosen based on this delay.
Note that an unpark is like a V on a semaphore, so the subsequent park (P) may not block.
If this happens, the scheduler push and pop are avoided and the results of the experiment would be skewed.
Because of time-slicing, or because cycles can be spread over multiple \procs, a small cycle may see the chain of unparks go full circle before the first \at parks.
Every runtime system must handle this race, but it cannot optimize away the ready-queue pushes and pops if the cycle is long enough.
To prevent any attempt of silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition; one possible approach is sketched after the figure.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@this.next.wake()@	// push the next thread onto the ready queue
		@wait()@	// park until unparked by the previous thread
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
\end{figure}
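
To illustrate the termination handling mentioned above, the following is a minimal sketch of one possible approach, assuming the @wake@/@wait@ of the figure are replaced by semaphore @V@/@P@ operations; it is not the exact code used in the experiments.
Performing the @V@ \emph{before} checking the loop condition means every \at performs a final @V@ before exiting, so a \at blocked in @P@ is always released and no \at parks forever once @must_stop@ becomes true.
\begin{cfa}
Thread.main() {
	count := 0
	for {
		this.next.sem.V()	// a V leaves a token, unlike a raw unpark
		if must_stop() { break }	// checked after waking the next thread
		this.sem.P()	// returns immediately if the token is already there
		count ++
	}
	global.count += count
}
\end{cfa}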

\subsection{Results}
\begin{figure}
	\subfloat[][Throughput, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:ops}
	}
	\subfloat[][Throughput, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:low:ops}
	}

	\subfloat[][Scalability, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:ns}
	}
	\subfloat[][Scalability, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:low:ns}
	}
	\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:cycle:jax}
\end{figure}

\begin{figure}
	\subfloat[][Throughput, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.nasus.ops.pstex_t}
		}
		\label{fig:cycle:nasus:ops}
	}
	\subfloat[][Throughput, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.nasus.ops.pstex_t}
		}
		\label{fig:cycle:nasus:low:ops}
	}

	\subfloat[][Scalability, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.nasus.ns.pstex_t}
		}
		\label{fig:cycle:nasus:ns}
	}
	\subfloat[][Scalability, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.nasus.ns.pstex_t}
		}
		\label{fig:cycle:nasus:low:ns}
	}
	\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:cycle:nasus}
\end{figure}
Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
Scalability uses the same data, but the Y axis is calculated as the number of \procs divided by the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
The right column shows results for only 1 cycle per \proc, where the ready queues are expected to be near empty at all times.
The distinction is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
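Concretely, writing $T(p)$ for the throughput measured with $p$ \procs, the scalability curves plot
\[ S(p) = \frac{p}{T(p)}, \]
so a runtime whose throughput grows linearly, $T(p) = c \cdot p$, appears as the horizontal line $S(p) = 1/c$, and any upward drift indicates sub-linear scaling.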

The experiment was run 15 times for each series and processor count, and the \emph{$\times$}s on the graph show all of the results obtained.
Each series also has a solid and two dashed lines highlighting the median, maximum and minimum results respectively.
This presentation offers an overview of the distribution of the results for each series.

The experimental setup uses @taskset@ to limit the placement of \glspl{kthrd} by the operating system.
As mentioned in Section~\ref{microenv}, the experiment is set up to prioritize running on 2 \glspl{hthrd} per core before running on multiple sockets.
For the Intel machine, this means that from 1 to 24 \procs, one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs, still only one socket but \emph{with} hyperthreading.
This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{Hyperthreading normally refers specifically to the technique used by Intel; however, here it is loosely used to refer to AMD's equivalent feature.} is used when the \proc count reaches 65 and 193.
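
As an illustration, the placement just described corresponds to a mapping along the following lines, shown for the Intel machine; the arithmetic assumes a per-socket numbering of \glspl{hthrd} and a hypothetical @pin@ primitive, so this is a sketch of the policy rather than the actual setup script.
\begin{cfa}
for p in 0 .. nprocs {
	socket := p / 48	// fill a whole socket before moving to the next
	core := (p % 48) % 24	// use all 24 cores first ...
	hthrd := (p % 48) / 24	// ... then their second hyperthreads
	pin( p, socket, core, hthrd )
}
\end{cfa}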

Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
Libfibre is slightly behind in this case but still scales decently.
As a result of the \gls{kthrd} placement, the additional \procs from 25 to 48 offer smaller performance improvements for all runtimes.
As expected, this pattern repeats again between \proc counts 73 and 96.
The performance goal of \CFA is to obtain equivalent performance to other, less fair schedulers, and that is what the results show.
Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.

When running only a single cycle, the story is slightly different.
\CFA and Tokio obtain very similar results overall, but Tokio shows notably more variation in the results.
While \CFA, Go and Tokio achieve equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
This difference in throughput and scalability is due to the idle-sleep mechanism.
With very few cycles, stealing or helping can cause a cascade of \at migrations and trick \procs into very short idle sleeps.
Both effects negatively affect performance.

An interesting and unusual result is that libfibre achieves better performance with fewer cycles.
This suggests that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
However, I did not investigate this result any deeper.

Figure~\ref{fig:cycle:nasus} shows a similar story happening on AMD as on Intel.
The different performance improvements and plateaus due to cache topology appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc.

In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
However, unlike on Intel, Tokio achieves the same performance as Go rather than \CFA.
This leaves \CFA trailing behind in this particular case, but only at high core counts.
Presumably, this is because in this case \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
Since this effect is only problematic in cases with 1 \at per \proc, it is not very meaningful for general performance.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance in this scenario, which demonstrates that \CFA achieves its performance goal.

\section{Yield}
For completeness, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \ats that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@	// push this thread onto the ready queue and pop the next
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
\end{figure}

\subsection{Results}
\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:ns}
	}
	\subfloat[][Scalability, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:low:ns}
	}
	\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, with 100 \ats and 1 \at per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:yield:jax}
\end{figure}

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.nasus.ops.pstex_t}
		}
		\label{fig:yield:nasus:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.nasus.ops.pstex_t}
		}
		\label{fig:yield:nasus:low:ops}
	}

	\subfloat[][Scalability, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.nasus.ns.pstex_t}
		}
		\label{fig:yield:nasus:ns}
	}
	\subfloat[][Scalability, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.nasus.ns.pstex_t}
		}
		\label{fig:yield:nasus:low:ns}
	}
	\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, with 100 \ats and 1 \at per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:yield:nasus}
\end{figure}
Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count on Intel.
It is fairly obvious why I claim this benchmark is more artificial.
The throughput is dominated by the mechanism used to handle the @yield@.
\CFA does not have special handling for @yield@, but this experiment requires less synchronization than the cycle benchmark.
As a result, \CFA achieves better, but still comparable, performance relative to the cycle benchmark.

When the number of \ats is reduced to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
If anything causes a \at migration, where two \ats end up on the same ready queue, work stealing starts occurring and can cause several \ats to shuffle around.
In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
However, since user-level threading with an equal number of \ats and \procs is a somewhat degenerate case, especially when context switching very often, this result is not particularly meaningful and is only included for completeness.

Libfibre uses the fact that @yield@ does not change the number of ready fibres and bypasses the idle-sleep mechanism entirely, producing significantly better throughput.
Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
This results in dramatically better performance than the other runtimes.
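A minimal sketch of such a shortcut is shown below; this is my reading of the behaviour, not libfibre's actual code.
Since a yield leaves the number of ready fibres unchanged, no idle \proc needs to be woken, and when the local ready queue is empty the context switch itself can be skipped.
\begin{cfa}
yield() {
	if this_proc.queue.empty() { return }	// sole ready thread: skip the switch
	this_proc.queue.push( this_thread )	// no idle-sleep wake-up needed
	switch_to( this_proc.queue.pop() )
}
\end{cfa}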

In stark contrast with libfibre, Go puts yielding goroutines on a secondary global ready queue, giving them lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Based on the scalability, Tokio obtains similarly poor performance and therefore likely handles @yield@ in a similar fashion.
However, it must be doing something different, since it does scale at low \proc counts.

Again, Figure~\ref{fig:yield:nasus} shows effectively the same story happening on AMD as on Intel.
\CFA fares slightly better with many \ats per \proc, but the performance is satisfactory on both architectures.

Since \CFA obtains the same satisfactory performance as in the previous benchmark, this is still a success, albeit a less meaningful one.


\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.

The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no apparent relation between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or place it on a remote queue.
This enqueuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
In either case, this benchmark aims to measure how well each scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@; a sketch of such a semaphore is given after the figure.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@	// unblock a waiting thread, or increment the counter
		@spots[r].P()@	// block, unless the counter is positive
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
\end{figure}
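
For reference, the following is a minimal sketch of such a counting semaphore, where a negative counter encodes the number of blocked \ats; it is illustrative only and elides the synchronization between the counter and the waiting list.
\begin{cfa}
Semaphore.V() {
	prev := atomic_add( &this.count, 1 )
	if prev < 0 { this.waiting.pop().wake() }	// a thread was blocked: wake it
}
Semaphore.P() {
	prev := atomic_add( &this.count, -1 )
	if prev > 0 { return }	// counter was positive: the call does not block
	this.waiting.push( this_thread )
	wait()
}
\end{cfa}
A @V@ with no waiter simply leaves the counter above zero, which is what makes the subsequent @P@ nonblocking.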

\subsection{Results}
\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:ops}
	}
	\subfloat[][Throughput, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:ns}
	}
	\subfloat[][Latency, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:low:ns}
	}
	\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:churn:jax}
\end{figure}

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.nasus.ops.pstex_t}
		}
		\label{fig:churn:nasus:ops}
	}
	\subfloat[][Throughput, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.nasus.ops.pstex_t}
		}
		\label{fig:churn:nasus:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.nasus.ns.pstex_t}
		}
		\label{fig:churn:nasus:ns}
	}
	\subfloat[][Latency, 2 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.nasus.ns.pstex_t}
		}
		\label{fig:churn:nasus:low:ns}
	}
	\caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine. For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
	\label{fig:churn:nasus}
\end{figure}
Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
They use the same representation as the previous benchmark: 15 runs, where the dashed lines show the extrema and the solid line the median.
The performance cost of crossing the cache boundaries is still visible at the same \proc counts.
However, the performance of this benchmark is dominated by cache traffic, as \procs are constantly accessing each other's data.
Scalability is notably worse than in the previous benchmarks, since there is inherently more communication between processors.
Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
An interesting aspect to note here is that the runtimes differ in how they handle this situation.
Indeed, when a \proc unparks a \at that last ran on a different \proc, the \at can be appended either to the ready queue of the local \proc or to the ready queue of the remote \proc that previously ran the \at.
\CFA, Tokio and Go all use the approach of unparking to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the execution, in addition to the small memory footprint, means neither approach wins over the other.
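The two policies amount to a one-line difference in the unpark path, sketched below with illustrative names; neither runtime exposes exactly this code.
\begin{cfa}
unpark( thread ) {
	if policy == Local { this_proc.queue.push( thread ) }	// CFA, Tokio, Go
	else { thread.last_proc.queue.push( thread ) }	// libfibre: wherever it last ran
}
\end{cfa}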

Like for the cycle benchmark, all runtimes achieve fairly similar performance here.
Performance improves as long as all \procs fit on a single socket.
Beyond that, performance starts to suffer from increased caching costs.

Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 100 \ats per \proc, \CFA, libfibre and Tokio achieve effectively equivalent performance for most \proc counts.
Interestingly, Go starts with better scaling at very low \proc counts, but its performance quickly plateaus, resulting in worse performance at higher \proc counts.
This performance difference disappears in Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, where the performance of all runtimes is equivalent.

Figure~\ref{fig:churn:nasus} again shows a similar story.
\CFA, libfibre and Tokio achieve effectively equivalent performance for most \proc counts.
Go still shows different scaling than the other 3 runtimes.
The distinction is that on AMD the difference between Go and the other runtimes is more significant.
Indeed, even with only 2 \ats per \proc, Go achieves notably different scaling than the other runtimes.

One possible explanation for this difference is that since Go has very few available concurrency primitives, a channel was used instead of a semaphore.
On paper, a semaphore can be replaced by a channel passing zero-sized objects, and equivalent performance could be expected.
However, in practice, there can be implementation differences between the two.
This is especially true if the semaphore count can get somewhat high.
Note that this replacement is also made in the cycle benchmark; however, in that context it did not seem to have a notable impact.

The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance.

\section{Locality}
\begin{figure}
\begin{cfa}
// noshare variation: each thread keeps and reuses its private array
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@
		spots[r].V()
		spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\begin{cfa}
// share variation: the array is handed off to the woken thread
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@
		// pass array to next thread
		spots[r].V( @a@ )
		@a = @spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
\label{fig:locality:code}
\end{figure}
As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or the remote ready-queue.%
\footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
The locality experiment includes two variations of the churn benchmark, where an array of data is added.
In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
The @share@ variation then passes the array to the shadow queue of the semaphore, transferring ownership of the array to the woken thread.
In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.

The objective here is to highlight the different decisions made by the runtime when unparking.
Since each thread unparks a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
In the @share@ version, this means that unparking the \at on the local \proc is appropriate, since the data was last modified on that \proc.
In the @noshare@ version, unparking the \at on the remote \proc is the appropriate approach.

The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation that matches their unparking policy.
This should lead to \CFA, Go and Tokio achieving better performance in @share@, while libfibre achieves better performance in @noshare@.
Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre has the default policy of unparking \ats wherever they last ran.
[e76fa30]479
480\subsection{Results}
481\begin{figure}
482        \subfloat[][Throughput share]{
483                \resizebox{0.5\linewidth}{!}{
484                        \input{result.locality.share.jax.ops.pstex_t}
485                }
486                \label{fig:locality:jax:share:ops}
487        }
488        \subfloat[][Throughput noshare]{
489                \resizebox{0.5\linewidth}{!}{
490                        \input{result.locality.noshare.jax.ops.pstex_t}
491                }
492                \label{fig:locality:jax:noshare:ops}
493        }
494
495        \subfloat[][Scalability share]{
496                \resizebox{0.5\linewidth}{!}{
497                        \input{result.locality.share.jax.ns.pstex_t}
498                }
499                \label{fig:locality:jax:share:ns}
500        }
501        \subfloat[][Scalability noshare]{
502                \resizebox{0.5\linewidth}{!}{
503                        \input{result.locality.noshare.jax.ns.pstex_t}
504                }
505                \label{fig:locality:jax:noshare:ns}
506        }
507        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
508        \label{fig:locality:jax}
509\end{figure}
510\begin{figure}
511        \subfloat[][Throughput share]{
512                \resizebox{0.5\linewidth}{!}{
513                        \input{result.locality.share.nasus.ops.pstex_t}
514                }
515                \label{fig:locality:nasus:share:ops}
516        }
517        \subfloat[][Throughput noshare]{
518                \resizebox{0.5\linewidth}{!}{
519                        \input{result.locality.noshare.nasus.ops.pstex_t}
520                }
521                \label{fig:locality:nasus:noshare:ops}
522        }
523
524        \subfloat[][Scalability share]{
525                \resizebox{0.5\linewidth}{!}{
526                        \input{result.locality.share.nasus.ns.pstex_t}
527                }
528                \label{fig:locality:nasus:share:ns}
529        }
530        \subfloat[][Scalability noshare]{
531                \resizebox{0.5\linewidth}{!}{
532                        \input{result.locality.noshare.nasus.ns.pstex_t}
533                }
534                \label{fig:locality:nasus:noshare:ns}
535        }
536        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
537        \label{fig:locality:nasus}
538\end{figure}

Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results on Intel and AMD respectively.
In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for @noshare@.

On Intel, the results somewhat follow the expectation.
On the left, showing the results for the @share@ variation, \CFA and Tokio outperform libfibre as expected.
And correspondingly on the right, we see the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
It is unclear why Go's performance is notably worse than the other runtimes.

Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
\todo{why is cfa slower?}
Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \ats is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two flavours for the non-leader \ats:
once they acknowledge the leader, they either block on a semaphore or spin while yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned over the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers, since \procs eventually run out of work.
In the yielding flavour, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every thread runs once after the synchronization and therefore the delay between leaders is given by
$\frac{CSL + SL}{NP - 1}$, where $CSL$ is the context-switch latency, $SL$ is the cost of enqueuing and dequeuing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.

While this is a fairly artificial scenario, it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done = true
		return
	}
	// wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}
	// pick next leader
	leader := threads[ prng() % len(threads) ]
	// wake everyone
	if ! exhaust {
		for t in threads {
			if t != this { t.wake() }
		}
	}
}
Thread.wait() {
	this.idx_seen = lead_idx	// acknowledge the current leader
	if exhaust { wait() }	// semaphore flavour: block
	else { yield() }	// yielding flavour: stay runnable
}
Thread.main() {
	while !done {
		if leader == this { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}

\subsection{Results}
\begin{figure}
\begin{centering}
\begin{tabular}{r | c c c c | c c c c }
Machine   &                     \multicolumn{4}{c |}{Intel}                &          \multicolumn{4}{c}{AMD}                    \\
Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
\procs    &      2      &      192   &      2      &      192      &      2      &      256   &      2      &      256    \\
\hline
\CFA      & 106 $\mu$s & ~19.9 ms   & 68.4 $\mu$s & ~1.2 ms       & 174 $\mu$s & ~28.4 ms   & 78.8~~$\mu$s& ~~1.21 ms   \\
libfibre  & 127 $\mu$s & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s & ~36.7 ms   & DNC         & DNC         \\
Go        & 106 $\mu$s & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
tokio     & 289 $\mu$s & 180.6 ms   & DNC         & DNC           & 157 $\mu$s & 111.0 ms   & DNC         & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{figure}
Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and with all \procs, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
As such, data points that are on the same order of magnitude as each other should be considered equal.
The takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment was run only for the extreme core counts, since the scaling per core behaves like the previous experiments.
This experiment clearly demonstrates that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment simply never terminates.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly costly, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, this is likely because fairness is equivalent in both, while the yielding version removes the cost of the semaphores and idle-sleep.