\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups for evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
All of these systems have a \gls{uthrding} model.
The goal of this chapter is to show, through the different experiments, that the \CFA scheduler obtains performance equivalent to other, less fair, schedulers.
Note, only the code of the \CFA tests is shown;
all tests in the other systems are functionally identical and available online~\cite{SchedulingBenchmarks}.

\section{Benchmark Environment}\label{microenv}

All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
For the Intel machine, this means that from 1 to 24 \procs one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs still only one socket is used but \emph{with} hyperthreading.
This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{
Hyperthreading normally refers specifically to the technique used by Intel; however, it is often used generically to refer to any equivalent feature.}
is used when the \proc count reaches 65 and 193.
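For example, a hypothetical run restricted to the first 24 \glspl{hthrd} of the Intel machine (one socket, no hyperthreading) would be launched along the lines of @taskset -c 0-23 ./benchmark@, assuming the kernel numbers that socket's primary \glspl{hthrd} as 0--23; the actual CPU numbering is machine-specific.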

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.

\section{Experimental setup}

Each experiment is run 15 times, varying the number of processors according to the limits of the two computers.
All experiments gather throughput data and secondary data for scalability or latency.
The data is graphed using a solid, a dashed and a dotted line, representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
An alternative display is to use error bars with min/max as the bottom/top for the bar.
However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
This graph presentation offers an overview of the distribution of the results for each experiment.

For each experiment, four graphs are generated showing traditional throughput on the top row and \newterm{scalability} or \newterm{latency} on the bottom row (peek ahead to Figure~\ref{fig:cycle:jax}).
Scalability uses the same data as throughput but the Y axis is calculated as the number of \procs over the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
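In symbols, writing $T(P)$ for the throughput measured with $P$ \procs, the scalability graphs plot
\[
S(P) = \frac{P}{T(P)},
\]
\ie the average time each \proc spends per completed operation.
If doubling the \proc count doubles the throughput, then $S(2P) = \frac{2P}{2\,T(P)} = S(P)$ and the curve is flat;
any upward slope therefore indicates sub-linear scaling.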

The left column shows results for hundreds of \ats per \proc, enough to always keep every \proc busy.
The right column shows results for very few \ats per \proc, where the ready queues are expected to be near empty most of the time.
The distinction between many and few \ats is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.

\section{Cycle}

The most basic evaluation of any ready queue is the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a @yield@ operation, many systems use this operation as the fundamental benchmark.
However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change.
Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I designed a different push/pop benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before parking itself.
Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.

\begin{figure}
\centering
\input{cycle.pstex_t}
\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before parking itself.}
\label{fig:cycle}
\end{figure}

Therefore, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking.
That is, the runtime cannot anticipate that the current task immediately parks.
As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt of silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark, where each cycle has 5 \ats.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.

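As a concrete reference point, the following is a minimal, runnable Go sketch of this structure (my own approximation for illustration, not the benchmark code used in the experiments): a buffered channel of capacity one stands in for each \at's @park@/@unpark@ primitive, and a fixed iteration count replaces the @must_stop()@ termination protocol; the constants @ringSize@, @rings@ and @rounds@ are arbitrary.
\begin{cfa}
package main

import ( "fmt"; "sync"; "sync/atomic" )

const ( ringSize = 5; rings = 4; rounds = 100000 ) // illustrative sizes

func main() {
	var total int64
	var wg sync.WaitGroup
	for r := 0; r < rings; r++ {
		sems := make([]chan struct{}, ringSize) // one binary-semaphore-like channel per ring member
		for i := range sems { sems[i] = make(chan struct{}, 1) }
		for i := 0; i < ringSize; i++ {
			wg.Add(1)
			go func(i int) { // one goroutine per ring member
				defer wg.Done()
				next := sems[(i+1)%ringSize]
				count := 0
				for count < rounds {
					next <- struct{}{} // unpark the next member (V)
					<-sems[i]          // park myself (P)
					count++
				}
				atomic.AddInt64(&total, int64(count))
			}(i)
		}
	}
	wg.Wait()
	fmt.Println("total operations:", total)
}
\end{cfa}
Because every member first unparks its successor and only then parks, each channel receives exactly as many tokens as it delivers and the rings drain cleanly once the iteration count is reached; one small deviation from true @unpark@ semantics is that a send blocks briefly if the successor has not yet consumed its previous wake-up.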
\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@this.next.wake()@
		@wait()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
%\end{figure}



%\begin{figure}
\subfloat[][Throughput, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.jax.ops.pstex_t}
}
\label{fig:cycle:jax:ops}
}
\subfloat[][Throughput, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.jax.ops.pstex_t}
}
\label{fig:cycle:jax:low:ops}
}

\subfloat[][Scalability, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.jax.ns.pstex_t}
}
\label{fig:cycle:jax:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.jax.ns.pstex_t}
}
\label{fig:cycle:jax:low:ns}
}
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:cycle:jax}
\end{figure}

\begin{figure}
\subfloat[][Throughput, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.nasus.ops.pstex_t}
}
\label{fig:cycle:nasus:ops}
}
\subfloat[][Throughput, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.nasus.ops.pstex_t}
}
\label{fig:cycle:nasus:low:ops}
}

\subfloat[][Scalability, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.nasus.ns.pstex_t}
}
\label{fig:cycle:nasus:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.nasus.ns.pstex_t}
}
\label{fig:cycle:nasus:low:ns}
}
\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:cycle:nasus}
\end{figure}


\subsection{Results}

Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
Looking first at the left column on Intel, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
\CFA, Go and Tokio all obtain effectively the same throughput performance.
Libfibre is slightly behind in this case but still scales decently.
As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement for all runtimes, which can be seen as a flattening of the line.
This effect even causes a decrease in throughput in libfibre's case.
As expected, this pattern repeats again between \proc counts 72 and 96.

Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
\CFA and Tokio obtain very similar results overall, but Tokio shows more variation in the results.
Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance than in the left column.
This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
This can either be the result of \procs actually running out of work, or simply additional overhead from tracking whether or not there is work available.
Indeed, unlike the left column, the ready queue is likely to be transiently empty, which triggers additional synchronization steps.
Interestingly, libfibre achieves better performance with 1 cycle.

Looking now at the results for the AMD architecture, Figure~\ref{fig:cycle:nasus}, the story is overall similar to the results on Intel, with close to double the performance overall but with slightly increased variation and some differences in the details.
Note that the maximum of the Y-axis on Intel and AMD differ significantly.
Looking at the left column first, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
However, as the number of \procs grows higher, the results on AMD show notably more variability than on Intel.
The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
Looking next at the right column, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
This differs from Intel, where Tokio behaved like \CFA rather than behaving like Go.
Again, the same performance increase for libfibre is visible when running fewer \ats.
Note, I did not investigate the libfibre performance boost for 1 cycle in this experiment.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
Clearly, the pathological case with 1 cycle per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers.

\section{Yield}

For completeness, the classic yield benchmark is included.
Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty, which again puts a strain on the idle-sleep handling.

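For comparison, the Go version of this loop is essentially a call to @runtime.Gosched()@; the following is a minimal, runnable sketch (illustrative only, with an arbitrary thread count and a fixed iteration count in place of @must_stop()@), not the benchmark code used in the experiments.
\begin{cfa}
package main

import ( "fmt"; "runtime"; "sync"; "sync/atomic" )

func main() {
	const nThreads, rounds = 8, 1000000 // illustrative sizes
	var total int64
	var wg sync.WaitGroup
	for i := 0; i < nThreads; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count := 0
			for count < rounds {
				runtime.Gosched() // cooperatively yield the underlying processor
				count++
			}
			atomic.AddInt64(&total, int64(count))
		}()
	}
	wg.Wait()
	fmt.Println("total yields:", total)
}
\end{cfa}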
\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
%\end{figure}
\bigskip
%\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.jax.ops.pstex_t}
}
\label{fig:yield:jax:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ops.pstex_t}
}
\label{fig:yield:jax:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.jax.ns.pstex_t}
}
\label{fig:yield:jax:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ns.pstex_t}
}
\label{fig:yield:jax:low:ns}
}
\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:yield:jax}
\end{figure}


\subsection{Results}

Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment.
Looking first at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for many \ats, in this case 100 \ats for each \proc.
Note, the Y-axis on this graph is twice as large as for the Intel cycle graph.
A visual glance between the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
Hence, the @yield@ throughput and scalability graphs have similar shapes to the corresponding @cycle@ graphs.
The only difference is slightly better performance for @yield@ because of less synchronization.
Libfibre has special handling for @yield@ using the fact that the number of ready fibres does not change, and therefore bypasses the idle-sleep mechanism entirely.
Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance on the left column.
Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@.
Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
Note that since there is no communication among \ats, locality problems are much less likely than for the cycle benchmark.
This lack of communication is probably why the plateaus due to topology are not present.

Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for few threads, in this case 1 \at for each \proc.
As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
This is to be expected since fewer \ats means \procs are more likely to run out of work.
On the other hand, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
This results in libfibre outperforming the other runtimes even more, achieving 8 times more throughput than for @cycle@.
Finally, the Go and Tokio performance collapse is still present with fewer \ats.
The only exception is Tokio running on 24 \procs, deepening the mystery of its yielding mechanism further.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.nasus.ops.pstex_t}
}
\label{fig:yield:nasus:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.nasus.ops.pstex_t}
}
\label{fig:yield:nasus:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.nasus.ns.pstex_t}
}
\label{fig:yield:nasus:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.nasus.ns.pstex_t}
}
\label{fig:yield:nasus:low:ns}
}
\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:yield:nasus}
\end{figure}

Looking now at the results for the AMD architecture, Figure~\ref{fig:yield:nasus}, the results again show a story that is overall similar to the results on Intel, with increased variation and some differences in the details.
Note that the maximum of the Y-axis on Intel and AMD differ less in @yield@ than @cycle@.
Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
This suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine is not, once hyperthreading is needed.
Go and Tokio still display the same performance collapse as on Intel.
Looking next at the right column, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtimes effectively behave the same as they did on the Intel machine.
At high \at counts, the only difference was libfibre's scaling, and this difference disappears in the right column.
This suggests that whatever communication bottleneck libfibre encountered on the left is completely circumvented on the right.

It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.


\section{Churn}

The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.

The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked and the \proc that subsequently unblocks it.
With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
Hence, this benchmark has performance dominated by the cache traffic as \procs are constantly accessing each other's data.
In either case, this benchmark aims to measure how well a scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s that same semaphore, possibly blocking itself.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
\eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and then there are no \ats left to unblock (V) them.
Note, the nature of these semaphores means the count can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@
		@spots[r].P()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
%\end{figure}
\bigskip
%\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.jax.ops.pstex_t}
}
\label{fig:churn:jax:ops}
}
\subfloat[][Throughput, 2 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.jax.ops.pstex_t}
}
\label{fig:churn:jax:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.jax.ns.pstex_t}
}
\label{fig:churn:jax:ns}
}
\subfloat[][Scalability, 2 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.jax.ns.pstex_t}
}
\label{fig:churn:jax:low:ns}
}
\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn benchmark on the Intel machine.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:churn:jax}
\end{figure}


\subsection{Results}

Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the results for the churn experiment.
Looking first at the left column on Intel, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show the results for many \ats, in this case 100 \ats for each \proc; all runtimes obtain fairly similar throughput for most \proc counts.
\CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
As expected it scales decently up to 48 \procs and then basically plateaus.
Tokio achieves very similar performance to \CFA until 48 \procs, after which it takes a significant hit but does keep scaling somewhat.
Libfibre obtains effectively the same results as Tokio with slightly less scaling, \ie the scaling curve is the same but with slightly higher values.
Finally Go gets the most peculiar results, scaling worse than the other runtimes until 48 \procs.
At 72 \procs, the results of the Go runtime vary significantly, sometimes scaling, sometimes plateauing.
However, beyond this point Go keeps this level of variation but does not scale in any of the runs.

Throughput and scalability are notably worse for all runtimes than in the previous benchmarks, since there is inherently more communication between processors.
Indeed, none of the runtimes reaches 40 million operations per second, while in the cycle benchmark all but libfibre reached 400 million operations per second.
Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.

An interesting aspect to note here is that the runtimes differ in how they handle this situation.
Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
\CFA, Tokio and Go all use the approach of unparking to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.

Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns} show the results for few threads, in this case 2 \ats for each \proc; many of the differences between the runtimes disappear.
\CFA outperforms the other runtimes by a minuscule margin.
Libfibre follows very closely behind with basically the same performance and scaling.
Tokio maintains effectively the same curve shapes as it did with many threads, but it incurs extra costs for all \proc counts.
As a result it is slightly outperformed by \CFA and libfibre.
While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts, inexplicably resulting in super-linear scaling for some runs, \ie the scalability curve displays a negative slope.

Interestingly, unlike the cycle benchmark, running with fewer \ats does not produce drastically different results.
In fact, the overall throughput stays almost exactly the same in the left and right columns.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.nasus.ops.pstex_t}
}
\label{fig:churn:nasus:ops}
}
\subfloat[][Throughput, 2 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.nasus.ops.pstex_t}
}
\label{fig:churn:nasus:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.nasus.ns.pstex_t}
}
\label{fig:churn:nasus:ns}
}
\subfloat[][Scalability, 2 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.nasus.ns.pstex_t}
}
\label{fig:churn:nasus:low:ns}
}
\caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn benchmark on the AMD machine.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:churn:nasus}
\end{figure}



Looking now at the results for the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, libfibre and Tokio all produce decent scalability.
\CFA suffers particularly from larger variation at higher \proc counts, but almost all runs still outperform the other runtimes.
Go still produces intriguing results in this case and, even more intriguingly, the results have fairly low variation.

One possible explanation for this difference is that since Go has very few available concurrency primitives, a channel was used instead of a semaphore.
On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed along, equivalent performance could be expected.
However, in practice there can be implementation differences between the two.
This is especially true if the semaphore count can get somewhat high.
Note that this replacement is also made in the cycle benchmark; however, in that context it did not seem to have a notable impact.
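The substitution itself is straightforward; a minimal sketch, with hypothetical names and not taken from the benchmark code, is shown below.
A buffered channel of zero-sized elements behaves like a counting semaphore whose count is bounded by the channel capacity; one visible difference is that @V@ blocks once the capacity is reached, whereas a true semaphore never blocks on @V@, which is one example of the implementation differences mentioned above.
\begin{cfa}
package main

import "fmt"

type semaphore chan struct{} // counting semaphore, count bounded by capacity

func newSemaphore(capacity int) semaphore { return make(semaphore, capacity) }
func (s semaphore) V() { s <- struct{}{} } // increment; blocks only at capacity
func (s semaphore) P() { <-s }             // decrement; blocks while count is 0

func main() {
	s := newSemaphore(16)
	s.V()
	s.P()
	fmt.Println("done")
}
\end{cfa}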

A second possible explanation is that Go may sometimes use the heap when allocating variables, based on the result of escape analysis of the code.
It is possible that variables that should be placed on the stack are placed on the heap.
This could cause extra pointer chasing in the benchmark, heightening locality effects.
Depending on how the heap is structured, this could also lead to false sharing.

I did not further investigate what causes these unusual results.

Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, like on Intel, all runtimes obtain overall similar throughput between the left and right columns.
\CFA, libfibre and Tokio all have very close results.
Go still suffers from poor scalability but is now unusual in a different way.
While it obtains effectively constant performance regardless of \proc count, this ``sequential'' performance is higher than the other runtimes for low \proc counts, up to 32 \procs, after which the other runtimes manage to outscale Go.

The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that most runtimes achieve some scaling between various \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance.

\section{Locality}

\begin{figure}
\newsavebox{\myboxA}
\newsavebox{\myboxB}

\begin{lrbox}{\myboxA}
\begin{cfa}[tabsize=3]
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@

		spots[r].V()
		spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[tabsize=3]
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		// go through the array
		@work( a )@
		// pass array to next thread
		spots[r].V( @a@ )
		@a = @spots[r].P()
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\end{lrbox}

\subfloat[noshare variation]{\label{f:CFibonacci}\usebox\myboxA}
\hspace{3pt}
\vrule
\hspace{3pt}
\subfloat[share variation]{\label{f:CFAFibonacciGen}\usebox\myboxB}

\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
\label{fig:locality:code}
\end{figure}

As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or the remote ready-queue.\footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
The locality experiment includes two variations of the churn benchmark, where an array of data is added.
In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
In the @noshare@ variation the array is not passed on and each thread continuously accesses its private array.

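To make the ownership transfer concrete, the following is a minimal Go sketch of the @share@ variation's hand-off (illustrative only, with hypothetical names; not the benchmark code): the semaphore's ``shadow queue'' is modelled as a buffered channel whose payload is the array itself, so the \at woken by @P@ adopts the array last written by the @V@ing \at.
\begin{cfa}
package main

import "fmt"

type spot chan []int64 // semaphore whose tokens carry the shared array

func (s spot) V(a []int64) { s <- a }     // unblock a waiter and pass the array along
func (s spot) P() []int64  { return <-s } // block until an array is handed over

func main() {
	s := make(spot, 1)
	a := make([]int64, 64)
	a[0]++     // work on the private array
	s.V(a)     // pass ownership with the wake-up
	a = s.P()  // adopt whichever array the V provided
	fmt.Println(a[0])
}
\end{cfa}
In the @noshare@ variation the channel would carry empty tokens instead and each \at would keep writing to its own array, which is exactly the distinction the two columns of the results isolate.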
The objective here is to highlight the different decisions made by the runtimes when unparking.
Since each \at @V@s a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
In the @share@ version, this means that unparking the \at on the local \proc is appropriate since the data was last modified on that \proc.
In the @noshare@ version, unparking the \at on the remote \proc is the appropriate approach.

The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation which matches their unparking policy.
This should lead to \CFA, Go and Tokio achieving better performance in @share@, while libfibre achieves better performance in @noshare@.
Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre has the default policy of unparking \ats wherever they last ran.

\subsection{Results}

\begin{figure}
\subfloat[][Throughput share]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.share.jax.ops.pstex_t}
}
\label{fig:locality:jax:share:ops}
}
\subfloat[][Throughput noshare]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.noshare.jax.ops.pstex_t}
}
\label{fig:locality:jax:noshare:ops}
}

\subfloat[][Scalability share]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.share.jax.ns.pstex_t}
}
\label{fig:locality:jax:share:ns}
}
\subfloat[][Scalability noshare]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.noshare.jax.ns.pstex_t}
}
\label{fig:locality:jax:noshare:ns}
}
\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:locality:jax}
\end{figure}
\begin{figure}
\subfloat[][Throughput share]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.share.nasus.ops.pstex_t}
}
\label{fig:locality:nasus:share:ops}
}
\subfloat[][Throughput noshare]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.noshare.nasus.ops.pstex_t}
}
\label{fig:locality:nasus:noshare:ops}
}

\subfloat[][Scalability share]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.share.nasus.ns.pstex_t}
}
\label{fig:locality:nasus:share:ns}
}
\subfloat[][Scalability noshare]{
\resizebox{0.5\linewidth}{!}{
\input{result.locality.noshare.nasus.ns.pstex_t}
}
\label{fig:locality:nasus:noshare:ns}
}
\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians and the dotted lines are the minimums.}
\label{fig:locality:nasus}
\end{figure}

Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment.
In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for @noshare@.
Looking first at the left column on Intel, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns} show the results for the @share@ variation.
\CFA and Tokio slightly outperform libfibre, as expected based on their \at placement approach.
\CFA and Tokio both unpark locally and do not suffer cache misses on the transferred array.
Libfibre on the other hand unparks remotely, and as such the unparked \at is likely to miss on the shared data.
Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary.

Looking next at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns} show the results for the @noshare@ variation.
The graphs show the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array.
This leaves the \at data structure and the remote queue as the only likely sources of cache misses.
Results show both are amortized fairly well in this case.
\CFA and Tokio both unpark locally and as a result suffer a marginal performance degradation from the cache miss on the array.

Looking now at the results for the AMD architecture, Figure~\ref{fig:locality:nasus}, the results show a story that is overall similar to the results on Intel.
Again overall performance is higher and slightly more variation is visible.
Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
This is expected from the AMD server, which has smaller and narrower caches that magnify the costs of processing the array.
Go still sees the same poor performance as on Intel.

Finally, looking at the right column, Figures~\ref{fig:locality:nasus:noshare:ops} and \ref{fig:locality:nasus:noshare:ns}, like on Intel, the same performance inversion is present between libfibre and \CFA/Tokio.
Go still sees the same poor performance.

Overall, this experiment mostly demonstrates the two options available when unparking a \at.
Depending on the workload, either of these options can be the appropriate one.
Since it is prohibitively difficult to detect which approach is appropriate, all runtimes must choose one of the two and live with the consequences.

Once again, this demonstrates that \CFA achieves performance equivalent to the other runtimes, in this case matching the faster Tokio rather than Go, which trails behind.

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \ats is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two flavours for the non-leader \ats:
once they acknowledge the leader, they either block on a semaphore or spin yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned on the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers since \procs eventually run out of work.
In the yielding flavour, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by
$\frac{CSL + SL}{NP - 1}$, where $CSL$ is the context-switch latency, $SL$ is the cost for enqueueing and dequeueing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay will increase.
The semaphore version is an approximation of the strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness will be measurable.

While this is a fairly artificial scenario, it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done := true
		return
	}
	// Wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}
	// pick next leader
	leader := threads[ prng() % len(threads) ]
	// wake everyone
	if ! exhaust {
		for t in threads {
			if t != me { t.wake() }
		}
	}
}
Thread.wait() {
	this.idx_seen := lead_idx
	if exhaust { wait() }
	else { yield() }
}
Thread.main() {
	while !done {
		if leader == me { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}


\subsection{Results}
\begin{figure}
\begin{centering}
\begin{tabular}{r | c c c c | c c c c }
Machine & \multicolumn{4}{c |}{Intel} & \multicolumn{4}{c}{AMD} \\
Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
\procs & 2 & 192 & 2 & 192 & 2 & 256 & 2 & 256 \\
\hline
\CFA & 106 $\mu$s & ~19.9 ms & 68.4 $\mu$s & ~1.2 ms & 174 $\mu$s & ~28.4 ms & 78.8~~$\mu$s& ~~1.21 ms \\
libfibre & 127 $\mu$s & ~33.5 ms & DNC & DNC & 156 $\mu$s & ~36.7 ms & DNC & DNC \\
Go & 106 $\mu$s & ~64.0 ms & 24.6 ms & 74.3 ms & 271 $\mu$s & 121.6 ms & ~~1.21~ms & 117.4 ms \\
Tokio & 289 $\mu$s & 180.6 ms & DNC & DNC & 157 $\mu$s & 111.0 ms & DNC & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{figure}


Figure~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrency primitives begin to matter.
As such, data points that are on the same order of magnitude as each other should basically be considered equal.
The takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment was only run for the extremes of the number of \procs since scaling is not the focus of this experiment.

The first two columns show the results for the semaphore variation on Intel.
While there are some differences in latency, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve results that are fairly close.
Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered close.

Looking at the next two columns, the results for the yield variation on Intel, the story is very different.
\CFA achieves better latencies, presumably due to the lack of synchronization on the semaphore.
Neither libfibre nor Tokio completes the experiment.
Both runtimes use classical work-stealing scheduling, and since none of the work queues are ever emptied, no load balancing occurs.
Go does complete the experiment, but with drastically higher latency:
latency at 2 \procs is $350\times$ higher than \CFA and $70\times$ higher at 192 \procs.
This is because Go also has a classic work-stealing scheduler, but it adds preemption, which interrupts the spinning leader after a period.

Looking now at the results for the AMD architecture, the results show effectively the same story.
The first two columns show all runtimes obtaining results well within $10\times$ of each other.
The next two columns again show \CFA producing low latencies, while libfibre and Tokio do not complete the experiment.
Go still has notably higher latency, but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.

This experiment clearly demonstrates that while the other runtimes achieve similar performance in the previous benchmarks, here \CFA achieves significantly better fairness.
The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues will never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment will simply never terminate.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly costly, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, that is likely because fairness is equivalent while the cost of the semaphores and idle-sleep is removed.
---|