- Timestamp:
- Aug 16, 2022, 4:04:47 PM
- Branches:
- ADT, ast-experimental, master, pthread-emulation
- Children:
- aec2c022
- Parents:
- 741e22c (diff), 17c6edeb (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- File:
- 1 edited
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
(r741e22c → r71cf630)

This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}\label{microenv}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
…
\centering
\input{cycle.pstex_t}
\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before parking itself.}
\label{fig:cycle}
\end{figure}
The most basic evaluation of any ready queue is the latency needed to push and pop one element from it.
Since these two operations also describe a @yield@ operation, many systems use yielding as the most basic benchmark.
However, yielding can be treated as a special case and some aspects of the scheduler can be optimized away, since the number of ready \ats does not change.
Not all systems perform this type of optimization, but those that do gain an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before parking itself.
Unparking the next \at pushes that \at onto the ready queue, while the ensuing park leads to a \at being popped from the ready queue.

Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
In fact, the total number of \ats waiting on the ready queue is expected to vary because of the delay between the next \at unparking and the current \at parking.
That is, the runtime cannot anticipate that the current task will immediately park.
As well, the size of the cycle is also decided based on this delay.
Note that an unpark is like a @V@ on a semaphore, so the subsequent park (@P@) may not block.
If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
Because of time-slicing, or because cycles can be spread over multiple \procs, a small cycle may see the chain of unparks go full circle before the first \at parks.
Every runtime system must handle this race, but cannot optimize away the ready-queue pushes and pops if the cycle is long enough.
To prevent any attempt at silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
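To make the park/unpark mechanics concrete, the following is a minimal sketch of one ring \at, written in plain Go rather than the pseudo code used in the figures; the @Thread@ type, the one-slot channel standing in for the park/unpark token, and the @stop@ channel are illustrative assumptions, not the actual \CFA or benchmark implementation.
\begin{cfa}
// Illustrative sketch only: a one-slot channel plays the role of the
// park/unpark token, so wake() acts like V and wait() like P.
type Thread struct {
    next *Thread
    tok  chan struct{} // buffered with capacity 1
}
func (t *Thread) wake() {
    select {
    case t.tok <- struct{}{}: // hand a token to the next thread
    default: // token already pending: its next park will not block
    }
}
func (t *Thread) wait() { <-t.tok } // park until unparked
func (t *Thread) run(stop <-chan struct{}) (count int) {
    for {
        t.next.wake() // unpark the next thread in the ring
        t.wait()      // park; may return immediately if already woken
        count++
        select {
        case <-stop:
            return count
        default:
        }
    }
}
\end{cfa}
The buffered token also shows why the ring must be large enough: if the wake runs before the matching wait, the park simply consumes the pending token without ever touching the ready queue, which is exactly the shortcut the benchmark is designed to prevent.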

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
…
    count := 0
    for {
        @this.next.wake()@
        @wait()@
        count ++
        if must_stop() { break }
…
    \label{fig:cycle:jax:low:ns}
}
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
\label{fig:cycle:jax}
\end{figure}
…
        \input{result.cycle.nasus.ns.pstex_t}
    }
    \label{fig:cycle:nasus:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
…
    \label{fig:cycle:nasus:low:ns}
}
\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
\label{fig:cycle:nasus}
\end{figure}
Figure~\ref{fig:cycle:jax} and Figure~\ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
Scalability uses the same data, but the Y axis is calculated as the number of \procs over the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
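Stated as a formula (this is only a restatement of the definition above, not an additional metric), the scalability value plotted for $P$ \procs is
\[
	\mathrm{scalability}(P) = \frac{P}{\mathrm{throughput}(P)} ,
\]
so if throughput grows linearly with $P$ the plotted value stays constant, while any upward drift indicates a growing per-\proc cost.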
The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
…
The distinction is meaningful because the idle-sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.

The experiment was run 15 times for each series and processor count, and the \emph{$\times$}s on the graph show all of the results obtained.
Each series also has a solid and two dashed lines highlighting the median, maximum and minimum results respectively.
This presentation offers an overview of the distribution of the results for each series.

The experimental setup uses taskset to limit the placement of \glspl{kthrd} by the operating system.
As mentioned in Section~\ref{microenv}, the experiment is set up to prioritize running on 2 \glspl{hthrd} per core before running on multiple sockets.
For the Intel machine, this means that from 1 to 24 \procs, one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs, still only one socket but \emph{with} hyperthreading.
This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{Hyperthreading normally refers specifically to the Intel technique; here it is used loosely to refer to AMD's equivalent feature.} is used when the \proc count reaches 65 and 193.

Figure~\ref{fig:cycle:jax:ops} and Figure~\ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
Libfibre is slightly behind in this case but still scales decently.
As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer smaller performance improvements for all runtimes.
As expected, this pattern repeats between \proc counts 73 and 96.
The performance goal of \CFA is to obtain performance equivalent to other, less fair schedulers, and that is what the results show.
Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.

When running only a single cycle, the story is slightly different.
\CFA and Tokio obtain very similar results overall, but Tokio shows notably more variation in the results.
While \CFA, Go and Tokio achieve equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
This difference in throughput and scalability is due to the idle-sleep mechanism.
With very few cycles, stealing or helping can cause a cascade of task migrations and trick \procs into very short idle sleeps.
Both effects negatively affect performance.

An interesting and unusual result is that libfibre achieves better performance with fewer cycles.
This suggests that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
However, I did not investigate this result any deeper.

Figure~\ref{fig:cycle:nasus} shows a similar story happening on AMD as on Intel.
The different performance improvements and plateaus due to cache topology appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc.

In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
However, unlike on Intel, Tokio achieves the same performance as Go rather than \CFA.
This leaves \CFA trailing behind in this particular case, but only at high core counts.
Presumably this is because, in this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
Since this effect is only problematic in cases with 1 \at per \proc, it is not very meaningful for the general performance.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance in this scenario, which demonstrates that \CFA achieves equivalent performance.

\section{Yield}
For completeness, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \ats that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.
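For illustration, a hedged Go equivalent of one yielder is sketched below; @runtime.Gosched()@ is Go's standard yield, and the @stop@ channel is an assumed stand-in for the benchmark's termination check, not the actual benchmark code.
\begin{cfa}
import "runtime"

// Illustrative sketch only: the wake/wait pair of the cycle benchmark
// collapses to a single scheduler yield.
func yielder(stop <-chan struct{}) (count int) {
    for {
        runtime.Gosched() // yield: reschedule without changing the ready count
        count++
        select {
        case <-stop:
            return count
        default:
        }
    }
}
\end{cfa}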
…
    \label{fig:yield:jax:low:ns}
}
\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, using 1 \at per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
\label{fig:yield:jax}
\end{figure}
…
        \input{result.yield.nasus.ns.pstex_t}
    }
    \label{fig:yield:nasus:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
…
    \label{fig:yield:nasus:low:ns}
}
\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, using 1 \at per \proc. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
\label{fig:yield:nasus}
\end{figure}
Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count on Intel.
It is fairly obvious why I claim this benchmark is more artificial.
The throughput is dominated by the mechanism used to handle the @yield@.
\CFA does not have special handling for @yield@, but the experiment requires less synchronization.
As a result, it achieves better performance than the cycle benchmark, but the results remain comparable.

When the number of \ats is reduced to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
If anything causes a \at migration, where two \ats end up on the same ready queue, work stealing will start occurring and could cause several \ats to shuffle around.
In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
However, since user-level threading with an equal number of \ats and \procs is a somewhat degenerate case, especially when context-switching very often, this result is not particularly meaningful and is only included for completeness.

Libfibre uses the fact that @yield@ does not change the number of ready fibres and bypasses the idle-sleep mechanism entirely, producing significantly better throughput.
Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
This results in incredible performance compared to the other runtimes.

In stark contrast with libfibre, Go puts yielding goroutines on a secondary global ready queue, giving them lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Based on the scalability, Tokio obtains similarly poor performance and therefore likely handles @yield@ in a similar fashion.
However, it must be doing something different, since it does scale at low \proc counts.

Again, Figure~\ref{fig:yield:nasus} shows effectively the same story happening on AMD as on Intel.
…

\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.

The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no apparent relation between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
With processor-specific ready queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or place it on a remote queue.
This enqueuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
In either case, this benchmark aims to measure how well each scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.
…

\subsection{Results}
\begin{figure}
    \subfloat[][Throughput, 100 \ats per \proc]{
…
        \label{fig:churn:jax:ops}
    }
    \subfloat[][Throughput, 2 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.low.jax.ops.pstex_t}
…
            \input{result.churn.jax.ns.pstex_t}
        }
        \label{fig:churn:jax:ns}
    }
    \subfloat[][Latency, 2 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.low.jax.ns.pstex_t}
…
        \label{fig:churn:jax:low:ns}
    }
    \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
    \label{fig:churn:jax}
\end{figure}

\begin{figure}
    \subfloat[][Throughput, 100 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.nasus.ops.pstex_t}
        }
        \label{fig:churn:nasus:ops}
    }
    \subfloat[][Throughput, 2 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.low.nasus.ops.pstex_t}
        }
        \label{fig:churn:nasus:low:ops}
    }

    \subfloat[][Latency, 100 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.nasus.ns.pstex_t}
        }
        \label{fig:churn:nasus:ns}
    }
    \subfloat[][Latency, 2 \ats per \proc]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.churn.low.nasus.ns.pstex_t}
        }
        \label{fig:churn:nasus:low:ns}
    }
    \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine. For throughput, higher is better; for latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
    \label{fig:churn:nasus}
\end{figure}
Figure~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
They use the same representation as the previous benchmarks: 15 runs, where the dashed lines show the extrema and the solid line the median.
The performance cost of crossing the cache boundaries is still visible at the same \proc counts.
However, this benchmark's performance is dominated by cache traffic, as \procs are constantly accessing each other's data.
Scalability is notably worse than in the previous benchmarks, since there is inherently more communication between processors.
Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
An interesting aspect to note here is that the runtimes differ in how they handle this situation.
Indeed, when a \proc unparks a \at that last ran on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc that previously ran the \at.
\CFA, Tokio and Go all use the approach of unparking to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.

Like for the cycle benchmark, all runtimes achieve fairly similar performance here.
Performance improves as long as all \procs fit on a single socket.
Beyond that, performance starts to suffer from increased caching costs.

Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with both 2 and 100 \ats per \proc, \CFA, libfibre, Go and Tokio achieve effectively equivalent performance for most \proc counts.

However, Figure~\ref{fig:churn:nasus} again shows a somewhat different story on AMD.
While \CFA, libfibre and Tokio achieve effectively equivalent performance for most \proc counts, Go starts with better scaling at very low \proc counts but then quickly plateaus, resulting in worse performance at higher \proc counts.
This performance difference is visible at both high and low \at counts.

One possible explanation for this difference is that, since Go has very few available concurrency primitives, a channel was used instead of a semaphore.
On paper a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
However, in practice there can be implementation differences between the two.
This is especially true if the semaphore count can get somewhat high.
Note that this replacement is also made in the cycle benchmark; however, in that context it did not seem to have a notable impact.
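To illustrate the substitution (a sketch of the common Go idiom, not the exact benchmark code), a counting semaphore can be approximated with a buffered channel of empty structs, at the cost of an upper bound on the count:
\begin{cfa}
// Illustrative sketch only: a buffered channel used as a counting semaphore.
// The buffer capacity bounds how high the count can grow; a V on a full
// buffer blocks, which a true semaphore would not do.
type semaphore chan struct{}

func newSemaphore(capacity int) semaphore { return make(semaphore, capacity) }

func (s semaphore) V() { s <- struct{}{} } // deposit a token, unblocking one P
func (s semaphore) P() { <-s }             // consume a token, blocking if none
\end{cfa}
Whether such a channel-based @V@/@P@ is as cheap as a dedicated semaphore depends entirely on the channel implementation, which is precisely the implementation difference hypothesized above.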

A second possible explanation is that Go may sometimes use the heap when allocating variables, based on the result of escape analysis of the code.
It is possible that variables that should be placed on the stack are instead placed on the heap.
This could cause extra pointer chasing in the benchmark, heightening locality effects.
Depending on how the heap is structured, this could also lead to false sharing.
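As a generic illustration of this effect (unrelated to the actual benchmark code), a Go local variable whose address outlives its function is moved to the heap; the compiler reports such decisions when invoked with @-gcflags=-m@:
\begin{cfa}
// Generic escape-analysis example, not benchmark code.
func onHeap() *int {
    c := 0    // would be stack-allocated if it did not escape
    return &c // the address escapes the frame, so c is moved to the heap
}
func onStack() int {
    x := 0 // never escapes: the compiler can keep x on the stack
    x++
    return x
}
\end{cfa}
If the benchmark's per-\at state ends up heap-allocated this way, every access adds an indirection into shared heap memory, matching the pointer-chasing and false-sharing concerns above.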

The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance.

\section{Locality}
\begin{figure}
\begin{cfa}
Thread.main() {
    count := 0
    for {
        r := random() % len(spots)
        // go through the array
        @work( a )@
        spots[r].V()
        spots[r].P()
        count ++
        if must_stop() { break }
    }
    global.count += count
}
\end{cfa}
\begin{cfa}
Thread.main() {
    count := 0
    for {
        r := random() % len(spots)
        // go through the array
        @work( a )@
        // pass array to next thread
        spots[r].V( @a@ )
        @a = @spots[r].P()
        count ++
        if must_stop() { break }
    }
    global.count += count
}
\end{cfa}
\caption[Locality Benchmark: Pseudo Code]{Locality Benchmark: Pseudo Code}
\label{fig:locality:code}
\end{figure}
As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or the remote ready queue.\footnote{It is also possible to unpark to a third, unrelated ready queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
The locality experiment includes two variations of the churn benchmark, where an array of data is added.
In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
The @share@ variation then passes the array through the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.

The objective here is to highlight the different decisions made by the runtime when unparking.
Since each thread unparks a random semaphore, it is unlikely that a \at will be unparked from the last \proc it ran on.
In the @share@ version, this means that unparking the \at on the local \proc is appropriate, since the data was last modified on that \proc.
In the @noshare@ version, unparking the \at on the remote \proc is the appropriate approach.

The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation that matches their unparking policy.
This should lead to \CFA, Go and Tokio achieving better performance in @share@, while libfibre achieves better performance in @noshare@.
Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre has the default policy of unparking \ats wherever they last ran.

\subsection{Results}
\begin{figure}
    \subfloat[][Throughput share]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.share.jax.ops.pstex_t}
        }
        \label{fig:locality:jax:share:ops}
    }
    \subfloat[][Throughput noshare]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.noshare.jax.ops.pstex_t}
        }
        \label{fig:locality:jax:noshare:ops}
    }

    \subfloat[][Scalability share]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.share.jax.ns.pstex_t}
        }
        \label{fig:locality:jax:share:ns}
    }
    \subfloat[][Scalability noshare]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.noshare.jax.ns.pstex_t}
        }
        \label{fig:locality:jax:noshare:ns}
    }
    \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
    \label{fig:locality:jax}
\end{figure}
\begin{figure}
    \subfloat[][Throughput share]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.share.nasus.ops.pstex_t}
        }
        \label{fig:locality:nasus:share:ops}
    }
    \subfloat[][Throughput noshare]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.noshare.nasus.ops.pstex_t}
        }
        \label{fig:locality:nasus:noshare:ops}
    }

    \subfloat[][Scalability share]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.share.nasus.ns.pstex_t}
        }
        \label{fig:locality:nasus:share:ns}
    }
    \subfloat[][Scalability noshare]{
        \resizebox{0.5\linewidth}{!}{
            \input{result.locality.noshare.nasus.ns.pstex_t}
        }
        \label{fig:locality:nasus:noshare:ns}
    }
    \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extrema and the solid line is the median.}
    \label{fig:locality:nasus}
\end{figure}

Figure~\ref{fig:locality:jax} and Figure~\ref{fig:locality:nasus} show the results on Intel and AMD respectively.
In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for @noshare@.

On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
The left side of the figure shows the results for the @share@ variation, where \CFA and Tokio slightly outperform libfibre, as expected.
Correspondingly on the right, we see the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
Presumably the reasons why Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.

Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
\todo{why is cfa slower?}
Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \ats is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two flavours for the non-leader \ats:
once they have acknowledged the leader, they either block on a semaphore or spin yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned over the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers, since \procs eventually run out of work.
In the yielding flavour, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by
$\frac{CSL + SL}{NP - 1}$, where $CSL$ is the context-switch latency, $SL$ is the cost of enqueuing and dequeuing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.

While this is a fairly artificial scenario, it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.

\begin{figure}
…
        return
    }
    // Wait for everyone to acknowledge my leadership
    start := timeNow()
…
        }
    }
    // pick next leader
    leader := threads[ prng() % len(threads) ]
    // wake every one
    if !exhaust {
…
    }
}
Thread.wait() {
    this.idx_seen := lead_idx
    if exhaust { wait() }
    else { yield() }
}
Thread.main() {
    while !done {
…

\subsection{Results}
\begin{figure}
\begin{centering}
\begin{tabular}{r | c c c c | c c c c }
Machine   & \multicolumn{4}{c |}{Intel} & \multicolumn{4}{c}{AMD} \\
Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
\procs    & 2 & 192 & 2 & 192 & 2 & 256 & 2 & 256 \\
\hline
\CFA      & 106 $\mu$s & ~19.9 ms & 68.4 $\mu$s & ~1.2 ms & 174 $\mu$s & ~28.4 ms & 78.8~~$\mu$s & ~~1.21 ms \\
libfibre  & 127 $\mu$s & ~33.5 ms & DNC & DNC & 156 $\mu$s & ~36.7 ms & DNC & DNC \\
Go        & 106 $\mu$s & ~64.0 ms & 24.6 ms & 74.3 ms & 271 $\mu$s & 121.6 ms & ~~1.21~ms & 117.4 ms \\
tokio     & 289 $\mu$s & 180.6 ms & DNC & DNC & 157 $\mu$s & 111.0 ms & DNC & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that 5 seconds after a new leader was decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{figure}
Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and with all \procs, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrency primitives begin to matter.
As such, data points that are on the same order of magnitude as each other should be considered basically equal.
The takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment was only run for the extremes of the number of cores, since the scaling per core behaves like the previous experiments.
This experiment clearly demonstrates that while the other runtimes achieve similar performance in the previous benchmarks, here \CFA achieves significantly better fairness.
The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment never terminates.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly costly, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, that is likely because fairness is equivalent while the cost of the semaphores and idle-sleep is removed.