Changeset e76fa30 for doc/theses/thierry_delisle_PhD
- Timestamp: Aug 4, 2022, 8:47:12 PM
- Branches: ADT, ast-experimental, master, pthread-emulation
- Children: 0c11d3c
- Parents: e5e2334
- File: 1 edited
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
re5e2334 re76fa30 32 32 \centering 33 33 \input{cycle.pstex_t} 34 \caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \ gls{at} unparks the next \gls{at}in the cycle before parking itself.}34 \caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \at unparks the next \at in the cycle before parking itself.} 35 35 \label{fig:cycle} 36 36 \end{figure} 37 37 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready queue. 38 38 Since these two operation also describe a @yield@ operation, many systems use this operation as the most basic benchmark. 39 However, yielding can be treated as a special case by optimizing it away since the number of ready \ glspl{at}does not change.39 However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change. 40 40 Not all systems perform this optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}. 41 41 For this reason, I chose a different first benchmark, called \newterm{Cycle Benchmark}. 42 This benchmark arranges a number of \ glspl{at}into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.43 At runtime, each \ gls{at} unparks the next \gls{at}before parking itself.44 Unparking the next \ gls{at} pushes that \gls{at} onto the ready queue as does the ensuing park.45 46 Hence, the underlying runtime cannot rely on the number of ready \ glspl{at}staying constant over the duration of the experiment.47 In fact, the total number of \ glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at}parking.42 This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list. 43 At runtime, each \at unparks the next \at before parking itself. 44 Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue. 45 46 Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment. 47 In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking. 48 48 That is, the runtime cannot anticipate that the current task will immediately park. 49 As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \ gls{at}parks because of time-slicing or multiple \procs.49 As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs. 50 50 Every runtime system must handle this race and cannot optimized away the ready-queue pushes and pops. 51 To prevent any attempt of silently omitting ready-queue operations, the ring of \ glspl{at} is made big enough so the \glspl{at}have time to fully park before being unparked again.51 To prevent any attempt of silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again. 52 52 (Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.) 
…
    \label{fig:cycle:jax:low:ns}
}
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:cycle:jax}
\end{figure}
…
        \input{result.cycle.nasus.ns.pstex_t}
    }
    \label{fig:cycle:nasus:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
    …
    \label{fig:cycle:nasus:low:ns}
}
\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:cycle:nasus}
\end{figure}
Figures~\ref{fig:cycle:jax} and~\ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
Scalability uses the same data, but the Y axis is calculated as the number of \procs over the throughput.
In this representation, perfect scalability should appear as a horizontal line, \eg if doubling the number of \procs doubles the throughput, then the relation stays the same.
The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
…
This effect is again repeated between 73 and 96 \procs, where it happens on the second CPU.
When running only a single cycle, most runtimes achieve lower throughput because of the idle-sleep mechanism.

Figure~\ref{fig:cycle:nasus} shows effectively the same story happening on AMD as it does on Intel.
…

\section{Yield}
For completeness, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \ats that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.
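For comparison, the same loop in plain Go is just a spin over the runtime's yield primitive; the sketch below uses @runtime.Gosched@ as that primitive, and the thread and iteration counts are illustrative only.
\begin{cfa}
package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
)

func main() {
    const nthreads, rounds = 64, 100000 // illustrative values
    var total int64
    var wg sync.WaitGroup
    for i := 0; i < nthreads; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            count := int64(0)
            for r := 0; r < rounds; r++ {
                runtime.Gosched() // yield: reschedule without blocking
                count++
            }
            atomic.AddInt64(&total, count) // global.count += count
        }()
    }
    wg.Wait()
    fmt.Println("total yields:", total)
}
\end{cfa}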
…
    \label{fig:yield:jax:low:ns}
}
\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \at per \proc. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:yield:jax}
\end{figure}
…
        \input{result.yield.nasus.ns.pstex_t}
    }
    \label{fig:yield:nasus:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
    …
    \label{fig:yield:nasus:low:ns}
}
\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \at per \proc. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:yield:nasus}
\end{figure}
Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count on Intel.
It is fairly obvious why I claim this benchmark is more artificial.
The throughput is dominated by the mechanism used to handle the @yield@.
…

\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront, and none of the \ats communicate with each other.

The Churn benchmark represents a more chaotic execution, where there is more communication among \ats but no relation between the last \proc on which a \at ran and blocked and the \proc that subsequently unblocks it.
With processor-specific ready-queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or find it on a global queue.
This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
In either case, this benchmark aims to measure how well each scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.
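As a concrete rendering of this loop, the Go sketch below models each counting semaphore with a buffered channel (send = @V@, receive = @P@) whose capacity is large enough that a @V@ never blocks.
It is only an approximation: the semaphore, \proc, and iteration counts are illustrative, and the final flush is a shutdown detail the pseudo code leaves implicit.
\begin{cfa}
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "sync/atomic"
)

func main() {
    const nprocs = 8                // illustrative
    const nsems = 16                // fixed-size array of counting semaphores
    const nthreads = nsems + nprocs // #threads >= #semaphores + #procs
    const rounds = 100000

    // A buffered channel models a counting semaphore: send = V, receive = P.
    sems := make([]chan struct{}, nsems)
    for i := range sems {
        sems[i] = make(chan struct{}, 2*nthreads) // large enough that V never blocks
    }

    var total int64
    var wg sync.WaitGroup
    for t := 0; t < nthreads; t++ {
        wg.Add(1)
        go func(seed int64) {
            defer wg.Done()
            rng := rand.New(rand.NewSource(seed))
            count := int64(0)
            for r := 0; r < rounds; r++ {
                s := sems[rng.Intn(nsems)] // pick a random semaphore
                s <- struct{}{}            // V: unblock any waiting thread
                <-s                        // P: block until a count is available
                count++
            }
            // shutdown: V every semaphore once so no thread is left blocked
            // after the others finish
            for i := range sems {
                sems[i] <- struct{}{}
            }
            atomic.AddInt64(&total, count)
        }(int64(t))
    }
    wg.Wait()
    fmt.Println("total operations:", total)
}
\end{cfa}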
…
\subsection{Results}
\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
    …
        \input{result.churn.jax.ns.pstex_t}
    }
    \label{fig:churn:jax:ns}
}
\subfloat[][Latency, 1 \at per \proc]{
    …
    \label{fig:churn:jax:low:ns}
}
\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For Throughput, higher is better; for Latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:churn:jax}
\end{figure}

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.churn.nasus.ops.pstex_t}
    }
    \label{fig:churn:nasus:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.churn.low.nasus.ops.pstex_t}
    }
    \label{fig:churn:nasus:low:ops}
}

\subfloat[][Latency, 100 \ats per \proc]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.churn.nasus.ns.pstex_t}
    }
    \label{fig:churn:nasus:ns}
}
\subfloat[][Latency, 1 \at per \proc]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.churn.low.nasus.ns.pstex_t}
    }
    \label{fig:churn:nasus:low:ns}
}
\caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine. For Throughput, higher is better; for Latency, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:churn:nasus}
\end{figure}
Figure~\ref{fig:churn:jax} shows the throughput as a function of \proc count on Intel.
As for the cycle benchmark, all runtimes achieve fairly similar performance here.
Scalability is notably worse than in the previous benchmarks, since there is inherently more communication between processors.
Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
An interesting aspect to note here is that the runtimes differ in how they handle this situation.
When a \proc unparks a \at that last ran on a different \proc, the \at can be appended either to the ready queue of the local \proc or to the ready queue of the remote \proc that previously ran it.
\CFA, Tokio and Go all unpark to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.

Figure~\ref{fig:churn:nasus} shows effectively the same picture.
Performance improves as long as all \procs fit on a single socket.
Beyond that, performance plateaus.

Again, these results demonstrate that \CFA achieves satisfactory performance.
\section{Locality}
\begin{figure}
\begin{cfa}
Thread.main() {
    count := 0
    for {
        r := random() % len(spots)
        // go through the array
        @work( a )@
        spots[r].V()
        spots[r].P()
        count ++
        if must_stop() { break }
    }
    global.count += count
}
\end{cfa}
\begin{cfa}
Thread.main() {
    count := 0
    for {
        r := random() % len(spots)
        // go through the array
        @work( a )@
        // pass array to next thread
        spots[r].V( @a@ )
        @a = @spots[r].P()
        count ++
        if must_stop() { break }
    }
    global.count += count
}
\end{cfa}
\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
\label{fig:locality:code}
\end{figure}
As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or the remote ready-queue.%
\footnote{It is also possible to unpark to a third, unrelated ready-queue, but unless the scheduler has additional knowledge about the situation, it is unlikely to result in good cache locality.}
The locality experiment includes two variations of the churn benchmark, where an array of data is added.
In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
The @share@ variation then passes the array to the shadow queue of the semaphore, effectively transferring ownership of the array to the woken thread.
In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.

The objective here is to highlight the different decisions made by the runtime when unparking.
Since each thread unparks a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
In the @share@ version, this means that unparking the \at on the local \proc is appropriate, since the data was last modified on that \proc.
In the @noshare@ version, the reverse is true.

The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation that matches their unparking policy.
This should lead to \CFA, Go and Tokio achieving better performance in @share@, while libfibre achieves better performance in @noshare@.
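The essential difference between the two variations is whether the array travels with the wakeup.
In the Go sketch below (an approximation only: the semaphore's shadow queue is modelled by making the channel itself carry the array, @work@ is a stand-in that touches random cells, and all counts are illustrative), the @share@ variation adopts whatever array arrives with the unpark; the @noshare@ variation would instead use a token-only channel, as in the churn sketch, and keep @a@ private.
\begin{cfa}
package main

import (
    "fmt"
    "math/rand"
    "sync"
)

const cells = 4096 // array size per thread (illustrative)

// work touches a few random cells of the array, standing in for the work( a ) call.
func work(a []int, rng *rand.Rand) {
    for i := 0; i < 128; i++ {
        a[rng.Intn(len(a))]++
    }
}

func main() {
    const nspots, nthreads, rounds = 16, 24, 10000
    // "share" variation: the channel carries the array, so the woken thread
    // adopts the data its waker just modified.
    spots := make([]chan []int, nspots)
    for i := range spots {
        spots[i] = make(chan []int, 2*nthreads) // large enough that V never blocks
    }
    var wg sync.WaitGroup
    for t := 0; t < nthreads; t++ {
        wg.Add(1)
        go func(seed int64) {
            defer wg.Done()
            rng := rand.New(rand.NewSource(seed))
            a := make([]int, cells)
            for r := 0; r < rounds; r++ {
                s := spots[rng.Intn(nspots)]
                work(a, rng)
                s <- a  // V: pass the array along with the wakeup
                a = <-s // P: adopt whatever array comes with the unpark
            }
            // shutdown: leave a spare array on every spot so no thread stays blocked
            for i := range spots {
                spots[i] <- make([]int, cells)
            }
        }(int64(t))
    }
    wg.Wait()
    fmt.Println("done")
}
\end{cfa}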
\subsection{Results}
\begin{figure}
\subfloat[][Throughput share]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.share.jax.ops.pstex_t}
    }
    \label{fig:locality:jax:share:ops}
}
\subfloat[][Throughput noshare]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.noshare.jax.ops.pstex_t}
    }
    \label{fig:locality:jax:noshare:ops}
}

\subfloat[][Scalability share]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.share.jax.ns.pstex_t}
    }
    \label{fig:locality:jax:share:ns}
}
\subfloat[][Scalability noshare]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.noshare.jax.ns.pstex_t}
    }
    \label{fig:locality:jax:noshare:ns}
}
\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:locality:jax}
\end{figure}
\begin{figure}
\subfloat[][Throughput share]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.share.nasus.ops.pstex_t}
    }
    \label{fig:locality:nasus:share:ops}
}
\subfloat[][Throughput noshare]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.noshare.nasus.ops.pstex_t}
    }
    \label{fig:locality:nasus:noshare:ops}
}

\subfloat[][Scalability share]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.share.nasus.ns.pstex_t}
    }
    \label{fig:locality:nasus:share:ns}
}
\subfloat[][Scalability noshare]{
    \resizebox{0.5\linewidth}{!}{
        \input{result.locality.noshare.nasus.ns.pstex_t}
    }
    \label{fig:locality:nasus:noshare:ns}
}
\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput, higher is better; for Scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes and the solid line is the median.}
\label{fig:locality:nasus}
\end{figure}

Figure~\ref{fig:locality:jax} shows that the results somewhat follow the expectation.
The left side of the figure shows the results for the @share@ variation, where \CFA and Tokio outperform libfibre as expected.
Correspondingly, the right side shows the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
It is unclear why Go's performance is notably worse than that of the other runtimes.

Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
\todo{why is cfa slower?}
Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.
\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \ats is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
The leader \at then picks a new \at to be the next leader and the cycle repeats.
The benchmark comes in two flavours for the non-leader \ats:
once they have acknowledged the leader, they either block on a semaphore or spin yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned over the \procs may need to rebalance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers, since \procs eventually run out of work.
In the yielding flavour, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization, and therefore the delay between leaders would be given by $\frac{CSL + SL}{NP - 1}$, where $CSL$ is the context-switch latency, $SL$ is the cost of enqueuing and dequeuing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.

While this is a fairly artificial scenario, it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.
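For intuition about the scale of the FIFO bound above, plugging in purely hypothetical costs (these are not measured values from this work) gives, for example:
\[
\frac{CSL + SL}{NP - 1} = \frac{1\,\mu s + 0.2\,\mu s}{16 - 1} = 80\,ns
\qquad (CSL = 1\,\mu s,\; SL = 0.2\,\mu s,\; NP = 16).
\]
Under such costs, a strictly FIFO scheduler would keep the leader-to-leader delay in the nanosecond-to-microsecond range, so delays in the millisecond range indicate \ats running many times before others run once.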
\begin{figure}
…
        return
    }
    // Wait for everyone to acknowledge my leadership
    start := timeNow()
…
        }
    }
    // pick next leader
    leader := threads[ prng() % len(threads) ]
    // wake every one
    if ! exhaust {
…
    }
}
Thread.wait() {
    this.idx_seen := lead_idx
…
    else { yield() }
}
Thread.main() {
    while !done {
…
\end{figure}
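For a self-contained rendering of the \emph{yielding} flavour, the Go sketch below rotates leadership deterministically rather than randomly and uses illustrative counts; it is not the harness used for the measurements, but it shows the acknowledgement pattern being timed.
\begin{cfa}
package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
    "time"
)

func main() {
    const nthreads = 16    // illustrative
    const rotations = 1000 // how many times leadership is transferred

    var leadIdx int64               // generation counter, bumped for every new leader
    var leader int64                // index of the current leader
    seen := make([]int64, nthreads) // seen[i] = last generation thread i acknowledged
    for i := range seen {
        seen[i] = -1
    }
    var total time.Duration
    var wg sync.WaitGroup

    for i := 0; i < nthreads; i++ {
        wg.Add(1)
        go func(id int64) {
            defer wg.Done()
            for {
                gen := atomic.LoadInt64(&leadIdx)
                if gen >= rotations {
                    return
                }
                if atomic.LoadInt64(&leader) == id {
                    // Leader: wait for everyone to acknowledge my leadership.
                    atomic.StoreInt64(&seen[id], gen) // acknowledge myself
                    start := time.Now()
                    for {
                        ok := true
                        for t := 0; t < nthreads; t++ {
                            if atomic.LoadInt64(&seen[t]) < gen {
                                ok = false
                                break
                            }
                        }
                        if ok {
                            break
                        }
                        runtime.Gosched()
                    }
                    // only the current leader touches total; successive leaders are
                    // ordered by the atomic handoff below, so this does not race
                    total += time.Since(start)
                    atomic.StoreInt64(&leader, (id+1)%nthreads) // pick the next leader
                    atomic.AddInt64(&leadIdx, 1)                // publish the new generation
                } else {
                    atomic.StoreInt64(&seen[id], gen) // acknowledge the current leader
                    runtime.Gosched()                 // yielding flavour: spin-yield instead of parking
                }
            }
        }(int64(i))
    }
    wg.Wait()
    fmt.Println("average acknowledgement delay:", total/rotations)
}
\end{cfa}
The park flavour would replace the @runtime.Gosched@ in the acknowledgement branch with a blocking wait that the leader releases when it picks the next leader.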
\subsection{Results}
\begin{figure}
\begin{centering}
\begin{tabular}{r | c c c c | c c c c }
Machine   & \multicolumn{4}{c |}{Intel} & \multicolumn{4}{c}{AMD} \\
Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
\procs    & 2 & 192 & 2 & 192 & 2 & 256 & 2 & 256 \\
\hline
\CFA      & 106 $\mu$s & j200 & 68.4 $\mu$s & ~1.2 ms & 174 $\mu$s & ~28.4 ms & 78.8~~$\mu$s & ~~1.21 ms \\
libfibre  & 127 $\mu$s &      & DNC         & DNC     & 156 $\mu$s & ~36.7 ms & DNC          & DNC \\
Go        & 106 $\mu$s & j200 & 24.6 ms     & 74.3 ms & 271 $\mu$s & 121.6 ms & ~~1.21~ms    & 117.4 ms \\
tokio     & 289 $\mu$s &      & DNC         & DNC     & 157 $\mu$s & 111.0 ms & DNC          & DNC
\end{tabular}
\end{centering}
\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that 5 seconds after a new leader was decided, some \ats still had not acknowledged the new leader.}
\label{fig:transfer:res}
\end{figure}
Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and with all \procs, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrency primitives begin to matter.
As such, data points that are on the same order of magnitude as each other should be considered basically equal.
The takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment was only run for the extremes of the \proc counts, since the scaling per core behaves like the previous experiments.
This experiment clearly demonstrates that while the other runtimes achieve similar performance, \CFA achieves significantly better fairness.
The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
However, the yielding variation shows an entirely different picture.
Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
The result is that the experiment simply does not complete for these runtimes.
Without \procs stealing from the \proc running the leader, the experiment simply never terminates.
Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
However, since preemption is fairly costly, it achieves significantly worse performance.
In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, that is likely because fairness is equivalent while the cost of the semaphores is removed.