Changeset 5548175 for doc/theses
- Timestamp: Sep 12, 2022, 2:58:27 PM
- Branches: ADT, ast-experimental, master, pthread-emulation
- Children: 4ab54c9
- Parents: 3e8dacc
- File: 1 edited
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
This chapter presents five different experimental setups for evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
All of these systems have a \gls{uthrding} model.
The goal of this chapter is to show that the \CFA scheduler obtains equivalent performance to other, less fair, schedulers through the different experiments.
Note that only the code of the \CFA tests is shown;
all tests in the other systems are functionally identical and available online~\cite{GITHUB:SchedulingBenchmarks}.
…
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for a total of 128 \glspl{hthrd} per socket and 256 \glspl{hthrd} across the 2 sockets.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
…
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
…
On AMD, the same algorithm is used, but the machine only has 2 sockets.
So hyperthreading\footnote{
Hyperthreading normally refers specifically to the technique used by Intel; however, it is often used generically to refer to any equivalent feature.}
is used when the \proc count reaches 65 and 193.

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures L2 cache misses served by an L3 cache on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.
…
Each experiment is run 15 times, varying the number of processors depending on the two different computers.
All experiments gather throughput data and secondary data for scalability or latency.
The data is graphed using a solid, a dashed, and a dotted line, representing the median, maximum and minimum results respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
An alternative display is to use error bars with min/max as the bottom/top of the bar.
However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
…
Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I designed a different push/pop benchmark, called \newterm{Cycle Benchmark}.
This benchmark arranges several \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \at unparks the next \at before \glslink{atblock}{parking} itself.
Unparking the next \at pushes that \at onto the ready queue while the ensuing \park leads to a \at being popped from the ready queue.
…
If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
(Note, an \unpark is like a V on a semaphore, so the subsequent \park (P) may not block.)
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt at silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully \park before being unparked again.
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
…
}
\end{cfa}
\caption[Cycle Benchmark: Pseudo Code]{Cycle Benchmark: Pseudo Code}
\label{fig:cycle:code}
\bigskip
…
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:cycle:jax}
\end{figure}
…
\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:cycle:nasus}
\end{figure}
…
As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement for all runtimes, which can be seen as a flattening of the line.
This effect even causes a decrease in throughput in libfibre's case.
As expected, this pattern repeats between \proc counts 72 and 96.

Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for 1 cycle of 5 \ats for each \proc.
…
Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
This can either be the result of \procs actually running out of work or simply the additional overhead of tracking whether or not there is work available.
Indeed, unlike the left column, it is likely that the ready-queue is transiently empty, which likely triggers additional synchronization steps.
Interestingly, libfibre achieves better performance with 1 cycle.
…
Note, I did not investigate the libfibre performance boost for 1 cycle in this experiment.

The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
Clearly, the pathological case with 1 cycle per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
In this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other, less fair, schedulers.
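To make the ring structure concrete, the following is a rough Go analogue of a single cycle; it is illustrative only, not the published benchmark code, and the ring size, round count, and identifiers are invented for the sketch.
A buffered channel per \at stands in for the park/unpark pair: sending a token unparks the next \at and receiving on one's own channel parks the current \at.
\begin{verbatim}
package main

import "sync"

// Illustrative sketch (not the thesis code): one ring of 5 threads where each
// thread "unparks" the next before "parking" itself.
func main() {
        const ringSize = 5
        const rounds = 100000

        ring := make([]chan struct{}, ringSize)
        for i := range ring {
                ring[i] = make(chan struct{}, 1)
        }

        var wg sync.WaitGroup
        wg.Add(ringSize)
        for i := 0; i < ringSize; i++ {
                own, next := ring[i], ring[(i+1)%ringSize]
                go func() {
                        defer wg.Done()
                        for r := 0; r < rounds; r++ {
                                next <- struct{}{} // unpark the next thread (ready-queue push)
                                <-own              // park until unparked (ready-queue pop)
                        }
                }()
        }
        ring[0] <- struct{}{} // seed one token so the ring starts turning
        wg.Wait()
}
\end{verbatim}
As described above, the real experiment runs many such rings at once and sizes them so every \at has time to fully park before being unparked again.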
…
}
\end{cfa}
\caption[Yield Benchmark: Pseudo Code]{Yield Benchmark: Pseudo Code}
\label{fig:yield:code}
%\end{figure}
…
\label{fig:yield:jax:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ops.pstex_t}
…
\label{fig:yield:jax:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ns.pstex_t}
…
\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:yield:jax}
\end{figure}
…
Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment on Intel and AMD, respectively.
Looking at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for 100 \ats for each \proc.
Note that the Y-axis on this graph is twice as large as on the Intel cycle graph.
A visual glance between the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
Hence, the @yield@ throughput and scalability graphs have similar shapes to the corresponding @cycle@ graphs.
The only difference is slightly better performance for @yield@ because of the reduced synchronization.
Libfibre has special handling for @yield@, using the fact that the number of ready fibres does not change, and therefore bypassing the idle-sleep mechanism entirely.
Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance on the left column.
Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it a lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@.
…
Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for 1 \at for each \proc.
As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
This result is to be expected since fewer \ats mean \procs are more likely to run out of work.
On the other hand, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
This results in libfibre outperforming the other runtimes even more, achieving 8 times more throughput than for @cycle@.
Finally, Go and Tokio's performance collapse is still the same with fewer \ats.
The only exception is Tokio running on 24 \procs, deepening the mystery of its yielding mechanism further.

\begin{figure}
…
\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:yield:nasus}
\end{figure}
…
Note that the maximums of the Y-axes on Intel and AMD differ less in @yield@ than in @cycle@.
Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
This anomaly suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine is not, once hyperthreading is needed.
Go and Tokio still display the same performance collapse as on Intel.
Looking next at the right column on AMD, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime systems effectively behave the same as they did on the Intel machine.
At the high \ats count, the only difference is libfibre's scaling, and this difference disappears on the right column.
This behaviour suggests whatever communication issue it encountered on the left is completely circumvented on the right.

It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.
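Because each runtime treats the yield operation itself so differently, it is worth emphasizing how little else the benchmark does; in Go terms, the body of such a yield loop is essentially a bare @runtime.Gosched()@ call.
The sketch below is illustrative only and is not the published benchmark code; the goroutine and iteration counts are invented.
\begin{verbatim}
package main

import (
        "runtime"
        "sync"
)

// Illustrative Go analogue of a yield loop (not the thesis code).
func main() {
        const nthreads, rounds = 100, 100000
        var wg sync.WaitGroup
        wg.Add(nthreads)
        for i := 0; i < nthreads; i++ {
                go func() {
                        defer wg.Done()
                        for r := 0; r < rounds; r++ {
                                runtime.Gosched() // yield: reschedule without blocking
                        }
                }()
        }
        wg.Wait()
}
\end{verbatim}
As noted above, Go moves a yielding goroutine to a lower-priority global queue, so this innocuous-looking loop is exactly where the contention and collapse originate.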
…
\section{Churn}

The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
…
This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
Hence, this benchmark has performance dominated by the cache traffic as \procs are constantly accessing each other's data.
In either case, this benchmark aims to measure how well a scheduler handles these situations, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s (maybe blocks) the \at on the semaphore.
This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
\eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and there are then no \ats left to unblock (V) them.
Note that the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows the pseudo-code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

…
}
\end{cfa}
\caption[Churn Benchmark: Pseudo Code]{Churn Benchmark: Pseudo Code}
\label{fig:churn:code}
%\end{figure}
…
\label{fig:churn:jax:low:ns}
}
\caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:churn:jax}
\end{figure}
…
However, beyond this point Go keeps this level of variation but does not scale further in any of the runs.

Throughput and scalability are notably worse for all runtimes than in the previous benchmarks since there is inherently more communication between processors.
Indeed, none of the runtimes reach 40 million operations per second, while in the cycle benchmark all but libfibre reached 400 million operations per second.
Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.
…
Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
\CFA, Tokio and Go all use the approach of \glslink{atsched}{unparking} to the local \proc, while libfibre unparks to the remote \proc.
In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.

Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns} show the results for 1 \at for each \proc, and many of the differences between the runtimes disappear.
…
Tokio maintains effectively the same curve shapes as \CFA and libfibre, but it incurs extra costs for all \proc counts.
While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts, inexplicably resulting in super-linear scaling for some runs, \ie the scalability curves display a negative slope.

Interestingly, unlike the cycle benchmark, running with fewer \ats does not produce drastically different results.
In fact, the overall throughput stays almost exactly the same on the left and right columns.
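To make the @V@-then-@P@ flow concrete, the following is a rough Go sketch of the churn loop; it is illustrative only, not the published benchmark code, and the semaphore, thread, and iteration counts are invented.
As discussed below for Go, a buffered channel stands in for each counting semaphore: a send is a @V@ and a receive is a @P@.
\begin{verbatim}
package main

import (
        "math/rand"
        "sync"
)

// Illustrative Go sketch of the churn loop (not the thesis code). Each
// buffered channel acts as a counting semaphore: a send is a V (it never
// blocks here because the capacity is at least the number of goroutines)
// and a receive is a P (it blocks when the count is zero).
func main() {
        const nsems, nthreads, rounds = 10, 64, 100000

        sems := make([]chan struct{}, nsems)
        for i := range sems {
                sems[i] = make(chan struct{}, nthreads)
        }

        var wg sync.WaitGroup
        wg.Add(nthreads)
        for t := 0; t < nthreads; t++ {
                rng := rand.New(rand.NewSource(int64(t)))
                go func() {
                        defer wg.Done()
                        for r := 0; r < rounds; r++ {
                                s := sems[rng.Intn(nsems)] // pick a random semaphore
                                s <- struct{}{}            // V: wake a waiter or bank a count
                                <-s                        // P: possibly block
                        }
                }()
        }
        wg.Wait()
}
\end{verbatim}
The send-then-receive order mirrors the pseudo-code: a \at may consume the count it just added or be pushed out by another \at, which is the churn being measured.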
\begin{figure}
…
\label{fig:churn:nasus:low:ns}
}
\caption[Churn Benchmark on AMD]{Churn Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:churn:nasus}
\end{figure}
…
Looking now at the results for the AMD architecture, Figure~\ref{fig:churn:nasus}, the results tell a somewhat different story.
Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, libfibre and Tokio all produce decent scalability.
\CFA suffers particularly from larger variations at higher \proc counts, but largely outperforms the other runtimes.
Go still produces intriguing results in this case and, even more intriguingly, the results have fairly low variation.

One possible explanation for Go's difference is that it has very few available concurrent primitives, so a channel is substituted for a semaphore.
On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
However, in practice, there are implementation differences between the two; \eg if the semaphore count gets somewhat high, objects accumulate in the channel.
Note that this substitution is also made in the cycle benchmark;
however, in that context, it did not have a notable impact.

A second possible explanation is that Go may use the heap when allocating variables based on the result of escape analysis of the code.
It is possible for variables that could be placed on the stack to instead be placed on the heap.
This placement could cause extra pointer chasing in the benchmark, heightening locality effects.
Depending on how the heap is structured, this could also lead to false sharing.
I did not investigate what causes these unusual results.
…
Go still suffers from poor scalability but is now unusual in a different way.
While it obtains effectively constant performance regardless of \proc count, this ``sequential'' performance is higher than that of the other runtimes for low \proc counts, up to 32 \procs, after which the other runtimes manage to outscale Go.
In conclusion, the objective of this benchmark is to demonstrate that \glslink{atsched}{unparking} \ats from remote \procs does not cause too much contention on the local queues.
Indeed, the fact that most runtimes achieve some scaling at various \proc counts demonstrates that migrations do not need to be serialized.
Again, these results demonstrate that \CFA achieves satisfactory performance compared to the other runtimes.

\section{Locality}
…
In both variations, before @V@ing the semaphore, each \at calls a @work@ function which increments random cells inside the data array.
In the noshare variation, the array is not passed on and each thread continuously accesses its private array.
In the share variation, the array is passed to another thread via the semaphore's shadow queue (each blocking thread can save a word of user data in its blocking node), transferring ownership of the array to the woken thread.
Figure~\ref{fig:locality:code} shows the pseudo-code for this benchmark.

The objective here is to highlight the different decisions made by the runtime when \glslink{atsched}{unparking}.
Since each thread unparks a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
In the noshare variation, \glslink{atsched}{unparking} the \at on the local \proc is an appropriate choice since the data was last modified on that \proc.
…
The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation which matches their \glslink{atsched}{unparking} policy.
This decision should lead to \CFA, Go and Tokio achieving better performance in the share variation, while libfibre achieves better performance in noshare.
Indeed, \CFA, Go and Tokio have the default policy of \glslink{atsched}{unparking} \ats on the local \proc, whereas libfibre has the default policy of \glslink{atsched}{unparking} \ats wherever they last ran.

\begin{figure}
…
\subfloat[Share]{\label{fig:locality:code:T2}\usebox\myboxB}

\caption[Locality Benchmark: Pseudo Code]{Locality Benchmark: Pseudo Code}
\label{fig:locality:code}
\end{figure}
…
Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary, and climb again from 96 to 192 \procs.
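The ownership hand-off in the share variation described above can be sketched in Go by passing the array itself through the channel that stands in for the semaphore, so a woken thread adopts the array the previous thread just worked on.
The sketch below is illustrative only and not the thesis code; the array size and the semaphore, thread, and iteration counts are invented.
\begin{verbatim}
package main

import (
        "math/rand"
        "sync"
)

const arraySize = 4096 // cache-footprint knob; value invented for the sketch

// Illustrative Go sketch of the "share" variation (not the thesis code): the
// data array travels through the channel, so the woken thread takes
// ownership of the array the previous thread just worked on.
func work(data []byte, rng *rand.Rand) {
        for i := 0; i < 100; i++ {
                data[rng.Intn(len(data))]++ // increment random cells
        }
}

func main() {
        const nsems, nthreads, rounds = 10, 64, 10000

        // Each channel is a counting semaphore whose payload plays the role
        // of the semaphore's shadow queue.
        sems := make([]chan []byte, nsems)
        for i := range sems {
                sems[i] = make(chan []byte, nthreads)
        }

        var wg sync.WaitGroup
        wg.Add(nthreads)
        for t := 0; t < nthreads; t++ {
                rng := rand.New(rand.NewSource(int64(t)))
                data := make([]byte, arraySize) // initially private array
                go func() {
                        defer wg.Done()
                        for r := 0; r < rounds; r++ {
                                work(data, rng)
                                s := sems[rng.Intn(nsems)]
                                s <- data  // V: pass ownership of the array along
                                data = <-s // P: block, then adopt whatever array wakes us
                        }
                }()
        }
        wg.Wait()
}
\end{verbatim}
In the noshare variation, the channel would carry no payload and each thread would keep working on its private array, which is exactly the difference between the two variations described above.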
\begin{figure}
…
\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:locality:jax}
\end{figure}
…
\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
For throughput, higher is better; for scalability, lower is better.
Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
\label{fig:locality:nasus}
\end{figure}
…
Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array, which leaves the \at data structure and the remote queue as the only sources of likely cache misses.
Results show both are amortized fairly well in this case.
\CFA and Tokio both \unpark locally and, as a result, suffer a marginal performance degradation from the cache miss on the array.

Looking at the results for the AMD architecture, Figure~\ref{fig:locality:nasus} shows results similar to the Intel machine.
Again the overall performance is higher and slightly more variation is visible.
Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
This advantage is expected from the AMD server with its smaller and narrower caches that magnify the costs of processing the array.
…
// pick next leader
leader := threads[ prng() % len(threads) ]
// wake everyone
if ! exhaust {
for t in threads {
…
}
\end{cfa}
\caption[Transfer Benchmark: Pseudo Code]{Transfer Benchmark: Pseudo Code}
\label{fig:transfer:code}
\end{figure}

The experiment is designed to evaluate the short-term load balancing of a scheduler.
Indeed, schedulers where the runnable \ats are partitioned on the \procs may need to balance the \ats for this experiment to terminate.
This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
In the semaphore variation, the number of runnable \ats eventually dwindles to only the leader.
This scenario is a simpler case for schedulers to handle since \procs eventually run out of work.
In the yielding variation, the number of runnable \ats stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note that runtimes with preemption circumvent this problem by forcing the spinner to yield.
In \CFA, preemption was disabled as it only obfuscates the results.
I am not aware of a method to disable preemption in Go.
…
In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by $(CSL + SL) / (NP - 1)$,
where $CSL$ is the context-switch latency, $SL$ is the cost of enqueueing and dequeuing a \at, and $NP$ is the number of \procs.
However, if the scheduler allows \ats to run many times before other \ats can run once, this delay increases.
The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
The benchmark effectively provides the fairness guarantee in this case.
In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.

While this is an artificial scenario, in real life it requires only a few simple pieces.
The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system and the starvation has an easily measured impact.
Hence, \emph{any} \at that runs uninterrupted for a significant time in a saturated system could lead to this kind of starvation.

\subsection{Results}
…
\end{table}

Table~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and all \procs on the computer, where each experiment runs 100 \ats per \proc.
Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
As such, data points within the same order of magnitude are considered equal.
That is, the takeaway of this experiment is the presence of very large differences.
The semaphore variation is denoted ``Park'', where the number of \ats dwindles as the new leader is acknowledged.
The yielding variation is denoted ``Yield''.
The experiment is only run for few and for many \procs since scaling is not the focus of this experiment.

The first two columns show the results for the semaphore variation on Intel.
While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve fairly close results.
Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered equal.
…
This difference is because Go has a classic work-stealing scheduler, but it adds coarse-grain preemption, which interrupts the spinning leader after a period.
Neither libfibre nor Tokio complete the experiment.
Both runtimes also use classical work-stealing scheduling without preemption, and therefore, none of the work queues are ever emptied, so no load balancing occurs.
…
The first two columns show all runtimes obtaining results well within $10\times$ of each other.
The next two columns again show \CFA producing low latencies, while Go still has notably higher latency, but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
Neither libfibre nor Tokio complete the experiment.

This experiment clearly demonstrates that \CFA achieves significantly better fairness.
The semaphore variation serves as a control, where all runtimes are expected to transfer leadership fairly quickly.
Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
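To make the transfer benchmark's control flow concrete, the following is a rough Go sketch of the yielding variation; it is illustrative only, not the thesis code, the thread and round counts are invented, and the leader-to-leader latency measurement is omitted.
One thread acts as the leader and spins until every other thread has acknowledged the current round, then hands leadership to a randomly chosen thread; followers acknowledge and keep yielding, so the system stays saturated with runnable threads.
\begin{verbatim}
package main

import (
        "math/rand"
        "runtime"
        "sync"
        "sync/atomic"
)

// Illustrative Go sketch of the yielding variation (not the thesis code).
func main() {
        const nthreads, totalRounds = 8, 1000

        var gen, leader int64           // current round and current leader id
        seen := make([]int64, nthreads) // last round acknowledged by each thread
        for i := range seen {
                seen[i] = -1 // no thread has acknowledged round 0 yet
        }

        var wg sync.WaitGroup
        wg.Add(nthreads)
        for id := int64(0); id < nthreads; id++ {
                go func(id int64) {
                        defer wg.Done()
                        rng := rand.New(rand.NewSource(id)) // per-thread PRNG for picking leaders
                        for {
                                g := atomic.LoadInt64(&gen)
                                if g >= totalRounds {
                                        return
                                }
                                if atomic.LoadInt64(&leader) == id {
                                        // Leader: spin until all other threads have seen round g.
                                        for i := int64(0); i < nthreads; i++ {
                                                if i == id {
                                                        continue
                                                }
                                                for atomic.LoadInt64(&seen[i]) < g {
                                                        runtime.Gosched()
                                                }
                                        }
                                        atomic.StoreInt64(&leader, rng.Int63n(nthreads)) // pick next leader
                                        atomic.AddInt64(&gen, 1)                         // start the next round
                                } else {
                                        atomic.StoreInt64(&seen[id], g) // acknowledge the current round
                                        runtime.Gosched()               // yielding variation: stay runnable
                                }
                        }
                }(id)
        }
        wg.Wait()
}
\end{verbatim}
In the Park variation, the follower branch would instead block on a semaphore and the leader would wake every thread after picking its successor, as in the pseudo-code fragment shown earlier.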