Changeset 5548175 for doc


Timestamp:
Sep 12, 2022, 2:58:27 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
4ab54c9
Parents:
3e8dacc
Message:

checking eval micro

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r3e8dacc r5548175  
    44This chapter presents five different experimental setups for evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
    55All of these systems have a \gls{uthrding} model.
    6 The goal in this chapter is show the \CFA scheduler obtains equivalent performance to other less fair schedulers through the different experiments.
    7 Note, only the code of the \CFA tests is shown;
     6The goal of this chapter is to show that the \CFA scheduler obtains equivalent performance to other less fair schedulers through the different experiments.
     7Note that only the code of the \CFA tests is shown;
    88all tests in the other systems are functionally identical and available online~\cite{GITHUB:SchedulingBenchmarks}.
    99
     
    1313\begin{description}
    1414\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
    15 The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
     15The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for a total of 128 \glspl{hthrd} per socket and 256 \glspl{hthrd} across the 2 sockets.
    1616Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
    17 Each L1 and L2 instance are only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
     17Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
    1818The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
    1919
     
    2525\end{description}
    2626
    27 For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyper threading.
     27For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
    2828If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
    2929If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
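For example, pinning an experiment to the first 24 \glspl{hthrd} can be done with @taskset -c 0-23@; the CPU list shown here is hypothetical, since the appropriate list depends on each machine's CPU numbering.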
     
    3232On AMD, the same algorithm is used, but the machine only has 2 sockets.
    3333So hyperthreading\footnote{
    34 Hyperthreading normally refers specifically to the technique used by Intel, however it is often used generically to refer to any equivalent feature.}
    35 is used when the \proc count reach 65 and 193.
    36 
    37 The limited sharing of the last-level cache on the AMD machine is markedly different than the Intel machine.
     34Hyperthreading normally refers specifically to the technique used by Intel; however, it is often used generically to refer to any equivalent feature.}
     35is used when the \proc count reaches 65 and 193.
     36
     37The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
    3838Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD, cache misses served by a different L3 instance on the same CPU also incur high latency.
    3939
     
    4242Each experiment is run 15 times, varying the number of processors according to the computer.
    4343All experiments gather throughput data and secondary data for scalability or latency.
    44 The data is graphed using a solid, a dashed, and a dotted line, representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
     44The data is graphed using a solid, a dashed, and a dotted line, representing the median, maximum and minimum results respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
    4545An alternative display is to use error bars with min/max as the bottom/top for the bar.
    4646However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
     
    6262Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
    6363For this reason, I designed a different push/pop benchmark, called \newterm{Cycle Benchmark}.
    64 This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
     64This benchmark arranges several \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
    6565At runtime, each \at unparks the next \at before \glslink{atblock}{parking} itself.
    6666Unparking the next \at pushes that \at onto the ready queue while the ensuing \park leads to a \at being popped from the ready queue.
     
    7979If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
    8080(Note, an \unpark is like a V on a semaphore, so the subsequent \park (P) may not block.)
    81 Every runtime system must handle this race and cannot optimized away the ready-queue pushes and pops.
     81Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
    8282To prevent any attempt to silently omit ready-queue operations, the ring of \ats is made big enough that the \ats have time to fully \park before being unparked again.
    8383Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
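Since only the \CFA code is shown for the benchmarks, the following minimal Go sketch of one ring may help make the mechanism concrete, with a buffered channel per thread standing in for \park/\unpark (illustrative only, not the published benchmark code~\cite{GITHUB:SchedulingBenchmarks}):

\begin{cfa}
package main

import "sync"

// One ring: each goroutine unparks its successor (send), then parks
// itself (receive). Buffered channels let an unpark arrive before the
// matching park, mirroring the V-before-P race described above.
func cycle(ringSize, rounds int) {
	parked := make([]chan struct{}, ringSize)
	for i := range parked {
		parked[i] = make(chan struct{}, 1)
	}
	var wg sync.WaitGroup
	for i := 0; i < ringSize; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			next := (id + 1) % ringSize
			for r := 0; r < rounds; r++ {
				parked[next] <- struct{}{}  // unpark the next thread
				<-parked[id]                // park self
			}
		}(i)
	}
	parked[0] <- struct{}{}  // seed the ring with one token
	wg.Wait()
}

func main() { cycle(5, 1000) }
\end{cfa}

The one-token buffer per channel is what keeps the unpark-before-park race benign in this sketch: a token deposited early simply turns the subsequent park into a nonblocking receive.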
     
    9999}
    100100\end{cfa}
    101 \caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
     101\caption[Cycle Benchmark: Pseudo Code]{Cycle Benchmark: Pseudo Code}
    102102\label{fig:cycle:code}
    103103\bigskip
     
    129129        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
    130130        For throughput, higher is better, for scalability, lower is better.
    131         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     131        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    132132        \label{fig:cycle:jax}
    133133\end{figure}
     
    161161        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
    162162        For throughput, higher is better, for scalability, lower is better.
    163         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     163        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    164164        \label{fig:cycle:nasus}
    165165\end{figure}
     
    173173As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement for all runtimes, which can be seen as a flattening of the line.
    174174This effect even causes a decrease in throughput in libfibre's case.
    175 As expected, this pattern repeats again between \proc count 72 and 96.
     175As expected, this pattern repeats between \proc count 72 and 96.
    176176
    177177Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for 1 cycle of 5 \ats for each \proc.
     
    179179Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
    180180This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
    181 This can either be the result of \procs actually running out of work, or simply additional overhead from tracking whether or not there is work available.
     181This can either be the result of \procs actually running out of work or simply additional overhead from tracking whether or not there is work available.
    182182Indeed, unlike the left column, it is likely that the ready-queue is transiently empty, which can trigger additional synchronization steps.
    183183Interestingly, libfibre achieves better performance with 1 cycle.
     
    193193Note that I did not investigate the libfibre performance boost for 1 cycle in this experiment.
    194194
    195 The conclusion from both architectures is that all of the compared runtime have fairly equivalent performance for this micro-benchmark.
     195The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
    196196Clearly, the pathological case with 1 cycle per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
    197 For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
     197In this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
    198198For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers.
    199199
     
    218218}
    219219\end{cfa}
    220 \caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
     220\caption[Yield Benchmark: Pseudo Code]{Yield Benchmark: Pseudo Code}
    221221\label{fig:yield:code}
    222222%\end{figure}
     
    229229                \label{fig:yield:jax:ops}
    230230        }
    231         \subfloat[][Throughput, 1 \ats per \proc]{
     231        \subfloat[][Throughput, 1 \at per \proc]{
    232232                \resizebox{0.5\linewidth}{!}{
    233233                \input{result.yield.low.jax.ops.pstex_t}
     
    242242                \label{fig:yield:jax:ns}
    243243        }
    244         \subfloat[][Scalability, 1 \ats per \proc]{
     244        \subfloat[][Scalability, 1 \at per \proc]{
    245245                \resizebox{0.5\linewidth}{!}{
    246246                \input{result.yield.low.jax.ns.pstex_t}
     
    250250        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
    251251        For throughput, higher is better, for scalability, lower is better.
    252         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     252        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    253253        \label{fig:yield:jax}
    254254\end{figure}
     
    258258Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment on Intel and AMD, respectively.
    259259Looking at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for 100 \ats for each \proc.
    260 Note, the Y-axis on this graph is twice as large as the Intel cycle-graph.
     260Note that the Y-axis on this graph is twice as large as on the Intel cycle graph.
    261261A visual glance between the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
    262262\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
    263263Hence, the @yield@ throughput and scalability graphs have similar shapes to the corresponding @cycle@ graphs.
    264 The only difference is sightly better performance for @yield@ because of less synchronization.
     264The only difference is slightly better performance for @yield@ because of less synchronization.
    265265Libfibre has special handling for @yield@: it uses the fact that the number of ready fibres does not change to bypass the idle-sleep mechanism entirely.
    266266Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance on the left column.
    267 Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority.
     267Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it a lower priority.
    268268The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
    269269Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@.
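For concreteness, a minimal Go version of the yield loop is sketched below (illustrative only, not the published benchmark code~\cite{GITHUB:SchedulingBenchmarks}); @runtime.Gosched()@ is the yield primitive whose global-queue handling causes the collapse described above:

\begin{cfa}
package main

import (
	"runtime"
	"sync"
)

func main() {
	const threads, rounds = 100, 100000
	var wg sync.WaitGroup
	for i := 0; i < threads; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := 0; r < rounds; r++ {
				runtime.Gosched()  // yield: reschedule without blocking
			}
		}()
	}
	wg.Wait()
}
\end{cfa}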
     
    275275Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for 1 \at for each \proc.
    276276As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
    277 This result is to be expected since fewer \ats means \procs are more likely to run out of work.
    278 On the other hand, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
    279 This results in libfibre outperforming other runtimes even more, achieving 8 times more throughput than for @cycle@.
    280 Finally, Go and Tokio performance collapse is still the same with fewer \ats.
    281 The only exception is Tokio running on 24 \proc, deepening the mystery of its yielding mechanism further.
     277This result is to be expected since fewer \ats mean \procs are more likely to run out of work.
     278On the other hand, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
     279This results in libfibre outperforming the other runtimes even more, achieving 8 times the throughput of @cycle@.
     280Finally, Go and Tokio's performance collapse is still the same with fewer \ats.
     281The only exception is Tokio running on 24 \procs, deepening the mystery of its yielding mechanism further.
    282282
    283283\begin{figure}
     
    309309        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
    310310        For throughput, higher is better, for scalability, lower is better.
    311         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     311        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    312312        \label{fig:yield:nasus}
    313313\end{figure}
     
    316316Note that the maximums of the Y-axes on Intel and AMD differ less in @yield@ than in @cycle@.
    317317Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
    318 Libfibre still outpaces all other runtimes, but it encounter a performance hit at 64 \procs.
    319 This anomaly suggest some amount of communication between the \procs that the Intel machine is able to mask where the AMD is not once hyperthreading is needed.
     318Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
     319This anomaly suggests some amount of communication between the \procs that the Intel machine is able to mask where the AMD is not once hyperthreading is needed.
    320320Go and Tokio still display the same performance collapse as on Intel.
    321321Looking next at the right column on AMD, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime systems effectively behave the same as they did on the Intel machine.
    322322At the high \ats count, the only difference is Libfibre's scaling and this difference disappears on the right column.
    323 This behaviour suggest whatever communication issue it encountered on the left is completely circumvented on the right.
    324 
    325 It is difficult to draw conclusions for this benchmark when runtime system treat @yield@ so differently.
     323This behaviour suggests that whatever communication issue it encountered on the left is completely circumvented on the right.
     324
     325It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
    326326The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.
    327327
     
    329329\section{Churn}
    330330
    331 The Cycle and Yield benchmark represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
     331The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
    332332In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
    333333
     
    336336This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \at data structure.
    337337Hence, this benchmark has performance dominated by the cache traffic as \procs are constantly accessing each other's data.
    338 In either case, this benchmark aims to measure how well a scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.
     338In either case, this benchmark aims to measure how well a scheduler handles these cases since both cases can lead to performance degradation if not handled correctly.
    339339
    340340This benchmark uses a fixed-size array of counting semaphores.
    341341Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s (maybe blocks) the \at on the semaphore.
    342342This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
    343 For this benchmark to work, the number of \ats must be equal or greater than the number of semaphores plus the number of \procs;
    344 \eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and now there is no \ats to unblock (V) them.
    345 Note, the nature of these semaphores mean the counter can go beyond 1, which can lead to nonblocking calls to @P@.
     343For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
     344\eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and now there are no \ats to unblock (V) them.
     345Note that the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
    346346Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.
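In addition to the pseudo code, a runnable Go approximation is sketched below; as noted later in this section, Go lacks a bare semaphore, so a buffered channel stands in for each counting semaphore (illustrative only; the final drain loop exists just so the sketch terminates cleanly and is not part of the benchmark logic):

\begin{cfa}
package main

import (
	"math/rand"
	"sync"
)

func main() {
	const sems, threads, rounds = 10, 16, 100000
	semas := make([]chan struct{}, sems)
	for i := range semas {
		// generous buffer: the semaphore counter can climb beyond 1
		semas[i] = make(chan struct{}, 2*threads)
	}
	var wg sync.WaitGroup
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			prng := rand.New(rand.NewSource(seed))
			for r := 0; r < rounds; r++ {
				s := semas[prng.Intn(sems)]
				s <- struct{}{}  // V: wake a waiter or bank a count
				<-s              // P: may block if the count is spent
			}
			for _, s := range semas {
				s <- struct{}{}  // drain: stop stragglers blocking forever
			}
		}(int64(t))
	}
	wg.Wait()
}
\end{cfa}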
    347347
     
    360360}
    361361\end{cfa}
    362 \caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
     362\caption[Churn Benchmark: Pseudo Code]{Churn Benchmark: Pseudo Code}
    363363\label{fig:churn:code}
    364364%\end{figure}
     
    390390                \label{fig:churn:jax:low:ns}
    391391        }
    392         \caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn on the benchmark on the Intel machine.
     392        \caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
    393393        For throughput, higher is better, for scalability, lower is better.
    394         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     394        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    395395        \label{fig:churn:jax}
    396396\end{figure}
     
    408408However, beyond this point Go keeps this level of variation but does not scale further in any of the runs.
    409409
    410 Throughput and scalability is notably worst for all runtimes than the previous benchmarks since there is inherently more communication between processors.
     410Throughput and scalability are notably worse for all runtimes than in the previous benchmarks since there is inherently more communication between processors.
    411411Indeed, none of the runtimes reach 40 million operations per second, while in the cycle benchmark all but libfibre reached 400 million operations per second.
    412 Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc count, all runtimes produce poor scaling.
     412Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
    413413However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
    414414At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.
     
    417417Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
    418418\CFA, Tokio and Go all use the approach of \glslink{atsched}{unparking} to the local \proc, while Libfibre unparks to the remote \proc.
    419 In this particular benchmark, the inherent chaos of the benchmark, in addition to small memory footprint, means neither approach wins over the other.
     419In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.
    420420
    421421Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns} show the results for 1 \at for each \proc, and many of the differences between the runtimes disappear.
     
    424424Tokio maintains effectively the same curve shapes as \CFA and libfibre, but it incurs extra costs for all \proc counts.
    425425While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts.
    426 Inexplicably resulting in super-linear scaling for some runs, \ie the scalability curves displays a negative slope.
     426This inexplicably results in super-linear scaling for some runs, \ie the scalability curves display a negative slope.
    427427
    428428Interestingly, unlike the cycle benchmark, running with fewer \ats does not produce drastically different results.
    429 In fact, the overall throughput stays almost exactly the same on the left and right column.
     429In fact, the overall throughput stays almost exactly the same on the left and right columns.
    430430
    431431\begin{figure}
     
    455455                \label{fig:churn:nasus:low:ns}
    456456        }
    457         \caption[Churn Benchmark on AMD]{Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn on the benchmark on the AMD machine.
     457        \caption[Churn Benchmark on AMD]{Churn Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
    458458        For throughput, higher is better, for scalability, lower is better.
    459         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     459        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    460460        \label{fig:churn:nasus}
    461461\end{figure}
     
    464464Looking now at the results for the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
    465465Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
    466 \CFA suffers particular from a larger variations at higher \proc counts, but largely outperforms the other runtimes.
     466\CFA suffers particularly from larger variations at higher \proc counts, but largely outperforms the other runtimes.
    467467Go still produces intriguing results in this case and, even more intriguingly, the results have fairly low variation.
    468468
    469469One possible explanation for Go's difference is that it has very few available concurrent primitives, so a channel is substituted for a semaphore.
    470 On paper a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
    471 However, in practice, there are implementation difference between the two, \eg if the semaphore count can get somewhat high so object accumulate in the channel.
     470On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
     471However, in practice, there are implementation differences between the two, \eg if the semaphore count gets somewhat high, objects accumulate in the channel.
    472472Note that this substitution is also made in the cycle benchmark;
    473473however, in that context, it did not have a notable impact.
    474474
    475 As second possible explanation is that Go may use the heap when allocating variables based on the result of escape analysis of the code.
     475A second possible explanation is that Go may use the heap when allocating variables based on the result of the escape analysis of the code.
    476476It is possible for variables that could be placed on the stack to instead be placed on the heap.
    477477This placement could cause extra pointer chasing in the benchmark, heightening locality effects.
    478 Depending on how the heap is structure, this could also lead to false sharing.
     478Depending on how the heap is structured, this could also lead to false sharing.
    479479I did not investigate what causes these unusual results.
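As an aside, escape decisions are easy to observe on a toy example (hypothetical code, unrelated to the benchmarks): compiling with @go build -gcflags=-m@ prints which variables the Go compiler moves to the heap.

\begin{cfa}
package main

// buf stays on the goroutine stack: it does not outlive the call.
func onStack() int {
	var buf [64]int
	return buf[0]
}

// buf escapes to the heap: the returned pointer outlives the frame.
func onHeap() *[64]int {
	var buf [64]int
	return &buf
}

func main() {
	_ = onStack()
	_ = onHeap()
}
\end{cfa}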
    480480
     
    483483Go still suffers from poor scalability but is now unusual in a different way.
    484484While it obtains effectively constant performance regardless of \proc count, this ``sequential'' performance is higher than the other runtimes for low \proc count.
    485 Up to 32 \procs, after which the other runtime manage to outscale Go.
     485This advantage holds up to 32 \procs, after which the other runtimes manage to outscale Go.
    486486
    487487In conclusion, the objective of this benchmark is to demonstrate that \glslink{atsched}{unparking} \ats from remote \procs does not cause too much contention on the local queues.
    488488Indeed, the fact that most runtimes achieve some scaling across the various \proc counts demonstrates that migrations do not need to be serialized.
    489 Again these result demonstrate \CFA achieves satisfactory performance with respect to the other runtimes.
     489Again these results demonstrate that \CFA achieves satisfactory performance compared to the other runtimes.
    490490
    491491\section{Locality}
     
    496496In both variations, before @V@ing the semaphore, each \at calls a @work@ function which increments random cells inside the data array.
    497497In the noshare variation, the array is not passed on and each thread continuously accesses its private array.
    498 In the share variation, the array is passed to another thread via the semaphore's shadow-queue (each blocking thread can save a word of user data in its blocking node), transferring ownership of the array to the woken thread.
    499 Figure~\ref{fig:locality:code} shows pseudo code for this benchmark.
    500 
    501 The objective here is to highlight the different decision made by the runtime when \glslink{atsched}{unparking}.
     498In the share variation, the array is passed to another thread via the semaphore's shadow queue (each blocking thread can save a word of user data in its blocking node), transferring ownership of the array to the woken thread.
     499Figure~\ref{fig:locality:code} shows the pseudo-code for this benchmark.
     500
     501The objective here is to highlight the different decisions made by the runtime when \glslink{atsched}{unparking}.
    502502Since each thread unparks a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
    503503In the noshare variation, \glslink{atsched}{unparking} the \at on the local \proc is an appropriate choice since the data was last modified on that \proc.
     
    506506The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation which matches their \glslink{atsched}{unparking} policy.
    507507This decision should lead to \CFA, Go and Tokio achieving better performance in the share variation while libfibre achieves better performance in noshare.
    508 Indeed, \CFA, Go and Tokio have the default policy of \glslink{atsched}{unparking} \ats on the local \proc, where as libfibre has the default policy of \glslink{atsched}{unparking} \ats wherever they last ran.
     508Indeed, \CFA, Go and Tokio have the default policy of \glslink{atsched}{unparking} \ats on the local \proc, whereas libfibre has the default policy of \glslink{atsched}{unparking} \ats wherever they last ran.
    509509
    510510\begin{figure}
     
    556556\subfloat[Share]{\label{fig:locality:code:T2}\usebox\myboxB}
    557557
    558 \caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
     558\caption[Locality Benchmark: Pseudo Code]{Locality Benchmark: Pseudo Code}
    559559\label{fig:locality:code}
    560560\end{figure}
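As a complement to the pseudo code in Figure~\ref{fig:locality:code}, the following Go sketch approximates the share variation, with a channel of array pointers standing in for the semaphore and its shadow queue (illustrative only; the final drain loop exists just so the sketch terminates):

\begin{cfa}
package main

import (
	"math/rand"
	"sync"
)

const cells = 1024

// work touches random cells, pulling the array into the caches of
// whichever processor is running the goroutine.
func work(data *[cells]uint64, prng *rand.Rand) {
	for i := 0; i < 128; i++ {
		data[prng.Intn(cells)]++
	}
}

func main() {
	const threads, sems, rounds = 8, 4, 10000
	// each channel is both semaphore and shadow queue: the array
	// pointer is the word of user data handed to the woken thread
	queues := make([]chan *[cells]uint64, sems)
	for i := range queues {
		queues[i] = make(chan *[cells]uint64, 2*threads)
	}
	var wg sync.WaitGroup
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			prng := rand.New(rand.NewSource(seed))
			data := new([cells]uint64)  // noshare would keep this private
			for r := 0; r < rounds; r++ {
				work(data, prng)
				q := queues[prng.Intn(sems)]
				q <- data   // V: array ownership moves with the wakeup
				data = <-q  // P: adopt the array handed over by the waker
			}
			for _, q := range queues {
				q <- new([cells]uint64)  // drain: unblock stragglers
			}
		}(int64(t))
	}
	wg.Wait()
}
\end{cfa}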
     
    570570Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
    571571Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
    572 As for most previous results, all runtimes suffer a performance hit after 48 \proc, which is the socket boundary, and climb again from 96 to 192 \procs.
     572As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary, and climb again from 96 to 192 \procs.
    573573
    574574\begin{figure}
     
    600600        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
    601601        For throughput, higher is better, for scalability, lower is better.
    602         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     602        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    603603        \label{fig:locality:jax}
    604604\end{figure}
     
    632632        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
    633633        For throughput, higher is better, for scalability, lower is better.
    634         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
     634        Each series represents 15 independent runs; the dashed lines are the maximums of each series, while the solid lines are the medians and the dotted lines are the minimums.}
    635635        \label{fig:locality:nasus}
    636636\end{figure}
     
    640640Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array, which leaves the \at data structure and the remote queue as the only likely sources of cache misses.
    641641Results show both are amortized fairly well in this case.
    642 \CFA and Tokio both \unpark locally and as a result suffer a marginal performance degradation from the cache miss on the array.
     642\CFA and Tokio both \unpark locally and, as a result, suffer a marginal performance degradation from the cache miss on the array.
    643643
    644644Looking at the results for the AMD architecture, Figure~\ref{fig:locality:nasus} shows results similar to those on Intel.
    645 Again overall performance is higher and slightly more variation is visible.
     645Again the overall performance is higher and slightly more variation is visible.
    646646Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
    647647This advantage is expected from the AMD server with its smaller and narrower caches that magnify the costs of processing the array.
     
    685685        // pick next leader
    686686        leader := threads[ prng() % len(threads) ]
    687         // wake every one
     687        // wake everyone
    688688        if ! exhaust {
    689689                for t in threads {
     
    704704}
    705705\end{cfa}
    706 \caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
     706\caption[Transfer Benchmark: Pseudo Code]{Transfer Benchmark: Pseudo Code}
    707707\label{fig:transfer:code}
    708708\end{figure}
    709709
    710 The experiment is designed to evaluate the short-term load-balancing of a scheduler.
     710The experiment is designed to evaluate the short-term load balancing of a scheduler.
    711711Indeed, schedulers where the runnable \ats are partitioned on the \procs may need to balance the \ats for this experiment to terminate.
    712712This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
    713 In the semaphore variation, the number of runnable \ats eventually dwindles down to only the leader.
     713In the semaphore variation, the number of runnable \ats eventually dwindles to only the leader.
    714714This scenario is a simpler case to handle for schedulers since \procs eventually run out of work.
    715715In the yielding variation, the number of runnable \ats stays constant.
    716716This scenario is a harder case to handle because corrective measures must be taken even when work is available.
    717 Note, runtimes with preemption circumvent this problem by forcing the spinner to yield.
     717Note that runtimes with preemption circumvent this problem by forcing the spinner to yield.
    718718In \CFA, preemption was disabled as it only obfuscates the results.
    719719I am not aware of a method to disable preemption in Go.
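Since only pseudo code is shown in Figure~\ref{fig:transfer:code}, the following self-contained Go sketch approximates the yielding variation (illustrative only; the atomic counters and the termination flag are additions needed to make the sketch runnable, and Go's preemption means it terminates even without the cooperative yields):

\begin{cfa}
package main

import (
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
)

const threads, rounds = 8, 1000

func main() {
	var (
		leader    atomic.Int64          // index of the current leader
		transfers atomic.Int64          // completed leadership rounds
		done      atomic.Bool
		seen      [threads]atomic.Int64 // last leader each goroutine acked
		wg        sync.WaitGroup
	)
	for i := range seen {
		seen[i].Store(-1) // nobody has acknowledged a leader yet
	}
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func(id int64) {
			defer wg.Done()
			prng := rand.New(rand.NewSource(id))
			for !done.Load() {
				l := leader.Load()
				if l == id {
					// leader: spin until every other goroutine has seen it
					for i := 0; i < threads; i++ {
						for int64(i) != id && seen[i].Load() != l {
							runtime.Gosched()
						}
					}
					if transfers.Add(1) == rounds {
						done.Store(true) // stop: everyone exits their loops
						return
					}
					leader.Store(prng.Int63n(threads)) // pick next leader
				} else {
					seen[id].Store(l) // acknowledge the leader
					for leader.Load() == l && !done.Load() {
						runtime.Gosched() // yielding variation: never block
					}
				}
			}
		}(int64(t))
	}
	wg.Wait()
}
\end{cfa}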
     
    722722In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by $(CSL + SL) / (NP - 1)$,
    723723where $CSL$ is the context-switch latency, $SL$ is the cost for enqueueing and dequeuing a \at, and $NP$ is the number of \procs.
    724 However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
     724However, if the scheduler allows \ats to run many times before other \ats can run once, this delay increases.
    725725The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
    726726The benchmark effectively provides the fairness guarantee in this case.
    727 In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.
    728 
    729 While this is an artificial scenario, in real-life it requires only a few simple pieces.
     727In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.
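For a sense of scale, plugging hypothetical values into this expression, say $CSL = 1$~$\mu$s, $SL = 2$~$\mu$s and $NP = 13$ \procs, gives an ideal delay between leaders of $(1 + 2) / (13 - 1) = 0.25$~$\mu$s.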
     728
     729While this is an artificial scenario, in real life it requires only a few simple pieces.
    730730The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system and the starvation has an easily measured impact.
    731 Hence, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.
     731Hence, \emph{any} \at that runs uninterrupted for a significant time in a saturated system could lead to this kind of starvation.
    732732
    733733\subsection{Results}
     
    755755\end{table}
    756756
    757 Table~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs on the computer, where each experiment runs 100 \at per \proc.
     757Table~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs on the computer, where each experiment runs 100 \ats per \proc.
    758758Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
    759 As such, data points within the same order of magnitude are basically considered equal.
     759As such, data points within the same order of magnitude are considered equal.
    760760That is, the takeaway of this experiment is the presence of very large differences.
    761 The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
     761The semaphore variation is denoted ``Park'', where the number of \ats dwindles as the new leader is acknowledged.
    762762The yielding variation is denoted ``Yield''.
    763 The experiment is only run for few and many \procs, since scaling is not the focus of this experiment.
     763The experiment is run only with few \procs and with many \procs, since scaling is not the focus of this experiment.
    764764
    765765The first two columns show the results for the semaphore variation on Intel.
    766 While there are some differences in latencies, \CFA is consistently the fastest and Tokio the slowest, all runtimes achieve results that are fairly close.
     766While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve fairly close results.
    767767Again, this experiment is meant to highlight major differences so latencies within $10\times$ of each other are considered equal.
    768768
     
    773773This difference is because Go has a classic work-stealing scheduler, but it adds coarse-grain
    774774preemption, which interrupts the spinning leader after a period.
    775 Neither Libfibre or Tokio complete the experiment.
     775Neither Libfibre nor Tokio complete the experiment.
    776776Both runtimes also use classical work-stealing scheduling without preemption, and therefore, none of the work queues are ever emptied, so no load balancing occurs.
    777777
     
    779779The first two columns show all runtimes obtaining results well within $10\times$ of each other.
    780780The next two columns again show \CFA producing low latencies, while Go still has notably higher latency but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
    781 Neither Libfibre or Tokio complete the experiment.
    782 
    783 This experiments clearly demonstrates \CFA achieves significantly better fairness.
     781Neither Libfibre nor Tokio complete the experiment.
     782
     783This experiment clearly demonstrates that \CFA achieves significantly better fairness.
    784784The semaphore variation serves as a control, where all runtimes are expected to transfer leadership fairly quickly.
    785785Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.