Changeset 1fcbce7 for doc/theses


Timestamp:
Sep 5, 2022, 10:47:45 PM (2 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
901c0f6
Parents:
0fec6c1
Message:

proofread chapter eval_micro

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r0fec6c1 r1fcbce7  
    4242Each experiment is run 15 times varying the number of processors depending on the two different computers.
    4343All experiments gather throughput data and secondary data for scalability or latency.
    44 The data is graphed using a solid, a dashed and a dotted line, representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
     44The data is graphed using a solid, a dashed, and a dotted line, representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
    4545An alternative display is to use error bars with min/max as the bottom/top for the bar.
    4646However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
     
    102102\label{fig:cycle:code}
    103103%\end{figure}
    104 
    105 
     104\bigskip
    106105%\begin{figure}
    107106        \subfloat[][Throughput, 100 cycles per \proc]{
     
    130129                \label{fig:cycle:jax:low:ns}
    131130        }
    132         \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle count.
     131        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
    133132        For throughput, higher is better; for scalability, lower is better.
    134133        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
     
    170169\subsection{Results}
    171170
    172 Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
    173 Looking at the left column on Intel first, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns}, which shows the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
     171Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment on Intel and AMD, respectively.
     172Looking at the left column on Intel, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show the results for 100 cycles of 5 \ats for each \proc.
    174173\CFA, Go and Tokio all obtain effectively the same throughput performance.
    175174Libfibre is slightly behind in this case but still scales decently.
     
    178177As expected, this pattern repeats between \proc counts 72 and 96.
    179178
    180 Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns}, which shows the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
     179Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for 1 cycle of 5 \ats for each \proc.
    181180\CFA and Tokio obtain very similar results overall, but Tokio shows more variation in the results.
    182 Go achieves slightly better performance than \CFA and Tokio, but all three display significantly workst performance compared to the left column.
     181Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
    183182This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
    184183This can either be the result of \procs actually running out of work, or simply additional overhead from tracking whether or not there is work available.
    185 Indeed, unlike the left column, it is likely that the ready-queue will be transiently empty, which likely triggers additional synchronization steps.
     184Indeed, unlike the left column, the ready-queue is likely transiently empty, which triggers additional synchronization steps.
    186185Interestingly, libfibre achieves better performance with 1 cycle.
    187186
    188 Looking now at the results for the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results show a story that is overall similar to the results on the Intel, with close to double the performance overall but with slightly increased variation and some differences in the details.
    189 Note that the maximum of the Y-axis on Intel and AMD differ significantly.
    190 Looking at the left column first, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
     187Looking now at the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results are overall similar to Intel, but with close to double the performance, slightly increased variation, and some differences in the details.
     188Note, the maximums of the Y-axes on Intel and AMD differ significantly.
     189Looking at the left column on AMD, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns} show all 4 runtimes achieve very similar throughput and scalability.
    191190However, as the number of \procs grows, the results on AMD show notably more variability than on Intel.
    192 The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
    193 Looking next at the right column, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
    194 This is different than on Intel, where Tokio behaved like \CFA rather than behaving like Go.
     191The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
     192Looking next at the right column on AMD, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
     193This result differs from Intel, where Tokio behaved like \CFA rather than like Go.
    195194Again, the same performance increase for libfibre is visible when running fewer \ats.
    196195Note, I did not investigate the libfibre performance boost for 1 cycle in this experiment.
     
    203202\section{Yield}
    204203
    205 For completion, the classic yield benchmark is included.
     204For completeness, the classic yield benchmark is included.
    206205Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
    207206Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
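
For readers who want a concrete picture of the loop shape, the following is a minimal Go analogue (not the thesis's \CFA pseudo code; the processor count, duration, and names are illustrative only): each thread spins calling the runtime's yield primitive and the benchmark counts how many yields complete before the experiment ends.

package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

func main() {
	procs := 4              // processors used by the scheduler
	runtime.GOMAXPROCS(procs)
	nthreads := procs * 100 // 100 threads per processor
	var ops int64           // completed yields across all threads
	var stop int32          // set when the experiment duration elapses

	for i := 0; i < nthreads; i++ {
		go func() {
			for atomic.LoadInt32(&stop) == 0 {
				runtime.Gosched()        // the yield under test
				atomic.AddInt64(&ops, 1)
			}
		}()
	}
	time.Sleep(time.Second) // fixed experiment duration
	atomic.StoreInt32(&stop, 1)
	fmt.Println("yields completed:", atomic.LoadInt64(&ops))
}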
     
    259258\subsection{Results}
    260259
    261 Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment.
    262 Looking at the left column on Intel first, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc.
    263 Note, the Y-axis on the graph is twice as large as the Intel cycle-graph.
     260Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment on Intel and AMD, respectively.
     261Looking at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for 100 \ats for each \proc.
     262Note, the Y-axis on this graph is twice as large as on the Intel cycle graph.
    264263A visual comparison between the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
    265264\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
     
    276275This lack of communication is probably why the plateaus due to topology are not present.
    277276
    278 Lookking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc.
     277Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for 1 \at for each \proc.
    279278As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
    280 This is to be expected since fewet \ats means \procs are more likely to run out of work.
     279This result is to be expected since fewer \ats means \procs are more likely to run out of work.
    281280On the other hand, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
    282281This results in libfibre outperforming other runtimes even more, achieving 8 times more throughput than for @cycle@.
     
    320319Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
    321320Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
    322 This suggest some amount of communication between the \procs that the Intel machine was able to mask where the AMD is not once hyperthreading is needed.
    323 Go and Tokio still display the same performance collapse than on Intel.
    324 Looking next at the right column, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime effectively behave the same as they did on the Intel machine.
    325 At high \ats count the only difference was Libfibre's scaling and this difference disappears on the right column.
    326 This suggest that whatever communication benchmark it encountered on the left is completely circumvented on the right.
     321This anomaly suggests some amount of communication between the \procs that the Intel machine is able to mask, but the AMD machine cannot once hyperthreading is needed.
     322Go and Tokio still display the same performance collapse as on Intel.
     323Looking next at the right column on AMD, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime systems effectively behave the same as they did on the Intel machine.
     324At the high \ats count, the only difference is Libfibre's scaling, and this difference disappears on the right column.
     325This behaviour suggests whatever communication issue it encountered on the left is completely circumvented on the right.
    327326
    328327It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
     
    335334In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
    336335
    337 The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked and the \proc that subsequently unblocks it.
     336The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
    338337With processor-specific ready-queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
    339338This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
     
    342341
    343342This benchmark uses a fixed-size array of counting semaphores.
    344 Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s (maybe blocks) the \ats on the semaphore.
     343Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s (maybe blocks) the \at on the semaphore.
    345344This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
    346345For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
     
    393392                \label{fig:churn:jax:low:ns}
    394393        }
    395         \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn on the benchmark on the Intel machine.
     394        \caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn benchmark on the Intel machine.
    396395        For throughput, higher is better; for scalability, lower is better.
    397396        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
     
    401400\subsection{Results}
    402401
    403 Figures~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the results for the churn experiment.
    404 Looking at the left column on Intel first, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc, all runtime obtain fairly similar throughput for most \proc counts.
     402Figures~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the results for the churn experiment on Intel and AMD, respectively.
     403Looking at the left column on Intel, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show the results for 100 \ats for each \proc, and all runtimes obtain fairly similar throughput for most \proc counts.
    405404\CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
    406 As expected it scales decently up to 48 \procs and then basically plateaus.
    407 Tokio achieves very similar performance to \CFA until 48 \procs, after which it takes a significant hit but does keep scaling somewhat.
    408 Libfibre obtains effectively the same results as Tokio with slightly less scaling, \ie the scaling curve is the same but with slightly higher values.
    409 Finally Go gets the most peculiar results, scaling worst than other runtimes until 48 \procs.
     405As expected, it scales decently up to 48 \procs, drops from 48 to 72 \procs, and then plateaus.
     406Tokio achieves very similar performance to \CFA, with the starting boost, scaling decently until 48 \procs, dropping from 48 to 72 \procs, and then increasing again up to 192 \procs.
     407Libfibre obtains effectively the same results as Tokio with slightly less scaling, \ie the scaling curve is the same but with slightly lower values.
     408Finally, Go gets the most peculiar results, scaling worse than the other runtimes until 48 \procs.
    410409At 72 \procs, the results of the Go runtime vary significantly, sometimes scaling, sometimes plateauing.
    411 However, beyond this point Go keeps this level of variation but does not scale in any of the runs.
     410However, beyond this point Go keeps this level of variation but does not scale further in any of the runs.
    412411
    413412Throughput and scalability are notably worse for all runtimes than in the previous benchmarks, since there is inherently more communication between processors.
    414 Indeed, none of the runtime reach 40 million operations per second while in the cycle benchmark all but libfibre reached 400 million operations per second.
    415 Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc count, all runtime produce poor scaling.
     413Indeed, none of the runtimes reach 40 million operations per second while in the cycle benchmark all but libfibre reached 400 million operations per second.
     414Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
    416415However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
    417416At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.
    418417
    419418An interesting aspect to note here is that the runtimes differ in how they handle this situation.
    420 Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready-queue local \proc or to the ready-queue of the remote \proc, which previously ran the \at.
    421 \CFA, Tokio and Go all use the approach of unparking to the local \proc while Libfibre unparks to the remote \proc.
    422 In this particular benchmark, the inherent chaos of the benchmark in addition to small memory footprint means neither approach wins over the other.
    423 
    424 Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc, many of the differences between the runtime disappear.
     419Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
     420\CFA, Tokio and Go all use the approach of unparking to the local \proc, while Libfibre unparks to the remote \proc.
     421In this particular benchmark, the inherent chaos, in addition to the small memory footprint, means neither approach wins over the other.
     422
     423Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns} show the results for 1 \at for each \proc, and many of the differences between the runtimes disappear.
    425424\CFA outperforms other runtimes by a minuscule margin.
    426425Libfibre follows very closely behind with basically the same performance and scaling.
    427 Tokio maintains effectively the same curve shapes as it did with many threads, but it incurs extra costs for all \proc count.
    428 As a result it is slightly outperformed by \CFA and libfibre.
     426Tokio maintains effectively the same curve shapes as \CFA and libfibre, but it incurs extra costs for all \proc counts.
     427% As a result it is slightly outperformed by \CFA and libfibre.
    429428While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts.
    430429Inexplicably, some runs result in super-linear scaling, \ie the scalability curve displays a negative slope.
     
    459458                \label{fig:churn:nasus:low:ns}
    460459        }
    461         \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn on the benchmark on the AMD machine.
     460        \caption[Churn Benchmark on AMD]{Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn benchmark on the AMD machine.
    462461        For throughput, higher is better; for scalability, lower is better.
    463462        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
     
    468467Looking now at the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
    469468Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
    470 \CFA suffers particular from a larger variations at higher \proc counts, but almost all run still outperform the other runtimes.
     469\CFA suffers particularly from larger variations at higher \proc counts, but largely outperforms the other runtimes.
    471470Go still produces intriguing results in this case and even more intriguingly, the results have fairly low variation.
    472471
    473 One possible explanation for this difference is that since Go has very few available concurrent primitives, a channel was used instead of a semaphore.
    474 On paper a semaphore can be replaced by a channel and with zero-sized objects passed along equivalent performance could be expected.
    475 However, in practice there can be implementation difference between the two.
    476 This is especially true if the semaphore count can get somewhat high.
    477 Note that this replacement is also made in the cycle benchmark, however in that context it did not seem to have a notable impact.
    478 
    479 As second possible explanation is that Go may sometimes use the heap when allocating variables based on the result of escape analysis of the code.
    480 It is possible that variables that should be placed on the stack are placed on the heap.
    481 This could cause extra pointer chasing in the benchmark, heightening locality effects.
     472One possible explanation for Go's difference is that it has very few available concurrent primitives, so a channel is substituted for a semaphore.
     473On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed through the channel, equivalent performance could be expected.
     474However, in practice, there are implementation differences between the two, \eg if the semaphore count can get somewhat high, objects accumulate in the channel.
     475Note that this substitution is also made in the cycle benchmark;
     476however, in that context, it did not have a notable impact.
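
To make the substitution concrete (a hedged Go sketch, not the benchmark's actual code), a counting semaphore can be emulated with a buffered channel of empty structs: @V@ sends, @P@ receives, and the buffer capacity bounds how high the count can grow, which is precisely where the two primitives can diverge in cost.

package main

// counting semaphore emulated with a buffered channel of zero-sized objects
type semaphore chan struct{}

// V: increment the count; blocks only if the buffer (maximum count) is full
func (s semaphore) V() { s <- struct{}{} }

// P: decrement the count; blocks while the count is zero
func (s semaphore) P() { <-s }

func main() {
	sem := make(semaphore, 64) // capacity caps how many V's can accumulate
	sem.V()                    // bank a count, or unblock one waiter
	sem.P()                    // consume a count, blocking if none available
}

Even though the element is zero-sized, each @V@ still pushes a token into the channel's buffer under the channel lock, so a semaphore whose count grows large behaves differently from a bare atomic counter.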
     477
     478A second possible explanation is that Go may use the heap when allocating variables based on the result of escape analysis of the code.
     479It is possible for variables that could be placed on the stack to instead be placed on the heap.
     480This placement could cause extra pointer chasing in the benchmark, heightening locality effects.
    482481Depending on how the heap is structured, this could also lead to false sharing.
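
A small, hypothetical Go example of that effect (the names are mine): returning a pointer to a local forces escape analysis to move the object to the heap, and the compiler's decisions can be inspected with @go build -gcflags=-m@.

package main

type counter struct{ n int }

// the value never outlives the call: escape analysis keeps it on the stack
func onStack() int {
	c := counter{}
	c.n++
	return c.n
}

// a pointer to the local is returned, so the object must outlive the
// stack frame: escape analysis moves it to the heap
func onHeap() *counter {
	c := counter{}
	c.n++
	return &c
}

func main() {
	_ = onStack()
	_ = onHeap()
}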
    483 
    484 I did not further investigate what causes these unusual results.
    485 
    486 Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, like for Intel all runtime obtain overall similar throughput between the left and right column.
     482I did not investigate what causes these unusual results.
     483
     484Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, as for Intel, all runtimes obtain overall similar throughput between the left and right columns.
    487485\CFA, Libfibre and Tokio all have very close results.
    488486Go still suffers from poor scalability but is now unusual in a different way.
     
    490488Up to 32 \procs, after which the other runtimes manage to outscale Go.
    491489
    492 The objective of this benchmark is to demonstrate that unparking \ats from remote \procs do not cause too much contention on the local queues.
    493 Indeed, the fact most runtimes achieve some scaling between various \proc count demonstrate that migrations do not need to be serialized.
    494 Again these result demonstrate \CFA achieves satisfactory performance.
     490In conclusion, the objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
     491Indeed, the fact that most runtimes achieve some scaling across the various \proc counts demonstrates that migrations do not need to be serialized.
     492Again, these results demonstrate that \CFA achieves satisfactory performance with respect to the other runtimes.
    495493
    496494\section{Locality}
     495
     496As mentioned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.\footnote{
     497It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, it is likely to degrade performance.}
     498The locality experiment includes two variations of the churn benchmark, where a data array is added.
     499In both variations, before @V@ing the semaphore, each \at increments random cells inside the data array by calling a @work@ function.
     500In the noshare variation, the array is not passed on and each thread continuously accesses its private array.
     501In the share variation, the array is passed to another thread via the semaphore's shadow-queue (each blocking thread can save a word of user data in its blocking node), transferring ownership of the array to the woken thread.
     502Figure~\ref{fig:locality:code} shows pseudo code for this benchmark.
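
For illustration only (a Go sketch under my own naming, not the pseudo code of Figure~\ref{fig:locality:code}), the two variations differ solely in whether the array travels with the wakeup; here a buffered channel plays the role of the semaphore, and in the share variation the value sent on it carries the array, mimicking the word of user data stored in the blocking node.

package main

import "math/rand"

const cells = 4096

// work touches random cells of the array, pulling its cache lines in
func work(data []uint64) {
	for i := 0; i < 128; i++ {
		data[rand.Intn(len(data))]++
	}
}

// share: the array is handed off with the V, so the woken thread
// continues on data last written by the waker
func shareIter(sems []chan []uint64, data []uint64) []uint64 {
	work(data)
	s := sems[rand.Intn(len(sems))]
	s <- data  // V: unblock a waiter and pass ownership of the array
	return <-s // P: block, then adopt whichever array wakes this thread
}

// noshare: the wakeup carries no payload; each thread keeps its private array
func noshareIter(sems []chan struct{}, data []uint64) {
	work(data)
	s := sems[rand.Intn(len(sems))]
	s <- struct{}{} // V
	<-s             // P
}

func main() {
	// share variation: semaphores whose tokens are arrays
	share := []chan []uint64{make(chan []uint64, 8), make(chan []uint64, 8)}
	for _, s := range share {
		s <- make([]uint64, cells) // seed one token per semaphore
	}
	mine := make([]uint64, cells)
	for i := 0; i < 10; i++ {
		mine = shareIter(share, mine)
	}

	// noshare variation: plain semaphores, the private array never moves
	noshare := []chan struct{}{make(chan struct{}, 8), make(chan struct{}, 8)}
	for _, s := range noshare {
		s <- struct{}{}
	}
	private := make([]uint64, cells)
	for i := 0; i < 10; i++ {
		noshareIter(noshare, private)
	}
}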
     503
      504The objective here is to highlight the different decisions made by the runtimes when unparking.
      505Since each thread unparks a random semaphore, it is unlikely that a \at is unparked from the last \proc it ran on.
      506In the share variation, unparking the \at on the local \proc is an appropriate choice since the data was last modified on that \proc.
      507In the noshare variation, unparking the \at on a remote \proc is an appropriate choice.
     508\todo{PAB: I changed these sentences around.}
     509
     510The expectation for this benchmark is to see a performance inversion, where runtimes fare notably better in the variation which matches their unparking policy.
     511This decision should lead to \CFA, Go and Tokio achieving better performance in the share variation while libfibre achieves better performance in noshare.
      512Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre has the default policy of unparking \ats wherever they last ran.
    497513
    498514\begin{figure}
     
    538554\end{lrbox}
    539555
    540 \subfloat[Thread$_1$]{\label{f:CFibonacci}\usebox\myboxA}
     556\subfloat[Noshare]{\label{fig:locality:code:T1}\usebox\myboxA}
    541557\hspace{3pt}
    542558\vrule
    543559\hspace{3pt}
    544 \subfloat[Thread$_2$]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     560\subfloat[Share]{\label{fig:locality:code:T2}\usebox\myboxB}
    545561
    546562\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
     
    548564\end{figure}
    549565
    550 As mentioned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.
    551 \footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
    552 The locality experiment includes two variations of the churn benchmark, where an array of data is added.
    553 In both variations, before @V@ing the semaphore, each \at increment random cells inside the array.
    554 The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
    555 In the @noshare@ variation the array is not passed on and each thread continuously accesses its private array.
    556 
    557 The objective here is to highlight the different decision made by the runtime when unparking.
    558 Since each thread unparks a random semaphore, it means that it is unlikely that a \at will be unparked from the last \proc it ran on.
    559 In the @share@ version, this means that unparking the \at on the local \proc is appropriate since the data was last modified on that \proc.
    560 In the @noshare@ version, the unparking the \at on the remote \proc is the appropriate approach.
    561 
    562 The expectation for this benchmark is to see a performance inversion, where runtimes will fare notably better in the variation which matches their unparking policy.
    563 This should lead to \CFA, Go and Tokio achieving better performance in @share@ while libfibre achieves better performance in @noshare@.
    564 Indeed, \CFA, Go and Tokio have the default policy of unpark \ats on the local \proc, where as libfibre has the default policy of unparks \ats wherever they last ran.
    565 
    566566\subsection{Results}
     567
     568Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment on Intel and AMD, respectively.
      569In both cases, the graphs on the left column show the results for the share variation and the graphs on the right column show the results for the noshare variation.
     570Looking at the left column on Intel, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns} show the results for the share variation.
     571\CFA and Tokio slightly outperform libfibre, as expected, based on their \ats placement approach.
     572\CFA and Tokio both unpark locally and do not suffer cache misses on the transferred array.
     573Libfibre on the other hand unparks remotely, and as such the unparked \at is likely to miss on the shared data.
     574Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
     575Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
      576As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary, and climb again from 96 to 192 \procs.
    567577
    568578\begin{figure}
     
    597607        \label{fig:locality:jax}
    598608\end{figure}
     609
    599610\begin{figure}
    600611        \subfloat[][Throughput share]{
     
    629640\end{figure}
    630641
    631 Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment.
    632 In both cases, the graphs on the left column show the results for the @share@ variation and the graphs on the right column show the results for the @noshare@.
    633 Looking at the left column on Intel first, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns}, which shows the results for the @share@ variation.
    634 \CFA and Tokio slightly outperform libfibre, as expected based on their \ats placement approach.
    635 \CFA and Tokio both unpark locally and do not suffer cache misses on the transferred array.
    636 Libfibre on the other hand unparks remotely, and as such the unparked \at is likely to miss on the shared data.
    637 Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
    638 Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
    639 As for most previous results, all runtime suffer a performance hit after 48 \proc, which is the socket boundary.
    640 
    641 Looking next at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns}, which shows the results for the @noshare@ variation.
    642 The graph show the expected performance inversion where libfibre now outperforms \CFA and Tokio.
    643 Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array.
    644 The leaves the \at data structure and the remote queue as the only source of likely cache misses.
    645 Results show both are armotized fairly well in this case.
     642Looking at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns} show the results for the noshare variation.
     643The graphs show the expected performance inversion where libfibre now outperforms \CFA and Tokio.
     644Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array, which leaves the \at data structure and the remote queue as the only source of likely cache misses.
     645Results show both are amortized fairly well in this case.
    646646\CFA and Tokio both unpark locally and as a result suffer a marginal performance degradation from the cache miss on the array.
    647647
    648 Looking now at the results for the AMD architecture, Figure~\ref{fig:locality:nasus}, the results show a story that is overall similar to the results on the Intel.
     648The results for the AMD architecture, Figure~\ref{fig:locality:nasus}, are similar to the Intel results.
    649649Again overall performance is higher and slightly more variation is visible.
    650650Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
    651 This is expected from the AMD server, which has smaller and more narrow caches that magnify the costs of processing the array.
    652 Go still sees the same poor performance as on Intel.
     651This advantage is expected from the AMD server with its smaller and narrower caches that magnify the costs of processing the array.
     652Go still has the same poor performance as on Intel.
    653653
    654654Finally looking at the right column, Figures~\ref{fig:locality:nasus:noshare:ops} and \ref{fig:locality:nasus:noshare:ns}, like on Intel, the same performance inversion is present between libfibre and \CFA/Tokio.
    655 Go still sees the same poor performance.
    656 
    657 Overall, this experiment mostly demonstrates the two options available when unparking a \at.
     655Go still has the same poor performance.
     656
     657Overall, this benchmark mostly demonstrates the two options available when unparking a \at.
    658658Depending on the workload, either of these options can be the appropriate one.
    659 Since it is prohibitively difficult to detect which approach is appropriate, all runtime much choose one of the two and live with the consequences.
    660 
    661 Once again, this demonstrate that \CFA achieves equivalent performance to the other runtime, in this case matching the faster Tokio rather than Go which is trailing behind.
      659Since it is prohibitively difficult to dynamically detect which approach is appropriate, all runtimes must choose one of the two and live with the consequences.
     660
     661Once again, these experiments demonstrate that \CFA achieves equivalent performance to the other runtimes, in this case matching the faster Tokio rather than Go, which is trailing behind.
    662662
    663663\section{Transfer}
    664664The last benchmark is more of an experiment than a benchmark.
    665665It tests the behaviour of the schedulers for a misbehaved workload.
    666 In this workload, one of the \at is selected at random to be the leader.
     666In this workload, one \at is selected at random to be the leader.
    667667The leader then spins in a tight loop until it has observed that all other \ats have acknowledged its leadership.
    668668The leader \at then picks a new \at to be the next leader and the cycle repeats.
    669 The benchmark comes in two flavours for the non-leader \ats:
     669The benchmark comes in two variations for the non-leader \ats:
    670670once they acknowledge the leader, they either block on a semaphore or spin yielding.
    671 
    672 The experiment is designed to evaluate the short-term load-balancing of a scheduler.
    673 Indeed, schedulers where the runnable \ats are partitioned on the \procs may need to balance the \ats for this experiment to terminate.
    674 This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
    675 In the semaphore flavour, the number of runnable \ats eventually dwindles down to only the leader.
    676 This scenario is a simpler case to handle for schedulers since \procs eventually run out of work.
    677 In the yielding flavour, the number of runnable \ats stays constant.
    678 This scenario is a harder case to handle because corrective measures must be taken even when work is available.
    679 Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.
    680 
    681 In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
    682 In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by:
    683 $ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueueing and dequeuing a \at and $NP$ is the number of \procs.
    684 However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay will increase.
    685 The semaphore version is an approximation of the strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
    686 The benchmark effectively provides the fairness guarantee in this case.
    687 In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness will be measurable.
    688 
    689 While this is a fairly artificial scenario, it requires only a few simple pieces.
    690 The yielding version of this simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
    691 However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.
     671Figure~\ref{fig:transfer:code} shows pseudo code for this benchmark.
    692672
    693673\begin{figure}
     
    732712\end{figure}
    733713
     714The experiment is designed to evaluate the short-term load-balancing of a scheduler.
     715Indeed, schedulers where the runnable \ats are partitioned on the \procs may need to balance the \ats for this experiment to terminate.
     716This problem occurs because the spinning \at is effectively preventing the \proc from running any other \at.
     717In the semaphore variation, the number of runnable \ats eventually dwindles down to only the leader.
     718This scenario is a simpler case to handle for schedulers since \procs eventually run out of work.
     719In the yielding variation, the number of runnable \ats stays constant.
     720This scenario is a harder case to handle because corrective measures must be taken even when work is available.
     721Note, runtimes with preemption circumvent this problem by forcing the spinner to yield.
     722
     723In both variations, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
      724In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by $(CSL + SL) / (NP - 1)$,
     725where $CSL$ is the context-switch latency, $SL$ is the cost for enqueueing and dequeuing a \at, and $NP$ is the number of \procs.
     726However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay increases.
     727The semaphore version is an approximation of strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
     728The benchmark effectively provides the fairness guarantee in this case.
     729In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness is measurable.
     730
      731While this is an artificial scenario, in real life it requires only a few simple pieces.
     732The yielding version simply creates a scenario where a \at runs uninterrupted in a saturated system and the starvation has an easily measured impact.
     733Hence, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.
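
To make the yielding variation concrete, the following is a rough Go sketch (my own structure and constants, not the thesis's pseudo code of Figure~\ref{fig:transfer:code}): the leader busy-waits until every other thread has acknowledged it, then passes leadership on, while followers acknowledge once per leader and otherwise spin yielding.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

const (
	nthreads = 8    // more threads than processors, so the system is saturated
	rounds   = 1000 // leadership transfers measured
)

func main() {
	runtime.GOMAXPROCS(4)
	var leader int64 // round number; the current leader is leader % nthreads
	var acked int64  // followers that have acknowledged the current leader

	var wg sync.WaitGroup
	start := time.Now()
	for id := int64(0); id < nthreads; id++ {
		wg.Add(1)
		go func(id int64) {
			defer wg.Done()
			lastAcked := int64(-1)
			for {
				l := atomic.LoadInt64(&leader)
				switch {
				case l >= rounds: // experiment over
					return
				case l%nthreads == id: // this thread is the leader
					for atomic.LoadInt64(&acked) < nthreads-1 {
						// tight spin: monopolizes this processor until preempted
					}
					atomic.StoreInt64(&acked, 0)
					atomic.AddInt64(&leader, 1) // hand leadership to the next thread
				default: // follower: acknowledge once per leader, then spin yielding
					if l != lastAcked {
						lastAcked = l
						atomic.AddInt64(&acked, 1)
					}
					runtime.Gosched()
				}
			}
		}(id)
	}
	wg.Wait()
	fmt.Println("average leadership transfer:", time.Since(start)/rounds)
}

Under a scheduler with per-processor queues and no stealing or preemption, the spinning leader would keep its processor and the followers queued behind it would never acknowledge; the sketch terminates under Go only because the runtime preempts the spinner.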
     734
    734735\subsection{Results}
    735 \begin{figure}
     736
     737\begin{table}
     738\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
     739DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
     740\label{fig:transfer:res}
     741\setlength{\extrarowheight}{2pt}
     742\setlength{\tabcolsep}{5pt}
    736743\begin{centering}
    737 \begin{tabular}{r | c c c c | c c c c }
    738 Machine   &                     \multicolumn{4}{c |}{Intel}                &          \multicolumn{4}{c}{AMD}                    \\
    739 Variation & \multicolumn{2}{c}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c}{Park} & \multicolumn{2}{c}{Yield} \\
     744\begin{tabular}{r | c | c | c | c | c | c | c | c}
     745Machine   &                     \multicolumn{4}{c |}{Intel}                &          \multicolumn{4}{c}{AMD}             \\
     746\cline{2-9}
     747Variation & \multicolumn{2}{c|}{Park} & \multicolumn{2}{c |}{Yield} & \multicolumn{2}{c|}{Park} & \multicolumn{2}{c}{Yield} \\
     748\cline{2-9}
    740749\procs    &      2      &      192   &      2      &      192      &      2      &      256   &      2      &      256    \\
    741750\hline
     
    746755\end{tabular}
    747756\end{centering}
    748 \caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
    749 DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
    750 \label{fig:transfer:res}
    751 \end{figure}
    752 
    753 Figure~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs, where each experiment runs 100 \at per \proc.
     757\end{table}
     758
      759Table~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and all \procs on the computer, where each experiment runs 100 \ats per \proc.
    754760Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
    755 As such, data points that are the on the same order of magnitude as each other should be basically considered equal.
    756 The takeaway of this experiment is the presence of very large differences.
     761As such, data points within the same order of magnitude are basically considered equal.
     762That is, the takeaway of this experiment is the presence of very large differences.
    757763The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
    758764The yielding variation is denoted ``Yield''.
    759 The experiment was only run for the extremes of the number of \procs since the scaling is not the focus of this experiment.
    760 
    761 The first two columns show the results for the the semaphore variation on Intel.
    762 While there are some differences in latencies, \CFA is consistenly the fastest and Tokio the slowest, all runtime achieve results that are fairly close.
    763 Again, this experiment is meant to highlight major differences so latencies within $10\times$ of each other are considered close to each other.
    764 
    765 Looking at the next two columns, the results for the yield variation in Intel, the story is very different.
    766 \CFA achieves better latencies, presumably due to the lack of synchronization on the semaphore.
    767 Neither Libfibre or Tokio complete the experiment.
    768 Both runtime use classical work-stealing scheduling and therefore since non of the work-queues are ever emptied no load balancing occurs.
      765The experiment is only run for the extremes of the \proc count, since scaling is not the focus of this experiment.
     766
     767The first two columns show the results for the semaphore variation on Intel.
      768While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve results that are fairly close.
      769Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered equal.
     770
     771Looking at the next two columns, the results for the yield variation on Intel, the story is very different.
     772\CFA achieves better latencies, presumably due to no synchronization with the yield.
     773\todo{PAB: what about \CFA preemption? How does that come into play for your scheduler?}
    769774Go does complete the experiment, but with drastically higher latency:
    770775latency at 2 \procs is $350\times$ higher than \CFA and $70\times$ higher at 192 \procs.
    771 This is because Go also has a classic work-stealing scheduler, but it adds preemption which interrupts the spinning leader after a period.
      776This difference is because Go has a classic work-stealing scheduler, but it adds coarse-grain preemption,\footnote{
      777Preemption is done at the function prolog when the goroutine's stack is increasing;
      778whereas \CFA uses fine-grain preemption between any two instructions.}
      779which interrupts the spinning leader after a period.
      780Neither Libfibre nor Tokio completes the experiment.
      781Both runtimes also use classical work-stealing scheduling without preemption, and therefore, none of the work queues are ever emptied, so no load balancing occurs.
    772782
    773783Looking now at the AMD architecture, the results show effectively the same story.
    774784The first two columns show all runtimes obtaining results well within $10\times$ of each other.
    775 The next two columns again show \CFA producing low latencies while Libfibre and Tokio do not complete the experiment.
    776 Go still has notably higher latency but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
    777 
    778 This experiments clearly demonstrate that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
    779 The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
    780 Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running leader.
    781 Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtime achieve decent latency.
     785The next two columns again show \CFA producing low latencies, while Go still has notably higher latency but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
      787Neither Libfibre nor Tokio completes the experiment.
     787
      788This experiment clearly demonstrates \CFA achieves significantly better fairness.
     789The semaphore variation serves as a control, where all runtimes are expected to transfer leadership fairly quickly.
     790Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
     791Table~\ref{fig:transfer:res} shows that while Go and Tokio are slower using the semaphore, all runtimes achieve decent latency.
     792
    782793However, the yielding variation shows an entirely different picture.
    783 Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues will never steal from other \procs.
    784 The result is that the experiment simply does not complete for these runtime.
    785 Without \procs stealing from the \proc running the leader, the experiment will simply never terminate.
     794Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues never steal from other \procs.
     795The result is that the experiment simply does not complete for these runtimes.
     796Without \procs stealing from the \proc running the leader, the experiment cannot terminate.
    786797Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
    787 However, since preemption is fairly costly it achieves significantly worst performance.
      798However, since preemption is fairly infrequent, it achieves significantly worse performance.
    788799In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
    789 Interestingly \CFA achieves better delays in the yielding version than the semaphore version, however, that is likely due to fairness being equivalent but removing the cost of the semaphores and idle-sleep.
      800Interestingly, \CFA achieves better delays in the yielding version than the semaphore version; however, that is likely due to fairness being equivalent but removing the cost of the semaphores and idle sleep.