Changeset 31b9d3c


Timestamp:
Aug 29, 2022, 4:32:35 PM (20 months ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
a8dd247, e173d3c
Parents:
507d48d
Message:

Updated cycle, yield and churn to have a consistent pattern to the results discussion

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r507d48d r31b9d3c  
    4242Each experiment is run 15 times varying the number of processors depending on the two different computers.
    4343All experiments gather throughput data and secondary data for scalability or latency.
    44 The data is graphed using a solid and two dashed lines representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
     44The data is graphed using a solid, a dashed and a dotted line, representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
    4545An alternative display is to use error bars with min/max as the bottom/top for the bar.
    4646However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
     
    5151In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
    5252
    53 The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
    54 The right column shows results for 1 cycle per \proc, where the ready queues are expected to be near empty most of the time.
    55 The distinction between 100 and 1 cycles is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
     53The left column shows results for hundreds of \ats per \proc, enough to always keep every \proc busy.
     54The right column shows results for very few \ats per \proc, where the ready queues are expected to be near empty most of the time.
     55The distinction between many and few \ats is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
    5656
    5757\section{Cycle}
     
    101101\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
    102102\label{fig:cycle:code}
    103 %\end{figure}
    104 
    105 \bigskip
     103%\end{figure}
     104
    106105
    107106%\begin{figure}
     
    131130                \label{fig:cycle:jax:low:ns}
    132131        }
    133         \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are maximums while the solid line is the medium.}
     132        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
     133        For throughput, higher is better, for scalability, lower is better.
     134        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    134135        \label{fig:cycle:jax}
    135136\end{figure}
     
    161162                \label{fig:cycle:nasus:low:ns}
    162163        }
    163         \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     164        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
     165        For throughput, higher is better, for scalability, lower is better.
     166        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    164167        \label{fig:cycle:nasus}
    165168\end{figure}
     
    167170\subsection{Results}
    168171
    169 For the Intel architecture, Figure~\ref{fig:cycle:jax}:
    170 \begin{itemize}
    171 \item
    172 For 100 cycles per \proc (first column), \CFA, Go and Tokio all obtain effectively the same throughput performance.
     172Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
     173Looking first at the left column on Intel, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
     174\CFA, Go and Tokio all obtain effectively the same throughput performance.
    173175Libfibre is slightly behind in this case but still scales decently.
    174 As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement (flatting of the line) for all runtimes.
     176As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement for all runtimes, which can be seen as a flattening of the line.
     177This effect even causes a decrease in throughput in libfibre's case.
    175178As expected, this pattern repeats again between \proc count 72 and 96.
    176 \item
    177 For 1 cycle per \proc, \CFA and Tokio obtain very similar results overall, but Tokio shows more variations in the results.
    178 Go achieves slightly better performance.
     179
     180Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for few \ats, in this case 1 cycle of 5 \ats for each \proc.
     181\CFA and Tokio obtain very similar results overall, but Tokio shows more variations in the results.
     182Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
     183This decrease in performance is likely due to the additional overhead of the idle-sleep mechanism.
     184This can either be the result of \procs actually running out of work, or simply additional overhead from tracking whether or not there is work available.
     185Indeed, unlike in the left column, the ready-queue is likely to be transiently empty, which triggers additional synchronization steps.
    179186Interestingly, libfibre achieves better performance with 1 cycle.
    180 \end{itemize}
    181 
    182 For the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results show the same story as on the Intel, with close to double the performance overall but with slightly increased variation.
     187
     188Looking now at the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results show a story overall similar to Intel, with close to double the performance but slightly more variation and some differences in the details.
     189Note that the maxima of the Y-axes on Intel and AMD differ significantly.
     190Looking at the left column first, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike on Intel, all 4 runtimes achieve very similar throughput and scalability on AMD.
     191However, as the number of \procs grows, the results on AMD show notably more variability than on Intel.
    183192The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
    184 \begin{itemize}
    185 \item
    186 For 100 cycles per \proc, unlike Intel, all 4 runtimes achieve very similar throughput and scalability.
    187 \item
    188 For 1 cycle per \proc, unlike on Intel, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
    189 Again, the same performance increase for libfibre is visible.
    190 \end{itemize}
     193Looking next at the right column, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
     194This is different from Intel, where Tokio behaved like \CFA rather than like Go.
     195Again, the same performance increase for libfibre is visible when running fewer \ats.
    191196Note, I did not investigate the libfibre performance boost for 1 cycle in this experiment.
    192197
    193198The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
    194 Clearly, the pathological case with 1 \at per \proc, can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
     199Clearly, the pathological case with 1 cycle per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
    195200For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
    196 For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers, except for very unusual workloads.
     201For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers.
    197202
    198203\section{Yield}
     
    246251                \label{fig:yield:jax:low:ns}
    247252        }
    248         \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, using 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     253        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
     254        For throughput, higher is better, for scalability, lower is better.
     255        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    249256        \label{fig:yield:jax}
    250257\end{figure}
     
    252259\subsection{Results}
    253260
    254 Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the same throughput graphs as @cycle@ on Intel and AMD, respectively.
    255 Note, the Y-axis on the yield graph for Intel is twice as large as the Intel cycle-graph.
    256 A visual glance between the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
    257 
    258 For the Intel architecture, Figure~\ref{fig:yield:jax}:
    259 \begin{itemize}
    260 \item
     261Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the results for the yield experiment.
     262Looking first at the left column on Intel, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for many \ats, in this case 100 \ats for each \proc.
     263Note, the Y-axis of this graph is twice as large as that of the Intel cycle-graph.
     264A visual comparison of the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
    261265\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
    262 Hence, the @yield@ throughput and scalability graphs for both 100 and 1 cycles/tasks per processor have similar shapes to the corresponding @cycle@ graphs.
     266Hence, the @yield@ throughput and scalability graphs have similar shapes to the corresponding @cycle@ graphs.
    263267The only difference is slightly better performance for @yield@ because of less synchronization.
    264 As for @cycle@, the cost of idle sleep also comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
    265 \item
    266 libfibre has special handling for @yield@ using the fact that the number of ready fibres does not change, and therefore, by-passing the idle-sleep mechanism entirely.
    267 Additionally, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
    268 Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance for 100 cycles/tasks and an 8 times increase for 1 cycle/task.
    269 \item
     268Libfibre has special handling for @yield@: it uses the fact that the number of ready fibres does not change and therefore bypasses the idle-sleep mechanism entirely.
     269Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a four-fold increase in performance in the left column.
    270270Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority.
    271271The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
    272 Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@ for both 100 and 1 cycles/tasks.
    273 \item
     272Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@.
    274273Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
    275274(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
    276 \end{itemize}
     275Note that since there is no communication among \ats, locality problems are much less likely than for the cycle benchmark.
     276This lack of communication is probably why the plateaus due to topology are not present.
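
To make the differing @yield@ handling discussed above concrete, the following is a minimal Go sketch of a yield-style workload: one goroutine per core repeatedly calls @runtime.Gosched()@ and counts iterations over a fixed window.
This is only an illustrative approximation, not the thesis's benchmark code; the worker count, the 5-second window and the counter handling are assumptions.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	procs := runtime.GOMAXPROCS(0) // one worker goroutine per available core
	var ops int64                  // total completed yields across all workers
	stop := make(chan struct{})
	var wg sync.WaitGroup

	for i := 0; i < procs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-stop:
					return
				default:
					runtime.Gosched()        // yield back to the Go scheduler
					atomic.AddInt64(&ops, 1) // count the completed yield
				}
			}
		}()
	}

	time.Sleep(5 * time.Second) // fixed measurement window (assumed)
	close(stop)
	wg.Wait()
	fmt.Printf("%d yields in 5s\n", atomic.LoadInt64(&ops))
}

In Go, each @Gosched()@ call is what places the yielding goroutine on the global run queue mentioned above, which is why this seemingly trivial loop exposes the contention visible in the results.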
     277
     278Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for few \ats, in this case 1 \at for each \proc.
     279As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
     280This is to be expected since fewer \ats means \procs are more likely to run out of work.
     281On the other hand, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
     282This results in libfibre outperforming other runtimes even more, achieving 8 times more throughput than for @cycle@.
     283Finally, the Go and Tokio performance collapse is still the same with fewer \ats.
     284The only exception is Tokio running on 24 \procs, deepening the mystery of its yielding mechanism further.
    277285
    278286\begin{figure}
     
    302310                \label{fig:yield:nasus:low:ns}
    303311        }
    304         \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, using 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     312        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
     313        For throughput, higher is better, for scalability, lower is better.
     314        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    305315        \label{fig:yield:nasus}
    306316\end{figure}
    307317
    308 For the AMD architecture, Figure~\ref{fig:yield:nasus}, the results show the same story as on the Intel, with slightly increased variations.
    309 Also, some transition points on the X-axis differ because of the architectures, like at 16 versus 24 processors.
     318Looking now at the AMD architecture, Figure~\ref{fig:yield:nasus}, the results again show a story overall similar to Intel, with increased variation and some differences in the details.
     319Note that the maxima of the Y-axes on Intel and AMD differ less for @yield@ than for @cycle@.
     320Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
     321Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
     322This suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine is not, once hyperthreading is needed.
     323Go and Tokio still display the same performance collapse as on Intel.
     324Looking next at the right column, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtimes effectively behave the same as they did on the Intel machine.
     325At high \at counts, the only difference was libfibre's scaling, and this difference disappears in the right column.
     326This suggests that whatever communication bottleneck libfibre encountered on the left is completely circumvented on the right.
    310327
    311328It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
     
    321338With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
    322339This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
    323 Hence, this benchmark has performance dominated by the cache traffic as \proc are constantly accessing the each other's data.
     340Hence, this benchmark has performance dominated by the cache traffic as \procs are constantly accessing each other's data.
    324341In either case, this benchmark aims to measure how well a scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.
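
To make this pattern concrete, below is a hedged Go sketch of a churn-style workload: workers repeatedly post to and then wait on a randomly chosen ``semaphore'' (a buffered channel here), so a blocked worker is routinely unblocked from a different core.
This only approximates the behaviour described in the text; the thesis's actual pseudo-code, worker counts and timing are not reproduced, and every name below is invented.

package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	workers := runtime.GOMAXPROCS(0) * 100 // many tasks per processor, as in the left column
	spots := make([]chan struct{}, workers)
	for i := range spots {
		spots[i] = make(chan struct{}, workers) // capacity bounds the outstanding tokens
	}

	var ops int64
	stop := make(chan struct{})
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			rng := rand.New(rand.NewSource(seed))
			for {
				r := rng.Intn(len(spots))
				select {
				case spots[r] <- struct{}{}: // "V": deposit a token, waking any waiter
				case <-stop:
					return
				}
				select {
				case <-spots[r]: // "P": may block until another worker posts here
				case <-stop:
					return
				}
				atomic.AddInt64(&ops, 1)
			}
		}(int64(w))
	}

	time.Sleep(5 * time.Second)
	close(stop)
	wg.Wait()
	fmt.Printf("%d churn operations in 5s\n", atomic.LoadInt64(&ops))
}

Because the poster and the eventual waiter typically run on different cores, every iteration drags the token and the channel's internal state across caches, which is the cache traffic the preceding paragraph blames for the poor scaling.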
    325342
     
    364381        }
    365382
    366         \subfloat[][Latency, 100 \ats per \proc]{
     383        \subfloat[][Scalability, 100 \ats per \proc]{
    367384                \resizebox{0.5\linewidth}{!}{
    368385                        \input{result.churn.jax.ns.pstex_t}
     
    370387                \label{fig:churn:jax:ns}
    371388        }
    372         \subfloat[][Latency, 2 \ats per \proc]{
     389        \subfloat[][Scalability, 2 \ats per \proc]{
    373390                \resizebox{0.5\linewidth}{!}{
    374391                        \input{result.churn.low.jax.ns.pstex_t}
     
    376393                \label{fig:churn:jax:low:ns}
    377394        }
    378         \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn on the benchmark on the Intel machine. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     395\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn benchmark on the Intel machine.
     396        For throughput, higher is better, for scalability, lower is better.
     397        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    379398        \label{fig:churn:jax}
    380399\end{figure}
     
    382401\subsection{Results}
    383402
    384 Figures~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the throughput on Intel and AMD respectively.
    385 
    386 The performance cost of crossing the cache boundaries is still visible at the same \proc count.
    387 
    388 Scalability is notably worst than the previous benchmarks since there is inherently more communication between processors.
    389 Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
     403Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the results for the churn experiment.
     404Looking first at the left column on Intel, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which show the results for many \ats, in this case 100 \ats for each \proc, all runtimes obtain fairly similar throughput for most \proc counts.
     405\CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
     406As expected, it scales decently up to 48 \procs and then basically plateaus.
     407Tokio achieves very similar performance to \CFA until 48 \procs, after which it takes a significant hit but does keep scaling somewhat.
     408Libfibre obtains effectively the same results as Tokio with slightly less scaling, \ie the scaling curve is the same but with slightly higher values.
     409Finally, Go gets the most peculiar results, scaling worse than the other runtimes until 48 \procs.
     410At 72 \procs, the results of the Go runtime vary significantly, sometimes scaling, sometimes plateauing.
     411However, beyond this point Go keeps this level of variation but does not scale in any of the runs.
     412
     413Throughput and scalability are notably worse for all runtimes than in the previous benchmarks, since there is inherently more communication between processors.
     414Indeed, none of the runtimes reaches 40 million operations per second, while in the cycle benchmark all but libfibre reached 400 million operations per second.
     415Figures~\ref{fig:churn:jax:ns} and \ref{fig:churn:jax:low:ns} show that for all \proc counts, all runtimes produce poor scaling.
     416However, once the number of \glspl{hthrd} goes beyond a single socket, at 48 \procs, scaling goes from bad to worse and performance completely ceases to improve.
     417At this point, the benchmark is dominated by inter-socket communication costs for all runtimes.
     418
    390419An interesting aspect to note here is that the runtimes differ in how they handle this situation.
    391420Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready-queue of the local \proc or to the ready-queue of the remote \proc, which previously ran the \at.
     
    393422In this particular benchmark, the inherent chaos of the benchmark in addition to small memory footprint means neither approach wins over the other.
    394423
     424Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which show the results for few \ats, in this case 1 \at for each \proc, many of the differences between the runtimes disappear.
     425\CFA outperforms the other runtimes by a minuscule margin.
     426Libfibre follows very closely behind with basically the same performance and scaling.
     427Tokio maintains effectively the same curve shapes as it did with many threads, but it incurs extra costs for all \proc counts.
     428As a result, it is slightly outperformed by \CFA and libfibre.
     429While Go maintains overall similar results to the others, it again encounters significant variation at high \proc counts, inexplicably resulting in super-linear scaling for some runs, \ie the scalability curve displays a negative slope.
     431
     432Interestingly, unlike the cycle benchmark, running with fewer \ats does not produce drastically different results.
     433In fact, the overall throughput stays almost exactly the same in the left and right columns.
     434
    395435\begin{figure}
    396436        \subfloat[][Throughput, 100 \ats per \proc]{
     
    407447        }
    408448
    409         \subfloat[][Latency, 100 \ats per \proc]{
     449        \subfloat[][Scalability, 100 \ats per \proc]{
    410450                \resizebox{0.5\linewidth}{!}{
    411451                        \input{result.churn.nasus.ns.pstex_t}
     
    413453                \label{fig:churn:nasus:ns}
    414454        }
    415         \subfloat[][Latency, 2 \ats per \proc]{
     455        \subfloat[][Scalability, 2 \ats per \proc]{
    416456                \resizebox{0.5\linewidth}{!}{
    417457                        \input{result.churn.low.nasus.ns.pstex_t}
     
    419459                \label{fig:churn:nasus:low:ns}
    420460        }
    421         \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn on the benchmark on the AMD machine.
    422         For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     461\caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn benchmark on the AMD machine.
     462        For throughput, higher is better, for scalability, lower is better.
     463        Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
    423464        \label{fig:churn:nasus}
    424465\end{figure}
    425466
    426 Like for the cycle benchmark, here all runtimes achieve fairly similar performance.
    427 Performance improves as long as all \procs fit on a single socket.
    428 Beyond that performance starts to suffer from increased caching costs.
    429 
    430 Indeed on Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 1 and 100 \ats per \proc, \CFA, libfibre, Go and Tokio achieve effectively equivalent performance for most \proc count.
    431 
    432 However, Figure~\ref{fig:churn:nasus} again shows a somewhat different story on AMD.
    433 While \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc count, Go starts with better scaling at very low \proc counts but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
    434 This performance difference is visible at both high and low \at counts.
     467
     468Looking now at the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
     469Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, libfibre and Tokio all produce decent scalability.
     470\CFA suffers particularly from larger variations at higher \proc counts, but almost all runs still outperform the other runtimes.
     471Go still produces intriguing results in this case and, even more intriguingly, the results have fairly low variation.
    435472
    436473One possible explanation for this difference is that since Go has very few available concurrent primitives, a channel was used instead of a semaphore.
     
    445482Depending on how the heap is structured, this could also lead to false sharing.
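
For readers unfamiliar with the Go idiom alluded to above, a semaphore is typically emulated with a buffered channel, since Go's standard library exposes no semaphore type directly; a minimal sketch follows.
How the benchmark's Go port actually implements it is not shown here, so the type and methods below are assumptions.

package main

// Semaphore emulated with a buffered channel: each buffered element is a token.
type Semaphore chan struct{}

func NewSemaphore(tokens, capacity int) Semaphore {
	s := make(Semaphore, capacity)
	for i := 0; i < tokens; i++ {
		s <- struct{}{} // pre-load the initial count
	}
	return s
}

// V (post): deposit a token, waking one waiter if any; blocks only if the capacity is exceeded.
func (s Semaphore) V() { s <- struct{}{} }

// P (wait): remove a token, blocking until one is available.
func (s Semaphore) P() { <-s }

func main() {
	s := NewSemaphore(0, 8) // start empty, allow up to 8 queued tokens
	go s.V()                // another goroutine posts ...
	s.P()                   // ... and this one blocks until that token arrives
}

Unlike a futex-based semaphore, each such channel is a separate heap allocation carrying its own lock and buffer, so an array of them can leave unrelated channels sharing cache lines, which is one way the false sharing mentioned above could arise.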
    446483
     484I did not further investigate what causes these unusual results.
     485
     486Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, as on Intel, all runtimes obtain overall similar throughput between the left and right columns.
     487\CFA, libfibre and Tokio all have very close results.
     488Go still suffers from poor scalability but is now unusual in a different way.
     489While it obtains effectively constant performance regardless of \proc count, this ``sequential'' performance is higher than that of the other runtimes for low \proc counts, up to 32 \procs, after which the other runtimes manage to outscale Go.
     491
    447492The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
    448 Indeed, the fact all runtimes achieve some scaling at lower \proc count demonstrate that migrations do not need to be serialized.
     493Indeed, the fact that most runtimes achieve some scaling across various \proc counts demonstrates that migrations do not need to be serialized.
    449494Again, these results demonstrate that \CFA achieves satisfactory performance.
    450495