Changeset e378c730 for doc/theses


Timestamp:
Aug 13, 2022, 4:54:32 PM (2 years ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
2ae6a99
Parents:
111d993
Message:

Fleshed out some more the evaluation sections, still waiting on some new data for churn nasus, locality nasus and memcached update

Location:
doc/theses/thierry_delisle_PhD/thesis/text
Files:
2 edited

Legend:

Unmodified
Added
Removed
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

    r111d993 re378c730  
    7272\begin{figure}
    7373        \centering
    74         \input{result.memcd.updt.qps.pstex_t}
    75         \caption[Churn Benchmark : Throughput on Intel]{Churn Benchmark : Throughput on Intel\smallskip\newline Description}
    76         \label{fig:memcd:updt:qps}
    77 \end{figure}
    78 
    79 \begin{figure}
    80         \centering
    81         \input{result.memcd.updt.lat.pstex_t}
    82         \caption[Churn Benchmark : Throughput on Intel]{Churn Benchmark : Throughput on Intel\smallskip\newline Description}
    83         \label{fig:memcd:updt:lat}
     74        \subfloat[][Throughput]{
     75                \input{result.memcd.forall.qps.pstex_t}
     76        }
     77
     78        \subfloat[][Latency]{
     79                \input{result.memcd.forall.lat.pstex_t}
     80        }
     81        \caption[forall Latency results at different update rates]{forall Latency results at different update rates\smallskip\newline Description}
     82        \label{fig:memcd:updt:forall}
     83\end{figure}
     84
     85\begin{figure}
     86        \centering
     87        \subfloat[][Throughput]{
     88                \input{result.memcd.fibre.qps.pstex_t}
     89        }
     90
     91        \subfloat[][Latency]{
     92                \input{result.memcd.fibre.lat.pstex_t}
     93        }
     94        \caption[fibre Latency results at different update rates]{fibre Latency results at different update rates\smallskip\newline Description}
     95        \label{fig:memcd:updt:fibre}
     96\end{figure}
     97
     98\begin{figure}
     99        \centering
     100        \subfloat[][Throughput]{
     101                \input{result.memcd.vanilla.qps.pstex_t}
     102        }
     103
     104        \subfloat[][Latency]{
     105                \input{result.memcd.vanilla.lat.pstex_t}
     106        }
     107        \caption[vanilla Latency results at different update rates]{vanilla Latency results at different update rates\smallskip\newline Description}
     108        \label{fig:memcd:updt:vanilla}
    84109\end{figure}
    85110
     
    89114There are two aspects of the \io subsystem the memcached experiment does not exercise: accepting new connections and interacting with disks.
    90115On the other hand, static webservers, servers that offer static webpages, do stress disk \io since they serve files from disk\footnote{Dynamic webservers, which construct pages as they are sent, are not as interesting since the construction of the pages does not exercise the runtime in a meaningfully different way.}.
    91 The static webserver experiments will compare NGINX with a custom webserver developped for this experiment.
     116The static webserver experiments will compare NGINX~\cit{nginx} with a custom webserver developed for this experiment.
    92117
    93118\subsection{\CFA webserver}
     
    98123
    99124Normally, webservers use @sendfile@\cite{MAN:sendfile} to send files over the socket.
    100 @io_uring@ does not support @sendfile@, it supports @splice@\cite{splice} instead, which is strictly more powerful.
     125@io_uring@ does not support @sendfile@; instead, it supports @splice@\cite{MAN:splice}, which is strictly more powerful.
    101126However, because of how Linux implements file \io (see Subsection~\ref{ononblock}), @io_uring@'s implementation must delegate calls to @splice@ to worker threads inside the kernel.
    102127As of Linux 5.13, @io_uring@ caps the number of these worker threads to @RLIMIT_NPROC@ and therefore, when tens of thousands of splice requests are made, it can create tens of thousands of \glspl{kthrd}.
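
To make the @sendfile@-to-@splice@ substitution concrete, the following is a minimal sketch using liburing of how one file-to-socket transfer can be queued; the helper name is hypothetical and error handling is elided, so this illustrates the mechanism rather than the webserver's actual code.

	#include <liburing.h>
	#include <unistd.h>

	// Sketch: queue one file-to-socket transfer as two linked splice operations
	// (file -> pipe, then pipe -> socket), since splice requires one end to be a pipe.
	// Transfers larger than the pipe capacity must be repeated in chunks.
	static int queue_file_to_socket( struct io_uring * ring, int file_fd, int sock_fd, unsigned nbytes ) {
		int pipefd[2];
		if ( pipe( pipefd ) < 0 ) return -1;

		struct io_uring_sqe * sqe = io_uring_get_sqe( ring );
		io_uring_prep_splice( sqe, file_fd, 0, pipefd[1], -1, nbytes, 0 );
		sqe->flags |= IOSQE_IO_LINK;       // start the second splice only after the first completes

		sqe = io_uring_get_sqe( ring );
		io_uring_prep_splice( sqe, pipefd[0], -1, sock_fd, -1, nbytes, 0 );

		return io_uring_submit( ring );    // each request may be delegated to a kernel worker thread
	}

Each such submission can end up on one of the kernel worker threads described above, which is why tens of thousands of outstanding @splice@ requests can translate into tens of thousands of \glspl{kthrd}.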
     
    140165
    141166\subsection{Throughput}
     167To measure the throughput of both webservers, each server is loaded with over 30,000 files totalling over 4.5 gigabytes.
     168Each client runs httperf~\cit{httperf}, which establishes a connection, makes an HTTP request for one or more files, closes the connection, and repeats the process.
     169The connections and requests are made according to a Zipfian distribution~\cite{zipf}.
     170Throughput is measured by aggregating the httperf results from all the clients.
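
As a rough illustration of what a Zipfian request pattern means for the fileset (illustrative only; the exponent, file count, and httperf's actual workload generation differ), a client can pick file indices by inverting the Zipf CDF:

	#include <stdlib.h>
	#include <math.h>

	// Illustrative only: pick a 1-based file index from a Zipf(s) distribution over
	// nfiles files by walking the CDF; popular (low-index) files are requested most often.
	static int zipf_sample( int nfiles, double s, unsigned * seed ) {
		double norm = 0.0;
		for ( int i = 1; i <= nfiles; i++ ) norm += 1.0 / pow( i, s );

		double u = ( (double)rand_r( seed ) / RAND_MAX ) * norm;
		double sum = 0.0;
		for ( int i = 1; i <= nfiles; i++ ) {
			sum += 1.0 / pow( i, s );
			if ( sum >= u ) return i;
		}
		return nfiles;
	}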
    142171\begin{figure}
    143172        \subfloat[][Throughput]{
     
    153182        \label{fig:swbsrv}
    154183\end{figure}
    155 Figure~\ref{fig:swbsrv} shows the results comparing \CFA to nginx in terms of throughput.
    156 It demonstrate that the \CFA webserver described above is able to match the performance of nginx up-to and beyond the saturation point of the machine.
    157 Furthermore, Figure~\ref{fig:swbsrv:err} shows the rate of errors, a gross approximation of tail latency, where \CFA achives notably fewet errors once the machine reaches saturation.
     184Figure~\ref{fig:swbsrv} shows the results comparing \CFA to NGINX in terms of throughput.
     185These results are fairly straightforward.
     186Both servers achieve the same throughput until around 57,500 requests per second.
     187Since the clients are requesting the same files, matching throughput is expected as long as both servers are able to serve the desired rate.
     188Once the saturation point is reached, both servers remain very close.
     189NGINX achieves slightly better throughput.
     190However, Figure~\ref{fig:swbsrv:err} shows the rate of errors, a gross approximation of tail latency, where \CFA achieves notably fewer errors once the machine reaches saturation.
     191This suggests that \CFA is slightly fairer and that NGINX may slightly sacrifice some fairness for improved throughput.
     192Overall, the \CFA webserver described above is able to match the performance of NGINX up to and beyond the saturation point of the machine.
     193
     194\subsection{Disk Operations}
     195The throughput experiments used a server with 25~GB of memory, which is sufficient to hold the entire fileset in addition to all the code and data needed to run the webserver and the rest of the machine.
     196Previous work like \cit{Cite Ashif's stuff} demonstrates that an interesting follow-up experiment is to rerun the same throughput experiment while allowing significantly less memory on the machine.
     197If the machine is constrained enough, the OS is forced to evict files from the file cache, causing calls to @sendfile@ to read from disk.
     198However, what these low-memory experiments primarily demonstrate is how the memory footprint of the webserver affects performance.
     199Since the goal of this thesis is to evaluate the \CFA runtime, I decided to forgo experiments on low-memory servers.
     200The implementation of the webserver itself simply has too much impact on such an experiment for it to be an interesting evaluation of the underlying runtime.
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r111d993 re378c730  
    44This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.
    55
    6 \section{Benchmark Environment}
     6\section{Benchmark Environment}\label{microenv}
    77All benchmarks are run on two distinct hardware platforms.
    88\begin{description}
     
    3737The most basic evaluation of any ready queue is to measure the latency needed to push and pop one element from the ready queue.
    3838Since these two operations also describe a @yield@ operation, many systems use it as the most basic benchmark.
    39 However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change.
    40 Not all systems perform this optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
     39However, yielding can be treated as a special case and some aspects of the scheduler can be optimized away since the number of ready \ats does not change.
     40Not all systems perform this type of optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
    4141For this reason, I chose a different first benchmark, called \newterm{Cycle Benchmark}.
    4242This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
     
    4545
    4646Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
    47 In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking.
     47In fact, the total number of \ats waiting on the ready queue is expected to vary because of the delay between the next \at unparking and the current \at parking.
    4848That is, the runtime cannot anticipate that the current task will immediately park.
    49 As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
    50 Every runtime system must handle this race and cannot optimized away the ready-queue pushes and pops.
     49The size of the cycle is also decided based on this delay.
     50Note that an unpark is like a V on a semaphore, so the subsequent park (P) may not block.
     51If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
     52Because of time-slicing or because cycles can be spread over multiple \procs, a small cycle may see the chain of unparks go full circle before the first \at parks.
     53Every runtime system must handle this race but cannot optimize away the ready-queue pushes and pops if the cycle is long enough.
    5154To prevent any attempt of silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
    52 (Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
    5355Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
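
As a concrete illustration of the ring structure, the following minimal sketch uses plain C with POSIX semaphores standing in for the runtime's park/unpark; the ring size, iteration count, and all timing/measurement code are illustrative or omitted, and the real benchmark creates many such rings per \proc.

	#include <pthread.h>
	#include <semaphore.h>

	enum { RING = 8, NSTEPS = 1000000 };       // illustrative ring size and experiment length
	static sem_t node[RING];                   // one semaphore per member of the ring

	// Each thread stands in for one ring member: park on its own semaphore,
	// then unpark the next member around the ring.
	static void * cycle_member( void * arg ) {
		int id = (int)(long)arg;
		for ( int i = 0; i < NSTEPS; i++ ) {
			sem_wait( &node[id] );                   // park (P)
			sem_post( &node[(id + 1) % RING] );      // unpark next (V)
		}
		return 0;
	}

	int main() {
		pthread_t threads[RING];
		for ( int i = 0; i < RING; i++ ) sem_init( &node[i], 0, 0 );
		for ( int i = 0; i < RING; i++ ) pthread_create( &threads[i], 0, cycle_member, (void *)(long)i );
		sem_post( &node[0] );                        // seed the ring with one token
		for ( int i = 0; i < RING; i++ ) pthread_join( threads[i], 0 );
	}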
    5456
     
    141143The distinction is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
    142144
     145The experiment was run 15 times for each series and processor count, and the \emph{$\times$}s on the graphs show all of the results obtained.
     146Each series also has a solid line and two dashed lines highlighting the median, maximum, and minimum results, respectively.
     147This presentation offers an overview of the distribution of the results for each series.
     148
     149The experimental setup uses @taskset@ to limit the placement of \glspl{kthrd} by the operating system.
     150As mentioned in Section~\ref{microenv}, the experiment is set up to prioritize running on 2 \glspl{hthrd} per core before running on multiple sockets.
     151For the Intel machine, this means that from 1 to 24 \procs, one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs, still only one socket is used but \emph{with} hyperthreading.
     152This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
     153On AMD, the same algorithm is used, but the machine only has 2 sockets.
     154So hyperthreading\footnote{Hyperthreading normally refers specifically to the technique used by Intel; however, here it is loosely used to refer to AMD's equivalent feature.} is used when the \proc count reaches 65 and 193.
     155
     156Figure~\ref{fig:cycle:jax:ops} and Figure~\ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
     157Libfibre is slightly behind in this case but still scales decently.
     158As a result of the \gls{kthrd} placement, the additional \procs from 25 to 48 offer smaller performance improvements for all runtimes.
     159As expected, this pattern repeats between \proc counts 72 and 96.
    143160The performance goal of \CFA is to obtain equivalent performance to other, less fair schedulers, and that is what the results show.
    144161Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.
    145 The experimental setup prioritizes running on 2 \glspl{hthrd} per core before running on multiple sockets.
    146 The effect of that setup is seen from 25 to 48 \procs, running on 24 core with 2 \glspl{hthrd} per core.
    147 This effect is again repeated from 73 and 96 \procs, where it happens on the second CPU.
    148 When running only a single cycle, most runtime achieve lower throughput because of the idle-sleep mechanism.
    149 
    150 Figure~\ref{fig:cycle:nasus} show effectively the same story happening on AMD as it does on Intel.
    151 The different performance bumps due to cache topology happen at different locations and there is a little more variability.
    152 However, in all cases \CFA is still competitive with other runtimes.
    153 
     162
     163When running only a single cycle, the story is slightly different.
     164\CFA and Tokio obtain very similar results overall, but Tokio shows notably more variation in the results.
     165While \CFA, Go and Tokio achieve equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
     166This difference in throughput and scalability is due to the idle-sleep mechanism.
     167With very few cycles, stealing or helping can cause a cascade of task migrations and trick \procs into very short idle sleeps.
     168Both effects negatively affect performance.
     169
     170An interesting and unusual result is that libfibre achieves better performance with fewer cycles.
     171This suggests that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
     172However, I did not investigate this result any further.
     173
     174Figure~\ref{fig:cycle:nasus} shows a similar story on AMD as on Intel.
     175The different performance improvements and plateaus due to cache topology appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
     176Unlike on Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc.
     177
     178In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
     179However, unlike on Intel, Tokio achieves the same performance as Go rather than \CFA.
     180This leaves \CFA trailing behind in this particular case, but only at high core counts.
     181Presumably this is because in this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
     182Since this effect is only problematic in cases with 1 \at per \proc, it is not very meaningful for overall performance.
     183
     184The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance in this scenario.
     185This demonstrates that \CFA achieves equivalent performance in this case.
    154186
    155187\section{Yield}
     
    240272It is fairly obvious why I claim this benchmark is more artificial.
    241273The throughput is dominated by the mechanism used to handle the @yield@.
    242 \CFA does not have special handling for @yield@ and achieves very similar performance to the cycle benchmark.
    243 Libfibre uses the fact that @yield@ doesn't change the number of ready fibres and by-passes the idle-sleep mechanism entirely, producing significantly better throughput.
    244 Go puts yielding goroutines on a secondary global ready-queue, giving them lower priority.
    245 The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
    246 Based on the scalability, Tokio obtains the same poor performance and therefore it is likely it handles @yield@ in a similar fashion.
     274\CFA does not have special handling for @yield@, but the experiment requires less synchronization.
     275As a result, it achieves better performance than in the cycle benchmark, but still comparable.
    247276
    248277When the number of \ats is reduced to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
    249 If anything causes a \at migration, where two \ats end-up on the same ready-queue, work-stealing will start occuring and cause every \at to shuffle around.
     278If anything causes a \at migration, where two \ats end up on the same ready-queue, work-stealing will start occurring and could cause several \ats to shuffle around.
    250279In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
    251280In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
    252 However, since user-level threading with equal number of \ats and \procs is a somewhat degenerate case, especially when ctxswitching very often, this result is not particularly meaningful and is only included for completness.
     281However, since user-level threading with an equal number of \ats and \procs is a somewhat degenerate case, especially when context-switching very often, this result is not particularly meaningful and is only included for completeness.
     282
     283Libfibre uses the fact that @yield@ does not change the number of ready fibres and bypasses the idle-sleep mechanism entirely, producing significantly better throughput.
     284Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context switch entirely.
     285This results in dramatically better performance compared to the other runtimes.
     286
     287In stark contrast with libfibre, Go puts yielding goroutines on a secondary global ready-queue, giving them lower priority.
     288The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
     289Based on the scalability, Tokio obtains similarly poor performance and therefore likely handles @yield@ in a similar fashion.
     290However, it must be doing something different, since it does scale at low \proc counts.
    253291
    254292Again, Figure~\ref{fig:yield:nasus} shows effectively the same story happening on AMD as it does on Intel.
     
    262300In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
    263301
    264 The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relation between the last \proc on which a \at ran and blocked and the \proc that subsequently unblocks it.
    265 With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or find it on a global queue.
    266 This dequeuing results in either contention on the remote queue and/or \glspl{rmr} on \at data structure.
     302The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no apparent relation between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
     303With processor-specific ready-queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or place it on a remote queue.
     304This enqueuing results in contention on the remote queue and/or \glspl{rmr} on the \at data structure.
    267305In either case, this benchmark aims to measure how well each scheduler handles these remote unblocks, since both cases can lead to performance degradation if not handled correctly.
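
For concreteness, the churn loop can be sketched in plain C with POSIX semaphores standing in for park/unpark; the thread count, duration, and shutdown handling are illustrative simplifications of the actual benchmark.

	#include <pthread.h>
	#include <semaphore.h>
	#include <stdlib.h>
	#include <unistd.h>

	enum { NTHREADS = 8 };                    // illustrative; the benchmark uses many more \ats
	static sem_t sems[NTHREADS];              // one semaphore per \at
	static volatile int done = 0;             // simplification of a proper stop flag

	// Each \at wakes a random peer (V), then blocks on its own semaphore (P),
	// so the waker and the woken \at rarely share a \proc.
	static void * churn_member( void * arg ) {
		int id = (int)(long)arg;
		unsigned seed = (unsigned)id;
		while ( ! done ) {
			sem_post( &sems[ rand_r( &seed ) % NTHREADS ] );   // unpark a random \at
			sem_wait( &sems[id] );                             // park until some \at unparks us
		}
		return 0;
	}

	int main() {
		pthread_t threads[NTHREADS];
		for ( int i = 0; i < NTHREADS; i++ ) sem_init( &sems[i], 0, 1 );   // every \at starts runnable
		for ( int i = 0; i < NTHREADS; i++ ) pthread_create( &threads[i], 0, churn_member, (void *)(long)i );
		sleep( 5 );                                                        // run for a fixed duration
		done = 1;
		for ( int i = 0; i < NTHREADS; i++ ) sem_post( &sems[i] );         // release any parked \at
		for ( int i = 0; i < NTHREADS; i++ ) pthread_join( threads[i], 0 );
	}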
    268306
     
    300338                \label{fig:churn:jax:ops}
    301339        }
    302         \subfloat[][Throughput, 1 \ats per \proc]{
     340        \subfloat[][Throughput, 2 \ats per \proc]{
    303341                \resizebox{0.5\linewidth}{!}{
    304342                        \input{result.churn.low.jax.ops.pstex_t}
     
    313351                \label{fig:churn:jax:ns}
    314352        }
    315         \subfloat[][Latency, 1 \ats per \proc]{
     353        \subfloat[][Latency, 2 \ats per \proc]{
    316354                \resizebox{0.5\linewidth}{!}{
    317355                        \input{result.churn.low.jax.ns.pstex_t}
     
    330368                \label{fig:churn:nasus:ops}
    331369        }
    332         \subfloat[][Throughput, 1 \ats per \proc]{
     370        \subfloat[][Throughput, 2 \ats per \proc]{
    333371                \resizebox{0.5\linewidth}{!}{
    334372                        \input{result.churn.low.nasus.ops.pstex_t}
     
    343381                \label{fig:churn:nasus:ns}
    344382        }
    345         \subfloat[][Latency, 1 \ats per \proc]{
     383        \subfloat[][Latency, 2 \ats per \proc]{
    346384                \resizebox{0.5\linewidth}{!}{
    347385                        \input{result.churn.low.nasus.ns.pstex_t}
     
    353391        \label{fig:churn:nasus}
    354392\end{figure}
    355 Figure~\ref{fig:churn:jax} shows the throughput as a function of \proc count on Intel.
    356 Like for the cycle benchmark, here are runtimes achieve fairly similar performance.
     393Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD, respectively.
     394They use the same representation as the previous benchmark: 15 runs, where the dashed lines show the extremes and the solid line the median.
     395The performance cost of crossing the cache boundaries is still visible at the same \proc counts.
     396However, the performance of this benchmark is dominated by cache traffic, as \procs constantly access each other's data.
    357397Scalability is notably worse than in the previous benchmarks since there is inherently more communication between processors.
    358398Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
     
    362402In this particular benchmark, the inherent chaos of the benchmark in addition to small memory footprint means neither approach wins over the other.
    363403
    364 Figure~\ref{fig:churn:nasus} shows effectively the same picture.
     404As with the cycle benchmark, all runtimes achieve fairly similar performance here.
    365405Performance improves as long as all \procs fit on a single socket.
    366 Beyond that performance plateaus.
    367 
    368 Again this performance demonstrate \CFA achieves satisfactory performance.
     406Beyond that, performance starts to suffer from increased caching costs.
     407
      408        Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 100 \ats per \proc, \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc counts.
      409        Interestingly, Go starts with better scaling at very low \proc counts, but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
      410        This performance difference disappears in Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, where the performance of all runtimes is equivalent.
     411
      412        Figure~\ref{fig:churn:nasus} again shows a similar story.
      413        \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc counts.
      414        Go still shows different scaling from the other 3 runtimes.
      415        The distinction is that on AMD the difference between Go and the other runtimes is more significant.
      416        Indeed, even with only 1 \at per \proc, Go achieves notably different scaling than the other runtimes.
     417
      418        One possible explanation for this difference is that since Go has very few concurrency primitives available, a channel was used instead of a semaphore.
      419        On paper, a semaphore can be replaced by a channel, and with zero-sized objects passed along, equivalent performance could be expected.
      420        However, in practice there can be implementation differences between the two.
      421        This is especially true if the semaphore count can get somewhat high.
      422        Note that this replacement is also made in the cycle benchmark; however, in that context it did not seem to have a notable impact.
     423
     424The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
     425Indeed, the fact that all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
     426Again, these results demonstrate that \CFA achieves satisfactory performance.
    369427
    370428\section{Locality}
     
    405463\end{figure}
    406464As mentioned in the churn benchmark, when unparking a \at, it is possible to unpark to either the local or a remote ready-queue.
    407 \footnote{It is also possible to unpark to a third unrelated ready-queue, but unless the scheduler has additional knowledge about the situation, it is unlikely to result in good cache locality.}
     465\footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
    408466The locality experiment includes two variations of the churn benchmark, where an array of data is added.
    409467In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
    410 The @share@ variation then passes the array to the shadow-queue of the semaphore, effectively transferring ownership of the array to the woken thread.
     468The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
    411469In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.
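
To make the difference between the two variations concrete, the following sketch (plain C, POSIX semaphores standing in for park/unpark; the array hand-off is simplified compared to the real shadow-queue, and setup/teardown are omitted) shows the @share@ loop:

	#include <semaphore.h>
	#include <stdlib.h>

	enum { NTHREADS = 8, CELLS = 1024, TOUCH = 64 };      // illustrative sizes

	struct node { sem_t sem; int * array; };              // semaphore plus the array handed along with it
	static struct node nodes[NTHREADS];                   // arrays allocated in omitted setup code
	static volatile int done = 0;

	// share variation: the array travels to the woken thread together with the wake-up.
	// (Simplified: the real benchmark passes the array through the semaphore's shadow-queue.)
	static void * locality_share( void * arg ) {
		int id = (int)(long)arg;
		unsigned seed = (unsigned)id;
		int * array = nodes[id].array;
		while ( ! done ) {
			for ( int i = 0; i < TOUCH; i++ )             // touch random cells of the current array
				array[ rand_r( &seed ) % CELLS ] += 1;
			int victim = rand_r( &seed ) % NTHREADS;
			nodes[victim].array = array;                  // transfer ownership of the array ...
			sem_post( &nodes[victim].sem );               // ... and unpark a random thread (V)
			sem_wait( &nodes[id].sem );                   // park (P)
			array = nodes[id].array;                      // continue with whatever array we were handed
		}
		return 0;
	}
	// The noshare variation is identical except the two `nodes[...].array` assignments are dropped,
	// so each thread keeps touching its own private array.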
    412470
     
    414472Since each thread unparks a random semaphore, it is unlikely that a \at will be unparked from the last \proc it ran on.
    415473In the @share@ version, this means that unparking the \at on the local \proc is appropriate since the data was last modified on that \proc.
    416 In the @noshare@ version, the reverse is true.
     474In the @noshare@ version, unparking the \at on the remote \proc is the appropriate approach.
    417475
    418476The expectation for this benchmark is to see a performance inversion, where runtimes will fare notably better in the variation which matches their unparking policy.
    419477This should lead to \CFA, Go and Tokio achieving better performance in @share@ while libfibre achieves better performance in @noshare@.
     478Indeed, \CFA, Go and Tokio have the default policy of unparking \ats on the local \proc, whereas libfibre has the default policy of unparking \ats wherever they last ran.
    420479
    421480\subsection{Results}
     
    479538\end{figure}
    480539
    481 Figure~\ref{fig:locality:jax} shows that the results somewhat follow the expectation.
     540Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results on Intel and AMD, respectively.
     541In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for the @noshare@ variation.
     542
     543The results somewhat follow the expectation.
    482544On the left, showing the results for the shared variation, \CFA and Tokio outperform libfibre as expected.
    483545Correspondingly, on the right we see the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
     
    507569Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.
    508570
    509 I both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
     571In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
    510572In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by:
    511573$ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueuing and dequeuing a \at and $NP$ is the number of \procs.
     
    516578
    517579While this is a fairly artificial scenario, it requires only a few simple pieces.
    518 The yielding version of this simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has a easily measured impact.
     580The yielding version of this simply creates a scenario where a \at runs uninterrupted in a saturated system, and starvation has an easily measured impact.
    519581However, \emph{any} \at that runs uninterrupted for a significant period of time in a saturated system could lead to this kind of starvation.
    520582
     
    568630\procs    &      2      &      192   &      2      &      192      &      2      &      256   &      2      &      256    \\
    569631\hline
    570 \CFA      & 106 $\mu$s  & j200       & 68.4 $\mu$s & ~1.2 ms       & 174 $\mu$s  & ~28.4 ms   & 78.8~~$\mu$s& ~~1.21 ms   \\
    571 libfibre  & 127 $\mu$s  &            & DNC         & DNC           & 156 $\mu$s  & ~36.7 ms   & DNC         & DNC         \\
    572 Go        & 106 $\mu$s  & j200       & 24.6 ms     & 74.3 ms       & 271 $\mu$s  & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
    573 tokio     & 289 $\mu$s  &            & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
     632\CFA      & 106 $\mu$s  & ~19.9 ms   & 68.4 $\mu$s & ~1.2 ms       & 174 $\mu$s  & ~28.4 ms   & 78.8~~$\mu$s& ~~1.21 ms   \\
     633libfibre  & 127 $\mu$s  & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s  & ~36.7 ms   & DNC         & DNC         \\
     634Go        & 106 $\mu$s  & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s  & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
     635tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
    574636\end{tabular}
    575637\end{centering}
     
    584646The yielding variation is denoted ``Yield''.
    585647The experiment was only run for the extremes of the number of cores, since the scaling per core behaves like previous experiments.
    586 This experiments clearly demonstrate that while the other runtimes achieve similar performance, \CFA achieves significantly better fairness.
     648This experiment clearly demonstrates that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
    587649The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
    588650Since \ats block after acknowledging the leader, this experiment effectively measures how quickly \procs can steal \ats from the \proc running the leader.
     
    595657However, since preemption is fairly costly, it achieves significantly worse performance.
    596658In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
    597 Interestingly \CFA achieves better delays in the yielding version than the semaphore version, however, that is likely due to fairness being equivalent but removing the cost of the semaphores.
     659Interestingly, \CFA achieves better delays in the yielding version than in the semaphore version; however, that is likely because fairness is equivalent while the cost of the semaphores and idle-sleep is removed.