Timestamp:
Aug 17, 2022, 4:27:43 PM (20 months ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
36cc24a
Parents:
e116db3
Message:

small changes and first attempt to present graphs in micro-benchmarks chapter

Location:
doc/theses/thierry_delisle_PhD/thesis/text
Files:
4 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

    re116db3 r3ce3fb9  
    3131This model adds flexibility to the implementation, as the serving logic can now block on user-level primitives without affecting other connections.
    3232
    33 Memcached is not built according to a thread-per-connection model, but there exists a port of it that is, which was built for libfibre in \cite{DBLP:journals/pomacs/KarstenB20}.
     33Memcached is not built according to a thread-per-connection model, but there exists a port of it that is, which was built for libfibre~\cite{DBLP:journals/pomacs/KarstenB20}.
    3434Therefore this version can be compared both to the original version and to a port to the \CFA runtime.
    3535
     
    3737\begin{itemize}
    3838 \item \emph{vanilla}: the official release of memcached, version~1.6.9.
    39  \item \emph{fibre}: a modification of vanilla which uses the thread per connection model on top of the libfibre runtime~\cite{DBLP:journals/pomacs/KarstenB20}.
     39 \item \emph{fibre}: a modification of vanilla which uses the thread-per-connection model on top of the libfibre runtime.
    4040 \item \emph{cfa}: a modification of the fibre version that replaces the libfibre runtime with \CFA.
    4141\end{itemize}
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    re116db3 r3ce3fb9  
    11\chapter{Micro-Benchmarks}\label{microbench}
    22
    3 The first step in evaluating this work is to test-out small controlled cases to ensure the basics work properly.
    4 This chapter presents five different experimental setup, evaluating some of the basic features of \CFA's scheduler.
     3The first step in evaluating this work is to test small controlled cases to ensure the basics work properly.
     4This chapter presents five different experimental setups, evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
     5All of these systems have a \gls{uthrding} model.
     6Note, all tests in each system are functionally identical and available online~\cite{SchedulingBenchmarks}.
    57
    68\section{Benchmark Environment}\label{microenv}
     
    2022\end{description}
    2123
    22 For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA Node with no hyper threading.
    23 If more \glspl{hthrd} are needed, then 1 NUMA Node with hyperthreading is used.
    24 If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA Nodes as needed.
     24For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
     25If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
     26If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
    2527
    2628The limited sharing of the last-level cache on the AMD machine is markedly different from that of the Intel machine.
    27 Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD it is also the case that cache misses served by a different L3 instance on the same CPU still incur high latency.
     29Indeed, while on both architectures L2 cache misses served by L3 caches on a different CPU incur a significant latency, on the AMD cache misses served by a different L3 instance on the same CPU also incur high latency.
    2830
    2931
    3032\section{Cycling latency}
     33
     34The most basic evaluation of any ready queue is the latency needed to push and pop one element from the ready queue.
     35Since these two operations also describe a @yield@ operation, many systems use this operation as the fundamental benchmark.
     36However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change.
     37Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
     38For this reason, I chose a different first benchmark, called \newterm{Cycle Benchmark}.
     39This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
     40At runtime, each \at unparks the next \at before parking itself.
     41Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.
     42
    3143\begin{figure}
    3244        \centering
     
    3547        \label{fig:cycle}
    3648\end{figure}
    37 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready queue.
    38 Since these two operation also describe a @yield@ operation, many systems use this operation as the most basic benchmark.
    39 However, yielding can be treated as a special case and some aspects of the scheduler can be optimized away since the number of ready \ats does not change.
    40 Not all systems perform this type of optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
    41 For this reason, I chose a different first benchmark, called \newterm{Cycle Benchmark}.
    42 This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
    43 At runtime, each \at unparks the next \at before parking itself.
    44 Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.
    45 
    46 Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
    47 In fact, the total number of \ats waiting on the ready queue is expected to vary because of the delay between the next \at unparking and the current \at parking.
    48 That is, the runtime cannot anticipate that the current task will immediately park.
    49 As well, the size of the cycle is also decided based on this delay.
    50 Note that, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.
    51 If this happens, the scheduler push and pop are avoided and the results of the experiment would be skewed.
    52 Because of time-slicing or because cycles can be spread over multiple \procs, a small cycle may see the chain of unparks go full circle before the first \at parks.
    53 Every runtime system must handle this race and but cannot optimized away the ready-queue pushes and pops if the cycle is long enough.
     49
     50Therefore, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
     51In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking.
     52That is, the runtime cannot anticipate that the current \at immediately parks.
     53As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
     54If this happens, the scheduler push and pop operations are avoided and the results of the experiment are skewed.
     55(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
     56Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
    5457To prevent any attempt at silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
    5558Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
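To make the ring concrete, the following is a minimal sketch of this benchmark in Go, one of the compared runtimes; it is illustrative rather than the measured harness, and all names and constants are chosen for exposition. One-slot channels stand in for park/unpark, and a single seeded token circulates the ring; the pseudo code unparks before parking, but with one token the steady-state ready-queue traffic is the same.
\begin{verbatim}
package main

import (
	"fmt"
	"sync"
)

func main() {
	const ringSize = 5  // \ats per cycle, as in the experiments
	const rounds = 1000 // hops per thread; illustrative only

	// One parking spot per thread: receiving parks, sending unparks.
	spots := make([]chan struct{}, ringSize)
	for i := range spots {
		spots[i] = make(chan struct{}, 1)
	}

	var wg sync.WaitGroup
	wg.Add(ringSize)
	for i := 0; i < ringSize; i++ {
		go func(id int) {
			defer wg.Done()
			next := (id + 1) % ringSize
			for r := 0; r < rounds; r++ {
				<-spots[id]               // park: pop this \at from the ready queue
				spots[next] <- struct{}{} // unpark the next \at: push onto the ready queue
			}
		}(i)
	}

	spots[0] <- struct{}{} // seed one token so exactly one \at is ready
	wg.Wait()
	fmt.Println("ready-queue push/pop pairs:", ringSize*rounds)
}
\end{verbatim}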
     
    7376\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
    7477\label{fig:cycle:code}
     78%\end{figure}
     79
     80\bigskip
     81
     82%\begin{figure}
     83        \subfloat[][Throughput, 100 cycles per \proc]{
     84                \resizebox{0.5\linewidth}{!}{
     85                        \input{result.cycle.jax.ops.pstex_t}
     86                }
     87                \label{fig:cycle:jax:ops}
     88        }
     89        \subfloat[][Throughput, 1 cycle per \proc]{
     90                \resizebox{0.5\linewidth}{!}{
     91                        \input{result.cycle.low.jax.ops.pstex_t}
     92                }
     93                \label{fig:cycle:jax:low:ops}
     94        }
     95
     96        \subfloat[][Scalability, 100 cycles per \proc]{
     97                \resizebox{0.5\linewidth}{!}{
     98                        \input{result.cycle.jax.ns.pstex_t}
     99                }
     100                \label{fig:cycle:jax:ns}
     101        }
     102        \subfloat[][Scalability, 1 cycle per \proc]{
     103                \resizebox{0.5\linewidth}{!}{
     104                        \input{result.cycle.low.jax.ns.pstex_t}
     105                }
     106                \label{fig:cycle:jax:low:ns}
     107        }
     108        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     109        \label{fig:cycle:jax}
    75110\end{figure}
    76111
    77112\subsection{Results}
    78 \begin{figure}
    79         \subfloat[][Throughput, 100 cycles per \proc]{
    80                 \resizebox{0.5\linewidth}{!}{
    81                         \input{result.cycle.jax.ops.pstex_t}
    82                 }
    83                 \label{fig:cycle:jax:ops}
    84         }
    85         \subfloat[][Throughput, 1 cycle per \proc]{
    86                 \resizebox{0.5\linewidth}{!}{
    87                         \input{result.cycle.low.jax.ops.pstex_t}
    88                 }
    89                 \label{fig:cycle:jax:low:ops}
    90         }
    91 
    92         \subfloat[][Scalability, 100 cycles per \proc]{
    93                 \resizebox{0.5\linewidth}{!}{
    94                         \input{result.cycle.jax.ns.pstex_t}
    95                 }
    96                 \label{fig:cycle:jax:ns}
    97         }
    98         \subfloat[][Scalability, 1 cycle per \proc]{
    99                 \resizebox{0.5\linewidth}{!}{
    100                         \input{result.cycle.low.jax.ns.pstex_t}
    101                 }
    102                 \label{fig:cycle:jax:low:ns}
    103         }
    104         \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count 5 \ats per cycle and different cycle count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    105         \label{fig:cycle:jax}
    106 \end{figure}
    107 
    108 \begin{figure}
    109         \subfloat[][Throughput, 100 cycles per \proc]{
    110                 \resizebox{0.5\linewidth}{!}{
    111                         \input{result.cycle.nasus.ops.pstex_t}
    112                 }
    113                 \label{fig:cycle:nasus:ops}
    114         }
    115         \subfloat[][Throughput, 1 cycle per \proc]{
    116                 \resizebox{0.5\linewidth}{!}{
    117                         \input{result.cycle.low.nasus.ops.pstex_t}
    118                 }
    119                 \label{fig:cycle:nasus:low:ops}
    120         }
    121 
    122         \subfloat[][Scalability, 100 cycles per \proc]{
    123                 \resizebox{0.5\linewidth}{!}{
    124                         \input{result.cycle.nasus.ns.pstex_t}
    125                 }
    126                 \label{fig:cycle:nasus:ns}
    127         }
    128         \subfloat[][Scalability, 1 cycle per \proc]{
    129                 \resizebox{0.5\linewidth}{!}{
    130                         \input{result.cycle.low.nasus.ns.pstex_t}
    131                 }
    132                 \label{fig:cycle:nasus:low:ns}
    133         }
    134         \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count 5 \ats per cycle and different cycle count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    135         \label{fig:cycle:nasus}
    136 \end{figure}
    137 Figure~\ref{fig:cycle:jax} and Figure~\ref{fig:cycle:nasus} shows the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
     113
     114Figures~\ref{fig:cycle:jax} and~\ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
    138115The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
    139 Where scalability uses the same data but the Y axis is calculated as the number of \procs over the throughput.
     116Scalability uses the same data as throughput but the Y-axis is calculated as the number of \procs over the throughput.
    140117In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
    141118The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
    142 The right column shows results for only 1 cycle per \proc, where the ready queues are expected to be near empty at all times.
    143 The distinction is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
    144 
    145 The experiment was run 15 times for each series and processor count and the \emph{$\times$}s on the graph show all of the results obtained.
    146 Each series also has a solid and two dashed lines highlighting the median, maximum and minimum result respectively.
    147 This presentation offers an overview of the distribution of the results for each series.
     119The right column shows results for 1 cycle per \proc, where the ready queues are expected to be near empty most of the time.
     120The distinction between 100 and 1 cycles is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
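Stated symbolically (the notation is chosen here for clarity): writing $T(P)$ for the measured throughput with $P$ \procs, the scalability graphs plot
\[
S(P) = \frac{P}{T(P)} ,
\]
\ie the processor-time consumed per operation, so doubling the throughput when the \proc count doubles leaves $S$ unchanged and the curve flat.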
     121
     122\begin{figure}
     123        \subfloat[][Throughput, 100 cycles per \proc]{
     124                \resizebox{0.5\linewidth}{!}{
     125                        \input{result.cycle.nasus.ops.pstex_t}
     126                }
     127                \label{fig:cycle:nasus:ops}
     128        }
     129        \subfloat[][Throughput, 1 cycle per \proc]{
     130                \resizebox{0.5\linewidth}{!}{
     131                        \input{result.cycle.low.nasus.ops.pstex_t}
     132                }
     133                \label{fig:cycle:nasus:low:ops}
     134        }
     135
     136        \subfloat[][Scalability, 100 cycles per \proc]{
     137                \resizebox{0.5\linewidth}{!}{
     138                        \input{result.cycle.nasus.ns.pstex_t}
     139                }
     140                \label{fig:cycle:nasus:ns}
     141        }
     142        \subfloat[][Scalability, 1 cycle per \proc]{
     143                \resizebox{0.5\linewidth}{!}{
     144                        \input{result.cycle.low.nasus.ns.pstex_t}
     145                }
     146                \label{fig:cycle:nasus:low:ns}
     147        }
     148        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     149        \label{fig:cycle:nasus}
     150\end{figure}
     151
     152The experiment ran 15 times for each series and processor count.
     153Each series has a solid and two dashed lines representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
     154An alternative display is to use error bars with min/max as the bottom/top for the bar.
     155However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
     156This graph presentation offers an overview of the distribution of the results for each series.
    148157
    149158The experimental setup uses @taskset@ to limit the placement of \glspl{kthrd} by the operating system.
    150 As mentioned in Section~\ref{microenv}, the experiement is setup to prioritize running on 2 \glspl{hthrd} per core before running on multiple sockets.
    151 For the Intel machine, this means that from 1 to 24 \procs, one socket and \emph{no} hyperthreading is used and from 25 to 48 \procs, still only one socket but \emph{with} hyperthreading.
     159As mentioned in Section~\ref{microenv}, the experiment is set up to prioritize running on two \glspl{hthrd} per core before running on multiple sockets.
     160For the Intel machine, this means that from 1 to 24 \procs one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs still only one socket is used but \emph{with} hyperthreading.
    152161This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
    153162On AMD, the same algorithm is used, but the machine only has 2 sockets.
    154 So hyperthreading\footnote{Hyperthreading normally refers specifically to the technique used by Intel, however here it is loosely used to refer to AMD's equivalent feature.} is used when the \proc count reach 65 and 193.
    155 
     163So hyperthreading\footnote{
     164Hyperthreading normally refers specifically to the technique used by Intel, however it is often used generically to refer to any equivalent feature.}
     165is used when the \proc count reaches 65 and 193.
     166
     167The performance goal of \CFA is to obtain equivalent performance to other less fair schedulers.
    156168Figures~\ref{fig:cycle:jax:ops} and~\ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
    157169Libfibre is slightly behind in this case but still scales decently.
    158 As a result of the \gls{kthrd} placement, we can see that additional \procs from 25 to 48 offer less performance improvements for all runtimes.
     170As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvements for all runtimes.
    159171As expected, this pattern repeats between \proc count 72 and 96.
    160 The performance goal of \CFA is to obtain equivalent performance to other, less fair schedulers and that is what results show.
    161 Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.
     172Hence, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.
    162173
    163174When running only a single cycle, the story is slightly different.
    164 \CFA and tokio obtain very smiliar results overall, but tokio shows notably more variations in the results.
    165 While \CFA, Go and tokio achive equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
     175\CFA and Tokio obtain very similar results overall, but Tokio shows notably more variations in the results.
     176While \CFA, Go and Tokio achieve equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc, Go achieves slightly better performance.
    166177This difference in throughput and scalability is due to the idle-sleep mechanism.
    167 With very few cycles, stealing or helping can cause a cascade of tasks migration and trick \proc into very short idle sleeps.
    168 Both effect will negatively affect performance.
    169 
    170 An interesting and unusual result is that libfibre achieves better performance with fewer cycle.
     178With very few cycles, stealing or helping can cause a cascade of \at migrations and trick a \proc into very short idle sleeps, both of which negatively affect performance.
     179
     180An interesting and unusual result is that libfibre achieves better performance with 1 cycle per \proc.
    171181This suggests that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
    172182However, I did not investigate this result any deeper.
    173183
    174184Figure~\ref{fig:cycle:nasus} shows a similar story happening on AMD as it does on Intel.
    175 The different performance improvements and plateaus due to cache topology appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
    176 Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc.
     185The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
     186Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc, with some variations in the results.
    177187
    178188In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
    179 However, unlike on Intel, tokio achieves the same performance as Go rather than \CFA.
     189However, unlike on Intel, Tokio achieves the same performance as Go rather than \CFA.
    180190This leaves \CFA trailing behind in this particular case, but only at high core counts.
    181 Presumably this is because in this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
    182 Since this effect is only problematic in cases with 1 \at per \proc it is not very meaningful for the general performance.
    183 
    184 The conclusion from both architectures is that all of the compared runtime have fairly equivalent performance in this scenario.
    185 Which demonstrate that in this case \CFA achieves equivalent performance.
     191For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
     192Essentially, \CFA's fairness must result in slower performance for some workloads.
     193Fortunately, this effect is only problematic in pathological cases, \eg with 1 \at per \proc, which seldom occurs in most workloads.
     194
     195The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
     196This result shows that the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers.
    186197
    187198\section{Yield}
    188199For completeness, the classic yield benchmark is included.
    189 This benchmark is simpler than the cycle test: it creates many \ats that call @yield@.
     200Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
     201Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
    190202As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
    191 The only interesting variable in this benchmark is the number of \ats per \procs, where ratios close to 1 means the ready queue(s) can be empty.
    192 This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
    193 Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.
     203The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty, which again puts a strain on the idle-sleep handling.
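As a reference point, a minimal Go rendition of this benchmark (illustrative only; the constants are arbitrary, not the measured configuration) is simply a set of goroutines spinning on the runtime's yield primitive:
\begin{verbatim}
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	nprocs := runtime.GOMAXPROCS(0) // plays the role of the \proc count
	const atsPerProc = 100          // or 1, the two ratios examined here
	const rounds = 10000

	var wg sync.WaitGroup
	for i := 0; i < nprocs*atsPerProc; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := 0; r < rounds; r++ {
				runtime.Gosched() // the yield under test
			}
		}()
	}
	wg.Wait()
	fmt.Println("total yields:", nprocs*atsPerProc*rounds)
}
\end{verbatim}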
    194204
    195205\begin{figure}
     
    207217\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
    208218\label{fig:yield:code}
     219%\end{figure}
     220\bigskip
     221%\begin{figure}
     222        \subfloat[][Throughput, 100 \ats per \proc]{
     223                \resizebox{0.5\linewidth}{!}{
     224                        \input{result.yield.jax.ops.pstex_t}
     225                }
     226                \label{fig:yield:jax:ops}
     227        }
     228        \subfloat[][Throughput, 1 \at per \proc]{
     229                \resizebox{0.5\linewidth}{!}{
     230                \input{result.yield.low.jax.ops.pstex_t}
     231                }
     232                \label{fig:yield:jax:low:ops}
     233        }
     234
     235        \subfloat[][Scalability, 100 \ats per \proc]{
     236                \resizebox{0.5\linewidth}{!}{
     237                \input{result.yield.jax.ns.pstex_t}
     238                }
     239                \label{fig:yield:jax:ns}
     240        }
     241        \subfloat[][Scalability, 1 \at per \proc]{
     242                \resizebox{0.5\linewidth}{!}{
     243                \input{result.yield.low.jax.ns.pstex_t}
     244                }
     245                \label{fig:yield:jax:low:ns}
     246        }
     247        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, for 100 and 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     248        \label{fig:yield:jax}
    209249\end{figure}
    210250
    211251\subsection{Results}
     252
     253Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the same throughput graphs as @cycle@ on Intel and AMD, respectively.
     254Note, the Y-axis on the yield graph for Intel is twice as large as on the Intel cycle graph.
     255A visual comparison of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
     256
     257For the Intel architecture, Figure~\ref{fig:yield:jax}:
     258\begin{itemize}
     259\item
     260\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
     261Hence, the @yield@ throughput and scalability graphs for both 100 and 1 cycles/tasks per processor have similar shapes to the corresponding @cycle@ graphs.
     262The only difference is slightly better performance for @yield@ because of less synchronization.
     263As for @cycle@, the cost of idle sleep also comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
     264\item
     265libfibre has special handling for @yield@, using the fact that the number of ready fibres does not change to by-pass the idle-sleep mechanism entirely.
     266Additionally, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
     267Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance for 100 cycles/tasks and an 8 times increase for 1 cycle/task.
     268\item
     269Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority.
     270The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
     271Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@ for both 100 and 1 cycles/tasks.
     272\item
     273Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
     274(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
     275\end{itemize}
     276
    212277\begin{figure}
    213278        \subfloat[][Throughput, 100 \ats per \proc]{
    214279                \resizebox{0.5\linewidth}{!}{
    215                         \input{result.yield.jax.ops.pstex_t}
    216                 }
    217                 \label{fig:yield:jax:ops}
    218         }
    219         \subfloat[][Throughput, 1 \ats per \proc]{
    220                 \resizebox{0.5\linewidth}{!}{
    221                 \input{result.yield.low.jax.ops.pstex_t}
    222                 }
    223                 \label{fig:yield:jax:low:ops}
     280                        \input{result.yield.nasus.ops.pstex_t}
     281                }
     282                \label{fig:yield:nasus:ops}
     283        }
     284        \subfloat[][Throughput, 1 \at per \proc]{
     285                \resizebox{0.5\linewidth}{!}{
     286                        \input{result.yield.low.nasus.ops.pstex_t}
     287                }
     288                \label{fig:yield:nasus:low:ops}
    224289        }
    225290
    226291        \subfloat[][Scalability, 100 \ats per \proc]{
    227292                \resizebox{0.5\linewidth}{!}{
    228                 \input{result.yield.jax.ns.pstex_t}
    229                 }
    230                 \label{fig:yield:jax:ns}
    231         }
    232         \subfloat[][Scalability, 1 \ats per \proc]{
    233                 \resizebox{0.5\linewidth}{!}{
    234                 \input{result.yield.low.jax.ns.pstex_t}
    235                 }
    236                 \label{fig:yield:jax:low:ns}
    237         }
    238         \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \ats per \proc. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    239         \label{fig:yield:jax}
    240 \end{figure}
    241 
    242 \begin{figure}
    243         \subfloat[][Throughput, 100 \ats per \proc]{
    244                 \resizebox{0.5\linewidth}{!}{
    245                         \input{result.yield.nasus.ops.pstex_t}
    246                 }
    247                 \label{fig:yield:nasus:ops}
    248         }
    249         \subfloat[][Throughput, 1 \at per \proc]{
    250                 \resizebox{0.5\linewidth}{!}{
    251                         \input{result.yield.low.nasus.ops.pstex_t}
    252                 }
    253                 \label{fig:yield:nasus:low:ops}
    254         }
    255 
    256         \subfloat[][Scalability, 100 \ats per \proc]{
    257                 \resizebox{0.5\linewidth}{!}{
    258293                        \input{result.yield.nasus.ns.pstex_t}
    259294                }
     
    266301                \label{fig:yield:nasus:low:ns}
    267302        }
    268         \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \ats per \proc. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
      303        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, for 100 and 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    269304        \label{fig:yield:nasus}
    270305\end{figure}
    271 Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count on Intel.
    272 It is fairly obvious why I claim this benchmark is more artificial.
    273 The throughput is dominated by the mechanism used to handle the @yield@.
    274 \CFA does not have special handling for @yield@ but the experiment requires less synchronization.
    275 As a result achieves better performance than the cycle benchmark, but still comparable.
    276 
    277 When the number of \ats is reduce to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
    278 If anything causes a \at migration, where two \ats end-up on the same ready-queue, work-stealing will start occuring and could cause several \ats to shuffle around.
    279 In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
    280 In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
    281 However, since user-level threading with equal number of \ats and \procs is a somewhat degenerate case, especially when context-switching very often, this result is not particularly meaningful and is only included for completness.
    282 
    283 Libfibre uses the fact that @yield@ doesn't change the number of ready fibres and by-passes the idle-sleep mechanism entirely, producing significantly better throughput.
    284 Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context-switch entirely.
    285 This results in incredible performance results comparing to the other runtimes.
    286 
    287 In stark contrast with libfibre, Go puts yielding goroutines on a secondary global ready-queue, giving them lower priority.
    288 The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
    289 Based on the scalability, Tokio obtains the similarly poor performance and therefore it is likely it handles @yield@ in a similar fashion.
    290 However, it must be doing something different since it does scale at low \proc count.
    291 
    292 Again, Figure~\ref{fig:yield:nasus} show effectively the same story happening on AMD as it does on Intel.
    293 \CFA fairs slightly better with many \ats per \proc, but the performance is satisfactory on both architectures.
    294 
    295 Since \CFA obtains the same satisfactory performance as the previous benchmark this is still a success, albeit a less meaningful one.
     306
     307For the AMD, Figure~\ref{fig:yield:nasus}, the results show the same story as on the Intel, with slightly increased jitter.
     308Also, some transition points on the X-axis differ because of the architectures, like at 16 versus 24 processors.
     309
      310It is difficult to draw conclusions for this benchmark when runtime systems treat @yield@ so differently.
      311The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.
    296312
    297313
     
    300316In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
    301317
    302 The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no apparent relation between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
    303 With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or place it on a remote queue.
    304 This enqueuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
     318The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
     319With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
     320This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \at data structure.
    305321In either case, this benchmark aims to measure how well each scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.
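The following Go sketch (constants and names are chosen here for exposition; it is illustrative, not the measured harness) captures the churn pattern with 1-slot channels as binary semaphores: the non-blocking send models a @V@ that is a no-op when the semaphore is already signalled, and the receive models @P@.
\begin{verbatim}
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const nthreads, nspots = 8, 8
	spots := make([]chan struct{}, nspots)
	for i := range spots {
		spots[i] = make(chan struct{}, 1) // 1-slot channel as binary semaphore
	}
	done := make(chan struct{})
	var ops atomic.Int64

	var wg sync.WaitGroup
	for t := 0; t < nthreads; t++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			rng := rand.New(rand.NewSource(seed))
			for {
				s := spots[rng.Intn(nspots)]
				select { // V: wake a waiter or leave a token; no-op if already set
				case s <- struct{}{}:
				default:
				}
				select { // P: consume a token, possibly our own, or stop
				case <-s:
					ops.Add(1)
				case <-done:
					return
				}
			}
		}(int64(t))
	}
	time.Sleep(100 * time.Millisecond) // fixed-duration run, as in the harness
	close(done)
	wg.Wait()
	fmt.Println("churn operations:", ops.Load())
}
\end{verbatim}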
    306322
     
    328344\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
    329345\label{fig:churn:code}
     346%\end{figure}
     347\bigskip
     348%\begin{figure}
     349        \subfloat[][Throughput, 100 \ats per \proc]{
     350                \resizebox{0.5\linewidth}{!}{
     351                        \input{result.churn.jax.ops.pstex_t}
     352                }
     353                \label{fig:churn:jax:ops}
     354        }
     355        \subfloat[][Throughput, 2 \ats per \proc]{
     356                \resizebox{0.5\linewidth}{!}{
     357                        \input{result.churn.low.jax.ops.pstex_t}
     358                }
     359                \label{fig:churn:jax:low:ops}
     360        }
     361
     362        \subfloat[][Latency, 100 \ats per \proc]{
     363                \resizebox{0.5\linewidth}{!}{
     364                        \input{result.churn.jax.ns.pstex_t}
     365                }
     366                \label{fig:churn:jax:ns}
     367        }
     368        \subfloat[][Latency, 2 \ats per \proc]{
     369                \resizebox{0.5\linewidth}{!}{
     370                        \input{result.churn.low.jax.ns.pstex_t}
     371                }
     372                \label{fig:churn:jax:low:ns}
     373        }
      374        \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     375        \label{fig:churn:jax}
    330376\end{figure}
    331377
    332378\subsection{Results}
    333 \begin{figure}
    334         \subfloat[][Throughput, 100 \ats per \proc]{
    335                 \resizebox{0.5\linewidth}{!}{
    336                         \input{result.churn.jax.ops.pstex_t}
    337                 }
    338                 \label{fig:churn:jax:ops}
    339         }
    340         \subfloat[][Throughput, 2 \ats per \proc]{
    341                 \resizebox{0.5\linewidth}{!}{
    342                         \input{result.churn.low.jax.ops.pstex_t}
    343                 }
    344                 \label{fig:churn:jax:low:ops}
    345         }
    346 
    347         \subfloat[][Latency, 100 \ats per \proc]{
    348                 \resizebox{0.5\linewidth}{!}{
    349                         \input{result.churn.jax.ns.pstex_t}
    350                 }
    351                 \label{fig:churn:jax:ns}
    352         }
    353         \subfloat[][Latency, 2 \ats per \proc]{
    354                 \resizebox{0.5\linewidth}{!}{
    355                         \input{result.churn.low.jax.ns.pstex_t}
    356                 }
    357                 \label{fig:churn:jax:low:ns}
    358         }
    359         \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn on the benchmark on the Intel machine. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    360         \label{fig:churn:jax}
    361 \end{figure}
    362 
    363 \begin{figure}
    364         \subfloat[][Throughput, 100 \ats per \proc]{
    365                 \resizebox{0.5\linewidth}{!}{
    366                         \input{result.churn.nasus.ops.pstex_t}
    367                 }
    368                 \label{fig:churn:nasus:ops}
    369         }
    370         \subfloat[][Throughput, 2 \ats per \proc]{
    371                 \resizebox{0.5\linewidth}{!}{
    372                         \input{result.churn.low.nasus.ops.pstex_t}
    373                 }
    374                 \label{fig:churn:nasus:low:ops}
    375         }
    376 
    377         \subfloat[][Latency, 100 \ats per \proc]{
    378                 \resizebox{0.5\linewidth}{!}{
    379                         \input{result.churn.nasus.ns.pstex_t}
    380                 }
    381                 \label{fig:churn:nasus:ns}
    382         }
    383         \subfloat[][Latency, 2 \ats per \proc]{
    384                 \resizebox{0.5\linewidth}{!}{
    385                         \input{result.churn.low.nasus.ns.pstex_t}
    386                 }
    387                 \label{fig:churn:nasus:low:ns}
    388         }
    389         \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn on the benchmark on the AMD machine.
    390         For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    391         \label{fig:churn:nasus}
    392 \end{figure}
    393 Figure~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
    394 It uses the same representation as the previous benchmark : 15 runs where the dashed line show the extremums and the solid line the median.
     379Figures~\ref{fig:churn:jax} and~\ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
     380The same representation as the previous benchmarks is used: 15 runs, where the dashed lines show the extremes and the solid line the median.
    395381The performance cost of crossing the cache boundaries is still visible at the same \proc count.
    396 However, this benchmark has performance dominated by the cache traffic as \proc are constantly accessing the eachother's data.
     382However, this benchmark has performance dominated by the cache traffic, as \procs are constantly accessing each other's data.
    397383Scalability is notably worse than the previous benchmarks since there is inherently more communication between processors.
    398384Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
    399385An interesting aspect to note here is that the runtimes differ in how they handle this situation.
    400386Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready queue of the local \proc or to the ready queue of the remote \proc, which previously ran the \at.
    401 \CFA, tokio and Go all use the approach of unparking to the local \proc while Libfibre unparks to the remote \proc.
     387\CFA, Tokio and Go all use the approach of unparking to the local \proc while Libfibre unparks to the remote \proc.
    402388In this particular benchmark, the inherent chaos of the benchmark, in addition to the small memory footprint, means neither approach wins over the other.
     389
     390\begin{figure}
     391        \subfloat[][Throughput, 100 \ats per \proc]{
     392                \resizebox{0.5\linewidth}{!}{
     393                        \input{result.churn.nasus.ops.pstex_t}
     394                }
     395                \label{fig:churn:nasus:ops}
     396        }
     397        \subfloat[][Throughput, 2 \ats per \proc]{
     398                \resizebox{0.5\linewidth}{!}{
     399                        \input{result.churn.low.nasus.ops.pstex_t}
     400                }
     401                \label{fig:churn:nasus:low:ops}
     402        }
     403
     404        \subfloat[][Latency, 100 \ats per \proc]{
     405                \resizebox{0.5\linewidth}{!}{
     406                        \input{result.churn.nasus.ns.pstex_t}
     407                }
     408                \label{fig:churn:nasus:ns}
     409        }
     410        \subfloat[][Latency, 2 \ats per \proc]{
     411                \resizebox{0.5\linewidth}{!}{
     412                        \input{result.churn.low.nasus.ns.pstex_t}
     413                }
     414                \label{fig:churn:nasus:low:ns}
     415        }
      416        \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine.
      417        For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     418        \label{fig:churn:nasus}
     419\end{figure}
    403420
    404421As with the cycle benchmark, all runtimes achieve fairly similar performance here.
     
    406423Beyond that, performance starts to suffer from increased caching costs.
    407424
    408 Indeed on Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 1 and 100 \ats per \proc, \CFA, libfibre, Go and tokio achieve effectively equivalent performance for most \proc count.
     425Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 100 and 2 \ats per \proc, \CFA, libfibre, Go and Tokio achieve effectively equivalent performance for most \proc counts.
    409426
    410427However, Figure~\ref{fig:churn:nasus} again shows a somewhat different story on AMD.
    411 While \CFA, libfibre, and tokio achieve effectively equivalent performance for most \proc count, Go starts with better scaling at very low \proc counts but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
     428While \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc counts, Go starts with better scaling at very low \proc counts but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
    412429This performance difference is visible at both high and low \at counts.
    413430
     
    420437A second possible explanation is that Go may sometimes use the heap when allocating variables based on the result of escape analysis of the code.
    421438It is possible that variables that should be placed on the stack are placed on the heap.
    422 This could cause extra pointer chasing in the benchmark, heightning locality effects.
     439This could cause extra pointer chasing in the benchmark, heightening locality effects.
    423440Depending on how the heap is structured, this could also lead to false sharing.
    424441
    425442The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
    426 Indeed, the fact all runtimes achieve some scaling at lower \proc count demontrate that migrations do not need to be serialized.
     443Indeed, the fact that all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
    427444Again, these results demonstrate \CFA achieves satisfactory performance.
    428445
    429446\section{Locality}
    430 \begin{figure}
    431 \begin{cfa}
     447
     448\begin{figure}
     449\newsavebox{\myboxA}
     450\newsavebox{\myboxB}
     451
     452\begin{lrbox}{\myboxA}
     453\begin{cfa}[tabsize=3]
    432454Thread.main() {
    433455        count := 0
     
    436458                // go through the array
    437459                @work( a )@
     460
    438461                spots[r].V()
    439462                spots[r].P()
     
    444467}
    445468\end{cfa}
    446 \begin{cfa}
     469\end{lrbox}
     470
     471\begin{lrbox}{\myboxB}
     472\begin{cfa}[tabsize=3]
    447473Thread.main() {
    448474        count := 0
     
    460486}
    461487\end{cfa}
     488\end{lrbox}
     489
     490\subfloat[Thread$_1$]{\label{f:CFibonacci}\usebox\myboxA}
     491\hspace{3pt}
     492\vrule
     493\hspace{3pt}
     494\subfloat[Thread$_2$]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     495
    462496\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
    463497\label{fig:locality:code}
    464498\end{figure}
    465 As mentionned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.
     499
     500As mentioned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.
    466501\footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
    467502The locality experiment includes two variations of the churn benchmark, where an array of data is added.
    468503In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
    469504The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
    470 In the @noshare@ variation the array is not passed on and each thread continously accesses its private array.
     505In the @noshare@ variation the array is not passed on and each thread continuously accesses its private array.
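A Go sketch of the @share@ variation (again illustrative: the names, sizes and channel encoding are chosen here, not taken from the benchmark) makes the ownership transfer explicit by sending the array through the semaphore's channel, so the woken goroutine continues on memory last written by another thread; @noshare@ simply omits the handoff and keeps the array private.
\begin{verbatim}
package main

import (
	"math/rand"
	"sync"
	"time"
)

const cells = 1024 // array size; illustrative

// work increments a few random cells, standing in for the @work( a )@ step.
func work(a []uint64, rng *rand.Rand) {
	for i := 0; i < 16; i++ {
		a[rng.Intn(len(a))]++
	}
}

func main() {
	const nthreads, nspots = 8, 8
	// Each spot carries the array along with the wakeup, playing the role
	// of the semaphore's shadow queue.
	spots := make([]chan []uint64, nspots)
	for i := range spots {
		spots[i] = make(chan []uint64, 1)
	}
	done := make(chan struct{})

	var wg sync.WaitGroup
	for t := 0; t < nthreads; t++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			rng := rand.New(rand.NewSource(seed))
			a := make([]uint64, cells) // starts out private
			for {
				work(a, rng)
				r := rng.Intn(nspots)
				select { // V: hand the array off with the signal
				case spots[r] <- a:
				default: // already signalled; keep our array (a simplification)
				}
				select { // P: adopt whichever array accompanies the wakeup
				case a = <-spots[r]:
				case <-done:
					return
				}
			}
		}(int64(t))
	}
	time.Sleep(100 * time.Millisecond) // fixed-duration run, as in the harness
	close(done)
	wg.Wait()
}
\end{verbatim}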
    471506
    473508The objective here is to highlight the different decisions made by the runtimes when unparking.
     
    480515
    481516\subsection{Results}
     517
    482518\begin{figure}
    483519        \subfloat[][Throughput share]{
     
    506542                \label{fig:locality:jax:noshare:ns}
    507543        }
    508         \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
      544        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    509545        \label{fig:locality:jax}
    510546\end{figure}
     
    535571                \label{fig:locality:nasus:noshare:ns}
    536572        }
    537         \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
      573        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    538574        \label{fig:locality:nasus}
    539575\end{figure}
    540576
    541 Figure~\ref{fig:locality:jax} and \ref{fig:locality:nasus} shows the results on Intel and AMD respectively.
     577Figures~\ref{fig:locality:jax} and~\ref{fig:locality:nasus} show the results on Intel and AMD respectively.
    542578In both cases, the graphs on the left column show the results for the @share@ variation and the graphs on the right column show the results for the @noshare@.
    543579
    544580On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
    545 On the left of the figure showing the results for the shared variation, where \CFA and tokio slightly outperform libfibre as expected.
    546 And correspondingly on the right, we see the expected performance inversion where libfibre now outperforms \CFA and tokio.
    547 Otherwise the results are similar to the churn benchmark, with lower throughtput due to the array processing.
     581The left column of the figure shows the results for the shared variation, where \CFA and Tokio slightly outperform libfibre, as expected.
     582Correspondingly, the right column shows the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
     583Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
    548584Presumably, the reasons why Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.
    549585
    550586Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
    551587\todo{why is cfa slower?}
    552 Again, we see the same story, where tokio and libfibre swap places and Go trails behind.
     588Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.
    553589
    554590\section{Transfer}
     
    572608In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
    573609In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by:
    574 $ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueuing and dequeuing a \at and $NP$ is the number of \procs.
     610$ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueueing and dequeuing a \at and $NP$ is the number of \procs.
    575611However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay will increase.
    576612The semaphore version is an approximation of the strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
    577613The benchmark effectively provides the fairness guarantee in this case.
    578 In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsability and any unfairness will be measurable.
     614In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness will be measurable.
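For intuition, plugging made-up numbers into the ideal-delay formula: with a context-switch latency $CSL = 1~\mu s$, a queueing cost $SL = 0.5~\mu s$ and $NP = 4$ \procs, the delay between leaders would be
\[
\frac{CSL + SL}{NP - 1} = \frac{1 + 0.5}{4 - 1} = 0.5~\mu s ,
\]
so measured delays that are orders of magnitude larger indicate \ats running repeatedly before every \at has run once.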
    579615
    580616While this is a fairly artificial scenario, it requires only a few simple pieces.
     
    634670libfibre  & 127 $\mu$s  & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s  & ~36.7 ms   & DNC         & DNC         \\
    635671Go        & 106 $\mu$s  & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s  & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
    636 tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
     672Tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
    637673\end{tabular}
    638674\end{centering}
    639 \caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader. }
     675\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
    640676\label{fig:transfer:res}
    641677\end{figure}
    642 Figure~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs, where each experiement runs 100 \at per \proc.
     678
      679Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and all \procs, where each experiment runs 100 \ats per \proc.
    643680Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
    644 As such, data points that are the on the same order of magnitude as eachother should be basically considered equal.
    645 The takeaway of this experiement is the presence of very large differences.
      681As such, data points that are on the same order of magnitude as each other should basically be considered equal.
      682The takeaway of this experiment is the presence of very large differences in fairness between the runtimes.
     647684The semaphore variation is denoted ``Park'', where the number of \ats dwindles as the new leader is acknowledged.
    647684The yielding variation is denoted ``Yield''.
    648 The experiement was only run for the extremums of the number of cores since the scaling per core behaves like previous experiements.
      685The experiment was run only for the extreme numbers of cores, since the per-core scaling behaves like previous experiments.
     649686This experiment clearly demonstrates that, while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
    650687The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
     
     652689Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
    653690However, the yielding variation shows an entirely different picture.
    654 Since libfibre and tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues will never steal from other \procs.
    655 The result is that the experiement simply does not complete for these runtime.
     691Since libfibre and Tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues will never steal from other \procs.
      692The result is that the experiment simply does not complete for these runtimes.
     656693Without \procs stealing from the \proc running the leader, the experiment can never terminate.
    657 Go manages to complete the experiement because it adds preemption on top of classic work-stealing.
     694Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
     658695However, since preemption is fairly costly, it achieves significantly worse performance.
    659696In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
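The behaviour described above can be summarized by a schematic work-stealing dispatch loop; the names (pop_local, steal_from) are hypothetical and do not correspond to any of the runtimes:
\begin{lstlisting}[language=C]
typedef struct task task_t;
typedef struct proc proc_t;
extern unsigned nprocs;
extern proc_t * procs[];
extern task_t * pop_local( proc_t * self );      // take from own ready queue
extern task_t * steal_from( proc_t * victim );   // take from another proc's queue

task_t * next_task( proc_t * self ) {
	task_t * t = pop_local( self );      // local work always has priority;
	if ( t ) return t;                   // a yielding task re-enters the local
	                                     // queue, so this branch repeats forever
	for ( unsigned v = 0; v < nprocs; v += 1 ) {  // steal only when idle
		t = steal_from( procs[v] );
		if ( t ) return t;
	}
	return NULL;                         // park and retry later
}
\end{lstlisting}
Without preemption, the \proc running the leader never reaches the stealing loop, which is why the Yield variation never completes on these runtimes.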
  • doc/theses/thierry_delisle_PhD/thesis/text/existing.tex

    re116db3 r3ce3fb9  
     5050It can therefore be desirable for schedulers to support \ats with identical priorities and/or to automatically set and adjust priorities for \ats.
    5151Most common operating systems use some variant on priorities with overlaps and dynamic priority adjustments.
    52 For example, Microsoft Windows uses a pair of priorities
    53 \cite{win:priority}, one specified by users out of ten possible options and one adjusted by the system.
     52For example, Microsoft Windows uses a pair of priorities~\cite{win:priority}, one specified by users out of ten possible options and one adjusted by the system.
    5453
    5554\subsection{Uninformed and Self-Informed Dynamic Schedulers}
     
     137136The scheduler may also temporarily adjust priorities after certain events, such as the completion of I/O requests.
    138137
    139 In~\cite{russinovich2009windows}, Chapter 1 section ``Processes, Threads, and Jobs'' discusses the scheduling policy more in depth.
    140 Multicore scheduling is based on a combination of priorities, preferred \proc.
    141 Each \at is assigned an \newterm{ideal} \proc using a round-robin policy.
    142 \Gls{at} are distributed among the \procs according to their priority, preferring to match \ats to their ideal \proc and then to the last \proc they ran on.
    143 This is similar to a variation of work stealing, where the stealing \proc restore the \at to its original \proc after running it, but with priorities added onto the mix.
     138In~\cite{russinovich2009windows}, Chapter 1 section ``Processes, Threads, and Jobs''\todo{Look up section number.} discusses the scheduling policy more in depth.
     139Multicore scheduling is based on a combination of priorities and preferred \proc.
      140Each \at is assigned an initial \proc, called its \newterm{ideal} \proc, using a round-robin policy.
     141\Glspl{at} are distributed among the \procs according to their priority, preferring to match \ats to their ideal \proc and then to the last \proc they ran on.
      142This approach is a variation of work stealing, where the stealing \proc restores the \at to its original \proc after running it, combined with priorities.
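As an illustration only, not Windows' actual implementation, the placement policy just described might be sketched as follows, where is_idle and lowest_priority_proc are hypothetical helpers:
\begin{lstlisting}[language=C]
typedef struct thread {
	unsigned ideal;                      // proc assigned at creation
	unsigned last;                       // proc the thread last ran on
} thread_t;

extern unsigned nprocs;
extern int is_idle( unsigned proc );
extern unsigned lowest_priority_proc( void );

static unsigned next_ideal = 0;
unsigned assign_ideal( void ) {          // round-robin at thread creation
	return next_ideal++ % nprocs;
}

unsigned place( thread_t * t ) {         // choose a proc for a ready thread
	if ( is_idle( t->ideal ) ) return t->ideal;  // first preference: ideal proc
	if ( is_idle( t->last ) ) return t->last;    // second: last proc it ran on
	return lowest_priority_proc();       // otherwise, compete on priority
}
\end{lstlisting}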
    144143
    145144\paragraph{Apple OS X}
     
    203202
    204203\paragraph{Grand Central Dispatch}
    205 An Apple\cite{apple:gcd} API that offers task parallelism~\cite{wiki:taskparallel}.
     204An Apple~\cite{apple:gcd} API that offers task parallelism~\cite{wiki:taskparallel}.
    206205Its distinctive aspect is multiple ``Dispatch Queues'', some of which are created by programmers.
    207206Each queue has its own local ordering guarantees, \eg \ats on queue $A$ are executed in \emph{FIFO} order.
    208207
    209 While the documentation only gives limited insight into the scheduling and load balancing approach, \cite{apple:gcd2} suggests an approach fairly classic;
    210 Where each \proc has a queue of \newterm{blocks} to run, effectively \ats, and they drain their respective queues in \glsxtrshort{fifo}.
    211 They seem to add the concept of dependent queues with clear ordering, where a executing a block ends-up scheduling more blocks.
    212 In terms of semantics, these Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics.
      208While the documentation only gives limited insight into the scheduling and load-balancing approach, \cite{apple:gcd2} suggests a fairly classic design.
      209Each \proc has a queue of \ats to run, called \newterm{blocks}, which is drained in \glsxtrshort{fifo} order.
      210\todo{update: They seem to add the concept of dependent queues with clear ordering, where executing a block ends up scheduling more blocks.
     211In terms of semantics, these Dispatch Queues seem to be very similar to Intel\textregistered ~TBB \lstinline{execute()} and predecessor semantics.}
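For reference, a minimal use of the public GCD \glsxtrshort{api} looks as follows; this sketch only shows the queue abstraction and says nothing about the internal scheduling discussed above:
\begin{lstlisting}[language=C]
// compile on macOS with: clang -fblocks example.c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main( void ) {
	dispatch_queue_t q = dispatch_queue_create( "example.serial", DISPATCH_QUEUE_SERIAL );
	for ( int i = 0; i < 3; i += 1 )
		dispatch_async( q, ^{ printf( "block %d\n", i ); } );  // FIFO on this queue
	dispatch_sync( q, ^{} );             // wait for the queue to drain
	dispatch_release( q );
}
\end{lstlisting}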
     212
    213213
    214214\paragraph{LibFibre}
  • doc/theses/thierry_delisle_PhD/thesis/text/io.tex

    re116db3 r3ce3fb9  
    141141In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading.
    142142However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run.
    143 This approach is used by languages like Go\cite{GITHUB:go}, frameworks like libuv\cite{libuv}, and web servers like Apache~\cite{apache} and Nginx~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems.
     143This approach is used by languages like Go~\cite{GITHUB:go}, frameworks like libuv~\cite{libuv}, and web servers like Apache~\cite{apache} and NGINX~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems.
    144144This advantage is especially relevant for languages like Go, which offer a homogeneous \glsxtrshort{api} across all platforms.
     145145This contrasts with C, which has a very limited standard \glsxtrshort{api} for \io; \eg, the C standard library has no networking.
     
    151151
     152152For this project, I selected @io_uring@, in large part because of its generality.
    153 While @epoll@ has been shown to be a good solution for socket \io (\cite{DBLP:journals/pomacs/KarstenB20}), @io_uring@'s transparent support for files, pipes, and more complex operations, like @splice@ and @tee@, make it a better choice as the foundation for a general \io subsystem.
      153While @epoll@ has been shown to be a good solution for socket \io~\cite{Karsten20}, @io_uring@'s transparent support for files, pipes, and more complex operations, like @splice@ and @tee@, makes it a better choice as the foundation for a general \io subsystem.
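As background for the following sections, a minimal sketch of the @io_uring@ submit/complete cycle, written against the liburing helper library and unrelated to the \CFA runtime's actual engine, looks as follows:
\begin{lstlisting}[language=C]
// link with: -luring
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main( void ) {
	struct io_uring ring;
	char buf[512];
	if ( io_uring_queue_init( 8, &ring, 0 ) < 0 ) return 1;  // create SQ/CQ rings
	int fd = open( "/etc/hostname", O_RDONLY );
	if ( fd < 0 ) return 1;
	struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );   // get a submission entry
	io_uring_prep_read( sqe, fd, buf, sizeof(buf), 0 );      // describe the read
	io_uring_submit( &ring );                                // hand it to the kernel
	struct io_uring_cqe * cqe;
	io_uring_wait_cqe( &ring, &cqe );                        // block for the completion
	printf( "read %d bytes\n", cqe->res );
	io_uring_cqe_seen( &ring, cqe );                         // consume the completion
	io_uring_queue_exit( &ring );
}
\end{lstlisting}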
    154154
    155155\section{Event-Engine}