\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test small, controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} of a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} of a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, on both architectures an L2 cache miss served by the L3 cache of a different CPU incurs significant latency; on the AMD machine, however, cache misses served by a different L3 instance on the same CPU also incur high latency.

\section{Cycling latency}
\begin{figure}
	\centering
	\input{cycle.pstex_t}
	\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \gls{at} unparks the next \gls{at} in the cycle before parking itself.}
	\label{fig:cycle}
\end{figure}
The most basic evaluation of any ready queue is the latency needed to push and pop one element from it.
Since these two operations also describe a @yield@ operation, many systems use @yield@ as their most basic benchmark.
However, yielding can be treated as a special case by optimizing it away (dead code) since the number of ready \glspl{at} does not change.
Not all systems perform this optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \glspl{at} into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \gls{at} unparks the next \gls{at} before parking itself.
Unparking the next \gls{at} pushes that \gls{at} onto the ready queue, while the ensuing park triggers a pop to select the next \gls{at} to run.
Hence, the underlying runtime cannot rely on the number of ready \glspl{at} staying constant over the duration of the experiment.
In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking.
That is, the runtime cannot anticipate that the current \gls{at} will immediately park.
The size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \gls{at} parks, because of time-slicing or multiple \procs.
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt at silently omitting ready-queue operations, the ring of \glspl{at} is made big enough that the \glspl{at} have time to fully park before being unparked again.
(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

To avoid this benchmark being affected by idle-sleep handling, the number of rings is several times greater than the number of \glspl{proc}.
This design avoids the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@, and carefully picking the order of the @P@ and @V@ with respect to the loop condition.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@wait()@
		@this.next.wake()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
\end{figure}
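To make the design concrete, the following is a minimal runnable sketch of the same ring structure in Go, whose goroutines provide a comparable user-level threading model.
It is an illustration rather than the harness used for the measurements below: @park@/@unpark@ is approximated by a one-slot channel per \gls{at}, so a wake is a saturating V and a wait is a P that may not block, and the ring and iteration counts are arbitrary placeholders.
\begin{cfa}
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// One at in a ring: wake() is a saturating V on a one-slot channel,
// wait() is the matching P, mirroring unpark/park.
type node struct {
	sem  chan struct{}
	next *node
}

func (n *node) wake() {
	select {
	case n.sem <- struct{}{}: // post the token
	default: // already posted: saturate at 1
	}
}

func (n *node) wait() { <-n.sem } // take the token, blocking if absent

func main() {
	const rings, ringSize, rounds = 64, 5, 10000 // illustrative sizes
	var count int64
	var wg sync.WaitGroup
	for r := 0; r < rings; r++ {
		// build one circular ring of ats
		ns := make([]node, ringSize)
		for i := range ns {
			ns[i].sem = make(chan struct{}, 1)
			ns[i].next = &ns[(i+1)%ringSize]
		}
		ns[0].wake() // seed the ring with a single token
		for i := range ns {
			wg.Add(1)
			go func(n *node) {
				defer wg.Done()
				for c := 0; c < rounds; c++ {
					n.wait()      // park
					n.next.wake() // unpark the next at in the ring
				}
				atomic.AddInt64(&count, rounds)
			}(&ns[i])
		}
	}
	wg.Wait()
	fmt.Println("total operations:", count)
}
\end{cfa}
Because each ring is seeded with exactly one token, termination needs no extra machinery in this sketch: the last wake of a ring lands on a \gls{at} that has already exited, and the saturating V simply deposits a token that is never consumed.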
\subsection{Results}
Figure~\ref{fig:cycle:jax} shows the throughput and latency as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.

\begin{figure}
	\subfloat[][Throughput, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:ops}
	}
	\subfloat[][Throughput, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:low:ops}
	}

	\subfloat[][Latency, 100 cycles per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ns.pstex_t}
		}
	}
	\subfloat[][Latency, 1 cycle per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:low:ns}
	}
	\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and latency as a function of \proc count, with 100 cycles per \proc and 5 \ats per cycle.}
	\label{fig:cycle:jax}
\end{figure}
\todo{results discussion}

\section{Yield}
For completeness, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \glspl{at} that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \glspl{at} per \gls{proc}, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait()@/@next.wake()@ pair is replaced by @yield@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
\end{figure}
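As a runnable companion to the pseudo code, the same loop can be expressed in Go, where @runtime.Gosched()@ plays the role of @yield@; the \gls{at} and iteration counts are again illustrative placeholders.
\begin{cfa}
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

func main() {
	procs := runtime.GOMAXPROCS(0) // stand-in for the proc count
	ats := 100 * procs             // 100 ats per proc
	const rounds = 100000
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < ats; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := 0; c < rounds; c++ {
				runtime.Gosched() // yield back through the ready queue
			}
			atomic.AddInt64(&count, rounds)
		}()
	}
	wg.Wait()
	fmt.Println("total yields:", count)
}
\end{cfa}
Whether such a yield is optimized into a \emph{nop} is exactly the runtime-dependent shortcut discussed above, so cross-runtime comparisons of this benchmark should be read with care.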
\subsection{Results}
Figure~\ref{fig:yield:jax} shows the throughput and latency as a function of \proc count, where each run uses 100 \ats per \proc.

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:ns}
	}
	\subfloat[][Latency, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:low:ns}
	}
	\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and latency as a function of \proc count, with 100 \ats per \proc and 1 \at per \proc.}
	\label{fig:yield:jax}
\end{figure}
\todo{results discussion}

\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} upfront and none of the \glspl{at} communicate with each other.

The Churn benchmark represents more chaotic execution, where there is no relation between the last \gls{proc} on which a \gls{at} ran and blocked and the \gls{proc} that subsequently unblocks it.
With processor-specific ready queues, when a \gls{at} is unblocked by a different \gls{proc}, the unblocking \gls{proc} must either ``steal'' the \gls{at} from another processor or find it on a global queue.
This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \gls{at} data structure.
Either way, this benchmark aims to highlight how each scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \gls{at} picks a random semaphore, @V@s it to unblock any waiting \gls{at}, and then @P@s on the semaphore.
This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@
		@spots[r].P()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
\end{figure}
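The following runnable Go sketch mirrors Figure~\ref{fig:churn:code}, keeping the constraint that the number of \glspl{at} is at least the number of semaphores plus the number of \glspl{proc}.
The counting semaphores are modelled as buffered channels sized so a @V@ never blocks and the count can exceed 1; all constants are illustrative rather than the measured configuration.
\begin{cfa}
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
)

func main() {
	procs := runtime.GOMAXPROCS(0)
	const nSpots, rounds = 16, 10000 // illustrative sizes
	ats := nSpots + procs            // >= semaphores + procs
	// counting semaphores: a send is a V, a receive is a P; the
	// buffer lets the count grow past 1, so a P may be nonblocking
	spots := make([]chan struct{}, nSpots)
	for i := range spots {
		spots[i] = make(chan struct{}, ats)
	}
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < ats; i++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			prng := rand.New(rand.NewSource(seed)) // private prng
			for c := 0; c < rounds; c++ {
				r := prng.Intn(nSpots)
				spots[r] <- struct{}{} // V: unblock a waiter or bank a token
				<-spots[r]             // P: may block if the tokens are gone
			}
			atomic.AddInt64(&count, rounds)
		}(int64(i))
	}
	wg.Wait()
	fmt.Println("total operations:", count)
}
\end{cfa}
Note the private per-\gls{at} PRNG: a shared, locked random generator would serialize the \glspl{at} and mask the scheduling behaviour the benchmark is trying to expose.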
\subsection{Results}
Figure~\ref{fig:churn:jax} shows the throughput and latency as a function of \proc count, where each run uses 100 \ats per \proc.

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ns.pstex_t}
		}
	}
	\subfloat[][Latency, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:low:ns}
	}
	\caption[Churn Benchmark on Intel]{Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. Throughput is the total number of operations per second across all cores; latency is the duration of each operation.}
	\label{fig:churn:jax}
\end{figure}
\todo{results discussion}

\section{Locality}
\todo{code, setup, results}

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \glspl{at} is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership.
The leader \gls{at} then picks a new \gls{at} to be the next leader, \ie the new ``spinner'', and the cycle repeats.
The benchmark comes in two flavours for the non-leader \glspl{at}: once they have acknowledged the leader, they either block on a semaphore or spin while yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \glspl{at} are partitioned over the \glspl{proc} may need to balance the \glspl{at} for this experiment to terminate.
This problem occurs because the spinning \gls{at} is effectively preventing its \gls{proc} from running any other \gls{at}.
In the semaphore flavour, the number of runnable \glspl{at} eventually dwindles to only the leader.
This scenario is the simpler case for schedulers to handle, since \glspl{proc} eventually run out of work.
In the yielding flavour, the number of runnable \glspl{at} stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

\todo{code, setup, results}

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done = true
		return
	}

	// wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}

	// pick next leader
	leader = threads[ prng() % len(threads) ]

	// wake everyone
	if exhaust {
		for t in threads {
			if t != me { t.wake() }
		}
	}
}

Thread.wait() {
	this.idx_seen = lead_idx
	if exhaust { wait() }
	else { yield() }
}

Thread.main() {
	while !done {
		if leader == me { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}

\subsection{Results}
Figure~\ref{fig:transfer:jax} shows the throughput as a function of \proc count.

\todo{results discussion}
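For reference, a hedged Go approximation of Figure~\ref{fig:transfer:code} follows.
It is a sketch rather than the measured program: acknowledgements and the leader index are modelled with atomics, the constants are placeholders, and, instead of the 5-second timeout, the leader re-wakes any \gls{at} that parked with a stale acknowledgement.
\begin{cfa}
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
)

const (
	nThreads = 8
	stopIdx  = 1000
	exhaust  = true // true: block on a semaphore; false: spin yielding
)

type thread struct {
	idxSeen int64
	sem     chan struct{} // one-slot channel used as a binary semaphore
}

var (
	leadIdx int64
	leader  int64 // index of the current leader, initially thread 0
	done    int32
	threads [nThreads]*thread
)

func (t *thread) wake() {
	select { // saturating V
	case t.sem <- struct{}{}:
	default:
	}
}

func (t *thread) lead(me int) {
	idx := atomic.AddInt64(&leadIdx, 1)
	atomic.StoreInt64(&t.idxSeen, idx)
	if idx > stopIdx {
		atomic.StoreInt32(&done, 1)
		for _, o := range threads { // final wake so blocked threads see done
			o.wake()
		}
		return
	}
	// spin until every thread acknowledges this round; re-wake stale
	// threads that may have parked with the previous round's index
	for _, o := range threads {
		for atomic.LoadInt64(&o.idxSeen) != idx {
			o.wake()
			runtime.Gosched()
		}
	}
	atomic.StoreInt64(&leader, int64(rand.Intn(nThreads))) // pick next leader
	if exhaust {
		for i, o := range threads { // wake everyone for the next round
			if i != me {
				o.wake()
			}
		}
	}
}

func (t *thread) wait() {
	atomic.StoreInt64(&t.idxSeen, atomic.LoadInt64(&leadIdx)) // acknowledge
	if exhaust {
		<-t.sem // block until the leader ends the round
	} else {
		runtime.Gosched() // stay runnable, just yield
	}
}

func main() {
	var wg sync.WaitGroup
	for i := range threads {
		threads[i] = &thread{sem: make(chan struct{}, 1)}
	}
	for i := range threads {
		wg.Add(1)
		go func(me int) {
			defer wg.Done()
			for atomic.LoadInt32(&done) == 0 {
				if int(atomic.LoadInt64(&leader)) == me {
					threads[me].lead(me)
				} else {
					threads[me].wait()
				}
			}
		}(i)
	}
	wg.Wait()
	fmt.Println("rounds completed:", atomic.LoadInt64(&leadIdx))
}
\end{cfa}
Because Go preempts long-running goroutines, this version terminates regardless of the scheduler's load-balancing; as noted above, it is precisely runtimes without such preemption that this experiment stresses.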