Timestamp:
Aug 23, 2022, 6:40:54 AM (2 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
8baa40aa
Parents:
4fee301 (diff), 94eff4c (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc/theses/thierry_delisle_PhD/thesis/text
Files:
4 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

    r4fee301 r0c40bfe  
    11\chapter{Macro-Benchmarks}\label{macrobench}
    2 The previous chapter has demonstrated that the scheduler achieves its performance goal in small and controlled scenario.
    3 The next step is then to demonstrate that this stays true in more realistic and complete scenarios.
    4 This chapter presents two flavours of webservers that demonstrate that \CFA performs competitively with production environments.
    5 
    6 Webservers where chosen because they offer fairly simple applications that are still useful as standalone products.
    7 Furthermore, webservers are generally amenable to parallelisation since their workloads are mostly homogenous.
    8 They therefore offer a stringent performance benchmark for \CFA.
    9 Indeed existing solutions are likely to have close to optimal performance while the homogeneity of the workloads mean the additional fairness is not needed.
     2The previous chapter demonstrated the \CFA scheduler achieves its goal of equivalent performance in small and controlled \at-scheduling scenarios.
      3The next step is to demonstrate that this performance holds in more realistic and complete scenarios.
     4Therefore, this chapter exercises both \at and I/O scheduling using two flavours of webservers that demonstrate \CFA performs competitively with production environments.
     5
     6Webservers are chosen because they offer fairly simple applications that perform complex I/O, both network and disk, and are useful as standalone products.
     7Furthermore, webservers are generally amenable to parallelization since their workloads are mostly homogeneous.
     8Therefore, webservers offer a stringent performance benchmark for \CFA.
     9Indeed, existing webservers have close to optimal performance, while the homogeneity of the workload means fairness may not be a problem.
     10As such, these experiments should highlight any \CFA fairness cost (overhead) in realistic scenarios.
    1011
    1112\section{Memcached}
    12 Memcached~\cite{memcached} is an in memory key-value store that is used in many production environments, \eg \cite{atikoglu2012workload}.
    13 This also server also has the notable added benefit that there exists a full-featured front-end for performance testing called @mutilate@~\cite{GITHUB:mutilate}.
    14 Experimenting on memcached allows for a simple test of the \CFA runtime as a whole, it will exercise the scheduler, the idle-sleep mechanism, as well the \io subsystem for sockets.
    15 This experiment does not exercise the \io subsytem with regards to disk operations.
     13Memcached~\cite{memcached} is an in-memory key-value store used in many production environments, \eg \cite{atikoglu2012workload}.
      14In fact, the Memcached server is so popular that there exists a full-featured front-end for performance testing, called @mutilate@~\cite{GITHUB:mutilate}.
      15Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well as the \io subsystem for sockets.
     16Note, this experiment does not exercise the \io subsystem with regards to disk operations because Memcached is an in-memory server.
    1617
    1718\subsection{Benchmark Environment}
    18 These experiments are run on a cluster of homogenous Supermicro SYS-6017R-TDF compute nodes with the following characteristics:
     19The Memcached experiments are run on a cluster of homogeneous Supermicro SYS-6017R-TDF compute nodes with the following characteristics.
     20\begin{itemize}
     21\item
    1922The server runs Ubuntu 20.04.3 LTS on top of Linux Kernel 5.11.0-34.
     23\item
    2024Each node has 2 Intel(R) Xeon(R) CPU E5-2620 v2 running at 2.10GHz.
     25\item
    2126These CPUs have 6 cores per CPU and 2 \glspl{hthrd} per core, for a total of 24 \glspl{hthrd}.
    22 The cpus each have 384 KB, 3 MB and 30 MB of L1, L2 and L3 caches respectively.
     27\item
     28The CPUs each have 384 KB, 3 MB and 30 MB of L1, L2 and L3 caches respectively.
     29\item
    2330Each node is connected to the network through a Mellanox 10 Gigabit Ethernet port.
    24 The network route uses 1 Mellanox SX1012 10/40 Gigabit Ethernet cluster switch.
    25 
    26 \subsection{Memcached with threads per connection}
    27 Comparing against memcached using a user-level runtime only really make sense if the server actually uses this threading model.
    28 Indeed, evaluating a user-level runtime with 1 \at per \proc is not meaningful since it does not exercise the runtime, it simply adds some overhead to the underlying OS scheduler.
    29 
    30 One approach is to use a webserver that uses a thread-per-connection model, where each incoming connection is served by a single \at in a strict 1-to-1 pairing.
    31 This models adds flexibility to the implementation, as the serving logic can now block on user-level primitives without affecting other connections.
    32 
    33 Memcached is not built according to a thread-per-connection model, but there exists a port of it that is, which was built for libfibre in \cite{DBLP:journals/pomacs/KarstenB20}.
    34 Therefore this version can both be compared to the original version and to a port to the \CFA runtime.
    35 
    36 As such, this memcached experiment compares 3 different varitions of memcached:
    37 \begin{itemize}
    38  \item \emph{vanilla}: the official release of memcached, version~1.6.9.
    39  \item \emph{fibre}: a modification of vanilla which uses the thread per connection model on top of the libfibre runtime~\cite{DBLP:journals/pomacs/KarstenB20}.
     31\item
     32Network routing is performed by a Mellanox SX1012 10/40 Gigabit Ethernet switch.
     33\end{itemize}
     34
     35\subsection{Memcached threading}
     36Memcached can be built to use multiple threads in addition to its @libevent@ subsystem to handle requests.
     37When enabled, the threading implementation operates as follows~\cite{https://docs.oracle.com/cd/E17952_01/mysql-5.6-en/ha-memcached-using-threads.html}:
     38\begin{itemize}
     39\item
     40Threading is handled by wrapping functions within the code to provide basic protection from updating the same global structures at the same time.
     41\item
      42Each thread uses its own instance of @libevent@ to help improve performance.
     43\item
     44TCP/IP connections are handled with a single thread listening on the TCP/IP socket.
     45Each connection is then distributed to one of the active threads on a simple round-robin basis.
      46Each connection then operates solely within this thread while the connection remains open (a sketch of this hand-off is given after this list).
     47\item
     48For UDP connections, all the threads listen to a single UDP socket for incoming requests.
     49Threads that are not currently dealing with another request ignore the incoming packet.
     50One of the remaining, nonbusy, threads reads the request and sends the response.
     51This implementation can lead to increased CPU load as threads wake from sleep to potentially process the request.
     52\end{itemize}
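For illustration only, the following minimal sketch (plain C with pthreads, not Memcached's actual code) shows the kind of round-robin hand-off described above: a single listener thread accepts connections and passes each socket to one of the worker threads, each of which stands in for a per-thread @libevent@ loop. The pipe-based hand-off, @NWORKERS@, and the function names are assumptions made for the example.
\begin{lstlisting}[language=C]
// Hedged sketch, not Memcached source: a listener thread hands accepted
// sockets to worker threads round-robin; each worker owns its connections.
#include <pthread.h>
#include <unistd.h>
#include <sys/socket.h>

#define NWORKERS 4
static int notify[NWORKERS][2];          // one pipe per worker: [0] read end, [1] write end

static void * worker( void * arg ) {     // stand-in for a per-thread libevent loop
	int id = (int)(long)arg, conn;
	while ( read( notify[id][0], &conn, sizeof(conn) ) == sizeof(conn) ) {
		// register 'conn' with this thread's event loop and service it until it closes
	}
	return NULL;
}

static void start_workers( void ) {
	for ( int i = 0; i < NWORKERS; i += 1 ) {
		pipe( notify[i] );
		pthread_t t;
		pthread_create( &t, NULL, worker, (void *)(long)i );
		pthread_detach( t );
	}
}

static void listener( int listen_fd ) {  // single thread listening on the TCP/IP socket
	for ( int next = 0;; next = (next + 1) % NWORKERS ) {
		int conn = accept( listen_fd, NULL, NULL );
		if ( conn < 0 ) continue;
		write( notify[next][1], &conn, sizeof(conn) );   // round-robin hand-off
	}
}
\end{lstlisting}
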
     53Here, Memcached is based on an event-based webserver architecture~\cite{Pai99Flash}, using \gls{kthrd}ing to run multiple (largely) independent event engines, and if needed, spinning up additional kernel threads to handle blocking I/O.
      54Alternative webserver architectures are:
     55\begin{itemize}
     56\item
     57pipeline~\cite{Welsh01}, where the event engine is subdivided into multiple stages and the stages are connected with asynchronous buffers, where the final stage has multiple threads to handle blocking I/O.
     58\item
      59thread-per-connection~\cite{apache,Behren03}, where each incoming connection is served by a single \at in a strict 1-to-1 pairing, using the thread stack to hold the event state and folding the event engine implicitly into the threading runtime with its nonblocking I/O mechanism (a sketch of this model follows below).
     60\end{itemize}
     61Both pipelining and thread-per-connection add flexibility to the implementation, as the serving logic can now block without halting the event engine~\cite{Harji12}.
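For comparison, a minimal sketch of the thread-per-connection model follows (again plain C with pthreads rather than the code of any webserver evaluated here; in the libfibre and \CFA ports the created thread is a user-level \at rather than a kernel thread, and the names are assumptions for the example).
\begin{lstlisting}[language=C]
// Hedged sketch of thread-per-connection: each accepted connection is served
// by its own thread in a strict 1-to-1 pairing, so the serving logic may block.
#include <pthread.h>
#include <unistd.h>
#include <sys/socket.h>

static void * serve( void * arg ) {
	int conn = (int)(long)arg;
	// read requests and write responses; blocking here only stalls this connection
	close( conn );
	return NULL;
}

static void acceptor( int listen_fd ) {
	for ( ;; ) {
		int conn = accept( listen_fd, NULL, NULL );
		if ( conn < 0 ) continue;
		pthread_t t;
		pthread_create( &t, NULL, serve, (void *)(long)conn );   // one thread per connection
		pthread_detach( t );
	}
}
\end{lstlisting}
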
     62
     63However, \gls{kthrd}ing in Memcached is not amenable to this work, which is based on \gls{uthrding}.
     64While it is feasible to layer one user thread per kernel thread, it is not meaningful as it fails to exercise the user runtime;
     65it simply adds extra scheduling overhead over the kernel threading.
     66Hence, there is no direct way to compare Memcached using a kernel-level runtime with a user-level runtime.
     67
     68Fortunately, there exists a recent port of Memcached to \gls{uthrding} based on the libfibre~\cite{DBLP:journals/pomacs/KarstenB20} \gls{uthrding} library.
     69This port did all of the heavy-lifting, making it straightforward to replace the libfibre user-threading with the \gls{uthrding} in \CFA.
     70It is now possible to compare the original kernel-threading Memcached with both user-threading runtimes in libfibre and \CFA.
     71
     72As such, this Memcached experiment compares 3 different variations of Memcached:
     73\begin{itemize}
     74 \item \emph{vanilla}: the official release of Memcached, version~1.6.9.
     75 \item \emph{fibre}: a modification of vanilla using the thread-per-connection model on top of the libfibre runtime.
    4076 \item \emph{cfa}: a modification of the fibre webserver that replaces the libfibre runtime with \CFA.
    4177\end{itemize}
    4278
    4379\subsection{Throughput} \label{memcd:tput}
     80This experiment is done by having the clients establish 15,360 total connections, which persist for the duration of the experiment.
     81The clients then send read and write queries with only 3\% writes (updates), attempting to follow a desired query rate, and the server responds to the desired rate as best as possible.
     82Figure~\ref{fig:memcd:rate:qps} shows the 3 server versions at different client rates, ``Target \underline{Q}ueries \underline{P}er \underline{S}econd'', and the actual rate, ``Actual QPS'', for all three webservers.
     83
     84Like the experimental setup in Chapter~\ref{microbench}, each experiment is run 15 times, and for each client rate, the measured webserver rate is plotted.
     85The solid line represents the median while the dashed and dotted lines represent the maximum and minimum respectively.
      86For rates below 500K queries per second, all three webservers match the client rate.
     87Beyond 500K, the webservers cannot match the client rate.
     88During this interval, vanilla Memcached achieves the highest webserver throughput, with libfibre and \CFA slightly lower but very similar throughput.
     89Overall the performance of all three webservers is very similar, especially considering that at 500K the servers have reached saturation, which is discussed more in the next section.
     90
    4491\begin{figure}
    4592        \centering
    46         \input{result.memcd.rate.qps.pstex_t}
    47         \caption[Memcached Benchmark: Throughput]{Memcached Benchmark: Throughput\smallskip\newline Desired vs Actual request rate for 15360 connections. Target QPS is the request rate that the clients are attempting to maintain and Actual QPS is the rate at which the server is able to respond.}
     93        \resizebox{0.83\linewidth}{!}{\input{result.memcd.rate.qps.pstex_t}}
     94        \caption[Memcached Benchmark: Throughput]{Memcached Benchmark: Throughput\smallskip\newline Desired vs Actual query rate for 15,360 connections. Target QPS is the query rate that the clients are attempting to maintain and Actual QPS is the rate at which the server is able to respond.}
    4895        \label{fig:memcd:rate:qps}
    49 \end{figure}
    50 Figure~\ref{fig:memcd:rate:qps} shows the result for the throughput of all three webservers.
    51 This experiment is done by having the clients establish 15360 total connections, which persist for the duration of the experiments.
    52 The clients then send requests, attempting to follow a desired request rate.
    53 The servers respond to the desired rate as best they can and the difference between desired rate, ``Target \underline{Q}ueries \underline{P}er \underline{S}econd'', and the actual rate, ``Actual QPS''.
    54 The results show that \CFA achieves equivalent throughput even when the server starts to reach saturation.
    55 Only then does it start to fall behind slightly.
    56 This is a demonstration of the \CFA runtime achieving its performance goal.
    57 
    58 \subsection{Tail Latency}
    59 \begin{figure}
     96%\end{figure}
     97\bigskip
     98%\begin{figure}
    6099        \centering
    61         \input{result.memcd.rate.99th.pstex_t}
    62         \caption[Memcached Benchmark : 99th Percentile Lantency]{Memcached Benchmark : 99th Percentile Lantency\smallskip\newline 99th Percentile of the response latency as a function of \emph{desired} request rate for 15360 connections. }
     100        \resizebox{0.83\linewidth}{!}{\input{result.memcd.rate.99th.pstex_t}}
      101        \caption[Memcached Benchmark: 99th Percentile Latency]{Memcached Benchmark: 99th Percentile Latency\smallskip\newline 99th percentile of the response latency as a function of \emph{desired} query rate for 15,360 connections. }
    63102        \label{fig:memcd:rate:tail}
    64103\end{figure}
    65 Another important performance metric to look at is \newterm{tail} latency.
    66 Since many web applications rely on a combination of different requests made in parallel, the latency of the slowest response, \ie tail latency, can dictate overall performance.
    67 Figure~\ref{fig:memcd:rate:tail} shows the 99th percentile latency results for the same experiment memcached experiment.
    68 As is expected, the latency starts low and increases as the server gets close to saturation, point at which the latency increses dramatically.
    69 Note that the figure shows \emph{target} request rate, the actual response rate is given in Figure~\ref{fig:memcd:rate:qps} as this is the same underlying experiment.
     104
     105\subsection{Tail Latency}
     106Another popular performance metric is \newterm{tail} latency, which indicates some notion of fairness among requests across the experiment, \ie do some requests wait longer than other requests for service.
     107Since many web applications rely on a combination of different queries made in parallel, the latency of the slowest response, \ie tail latency, can dictate a performance perception.
     108Figure~\ref{fig:memcd:rate:tail} shows the 99th percentile latency results for the same Memcached experiment.
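As a reminder of what is being plotted, the 99th percentile can be computed with the standard nearest-rank method; the sketch below (plain C, not the exact computation performed by @mutilate@) makes the metric concrete.
\begin{lstlisting}[language=C]
// Hedged sketch: nearest-rank 99th percentile of a set of latency samples.
#include <stdlib.h>
#include <math.h>

static int cmp_dbl( const void * a, const void * b ) {
	double x = *(const double *)a, y = *(const double *)b;
	return (x > y) - (x < y);
}

// smallest sample such that at least 99% of the samples are less than or equal to it
double percentile99( double * lat, size_t n ) {
	qsort( lat, n, sizeof(double), cmp_dbl );
	size_t rank = (size_t)ceil( 0.99 * n );      // 1-based nearest rank
	return lat[rank - 1];
}
\end{lstlisting}
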
     109
     110Again, each experiment is run 15 times with the median, maximum and minimum plotted with different lines.
     111As expected, the latency starts low and increases as the server gets close to saturation, at which point, the latency increases dramatically because the webservers cannot keep up with the connection rate so client requests are disproportionally delayed.
     112Because of this dramatic increase, the Y axis is presented using log scale.
      113Note that the graph shows the \emph{target} query rate; the actual response rate is given in Figure~\ref{fig:memcd:rate:qps} as this is the same underlying experiment.
     114
     115For all three servers, the saturation point is reached before 500K queries per second, which is when throughput starts to decline among the webservers.
      116In this experiment, all three webservers are much more distinguishable than in the throughput experiment.
     117Vanilla Memcached achieves the lowest latency until 600K, after which all the webservers are struggling to respond to client requests.
     118\CFA begins to decline at 600K, indicating some bottleneck after saturation.
     119Overall, all three webservers achieve micro-second latencies and the increases in latency mostly follow each other.
    70120
    71121\subsection{Update rate}
     122Since Memcached is effectively a simple database, the information that is cached can be written to concurrently by multiple queries.
      123Since writes can significantly affect performance, it is interesting to see how varying the update rate affects the results.
     124Figure~\ref{fig:memcd:updt} shows the results for the same experiment as the throughput and latency experiment but increasing the update percentage to 5\%, 10\% and 50\%, respectively, versus the original 3\% update percentage.
     125
    72126\begin{figure}
    73         \centering
    74         \subfloat[][Throughput]{
    75                 \input{result.memcd.forall.qps.pstex_t}
    76         }
    77 
    78         \subfloat[][Latency]{
    79                 \input{result.memcd.forall.lat.pstex_t}
    80         }
    81         \caption[forall Latency results at different update rates]{forall Latency results at different update rates\smallskip\newline Description}
    82         \label{fig:memcd:updt:forall}
     127        \subfloat[][\CFA: Throughput]{
     128                \resizebox{0.5\linewidth}{!}{
     129                        \input{result.memcd.forall.qps.pstex_t}
     130                }
     131                \label{fig:memcd:updt:forall:qps}
     132        }
     133        \subfloat[][\CFA: Latency]{
     134                \resizebox{0.5\linewidth}{!}{
     135                        \input{result.memcd.forall.lat.pstex_t}
     136                }
     137                \label{fig:memcd:updt:forall:lat}
     138        }
     139
     140        \subfloat[][LibFibre: Throughput]{
     141                \resizebox{0.5\linewidth}{!}{
     142                        \input{result.memcd.fibre.qps.pstex_t}
     143                }
     144                \label{fig:memcd:updt:fibre:qps}
     145        }
     146        \subfloat[][LibFibre: Latency]{
     147                \resizebox{0.5\linewidth}{!}{
     148                        \input{result.memcd.fibre.lat.pstex_t}
     149                }
     150                \label{fig:memcd:updt:fibre:lat}
     151        }
     152
     153        \subfloat[][Vanilla: Throughput]{
     154                \resizebox{0.5\linewidth}{!}{
     155                        \input{result.memcd.vanilla.qps.pstex_t}
     156                }
     157                \label{fig:memcd:updt:vanilla:qps}
     158        }
     159        \subfloat[][Vanilla: Latency]{
     160                \resizebox{0.5\linewidth}{!}{
     161                        \input{result.memcd.vanilla.lat.pstex_t}
     162                }
     163                \label{fig:memcd:updt:vanilla:lat}
     164        }
     165        \caption[Throughput and Latency results at different update rates (percentage of writes).]{Throughput and Latency results at different update rates (percentage of writes).\smallskip\newline Description}
     166        \label{fig:memcd:updt}
    83167\end{figure}
    84168
    85 \begin{figure}
    86         \centering
    87         \subfloat[][Throughput]{
    88                 \input{result.memcd.fibre.qps.pstex_t}
    89         }
    90 
    91         \subfloat[][Latency]{
    92                 \input{result.memcd.fibre.lat.pstex_t}
    93         }
    94         \caption[fibre Latency results at different update rates]{fibre Latency results at different update rates\smallskip\newline Description}
    95         \label{fig:memcd:updt:fibre}
    96 \end{figure}
    97 
    98 \begin{figure}
    99         \centering
    100         \subfloat[][Throughput]{
    101                 \input{result.memcd.vanilla.qps.pstex_t}
    102         }
    103 
    104         \subfloat[][Latency]{
    105                 \input{result.memcd.vanilla.lat.pstex_t}
    106         }
    107         \caption[vanilla Latency results at different update rates]{vanilla Latency results at different update rates\smallskip\newline Description}
    108         \label{fig:memcd:updt:vanilla}
    109 \end{figure}
    110 
    111 
     169In the end, this experiment mostly demonstrates that the performance of Memcached is affected very little by the update rate.
     170Indeed, since values read/written can be bigger than what can be read/written atomically, a lock must be acquired while the value is read.
     171Hence, I believe the underlying locking pattern for reads and writes is fairly similar, if not the same.
     172These results suggest Memcached does not attempt to optimize reads/writes using a readers-writer lock to protect each value and instead just relies on having a sufficient number of keys to limit contention.
      172Overall, the update experiment shows that \CFA achieves equivalent performance.
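To make the hypothesized locking pattern concrete, the sketch below (plain C, explicitly \emph{not} Memcached's implementation) shows per-key lock striping where both reads and writes acquire the same lock, so contention is limited by the number of keys hashing to the same lock rather than by the read/write mix.
\begin{lstlisting}[language=C]
// Hedged sketch of lock striping: every key hashes to one of a fixed set of
// locks, and reads as well as writes of a value hold that lock.
#include <pthread.h>
#include <stddef.h>

#define NLOCKS 1024
static pthread_mutex_t locks[NLOCKS];           // each initialized with pthread_mutex_init (not shown)

static size_t lock_of( const char * key ) {     // simple string hash selecting a lock
	size_t h = 5381;
	for ( ; *key; key += 1 ) h = h * 33 + (unsigned char)*key;
	return h % NLOCKS;
}

void value_read( const char * key, char * buf, size_t size ) {
	size_t i = lock_of( key );
	pthread_mutex_lock( &locks[i] );            // reads also lock: a value can span many words
	// copy the cached value for 'key' into 'buf' (at most 'size' bytes)
	pthread_mutex_unlock( &locks[i] );
}

void value_write( const char * key, const char * buf, size_t size ) {
	size_t i = lock_of( key );
	pthread_mutex_lock( &locks[i] );
	// overwrite the cached value for 'key' from 'buf' ('size' bytes)
	pthread_mutex_unlock( &locks[i] );
}
\end{lstlisting}
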
    112174
    113175\section{Static Web-Server}
    114 The memcached experiment has two aspects of the \io subsystem it does not exercise, accepting new connections and interacting with disks.
    115 On the other hand, static webservers, servers that offer static webpages, do stress disk \io since they serve files from disk\footnote{Dynamic webservers, which construct pages as they are sent, are not as interesting since the construction of the pages do not exercise the runtime in a meaningfully different way.}.
    116 The static webserver experiments will compare NGINX~\cit{nginx} with a custom webserver developped for this experiment.
     176The Memcached experiment does not exercise two key aspects of the \io subsystem: accept\-ing new connections and interacting with disks.
     177On the other hand, a webserver servicing static web-pages does stress both accepting connections and disk \io by accepting tens of thousands of client requests per second where these requests return static data serviced from the file-system cache or disk.\footnote{
     178Webservers servicing dynamic requests, which read from multiple locations and construct a response, are not as interesting since creating the response takes more time and does not exercise the runtime in a meaningfully different way.}
     179The static webserver experiment compares NGINX~\cite{nginx} with a custom \CFA-based webserver developed for this experiment.
    117180
    118181\subsection{\CFA webserver}
    119 Unlike the memcached experiment, the webserver experiment relies on a custom designed webserver.
    120 It is a simple thread-per-connection webserver where a fixed number of \ats are created upfront.
    121 Each of the \at calls @accept@, through @io_uring@, on the listening port and handle the incomming connection once accepted.
    122 Most of the implementation is fairly straight forward however the inclusion of file \io introduces a new challenge that had to be hacked around.
    123 
    124 Normally, webservers use @sendfile@\cite{MAN:sendfile} to send files over the socket.
    125 @io_uring@ does not support @sendfile@, it supports @splice@\cite{MAN:splice} instead, which is strictly more powerful.
    126 However, because of how linux implements file \io, see Subsection~\ref{ononblock}, @io_uring@'s implementation must delegate calls to splice to worker threads inside the kernel.
    127 As of Linux 5.13, @io_uring@ caps the numer of these worker threads to @RLIMIT_NPROC@ and therefore, when tens of thousands of splice requests are made, it can create tens of thousands of \glspl{kthrd}.
    128 Such a high number of \glspl{kthrd} is more than Linux can handle in this scenario so performance suffers significantly.
    129 For this reason, the \CFA webserver calls @sendfile@ directly.
    130 This approach works up to a certain point, but once the server approaches saturation, it leads to a new problem.
    131 
    132 When the saturation point of the server is attained, latency will increase and inevitably some client connections will timeout.
    133 As these clients close there connections, the server must close these sockets without delay so the OS can reclaim the resources used by these connections.
    134 Indeed, until they are closed on the server end, the connection will linger in the CLOSE-WAIT tcp state~\cite{rfc:tcp} and the tcp buffers will be preserved.
    135 However, this poses a problem using blocking @sendfile@ calls.
    136 The calls can block if they do not have suffcient memory, which can be caused by having too many connections in the CLOSE-WAIT state.
    137 Since blocking in calls to @sendfile@ blocks the \proc rather than the \at, this prevents other connections from closing their sockets.
    138 This leads to a vicious cycle where timeouts lead to @sendfile@ calls running out of resources, which lead to more timeouts.
    139 
    140 Normally, this is address by marking the sockets as non-blocking and using @epoll@ to wait for sockets to have sufficient resources.
    141 However, since @io_uring@ respects non-blocking semantics marking all sockets as non-blocking effectively circumvents the @io_uring@ subsystem entirely.
     182The \CFA webserver is a straightforward thread-per-connection webserver, where a fixed number of \ats are created upfront (tuning parameter).
     183Each \at calls @accept@, through @io_uring@, on the listening port and handles the incoming connection once accepted.
     184Most of the implementation is fairly straightforward;
      185however, the inclusion of file \io revealed an @io_uring@ problem that required an unfortunate workaround.
     186
     187Normally, webservers use @sendfile@~\cite{MAN:sendfile} to send files over a socket because it performs a direct move in the kernel from the file-system cache to the NIC, eliminating reading/writing the file into the webserver.
      188While @io_uring@ does not support @sendfile@, it does support @splice@~\cite{MAN:splice}, which is strictly more powerful.
     189However, because of how Linux implements file \io, see Subsection~\ref{ononblock}, @io_uring@ must delegate splice calls to worker threads inside the kernel.
      190As of Linux 5.13, @io_uring@ has no mechanism to restrict the number of worker threads, and therefore, when tens of thousands of splice requests are made, it correspondingly creates tens of thousands of internal \glspl{kthrd}.
     191Such a high number of \glspl{kthrd} slows Linux significantly.
     192Rather than abandon the experiment, the \CFA webserver was switched to nonblocking @sendfile@.
     193However, when the nonblocking @sendfile@ returns @EAGAIN@, the \CFA server cannot block the \at because its I/O subsystem uses @io_uring@.
      194Therefore, the \at must retry the @sendfile@ in a loop, yielding whenever the call returns @EAGAIN@.
     195This workaround works up to the saturation point, when other problems occur.
     196
     197At saturation, latency increases so some client connections timeout.
     198As these clients close their connection, the server must close its corresponding side without delay so the OS can reclaim the resources used by these connections.
     199Indeed, until the server connection is closed, the connection lingers in the CLOSE-WAIT TCP state~\cite{rfc:tcp} and the TCP buffers are preserved.
     200However, this poses a problem using nonblocking @sendfile@ calls:
     201the call can still block if there is insufficient memory, which can be caused by having too many connections in the CLOSE-WAIT state.\footnote{
     202\lstinline{sendfile} can always block even in nonblocking mode if the file to be sent is not in the file-system cache, because Linux does not provide nonblocking disk I/O.}
     203When @sendfile@ blocks, the \proc rather than the \at blocks, preventing other connections from closing their sockets.
      204This effect results in a vicious cycle, where more timeouts lead to more @sendfile@ calls running out of resources, which in turn cause more timeouts.
     205
      206Normally, this is addressed by using @select@/@epoll@ to wait for sockets to have sufficient resources.
      207However, since @io_uring@ respects nonblocking semantics, marking all sockets as nonblocking effectively circumvents the @io_uring@ subsystem entirely.
    142208For this reason, the \CFA webserver sets and resets the @O_NONBLOCK@ flag before and after any calls to @sendfile@.
    143209Normally @epoll@ would also be used when these calls to @sendfile@ return @EAGAIN@, but since this would not help in the evaluation of the \CFA runtime, the \CFA webserver simply yields and retries in these cases.
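Putting the pieces together, the workaround looks roughly like the following sketch (plain C; the real webserver differs in details and @thread_yield@ is a hypothetical stand-in for the runtime's yield primitive).
\begin{lstlisting}[language=C]
// Hedged sketch of the sendfile workaround: toggle O_NONBLOCK around the call
// and yield instead of blocking whenever it returns EAGAIN.
#include <sys/types.h>
#include <sys/sendfile.h>
#include <fcntl.h>
#include <errno.h>

extern void thread_yield( void );               // hypothetical stand-in for the runtime's yield

ssize_t send_file_yielding( int sock, int file, off_t * off, size_t count ) {
	int flags = fcntl( sock, F_GETFL );
	fcntl( sock, F_SETFL, flags | O_NONBLOCK ); // set nonblocking for the call
	ssize_t ret;
	for ( ;; ) {
		ret = sendfile( sock, file, off, count );
		if ( ret >= 0 || errno != EAGAIN ) break;   // success or a hard error
		thread_yield();                             // insufficient resources: yield and retry
	}
	fcntl( sock, F_SETFL, flags );              // restore the original flags
	return ret;
}
\end{lstlisting}
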
    144210
    145 It is important to state that in Linux 5.15 @io_uring@ introduces the ability for users to limit the number of worker threads that are created, through the @IORING_REGISTER_IOWQ_MAX_WORKERS@ option.
     211Interestingly, Linux 5.15 @io_uring@ introduces the ability to limit the number of worker threads that are created, through the @IORING_REGISTER_IOWQ_MAX_WORKERS@ option.
    146212However, as of writing this document, Ubuntu does not have a stable release of Linux 5.15.
    147213There exist versions of the kernel that are currently under testing, but these caused unrelated but nevertheless prohibitive issues in this experiment.
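Although this option could not be used in these experiments, for completeness a minimal sketch of how the limit might be set follows, assuming the liburing helper @io_uring_register_iowq_max_workers@ that wraps this registration (the exact semantics are those documented by liburing, not verified here).
\begin{lstlisting}[language=C]
// Hedged sketch: capping io_uring worker threads on Linux >= 5.15.
// values[0] limits bounded workers (regular file I/O); values[1] limits
// unbounded workers (e.g. sockets); 0 leaves a limit unchanged.
#include <liburing.h>

int cap_iowq_workers( struct io_uring * ring, unsigned int max_workers ) {
	unsigned int values[2] = { max_workers, max_workers };
	return io_uring_register_iowq_max_workers( ring, values );
}
\end{lstlisting}
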
     
    150216
    151217\subsection{Benchmark Environment}
    152 Unlike the memcached experiment, the webserver run on a more heterogenous environment.
     218Unlike the Memcached experiment, the webserver experiment is run on a heterogeneous environment.
     219\begin{itemize}
     220\item
    153221The server runs Ubuntu 20.04.4 LTS on top of Linux Kernel 5.13.0-52.
    154 It has an AMD Opteron(tm) Processor 6380 running at 2.50GHz.
    155 These CPUs has only 8 \glspl{hthrd} enabled by grub, which is sufficient to achieve line rate.
    156 This cpus each have 64 KB, 256 KiB and 8 MB of L1, L2 and L3 caches respectively.
    157 The kernel is setup to limit the memory at 25Gb.
    158 
    159 The client machines each have two 2.8 GHz Xeon CPUs, and four one-gigabit Ethernet cards.
    160 Each client machine runs two copies of the workload generator.
    161 They run a 2.6.11-1 SMP Linux kernel, which permits each client load-generator to run on a separate CPU.
    162 Since the clients outnumber the server 8-to-1, this is plenty sufficient to generate enough load for the clients not to become the bottleneck.
    163 
     222\item
     223It has an AMD Opteron(tm) Processor 6380 running at 2.5GHz.
     224\item
     225Each CPU has 64 KB, 256 KiB and 8 MB of L1, L2 and L3 caches respectively.
     226\item
     227The computer is booted with only 8 CPUs enabled, which is sufficient to achieve line rate.
     228\item
     229The computer is booted with only 25GB of memory to restrict the file-system cache.
     230\end{itemize}
     231There are 8 client machines.
     232\begin{itemize}
     233\item
     234A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load-generator to run on a separate CPU.
     235\item
     236It has two 2.8 GHz Xeon CPUs, and four one-gigabit Ethernet cards.
     237\item
    164238\todo{switch}
     239\item
     240A client machine runs two copies of the workload generator.
     241\end{itemize}
     242The clients and network are sufficiently provisioned to drive the server to saturation and beyond.
     243Hence, any server effects are attributable solely to the runtime system and webserver.
     244Finally, without restricting the server hardware resources, it is impossible to determine if a runtime system or the webserver using it has any specific design restrictions, \eg using space to reduce time.
      245Trying to determine these restrictions with large numbers of processors or memory simply means running equally large experiments, which take longer and are harder to set up.
    165246
    166247\subsection{Throughput}
    167 To measure the throughput of both webservers, each server is loaded with over 30,000 files making over 4.5 Gigabytes in total.
    168 Each client runs httperf~\cit{httperf} which establishes a connection, does an http request for one or more files, closes the connection and repeats the process.
    169 The connections and requests are made according to a Zipfian distribution~\cite{zipf}.
      248To measure webserver throughput, the server computer is loaded with 21,600 files, sharded across 650 directories, occupying about 2.2GB of disk, distributed over the server's 4-drive RAID-5 array to achieve high throughput for disk I/O.
     249The clients run httperf~\cite{httperf} to request a set of static files.
     250The httperf load-generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
     251This permits httperf to produce overload conditions, generate multiple requests from persistent HTTP/1.1 connections, and include both active and inactive off periods to model browser processing times and user think times~\cite{Barford98}.
     252
     253The experiments are run with 16 clients, each running a copy of httperf (one copy per CPU), requiring a set of 16 log files with requests conforming to a Zipf distribution.
     254This distribution is representative of users accessing static data through a web-browser.
      255Each request reads a file name from its trace, establishes a connection, performs an HTTP get-request for the file name, receives the file data, closes the connection, and repeats the process.
     256Some trace elements have multiple file names that are read across a persistent connection.
     257A client times-out if the server does not complete a request within 10 seconds.
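The Zipf-distributed traces can be produced with a simple inversion of the cumulative rank probabilities; the sketch below (plain C, a stand-in for however the session files were actually generated) draws a file rank with probability proportional to $1/\mathit{rank}$.
\begin{lstlisting}[language=C]
// Hedged sketch: draw a Zipf-distributed rank in [1, nfiles] (exponent 1),
// where rank r is chosen with probability (1/r) / H(nfiles).
#include <stdlib.h>

int zipf_rank( int nfiles ) {               // assumes nfiles is the same on every call
	static double H = 0.0;                  // harmonic number H(nfiles), computed once
	if ( H == 0.0 )
		for ( int r = 1; r <= nfiles; r += 1 ) H += 1.0 / r;
	double u = (rand() + 1.0) / (RAND_MAX + 2.0) * H;   // uniform in (0, H)
	double sum = 0.0;
	for ( int r = 1; r <= nfiles; r += 1 ) {
		sum += 1.0 / r;
		if ( sum >= u ) return r;
	}
	return nfiles;
}
\end{lstlisting}
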
     258
     259An experiment consists of running a server with request rates ranging from 10,000 to 70,000 requests per second;
     260each rate takes about 5 minutes to complete.
      261There are 20 seconds of idle time between rates and between experiments to allow connections in the TIME-WAIT state to clear.
     262Server throughput is measured both at peak and after saturation (\ie after peak).
     263Peak indicates the level of client requests the server can handle and after peak indicates if a server degrades gracefully.
    170264Throughput is measured by aggregating the results from httperf of all the clients.
     265
     266Two workload scenarios are created by reconfiguring the server with different amounts of memory: 4 GB and 2 GB.
     267The two workloads correspond to in-memory (4 GB) and disk-I/O (2 GB).
     268Due to the Zipf distribution, only a small amount of memory is needed to service a significant percentage of requests.
      269Table~\ref{t:CumulativeMemory} shows the cumulative memory required to satisfy the specified percentage of requests; \eg 95\% of the requests come from 126.5 MB of the file set and 95\% of the requests are for files less than or equal to 51,200 bytes.
     270Interestingly, with 2 GB of memory, significant disk-I/O occurs.
     271
     272\begin{table}
     273\caption{Cumulative memory for requests by file size}
     274\label{t:CumulativeMemory}
     275\begin{tabular}{r|rrrrrrrr}
     276\% Requests   & 10 & 30 & 50 & 70 & 80 & 90 & \textbf{95} & 100 \\
     277Memory (MB)   & 0.5 & 1.5 & 8.4 & 12.2 & 20.1 & 94.3 & \textbf{126.5} & 2,291.6 \\
     278File Size (B) & 409 & 716 & 4,096 & 5,120 & 7,168 & 40,960 & \textbf{51,200} & 921,600
     279\end{tabular}
     280\end{table}
     281
     282Figure~\ref{fig:swbsrv} shows the results comparing \CFA to NGINX in terms of throughput.
     283These results are fairly straightforward.
     284Both servers achieve the same throughput until around 57,500 requests per seconds.
     285Since the clients are asking for the same files, the fact that the throughput matches exactly is expected as long as both servers are able to serve the desired rate.
     286Once the saturation point is reached, both servers are still very close.
     287NGINX achieves slightly better throughput.
     288However, Figure~\ref{fig:swbsrv:err} shows the rate of errors, a gross approximation of tail latency, where \CFA achieves notably fewer errors once the machine reaches saturation.
      289This suggests that \CFA is slightly more fair and NGINX may slightly sacrifice some fairness for improved throughput.
      290These results demonstrate that the \CFA webserver described above is able to match the performance of NGINX up to and beyond the saturation point of the machine.
     291
    171292\begin{figure}
    172293        \subfloat[][Throughput]{
    173                 \input{result.swbsrv.25gb.pstex_t}
     294                \resizebox{0.85\linewidth}{!}{\input{result.swbsrv.25gb.pstex_t}}
    174295                \label{fig:swbsrv:ops}
    175296        }
    176297
    177298        \subfloat[][Rate of Errors]{
    178                 \input{result.swbsrv.25gb.err.pstex_t}
     299                \resizebox{0.85\linewidth}{!}{\input{result.swbsrv.25gb.err.pstex_t}}
    179300                \label{fig:swbsrv:err}
    180301        }
     
    182303        \label{fig:swbsrv}
    183304\end{figure}
    184 Figure~\ref{fig:swbsrv} shows the results comparing \CFA to NGINX in terms of throughput.
    185 These results are fairly straight forward.
    186 Both servers achieve the same throughput until around 57,500 requests per seconds.
    187 Since the clients are asking for the same files, the fact that the throughput matches exactly is expected as long as both servers are able to serve the desired rate.
    188 Once the saturation point is reached, both servers are still very close.
    189 NGINX achieves slightly better throughtput.
    190 However, Figure~\ref{fig:swbsrv:err} shows the rate of errors, a gross approximation of tail latency, where \CFA achives notably fewet errors once the machine reaches saturation.
    191 This suggest that \CFA is slightly more fair and NGINX may sloghtly sacrifice some fairness for improved throughtput.
    192 It demonstrate that the \CFA webserver described above is able to match the performance of NGINX up-to and beyond the saturation point of the machine.
    193305
    194306\subsection{Disk Operations}
    195 The throughput was made using a server with 25gb of memory, this was sufficient to hold the entire fileset in addition to all the code and data needed to run the webserver and the reste of the machine.
      307The throughput experiments above use a server with 25 GB of memory, which is sufficient to hold the entire file set in addition to all the code and data needed to run the webserver and the rest of the machine.
    196308Previous work like \cit{Cite Ashif's stuff} demonstrates that an interesting follow-up experiment is to rerun the same throughput experiment but allowing significantly less memory on the machine.
    197309If the machine is constrained enough, it will force the OS to evict files from the file cache and cause calls to @sendfile@ to have to read from disk.
    198310However, what these low memory experiments demonstrate is how the memory footprint of the webserver affects the performance.
    199 However, since what I am to evaluate in this thesis is the runtime of \CFA, I diceded to forgo experiments on low memory server.
      311However, since what I aim to evaluate in this thesis is the \CFA runtime, I decided to forgo experiments on a low-memory server.
    200312The implementation of the webserver itself is simply too impactful to be an interesting evaluation of the underlying runtime.
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r4fee301 r0c40bfe  
    11\chapter{Micro-Benchmarks}\label{microbench}
    22
    3 The first step in evaluating this work is to test-out small controlled cases to ensure the basics work properly.
    4 This chapter presents five different experimental setup, evaluating some of the basic features of \CFA's scheduler.
     3The first step in evaluating this work is to test small controlled cases to ensure the basics work properly.
     4This chapter presents five different experimental setups for evaluating the basic features of the \CFA, libfibre~\cite{libfibre}, Go, and Tokio~\cite{Tokio} schedulers.
     5All of these systems have a \gls{uthrding} model.
      6The goal in this chapter is to show the \CFA scheduler obtains equivalent performance to other, less fair schedulers through the different experiments.
     7Note, only the code of the \CFA tests is shown;
     8all tests in the other systems are functionally identical and available online~\cite{SchedulingBenchmarks}.
    59
    610\section{Benchmark Environment}\label{microenv}
     11
    712All benchmarks are run on two distinct hardware platforms.
    813\begin{description}
     
    2025\end{description}
    2126
    22 For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA Node with no hyper threading.
    23 If more \glspl{hthrd} are needed, then 1 NUMA Node with hyperthreading is used.
    24 If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA Nodes as needed.
     27For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyper threading.
     28If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
     29If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
     30For the Intel machine, this means that from 1 to 24 \procs one socket and \emph{no} hyperthreading is used, and from 25 to 48 \procs still only one socket is used but \emph{with} hyperthreading.
     31This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
     32On AMD, the same algorithm is used, but the machine only has 2 sockets.
      33So hyperthreading\footnote{
      34Hyperthreading normally refers specifically to the technique used by Intel; however, it is often used generically to refer to any equivalent feature.}
      35is used when the \proc count reaches 65 and 193.
    2536
    2637The limited sharing of the last-level cache on the AMD machine is markedly different than the Intel machine.
    27 Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD it is also the case that cache misses served by a different L3 instance on the same CPU still incur high latency.
    28 
    29 
    30 \section{Cycling latency}
      38Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.
     39
     40\section{Experimental setup}
     41
      42Each experiment is run 15 times for each processor count, where the range of processor counts depends on the computer.
     43All experiments gather throughput data and secondary data for scalability or latency.
     44The data is graphed using a solid and two dashed lines representing the median, maximum and minimum result respectively, where the minimum/maximum lines are referred to as the \emph{extremes}.\footnote{
     45An alternative display is to use error bars with min/max as the bottom/top for the bar.
     46However, this approach is not truly an error bar around a mean value and I felt the connected lines are easier to read.}
     47This graph presentation offers an overview of the distribution of the results for each experiment.
     48
     49For each experiment, four graphs are generated showing traditional throughput on the top row and \newterm{scalability} or \newterm{latency} on the bottom row (peek ahead to Figure~\ref{fig:cycle:jax}).
     50Scalability uses the same data as throughput but the Y axis is calculated as the number of \procs over the throughput.
     51In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
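Stated as a formula, this is simply a restatement of the definition above: for $p$ \procs,
\[
\mathit{scalability}(p) = \frac{p}{\mathit{throughput}(p)},
\]
so doubling both the \proc count and the throughput leaves the value unchanged, which is why perfect scaling appears as a horizontal line.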
     52
     53The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
     54The right column shows results for 1 cycle per \proc, where the ready queues are expected to be near empty most of the time.
     55The distinction between 100 and 1 cycles is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
     56
     57\section{Cycle}
     58
     59The most basic evaluation of any ready queue is the latency needed to push and pop one element from the ready queue.
     60Since these two operations also describe a @yield@ operation, many systems use this operation as the fundamental benchmark.
     61However, yielding can be treated as a special case by optimizing it away since the number of ready \ats does not change.
     62Hence, systems that perform this optimization have an artificial performance benefit because the yield becomes a \emph{nop}.
     63For this reason, I designed a different push/pop benchmark, called \newterm{Cycle Benchmark}.
     64This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
     65At runtime, each \at unparks the next \at before parking itself.
     66Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.
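A hypothetical sketch of the special-case yield optimization being avoided here (not taken from any of the runtimes measured, with stand-in names for the runtime internals) is:
\begin{lstlisting}[language=C]
// Hedged sketch: the yield fast path some runtimes implement.
struct thread;                               // hypothetical runtime internals
extern int ready_queue_empty( void );
extern void ready_queue_push( struct thread * );
extern struct thread * current_thread( void );
extern void schedule( void );

void yield( void ) {
	if ( ready_queue_empty() ) return;       // no other ready thread: the yield is a nop
	ready_queue_push( current_thread() );    // otherwise push self onto the ready queue ...
	schedule();                              // ... and let the scheduler pop the next thread
}
\end{lstlisting}
The cycle benchmark defeats this fast path because unparking the next \at changes the number of ready \ats.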
     67
    3168\begin{figure}
    3269        \centering
     
    3572        \label{fig:cycle}
    3673\end{figure}
    37 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready queue.
    38 Since these two operation also describe a @yield@ operation, many systems use this operation as the most basic benchmark.
    39 However, yielding can be treated as a special case and some aspects of the scheduler can be optimized away since the number of ready \ats does not change.
    40 Not all systems perform this type of optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
    41 For this reason, I chose a different first benchmark, called \newterm{Cycle Benchmark}.
    42 This benchmark arranges a number of \ats into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
    43 At runtime, each \at unparks the next \at before parking itself.
    44 Unparking the next \at pushes that \at onto the ready queue while the ensuing park leads to a \at being popped from the ready queue.
    45 
    46 Hence, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
    47 In fact, the total number of \ats waiting on the ready queue is expected to vary because of the delay between the next \at unparking and the current \at parking.
    48 That is, the runtime cannot anticipate that the current task will immediately park.
    49 As well, the size of the cycle is also decided based on this delay.
    50 Note that, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.
    51 If this happens, the scheduler push and pop are avoided and the results of the experiment would be skewed.
    52 Because of time-slicing or because cycles can be spread over multiple \procs, a small cycle may see the chain of unparks go full circle before the first \at parks.
    53 Every runtime system must handle this race and but cannot optimized away the ready-queue pushes and pops if the cycle is long enough.
     74
     75Therefore, the underlying runtime cannot rely on the number of ready \ats staying constant over the duration of the experiment.
     76In fact, the total number of \ats waiting on the ready queue is expected to vary because of the race between the next \at unparking and the current \at parking.
     77That is, the runtime cannot anticipate that the current task immediately parks.
     78As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \at parks because of time-slicing or multiple \procs.
     79If this happens, the scheduler push and pop are avoided and the results of the experiment are skewed.
     80(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
      81Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
    5482To prevent any attempt of silently omitting ready-queue operations, the ring of \ats is made big enough so the \ats have time to fully park before being unparked again.
    5583Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
    5684
    57 Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
     85Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark, where each cycle has 5 \ats.
    5886There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.
    5987
     
    73101\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
    74102\label{fig:cycle:code}
     103%\end{figure}
     104
     105\bigskip
     106
     107%\begin{figure}
     108        \subfloat[][Throughput, 100 cycles per \proc]{
     109                \resizebox{0.5\linewidth}{!}{
     110                        \input{result.cycle.jax.ops.pstex_t}
     111                }
     112                \label{fig:cycle:jax:ops}
     113        }
     114        \subfloat[][Throughput, 1 cycle per \proc]{
     115                \resizebox{0.5\linewidth}{!}{
     116                        \input{result.cycle.low.jax.ops.pstex_t}
     117                }
     118                \label{fig:cycle:jax:low:ops}
     119        }
     120
     121        \subfloat[][Scalability, 100 cycles per \proc]{
     122                \resizebox{0.5\linewidth}{!}{
     123                        \input{result.cycle.jax.ns.pstex_t}
     124                }
     125                \label{fig:cycle:jax:ns}
     126        }
     127        \subfloat[][Scalability, 1 cycle per \proc]{
     128                \resizebox{0.5\linewidth}{!}{
     129                        \input{result.cycle.low.jax.ns.pstex_t}
     130                }
     131                \label{fig:cycle:jax:low:ns}
     132        }
      133        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
     134        \label{fig:cycle:jax}
     135\end{figure}
     136
     137\begin{figure}
     138        \subfloat[][Throughput, 100 cycles per \proc]{
     139                \resizebox{0.5\linewidth}{!}{
     140                        \input{result.cycle.nasus.ops.pstex_t}
     141                }
     142                \label{fig:cycle:nasus:ops}
     143        }
     144        \subfloat[][Throughput, 1 cycle per \proc]{
     145                \resizebox{0.5\linewidth}{!}{
     146                        \input{result.cycle.low.nasus.ops.pstex_t}
     147                }
     148                \label{fig:cycle:nasus:low:ops}
     149        }
     150
     151        \subfloat[][Scalability, 100 cycles per \proc]{
     152                \resizebox{0.5\linewidth}{!}{
     153                        \input{result.cycle.nasus.ns.pstex_t}
     154                }
     155                \label{fig:cycle:nasus:ns}
     156        }
     157        \subfloat[][Scalability, 1 cycle per \proc]{
     158                \resizebox{0.5\linewidth}{!}{
     159                        \input{result.cycle.low.nasus.ns.pstex_t}
     160                }
     161                \label{fig:cycle:nasus:low:ns}
     162        }
      163        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts. For throughput, higher is better; for scalability, lower is better. Each series represents 15 independent runs; the dotted lines are the extremes while the solid line is the median.}
     164        \label{fig:cycle:nasus}
    75165\end{figure}
    76166
    77167\subsection{Results}
    78 \begin{figure}
    79         \subfloat[][Throughput, 100 cycles per \proc]{
    80                 \resizebox{0.5\linewidth}{!}{
    81                         \input{result.cycle.jax.ops.pstex_t}
    82                 }
    83                 \label{fig:cycle:jax:ops}
    84         }
    85         \subfloat[][Throughput, 1 cycle per \proc]{
    86                 \resizebox{0.5\linewidth}{!}{
    87                         \input{result.cycle.low.jax.ops.pstex_t}
    88                 }
    89                 \label{fig:cycle:jax:low:ops}
    90         }
    91 
    92         \subfloat[][Scalability, 100 cycles per \proc]{
    93                 \resizebox{0.5\linewidth}{!}{
    94                         \input{result.cycle.jax.ns.pstex_t}
    95                 }
    96                 \label{fig:cycle:jax:ns}
    97         }
    98         \subfloat[][Scalability, 1 cycle per \proc]{
    99                 \resizebox{0.5\linewidth}{!}{
    100                         \input{result.cycle.low.jax.ns.pstex_t}
    101                 }
    102                 \label{fig:cycle:jax:low:ns}
    103         }
    104         \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count 5 \ats per cycle and different cycle count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    105         \label{fig:cycle:jax}
    106 \end{figure}
    107 
    108 \begin{figure}
    109         \subfloat[][Throughput, 100 cycles per \proc]{
    110                 \resizebox{0.5\linewidth}{!}{
    111                         \input{result.cycle.nasus.ops.pstex_t}
    112                 }
    113                 \label{fig:cycle:nasus:ops}
    114         }
    115         \subfloat[][Throughput, 1 cycle per \proc]{
    116                 \resizebox{0.5\linewidth}{!}{
    117                         \input{result.cycle.low.nasus.ops.pstex_t}
    118                 }
    119                 \label{fig:cycle:nasus:low:ops}
    120         }
    121 
    122         \subfloat[][Scalability, 100 cycles per \proc]{
    123                 \resizebox{0.5\linewidth}{!}{
    124                         \input{result.cycle.nasus.ns.pstex_t}
    125                 }
    126                 \label{fig:cycle:nasus:ns}
    127         }
    128         \subfloat[][Scalability, 1 cycle per \proc]{
    129                 \resizebox{0.5\linewidth}{!}{
    130                         \input{result.cycle.low.nasus.ns.pstex_t}
    131                 }
    132                 \label{fig:cycle:nasus:low:ns}
    133         }
    134         \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count 5 \ats per cycle and different cycle count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    135         \label{fig:cycle:nasus}
    136 \end{figure}
    137 Figure~\ref{fig:cycle:jax} and Figure~\ref{fig:cycle:nasus} shows the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
    138 The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row.
    139 Where scalability uses the same data but the Y axis is calculated as the number of \procs over the throughput.
    140 In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the relation stays the same.
    141 The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
    142 The right column shows results for only 1 cycle per \proc, where the ready queues are expected to be near empty at all times.
    143 The distinction is meaningful because the idle sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.
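
Stated as a formula (the notation is mine, added only for clarity), if $T(NP)$ is the measured throughput with $NP$ \procs, the scalability axis plots
\[ S(NP) = \frac{NP}{T(NP)}, \]
\ie the time per operation scaled by the number of \procs; if doubling the \procs doubles the throughput, $S$ stays constant and the curve is horizontal.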
    144 
    145 The experiment was run 15 times for each series and processor count and the \emph{$\times$}s on the graph show all of the results obtained.
    146 Each series also has a solid and two dashed lines highlighting the median, maximum and minimum result respectively.
    147 This presentation offers an overview of the distribution of the results for each series.
    148 
    149 The experimental setup uses taskset to limit the placement of \glspl{kthrd} by the operating system.
    150 As mentioned in Section~\ref{microenv}, the experiement is setup to prioritize running on 2 \glspl{hthrd} per core before running on multiple sockets.
    151 For the Intel machine, this means that from 1 to 24 \procs, one socket and \emph{no} hyperthreading is used and from 25 to 48 \procs, still only one socket but \emph{with} hyperthreading.
    152 This pattern is repeated between 49 and 96, between 97 and 144, and between 145 and 192.
    153 On AMD, the same algorithm is used, but the machine only has 2 sockets.
    154 So hyperthreading\footnote{Hyperthreading normally refers specifically to the technique used by Intel, however here it is loosely used to refer to AMD's equivalent feature.} is used when the \proc count reach 65 and 193.
    155 
    156 Figure~\ref{fig:cycle:jax:ops} and Figure~\ref{fig:cycle:jax:ns} show that for 100 cycles per \proc, \CFA, Go and Tokio all obtain effectively the same performance.
     168
     169For the Intel architecture, Figure~\ref{fig:cycle:jax}:
     170\begin{itemize}
     171\item
     172For 100 cycles per \proc (first column), \CFA, Go and Tokio all obtain effectively the same throughput performance.
    157173Libfibre is slightly behind in this case but still scales decently.
    158 As a result of the \gls{kthrd} placement, we can see that additional \procs from 25 to 48 offer less performance improvements for all runtimes.
    159 As expected, this pattern repeats between \proc count 72 and 96.
    160 The performance goal of \CFA is to obtain equivalent performance to other, less fair schedulers and that is what results show.
    161 Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.
    162 
    163 When running only a single cycle, the story is slightly different.
    164 \CFA and tokio obtain very smiliar results overall, but tokio shows notably more variations in the results.
    165 While \CFA, Go and tokio achive equivalent performance with 100 cycles per \proc, with only 1 cycle per \proc Go achieves slightly better performance.
    166 This difference in throughput and scalability is due to the idle-sleep mechanism.
    167 With very few cycles, stealing or helping can cause a cascade of tasks migration and trick \proc into very short idle sleeps.
    168 Both effect will negatively affect performance.
    169 
    170 An interesting and unusual result is that libfibre achieves better performance with fewer cycle.
    171 This suggest that the cascade effect is never present in libfibre and that some bottleneck disappears in this context.
    172 However, I did not investigate this result any deeper.
    173 
    174 Figure~\ref{fig:cycle:nasus} show a similar story happening on AMD as it does on Intel.
    175 The different performance improvements and plateaus due to cache topology appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
    176 Unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability for 100 cycles per \proc.
    177 
    178 In the 1 cycle per \proc experiment, the same performance increase for libfibre is visible.
    179 However, unlike on Intel, tokio achieves the same performance as Go rather than \CFA.
    180 This leaves \CFA trailing behind in this particular case, but only at hight core counts.
    181 Presumably this is because in this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
    182 Since this effect is only problematic in cases with 1 \at per \proc it is not very meaningful for the general performance.
    183 
    184 The conclusion from both architectures is that all of the compared runtime have fairly equivalent performance in this scenario.
    185 Which demonstrate that in this case \CFA achieves equivalent performance.
 174As a result of the \gls{kthrd} placement, additional \procs from 25 to 48 offer less performance improvement (flattening of the line) for all runtimes.
 175As expected, this pattern repeats between \proc counts 72 and 96.
     176\item
 177For 1 cycle per \proc, \CFA and Tokio obtain very similar results overall, but Tokio shows more variation in the results.
     178Go achieves slightly better performance.
     179Interestingly, libfibre achieves better performance with 1 cycle.
     180\end{itemize}
     181
 182For the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results show the same story as on the Intel machine, with close to double the overall performance but with slightly increased variation.
     183The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
     184\begin{itemize}
     185\item
     186For 100 cycles per \proc, unlike Intel, all 4 runtimes achieve very similar throughput and scalability.
     187\item
     188For 1 cycle per \proc, unlike on Intel, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
     189Again, the same performance increase for libfibre is visible.
     190\end{itemize}
     191Note, I did not investigate the libfibre performance boost for 1 cycle in this experiment.
     192
 193The conclusion from both architectures is that all of the compared runtimes have fairly equivalent performance for this micro-benchmark.
 194Clearly, the pathological case with 1 \at per \proc can affect fairness algorithms managing mostly idle processors, \eg \CFA, but only at high core counts.
     195For this case, \emph{any} helping is likely to cause a cascade of \procs running out of work and attempting to steal.
     196For this experiment, the \CFA scheduler has achieved the goal of obtaining equivalent performance to other less fair schedulers, except for very unusual workloads.
    186197
    187198\section{Yield}
     199
    188200For completion, the classic yield benchmark is included.
    189 This benchmark is simpler than the cycle test: it creates many \ats that call @yield@.
     201Here, the throughput is dominated by the mechanism used to handle the @yield@ function.
     202Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the cycle @wait/next.wake@ is replaced by @yield@.
    190203As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
    191 The only interesting variable in this benchmark is the number of \ats per \procs, where ratios close to 1 means the ready queue(s) can be empty.
    192 This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
    193 Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.
 204The only interesting variable in this benchmark is the number of \ats per \proc, where ratios close to 1 mean the ready queue(s) can be empty, which again puts a strain on the idle-sleep handling.
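
For reference, the per-\at loop amounts to the following sketch (the definitive pseudo code is in Figure~\ref{fig:yield:code}; @stop@ and @global.count@ are stand-ins for the experiment's termination flag and aggregated counter):
\begin{cfa}
Thread.main() {
	count := 0
	while !stop {
		yield()      // reschedule self instead of waking a partner
		count ++
	}
	global.count += count
}
\end{cfa}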
    194205
    195206\begin{figure}
     
    207218\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
    208219\label{fig:yield:code}
     220%\end{figure}
     221\bigskip
     222%\begin{figure}
     223        \subfloat[][Throughput, 100 \ats per \proc]{
     224                \resizebox{0.5\linewidth}{!}{
     225                        \input{result.yield.jax.ops.pstex_t}
     226                }
     227                \label{fig:yield:jax:ops}
     228        }
 229        \subfloat[][Throughput, 1 \at per \proc]{
     230                \resizebox{0.5\linewidth}{!}{
     231                \input{result.yield.low.jax.ops.pstex_t}
     232                }
     233                \label{fig:yield:jax:low:ops}
     234        }
     235
     236        \subfloat[][Scalability, 100 \ats per \proc]{
     237                \resizebox{0.5\linewidth}{!}{
     238                \input{result.yield.jax.ns.pstex_t}
     239                }
     240                \label{fig:yield:jax:ns}
     241        }
 242        \subfloat[][Scalability, 1 \at per \proc]{
     243                \resizebox{0.5\linewidth}{!}{
     244                \input{result.yield.low.jax.ns.pstex_t}
     245                }
     246                \label{fig:yield:jax:low:ns}
     247        }
 248        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, using 100 and 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     249        \label{fig:yield:jax}
    209250\end{figure}
    210251
    211252\subsection{Results}
     253
     254Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the same throughput graphs as @cycle@ on Intel and AMD, respectively.
 255Note, the Y-axis scale on the Intel yield graph is twice as large as on the Intel cycle graph.
 256A visual comparison of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
     257
     258For the Intel architecture, Figure~\ref{fig:yield:jax}:
     259\begin{itemize}
     260\item
     261\CFA has no special handling for @yield@, but this experiment requires less synchronization than the @cycle@ experiment.
     262Hence, the @yield@ throughput and scalability graphs for both 100 and 1 cycles/tasks per processor have similar shapes to the corresponding @cycle@ graphs.
 263The only difference is slightly better performance for @yield@ because of less synchronization.
     264As for @cycle@, the cost of idle sleep also comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
     265\item
 266libfibre has special handling for @yield@: it uses the fact that the number of ready fibres does not change and therefore bypasses the idle-sleep mechanism entirely.
     267Additionally, when only running 1 \at per \proc, libfibre optimizes further, and forgoes the context-switch entirely.
     268Hence, libfibre behaves very differently in the cycle and yield benchmarks, with a 4 times increase in performance for 100 cycles/tasks and an 8 times increase for 1 cycle/task.
     269\item
     270Go has special handling for @yield@ by putting a yielding goroutine on a secondary global ready-queue, giving it lower priority.
     271The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
     272Hence, Go behaves very differently in the cycle and yield benchmarks, with a complete performance collapse in @yield@ for both 100 and 1 cycles/tasks.
     273\item
     274Tokio has a similar performance collapse after 16 processors, and therefore, its special @yield@ handling is probably related to a Go-like scheduler problem and/or a \CFA idle-sleep problem.
     275(I did not dig through the Rust code to ascertain the exact reason for the collapse.)
     276\end{itemize}
     277
    212278\begin{figure}
    213279        \subfloat[][Throughput, 100 \ats per \proc]{
    214280                \resizebox{0.5\linewidth}{!}{
    215                         \input{result.yield.jax.ops.pstex_t}
    216                 }
    217                 \label{fig:yield:jax:ops}
    218         }
    219         \subfloat[][Throughput, 1 \ats per \proc]{
    220                 \resizebox{0.5\linewidth}{!}{
    221                 \input{result.yield.low.jax.ops.pstex_t}
    222                 }
    223                 \label{fig:yield:jax:low:ops}
     281                        \input{result.yield.nasus.ops.pstex_t}
     282                }
     283                \label{fig:yield:nasus:ops}
     284        }
     285        \subfloat[][Throughput, 1 \at per \proc]{
     286                \resizebox{0.5\linewidth}{!}{
     287                        \input{result.yield.low.nasus.ops.pstex_t}
     288                }
     289                \label{fig:yield:nasus:low:ops}
    224290        }
    225291
    226292        \subfloat[][Scalability, 100 \ats per \proc]{
    227293                \resizebox{0.5\linewidth}{!}{
    228                 \input{result.yield.jax.ns.pstex_t}
    229                 }
    230                 \label{fig:yield:jax:ns}
    231         }
    232         \subfloat[][Scalability, 1 \ats per \proc]{
    233                 \resizebox{0.5\linewidth}{!}{
    234                 \input{result.yield.low.jax.ns.pstex_t}
    235                 }
    236                 \label{fig:yield:jax:low:ns}
    237         }
    238         \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \ats per \proc. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    239         \label{fig:yield:jax}
    240 \end{figure}
    241 
    242 \begin{figure}
    243         \subfloat[][Throughput, 100 \ats per \proc]{
    244                 \resizebox{0.5\linewidth}{!}{
    245                         \input{result.yield.nasus.ops.pstex_t}
    246                 }
    247                 \label{fig:yield:nasus:ops}
    248         }
    249         \subfloat[][Throughput, 1 \at per \proc]{
    250                 \resizebox{0.5\linewidth}{!}{
    251                         \input{result.yield.low.nasus.ops.pstex_t}
    252                 }
    253                 \label{fig:yield:nasus:low:ops}
    254         }
    255 
    256         \subfloat[][Scalability, 100 \ats per \proc]{
    257                 \resizebox{0.5\linewidth}{!}{
    258294                        \input{result.yield.nasus.ns.pstex_t}
    259295                }
     
    266302                \label{fig:yield:nasus:low:ns}
    267303        }
    268         \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, using 1 \ats per \proc. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
 304        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, using 100 and 1 \ats per \proc. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    269305        \label{fig:yield:nasus}
    270306\end{figure}
    271 Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count on Intel.
    272 It is fairly obvious why I claim this benchmark is more artificial.
    273 The throughput is dominated by the mechanism used to handle the @yield@.
    274 \CFA does not have special handling for @yield@ but the experiment requires less synchronization.
    275 As a result achieves better performance than the cycle benchmark, but still comparable.
    276 
    277 When the number of \ats is reduce to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
    278 If anything causes a \at migration, where two \ats end-up on the same ready-queue, work-stealing will start occuring and could cause several \ats to shuffle around.
    279 In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
    280 In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
    281 However, since user-level threading with equal number of \ats and \procs is a somewhat degenerate case, especially when context-switching very often, this result is not particularly meaningful and is only included for completness.
    282 
    283 Libfibre uses the fact that @yield@ doesn't change the number of ready fibres and by-passes the idle-sleep mechanism entirely, producing significantly better throughput.
    284 Additionally, when only running 1 \at per \proc, libfibre optimizes further and forgoes the context-switch entirely.
    285 This results in incredible performance results comparing to the other runtimes.
    286 
    287 In stark contrast with libfibre, Go puts yielding goroutines on a secondary global ready-queue, giving them lower priority.
    288 The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
    289 Based on the scalability, Tokio obtains the similarly poor performance and therefore it is likely it handles @yield@ in a similar fashion.
    290 However, it must be doing something different since it does scale at low \proc count.
    291 
    292 Again, Figure~\ref{fig:yield:nasus} show effectively the same story happening on AMD as it does on Intel.
    293 \CFA fairs slightly better with many \ats per \proc, but the performance is satisfactory on both architectures.
    294 
    295 Since \CFA obtains the same satisfactory performance as the previous benchmark this is still a success, albeit a less meaningful one.
     307
 308For the AMD architecture, Figure~\ref{fig:yield:nasus}, the results show the same story as on the Intel machine, with slightly increased variation.
     309Also, some transition points on the X-axis differ because of the architectures, like at 16 versus 24 processors.
     310
 311It is difficult to draw conclusions for this benchmark when the runtime systems treat @yield@ so differently.
 312The win for \CFA is its consistency between the cycle and yield benchmarks, making it simpler for programmers to use and understand, \ie the \CFA semantics match programmer intuition.
    296313
    297314
    298315\section{Churn}
     316
299317The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
    300318In these benchmarks, \ats can be easily partitioned over the different \procs upfront and none of the \ats communicate with each other.
    301319
    302 The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no apparent relation between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
    303 With processor-specific ready-queues, when a \at is unblocked by a different \proc that means the unblocking \proc must either ``steal'' the \at from another processor or place it on a remote queue.
    304 This enqueuing results in either contention on the remote queue and/or \glspl{rmr} on the \at data structure.
    305 In either case, this benchmark aims to measure how well each scheduler handles these cases, since both cases can lead to performance degradation if not handled correctly.
 320The Churn benchmark represents more chaotic executions, where there is more communication among \ats but no relationship between the last \proc on which a \at ran and blocked, and the \proc that subsequently unblocks it.
 321With processor-specific ready-queues, when a \at is unblocked by a different \proc, the unblocking \proc must either ``steal'' the \at from another processor or find it on a remote queue.
 322This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \at data structure.
 323Hence, this benchmark has performance dominated by cache traffic, as \procs are constantly accessing each other's data.
 324In either case, this benchmark aims to measure how well a scheduler handles these situations, since both can lead to performance degradation if not handled correctly.
    306325
    307326This benchmark uses a fixed-size array of counting semaphores.
    308 Each \at picks a random semaphore, @V@s it to unblock any \at waiting, and then @P@s on the semaphore.
 327Each \at picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s (maybe blocks) on the same semaphore.
    309328This creates a flow where \ats push each other out of the semaphores before being pushed out themselves.
    310 For this benchmark to work, the number of \ats must be equal or greater than the number of semaphores plus the number of \procs.
 329For this benchmark to work, the number of \ats must be equal to or greater than the number of semaphores plus the number of \procs;
 330\eg if there are 10 semaphores and 5 \procs, but only 3 \ats, all 3 \ats can block (P) on a random semaphore and now there are no \ats to unblock (V) them.
311331Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
    312332Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.
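
In outline, each \at runs a loop like the following sketch, mirroring the per-\at code of the locality benchmark shown later (@spots@ is the fixed-size array of semaphores; @stop@ and @global.count@ are stand-ins for the experiment's termination flag and aggregated counter):
\begin{cfa}
Thread.main() {
	count := 0
	while !stop {
		r := random() % len( spots )
		spots[r].V()      // unblock any thread waiting on this semaphore
		spots[r].P()      // maybe block on the same semaphore
		count ++
	}
	global.count += count
}
\end{cfa}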
     
    328348\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
    329349\label{fig:churn:code}
     350%\end{figure}
     351\bigskip
     352%\begin{figure}
     353        \subfloat[][Throughput, 100 \ats per \proc]{
     354                \resizebox{0.5\linewidth}{!}{
     355                        \input{result.churn.jax.ops.pstex_t}
     356                }
     357                \label{fig:churn:jax:ops}
     358        }
     359        \subfloat[][Throughput, 2 \ats per \proc]{
     360                \resizebox{0.5\linewidth}{!}{
     361                        \input{result.churn.low.jax.ops.pstex_t}
     362                }
     363                \label{fig:churn:jax:low:ops}
     364        }
     365
     366        \subfloat[][Latency, 100 \ats per \proc]{
     367                \resizebox{0.5\linewidth}{!}{
     368                        \input{result.churn.jax.ns.pstex_t}
     369                }
     370                \label{fig:churn:jax:ns}
     371        }
     372        \subfloat[][Latency, 2 \ats per \proc]{
     373                \resizebox{0.5\linewidth}{!}{
     374                        \input{result.churn.low.jax.ns.pstex_t}
     375                }
     376                \label{fig:churn:jax:low:ns}
     377        }
 378        \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. For throughput, higher is better, for latency, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     379        \label{fig:churn:jax}
    330380\end{figure}
    331381
    332382\subsection{Results}
    333 \begin{figure}
    334         \subfloat[][Throughput, 100 \ats per \proc]{
    335                 \resizebox{0.5\linewidth}{!}{
    336                         \input{result.churn.jax.ops.pstex_t}
    337                 }
    338                 \label{fig:churn:jax:ops}
    339         }
    340         \subfloat[][Throughput, 2 \ats per \proc]{
    341                 \resizebox{0.5\linewidth}{!}{
    342                         \input{result.churn.low.jax.ops.pstex_t}
    343                 }
    344                 \label{fig:churn:jax:low:ops}
    345         }
    346 
    347         \subfloat[][Latency, 100 \ats per \proc]{
    348                 \resizebox{0.5\linewidth}{!}{
    349                         \input{result.churn.jax.ns.pstex_t}
    350                 }
    351                 \label{fig:churn:jax:ns}
    352         }
    353         \subfloat[][Latency, 2 \ats per \proc]{
    354                 \resizebox{0.5\linewidth}{!}{
    355                         \input{result.churn.low.jax.ns.pstex_t}
    356                 }
    357                 \label{fig:churn:jax:low:ns}
    358         }
    359         \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn on the benchmark on the Intel machine. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    360         \label{fig:churn:jax}
    361 \end{figure}
    362 
    363 \begin{figure}
    364         \subfloat[][Throughput, 100 \ats per \proc]{
    365                 \resizebox{0.5\linewidth}{!}{
    366                         \input{result.churn.nasus.ops.pstex_t}
    367                 }
    368                 \label{fig:churn:nasus:ops}
    369         }
    370         \subfloat[][Throughput, 2 \ats per \proc]{
    371                 \resizebox{0.5\linewidth}{!}{
    372                         \input{result.churn.low.nasus.ops.pstex_t}
    373                 }
    374                 \label{fig:churn:nasus:low:ops}
    375         }
    376 
    377         \subfloat[][Latency, 100 \ats per \proc]{
    378                 \resizebox{0.5\linewidth}{!}{
    379                         \input{result.churn.nasus.ns.pstex_t}
    380                 }
    381                 \label{fig:churn:nasus:ns}
    382         }
    383         \subfloat[][Latency, 2 \ats per \proc]{
    384                 \resizebox{0.5\linewidth}{!}{
    385                         \input{result.churn.low.nasus.ns.pstex_t}
    386                 }
    387                 \label{fig:churn:nasus:low:ns}
    388         }
    389         \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn on the benchmark on the AMD machine.
    390         For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
    391         \label{fig:churn:nasus}
    392 \end{figure}
    393 Figure~\ref{fig:churn:jax} and Figure~\ref{fig:churn:nasus} show the throughput as a function of \proc count on Intel and AMD respectively.
    394 It uses the same representation as the previous benchmark : 15 runs where the dashed line show the extremums and the solid line the median.
     383
 384Figures~\ref{fig:churn:jax} and~\ref{fig:churn:nasus} show the throughput on Intel and AMD, respectively.
     385
    395386The performance cost of crossing the cache boundaries is still visible at the same \proc count.
    396 However, this benchmark has performance dominated by the cache traffic as \proc are constantly accessing the eachother's data.
     387
397388Scalability is notably worse than in the previous benchmarks since there is inherently more communication between processors.
    398389Indeed, once the number of \glspl{hthrd} goes beyond a single socket, performance ceases to improve.
    399390An interesting aspect to note here is that the runtimes differ in how they handle this situation.
400391Indeed, when a \proc unparks a \at that was last run on a different \proc, the \at could be appended to the ready-queue of the local \proc or to the ready-queue of the remote \proc, which previously ran the \at.
    401 \CFA, tokio and Go all use the approach of unparking to the local \proc while Libfibre unparks to the remote \proc.
 392\CFA, Tokio and Go all use the approach of unparking to the local \proc, while libfibre unparks to the remote \proc.
402393In this particular benchmark, the inherent chaos of the execution, in addition to the small memory footprint, means neither approach wins over the other.
     394
     395\begin{figure}
     396        \subfloat[][Throughput, 100 \ats per \proc]{
     397                \resizebox{0.5\linewidth}{!}{
     398                        \input{result.churn.nasus.ops.pstex_t}
     399                }
     400                \label{fig:churn:nasus:ops}
     401        }
     402        \subfloat[][Throughput, 2 \ats per \proc]{
     403                \resizebox{0.5\linewidth}{!}{
     404                        \input{result.churn.low.nasus.ops.pstex_t}
     405                }
     406                \label{fig:churn:nasus:low:ops}
     407        }
     408
     409        \subfloat[][Latency, 100 \ats per \proc]{
     410                \resizebox{0.5\linewidth}{!}{
     411                        \input{result.churn.nasus.ns.pstex_t}
     412                }
     413                \label{fig:churn:nasus:ns}
     414        }
     415        \subfloat[][Latency, 2 \ats per \proc]{
     416                \resizebox{0.5\linewidth}{!}{
     417                        \input{result.churn.low.nasus.ns.pstex_t}
     418                }
     419                \label{fig:churn:nasus:low:ns}
     420        }
 421        \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and latency of the Churn benchmark on the AMD machine.
 422        For throughput, higher is better, for latency, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
     423        \label{fig:churn:nasus}
     424\end{figure}
    403425
404426As for the cycle benchmark, all runtimes achieve fairly similar performance here.
     
406428Beyond that, performance starts to suffer from increased caching costs.
    407429
    408 Indeed on Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 1 and 100 \ats per \proc, \CFA, libfibre, Go and tokio achieve effectively equivalent performance for most \proc count.
 430Indeed, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns} show that with 2 and 100 \ats per \proc, \CFA, libfibre, Go and Tokio achieve effectively equivalent performance for most \proc counts.
    409431
    410432However, Figure~\ref{fig:churn:nasus} again shows a somewhat different story on AMD.
    411 While \CFA, libfibre, and tokio achieve effectively equivalent performance for most \proc count, Go starts with better scaling at very low \proc counts but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
 433While \CFA, libfibre, and Tokio achieve effectively equivalent performance for most \proc counts, Go starts with better scaling at very low \proc counts but then performance quickly plateaus, resulting in worse performance at higher \proc counts.
    412434This performance difference is visible at both high and low \at counts.
    413435
     
420442A second possible explanation is that Go may sometimes use the heap when allocating variables, based on the result of escape analysis of the code.
    421443It is possible that variables that should be placed on the stack are placed on the heap.
    422 This could cause extra pointer chasing in the benchmark, heightning locality effects.
     444This could cause extra pointer chasing in the benchmark, heightening locality effects.
423445Depending on how the heap is structured, this could also lead to false sharing.
    424446
425447The objective of this benchmark is to demonstrate that unparking \ats from remote \procs does not cause too much contention on the local queues.
    426 Indeed, the fact all runtimes achieve some scaling at lower \proc count demontrate that migrations do not need to be serialized.
 448Indeed, the fact all runtimes achieve some scaling at lower \proc counts demonstrates that migrations do not need to be serialized.
427449Again, these results demonstrate \CFA achieves satisfactory performance.
    428450
    429451\section{Locality}
    430 \begin{figure}
    431 \begin{cfa}
     452
     453\begin{figure}
     454\newsavebox{\myboxA}
     455\newsavebox{\myboxB}
     456
     457\begin{lrbox}{\myboxA}
     458\begin{cfa}[tabsize=3]
    432459Thread.main() {
    433460        count := 0
     
    436463                // go through the array
    437464                @work( a )@
     465
    438466                spots[r].V()
    439467                spots[r].P()
     
    444472}
    445473\end{cfa}
    446 \begin{cfa}
     474\end{lrbox}
     475
     476\begin{lrbox}{\myboxB}
     477\begin{cfa}[tabsize=3]
    447478Thread.main() {
    448479        count := 0
     
    460491}
    461492\end{cfa}
     493\end{lrbox}
     494
     495\subfloat[Thread$_1$]{\label{f:CFibonacci}\usebox\myboxA}
     496\hspace{3pt}
     497\vrule
     498\hspace{3pt}
     499\subfloat[Thread$_2$]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     500
    462501\caption[Locality Benchmark : Pseudo Code]{Locality Benchmark : Pseudo Code}
    463502\label{fig:locality:code}
    464503\end{figure}
    465 As mentionned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.
     504
     505As mentioned in the churn benchmark, when unparking a \at, it is possible to either unpark to the local or remote ready-queue.
    466506\footnote{It is also possible to unpark to a third unrelated ready-queue, but without additional knowledge about the situation, there is little to suggest this would not degrade performance.}
    467507The locality experiment includes two variations of the churn benchmark, where an array of data is added.
468508In both variations, before @V@ing the semaphore, each \at increments random cells inside the array.
    469509The @share@ variation then passes the array to the shadow-queue of the semaphore, transferring ownership of the array to the woken thread.
    470 In the @noshare@ variation the array is not passed on and each thread continously accesses its private array.
 510In the @noshare@ variation, the array is not passed on and each thread continuously accesses its private array.
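
The difference between the two variations can be sketched as follows; the handoff calls shown here are hypothetical, what matters is only whether the array @a@ travels with the semaphore operations:
\begin{cfa}
// share: ownership of the array follows the semaphore
spots[r].V( a )      // publish the array along with the wake-up
a = spots[r].P()     // adopt the array left by the previous owner

// noshare: the array stays with the thread
spots[r].V()
spots[r].P()
work( a )            // always the thread's own private array
\end{cfa}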
    471511
472512The objective here is to highlight the different decisions made by the runtimes when unparking.
     
    480520
    481521\subsection{Results}
     522
    482523\begin{figure}
    483524        \subfloat[][Throughput share]{
     
    506547                \label{fig:locality:jax:noshare:ns}
    507548        }
    508         \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
 549        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    509550        \label{fig:locality:jax}
    510551\end{figure}
     
    535576                \label{fig:locality:nasus:noshare:ns}
    536577        }
    537         \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count. For Throughput higher is better, for Scalability lower is better. Each series represent 15 independent runs, the dotted lines are extremums while the solid line is the medium.}
 578        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represents 15 independent runs, the dotted lines are the extremes while the solid line is the median.}
    538579        \label{fig:locality:nasus}
    539580\end{figure}
    540581
    541 Figure~\ref{fig:locality:jax} and \ref{fig:locality:nasus} shows the results on Intel and AMD respectively.
 582Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results on Intel and AMD, respectively.
542583In both cases, the graphs in the left column show the results for the @share@ variation and the graphs in the right column show the results for the @noshare@ variation.
    543584
    544585On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
    545 On the left of the figure showing the results for the shared variation, where \CFA and tokio slightly outperform libfibre as expected.
    546 And correspondingly on the right, we see the expected performance inversion where libfibre now outperforms \CFA and tokio.
    547 Otherwise the results are similar to the churn benchmark, with lower throughtput due to the array processing.
 586The left column of the figure shows the results for the shared variation, where \CFA and Tokio slightly outperform libfibre, as expected.
 587Correspondingly, the right column shows the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
 588Otherwise, the results are similar to the churn benchmark, with lower throughput due to the array processing.
548589Presumably, the reasons why Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.
    549590
    550591Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
    551592\todo{why is cfa slower?}
    552 Again, we see the same story, where tokio and libfibre swap places and Go trails behind.
     593Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.
    553594
    554595\section{Transfer}
     
    572613In both flavours, the experiment effectively measures how long it takes for all \ats to run once after a given synchronization point.
    573614In an ideal scenario where the scheduler is strictly FIFO, every thread would run once after the synchronization and therefore the delay between leaders would be given by:
    574 $ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueuing and dequeuing a \at and $NP$ is the number of \procs.
     615$ \frac{CSL + SL}{NP - 1}$, where $CSL$ is the context switch latency, $SL$ is the cost for enqueueing and dequeuing a \at and $NP$ is the number of \procs.
    575616However, if the scheduler allows \ats to run many times before other \ats are able to run once, this delay will increase.
    576617The semaphore version is an approximation of the strictly FIFO scheduling, where none of the \ats \emph{attempt} to run more than once.
    577618The benchmark effectively provides the fairness guarantee in this case.
    578 In the yielding version however, the benchmark provides no such guarantee, which means the scheduler has full responsability and any unfairness will be measurable.
 619In the yielding version, however, the benchmark provides no such guarantee, which means the scheduler has full responsibility and any unfairness will be measurable.
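
To give a rough sense of scale (the numbers are hypothetical, chosen only to illustrate the formula): with a context-switch latency $CSL$ of $1~\mu$s, a scheduling cost $SL$ of $1~\mu$s, and $NP = 48$ \procs, the ideal delay between leaders would be $\frac{1 + 1}{48 - 1} \approx 0.043~\mu$s.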
    579620
    580621While this is a fairly artificial scenario, it requires only a few simple pieces.
     
    634675libfibre  & 127 $\mu$s  & ~33.5 ms   & DNC         & DNC           & 156 $\mu$s  & ~36.7 ms   & DNC         & DNC         \\
    635676Go        & 106 $\mu$s  & ~64.0 ms   & 24.6 ms     & 74.3 ms       & 271 $\mu$s  & 121.6 ms   & ~~1.21~ms   & 117.4 ms    \\
    636 tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
     677Tokio     & 289 $\mu$s  & 180.6 ms   & DNC         & DNC           & 157 $\mu$s  & 111.0 ms   & DNC         & DNC
    637678\end{tabular}
    638679\end{centering}
    639 \caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader. }
     680\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
    640681\label{fig:transfer:res}
    641682\end{figure}
    642 Figure~\ref{fig:transfer:res} shows the result for the transfer benchmark with 2 \procs and all \procs, where each experiement runs 100 \at per \proc.
     683
 684Figure~\ref{fig:transfer:res} shows the results for the transfer benchmark with 2 \procs and all \procs, where each experiment runs 100 \ats per \proc.
    643685Note that the results here are only meaningful as a coarse measurement of fairness, beyond which small cost differences in the runtime and concurrent primitives begin to matter.
    644 As such, data points that are the on the same order of magnitude as eachother should be basically considered equal.
    645 The takeaway of this experiement is the presence of very large differences.
 686As such, data points that are on the same order of magnitude as each other should basically be considered equal.
 687The takeaway from this experiment is the presence of very large differences.
    646688The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
    647689The yielding variation is denoted ``Yield''.
    648 The experiement was only run for the extremums of the number of cores since the scaling per core behaves like previous experiements.
     690The experiment was only run for the extremes of the number of cores since the scaling per core behaves like previous experiments.
649691This experiment clearly demonstrates that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
    650692The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
     
652694Figure~\ref{fig:transfer:res} shows that while Go and Tokio are slower, all runtimes achieve decent latency.
    653695However, the yielding variation shows an entirely different picture.
    654 Since libfibre and tokio have a traditional work-stealing scheduler, \procs that have \ats on their local queues will never steal from other \procs.
    655 The result is that the experiement simply does not complete for these runtime.
 696Since libfibre and Tokio have traditional work-stealing schedulers, \procs that have \ats on their local queues never steal from other \procs.
 697The result is that the experiment simply does not complete for these runtimes.
    656698Without \procs stealing from the \proc running the leader, the experiment will simply never terminate.
    657 Go manages to complete the experiement because it adds preemption on top of classic work-stealing.
     699Go manages to complete the experiment because it adds preemption on top of classic work-stealing.
658700However, since preemption is fairly costly, it achieves significantly worse performance.
    659701In contrast, \CFA achieves equivalent performance in both variations, demonstrating very good fairness.
  • doc/theses/thierry_delisle_PhD/thesis/text/existing.tex

    r4fee301 r0c40bfe  
5050It can therefore be desirable for schedulers to support \ats with identical priorities and/or to automatically set and adjust priorities for \ats.
    5151Most common operating systems use some variant on priorities with overlaps and dynamic priority adjustments.
    52 For example, Microsoft Windows uses a pair of priorities
    53 \cite{win:priority}, one specified by users out of ten possible options and one adjusted by the system.
     52For example, Microsoft Windows uses a pair of priorities~\cite{win:priority}, one specified by users out of ten possible options and one adjusted by the system.
    5453
    5554\subsection{Uninformed and Self-Informed Dynamic Schedulers}
     
    137136The scheduler may also temporarily adjust priorities after certain effects like the completion of I/O requests.
    138137
    139 In~\cite{russinovich2009windows}, Chapter 1 section ``Processes, Threads, and Jobs'' discusses the scheduling policy more in depth.
    140 Multicore scheduling is based on a combination of priorities, preferred \proc.
    141 Each \at is assigned an \newterm{ideal} \proc using a round-robin policy.
    142 \Gls{at} are distributed among the \procs according to their priority, preferring to match \ats to their ideal \proc and then to the last \proc they ran on.
    143 This is similar to a variation of work stealing, where the stealing \proc restore the \at to its original \proc after running it, but with priorities added onto the mix.
 138In~\cite{russinovich2009windows}, Chapter 1 section ``Processes, Threads, and Jobs''\todo{Look up section number.} discusses the scheduling policy in more depth.
     139Multicore scheduling is based on a combination of priorities and preferred \proc.
     140Each \at is assigned an initial processor using a round-robin policy, called the \at's \newterm{ideal} \proc.
     141\Glspl{at} are distributed among the \procs according to their priority, preferring to match \ats to their ideal \proc and then to the last \proc they ran on.
 142This approach is a variation of work stealing, where the stealing \proc restores the \at to its original \proc after running it, but mixed with priorities.
    144143
    145144\paragraph{Apple OS X}
     
    203202
    204203\paragraph{Grand Central Dispatch}
    205 An Apple\cite{apple:gcd} API that offers task parallelism~\cite{wiki:taskparallel}.
     204An Apple~\cite{apple:gcd} API that offers task parallelism~\cite{wiki:taskparallel}.
    206205Its distinctive aspect is multiple ``Dispatch Queues'', some of which are created by programmers.
    207206Each queue has its own local ordering guarantees, \eg \ats on queue $A$ are executed in \emph{FIFO} order.
    208207
    209 While the documentation only gives limited insight into the scheduling and load balancing approach, \cite{apple:gcd2} suggests an approach fairly classic;
    210 Where each \proc has a queue of \newterm{blocks} to run, effectively \ats, and they drain their respective queues in \glsxtrshort{fifo}.
    211 They seem to add the concept of dependent queues with clear ordering, where a executing a block ends-up scheduling more blocks.
    212 In terms of semantics, these Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics.
 208While the documentation only gives limited insight into the scheduling and load-balancing strategy, \cite{apple:gcd2} suggests a fairly classic approach.
 209Each \proc has a queue of \ats to run, called \newterm{blocks}, which are drained in \glsxtrshort{fifo} order.
     210\todo{update: They seem to add the concept of dependent queues with clear ordering, where executing a block ends-up scheduling more blocks.
     211In terms of semantics, these Dispatch Queues seem to be very similar to Intel\textregistered ~TBB \lstinline{execute()} and predecessor semantics.}
     212
    213213
    214214\paragraph{LibFibre}
  • doc/theses/thierry_delisle_PhD/thesis/text/io.tex

    r4fee301 r0c40bfe  
    141141In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading.
    142142However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run.
    143 This approach is used by languages like Go\cite{GITHUB:go}, frameworks like libuv\cite{libuv}, and web servers like Apache~\cite{apache} and Nginx~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems.
     143This approach is used by languages like Go~\cite{GITHUB:go}, frameworks like libuv~\cite{libuv}, and web servers like Apache~\cite{apache} and NGINX~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems.
    144144This advantage is especially relevant for languages like Go, which offer a homogeneous \glsxtrshort{api} across all platforms.
145145This is in contrast to C, which has a very limited standard \glsxtrshort{api} for \io; \eg the C standard library has no networking.
     
    151151
152152For this project, I selected @io_uring@, in large part because of its generality.
    153 While @epoll@ has been shown to be a good solution for socket \io (\cite{DBLP:journals/pomacs/KarstenB20}), @io_uring@'s transparent support for files, pipes, and more complex operations, like @splice@ and @tee@, make it a better choice as the foundation for a general \io subsystem.
 153While @epoll@ has been shown to be a good solution for socket \io (\cite{Karsten20}), @io_uring@'s transparent support for files, pipes, and more complex operations, like @splice@ and @tee@, makes it a better choice as the foundation for a general \io subsystem.
    154154
    155155\section{Event-Engine}