\documentclass[11pt]{article}
\usepackage{fullpage}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{xspace}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{epic,eepic}
\usepackage{listings}			% for code listings
\usepackage{glossaries}
\usepackage{textcomp}
% cfa macros used in the document
\input{common}

\setlist{topsep=6pt,parsep=0pt}		% global reduce spacing between points
\newcommand{\uC}{$\mu$\CC}
\usepackage[hidelinks]{hyperref}
\setlength{\abovecaptionskip}{5pt plus 3pt minus 2pt}
\lstMakeShortInline$%			% single-character for \lstinline
%\usepackage[margin=1in]{geometry}
%\usepackage{float}

\input{glossary}

\CFAStyle				% use default CFA format-style

\title{
	\Huge \vspace*{1in} The \CFA Scheduler\\
	\huge \vspace*{0.25in} PhD Comprehensive II Research Proposal
	\vspace*{1in}
}

\author{
	\huge Thierry Delisle \vspace*{5pt} \\
	\Large \texttt{tdelisle@uwaterloo.ca} \vspace*{5pt} \\
	\Large Cheriton School of Computer Science \\
	\Large University of Waterloo
}

\date{
	\today
}

\begin{document}
\maketitle
\thispagestyle{empty}
\cleardoublepage

\newcommand{\cit}{\textsuperscript{[Citation Needed]}\xspace}
\newcommand{\TODO}{{\large\bf\color{red} TODO: }\xspace}

% ===============================================================================
% ===============================================================================

\tableofcontents

% ===============================================================================
% ===============================================================================
\newpage
\section{Introduction}
\subsection{\CFA and the \CFA concurrency package}
\CFA~\cite{Moss18} is a modern, polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
It aims to add high-productivity features while maintaining the predictable performance of C.
As such, concurrency in \CFA~\cite{Delisle19} aims to offer simple and safe high-level tools while still allowing performant code.
\CFA concurrent code is written in the synchronous programming paradigm but uses \glspl{uthrd} to achieve the simplicity and maintainability of synchronous programming without sacrificing the efficiency of asynchronous programming.
As such, the \CFA \newterm{scheduler} is a preemptive user-level scheduler that maps \glspl{uthrd} onto \glspl{kthrd}.

\subsection{Scheduling}
\newterm{Scheduling} occurs when execution switches from one thread to another, where the second thread is implicitly chosen by the scheduler.
This scheduling is an indirect handoff, as opposed to generators and coroutines, which explicitly switch to the next generator or coroutine, respectively.
The cost of switching between two threads for an indirect handoff has two components:
\begin{enumerate}
\item
the cost of actually context-switching, \ie changing the relevant registers to move execution from one thread to the other,
\item
and the cost of scheduling, \ie deciding which thread to run next among all the threads ready to run.
\end{enumerate}
The first cost is generally constant\footnote{Affecting the constant context-switch cost is whether it is done in one step, where the first thread schedules the second, or in two steps, where the first thread context switches to a third scheduler thread.}, while the scheduling cost can vary based on the system state.
Adding multiple \glspl{kthrd} does not fundamentally change the scheduler semantics or requirements; it simply adds new correctness requirements, \ie \newterm{linearizability}\footnote{Meaning however fast the CPU threads run, there is an equivalent sequential order that gives the same result.}, and a new dimension to performance: scalability, where scheduling cost also depends on contention.
The more threads switch, the more the administration cost of scheduling becomes noticeable.
It is therefore important to build a scheduler with the lowest possible cost and latency.
Another important consideration is \newterm{fairness}.
In principle, scheduling should give the illusion of perfect fairness, where all threads ready to run are running \emph{simultaneously}.
In practice, there can be advantages to unfair scheduling, similar to the express cash register at a grocery store.
While the illusion of simultaneity is easier to reason about, it can break down if the scheduler allows too much unfairness.
Therefore, the scheduler should offer as much fairness as needed to guarantee eventual progress, but use unfairness to help performance.

\subsection{Research Goal}
The goal of this research is to produce a scheduler that is simple for programmers to understand and offers good general performance.
Here understandability does not refer to the API but to how much scheduling concerns programmers need to take into account when writing a \CFA concurrent package.
Therefore, the main consequence of this goal is:
\begin{quote}
The \CFA scheduler should be \emph{viable} for \emph{any} workload.
\end{quote}

For a general-purpose scheduler, it is impossible to produce an optimal algorithm as that requires knowledge of the future behaviour of threads.
As such, scheduling performance is generally either defined by a best-case scenario, \ie a workload to which the scheduler is tailored, or a worst-case scenario, \ie the scheduler behaves no worse than \emph{X}.
For this proposal, the performance is evaluated using the second approach to allow \CFA programmers to rely on scheduling performance.
Because there is no optimal scheduler, ultimately \CFA may allow programmers to write their own scheduler; but that is not the subject of this proposal, which considers only the default scheduler.
As such, only programmers with exceptionally high performance requirements should need to write their own scheduler and replace the scheduler in this proposal.

Achieving the \CFA scheduling goal includes:
\begin{enumerate}
	\item producing a scheduling strategy with sufficient fairness guarantees,
	\item creating an abstraction layer over the operating system to handle kernel-threads spinning unnecessarily,
	\item scheduling blocking I/O operations,
	\item and writing sufficient library tools to allow developers to indirectly use the scheduler, either through tuning knobs in the default scheduler or replacing the default scheduler.
\end{enumerate}

% ===============================================================================
% ===============================================================================

\section{\CFA Scheduling}
To schedule user-level threads across all workloads, the scheduler has a number of requirements:

\paragraph{Correctness} As with any other concurrent data structure or algorithm, the correctness requirement is paramount.
The scheduler cannot allow threads to be dropped from the ready queue, \ie scheduled but never run, or be executed multiple times when only being scheduled once.
Since \CFA concurrency has no spurious wake up, this definition of correctness also means the scheduler should have no spurious wake up.
The \CFA scheduler must be correct.

\paragraph{Performance} The performance of a scheduler can generally be measured in terms of scheduling cost, scalability and latency.
\newterm{Scheduling cost} is the cost to switch from one thread to another, as mentioned above.
For compute-bound concurrent applications with little context switching, the scheduling cost is negligible.
For applications with high context-switch rates, scheduling cost can begin to dominate the cost.
\newterm{Scalability} is the cost of adding multiple kernel threads.
It can increase the time for scheduling because of contention from the multiple threads accessing shared resources, \eg a single ready queue.
Finally, \newterm{tail latency} is service delay and relates to thread fairness.
Specifically, latency measures how long a thread waits to run once scheduled and is evaluated by the worst case.
The \CFA scheduler should offer good performance for all three metrics.

\paragraph{Fairness} Like performance, this requirement has several aspects: eventual progress, predictability and performance reliability.
\newterm{Eventual progress} guarantees every scheduled thread is eventually run, \ie prevents starvation.
As a hard requirement, the \CFA scheduler must guarantee eventual progress, otherwise the above-mentioned illusion of simultaneous execution is broken and the scheduler becomes much more complex to reason about.
\newterm{Predictability} and \newterm{reliability} mean similar workloads achieve similar performance so programmer execution intuition is respected.
For example, a thread that yields aggressively should not run more often than other threads.
While this is intuitive, it does not hold true for many work-stealing or feedback-based schedulers.
The \CFA scheduler must guarantee eventual progress, should be predictable, and offer reliable performance.

\paragraph{Efficiency} Finally, efficient usage of CPU resources is also an important requirement and is discussed in depth towards the end of the proposal.
\newterm{Efficiency} means avoiding using CPU cycles when there are no threads to run (to conserve energy), and conversely, using as many of the available CPU cycles as possible when the workload can benefit from them.
Balancing these two states is where the complexity lies.
The \CFA scheduler should be efficient with respect to the underlying (shared) computer.

\bigskip To achieve these requirements, I can reject two broad types of scheduling strategies: feedback-based and priority schedulers.

\subsection{Feedback-Based Schedulers}
Many operating systems use schedulers based on feedback in some form, \eg measuring how much CPU a particular thread has used\footnote{Different metrics can be measured but it is not relevant to the discussion.} and scheduling threads based on this metric.
These strategies are sensible for operating systems but rely on two assumptions for the workload:

\begin{enumerate}
	\item Threads live long enough for useful feedback information to be gathered.
	\item Threads belong to multiple users so fairness across users is important.
\end{enumerate}

While these two assumptions generally hold for operating systems, they may not for user-level threading.
Since \CFA has the explicit goal of allowing many smaller threads, this can naturally lead to threads with much shorter lifetimes that are only scheduled a few times.
Scheduling strategies based on feedback cannot be effective in these cases because there is no opportunity to measure the metrics that underlie the algorithm.
Note, the problem of \newterm{feedback convergence} (reacting too slowly to scheduling events) is not specific to short-lived threads but can also occur with threads that show drastic changes in scheduling, \eg threads running for long periods of time and then suddenly blocking and unblocking quickly and repeatedly.

In the context of operating systems, these concerns can be overshadowed by a more pressing concern: security.
When multiple users are involved, it is possible some users are malevolent and try to exploit the scheduling strategy to achieve some nefarious objective.
Security concerns mean more precise and robust fairness metrics must be used to guarantee fairness across processes created by users as well as threads created within a process.
In the case of the \CFA scheduler, every thread runs in the same user space and is controlled by the same user.
Fairness across users is therefore a given and it is then possible to safely ignore the possibility that threads are malevolent.
This approach allows for a much simpler fairness metric, and in this proposal, \emph{fairness} is defined as:
\begin{quote}
When multiple threads are cycling through the system, the total ordering of threads being scheduled, \ie pushed onto the ready queue, should not differ much from the total ordering of threads being executed, \ie popped from the ready queue.
\end{quote}

Since feedback is not necessarily feasible within the lifetime of all threads and a simple fairness metric can be used, the scheduling strategy proposed for the \CFA runtime does not use per-thread feedback.
Feedback in general is not rejected for secondary concerns like idle sleep for kernel threads, but no feedback is used to decide which thread to run next.

\subsection{Priority Schedulers}
Another broad category of schedulers is priority schedulers.
In these scheduling strategies, threads have priorities and the runtime schedules the threads with the highest priority before scheduling other threads.
Threads with equal priority are scheduled using a secondary strategy, often something simple like round robin or FIFO.
A consequence of priority is that, as long as there is a thread with a higher priority that desires to run, a thread with a lower priority does not run.
The potential for thread starvation dramatically increases programming complexity since starving threads and priority inversion (prioritizing a lower-priority thread) can both lead to serious problems.

An important observation is that threads do not need to have explicit priorities for problems to occur.
Indeed, any system with multiple ready queues that attempts to exhaust one queue before accessing the other queues essentially provides implicit priority, which can encounter starvation problems.
For example, a popular scheduling strategy that suffers from implicit priorities is work stealing.
\newterm{Work stealing} is generally presented as follows:
\begin{enumerate}
	\item Each processor has a list of ready threads.
	\item Each processor runs threads from its ready queue first.
	\item If a processor's ready queue is empty, attempt to run threads from some other processor's ready queue.
\end{enumerate}
In a loaded system\footnote{A \newterm{loaded system} is a system where threads are being run at the same rate they are scheduled.}, if a thread does not yield, block, or preempt for an extended period of time, threads on the same processor's list starve if no other processors exhaust their list.
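
For illustration, a simplified sketch of this strategy is shown below; the types and helper functions are hypothetical and do not correspond to the \CFA runtime:
\begin{lstlisting}
// Sketch of the work-stealing pop described above (hypothetical types and names).
// Exhausting the local queue before touching other queues is what creates the
// implicit priority discussed next.
thread_t * pop_work( processor_t * proc, cluster_t * cluster ) {
	thread_t * t = pop( &proc->ready );                  // 1. run local threads first
	if( t ) return t;
	for( unsigned i = 0; i < cluster->proc_count; i++ ) { // 2. otherwise, try to steal
		processor_t * victim = cluster->procs[ i ];
		if( victim == proc ) continue;
		t = steal( &victim->ready );
		if( t ) return t;
	}
	return NULL;                                         // 3. nothing to run anywhere
}
\end{lstlisting}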

Since priorities can be complex for programmers to incorporate into their execution intuition, the \CFA scheduling strategy does not provide explicit priorities and attempts to eliminate implicit priorities.

\subsection{Schedulers without feedback or priorities}
This proposal conjectures that it is possible to construct a default scheduler for the \CFA runtime that offers good scalability and a simple fairness guarantee that is easy for programmers to reason about.
The simplest fairness guarantee is FIFO ordering, \ie threads scheduled first run first.
However, enforcing FIFO ordering generally conflicts with scalability across multiple processors because of the additional synchronization.
Thankfully, strict FIFO is not needed for sufficient fairness.
Since concurrency is inherently non-deterministic, fairness concerns in scheduling are only a problem if a thread repeatedly runs before another thread can run.
Some relaxation is possible because non-determinism means programmers already handle ordering problems to produce correct code and hence rely on weak guarantees, \eg that a thread \emph{eventually} runs.
Since some reordering does not break correctness, the FIFO fairness guarantee can be significantly relaxed without causing problems.
For this proposal, the target guarantee is that the \CFA scheduler provides \emph{probable} FIFO ordering, which allows reordering but makes it improbable that threads are reordered far from their position in total ordering.

The \CFA scheduler fairness is defined as follows:
\begin{quote}
Given two threads $X$ and $Y$, the odds that thread $X$ runs $N$ times \emph{after} thread $Y$ is scheduled but \emph{before} it is run decrease exponentially with regard to $N$.
\end{quote}
While this is not a bounded guarantee, the probability that unfairness persists for long periods of time decreases exponentially, making persistent unfairness virtually impossible.
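
For illustration only, this guarantee can be viewed as requiring a bound of the following form, where the exact constants are not specified by this proposal:
\[
\Pr[\, X \mbox{ runs } N \mbox{ times while } Y \mbox{ is ready but not yet run} \,] \le c \cdot \rho^{N}, \qquad c > 0, \quad 0 < \rho < 1 .
\]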

% ===============================================================================
% ===============================================================================
\section{Proposal Details}

\subsection{Central Ready Queue} \label{sec:queue}
A central ready queue can be built from a FIFO queue, where user threads are pushed onto the queue when they are ready to run, and processors (kernel-threads acting as virtual processors) pop the user threads from the queue and execute them.
Alistarh \etal~\cite{alistarh2018relaxed} show it is straightforward to build a relaxed FIFO list that is fast and scalable for loaded or overloaded systems.
The described queue uses an array of underlying strictly FIFO queues as shown in Figure~\ref{fig:base}\footnote{For this section, the number of underlying queues is assumed to be constant.
Section~\ref{sec:resize} discusses resizing the array.}.
Pushing new data is done by selecting one of the underlying queues at random, recording a timestamp for the operation, and pushing to the selected queue.
Popping is done by selecting two queues at random and popping from the queue with the oldest timestamp.
A higher number of underlying queues leads to less contention on each queue and therefore better performance.
In a loaded system, it is highly likely the queues are non-empty, \ie several threads are on each of the underlying queues.
For this case, selecting a queue at random to pop from is highly likely to yield a queue with available items.
In Figure~\ref{fig:base}, ignoring the ellipsis, the chance of getting an empty queue is 2/7 per pick, meaning two random picks yield an item approximately 9 times out of 10.
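
A minimal sketch of these push and pop operations is shown below; the type and helper names are hypothetical, and the synchronization on the individual FIFO queues is omitted:
\begin{lstlisting}
// Sketch of the relaxed FIFO operations described above (hypothetical names,
// per-queue locking and memory ordering omitted for brevity).
void push( ready_queue_t * rq, thread_t * t ) {
	unsigned i = random() % rq->count;          // select one underlying queue at random
	t->timestamp = timestamp();                 // record when the thread was scheduled
	push_fifo( &rq->queues[ i ], t );           // strictly FIFO push on that queue
}

thread_t * pop( ready_queue_t * rq ) {
	for( ;; ) {                                 // retry until an item is found
		unsigned i = random() % rq->count;      // select two queues at random
		unsigned j = random() % rq->count;
		// prefer the queue whose head has the oldest timestamp
		if( head_timestamp( &rq->queues[ j ] ) < head_timestamp( &rq->queues[ i ] ) ) i = j;
		thread_t * t = pop_fifo( &rq->queues[ i ] );
		if( t ) return t;
		if( ready_queue_empty( rq ) ) return NULL;  // nothing to run anywhere
	}
}
\end{lstlisting}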

\begin{figure}
	\begin{center}
		\input{base.pstex_t}
	\end{center}
	\caption{Loaded relaxed FIFO list based on an array of strictly FIFO lists.
	A timestamp appears in each node and array cell.}
	\label{fig:base}
\end{figure}

\begin{figure}
	\begin{center}
		\input{empty.pstex_t}
	\end{center}
	\caption{Underloaded relaxed FIFO list where the array contains many empty cells.}
	\label{fig:empty}
\end{figure}

In an underloaded system, several of the queues are empty, so selecting a random queue for popping is less likely to yield a successful selection and more attempts are needed, resulting in a performance degradation.
Figure~\ref{fig:empty} shows an example with fewer elements, where the chance of getting an empty queue is 5/7 per pick, meaning two random picks yield an item only half the time.
Since the ready queue is not empty, the pop operation \emph{must} find an element before returning and therefore must retry.
Note, the popping kernel thread has no work to do, but CPU cycles are wasted for both available user and kernel threads during the pop operation as the popping thread is using a CPU.
Overall performance is therefore influenced by the contention on the underlying queues, and pop performance is influenced by the item density.
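
The success rates quoted above follow directly from making two independent picks, where 2/7 and 5/7 are the per-pick probabilities of selecting an empty cell in Figures~\ref{fig:base} and~\ref{fig:empty} respectively:
\[
1 - (2/7)^{2} = 45/49 \approx 0.92
\qquad \mbox{versus} \qquad
1 - (5/7)^{2} = 24/49 \approx 0.49 .
\]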

This leads to four performance cases for the centralized ready queue, as depicted in Table~\ref{tab:perfcases}.
The number of processors (many or few) refers to the number of kernel threads \emph{actively} attempting to pop user threads from the queues, not the total number of kernel threads.
The number of threads (many or few) refers to the number of user threads ready to be run.
Many threads means they outnumber processors significantly and most underlying queues have items; few threads means there are barely more threads than processors and most underlying queues are empty.
Cases with fewer threads than processors are discussed in Section~\ref{sec:sleep}.

\begin{table}
	\begin{center}
		\begin{tabular}{|r|l|l|}
			\cline{2-3}
			\multicolumn{1}{r|}{} & \multicolumn{1}{c|}{Many Processors} & \multicolumn{1}{c|}{Few Processors} \\
			\hline
			Many Threads & A: good performance & B: good performance \\
			\hline
			Few Threads  & C: worst performance & D: poor performance \\
			\hline
		\end{tabular}
	\end{center}
	\caption{Expected performance of the relaxed FIFO list in different cases.}
	\label{tab:perfcases}
\end{table}

Performance can be improved in Table~\ref{tab:perfcases} case~D by adding information to help processors find which inner queues are used.
This addition aims to avoid the cost of retrying the pop operation but does not affect contention on the underlying queues and can incur some management cost for both push and pop operations.
The approach used to encode this information can vary in density and be either global or local.
\newterm{Density} means the information is either packed in a few cachelines or spread across several cachelines, and \newterm{local information} means each thread uses an independent copy instead of a single global, \ie common, source of information.

For example, Figure~\ref{fig:emptybit} shows a dense bitmask to identify which inner queues are currently in use.
This approach means processors can often find user threads in constant time, regardless of how many underlying queues are empty.
Furthermore, modern x86 CPUs have extended bit-manipulation instructions (BMI2) that allow using the bitmask with very little overhead compared to the randomized selection approach for a filled ready queue, offering good performance even in cases with many empty inner queues.
However, this technique has its limits: with a single word\footnote{Word refers here to however many bits can be written atomically.} bitmask, the total number of underlying queues in the ready queue is limited to the number of bits in the word.
With a multi-word bitmask, this maximum limit can be increased arbitrarily, but it is not possible to check if the queue is empty by reading the bitmask atomically.
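
The following sketch illustrates how a single-word bitmask could be queried with a count-trailing-zeros builtin, assuming at most 64 underlying queues; the names are hypothetical and the staleness issues discussed next are ignored:
\begin{lstlisting}
#include <stdatomic.h>
#include <stdlib.h>

// Sketch: find a non-empty underlying queue using a single-word bitmask.
// A set bit i means queues[i] is believed to be non-empty; the information
// can already be stale by the time the selected queue is popped.
int find_nonempty( _Atomic unsigned long long * mask, unsigned count ) {
	unsigned long long snap = atomic_load( mask );            // one cacheline read
	if( snap == 0 ) return -1;                                // ready queue appears empty
	unsigned start = random() % count;                        // randomize to spread contention
	// rotate the snapshot so the search starts at a random position
	unsigned long long rot = (snap >> start) | (snap << ((64 - start) & 63));
	return (__builtin_ctzll( rot ) + start) % 64;             // index of the first set bit found
}
\end{lstlisting}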

Finally, a dense bitmap, either single or multi-word, causes additional problems in Table~\ref{tab:perfcases} case~C, because many processors are continuously scanning the bitmask to find the few available threads.
This increased contention on the bitmask(s) reduces performance because of cache misses after updates, and because the bitmask is updated more frequently by the scanning processors racing to read and/or update that information.
This increased update frequency means the information in the bitmask is more often stale before a processor can use it to find an item, \ie the mask read says there are available user threads but none are on the queue.

\begin{figure}
	\begin{center}
		{\resizebox{0.73\textwidth}{!}{\input{emptybit}}}
	\end{center}
	\vspace*{-5pt}
	\caption{Underloaded queue with added bitmask to indicate which array cells have items.}
	\label{fig:emptybit}
	\begin{center}
		{\resizebox{0.73\textwidth}{!}{\input{emptytree}}}
	\end{center}
	\vspace*{-5pt}
	\caption{Underloaded queue with added binary search tree to indicate which array cells have items.}
	\label{fig:emptytree}
	\begin{center}
		{\resizebox{0.9\textwidth}{!}{\input{emptytls}}}
	\end{center}
	\vspace*{-5pt}
	\caption{Underloaded queue with added per-processor bitmask to indicate which array cells have items.}
	\label{fig:emptytls}
\end{figure}

Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention, which has been shown to work in similar cases~\cite{ellen2007snzi}.
However, this approach may lead to poorer performance in Table~\ref{tab:perfcases} case~B due to the inherent pointer-chasing cost and the already low contention cost in that case.

Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but where each thread keeps its own independent copy of it.
While this approach can offer good scalability \emph{and} low latency, the liveness of the information can become a problem.
In the simple cases, local copies can become stale and end up not being useful for the pop operation.
A more serious problem is that reliable information is necessary for some parts of this algorithm to be correct.
As mentioned in this section, processors must know \emph{reliably} whether the list is empty or not to decide if they can return \texttt{NULL} or if they must keep looking during a pop operation.
Section~\ref{sec:sleep} discusses another case where reliable information is required for the algorithm to be correct.

There is a fundamental tradeoff among these approaches.
Dense global information about empty underlying queues helps zero-contention cases at the cost of the high-contention case.
Sparse global information helps high-contention cases but increases latency in zero-contention cases to read and ``aggregate'' the information\footnote{Hierarchical structures, \eg binary search tree, effectively aggregate information but follow pointer chains, learning information at each node.
Similarly, other sparse schemes need to read multiple cachelines to acquire all the information needed.}.
Finally, dense local information has both the advantages of low latency in zero-contention cases and scalability in high-contention cases.
However, the information can become stale, making it difficult to use to ensure correctness.
The fact that these solutions have these fundamental limits suggests to me that a better solution should attempt to combine these properties in an interesting way.
Also, the lock discussed in Section~\ref{sec:resize} allows for solutions that adapt to the number of processors, which could also prove useful.

\paragraph{Objectives and Existing Work}

How much scalability is actually needed is highly debatable.
\emph{libfibre}~\cite{libfibre} has compared favourably to other schedulers in webserver tests~\cite{Karsten20} and uses a single atomic counter in its scheduling algorithm similarly to the proposed bitmask.
As such, the single atomic instruction on a shared cacheline may be sufficiently performant.

I have built a prototype of this ready queue in the shape of a data queue, \ie nodes on the queue are structures with a single $int$ representing a thread and intrusive data fields.
Using this prototype, preliminary performance experiments confirm the expected performance in Table~\ref{tab:perfcases}.
However, these experiments only offer a hint at the actual performance of the scheduler since threads are involved in more complex operations, \eg threads are not independent of each other: when a thread blocks some other thread must intervene to wake it.

I have also integrated this prototype into the \CFA runtime, but have not yet created performance experiments to compare results, as creating one-to-one comparisons between the prototype and the \CFA runtime will be complex.

\subsection{Dynamic Resizing} \label{sec:resize}

\begin{figure}
	\begin{center}
		\input{system.pstex_t}
	\end{center}
	\caption{Global structure of the \CFA runtime system.}
	\label{fig:system}
\end{figure}

The \CFA runtime system groups processors together as \newterm{clusters}, as shown in Figure~\ref{fig:system}.
Threads on a cluster are always scheduled on one of the processors of the cluster.
Currently, the runtime handles dynamically adding and removing processors from clusters at any time.
Since this feature is part of the existing design, the proposed scheduler must also support this behaviour.
However, dynamically resizing a cluster is considered a rare event associated with setup, tear down and major configuration changes.
This assumption is made both in the design of the proposed scheduler as well as in the original design of the \CFA runtime system.
As such, the proposed scheduler must honour the correctness of this behaviour but does not have any performance objectives with regard to resizing a cluster.
That is, the time to add or remove processors and how much this disrupts the performance of other threads is considered a secondary concern since it should be amortized over long periods of time.
However, as mentioned in Section~\ref{sec:queue}, contention on the underlying queues can have a direct impact on performance.
The number of underlying queues must therefore be adjusted as the number of processors grows or shrinks.
Since the underlying queues are stored in a dense array, changing the number of queues requires resizing the array and expanding the array requires moving it, which can introduce memory reclamation problems if not done correctly.

\begin{figure}
	\begin{center}
		\input{resize}
	\end{center}
	\caption{Copy of data structure shown in Figure~\ref{fig:base}.}
	\label{fig:base2}
\end{figure}

It is important to note how the array is used in this case.
While the array cells are modified by every push and pop operation, the array itself, \ie the pointer that would change when resized, is only read during these operations.
Therefore the use of this pointer can be described as frequent reads and infrequent writes.
This description effectively matches the use case of a reader-writer lock: infrequent but invasive updates among frequent read operations.
In the case of the ready queue described above, read operations are operations that push or pop from the ready queue but do not invalidate any references to the ready queue data structures.
Writes, on the other hand, would add or remove inner queues, invalidating references to the array of inner queues in the process.
Therefore, the current proposed approach to this problem is to add a per-cluster reader-writer lock around the ready queue to prevent restructuring of the ready-queue data-structure while threads are being pushed or popped.
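
A sketch of how this lock wraps the two kinds of operations is shown below; the names are hypothetical and the lock itself is the custom lock discussed in the following paragraphs:
\begin{lstlisting}
// Sketch: per-cluster reader-writer lock protecting the array of inner queues.
// push/pop are "reads" (they never move the array); resizing is a "write".
void schedule_thread( cluster_t * cl, thread_t * t ) {
	read_lock( &cl->array_lock );        // many processors may push/pop concurrently
	push( &cl->ready_queue, t );
	read_unlock( &cl->array_lock );
}

void add_processor( cluster_t * cl, processor_t * p ) {
	write_lock( &cl->array_lock );       // exclusive: no push/pop while the array moves
	grow( &cl->ready_queue );            // may reallocate and move the array of queues
	register_processor( cl, p );
	write_unlock( &cl->array_lock );
}
\end{lstlisting}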

There are possible alternatives to the reader-writer lock solution.
This problem is effectively a memory reclamation problem and as such there is a large body of research on the subject~\cite{brown2015reclaiming, michael2004hazard}.
However, the reader-writer lock solution is simple and can be leveraged to solve other problems (\eg processor ordering and memory reclamation of threads), which makes it an attractive solution.

\paragraph{Objectives and Existing Work}
The lock must offer scalability and performance on par with the actual ready queue in order not to introduce a new bottleneck.
I have already built a lock that fits the desired requirements and preliminary testing shows scalability and performance that exceed the target.
As such, I do not consider this lock to be a risk for this project.

\subsection{Idle Sleep} \label{sec:sleep}

\newterm{Idle sleep} is the process of putting processors to sleep when they have no threads to execute.
In this context, processors are kernel threads and sleeping refers to asking the kernel to block a thread.
This operation can be achieved with either thread synchronization operations like $pthread_cond_wait$ or using signal operations like $sigsuspend$.
The goal of putting idle processors to sleep is:
\begin{enumerate}
\item
reduce contention on the ready queue, since the otherwise idle processors generally contend trying to pop items from the queue,
\item
give back unneeded CPU time associated with a process to other user processors executing on the computer,
\item
and reduce energy consumption in cases where more idle kernel-threads translate into idle CPUs, which can cycle down.
\end{enumerate}
Support for idle sleep broadly involves calling the operating system to block the kernel thread, handling the race between a blocking processor and a waking processor, and deciding which kernel thread should sleep or wake up.

When a processor decides to sleep, there is a race that occurs between it signalling that it is going to sleep (so other processors can find sleeping processors) and actually blocking the kernel thread.
This operation is equivalent to the classic problem of missing signals when using condition variables: the ``sleepy'' processor indicates its intention to block but has not yet gone to sleep when another processor attempts to wake it up.
The waking-up operation sees the blocking processor and signals it, but that processor is racing to sleep so the signal is missed.
In cases where kernel threads are managed as processors on the current cluster, losing signals is not necessarily critical, because at least some processors on the cluster are awake and may check for more processors eventually.
Individual processors always finish scheduling user threads before looking for new work, which means that the last processor to go to sleep cannot miss threads scheduled from inside the cluster (if they do, that demonstrates the ready queue is not linearizable).
However, this guarantee does not hold if threads are scheduled from outside the cluster, either due to an external event like timers and I/O, or due to a user (or kernel) thread migrating from a different cluster.
In this case, missed signals can lead to the cluster deadlocking\footnote{Clusters should only deadlock in cases where a \CFA programmer \emph{actually} writes \CFA code that leads to a deadlock.}.
Therefore, it is important that the scheduling of threads include a mechanism where signals \emph{cannot} be missed.
For performance reasons, it can be advantageous to have a secondary mechanism that allows signals to be missed in cases where it cannot lead to a deadlock.
To be safe, this process must include a ``handshake'' where it is guaranteed that either:
\begin{enumerate}
\item
the sleeping processor notices that a user thread is scheduled after the sleeping processor signalled its intent to block or
\item
code scheduling threads sees the intent to sleep before scheduling and be able to wake-up the processor.
\end{enumerate}
This matter is complicated by the fact that pthreads and Linux offer few tools to implement this solution and no guarantee of ordering of threads waking up for most of these tools.
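
One possible shape for this handshake is sketched below using an atomic flag and pthread primitives; the field and helper names are hypothetical, and the sketch glosses over the ordering limitations just mentioned:
\begin{lstlisting}
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

// Sketch of the sleep/wake handshake (hypothetical fields, simplified).
typedef struct {
	_Atomic bool want_sleep;            // published before re-checking the ready queue
	pthread_mutex_t lock;
	pthread_cond_t wake;
} idle_proc_t;

void go_to_sleep( idle_proc_t * p, cluster_t * cl ) {
	atomic_store( &p->want_sleep, true );        // 1. publish the intent to block
	if( ! ready_queue_empty( cl ) ) {            // 2. re-check after publishing
		atomic_store( &p->want_sleep, false );   //    a thread arrived: abort the sleep
		return;
	}
	pthread_mutex_lock( &p->lock );
	while( atomic_load( &p->want_sleep ) )       // guard against spurious wake-ups
		pthread_cond_wait( &p->wake, &p->lock );
	pthread_mutex_unlock( &p->lock );
}

void wake_processor( idle_proc_t * p ) {         // called by code scheduling a thread
	if( atomic_exchange( &p->want_sleep, false ) ) {  // saw the intent: wake it up
		pthread_mutex_lock( &p->lock );
		pthread_cond_signal( &p->wake );
		pthread_mutex_unlock( &p->lock );
	}
}
\end{lstlisting}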

Another important issue is avoiding kernel threads sleeping and waking frequently because there is a significant operating-system cost.
This scenario happens when a program oscillates between high and low activity, needing most and then few processors.
A possible partial solution is to order the processors so that the one which most recently went to sleep is woken up.
This allows other sleeping processors to reach deeper sleep states (when available) while keeping ``hot'' processors warmer.
Note that while this generally means organizing the processors in a stack, I believe that the unique index provided in my reader-writer lock can be reused to strictly order the waking processors, causing a mostly LIFO order.
While a strict LIFO stack is probably better, the processor index could prove useful for other reasons, while still offering a sufficiently LIFO ordering.

A final important aspect of idle sleep is when processors should make the decision to sleep and when it is appropriate for sleeping processors to be woken up.
Processors that are unnecessarily unblocked lead to unnecessary contention, CPU usage, and power consumption, while too many sleeping processors can lead to suboptimal throughput.
Furthermore, transitions from sleeping to awake and vice versa also add unnecessary latency.
There is already a wealth of research on the subject~\cite{schillings1996engineering, wiki:thunderherd} and I may use an existing approach for the idle-sleep heuristic in this project, \eg~\cite{Karsten20}.

\subsection{Asynchronous I/O}

The final aspect of this proposal is asynchronous I/O.
Without it, user threads that execute I/O operations block the underlying kernel thread, which leads to poor throughput.
It is preferable to block the user thread performing the I/O and reuse the underlying kernel-thread to run other ready user threads.
This approach requires intercepting user-thread calls to I/O operations, redirecting them to an asynchronous I/O interface, and handling the multiplexing/demultiplexing between the synchronous and asynchronous API.
As such, there are three components needed to implement support for asynchronous I/O:
\begin{enumerate}
\item
an OS abstraction layer over the asynchronous interface,
\item
an event-engine to (de)multiplex the operations,
\item
and a synchronous interface for users.
\end{enumerate}
None of these components currently exist in \CFA and I will need to build all three for this project.

\paragraph{OS Asynchronous Abstraction}
One fundamental part for converting blocking I/O operations into non-blocking is having an underlying asynchronous I/O interface to direct the I/O operations.
While there exist many different APIs for asynchronous I/O, it is not part of this proposal to create a novel API.
It is sufficient to make one work in the complex context of the \CFA runtime.
\uC uses $select$~\cite{select} as its interface, which handles ttys, pipes and sockets, but not disk.
$select$ entails significant complexity and is being replaced in UNIX operating systems, which makes it a less interesting alternative.
Another popular interface is $epoll$~\cite{epoll}, which is supposed to be cheaper than $select$.
However, $epoll$ also does not handle the file system and anecdotal evidence suggests it has problems with Linux pipes and ttys.
A popular cross-platform alternative is $libuv$~\cite{libuv}, which offers asynchronous sockets and asynchronous file system operations (among other features).
However, as a full-featured library it includes much more than I need and could conflict with other features of \CFA unless significant effort is made to merge them together.
A very recent alternative that I am investigating is $io_uring$~\cite{io_uring}.
It claims to address some of the issues with $epoll$ and my early investigation suggests that the claim is accurate.
$io_uring$ uses a much more general approach where system calls are registered to a queue and later executed by the kernel, rather than relying on system calls to support returning an error instead of blocking.
I believe this approach allows for fewer problems, \eg the manpage for $open$~\cite{open} states:
\begin{quote}
Note that [the $O_NONBLOCK$ flag] has no effect for regular files and block devices;
that is, I/O operations will (briefly) block when device activity is required, regardless of whether $O_NONBLOCK$ is set.
Since $O_NONBLOCK$ semantics might eventually be implemented, applications should not depend upon blocking behaviour when specifying this flag for regular files and block devices.
\end{quote}
This makes approaches based on $select$/$epoll$ less reliable since they may not work for every file descriptor.
For this reason, I plan to use $io_uring$ as the OS abstraction for the \CFA runtime unless further work encounters a fatal problem.
However, only a small subset of the features are available in Ubuntu as of April 2020~\cite{wiki:ubuntu-linux}, which will limit performance comparisons.
I do not believe this will affect the comparison result.
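
For reference, the following is a minimal example of submitting and reaping a single read through $io_uring$ using the $liburing$ helper library; the integration with the runtime is hypothetical, only the $liburing$ calls themselves are real:
\begin{lstlisting}
#include <liburing.h>
#include <sys/uio.h>

// Sketch: one read submitted and reaped through io_uring via liburing.
// In the runtime, submission and completion would be batched by the event engine.
int uring_read( struct io_uring * ring, int fd, void * buf, unsigned len ) {
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct io_uring_sqe * sqe = io_uring_get_sqe( ring );   // grab a submission slot
	io_uring_prep_readv( sqe, fd, &iov, 1, 0 );             // describe the operation
	io_uring_sqe_set_data( sqe, buf );                      // tag it to find the waiter later
	io_uring_submit( ring );                                // hand the request to the kernel

	struct io_uring_cqe * cqe;
	io_uring_wait_cqe( ring, &cqe );                        // wait for a completion entry
	int res = cqe->res;                                     // bytes read or negative errno
	io_uring_cqe_seen( ring, cqe );                         // mark the completion consumed
	return res;
}
\end{lstlisting}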

\paragraph{Event Engine}
Above the OS asynchronous abstraction is the event engine.
This engine is responsible for multiplexing (batching) the synchronous I/O requests into asynchronous I/O requests and demultiplexing the results to appropriate blocked user threads.
This step can be straightforward for simple cases, but becomes quite complex when there are thousands of user threads performing both reads and writes, possibly on overlapping file descriptors.
Decisions that need to be made include the following; a rough sketch of such an engine appears after the list:
\begin{enumerate}
\item
whether to poll from a separate kernel thread or a regularly scheduled user thread,
\item
what should be the ordering used when results satisfy many requests,
\item
how to handle threads waiting for multiple operations, etc.
\end{enumerate}
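
A very rough sketch of such an engine, with hypothetical $park$/$unpark$ operations on user threads and none of the above decisions settled, could look as follows:
\begin{lstlisting}
// Sketch of the (de)multiplexing loop (hypothetical names, error handling omitted).
// User threads submit a request, park, and are unparked when their completion arrives.
typedef struct {
	int result;                     // filled in from the completion entry
	thread_t * waiter;              // user thread blocked on this operation
} io_request_t;

void io_poller_loop( struct io_uring * ring ) {
	for( ;; ) {
		io_uring_submit( ring );                          // 1. batch pending requests to the kernel
		struct io_uring_cqe * cqe;
		io_uring_wait_cqe( ring, &cqe );                  // 2. wait for at least one completion
		do {
			io_request_t * req = io_uring_cqe_get_data( cqe );
			req->result = cqe->res;                       // 3. record the result for the waiter
			unpark( req->waiter );                        // 4. reschedule the blocked user thread
			io_uring_cqe_seen( ring, cqe );
		} while( io_uring_peek_cqe( ring, &cqe ) == 0 );  // drain any further completions
	}
}
\end{lstlisting}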

\paragraph{Interface}
Finally, for these non-blocking I/O components to be available, it is necessary to expose them through a synchronous interface because that is the \CFA concurrent programming style.
The interface can be novel but it is preferable to match the existing POSIX interface when possible to be compatible with existing code.
Matching allows C programs written using this interface to be transparently converted to \CFA with minimal effort.
Where new functionality is needed, I will add novel interface extensions to fill gaps and provide advanced features.
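
For example, a POSIX-shaped $read$ wrapper could simply build a request, hand it to the event engine, and block the calling user thread; the helper names below are hypothetical:
\begin{lstlisting}
#include <sys/types.h>

// Sketch: synchronous, POSIX-compatible wrapper over the asynchronous engine.
// The calling user thread blocks, but its kernel thread is free to run other user threads.
ssize_t cfa_read( int fd, void * buf, size_t count ) {
	io_request_t req = { .waiter = active_thread() };  // hypothetical bookkeeping
	submit_read( &req, fd, buf, count );               // hand the request to the event engine
	park();                                            // block this user thread only
	return req.result;                                 // filled in by the engine before unpark
}
\end{lstlisting}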


% ===============================================================================
% ===============================================================================
\section{Discussion}
I believe that runtime systems and scheduling are still open topics.
Many ``state of the art'' production frameworks still use single-threaded event loops because of performance considerations, \eg~\cite{nginx-design}, and, to my knowledge, no widely available system language offers modern threading facilities.
I believe the proposed work offers a novel runtime and scheduling package, where existing work only offers fragments that users must assemble themselves when possible.

% ===============================================================================
% ===============================================================================
\section{Timeline}
\begin{center}
\begin{tabular}{ | r @{--} l | p{4in} | }
\hline May 2020 & October 2020   & Creation of the performance benchmark. \\
\hline November 2020 & March 2021   & Completion of the implementation. \\
\hline March 2021 & April 2021  & Final performance experiments. \\
\hline May 2021 & August 2021 & Thesis writing and defence. \\
\hline
\end{tabular}
\end{center}

% B I B L I O G R A P H Y
% -----------------------------
\cleardoublepage
\phantomsection		% allows hyperref to link to the correct page
\addcontentsline{toc}{section}{\refname}
\bibliographystyle{plain}
\bibliography{pl,local}

% G L O S S A R Y
% -----------------------------
\cleardoublepage
\phantomsection		% allows hyperref to link to the correct page
\addcontentsline{toc}{section}{Glossary}
\printglossary

\end{document}