\chapter{Scheduling Core}\label{core}

Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state. For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers. In short, the system is neither overloaded nor underloaded.

It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state tend to be pervasive in all states.

\section{Design Goals}
As with most of the design decisions behind \CFA, an important goal is to match the expectation of the programmer according to their execution mental-model. To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental-model, the system also respects this model.

For threading, a simple and common execution mental-model is the ``Ideal multi-tasking CPU'':

\begin{displayquote}[Linux CFS\cit{https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt}]
{[The]} ``Ideal multi-tasking CPU'' is a (non-existent :-)) CPU that has 100\% physical power and which can run each task at precise equal speed, in parallel, each at [an equal fraction of the] speed. For example: if there are 2 tasks running, then it runs each at 50\% physical power --- i.e., actually in parallel.
\label{q:LinuxCFS}
\end{displayquote}

Applied to threads, this model states that every ready \gls{thrd} immediately runs in parallel with all other ready \glspl{thrd}. While a strict implementation of this model is not feasible, programmers still have expectations about scheduling that come from this model.

In general, the expectation at the center of this model is that ready \glspl{thrd} do not interfere with each other but simply share the hardware. This assumption makes it easier to reason about threading because ready \glspl{thrd} can be thought of in isolation and the effect of the scheduler can be virtually ignored. This expectation of \gls{thrd} independence means the scheduler is expected to offer two guarantees:
\begin{enumerate}
\item A fairness guarantee: a \gls{thrd} that is ready to run is not prevented from doing so by another thread.
\item A performance guarantee: a \gls{thrd} that wants to start or stop running is not prevented by other threads wanting to do the same.
\end{enumerate}

It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.

Similarly, the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show that the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall that the programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.

More precisely, the scheduler should be:
\begin{itemize}
\item As fast as other schedulers that are less fair.
\item Faster than other schedulers that have equal or better fairness.
\end{itemize}

\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
An important performance factor in modern architectures is cache locality. Waiting for data at lower levels or not present in the cache can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.

For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as a consequence cache lines, to a \gls{hthrd} that is currently available.

However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally; Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as the delay experienced by threads does not exceed the expectations of the execution mental-model.

\begin{figure}
\centering
\input{fairness.pstex_t}
\vspace*{-10pt}
\caption[Fairness vs Locality graph]{Rule of thumb Fairness vs Locality graph \smallskip\newline As the time a ready \gls{thrd} waits to run (Ready Time) increases, the chance that its data is still in cache (Locality) decreases. At the same time, the need for fairness increases, since other \glspl{thrd} may have had the chance to run many times, breaking the fairness model. Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
\label{fig:fair}
\end{figure}

\subsection{Performance Challenges}\label{pref:challenge}
While there exists a multitude of potential scheduling algorithms, they generally all have to contend with the same performance challenges. Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.

\subsubsection{Scalability}
The most basic performance challenge of a scheduler is scalability.
Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structure of the scheduler can lead to smaller improvements.
While the ready-queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
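
As an illustration of this last point, consider counting ready \ats: a single shared counter serializes every enqueue and dequeue on one cache line, whereas a sharded counter avoids that contention at the cost of a slower, approximate read. The following C sketch is only illustrative; the names and sizes are assumptions and it is not the \CFA runtime code.

\begin{verbatim}
// Sketch: sharding a "ready count" so enqueues/dequeues on different
// processors do not contend on a single cache line.
#include <stdatomic.h>

#define NPROCS 64
#define CACHE_LINE 128

struct shard { _Alignas(CACHE_LINE) atomic_long count; };
static struct shard ready_count[NPROCS];      // one shard per processor

// Called by processor 'id' on enqueue/dequeue: touches only its own line.
static inline void ready_inc(int id) {
    atomic_fetch_add_explicit(&ready_count[id].count, 1, memory_order_relaxed);
}
static inline void ready_dec(int id) {
    atomic_fetch_sub_explicit(&ready_count[id].count, 1, memory_order_relaxed);
}

// Reads are rare and may be approximate: sum all the shards.
static long ready_total(void) {
    long sum = 0;
    for (int i = 0; i < NPROCS; i++)
        sum += atomic_load_explicit(&ready_count[i].count, memory_order_relaxed);
    return sum;
}
\end{verbatim}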

\subsubsection{Migration Cost}
Another important source of latency in scheduling is migration.
An \at is said to have migrated if it is executed by two different \procs consecutively, which is the process discussed in \ref{fairnessvlocal}.
Migrations can have many different causes, but in certain programs it can be all but impossible to limit migrations.
Chapter~\ref{microbench}, for example, has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating more often than not.
Because of this, it is important to design the internal data structures of the scheduler to limit the latency penalty from migrations.

\section{Inspirations}
In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is that adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue: create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.

Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions which served as inspiration for the scheduler in this thesis.

\subsection{Work-Stealing}

As mentioned in \ref{existing:workstealing}, a popular pattern to shard the ready-queue is work-stealing.
In this pattern each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
The interesting aspect of work-stealing happens in the easier scheduling cases, \ie enough work for everyone but no more, and no load balancing needed.
In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.
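
The classic pattern can be sketched as follows; this is the generic work-stealing policy described above, not the \CFA scheduler, and the helper routines are assumed to exist.

\begin{verbatim}
// Sketch of the classic work-stealing dequeue policy: a processor uses its
// own queue while it has work and only looks at victims once it runs dry.
struct thread;                                        // opaque ready thread
struct queue;                                         // some sub-ready-queue
extern struct thread * pop_local(struct queue *);     // assumed helpers
extern struct thread * steal(struct queue *);
extern unsigned random_victim(unsigned self, unsigned nprocs);

struct thread * ws_dequeue(struct queue ** queues,
                           unsigned self, unsigned nprocs) {
    struct thread * t = pop_local(queues[self]);      // fast path: local work
    if (t) return t;
    // Slow path: load-balancing is only attempted once the local queue is empty.
    for (unsigned tries = 0; tries < nprocs; tries++) {
        t = steal(queues[random_victim(self, nprocs)]);
        if (t) return t;
    }
    return 0;                                         // no work found anywhere
}
\end{verbatim}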

Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.

\subsection{Relaxed-FIFO}
An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgoes any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick.
\Glspl{proc} choose ready-queues at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeuing the oldest element.
All subqueues are protected by TryLocks and \procs simply pick a different subqueue if they fail to acquire the TryLock.
The result is a queue that has both decent scalability and sufficient fairness.
The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
This contrasts with work-stealing, where \emph{any} \gls{proc} busy for an extended period of time results in all the elements on its local queue having to wait, unless another \gls{proc} runs out of work.
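
A dequeue in this scheme might look roughly like the following sketch, which is based purely on the description above (the peek of the unlocked heads is simplified and the helper names are assumptions):

\begin{verbatim}
// Sketch of a relaxed-FIFO dequeue: pick two random subqueues, prefer the one
// whose head has the oldest timestamp, and skip subqueues whose TryLock fails.
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

struct thread { uint64_t timestamp; /* ... */ };
struct subqueue {
    pthread_mutex_t lock;
    struct thread * head;                       // oldest element, 0 if empty
};
extern struct thread * pop_head(struct subqueue *);   // assumed helper

struct thread * rfifo_dequeue(struct subqueue * queues, unsigned nqueues) {
    unsigned i = rand() % nqueues, j = rand() % nqueues;
    // Prefer the subqueue whose head has waited longest (smaller timestamp).
    uint64_t ti = queues[i].head ? queues[i].head->timestamp : UINT64_MAX;
    uint64_t tj = queues[j].head ? queues[j].head->timestamp : UINT64_MAX;
    struct subqueue * q = (ti <= tj) ? &queues[i] : &queues[j];
    if (pthread_mutex_trylock(&q->lock) != 0)
        return 0;                               // contended: caller retries
    struct thread * t = pop_head(q);            // 0 if the subqueue emptied
    pthread_mutex_unlock(&q->lock);
    return t;
}
\end{verbatim}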

An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been on the queue.
However, another major aspect is that \glspl{proc} will eagerly search for these older elements instead of focusing on specific queues.

While the fairness of this scheme is good, it does suffer in terms of performance.
It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.
\section{Relaxed-FIFO++}
Since it has inherent fairness qualities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears to be a good candidate to form the basis of a scheduler.
The most obvious problem is for workloads where the number of \ats is barely greater than the number of \procs.
In these situations, the wide sharding means most of the sub-queues from which the relaxed queue is formed will be empty.
The consequence is that when a dequeue operation attempts to pick a sub-queue at random, it is likely that it picks an empty sub-queue and will have to pick again.
This problem can repeat an unbounded number of times.

As this is the most obvious challenge, it is worth addressing first.
The obvious solution is to supplement the subqueues with some sharded data structure that keeps track of which subqueues are empty.
This data structure can take many forms, for example a simple bitmask or a binary tree that tracks which branches are empty.
Walking a binary tree on each pick has fairly good Big-O complexity and many modern architectures have powerful bitmask-manipulation instructions.
However, precisely tracking which sub-queues are empty is actually fundamentally problematic.
The reason is that the subqueues are already a form of sharding and the sharding width has presumably already been chosen to avoid contention.
However, tracking which subqueues are empty is only useful if the tracking mechanism uses denser sharding than the subqueues, in which case it invariably creates a new source of contention.
But if the tracking mechanism is not denser than the subqueues, then it is generally not useful because reading this new data structure risks being as costly as simply picking a subqueue at random.
Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.
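
For reference, the shared-bitmask variant of this tracking idea, which suffers from exactly the contention problem just described, could be sketched as follows (the code is illustrative, using one global 64-bit mask, and is not part of the \CFA runtime):

\begin{verbatim}
// Sketch of globally tracking non-empty subqueues with one shared bitmask.
// The find-first-set instruction makes the lookup cheap, but the shared mask
// itself becomes a new point of contention on every enqueue/dequeue.
#include <stdatomic.h>
#include <stdint.h>

static atomic_uint_fast64_t nonempty_mask;   // supports up to 64 subqueues

static inline void mark_nonempty(unsigned idx) {
    atomic_fetch_or_explicit(&nonempty_mask, UINT64_C(1) << idx,
                             memory_order_release);
}
static inline void mark_empty(unsigned idx) {
    atomic_fetch_and_explicit(&nonempty_mask, ~(UINT64_C(1) << idx),
                              memory_order_release);
}
// Returns the index of some non-empty subqueue, or -1 if all appear empty.
static inline int find_nonempty(void) {
    uint64_t m = atomic_load_explicit(&nonempty_mask, memory_order_acquire);
    return m ? __builtin_ctzll(m) : -1;      // GCC/Clang builtin
}
\end{verbatim}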

The exception to this rule is using local tracking.
If each \proc keeps track locally of which sub-queues are empty, then this can be done with a very dense data structure without introducing a new source of contention.
The consequence of local tracking, however, is that the information is not complete.
Each \proc is only aware of the last state it saw for each subqueue and does not have any information about freshness.
Even on systems with a low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
This is due in part to the cost of maintaining this information and to its poor quality.

However, using a very low cost approach to local tracking may actually be beneficial.
If the local tracking is no more costly than the random pick, then \emph{any} improvement to the success rate, however low it is, would lead to a performance benefit.
This leads to the following approach:

\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, for example some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
Such a \glsxtrshort{prng} can be used as follows:

Each \proc maintains two \glsxtrshort{prng} states, which will be referred to as \texttt{F} and \texttt{B}.

When a \proc attempts to dequeue a \at, it picks the subqueue by running \texttt{B} backwards.
When a \proc attempts to enqueue a \at, it runs \texttt{F} forward to pick the subqueue to enqueue to.
If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.
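
A minimal sketch of this scheme is shown below. It assumes a 64-bit LCG (the constants are the well-known MMIX ones); the multiplier's modular inverse, needed to step backwards, is computed with a short Newton iteration. The helper names are illustrative, not the \CFA identifiers.

\begin{verbatim}
// Sketch of a bidirectional LCG and the F/B states described above.
#include <stdint.h>

#define LCG_A UINT64_C(6364136223846793005)   // MMIX multiplier
#define LCG_C UINT64_C(1442695040888963407)   // MMIX increment

// Modular inverse of an odd multiplier modulo 2^64, by Newton iteration.
static uint64_t inv64(uint64_t a) {
    uint64_t x = a;                               // correct to 3 bits
    for (int i = 0; i < 5; i++) x *= 2 - a * x;   // doubles correct bits
    return x;
}

static uint64_t lcg_forward (uint64_t * s) { return *s = LCG_A * *s + LCG_C; }
static uint64_t lcg_backward(uint64_t * s) { return *s = inv64(LCG_A) * (*s - LCG_C); }

struct proc_rng { uint64_t F, B; };           // per-processor states

// Enqueue: run F forward to pick the subqueue; on success copy F into B.
static unsigned pick_enqueue(struct proc_rng * r, unsigned nqueues) {
    return lcg_forward(&r->F) % nqueues;
}
static void enqueue_succeeded(struct proc_rng * r) { r->B = r->F; }

// Dequeue: run B backwards, revisiting the subqueues this proc last enqueued to.
static unsigned pick_dequeue(struct proc_rng * r, unsigned nqueues) {
    return lcg_backward(&r->B) % nqueues;
}
\end{verbatim}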

The result is that each \proc will tend to dequeue \ats that it has itself enqueued.
When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.

However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
This becomes problematic both because the scheduler is likely to get cache misses on internal data-structures and because migrations become very frequent.
Therefore, since the approach of modifying the relaxed-FIFO algorithm to behave more like work-stealing does not seem to pan out, the alternative is to do it the other way around.
\section{Work Stealing++}
To add stronger fairness guarantees to work-stealing, a few changes are needed.
First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
Therefore the work-stealing algorithm must be prepended with some monitoring.
Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to make sure remote queues are not being neglected.
To make this possible, \procs must be able to determine which \at has been on the ready-queue the longest.
This is the second aspect that must be added.
The relaxed-FIFO approach uses timestamps for each \at and this is also what is done here.
\begin{figure}
\centering
\input{base.pstex_t}
\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-ready-queues offers the sharding, two per \gls{proc}. Each \gls{proc} has local subqueues; however, \glspl{proc} can access any of the sub-queues. Each \at is timestamped when enqueued.}
\label{fig:base}
\end{figure}
The algorithm is structured as shown in Figure~\ref{fig:base}.
This is very similar to classic work-stealing except the local queues are placed in an array so \procs can access each other's queues in constant time.
Sharding width can be adjusted based on need.
When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeuing from the remote queue if needed.
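
A na\"{i}ve rendering of this dequeue policy might look like the following sketch (illustrative only; locking and empty-queue handling are elided and the helper names are assumptions):

\begin{verbatim}
// Sketch: help one random remote subqueue if its head has waited longer
// (i.e., has a smaller timestamp) than the head of the local subqueue.
#include <stdint.h>
#include <stdlib.h>

struct thread { uint64_t timestamp; /* ... */ };
struct subqueue { struct thread * head; /* lock, links, ... */ };
extern struct thread * try_pop(struct subqueue *);    // assumed helper

struct thread * dequeue(struct subqueue * queues,
                        unsigned local, unsigned nqueues) {
    unsigned remote = rand() % nqueues;
    uint64_t tl = queues[local ].head ? queues[local ].head->timestamp : UINT64_MAX;
    uint64_t tr = queues[remote].head ? queues[remote].head->timestamp : UINT64_MAX;
    if (tr < tl) {                                    // remote head is older
        struct thread * t = try_pop(&queues[remote]); // help the remote queue
        if (t) return t;
    }
    return try_pop(&queues[local]);                   // otherwise use local work
}
\end{verbatim}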

Implemented as naively stated above, this approach has some obvious performance problems.
First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
The effect of these bursts would be to cause more migrations than needed and make this work-stealing approach slow down to match the relaxed-FIFO approach.
\begin{figure}
\centering
\input{base_avg.pstex_t}
\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each subqueue.}
\label{fig:base-ma}
\end{figure}

A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead of the raw timestamps, shown in Figure~\ref{fig:base-ma}.
Note that this is slightly more complex than it sounds because, since the \at at the head of a subqueue is still waiting, its wait time has not ended.
Therefore the exponential moving average is actually an exponential moving average of how long each already dequeued \at has waited.
To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
This new wait time is averaged with the stored average.
To further limit the amount of unnecessary migration, a bias can be added to the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
None of the experiments I have run with these schedulers seem to indicate that the choice of the weight for the moving average or the choice of bias is particularly important.
Weights and biases of similar \emph{magnitudes} have similar effects.
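
The comparison just described can be sketched as follows; the weight and bias values are placeholders, not measured choices, and the structure is illustrative rather than the \CFA data layout.

\begin{verbatim}
// Sketch of the moving-average comparison with a bias towards the local queue.
#include <stdint.h>

#define ALPHA 0.5   // weight of the newest sample in the moving average
#define BIAS  2.0   // remote queue helped only if its average is BIAS x larger

struct subqueue_stats {
    uint64_t head_ts;    // timestamp of the thread currently at the head
    double   avg_wait;   // exponential moving average of completed waits
};

// Called when a thread is dequeued: fold its completed wait into the average.
static void update_average(struct subqueue_stats * s, uint64_t now) {
    s->avg_wait = ALPHA * (double)(now - s->head_ts) + (1.0 - ALPHA) * s->avg_wait;
}

// Decide whether a remote subqueue should be helped over the local one.
// The head is still waiting, so its best-case wait (now - head_ts) is folded in.
static int should_help(const struct subqueue_stats * local,
                       const struct subqueue_stats * remote, uint64_t now) {
    double l = ALPHA * (double)(now - local ->head_ts) + (1.0 - ALPHA) * local ->avg_wait;
    double r = ALPHA * (double)(now - remote->head_ts) + (1.0 - ALPHA) * remote->avg_wait;
    return r > BIAS * l;
}
\end{verbatim}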

With these additions to work-stealing, scheduling can be made as fair as the relaxed-FIFO approach, while avoiding the majority of unnecessary migrations.
Unfortunately, the performance of this approach does suffer in the cases with no risk of starvation.
The problem is that the constant polling of remote subqueues generally entails a cache miss.
To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued and dequeued from it, the higher the chance that polling it incurs a cache miss.
Conversely, the active subqueues do not benefit much from helping since starvation is already a non-issue.
This puts this algorithm in an awkward situation where it is paying a cost, but the cost itself suggests the operation was unnecessary.
The good news is that this problem can be mitigated.
\subsection{Redundant Timestamps}
The problem with polling remote queues stems from a tension between the strict consistency requirements on the subqueues and the much weaker requirements of the polling decision.
For the subqueues, correctness is critical. There must be a consensus among \procs on which subqueues hold which \ats.
Since the timestamps are used for fairness, it is also important to have consensus on which \at is the oldest.
However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
Since the only need is that a subqueue will eventually be polled, some data staleness can be acceptable.
This leads to a tension where stale timestamps are only problematic in some cases.
Furthermore, stale timestamps can be somewhat desirable since lower freshness requirements mean less pressure on the cache-coherence protocol.

\begin{figure}
\centering
% \input{base_ts2.pstex_t}
\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written to with relaxed atomics, without fencing, leading to fewer cache invalidations.}
\label{fig:base-ts2}
\end{figure}
A solution to this is to create a second array containing a copy of the timestamps and averages.
This copy is updated \emph{after} the subqueue's critical sections using relaxed atomics.
\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
The result is that since there is no fencing, the writes can be buffered and cause fewer cache invalidations.
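
The idea can be sketched as follows; the layout and names are illustrative assumptions, not the actual \CFA data structures.

\begin{verbatim}
// Sketch of redundant timestamps: the authoritative timestamp lives with the
// subqueue and is protected by its lock, while a copy in a separate array is
// published with relaxed atomics after the critical section. Remote polling
// reads only the copy, so it never forces ordering on the subqueue itself.
#include <stdatomic.h>
#include <stdint.h>

#define NQUEUES 256

struct subqueue { /* lock, head, ... */ uint64_t head_ts; };
static struct subqueue queues[NQUEUES];
static _Atomic uint64_t ts_copy[NQUEUES];   // redundant, possibly stale copies

// Called after the subqueue's critical section: publish without fencing.
static void publish_timestamp(unsigned idx) {
    atomic_store_explicit(&ts_copy[idx], queues[idx].head_ts,
                          memory_order_relaxed);
}

// Remote polling decision reads only the (possibly stale) copy.
static uint64_t peek_timestamp(unsigned idx) {
    return atomic_load_explicit(&ts_copy[idx], memory_order_relaxed);
}
\end{verbatim}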

The correctness argument here is somewhat subtle.
The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
Therefore, it is acceptable if stale data makes queues appear older than they really are, but not fresher.
For the timestamps, this means that missed writes to the timestamp are acceptable since they make the head \at look older.
For the moving average, as long as the operations are RW-safe, the average is guaranteed to yield a value that is between the oldest and newest values written.
Therefore this unprotected read of the timestamp and average satisfies the limited correctness that is required.
\subsection{Per CPU Sharding}

\subsection{Topological Work Stealing}
