\chapter{Scheduling Core}\label{core}
2
3Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state.
4For this purpose, a steady state is loosely defined as a state where there are always \ats ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers.
5In short, the system is neither overloaded nor underloaded.
6
7It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected.
8As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new \gls{load} and return to the steady state, \eg, by adding or removing workers.
9Therefore, flaws in scheduling the steady state tend to be pervasive in all states.
10
11\section{Design Goals}
12As with most of the design decisions behind \CFA, an important goal is to match the expectation of the programmer according to their execution mental model.
13To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental model, the system also respects this model.
14
15For threading, a simple and common execution mental model is the ``ideal multitasking CPU'':
16
17\begin{displayquote}[Linux CFS\cite{MAN:linux/cfs}]
18        {[The]} ``ideal multi-tasking CPU'' is a (non-existent  :-)) CPU that has 100\% physical power and which can run each task at precise equal speed, in parallel, each at [an equal fraction of the] speed.  For example: if there are 2 running tasks, then it runs each at 50\% physical power --- i.e., actually in parallel.
19        \label{q:LinuxCFS}
20\end{displayquote}
21
22Applied to \ats, this model states that every ready \at immediately runs in parallel with all other ready \ats. While a strict implementation of this model is not feasible, programmers still have expectations about scheduling that come from this model.
23
24In general, the expectation at the centre of this model is that ready \ats do not interfere with each other but simply share the hardware.
25This assumption makes it easier to reason about threading because ready \ats can be thought of in isolation and the effect of the scheduler can be virtually ignored.
26This expectation of \at independence means the scheduler is expected to offer two guarantees:
27\begin{enumerate}
	\item A fairness guarantee: a \at that is ready to run is not prevented from running by another thread.
	\item A performance guarantee: a \at that wants to start or stop running is not prevented from doing so by other threads wanting to do the same.
30\end{enumerate}
31
32It is important to note that these guarantees are expected only up to a point.
33\Glspl{at} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources.
34Therefore, the guarantee is considered respected if a \at gets access to a \emph{fair share} of the hardware resources, even if that share is very small.
35
36Similar to the performance guarantee, the lack of interference among threads is only relevant up to a point.
37Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention.
What constitutes an acceptable cost is obviously highly variable.
39For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages.
40This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models.
41Recall programmer expectation is that the impact of the scheduler can be ignored.
42Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
43More precisely the scheduler should be:
44\begin{itemize}
45        \item As fast as other schedulers that are less fair.
46        \item Faster than other schedulers that have equal or better fairness.
47\end{itemize}
48
49\subsection{Fairness Goals}
50For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.
51
52\paragraph{True starvation freedom} means as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually, \ie, eventual progress.
53In any running system, a \proc can stop dequeuing \ats if it starts running a \at that never blocks.
54Without preemption, traditional work-stealing schedulers do not have starvation freedom in this case.
This requirement raises the question: what about preemption?
Generally speaking, preemption happens on the timescale of several milliseconds, which leads to the next requirement: ``fast'' load balancing.
57
58\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
59For interactive applications that need to run at 60, 90 or 120 frames per second, \ats having to wait for several milliseconds to run are effectively starved.
60Therefore load-balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
61With that said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats and the general \gls{load} of the system.
62
63\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
64An important performance factor in modern architectures is cache locality.
Waiting for data that is only present in lower levels of the cache hierarchy, or not cached at all, can have a major impact on performance.
Having multiple \glspl{hthrd} write to the same cache lines also leads to stalls, as the cache lines must be transferred between the \glspl{hthrd}.
67It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.
68
69For a scheduler, having good locality, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness.
Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \at, and as a consequence its cache lines, to a \gls{hthrd} that is currently available.
71Note that this section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how scheduling affects the locality of the application's data.
72External locality is a much more complicated subject and is discussed in the next section.
73
74However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally.
75Figure~\ref{fig:fair} shows a visual representation of this behaviour.
As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as the delay experienced by a \at does not break the execution mental model.
77
78\begin{figure}
79        \centering
80        \input{fairness.pstex_t}
81        \vspace*{-10pt}
	\caption[Fairness vs Locality graph]{Rule-of-thumb Fairness vs Locality graph \smallskip\newline The importance of Fairness and Locality while a ready \at awaits running: as the time the ready \at waits increases (Ready Time), the chance that its data is still in cache decreases (Locality).
83        At the same time, the need for fairness increases since other \ats may have the chance to run many times, breaking the fairness model.
84        Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
85        \label{fig:fair}
86\end{figure}
87
88\subsection{Performance Challenges}\label{pref:challenge}
While there exists a multitude of potential scheduling algorithms, they generally have to contend with the same performance challenges.
Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.
91
92\subsubsection{Scalability}
93The most basic performance challenge of a scheduler is scalability.
94Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
95One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structure of the scheduler can diminish the improvements.
96While the ready queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
97
98\subsubsection{Migration Cost}
99Another important source of scheduling latency is \glslink{atmig}{migration}.
A \at migrates if it executes on two different \procs consecutively, the phenomenon discussed in Section~\ref{fairnessvlocal}.
Migrations can have many different causes, but in certain programs it can be impossible to limit them.
102Chapter~\ref{microbench} has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating frequently.
103Hence, it is important to design the internal data structures of the scheduler to limit any latency penalty from migrations.
104
105
106\section{Inspirations}
107In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance.
108The problem is a single point of contention when adding/removing \ats.
109As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}.
The solution to this problem is to shard the ready queue: create multiple \emph{sub-queues} that together form the logical ready queue and that multiple \glspl{hthrd} can access without interfering with each other.
111
Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as inspiration for the scheduler in this thesis.
113
114\subsection{Work-Stealing}
115
116As mentioned in \ref{existing:workstealing}, a popular sharding approach for the ready queue is work-stealing.
117In this approach, each \gls{proc} has its own local sub-queue and \glspl{proc} only access each other's sub-queue if they run out of work on their local ready-queue.
118The interesting aspect of work stealing happens in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
119In this case, work stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
120On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
121This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
122Chapter~\ref{microbench} shows that, in pathological cases, work stealing can lead to indefinite starvation.
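To make the approach concrete, a minimal sketch of a work-stealing dequeue is shown below; @pop@, @prng@, the sub-queue array and the type names are illustrative rather than the interface of any particular runtime.
\begin{cfa}
// Work-stealing dequeue sketch: use the local sub-queue while it has work,
// and only steal from a randomly chosen victim once it runs dry.
Thread * ws_dequeue( Proc * proc ) {
	Thread * t = pop( subqueues[ proc->id ] );   // steady state: local work only
	while( ! t ) {                               // out of work => load balance
		unsigned victim = prng( proc ) % NUM_SUBQUEUES;
		t = pop( subqueues[ victim ] );          // steal from a random victim
	}
	return t;
}
\end{cfa}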
123
124Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, but load balance proactively when the need arises.
125
126\subsection{Relaxed-FIFO}
127A different scheduling approach is to create a ``relaxed-FIFO'' queue, as in \cite{alistarh2018relaxed}.
128This approach forgoes any ownership between \gls{proc} and sub-queue, and simply creates a pool of sub-queues from which \glspl{proc} pick.
129Scheduling is performed as follows:
130\begin{itemize}
131\item
132All sub-queues are protected by TryLocks.
133\item
134Timestamps are added to each element of a sub-queue.
135\item
136A \gls{proc} randomly tests sub-queues until it has acquired one or two queues.
137\item
If two queues are acquired, the older of the two \ats at the front of the acquired queues is dequeued.
139\item
140Otherwise, the \at from the single queue is dequeued.
141\end{itemize}
142The result is a queue that has both good scalability and sufficient fairness.
143The lack of ownership ensures that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely any element will delay longer than any other element.
144This guarantee contrasts with work-stealing, where a \gls{proc} with a long sub-queue results in unfairness for its \ats in comparison to a \gls{proc} with a short sub-queue.
145This unfairness persists until a \gls{proc} runs out of work and steals.
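A minimal sketch of the relaxed-FIFO dequeue follows; @trylock@, @head_ts@, @pop@ and the sub-queue array are illustrative helpers rather than the interface of \cite{alistarh2018relaxed}.
\begin{cfa}
// Relaxed-FIFO dequeue sketch: try-lock up to two random sub-queues and
// dequeue the older of the two heads (smaller timestamp = enqueued earlier).
Thread * rfifo_dequeue( Proc * proc ) {
	for( ;; ) {
		unsigned i = prng( proc ) % NUM_SUBQUEUES;
		unsigned j = prng( proc ) % NUM_SUBQUEUES;
		if( ! trylock( subqueues[i] ) ) continue;        // retry on contention
		Thread * t;
		if( ! trylock( subqueues[j] ) ) {
			t = pop( subqueues[i] );                     // only one queue acquired
			unlock( subqueues[i] );
		} else {
			unsigned older = head_ts( subqueues[i] ) <= head_ts( subqueues[j] ) ? i : j;
			t = pop( subqueues[older] );                 // take the longest-waiting head
			unlock( subqueues[i] );  unlock( subqueues[j] );
		}
		if( t ) return t;                                // empty queue(s): pick again
	}
}
\end{cfa}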
146
147An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been in the queue.
148However, \glspl{proc} eagerly search for these older elements instead of focusing on specific queues, which negatively affects locality.
149
150While this scheme has good fairness, its performance suffers.
151It requires wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues is difficult when there are few ready \ats.
152
153\section{Relaxed-FIFO++}
154The inherent fairness and good performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
155The problem case is workloads where the number of \ats is barely greater than the number of \procs.
156In these situations, the wide sharding of the ready queue means most of its sub-queues are empty.
157Furthermore, the non-empty sub-queues are unlikely to hold more than one item.
158The consequence is that a random dequeue operation is likely to pick an empty sub-queue, resulting in an unbounded number of selections.
159This state is generally unstable: each sub-queue is likely to frequently toggle between being empty and nonempty.
160Indeed, when the number of \ats is \emph{equal} to the number of \procs, every pop operation is expected to empty a sub-queue and every push is expected to add to an empty sub-queue.
161In the worst case, a check of the sub-queues sees all are empty or full.
162
163As this is the most obvious challenge, it is worth addressing first.
164The obvious solution is to supplement each sharded sub-queue with data that indicates if the queue is empty/nonempty to simplify finding nonempty queues, \ie ready \glspl{at}.
165This sharded data can be organized in different forms, \eg a bitmask or a binary tree that tracks the nonempty sub-queues.
Specifically, many modern architectures have powerful bitmask-manipulation instructions, and searching a binary tree has good Big-O complexity.
167However, precisely tracking nonempty sub-queues is problematic.
168The reason is that the sub-queues are initially sharded with a width presumably chosen to avoid contention.
169However, tracking which ready queue is nonempty is only useful if the tracking data is dense, \ie denser than the sharded sub-queues.
170Otherwise, it does not provide useful information because reading this new data structure risks being as costly as simply picking a sub-queue at random.
But if the tracking mechanism \emph{is} denser than the sharded sub-queues, then constant updates invariably create a new source of contention.
172Early experiments with this approach showed that randomly picking, even with low success rates, is often faster than bit manipulations or tree walks.
173
174The exception to this rule is using local tracking.
175If each \proc locally keeps track of empty sub-queues, then this can be done with a very dense data structure without introducing a new source of contention.
176However, the consequence of local tracking is that the information is incomplete.
177Each \proc is only aware of the last state it saw about each sub-queue so this information quickly becomes stale.
178Even on systems with low \gls{hthrd} count, \eg 4 or 8, this approach can quickly lead to the local information being no better than the random pick.
This result is due in part to the cost of maintaining the information and in part to its poor quality.
180
181However, using a very low-cost but inaccurate approach for local tracking can still be beneficial.
182If the local tracking is no more costly than a random pick, then \emph{any} improvement to the success rate, however low it is, leads to a performance benefit.
183This suggests the following approach:
184
185\subsection{Dynamic Entropy}\cite{xkcd:dynamicentropy}
186The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
187The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
188While this concept is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the sub-queues that were accessed.
189Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, \eg some Linear Congruential Generators\cite{wiki:lcg} support running the algorithm backwards while offering good quality and performance.
190This particular \glsxtrshort{prng} can be used as follows:
191\begin{itemize}
192\item
193Each \proc maintains two \glsxtrshort{prng} states, referred to as $F$ and $B$.
194\item
195When a \proc attempts to dequeue a \at, it picks a sub-queue by running $B$ backwards.
196\item
197When a \proc attempts to enqueue a \at, it runs $F$ forward picking a sub-queue to enqueue to.
198If the enqueue is successful, state $B$ is overwritten with the content of $F$.
199\end{itemize}
200The result is that each \proc tends to dequeue \ats that it has itself enqueued.
201When most sub-queues are empty, this technique increases the odds of finding \ats at a very low cost, while also offering an improvement on locality in many cases.
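The following sketch illustrates the idea with a 64-bit Linear Congruential Generator; the constants are Knuth's well-known multiplier and increment, the inverse multiplier is computed with a standard Newton--Hensel iteration, and all names and the sharding width are illustrative rather than the values used in the \CFA runtime.
\begin{cfa}
#include <stdint.h>

enum { NUM_SUBQUEUES = 64 };                        // illustrative sharding width

static const uint64_t A = 6364136223846793005ULL;  // odd multiplier (Knuth)
static const uint64_t C = 1442695040888963407ULL;  // increment

// modular inverse of an odd 64-bit value (Newton-Hensel iteration)
static inline uint64_t inv( uint64_t a ) {
	uint64_t x = a;                                 // correct to 3 bits for any odd a
	for( int i = 0; i < 5; i += 1 ) x *= 2 - a * x; // each step doubles the correct bits
	return x;
}

static inline uint64_t fwd( uint64_t s ) { return s * A + C; }            // next state
static inline uint64_t bck( uint64_t s ) { return (s - C) * inv( A ); }   // previous state

struct Proc { uint64_t F, B; };                     // forward and backward PRNG states

unsigned pick_push( struct Proc * p ) {             // choose sub-queue for an enqueue
	p->F = fwd( p->F );
	p->B = p->F;           // simplified: the real scheme copies B := F only on success
	return p->F % NUM_SUBQUEUES;
}

unsigned pick_pop( struct Proc * p ) {              // choose sub-queue for a dequeue
	unsigned q = p->B % NUM_SUBQUEUES;              // revisit the most recent enqueue target
	p->B = bck( p->B );                             // next attempt walks further back
	return q;
}
\end{cfa}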
202
203Tests showed this approach performs better than relaxed-FIFO in many cases.
204However, it is still not competitive with work-stealing algorithms.
205The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
206This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become frequent.
207Therefore, the attempt to modify the relaxed-FIFO algorithm to behave more like work stealing did not pan out.
208The alternative is to do it the other way around.
209
210\section{Work Stealing++}\label{helping}
211To add stronger fairness guarantees to work stealing a few changes are needed.
212First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all sub-queues.
213Therefore, the work-stealing algorithm must be prepended with some monitoring.
214Before attempting to dequeue from a \proc's sub-queue, the \proc must make some effort to ensure other sub-queues are not being neglected.
215To make this possible, \procs must be able to determine which \at has been on the ready queue the longest.
Second, the timestamps kept by the relaxed-FIFO approach for each \at are needed to make this determination possible.
217
218\begin{figure}
219        \centering
220        \input{base.pstex_t}
221        \caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-queues offers the sharding, two per \proc.
222        Each \gls{proc} can access all of the sub-queues.
223        Each \at is timestamped when enqueued.}
224        \label{fig:base}
225\end{figure}
226
227Figure~\ref{fig:base} shows the algorithm structure.
228This structure is similar to classic work-stealing except the sub-queues are placed in an array so \procs can access them in constant time.
229Sharding width can be adjusted based on contention.
230Note, as an optimization, the TS of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
231This organization keeps the highly accessed front TSs directly in the array.
232When a \proc attempts to dequeue a \at, it first picks a random remote sub-queue and compares its timestamp to the timestamps of its local sub-queue(s).
233The oldest waiting \at is dequeued to provide global fairness.
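A sketch of this layout, with illustrative type and field names, is shown below; only the fields relevant to timestamp placement are included.
\begin{cfa}
// Sub-queue layout sketch: the head's timestamp lives directly in the array
// element, and each queued thread carries the timestamp of the thread behind it.
struct Thread {
	struct Thread * next;
	uint64_t ts_next;      // enqueue timestamp of the *next* thread in line
	// ... rest of the thread descriptor
};

struct SubQueue {          // one array element, two per processor
	struct Thread * head, * tail;
	uint64_t ts_head;      // timestamp of the thread currently at the head
	// lock, etc. elided
};
\end{cfa}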
234
235However, this na\"ive implementation has performance problems.
236First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency, negating the attempt to help.
238These bursts can cause increased migrations and make this work-stealing approach slow down to the level of relaxed-FIFO.
239
240\begin{figure}
241        \centering
242        \input{base_avg.pstex_t}
243        \caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each sub-queue.}
244        \label{fig:base-ma}
245\end{figure}
246
247A simple solution to this problem is to use an exponential moving average\cite{wiki:ma} (MA) instead of a raw timestamp, as shown in Figure~\ref{fig:base-ma}.
248Note that this is more complex because the \at at the head of a sub-queue is still waiting, so its wait time has not ended.
249Therefore, the exponential moving average is an average of how long each dequeued \at has waited.
250To compare sub-queues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
This new wait time is averaged with the stored average.
252To further limit \glslink{atmig}{migrations}, a bias can be added to a local sub-queue, where a remote sub-queue is helped only if its moving average is more than $X$ times the local sub-queue's average.
253Tests for this approach indicate the choice of the weight for the moving average or the bias is not important, \ie weights and biases of similar \emph{magnitudes} have similar effects.
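A sketch of this helping decision follows, assuming the sub-queue sketch above is extended with an @ma@ field holding the stored average; the weight @W@, the bias @X@ and @now()@ are illustrative.
\begin{cfa}
enum { W = 16, X = 4 };    // illustrative average weight and local bias

// Should this processor help a remote sub-queue instead of using its local one?
bool should_help( struct SubQueue * remote, struct SubQueue * local ) {
	uint64_t t = now();
	// best-case wait of the thread currently at the head of each sub-queue,
	// folded into the average of how long previously dequeued threads waited
	uint64_t remote_avg = ( W * remote->ma + (t - remote->ts_head) ) / (W + 1);
	uint64_t local_avg  = ( W * local ->ma + (t - local ->ts_head) ) / (W + 1);
	// bias towards the local sub-queue: only help a remote sub-queue doing
	// at least X times worse
	return remote_avg > X * local_avg;
}
\end{cfa}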
254
255With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair and hence exhibits only short-term unfairness or no starvation at all.
The problem is that the constant polling, \ie reads, of remote sub-queues generally entails cache misses because the TSs are constantly being updated, \ie writes.
To make things worse, remote sub-queues that are very active, \ie \ats are frequently enqueued onto and dequeued from them, lead to higher chances that polling incurs a cache miss.
259Conversely, the active sub-queues do not benefit much from helping since starvation is already a non-issue.
This puts the algorithm in the awkward situation of paying a largely unnecessary cost.
261The good news is that this problem can be mitigated.
262
263\subsection{Redundant Timestamps}\label{relaxedtimes}
264The problem with polling remote sub-queues is that correctness is critical.
265There must be a consensus among \procs on which sub-queues hold which \ats, as the \ats are in constant motion.
266Furthermore, since timestamps are used for fairness, it is critical to have a consensus on which \at is the oldest.
267However, when deciding if a remote sub-queue is worth polling, correctness is less of a problem.
268Since the only requirement is that a sub-queue is eventually polled, some data staleness is acceptable.
269This leads to a situation where stale timestamps are only problematic in some cases.
270Furthermore, stale timestamps can be desirable since lower freshness requirements mean fewer cache invalidations.
271
272Figure~\ref{fig:base-ts2} shows a solution with a second array containing a copy of the timestamps and average.
273This copy is updated \emph{after} the sub-queue's critical sections using relaxed atomics.
274\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
275The result is that since there is no fencing, the writes can be buffered in the hardware and cause fewer cache invalidations.
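A sketch of this mechanism, using C11 relaxed atomics and illustrative names, is shown below.
\begin{cfa}
#include <stdint.h>
#include <stdatomic.h>

enum { NUM_SUBQUEUES = 64 };             // as in the earlier sketches

struct TSCopy {                          // read-mostly copy of a sub-queue's timing data
	_Atomic uint64_t ts_head;
	_Atomic uint64_t ma;
};
struct TSCopy copies[ NUM_SUBQUEUES ];   // the second, read-mostly array

// Called by the owner *after* the sub-queue critical section: republish the
// head timestamp and average with relaxed stores, so no fences are emitted
// and the writes can be buffered by the hardware.
void publish( unsigned idx, uint64_t ts_head, uint64_t ma ) {
	atomic_store_explicit( &copies[idx].ts_head, ts_head, memory_order_relaxed );
	atomic_store_explicit( &copies[idx].ma,      ma,      memory_order_relaxed );
}

// Pollers only ever read the copies; a missed update only makes the head look
// older than it is, which errs towards helping and is therefore acceptable.
uint64_t peek_ts( unsigned idx ) {
	return atomic_load_explicit( &copies[idx].ts_head, memory_order_relaxed );
}
\end{cfa}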
276
277\begin{figure}
278        \centering
279        \input{base_ts2.pstex_t}
280        \caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps.
	These timestamps are written to with relaxed atomics, so there is no ordering among concurrent memory accesses, leading to fewer cache invalidations.}
282        \label{fig:base-ts2}
283\end{figure}
284
285The correctness argument is somewhat subtle.
286The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
287Therefore, it is acceptable if stale data makes queues appear older than they are but appearing fresher can be a problem.
288For the timestamps, this means it is acceptable to miss writes to the timestamp since they make the head \at look older.
289For the moving average, as long as the operations are just atomic reads/writes, the average is guaranteed to yield a value that is between the oldest and newest values written.
290Therefore, this unprotected read of the timestamp and average satisfies the limited correctness that is required.
291
292With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on most machines.
293The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
For example, on machines with multiple CPU sockets, where each CPU contains multiple cores and hyperthreads, cache misses can be satisfied from the caches on the same (local) CPU or by a CPU on a different (remote) socket.
295Cache misses satisfied by a remote CPU have significantly higher latency than from the local CPU.
296However, these delays are not specific to systems with multiple CPUs.
297Depending on the cache structure, cache misses can have different latency on the same CPU, \eg the AMD EPYC 7662 CPUs used in Chapter~\ref{microbench}.
298
299\begin{figure}
300        \centering
301        \input{cache-share.pstex_t}
302        \caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A CPU with 4 cores, where caches L1 and L2 are private to each core, and the L3 cache is shared across all cores.}
303        \label{fig:cache-share}
304
305        \vspace{25pt}
306
307        \input{cache-noshare.pstex_t}
	\caption[CPU design with narrow L3 sharing]{CPU design with narrow L3 sharing \smallskip\newline A CPU with 4 cores, where caches L1 and L2 are private to each core, and the L3 cache is shared across a pair of cores.}
309        \label{fig:cache-noshare}
310\end{figure}
311
312Figures~\ref{fig:cache-share} and~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference.
313In Figure~\ref{fig:cache-share}, all cache misses are either private to a CPU or shared with another CPU.
314This means latency due to cache misses is fairly consistent.
315In contrast, in Figure~\ref{fig:cache-noshare} misses in the L2 cache can be satisfied by either instance of the L3 cache.
316However, the memory-access latency to the remote L3 is higher than the memory-access latency to the local L3.
317The impact of these different designs on this algorithm is that scheduling only scales well on architectures with a wide L3 cache, similar to Figure~\ref{fig:cache-share}, and less well on architectures with many narrower L3 cache instances, similar to Figure~\ref{fig:cache-noshare}.
318Hence, as the number of L3 instances grows, so too does the chance that the random helping causes significant cache latency.
319The solution is for the scheduler to be aware of the cache topology.
320
321\subsection{Per CPU Sharding}
322Building a scheduler that is cache aware poses two main challenges: discovering the cache topology and matching \procs to this cache structure.
323Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
324This work uses the cache topology information from Linux's @/sys/devices/system/cpu@ directory.
325This leaves the challenge of matching \procs to cache structure, or more precisely identifying which sub-queues of the ready queue are local to which subcomponents of the cache structure.
326Once a match is generated, the helping algorithm is changed to add bias so that \procs more often help sub-queues local to the same cache substructure.\footnote{
327Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}
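As an illustration, the following sketch reads which L3 cache instance a given CPU belongs to; on recent Linux kernels each @cache/indexN@ directory exposes an @id@ attribute, and on most x86 machines @index3@ corresponds to the L3 (a robust implementation would confirm this by reading the @level@ attribute). Error handling is elided.
\begin{cfa}
#include <stdio.h>

// Which L3 cache instance does this CPU belong to?
int l3_instance( int cpu ) {
	char path[128];
	int id = -1;
	snprintf( path, sizeof(path),
	          "/sys/devices/system/cpu/cpu%d/cache/index3/id", cpu );
	FILE * f = fopen( path, "r" );
	if( f ) { fscanf( f, "%d", &id ); fclose( f ); }
	return id;                           // -1 if the attribute is unavailable
}
\end{cfa}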
328
329The simplest approach for mapping sub-queues to cache structure is to statically tie sub-queues to CPUs.
330Instead of having each sub-queue local to a specific \proc, the system is initialized with sub-queues for each hardware hyperthread/core up front.
Then \procs dequeue and enqueue by first querying the id of the CPU they are executing on, to identify which sub-queues are the local ones.
332\Glspl{proc} can get the CPU id from @sched_getcpu@ or @librseq@.
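For example, a sketch of this CPU-based selection, assuming for illustration two statically allocated sub-queues per hardware thread, might look as follows.
\begin{cfa}
#define _GNU_SOURCE
#include <sched.h>

// Identify the local sub-queues from the CPU currently executing this processor.
unsigned local_subqueue( void ) {
	int cpu = sched_getcpu();      // hardware thread this code is running on
	return (unsigned)cpu * 2;      // index of the first of this CPU's two sub-queues
}
\end{cfa}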
333
334This approach solves the performance problems on systems with topologies with narrow L3 caches, similar to Figure \ref{fig:cache-noshare}.
335However, it can still cause some subtle fairness problems in systems with few \procs and many \glspl{hthrd}.
336In this case, the large number of sub-queues and the bias against sub-queues tied to different cache substructures make it unlikely that every sub-queue is picked.
To make things worse, the small number of \procs means that few helping attempts are made.
This combination of low selection and few helping attempts allows a \at to become stranded on a sub-queue for a long time until it gets randomly helped.
339On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can take multiple seconds for a \at to get dequeued from a remote queue.
340Therefore, a more dynamic match of sub-queues to cache instances is needed.
341
342\subsection{Topological Work Stealing}
343\label{s:TopologicalWorkStealing}
344The approach used in the \CFA scheduler is to have per-\proc sub-queues, but have an explicit data structure to track which cache substructure each sub-queue is tied to.
345This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
346A key element, however, is that, like the timestamps for helping, reading the cache instance mapping only needs to give the correct result \emph{often enough}.
347Therefore the algorithm can be built as follows: before enqueueing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
348Since sub-queues are tied to \procs, each \proc can then update the cache instance mapped to the local sub-queue(s).
To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes.
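A sketch of this update, building on the earlier sketches and again using illustrative names, is shown below.
\begin{cfa}
int cache_map[ NUM_SUBQUEUES ];    // cache instance currently recorded for each sub-queue

// Called before enqueuing or dequeuing: refresh the cache instance recorded
// for this processor's own sub-queue, writing only when the value changes so
// readers' cache lines are not needlessly invalidated.
void refresh_mapping( struct Proc * proc ) {
	int cpu  = sched_getcpu();             // current hardware thread
	int inst = l3_instance( cpu );         // its cache instance (see the sysfs sketch)
	if( cache_map[ proc->id ] != inst )    // avoid redundant writes
		cache_map[ proc->id ] = inst;
}
\end{cfa}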
350
351This scheduler is used in the remainder of the thesis for managing CPU execution, but additional scheduling is needed to handle long-term blocking and unblocking, such as I/O.
352