\chapter{Scheduling Core}\label{core}

Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state. For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers. In short, the system is neither overloaded nor underloaded.

It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state tend to be pervasive in all states.

\section{Design Goals}
As with most of the design decisions behind \CFA, an important goal is to match the expectation of the programmer according to their execution mental-model. To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental-model, the system also respects this model.

For threading, a simple and common execution mental-model is the ``Ideal multi-tasking CPU'':

\begin{displayquote}[Linux CFS\cit{https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt}]
	{[The]} ``Ideal multi-tasking CPU'' is a (non-existent  :-)) CPU that has 100\% physical power and which can run each task at precise equal speed, in parallel, each at [an equal fraction of the] speed.  For example: if there are 2 tasks running, then it runs each at 50\% physical power --- i.e., actually in parallel.
	\label{q:LinuxCFS}
\end{displayquote}

Applied to threads, this model states that every ready \gls{thrd} immediately runs in parallel with all other ready \glspl{thrd}. While a strict implementation of this model is not feasible, programmers still have expectations about scheduling that come from this model.

In general, the expectation at the center of this model is that ready \glspl{thrd} do not interfere with each other but simply share the hardware. This assumption makes it easier to reason about threading because ready \glspl{thrd} can be thought of in isolation and the effect of the scheduler can be virtually ignored. This expectation of \gls{thrd} independence means the scheduler is expected to offer two guarantees:
\begin{enumerate}
	\item A fairness guarantee: a \gls{thrd} that is ready to run is not prevented from running by another thread.
	\item A performance guarantee: a \gls{thrd} that wants to start or stop running is not prevented by other threads wanting to do the same.
\end{enumerate}

It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.

Similarly, the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show that the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall that the programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.

More precisely, the scheduler should be:
\begin{itemize}
	\item As fast as other schedulers that are less fair.
	\item Faster than other schedulers that have equal or better fairness.
\end{itemize}

\subsection{Fairness Goals}
For this work, fairness is considered as having two strongly related requirements: true starvation freedom and ``fast'' load balancing.

\paragraph{True starvation freedom} is more easily defined: as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually.
In any running system, \procs can stop dequeuing \ats if they start running a \at that simply never parks.
Traditional work-stealing schedulers do not have starvation freedom in these cases.
This requirement raises the question: what about preemption?
Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.

\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
For interactive applications that need to run at 60, 90, or 120 frames per second, \ats having to wait several milliseconds to run are effectively starved.
Therefore, load balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
With that said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats, and the general load of the system.

\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
An important performance factor in modern architectures is cache locality. Waiting for data from lower levels of the memory hierarchy, or for data not present in the cache at all, can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.

For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as a consequence cache lines, to a \gls{hthrd} that is currently available.

However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally; Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed what the execution mental-model allows.

\begin{figure}
	\centering
	\input{fairness.pstex_t}
	\vspace*{-10pt}
	\caption[Fairness vs Locality graph]{Rule of thumb Fairness vs Locality graph \smallskip\newline The importance of Fairness and Locality while a ready \gls{thrd} awaits running: as the time the ready \gls{thrd} waits, Ready Time, increases, the chance that its data is still in cache, Locality, decreases. At the same time, the need for fairness increases since other \glspl{thrd} may have the chance to run many times, breaking the fairness model. Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
	\label{fig:fair}
\end{figure}

\subsection{Performance Challenges}\label{pref:challenge}
While there exist a multitude of potential scheduling algorithms, they generally all have to contend with the same performance challenges. Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.

\subsubsection{Scalability}
The most basic performance challenge of a scheduler is scalability.
Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structure of the scheduler can lead to much smaller improvements.
While the ready-queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
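
As an illustration, the sketch below shows one common way such an auxiliary feature can itself be sharded: a per-\proc counter, padded to a cache line, that is only summed on the comparatively rare reads. This is plain C11 rather than the actual \CFA implementation, and the names and sizes are illustrative only.
\begin{verbatim}
#include <stdatomic.h>

#define NPROCS     64
#define CACHE_LINE 64

// One counter per processor, padded to avoid false sharing.
struct padded_count {
        _Atomic(long) count;
        char pad[CACHE_LINE - sizeof(_Atomic(long))];
};
static struct padded_count ready_count[NPROCS];

// Called by processor 'id' on enqueue/dequeue: touches only its own line.
static inline void ready_inc(unsigned id) {
        atomic_fetch_add_explicit(&ready_count[id].count, 1, memory_order_relaxed);
}
static inline void ready_dec(unsigned id) {
        atomic_fetch_sub_explicit(&ready_count[id].count, 1, memory_order_relaxed);
}

// Rare, approximate read: sums all the shards.
static long ready_total(void) {
        long sum = 0;
        for (unsigned i = 0; i < NPROCS; i++)
                sum += atomic_load_explicit(&ready_count[i].count, memory_order_relaxed);
        return sum;
}
\end{verbatim}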

\subsubsection{Migration Cost}
Another important source of latency in scheduling is migration.
An \at is said to have migrated if it is executed by two different \procs consecutively, which is the process discussed in Section~\ref{fairnessvlocal}.
Migrations can have many different causes, but in certain programs it can be all but impossible to limit them.
Chapter~\ref{microbench}, for example, has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating more often than not.
Because of this, it is important to design the internal data structures of the scheduler to limit the latency penalty from migrations.


\section{Inspirations}
In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is that adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue: create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.

Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions that served as inspiration for the scheduler in this thesis.

\subsection{Work-Stealing}

As mentioned in \ref{existing:workstealing}, a popular pattern for sharding the ready-queue is work-stealing.
In this pattern, each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
The interesting aspect of work-stealing happens in the easier scheduling cases, \ie enough work for everyone but no more, and no load balancing needed.
In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
On the other hand, work-stealing schedulers only attempt to do load balancing when a \gls{proc} runs out of work.
This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.
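
The overall pattern can be summarized by the following sketch of a \proc's dequeue loop; it is plain C rather than \CFA, and the helper routines and the \texttt{nprocs} variable are hypothetical placeholders for whatever a concrete runtime provides.
\begin{verbatim}
#include <stdlib.h>

struct thread;                                   // a ready user-level thread
struct thread * local_pop ( unsigned self   );   // pop from this proc's own queue
struct thread * try_steal ( unsigned victim );   // pop from a victim's queue
extern unsigned nprocs;                          // number of processors

struct thread * next_thread( unsigned self ) {
        // Fast path: the local queue, giving locality and no contention.
        struct thread * t = local_pop( self );
        // Slow path: only once the local queue is empty does this processor
        // look at anyone else's queue.
        while ( t == NULL ) {
                unsigned victim = rand() % nprocs;
                if ( victim != self ) t = try_steal( victim );
        }
        return t;
}
\end{verbatim}
Note that the stealing loop is only entered once the local queue is empty, which is precisely why unfair loads that never empty a queue are never rebalanced.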


Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load balancing if the need arises.

\subsection{Relaxed-FIFO}
An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgoes any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick.
\Glspl{proc} choose ready-queues at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeuing the oldest element.
All subqueues are protected by TryLocks and \procs simply pick a different subqueue if they fail to acquire the TryLock.
The result is a queue that has both decent scalability and sufficient fairness.
The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
This contrasts with work-stealing, where \emph{any} \gls{proc} that is busy for an extended period of time causes all the elements on its local queue to wait, unless another \gls{proc} runs out of work.
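
The dequeue side of this scheme can be sketched as follows; the subqueue operations are hypothetical helpers, and the convention that an empty subqueue reports the maximum timestamp is an assumption made for the sketch.
\begin{verbatim}
#include <stdlib.h>
#include <stdint.h>

struct thread;
struct subqueue;
int             trylock       ( struct subqueue * );
void            unlock        ( struct subqueue * );
uint64_t        head_timestamp( struct subqueue * );  // UINT64_MAX if empty
struct thread * pop_head      ( struct subqueue * );  // NULL if empty
extern struct subqueue * queues[];
extern unsigned nqueues;

struct thread * relaxed_fifo_pop( void ) {
        for ( ;; ) {
                // Pick two subqueues at random and prefer the older head.
                struct subqueue * a = queues[ rand() % nqueues ];
                struct subqueue * b = queues[ rand() % nqueues ];
                struct subqueue * pick =
                        head_timestamp( a ) <= head_timestamp( b ) ? a : b;
                // Subqueues are protected by try-locks: on failure, simply
                // retry with a different random pick.
                if ( ! trylock( pick ) ) continue;
                struct thread * t = pop_head( pick );
                unlock( pick );
                if ( t ) return t;
                // all-empty pool: a real scheduler would eventually idle here
        }
}
\end{verbatim}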

An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been on the queue.
However, another major aspect is that \glspl{proc} eagerly search for these older elements instead of focusing on specific queues.

While the fairness of this scheme is good, it does suffer in terms of performance.
It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.

\section{Relaxed-FIFO++}
Since it has inherent fairness qualities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears to be a good candidate to form the basis of a scheduler.
The most obvious problem is workloads where the number of \ats is barely greater than the number of \procs.
In these situations, the wide sharding means most of the sub-queues from which the relaxed queue is formed will be empty.
The consequence is that when a dequeue operation attempts to pick a sub-queue at random, it is likely to pick an empty sub-queue and have to pick again.
This problem can repeat an unbounded number of times.

As this is the most obvious challenge, it is worth addressing first.
The obvious solution is to supplement each subqueue with some sharded data structure that keeps track of which subqueues are empty.
This data structure can take many forms, for example a simple bitmask or a binary tree that tracks which branches are empty.
Following a binary tree on each pick has fairly good Big-O complexity, and many modern architectures have powerful bitmask-manipulation instructions.
However, precisely tracking which sub-queues are empty is actually fundamentally problematic.
The reason is that the subqueues are already a form of sharding and the sharding width has presumably already been chosen to avoid contention.
However, tracking which subqueue is empty is only useful if the tracking mechanism uses denser sharding than the sub-queues, in which case it invariably creates a new source of contention.
But if the tracking mechanism is not denser than the sub-queues, then it generally does not provide useful information because reading this new data structure risks being as costly as simply picking a sub-queue at random.
Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.

The exception to this rule is using local tracking.
If each \proc keeps track locally of which sub-queue is empty, then this can be done with a very dense data structure without introducing a new source of contention.
The consequence of local tracking, however, is that the information is not complete.
Each \proc is only aware of the last state it saw for each subqueue but does not have any information about freshness.
Even on systems with low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
This is due in part to the cost of maintaining this information and its poor quality.

However, using a very low-cost approach to local tracking may actually be beneficial.
If the local tracking is no more costly than the random pick, then \emph{any} improvement to the success rate, however low it is, leads to a performance benefit.
This leads to the following approach:

\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
Luckily, bidirectional \glsxtrshort{prng} algorithms do exist; for example, some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
This particular \glsxtrshort{prng} can be used as follows:

Each \proc maintains two \glsxtrshort{prng} states, which are referred to as \texttt{F} and \texttt{B}.

When a \proc attempts to dequeue a \at, it picks the subqueue by running \texttt{B} backwards.
When a \proc attempts to enqueue a \at, it runs \texttt{F} forward to pick the subqueue to enqueue to.
If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.

The result is that each \proc tends to dequeue \ats that it has itself enqueued.
When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement in locality in many cases.
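
The mechanism can be sketched as follows; the LCG constants are the well-known 64-bit multiplier and increment from Knuth's MMIX and are used purely as an example, not necessarily the generator used in \CFA.
\begin{verbatim}
#include <stdint.h>

#define LCG_A 6364136223846793005ULL   // multiplier (odd, so invertible mod 2^64)
#define LCG_C 1442695040888963407ULL   // increment

static inline uint64_t lcg_fwd( uint64_t x ) { return x * LCG_A + LCG_C; }

static inline uint64_t lcg_bck( uint64_t x ) {
        // Modular inverse of LCG_A mod 2^64 via Newton iteration.
        uint64_t inv = LCG_A;
        for ( int i = 0; i < 5; i++ ) inv *= 2 - LCG_A * inv;
        return ( x - LCG_C ) * inv;
}

struct proc_rng { uint64_t F, B; };    // per-processor forward/backward states

// Enqueue: advance F to pick the subqueue; on success, remember it in B.
static unsigned pick_enqueue( struct proc_rng * r, unsigned nqueues ) {
        r->F = lcg_fwd( r->F );
        return r->F % nqueues;
}
static void enqueue_succeeded( struct proc_rng * r ) { r->B = r->F; }

// Dequeue: walk B backwards, revisiting the subqueues this processor last
// enqueued to, most recent first.
static unsigned pick_dequeue( struct proc_rng * r, unsigned nqueues ) {
        unsigned pick = r->B % nqueues;
        r->B = lcg_bck( r->B );
        return pick;
}
\end{verbatim}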

However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become very frequent.
Therefore, since the approach of modifying the relaxed-FIFO algorithm to behave more like work stealing does not seem to pan out, the alternative is to do it the other way around.

\section{Work Stealing++}
To add stronger fairness guarantees to work-stealing, a few changes are needed.
First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
Therefore, the work-stealing algorithm must be prepended with some monitoring.
Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to ensure remote queues are not being neglected.
To make this possible, \procs must be able to determine which \at has been on the ready-queue the longest.
This is the second aspect that must be added.
The relaxed-FIFO approach uses timestamps for each \at, and this is also what is done here.
170
171\begin{figure}
172        \centering
173        \input{base.pstex_t}
174        \caption[Base \CFA design]{Base \CFA design \smallskip\newline A Pool of sub-ready queues offers the sharding, two per \glspl{proc}. Each \gls{proc} have local subqueues, however \glspl{proc} can access any of the sub-queues. Each \at is timestamped when enqueued.}
175        \label{fig:base}
176\end{figure}
177The algorithm is structure as shown in Figure~\ref{fig:base}.
178This is very similar to classic workstealing except the local queues are placed in an array so \procs can access eachother's queue in constant time.
179Sharding width can be adjusted based on need.
180When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeue from the remote queue if needed.
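
In plain C, a single dequeue attempt can be sketched as follows, where the helper names are illustrative declarations rather than the actual \CFA symbols.
\begin{verbatim}
#include <stdlib.h>
#include <stdint.h>

struct thread;
struct subqueue;
uint64_t        head_timestamp( struct subqueue * );  // UINT64_MAX if empty
struct thread * locked_pop    ( struct subqueue * );  // trylock + pop, NULL on failure
extern struct subqueue * queues[];
extern unsigned nqueues;

struct thread * dequeue( unsigned self ) {
        struct subqueue * local  = queues[ self ];
        struct subqueue * remote = queues[ rand() % nqueues ];
        // Help the remote queue only if its head has waited longer than the
        // head of the local queue; otherwise favour locality.
        if ( head_timestamp( remote ) < head_timestamp( local ) ) {
                struct thread * t = locked_pop( remote );
                if ( t ) return t;
        }
        return locked_pop( local );
}
\end{verbatim}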

Implemented as naively as stated above, this approach has some obvious performance problems.
First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
The effect of these bursts would be to cause more migrations than needed and make this work-stealing approach slow down to match the relaxed-FIFO approach.

\begin{figure}
	\centering
	\input{base_avg.pstex_t}
	\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each subqueue.}
	\label{fig:base-ma}
\end{figure}

A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead of the raw timestamps, as shown in Figure~\ref{fig:base-ma}.
Note that this is slightly more complex than it sounds because, since the \at at the head of a subqueue is still waiting, its wait time has not ended.
Therefore, the exponential moving average is actually an exponential moving average of how long each already-dequeued \at has waited.
To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
This new wait time is averaged with the stored average.
To further limit the amount of unnecessary migration, a bias can be added in favour of the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
None of the experimentation that I have run with these schedulers seems to indicate that the choice of the weight for the moving average or the choice of bias is particularly important.
Weights and biases of similar \emph{magnitudes} have similar effects.
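
The comparison can be sketched as follows, where the weight and bias values are illustrative placeholders rather than tuned constants.
\begin{verbatim}
#include <stdint.h>

#define EMA_WEIGHT 0.125   // weight given to the newest sample
#define HELP_BIAS  2.0     // remote must look this many times worse to be helped

struct subqueue_stats {
        uint64_t head_ts;    // enqueue timestamp of the thread currently at the head
        double   avg_wait;   // exponential moving average of completed waits
};

// Called on dequeue: fold the completed wait time into the average.
static void update_average( struct subqueue_stats * s, uint64_t now, uint64_t enq_ts ) {
        double wait = (double)( now - enq_ts );
        s->avg_wait = EMA_WEIGHT * wait + ( 1.0 - EMA_WEIGHT ) * s->avg_wait;
}

// The head thread is still waiting, so its wait so far is only a best case;
// average that best case with the stored average before comparing.
static double current_average( const struct subqueue_stats * s, uint64_t now ) {
        double best_case = (double)( now - s->head_ts );
        return EMA_WEIGHT * best_case + ( 1.0 - EMA_WEIGHT ) * s->avg_wait;
}

// Help the remote subqueue only if it looks significantly worse off.
static int should_help( const struct subqueue_stats * local,
                        const struct subqueue_stats * remote, uint64_t now ) {
        return current_average( remote, now ) > HELP_BIAS * current_average( local, now );
}
\end{verbatim}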

With these additions to work-stealing, scheduling can be made as fair as the relaxed-FIFO approach while avoiding the majority of unnecessary migrations.
Unfortunately, the performance of this approach does suffer in cases with no risk of starvation.
The problem is that the constant polling of remote subqueues generally entails a cache miss.
To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued onto and dequeued from it, the higher the chance that polling will incur a cache miss.
Conversely, the active subqueues do not benefit much from helping since starvation is already a non-issue.
This puts this algorithm in an awkward situation where it is paying a cost, but the cost itself suggests the operation was unnecessary.
The good news is that this problem can be mitigated.

\subsection{Redundant Timestamps}
The problem with polling remote queues stems from a tension between the consistency requirements on the subqueues.
For the subqueues, correctness is critical. There must be a consensus among \procs on which subqueues hold which \ats.
Since the timestamps are used for fairness, it is also important to have consensus on which \at is the oldest.
However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
Since the only need is that a subqueue will eventually be polled, some data staleness can be acceptable.
This leads to a tension where stale timestamps are only problematic in some cases.
Furthermore, stale timestamps can be somewhat desirable since lower freshness requirements mean less pressure on the cache-coherence protocol.


\begin{figure}
	\centering
	% \input{base_ts2.pstex_t}
	\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written to with relaxed atomics, without fencing, leading to fewer cache invalidations.}
	\label{fig:base-ts2}
\end{figure}
A solution to this is to create a second array containing a copy of the timestamps and averages.
This copy is updated \emph{after} the subqueue's critical sections using relaxed atomics.
\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
The result is that since there is no fencing, the writes can be buffered and cause fewer cache invalidations.
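
A minimal sketch of this redundant copy, using C11 relaxed atomics, is shown below; the layout and names are illustrative rather than the actual \CFA data structures.
\begin{verbatim}
#include <stdatomic.h>
#include <stdint.h>

// One entry per subqueue, stored in a separate array from the subqueues.
struct ts_copy {
        _Atomic(uint64_t) head_ts;    // copy of the head timestamp
        _Atomic(uint64_t) avg_wait;   // copy of the moving average
};
extern struct ts_copy copies[];

// Writer: after leaving the subqueue's critical section, update the copy
// with relaxed stores; with no fence, the writes can sit in the store
// buffer and cause fewer invalidations.
static void publish( unsigned q, uint64_t head_ts, uint64_t avg_wait ) {
        atomic_store_explicit( &copies[q].head_ts,  head_ts,  memory_order_relaxed );
        atomic_store_explicit( &copies[q].avg_wait, avg_wait, memory_order_relaxed );
}

// Reader: a possibly stale view is acceptable, since it only decides
// whether the remote subqueue is worth locking and inspecting for real.
static int looks_older( unsigned remote, unsigned local ) {
        uint64_t r = atomic_load_explicit( &copies[remote].head_ts, memory_order_relaxed );
        uint64_t l = atomic_load_explicit( &copies[local ].head_ts, memory_order_relaxed );
        return r < l;
}
\end{verbatim}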

The correctness argument here is somewhat subtle.
The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
Therefore, it is acceptable if stale data makes queues appear older than they really are, but not fresher.
For the timestamps, this means that missed writes to the timestamp are acceptable since they make the head \at look older.
For the moving average, as long as the operations are RW-safe, the average is guaranteed to yield a value that is between the oldest and newest values written.
Therefore, this unprotected read of the timestamp and average satisfies the limited correctness that is required.

\begin{figure}
	\centering
	\input{cache-share.pstex_t}
	\caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A very simple CPU with 4 \glspl{hthrd}. L1 and L2 are private to each \gls{hthrd} but the L3 is shared across the entire CPU.}
	\label{fig:cache-share}
\end{figure}

\begin{figure}
	\centering
	\input{cache-noshare.pstex_t}
	\caption[CPU design with a narrower L3 sharing]{CPU design with a narrower L3 sharing \smallskip\newline A different CPU design, still with 4 \glspl{hthrd}. L1 and L2 are still private to each \gls{hthrd}, but the L3 is shared across only part of the CPU, resulting in two distinct L3 instances.}
	\label{fig:cache-noshare}
\end{figure}

With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on some machines.
The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
For example, on machines where the motherboard holds multiple CPUs, cache misses can be satisfied from a cache that belongs to the CPU that missed, the \emph{local} CPU, or by a different CPU, a \emph{remote} one.
Cache misses that are satisfied by a remote CPU have higher latency than those satisfied by the local CPU.
However, this is not specific to systems with multiple CPUs.
Depending on the cache structure, cache misses can have different latencies on the same CPU.
The AMD EPYC 7662 CPU described in Chapter~\ref{microbench} is an example of this.
Figure~\ref{fig:cache-share} and Figure~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference.
In Figure~\ref{fig:cache-share}, all cache instances are either private to a \gls{hthrd} or shared across the entire system, which means the latency due to cache misses is likely fairly consistent.
By comparison, in Figure~\ref{fig:cache-noshare}, misses in the L2 cache can be satisfied by a hit in either instance of the L3.
However, the memory access latency to the remote L3 instance is notably higher than the memory access latency to the local L3.
The impact of these different designs on this algorithm is that scheduling scales very well on architectures similar to Figure~\ref{fig:cache-share}, but has notably worse scaling on architectures with many narrower L3 instances.
This is simply because as the number of L3 instances grows, so too does the chance that random helping will cause significant latency.
The solution is to have the scheduler be aware of the cache topology.

\subsection{Per CPU Sharding}
Building a scheduler that is aware of cache topology poses two main challenges: discovering the cache topology and matching \procs to cache instances.
Sadly, there is no standard portable way to discover cache topology in C.
Therefore, while this is a significant portability challenge, it is outside the scope of this thesis to design a cross-platform cache-discovery mechanism.
The rest of this work assumes discovering the cache topology based on Linux's \texttt{/sys/devices/system/cpu} directory.
This leaves the challenge of matching \procs to cache instances, or more precisely, identifying which subqueues of the ready queue are local to which cache instance.
Once this matching is available, the helping algorithm can be changed to add bias so that \procs more often help subqueues local to the same cache instance
\footnote{Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}.
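
For reference, the kind of discovery assumed here can be sketched as follows; this sketch relies on the \texttt{id} attribute exposed under \texttt{/sys} by recent Linux kernels and is not the actual \CFA startup code.
\begin{verbatim}
#include <stdio.h>

// Return the id of the highest-level cache instance (typically the L3)
// that 'cpu' belongs to, or -1 on error.
static int cache_instance_of( int cpu ) {
        int id = -1;
        for ( int index = 0; ; index++ ) {       // index0, index1, ... walk L1d, L1i, L2, L3
                char path[128];
                snprintf( path, sizeof(path),
                          "/sys/devices/system/cpu/cpu%d/cache/index%d/id",
                          cpu, index );
                FILE * f = fopen( path, "r" );
                if ( ! f ) break;                // no more cache levels
                int this_id;
                if ( fscanf( f, "%d", &this_id ) == 1 ) id = this_id;
                fclose( f );
        }
        return id;
}
\end{verbatim}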

The obvious approach to mapping cache instances to subqueues is to statically tie subqueues to CPUs.
Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each \gls{hthrd} up front.
Then \procs dequeue and enqueue by first asking which CPU id they are local to, in order to identify which subqueues are the local ones.
\Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.

This approach solves the performance problems on systems with topologies similar to Figure~\ref{fig:cache-noshare}.
However, it actually causes some subtle fairness problems in some systems, specifically systems with few \procs and many \glspl{hthrd}.
In these cases, the large number of subqueues and the bias against subqueues tied to different cache instances make it very unlikely that any single subqueue is picked.
To make things worse, the small number of \procs means that few helping attempts are made.
This combination of few attempts and low chances means a \at stranded on a subqueue that is not actively dequeued from may wait a very long time before it gets randomly helped.
On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can actually take multiple seconds for a \at to get dequeued from a remote queue.
Therefore, a more dynamic matching of subqueues to cache instances is needed.

\subsection{Topological Work Stealing}
The approach used in the \CFA scheduler is to keep per-\proc subqueues, but to have an explicit data structure that tracks which cache instance each subqueue is tied to.
This requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
A key element, however, is that, like the timestamps for helping, reading the cache-instance mapping only needs to give the correct result \emph{often enough}.
Therefore, the algorithm can be built as follows: before enqueuing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
Since subqueues are tied to \procs, each \proc can then update the cache instance mapped to the local subqueue(s).
To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes.
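
A sketch of this per-\proc update is shown below; \texttt{cache\_instance\_of} stands for the topology lookup built at startup and the map layout is illustrative only.
\begin{verbatim}
#define _GNU_SOURCE
#include <sched.h>          // sched_getcpu
#include <stdatomic.h>

extern int cache_instance_of( int cpu );   // e.g. built from /sys at startup
extern _Atomic(int) queue_to_cache[];      // one entry per subqueue

// Called by a processor before enqueuing or dequeuing on its own subqueue.
static void refresh_mapping( unsigned my_queue ) {
        int cpu   = sched_getcpu();
        int cache = cache_instance_of( cpu );
        // Only write if the mapping actually changed, so the cache line
        // holding the map is not invalidated on every scheduler operation.
        if ( atomic_load_explicit( &queue_to_cache[my_queue],
                                   memory_order_relaxed ) != cache )
                atomic_store_explicit( &queue_to_cache[my_queue], cache,
                                       memory_order_relaxed );
}
\end{verbatim}
The helping algorithm then consults this map to bias towards subqueues whose recorded cache instance matches its own.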