Timestamp:
Nov 24, 2022, 3:41:44 PM (17 months ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master
Children:
dacd8e6e
Parents:
82a90d4
Message:

Last corrections to my thesis... hopefully

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/core.tex

    r82a90d4 rddcaff6  
    2424In general, the expectation at the centre of this model is that ready \ats do not interfere with each other but simply share the hardware.
    2525This assumption makes it easier to reason about threading because ready \ats can be thought of in isolation and the effect of the scheduler can be virtually ignored.
    26 This expectation of \at independence means the scheduler is expected to offer two guarantees:
     26This expectation of \at independence means the scheduler is expected to offer two features:
    2727\begin{enumerate}
    28         \item A fairness guarantee: a \at that is ready to run is not prevented by another thread.
    29         \item A performance guarantee: a \at that wants to start or stop running is not prevented by other threads wanting to do the same.
     28        \item A fairness guarantee: a \at that is ready to run is not indefinitely prevented from running by another thread, \ie, starvation freedom. This is discussed further in the next section.
     29        \item A performance goal: given a \at that wants to start running, other threads wanting to do the same do not interfere with it.
    3030\end{enumerate}
    3131
    32 It is important to note that these guarantees are expected only up to a point.
    33 \Glspl{at} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources.
    34 Therefore, the guarantee is considered respected if a \at gets access to a \emph{fair share} of the hardware resources, even if that share is very small.
    35 
    36 Similar to the performance guarantee, the lack of interference among threads is only relevant up to a point.
    37 Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention.
     32The performance goal, the lack of interference among threads, is only desired up to a point.
     33Ideally, the cost of running and blocking should be constant regardless of contention, but the goal is considered satisfied if the cost is not \emph{too high} with or without contention.
    3834What constitutes an acceptable cost is obviously highly variable.
    3935For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages.
    4036This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models.
    4137Recall the programmer's expectation is that the impact of the scheduler can be ignored.
    42 Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
     38Therefore, if the cost of scheduling is competitive with other popular languages, the goal is considered satisfied.
    4339More precisely the scheduler should be:
    4440\begin{itemize}
    45         \item As fast as other schedulers that are less fair.
    46         \item Faster than other schedulers that have equal or better fairness.
     41        \item As fast as other schedulers without any fairness guarantee.
     42        \item Faster than other schedulers that have equal or stronger fairness guarantees.
    4743\end{itemize}
    4844
    4945\subsection{Fairness Goals}
    50 For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.
    51 
    52 \paragraph{True starvation freedom} means as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually, \ie, eventual progress.
     46For this work, fairness is considered to have two strongly related requirements:
     47
     48\paragraph{Starvation freedom} means that, as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually, \ie, eventual progress.
     49Starvation freedom can be bounded or unbounded.
     50In the bounded case, all \ats should be able to run within a fixed bound, relative to their own enqueue.
     51In contrast, unbounded starvation freedom only requires that a \at eventually runs.
     52The \CFA scheduler aims to guarantee unbounded starvation freedom.
    5353In any running system, a \proc can stop dequeuing \ats if it starts running a \at that never blocks.
    54 Without preemption, traditional work-stealing schedulers do not have starvation freedom in this case.
    55 Now, this requirement begs the question, what about preemption?
     54Without preemption, traditional work-stealing schedulers do not have starvation freedom, bounded or unbounded.
     55Now, this requirement raises the question, what about preemption?
    5656Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.
    5757
    58 \paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
    59 For interactive applications that need to run at 60, 90 or 120 frames per second, \ats having to wait for several milliseconds to run are effectively starved.
    60 Therefore load-balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
    61 With that said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats and the general \gls{load} of the system.
     58\paragraph{Fast load balancing} means that, while eventual progress is guaranteed, the timescale on which this progress happens also matters.
     59Indeed, while a scheduler with bounded starvation freedom is beyond the scope of this work, offering a good expected bound in the mathematical sense~\cite{wiki:expected} is desirable.
     60The expected bound on starvation freedom should be tighter than what preemption normally allows.
     61For interactive applications that need to run at 60, 90 or 120 frames per second, \ats having to wait milliseconds to run are effectively starved.
     62Therefore load-balancing should be done at a faster pace: one that is expected to detect starvation at the microsecond scale.
    6263
    6364\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
     
    6869
    6970For a scheduler, having good locality, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness.
    70 Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \at, and as consequence cache lines, to a \gls{hthrd} that is currently available.
    71 Note that this section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how scheduling affects the locality of the application's data.
     71Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \at, and, as a consequence, its cache lines, to a \gls{hthrd} that is currently available.
     72Note that this section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler, versus \emph{external locality}, \ie, how scheduling affects the locality of the application's data.
    7273External locality is a much more complicated subject and is discussed in the next section.
    7374
    74 However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally.
    75 Figure~\ref{fig:fair} shows a visual representation of this behaviour.
    76 As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental model.
     75However, I claim that in practice it is possible to strike a balance between fairness and performance because these requirements do not necessarily overlap temporally.
     76Figure~\ref{fig:fair} shows a visual representation of this effect.
     77As mentioned, some unfairness is acceptable; for example, once the bounded starvation guarantee is met, additional fairness will not satisfy it \emph{more}.
     78Conversely, once a \at's data is evicted from cache, its locality cannot worsen.
     79Therefore it is desirable to have an algorithm that prioritizes cache locality as long as the fairness guarantee is also satisfied.
    7780
    7881\begin{figure}
     
    8891\subsection{Performance Challenges}\label{pref:challenge}
    8992While there exists a multitude of potential scheduling algorithms, they generally have to contend with the same performance challenges.
    90 Since these challenges are recurring themes in the design of a scheduler it is relevant to describe the central ones here before looking at the design.
     93Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe them here before looking at the scheduler's design.
     94
     95\subsubsection{Latency}
     96The most basic performance metric of a scheduler is scheduling latency.
     97This measures how long it takes for a \at to run once scheduled, including the cost of scheduling itself.
     98This measure includes both the sequential cost of the operation itself and its scalability.
    9199
    92100\subsubsection{Scalability}
    93 The most basic performance challenge of a scheduler is scalability.
    94 Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
     101Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats relative to the available parallelism.
    95102One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structure of the scheduler can diminish the improvements.
    96103While the ready queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
     104In Chapter~\ref{microbench}, scalability is measured as $\# procs \times \frac{ns}{ops}$, \ie, the number of \procs times the total time over the total number of operations.
     105Since the total number of operations should scale with the number of \procs, this gives a measure of how much each additional \proc affects the other \procs.
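For example, with hypothetical numbers, if 16 \procs complete a combined $10^{8}$ operations in one second, the metric is $16 \times \frac{10^{9} ns}{10^{8} ops} = 160$; with perfect scalability, doubling the \procs also doubles the operations completed, leaving the metric unchanged, while any contention shows up as an increase.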
    97106
    98107\subsubsection{Migration Cost}
     
    107116In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance.
    108117The problem is a single point of contention when adding/removing \ats.
    109 As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}.
    110 The solution to this problem is to shard the ready queue: create multiple \emph{sub-queues} forming the logical ready-queue and the sub-queues are accessed by multiple \glspl{hthrd} without interfering.
    111 
    112 Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as the inspiration scheduler in this thesis.
     118As is shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}.
     119The solution to this problem is to shard the ready queue: create multiple \emph{sub-queues} forming the logical ready-queue.
     120The sub-queues are accessed by multiple \glspl{hthrd} without the need for communication.
     121
     122Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as the inspiration for the scheduler in this thesis.
    113123
    114124\subsection{Work-Stealing}
     
    116126As mentioned in \ref{existing:workstealing}, a popular sharding approach for the ready queue is work-stealing.
    117127In this approach, each \gls{proc} has its own local sub-queue and \glspl{proc} only access each other's sub-queue if they run out of work on their local ready-queue.
    118 The interesting aspect of work stealing happens in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
    119 In this case, work stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
    120 On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
     128The interesting aspect of work stealing manifests itself in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
     129In this case, work stealing comes close to optimal scheduling latency: it can achieve perfect locality and have no contention.
     130On the other hand, work-stealing only attempts to do load-balancing when a \gls{proc} runs out of work.
    121131This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
    122 Chapter~\ref{microbench} shows that, in pathological cases, work stealing can lead to indefinite starvation.
    123 
    124 Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, but load balance proactively when the need arises.
     132Chapter~\ref{microbench} shows that, in pathological cases, work stealing can lead to unbounded starvation.
     133
     134Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, \ie, avoid migrations in well balanced workloads, but load balance proactively when the need arises.
    125135
    126136\subsection{Relaxed-FIFO}
    127 A different scheduling approach is to create a ``relaxed-FIFO'' queue, as in \cite{alistarh2018relaxed}.
     137A different scheduling approach is the ``relaxed-FIFO'' queue, as in \cite{alistarh2018relaxed}.
    128138This approach forgoes any ownership between \gls{proc} and sub-queue, and simply creates a pool of sub-queues from which \glspl{proc} pick.
    129139Scheduling is performed as follows:
     
    134144Timestamps are added to each element of a sub-queue.
    135145\item
    136 A \gls{proc} randomly tests sub-queues until it has acquired one or two queues.
     146A \gls{proc} randomly tests sub-queues until it has acquired one or two queues, referred to as \newterm{randomly picking} or \newterm{randomly helping}.
    137147\item
    138148If two queues are acquired, the older of the two \ats is dequeued from the front of the acquired queues.
     
    148158However, \glspl{proc} eagerly search for these older elements instead of focusing on specific queues, which negatively affects locality.
    149159
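To make the per-dequeue work concrete, the following is a minimal C sketch of the relaxed-FIFO dequeue path.
It is only an illustration, not the actual implementation: the sub-queue type and the helper functions are hypothetical, and this version always acquires two sub-queues.
\begin{verbatim}
// Illustrative sketch of a relaxed-FIFO dequeue; names are hypothetical.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct subqueue;                                  // one sharded sub-queue
extern struct subqueue queues[];                  // pool of sub-queues
extern size_t nqueues;                            // size of the pool

extern bool     queue_trylock ( struct subqueue * );
extern void     queue_unlock  ( struct subqueue * );
extern bool     queue_empty   ( struct subqueue * );
extern uint64_t queue_front_ts( struct subqueue * );  // timestamp of the front element
extern void *   queue_pop     ( struct subqueue * );

void * relaxed_fifo_dequeue( void ) {
    struct subqueue * a, * b;
    // Randomly test sub-queues until two distinct ones are acquired.
    do { a = &queues[ rand() % nqueues ]; } while( !queue_trylock( a ) );
    do { b = &queues[ rand() % nqueues ]; } while( b == a || !queue_trylock( b ) );

    // Pick the sub-queue whose front element is older; empty queues lose.
    struct subqueue * pick;
    if( queue_empty( a ) )      pick = b;
    else if( queue_empty( b ) ) pick = a;
    else pick = queue_front_ts( a ) <= queue_front_ts( b ) ? a : b;

    void * at = queue_empty( pick ) ? NULL : queue_pop( pick );
    queue_unlock( a );
    queue_unlock( b );
    return at;                                    // NULL: no ready thread found this attempt
}
\end{verbatim}
Even this simplified version shows where the cost goes when most sub-queues are empty: the random probing and locking is paid on every attempt, whether or not a \at is found.
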
    150 While this scheme has good fairness, its performance suffers.
    151 It requires wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues is difficult when there are few ready \ats.
     160While this scheme has good fairness, its performance can be improved.
     161Wide sharding is generally desired, \eg at least 4 queues per \proc, and randomly picking non-empty queues is difficult when there are few ready \ats.
     162The next sections describe improvements I made to this existing algorithm.
     163However, ultimately the ``relaxed-FIFO'' queue is not used as the basis of the \CFA scheduler.
    152164
    153165\section{Relaxed-FIFO++}
    154 The inherent fairness and good performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
     166The inherent fairness and decent performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
    155167The problem case is workloads where the number of \ats is barely greater than the number of \procs.
    156168In these situations, the wide sharding of the ready queue means most of its sub-queues are empty.
     
    162174
    163175As this is the most obvious challenge, it is worth addressing first.
    164 The obvious solution is to supplement each sharded sub-queue with data that indicates if the queue is empty/nonempty to simplify finding nonempty queues, \ie ready \glspl{at}.
    165 This sharded data can be organized in different forms, \eg a bitmask or a binary tree that tracks the nonempty sub-queues.
    166 Specifically, many modern architectures have powerful bitmask manipulation instructions or searching a binary tree has good Big-O complexity.
     176The seemingly obvious solution is to supplement each sharded sub-queue with data that indicates whether the queue is empty/nonempty.
     177This simplifies finding nonempty queues, \ie ready \glspl{at}.
     178The sharded data can be organized in different forms, \eg a bitmask or a binary tree that tracks the nonempty sub-queues, using a bit or a node per sub-queue, respectively.
     179Specifically, many modern architectures have powerful bitmask manipulation instructions, and searching a binary tree has good Big-O complexity.
    167180However, precisely tracking nonempty sub-queues is problematic.
    168181The reason is that the sub-queues are initially sharded with a width presumably chosen to avoid contention.
    169 However, tracking which ready queue is nonempty is only useful if the tracking data is dense, \ie denser than the sharded sub-queues.
     182However, tracking which ready queue is nonempty is only useful if the tracking data is dense, \ie each piece of tracking data covers multiple sub-queues.
    170183Otherwise, it does not provide useful information because reading this new data structure risks being as costly as simply picking a sub-queue at random.
    171184But if the tracking mechanism \emph{is} denser than the sharded sub-queues, then constant updates invariably create a new source of contention.
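
As a concrete illustration of the bitmask variant and of the contention it creates, the following C sketch (using GCC/Clang atomic builtins; the names and the sharding width are illustrative, not taken from the \CFA runtime) maintains one bit per sub-queue and searches it with a find-first-set instruction.
\begin{verbatim}
// Sketch of dense emptiness tracking with an atomic bitmask.
#include <stdint.h>

#define NQUEUES 256
static uint64_t mask[ NQUEUES / 64 ];             // one bit per sub-queue, shared by all procs

static inline void mark_nonempty( unsigned q ) {  // called on enqueue to an empty sub-queue
    __atomic_fetch_or ( &mask[ q / 64 ],  1ull << ( q % 64 ),   __ATOMIC_RELEASE );
}
static inline void mark_empty( unsigned q ) {     // called when a dequeue empties a sub-queue
    __atomic_fetch_and( &mask[ q / 64 ], ~( 1ull << ( q % 64 ) ), __ATOMIC_RELEASE );
}

// Return the index of some nonempty sub-queue, or -1 if none were observed.
static inline int find_nonempty( void ) {
    for( unsigned w = 0; w < NQUEUES / 64; w += 1 ) {
        uint64_t bits = __atomic_load_n( &mask[ w ], __ATOMIC_ACQUIRE );
        if( bits ) return w * 64 + __builtin_ctzll( bits );   // find-first-set
    }
    return -1;
}
\end{verbatim}
The problem described above is visible here: with 256 sub-queues the entire mask fits in a single cache line, so every transition between empty and nonempty by any \proc invalidates the line that every other \proc is reading.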
     
    184197
    185198\subsection{Dynamic Entropy}\cite{xkcd:dynamicentropy}
    186 The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
     199The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng} that drives the random picking of sub-queues.
    187200The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
    188201While this concept is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the sub-queues that were accessed.
    189 Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, \eg some Linear Congruential Generators\cite{wiki:lcg} support running the algorithm backwards while offering good quality and performance.
     202Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, \eg some Linear Congruential Generators~\cite{wiki:lcg} support running the algorithm backwards while offering good quality and performance.
    190203This particular \glsxtrshort{prng} can be used as follows:
    191204\begin{itemize}
     
    193206Each \proc maintains two \glsxtrshort{prng} states, referred to as $F$ and $B$.
    194207\item
    195 When a \proc attempts to dequeue a \at, it picks a sub-queue by running $B$ backwards.
    196 \item
    197 When a \proc attempts to enqueue a \at, it runs $F$ forward picking a sub-queue to enqueue to.
    198 If the enqueue is successful, state $B$ is overwritten with the content of $F$.
     208When a \proc attempts to dequeue a \at, it picks a sub-queue by running its $B$ backwards.
     209\item
     210When a \proc attempts to enqueue a \at, it runs its $F$ forward, picking a sub-queue to enqueue to.
     211If the enqueue is successful, the state of its $B$ is overwritten with the content of its $F$.
    199212\end{itemize}
    200213The result is that each \proc tends to dequeue \ats that it has itself enqueued.
    201214When most sub-queues are empty, this technique increases the odds of finding \ats at a very low cost, while also offering an improvement on locality in many cases.
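
The following C sketch shows the technique with a 64-bit linear congruential generator (the constants are Knuth's MMIX parameters; the per-\proc structure and function names are illustrative, not the \CFA implementation).
Because the multiplier is odd, it has a modular inverse modulo $2^{64}$, which is what allows the generator to be stepped backwards.
\begin{verbatim}
// Sketch of a bidirectional LCG driving sub-queue selection; names are illustrative.
#include <stdint.h>

#define LCG_A 6364136223846793005ull    // Knuth MMIX multiplier (odd, hence invertible)
#define LCG_C 1442695040888963407ull    // Knuth MMIX increment

// Modular inverse of LCG_A modulo 2^64 by Newton iteration; each step doubles
// the number of correct low-order bits, so 5 steps suffice for 64 bits.
static inline uint64_t lcg_a_inv( void ) {
    uint64_t x = LCG_A;
    for( int i = 0; i < 5; i += 1 ) x *= 2 - LCG_A * x;
    return x;
}

static inline uint64_t lcg_fwd( uint64_t * s ) { *s = LCG_A * *s + LCG_C; return *s; }
static inline uint64_t lcg_bck( uint64_t * s ) { *s = lcg_a_inv() * ( *s - LCG_C ); return *s; }

struct proc_rng { uint64_t F, B; };     // per-proc: F runs forwards, B runs backwards

static inline unsigned pick_enqueue( struct proc_rng * p, unsigned nqueues ) {
    return lcg_fwd( &p->F ) % nqueues;  // on a successful enqueue, the proc copies F into B
}
static inline unsigned pick_dequeue( struct proc_rng * p, unsigned nqueues ) {
    unsigned q = p->B % nqueues;        // most recent sub-queue this proc enqueued to
    lcg_bck( &p->B );                   // step back towards older enqueues
    return q;
}
\end{verbatim}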
    202215
    203 Tests showed this approach performs better than relaxed-FIFO in many cases.
     216My own tests showed this approach performs better than relaxed-FIFO in many cases.
    204217However, it is still not competitive with work-stealing algorithms.
    205 The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
     218The fundamental problem is that the randomness limits how much locality the scheduler offers.
    206219This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become frequent.
    207220Therefore, the attempt to modify the relaxed-FIFO algorithm to behave more like work stealing did not pan out.
     
    214227Before attempting to dequeue from a \proc's sub-queue, the \proc must make some effort to ensure other sub-queues are not being neglected.
    215228To make this possible, \procs must be able to determine which \at has been on the ready queue the longest.
    216 Second, the relaxed-FIFO approach needs timestamps for each \at to make this possible.
     229Second, the relaxed-FIFO approach uses timestamps, denoted TS, for each \at to make this possible.
     230These timestamps can be added to work stealing.
    217231
    218232\begin{figure}
    219233        \centering
    220234        \input{base.pstex_t}
    221         \caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-queues offers the sharding, two per \proc.
     235        \caption[Base \CFA design]{Base \CFA design \smallskip\newline It uses a pool of sub-queues, with a sharding of two sub-queues per \proc.
    222236        Each \gls{proc} can access all of the sub-queues.
    223237        Each \at is timestamped when enqueued.}
     
    227241Figure~\ref{fig:base} shows the algorithm structure.
    228242This structure is similar to classic work-stealing except the sub-queues are placed in an array so \procs can access them in constant time.
    229 Sharding width can be adjusted based on contention.
    230 Note, as an optimization, the TS of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
     243Sharding can be adjusted based on contention.
     244As an optimization, the timestamp of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
    231245This organization keeps the highly accessed front TSs directly in the array.
    232246When a \proc attempts to dequeue a \at, it first picks a random remote sub-queue and compares its timestamp to the timestamps of its local sub-queue(s).
    233 The oldest waiting \at is dequeued to provide global fairness.
     247The oldest of the compared \ats is dequeued.
     248In this document, picking from a remote sub-queue in this fashion is referred to as ``helping''.
     249
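The following C sketch shows this helping decision in isolation (synchronization and the second local sub-queue are omitted, and the helper names are hypothetical); the point is simply that a random remote sub-queue competes with the local one on the age of its front \at.
\begin{verbatim}
// Sketch of the timestamp-based helping decision; names are hypothetical.
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct subqueue;
extern struct subqueue queues[];                  // all sub-queues, two per proc
extern size_t nqueues;

extern int      queue_empty   ( struct subqueue * );
extern uint64_t queue_front_ts( struct subqueue * );  // enqueue timestamp of the front thread
extern void *   queue_pop     ( struct subqueue * );

void * dequeue_with_helping( struct subqueue * local ) {
    struct subqueue * remote = &queues[ rand() % nqueues ];
    struct subqueue * pick   = local;
    if( queue_empty( local )
        || ( !queue_empty( remote )
             && queue_front_ts( remote ) < queue_front_ts( local ) ) ) {
        pick = remote;                            // helping: the remote front thread is older
    }
    return queue_empty( pick ) ? NULL : queue_pop( pick );
}
\end{verbatim}
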
     250The timestamps are measured using the CPU's hardware timestamp counters~\cite{wiki:rdtsc}.
     251These provide a 64-bit counter that tracks the number of cycles since the CPU was powered on.
     252Assuming the CPU runs at less than 5 GHz, this means that the 64-bit counter takes over a century before overflowing.
     253This is true even on 32-bit CPUs, where the counter is generally still 64-bit.
     254However, on many architectures, the instructions to read the counter do not have any particular ordering guarantees.
     255Since the counter does not depend on any data in the CPU pipeline, there is significant flexibility for the instruction to be executed out of order, which limits the accuracy of the timestamp to a window of code.
     256Finally, another issue that can come up with timestamp counters is synchronization between \glspl{hthrd}.
     257This appears to be mostly a historical concern, as recent CPUs offer more synchronization guarantees.
     258For example, Intel supports ``Invariant TSC''~\cite[\S~17.15.1]{MAN:inteldev}, which is guaranteed to be synchronized across \glspl{hthrd}.
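
For reference, on x86-64 with GCC or Clang the counter can be read as sketched below; the ordered variant is shown only to illustrate the ordering issue mentioned above.
\begin{verbatim}
// Sketch: reading the x86-64 timestamp counter (GCC/Clang intrinsics).
#include <stdint.h>
#include <x86intrin.h>

static inline uint64_t rdtsc_relaxed( void ) {
    return __rdtsc();               // cheap, but may be reordered within nearby code
}

static inline uint64_t rdtsc_ordered( void ) {
    unsigned aux;
    return __rdtscp( &aux );        // waits for prior instructions; aux gets IA32_TSC_AUX
}
\end{verbatim}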
    234259
    235260However, this na\"ive implementation has performance problems.
    236 First, it is necessary to have some damping effect on helping.
    237 Random effects like cache misses and preemption can add spurious but short bursts of latency negating the attempt to help.
    238 These bursts can cause increased migrations and make this work-stealing approach slow down to the level of relaxed-FIFO.
     261First, it is necessary to avoid helping when it does not improve fairness.
     262Random effects like cache misses and preemption can add unpredictable but short bursts of latency, which do not warrant the cost of helping.
     263These bursts can cause increased migrations, at which point the same locality problems as in the relaxed-FIFO approach start to appear.
    239264
    240265\begin{figure}
     
    246271
    247272A simple solution to this problem is to use an exponential moving average\cite{wiki:ma} (MA) instead of a raw timestamp, as shown in Figure~\ref{fig:base-ma}.
    248 Note that this is more complex because the \at at the head of a sub-queue is still waiting, so its wait time has not ended.
     273Note that this is more complex than it appears because the \at at the head of a sub-queue is still waiting, so its wait time has not ended.
    249274Therefore, the exponential moving average is an average of how long each dequeued \at has waited.
    250275To compare sub-queues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
    251276This new wait time is averaged with the stored average.
    252277To further limit \glslink{atmig}{migrations}, a bias can be added to a local sub-queue, where a remote sub-queue is helped only if its moving average is more than $X$ times the local sub-queue's average.
    253 Tests for this approach indicate the choice of the weight for the moving average or the bias is not important, \ie weights and biases of similar \emph{magnitudes} have similar effects.
    254 
    255 With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
     278Tests for this approach indicate the precise values for the weight of the moving average and the bias are not important, \ie weights and biases of similar \emph{magnitudes} have similar effects.
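
A minimal C sketch of this refinement is shown below; the weight and bias constants are placeholders (as noted above, only their order of magnitude matters), and the surrounding dequeue logic is as in the earlier sketch.
\begin{verbatim}
// Sketch of the moving-average and bias refinement; constants are placeholders.
#include <stdint.h>

#define MA_WEIGHT 16u    // weight of the stored average relative to a new sample
#define HELP_BIAS  4u    // remote sub-queue must look this many times older to be helped

struct subqueue_stats {
    uint64_t ma;         // exponential moving average of observed wait times (in cycles)
};

// Fold in the wait time of a thread that was just dequeued.
static inline void ma_update( struct subqueue_stats * s, uint64_t waited ) {
    s->ma = ( s->ma * ( MA_WEIGHT - 1 ) + waited ) / MA_WEIGHT;
}

// Estimate used to compare sub-queues: the best-case wait of the current head
// (now - head_ts) averaged with the stored moving average.
static inline uint64_t ma_estimate( struct subqueue_stats const * s, uint64_t head_ts, uint64_t now ) {
    return ( s->ma + ( now - head_ts ) ) / 2;
}

// Help the remote sub-queue only if it looks significantly older than the local one.
static inline int should_help( uint64_t remote_est, uint64_t local_est ) {
    return remote_est > HELP_BIAS * local_est;
}
\end{verbatim}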
     279
     280With these additions to work stealing, scheduling can satisfy the starvation freedom guarantee while suffering much less from unnecessary migrations than the relaxed-FIFO approach.
    256281Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term unfairness or no starvation.
    257 The problem is that the constant polling, \ie reads, of remote sub-queues generally entails cache misses because the TSs are constantly being updated, \ie, writes.
    258 To make things worst, remote sub-queues that are very active, \ie \ats are frequently enqueued and dequeued from them, lead to higher chances that polling will incur a cache-miss.
     282The problem is that the constant polling, \ie reads, of remote sub-queues generally entails cache misses because the TSs are constantly being updated.
     283To make things worse, remote sub-queues that are very active, \ie \ats are frequently enqueued and dequeued from them, lead to higher chances that polling will incur a cache-miss.
    259284Conversely, the active sub-queues do not benefit much from helping since starvation is already a non-issue.
    260285This puts this algorithm in the awkward situation of paying for a largely unnecessary cost.
     
    264289The problem with polling remote sub-queues is that correctness is critical.
    265290There must be a consensus among \procs on which sub-queues hold which \ats, as the \ats are in constant motion.
    266 Furthermore, since timestamps are used for fairness, it is critical to have a consensus on which \at is the oldest.
     291Furthermore, since timestamps are used for fairness, it is critical that the oldest \ats eventually be recognized as such.
    267292However, when deciding if a remote sub-queue is worth polling, correctness is less of a problem.
    268293Since the only requirement is that a sub-queue is eventually polled, some data staleness is acceptable.
     
    278303        \centering
    279304        \input{base_ts2.pstex_t}
    280         \caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps.
     305        \caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline This design uses an array containing a copy of the timestamps.
    281306        These timestamps are written-to with relaxed atomics, so there is no order among concurrent memory accesses, leading to fewer cache invalidations.}
    282307        \label{fig:base-ts2}
     
    285310The correctness argument is somewhat subtle.
    286311The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
    287 Therefore, it is acceptable if stale data makes queues appear older than they are but appearing fresher can be a problem.
     312Therefore, it is acceptable if stale data makes queues appear older than they are, but appearing fresher can be a problem.
    288313For the timestamps, this means it is acceptable to miss writes to the timestamp since they make the head \at look older.
    289314For the moving average, as long as the operations are just atomic reads/writes, the average is guaranteed to yield a value that is between the oldest and newest values written.
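
A sketch of the redundant copy using C11 relaxed atomics follows (the array size and names are illustrative); the key is that both sides tolerate stale values in exactly the way argued above.
\begin{verbatim}
// Sketch of the redundant timestamp array updated with relaxed atomics.
#include <stdatomic.h>
#include <stdint.h>

#define NQUEUES 256
static _Atomic uint64_t ts_copy[ NQUEUES ];   // copy of each sub-queue's head timestamp

// Writer: publish the new head timestamp after an enqueue/dequeue; a missed or
// delayed write only makes the sub-queue look older, which is harmless.
static inline void ts_publish( unsigned q, uint64_t head_ts ) {
    atomic_store_explicit( &ts_copy[ q ], head_ts, memory_order_relaxed );
}

// Reader: used only to decide whether a sub-queue is worth polling, so a stale
// value is acceptable.
static inline uint64_t ts_peek( unsigned q ) {
    return atomic_load_explicit( &ts_copy[ q ], memory_order_relaxed );
}
\end{verbatim}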
     
    292317With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on most machines.
    293318The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
    294 For example on machines with a CPU containing multiple hyper threads and cores and multiple CPU sockets, cache misses can be satisfied from the caches on the same (local) CPU, or by a CPU on a different (remote) socket.
     319For example, on machines with multiple CPUs, cache misses can be satisfied from the caches on the same (local) CPU, or by the caches on a different (remote) CPU.
    295320Cache misses satisfied by a remote CPU have significantly higher latency than from the local CPU.
    296321However, these delays are not specific to systems with multiple CPUs.
     
    312337Figures~\ref{fig:cache-share} and~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference.
    313338In Figure~\ref{fig:cache-share}, all cache misses are either private to a CPU or shared with another CPU.
    314 This means latency due to cache misses is fairly consistent.
    315 In contrast, in Figure~\ref{fig:cache-noshare} misses in the L2 cache can be satisfied by either instance of the L3 cache.
     339This means that latency due to cache misses is fairly consistent.
     340In contrast, in Figure~\ref{fig:cache-noshare}, misses in the L2 cache can be satisfied by either instance of the L3 cache.
    316341However, the memory-access latency to the remote L3 is higher than the memory-access latency to the local L3.
    317 The impact of these different designs on this algorithm is that scheduling only scales well on architectures with a wide L3 cache, similar to Figure~\ref{fig:cache-share}, and less well on architectures with many narrower L3 cache instances, similar to Figure~\ref{fig:cache-noshare}.
     342The impact of these different designs on this algorithm is that scheduling only scales well on architectures with the L3 cache shared across many \glspl{hthrd}, similar to Figure~\ref{fig:cache-share}, and less well on architectures with many L3 cache instances and less sharing, similar to Figure~\ref{fig:cache-noshare}.
    318343Hence, as the number of L3 instances grows, so too does the chance that the random helping causes significant cache latency.
    319344The solution is for the scheduler to be aware of the cache topology.
     
    323348Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
    324349This work uses the cache topology information from Linux's @/sys/devices/system/cpu@ directory.
    325 This leaves the challenge of matching \procs to cache structure, or more precisely identifying which sub-queues of the ready queue are local to which subcomponents of the cache structure.
     350This leaves the challenge of matching \procs to cache structure, or more precisely, identifying which sub-queues of the ready queue are local to which subcomponents of the cache structure.
    326351Once a match is generated, the helping algorithm is changed to add bias so that \procs more often help sub-queues local to the same cache substructure.\footnote{
    327 Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}
     352Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning beyond the order of magnitude.}
    328353
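As an illustration of where this information comes from, the following standalone C program prints, for each \gls{hthrd}, the set of \glspl{hthrd} sharing its L3 instance; the @shared_cpu_list@ files are standard Linux sysfs entries, but the assumption that @index3@ is the L3 cache is machine-dependent.
\begin{verbatim}
// Sketch: discovering L3 sharing from Linux sysfs (index3 assumed to be the L3).
#include <stdio.h>

int main( void ) {
    for( int cpu = 0;; cpu += 1 ) {
        char path[ 128 ];
        snprintf( path, sizeof( path ),
                  "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list", cpu );
        FILE * f = fopen( path, "r" );
        if( !f ) break;                        // no such cpu (or cache level): stop
        char line[ 256 ];
        if( fgets( line, sizeof( line ), f ) ) printf( "cpu%d shares its L3 with %s", cpu, line );
        fclose( f );
    }
    return 0;
}
\end{verbatim}
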
    329354The simplest approach for mapping sub-queues to cache structure is to statically tie sub-queues to CPUs.
     
    335360However, it can still cause some subtle fairness problems in systems with few \procs and many \glspl{hthrd}.
    336361In this case, the large number of sub-queues and the bias against sub-queues tied to different cache substructures make it unlikely that every sub-queue is picked.
    337 To make things worst, the small number of \procs means that few helping attempts are made.
     362To make things worse, the small number of \procs means that few helping attempts are made.
    338363This combination of low selection and few helping attempts allows a \at to become stranded on a sub-queue for a long time until it gets randomly helped.
    339 On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can take multiple seconds for a \at to get dequeued from a remote queue.
     364On a system with 2 \procs, 256 \glspl{hthrd}, and a 100:1 bias, it can take multiple seconds for a \at to get dequeued from a remote queue.
     365In this scenario, where each \proc attempts to help on 50\% of dequeues, the probability that a given remote sub-queue gets help on any particular dequeue is $\frac{1}{51200}$, and the number of dequeues before it gets help follows a geometric distribution.
     366Therefore, the probability that the remote sub-queue gets help within the next 100,000 dequeues is only 85\%.
     367Assuming dequeues happen every 100ns, there is still a 15\% chance a \at could starve for more than 10ms and a 1\% chance the \at starves for 33.33ms, the maximum latency tolerated for interactive applications.
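As a sanity check on these figures, modelling each dequeue as an independent trial with success probability $\frac{1}{51200}$ gives
\begin{displaymath}
1 - \left( 1 - \frac{1}{51200} \right)^{100000} \approx 1 - e^{-100000/51200} \approx 0.86,
\end{displaymath}
\ie roughly the 85\% quoted above, and correspondingly an approximately 15\% chance that the sub-queue is still waiting after 100,000 dequeues, which is the 10ms figure at 100ns per dequeue.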
     368If few \glspl{hthrd} share each cache instance, the probability that a \at is on a remote sub-queue becomes high.
    340369Therefore, a more dynamic match of sub-queues to cache instances is needed.
    341370
     
    343372\label{s:TopologicalWorkStealing}
    344373The approach used in the \CFA scheduler is to have per-\proc sub-queues, but have an explicit data structure to track which cache substructure each sub-queue is tied to.
    345 This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
     374This tracking requires some finesse, because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
    346375A key element, however, is that, like the timestamps for helping, reading the cache instance mapping only needs to give the correct result \emph{often enough}.
    347 Therefore the algorithm can be built as follows: before enqueueing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
    348 Since sub-queues are tied to \procs, each \proc can then update the cache instance mapped to the local sub-queue(s).
     376Therefore the algorithm can be built as follows: before enqueueing or dequeuing a \at, a \proc queries the CPU id and the corresponding cache instance.
     377Since sub-queues are tied to \procs, a \proc can then update the cache instance mapped to the local sub-queue(s).
    349378To avoid unnecessary cache line invalidation, the map is only written-to if the mapping changes.
    350379
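A simplified C sketch of this update follows; the query of the CPU id is shown here with Linux's @sched_getcpu@, and the two arrays and their names are illustrative stand-ins for the \CFA data structures.
\begin{verbatim}
// Sketch of the dynamic sub-queue-to-cache mapping; names are illustrative.
#define _GNU_SOURCE
#include <sched.h>                      // sched_getcpu()
#include <stdatomic.h>

extern int cpu_to_cache[];              // filled once from sysfs: cpu id -> cache instance
extern _Atomic int queue_cache[];       // per sub-queue: cache instance it was last used from

// Called by a proc before enqueueing to / dequeuing from its own sub-queue q.
static inline int refresh_mapping( int q ) {
    int cache = cpu_to_cache[ sched_getcpu() ];
    // Only write when the mapping changes, to avoid needless cache-line invalidations.
    if( atomic_load_explicit( &queue_cache[ q ], memory_order_relaxed ) != cache )
        atomic_store_explicit( &queue_cache[ q ], cache, memory_order_relaxed );
    return cache;
}
\end{verbatim}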