Changeset aa60460 for doc


Timestamp:
Jul 3, 2022, 9:57:25 AM (2 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
3e3bee2
Parents:
84f90b6
Message:

proofread chapter text/core.tex

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/core.tex

    r84f90b6 raa60460  
    2525It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented to do so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.
    2626
    27 Similarly the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is compatitive to other popular languages, the guarantee will be consider achieved.
    28 
      27Similar to the performance guarantee, the lack of interference among threads is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
    2928More precisely the scheduler should be:
    3029\begin{itemize}
     
    3231        \item Faster than other schedulers that have equal or better fairness.
    3332\end{itemize}
     33(Everything should be made as fair as possible, but not fairer. Chuck Einstein, Albert's younger brother)
    3434
    3535\subsection{Fairness Goals}
    36 For this work fairness will be considered as having two strongly related requirements: true starvation freedom and ``fast'' load balancing.
    37 
    38 \paragraph{True starvation freedom} is more easily defined: As long as at least one \proc continues to dequeue \ats, all read \ats should be able to run eventually.
    39 In any running system, \procs can stop dequeing \ats if they start running a \at that will simply never park.
    40 Traditional workstealing schedulers do not have starvation freedom in these cases.
     36For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.
     37
     38\paragraph{True starvation freedom} means as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually (eventual progress).
     39In any running system, a \proc can stop dequeuing \ats if it starts running a \at that never blocks.
     40Without preemption, traditional work-stealing schedulers do not have starvation freedom in this case.
    4141Now this requirement raises the question: what about preemption?
    4242Generally speaking preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.
    4343
    4444\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
    45 For interactive applications that need to run at 60, 90, 120 frames per second, \ats having to wait for several millseconds to run are effectively starved.
     45For interactive applications that need to run at 60, 90, 120 frames per second, \ats having to wait for several milliseconds to run are effectively starved.
    4646Therefore load-balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
    4747With that said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats and the general load of the system.
     
    5050An important performance factor in modern architectures is cache locality. Waiting for data at lower levels or not present in the cache can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.
    5151
    52 For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
    53 
    54 However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally, where Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.
     52For a scheduler, having good locality {\color{red}PAB: I think you should fold this footnote into the paragraph}\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
     53
     54However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally. Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.
    5555
    5656\begin{figure}
     
    5858        \input{fairness.pstex_t}
    5959        \vspace*{-10pt}
    60         \caption[Fairness vs Locality graph]{Rule of thumb Fairness vs Locality graph \smallskip\newline The importance of Fairness and Locality while a ready \gls{thrd} awaits running is shown as the time the ready \gls{thrd} waits increases, Ready Time, the chances that its data is still in cache, Locality, decreases. At the same time, the need for fairness increases since other \glspl{thrd} may have the chance to run many times, breaking the fairness model. Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
     60        \caption[Fairness vs Locality graph]{Rule of thumb Fairness vs Locality graph \smallskip\newline The importance of Fairness and Locality while a ready \gls{thrd} awaits running: as the time the ready \gls{thrd} waits increases (Ready Time), the chance that its data is still in cache decreases (Locality). At the same time, the need for fairness increases since other \glspl{thrd} may have the chance to run many times, breaking the fairness model. Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
    6161        \label{fig:fair}
    6262\end{figure}
     
    6969Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
    7070One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structure of the scheduler can lead to diminishing improvements.
    71 While the ready-queue itself can be sharded to alleviate the main source of contention, auxillary scheduling features, \eg counting ready \ats, can also be sources of contention.
     71While the ready-queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
    7272
    7373\subsubsection{Migration Cost}
    74 Another important source of latency in scheduling is migration.
    75 An \at is said to have migrated if it is executed by two different \proc consecutively, which is the process discussed in \ref{fairnessvlocal}.
    76 Migrations can have many different causes, but it certain programs it can be all but impossible to limit migrations.
    77 Chapter~\ref{microbench} for example, has a benchmark where any \at can potentially unblock any other \at, which can leat to \ats migrating more often than not.
    78 Because of this it is important to design the internal data structures of the scheduler to limit the latency penalty from migrations.
     74Another important source of scheduling latency is migration.
     75A \at migrates if it executes on two different \procs consecutively, which is the process discussed in \ref{fairnessvlocal}.
     76Migrations can have many different causes, but in certain programs, it can be impossible to limit migration.
     77Chapter~\ref{microbench} has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating frequently.
     78Hence, it is important to design the internal data structures of the scheduler to limit any latency penalty from migrations.
    7979
    8080
    8181\section{Inspirations}
    82 In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue : create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.
    83 
    84 Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions which served as the inspiration scheduler in this thesis.
     82In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is a single point of contention when adding/removing \glspl{thrd}. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue: create multiple \emph{subqueues} forming the logical ready-queue, which multiple \glspl{hthrd} can access and modify without interfering.
     83
     84Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as inspiration for the scheduler in this thesis.
    8585
    8686\subsection{Work-Stealing}
    8787
    88 As mentioned in \ref{existing:workstealing}, a popular pattern shard the ready-queue is work-stealing.
    89 In this pattern each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
    90 The interesting aspect of workstealing happen in easier scheduling cases, \ie enough work for everyone but no more and no load balancing needed.
    91 In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
     88As mentioned in \ref{existing:workstealing}, a popular pattern for sharding the ready-queue is work-stealing.
     89In this pattern, each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
     90The interesting aspect of work stealing happens in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
     91In this case, work stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
    9292On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
    9393This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
    94 Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.
    95 
    96 
    97 Based on these observation, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
    98 
    99 \subsection{Relaxed-Fifo}
    100 An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgos any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick from.
    101 \Glspl{proc} choose ready-queus at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeing the oldest element.
    102 All subqueues are protected by TryLocks and \procs simply pick a different subqueue if they fail to acquire the TryLock.
    103 The result is a queue that has both decent scalability and sufficient fairness.
    104 The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
    105 This contrasts with work-stealing, where \emph{any} \gls{proc} busy for an extended period of time results in all the elements on its local queue to have to wait. Unless another \gls{proc} runs out of work.
     94Chapter~\ref{microbench} shows that in pathological cases work stealing can lead to indefinite starvation.
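
For concreteness, a minimal C sketch of this work-stealing pattern is shown below; the names (subqueue, pop_head) are illustrative placeholders and the sketch omits the synchronization a real ready queue needs, so it is not the \CFA implementation:

    #include <stddef.h>
    #include <stdlib.h>

    struct thrd;                                        // thread descriptor (a \at)
    struct subqueue;                                    // one per-\proc ready queue
    extern struct thrd * pop_head( struct subqueue * ); // returns NULL if the subqueue is empty

    struct thrd * steal_pop( struct subqueue * queues[], size_t n, size_t self ) {
        struct thrd * t = pop_head( queues[self] );     // fast path: local work, no helping
        if ( t ) return t;
        size_t victim = rand() % n;                     // slow path: steal only when out of work
        return victim == self ? NULL : pop_head( queues[victim] );
    }

Note how the remote subqueues are only ever examined on the slow path, which is the source of the starvation problem discussed above.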
     95
     96Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, but load balance proactively when the need arises.
     97
     98\subsection{Relaxed-FIFO}
     99
     100A different scheduling approach is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgoes any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which \glspl{proc} pick.
     101Scheduling is performed as follows:
     102\begin{itemize}
     103\item
     104All ready queues are protected by TryLocks.
     105\item
     106Timestamps are added to each element of a ready queue.
     107\item
     108A \gls{proc} randomly tests ready queues until it has acquired two queues.
     109\item
      110The older of the two \ats at the front of the acquired queues is dequeued.
     111\end{itemize}
     112The result is a queue that has both good scalability and sufficient fairness.
     113The lack of ownership ensures that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely any element will delay longer than any other element.
     114This guarantee contrasts with work-stealing, where a \gls{proc} with a long ready queue results in unfairness for its \ats in comparison to a \gls{proc} with a short ready queue. This unfairness persists until a \gls{proc} runs out of work and steals.
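
The scheduling steps listed above can be sketched directly in C; try_lock, head_ts and pop_head are hypothetical helpers standing in for the subqueue operations, not the interface of the cited work:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    struct thrd;
    struct subqueue;
    extern bool try_lock( struct subqueue * );                // the TryLock protecting a subqueue
    extern void unlock  ( struct subqueue * );
    extern unsigned long long head_ts( struct subqueue * );   // TS of the front \at, ULLONG_MAX if empty
    extern struct thrd * pop_head( struct subqueue * );       // NULL if empty

    struct thrd * relaxed_fifo_pop( struct subqueue * queues[], size_t n ) {
        struct subqueue * a, * b;
        do { a = queues[ rand() % n ]; } while ( ! try_lock( a ) );            // acquire a first subqueue
        do { b = queues[ rand() % n ]; } while ( b == a || ! try_lock( b ) );  // and a distinct second one
        struct subqueue * oldest = head_ts( a ) <= head_ts( b ) ? a : b;       // older front \at wins
        struct thrd * t = pop_head( oldest );                                  // NULL if both are empty
        unlock( a ); unlock( b );
        return t;                                                              // caller retries on NULL
    }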
    106115
    107116An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been on the queue.
    108 However, another major aspect is that \glspl{proc} will eagerly search for these older elements instead of focusing on specific queues.
    109 
    110 While the fairness, of this scheme is good, it does suffer in terms of performance.
    111 It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.
     117However, \glspl{proc} eagerly search for these older elements instead of focusing on specific queues, which affects locality.
     118
     119While this scheme has good fairness, its performance suffers.
     120It requires wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues is difficult when there are few ready \ats.
    112121
    113122\section{Relaxed-FIFO++}
    114 Since it has inherent fairness quelities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears as a good candidate to form the basis of a scheduler.
    115 The most obvious problems is for workloads where the number of \ats is barely greater than the number of \procs.
    116 In these situations, the wide sharding means most of the sub-queues from which the relaxed queue is formed will be empty.
    117 The consequence is that when a dequeue operations attempts to pick a sub-queue at random, it is likely that it picks an empty sub-queue and will have to pick again.
    118 This problem can repeat an unbounded number of times.
     123The inherent fairness and good performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
     124The problem case is workloads where the number of \ats is barely greater than the number of \procs.
     125In these situations, the wide sharding of the ready queue means most of its (relaxed) subqueues are empty.
     126Furthermore, the non-empty subqueues are unlikely to hold more than one item.
     127The consequence is that a random dequeue operation is likely to pick an empty subqueue, resulting in an unbounded number of selections.
     128This state is generally unstable: each subqueue is likely to frequently toggle between being empty and nonempty.
     129Indeed, when the number of \ats is \emph{equal} to the number of \procs, every pop operation is expected to empty a subqueue and every push is expected to add to an empty subqueue.
     130In the worst case, a check of the subqueues sees all are empty or full.
    119131
    120132As this is the most obvious challenge, it is worth addressing first.
    121 The obvious solution is to supplement each subqueue with some sharded data structure that keeps track of which subqueues are empty.
    122 This data structure can take many forms, for example simple bitmask or a binary tree that tracks which branch are empty.
    123 Following a binary tree on each pick has fairly good Big O complexity and many modern architectures have powerful bitmask manipulation instructions.
    124 However, precisely tracking which sub-queues are empty is actually fundamentally problematic.
    125 The reason is that each subqueues are already a form of sharding and the sharding width has presumably already chosen to avoid contention.
    126 However, tracking which ready queue is empty is only useful if the tracking mechanism uses denser sharding than the sub queues, then it will invariably create a new source of contention.
    127 But if the tracking mechanism is not denser than the sub-queues, then it will generally not provide useful because reading this new data structure risks being as costly as simply picking a sub-queue at random.
    128 Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.
     133The obvious solution is to supplement each sharded subqueue with data that indicates if the queue is empty/nonempty to simplify finding nonempty queues, \ie ready \glspl{at}.
     134This sharded data can be organized in different forms, \eg a bitmask or a binary tree that tracks the nonempty subqueues.
     135Specifically, many modern architectures have powerful bitmask-manipulation instructions, and searching a binary tree has good Big-O complexity.
     136However, precisely tracking nonempty subqueues is problematic.
     137The reason is that the subqueues are initially sharded with a width presumably chosen to avoid contention.
     138However, tracking which ready queue is nonempty is only useful if the tracking data is dense, \ie denser than the sharded subqueues.
     139Otherwise, it does not provide useful information because reading this new data structure risks being as costly as simply picking a subqueue at random.
     140But if the tracking mechanism \emph{is} denser than the sharded subqueues, then constant updates invariably create a new source of contention.
     141Early experiments with this approach showed that randomly picking, even with low success rates, is often faster than bit manipulations or tree walks.
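
As an illustration of the kind of dense tracking being rejected here, a global bitmask over the subqueues might look as follows (a sketch assuming at most 64 subqueues and a GCC/Clang builtin for the bit scan); the comments point out where the new contention appears:

    #include <stdatomic.h>
    #include <stdint.h>

    // Bit i set  <=>  subqueue i is (probably) nonempty; assumes at most 64 subqueues.
    static _Atomic uint64_t nonempty;

    static inline void mark_nonempty( unsigned i ) { atomic_fetch_or ( &nonempty,   1ull << i  ); }
    static inline void mark_empty   ( unsigned i ) { atomic_fetch_and( &nonempty, ~(1ull << i) ); }

    // Finding a candidate is a single instruction, but every push/pop of every
    // subqueue now writes this one shared word, recreating the very contention
    // that sharding was meant to remove.
    static inline int find_nonempty( void ) {
        uint64_t mask = atomic_load( &nonempty );
        return mask ? __builtin_ctzll( mask ) : -1;      // index of the lowest set bit
    }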
    129142
    130143The exception to this rule is using local tracking.
    131 If each \proc keeps track locally of which sub-queue is empty, then this can be done with a very dense data structure without introducing a new source of contention.
    132 The consequence of local tracking however, is that the information is not complete.
    133 Each \proc is only aware of the last state it saw each subqueues but does not have any information about freshness.
    134 Even on systems with low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
    135 This is due in part to the cost of this maintaining this information and its poor quality.
    136 
    137 However, using a very low cost approach to local tracking may actually be beneficial.
    138 If the local tracking is no more costly than the random pick, than \emph{any} improvement to the succes rate, however low it is, would lead to a performance benefits.
    139 This leads to the following approach:
     144If each \proc locally keeps track of empty subqueues, then this can be done with a very dense data structure without introducing a new source of contention.
     145However, the consequence of local tracking is that the information is incomplete.
     146Each \proc is only aware of the last state it saw for each subqueue, so this information quickly becomes stale.
     147Even on systems with low \gls{hthrd} count, \eg 4 or 8, this approach can quickly lead to the local information being no better than the random pick.
     148This result is due in part to the cost of maintaining information and its poor quality.
     149
     150However, using a very low cost but inaccurate approach for local tracking can actually be beneficial.
     151If the local tracking is no more costly than a random pick, then \emph{any} improvement to the success rate, however small, leads to a performance benefit.
     152This suggests the following approach:
    140153
    141154\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
    142 The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
    143 The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
    144 While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
    145 Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, for example some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
     155The Relaxed-FIFO approach can be made to handle the case of mostly empty subqueues by tweaking the \glsxtrlong{prng} (PRNG).
     156The \glsxtrshort{prng} state can be seen as containing a list of all the future subqueues that will be accessed.
     157While this concept is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
     158Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, \eg some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
    146159This particular \glsxtrshort{prng} can be used as follows:
    147 
    148 Each \proc maintains two \glsxtrshort{prng} states, which whill be refered to as \texttt{F} and \texttt{B}.
    149 
    150 When a \proc attempts to dequeue a \at, it picks the subqueues by running the \texttt{B} backwards.
    151 When a \proc attempts to enqueue a \at, it runs \texttt{F} forward to pick to subqueue to enqueue to.
    152 If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.
    153 
    154 The result is that each \proc will tend to dequeue \ats that it has itself enqueued.
    155 When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.
    156 
    157 However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
     160\begin{itemize}
     161\item
     162Each \proc maintains two \glsxtrshort{prng} states, referred to as $F$ and $B$.
     163\item
     164When a \proc attempts to dequeue a \at, it picks a subqueue by running $B$ backwards.
     165\item
     166When a \proc attempts to enqueue a \at, it runs $F$ forward picking a subqueue to enqueue to.
     167If the enqueue is successful, the state $B$ is overwritten with the content of $F$.
     168\end{itemize}
     169The result is that each \proc tends to dequeue \ats that it has itself enqueued.
     170When most subqueues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.
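
A sketch of such a reversible \glsxtrshort{prng} and the $F$/$B$ selection is given below; the LCG multiplier is only an example value, and the modular inverse is computed with the standard Newton iteration for odd multipliers modulo $2^{64}$:

    #include <stdint.h>

    // Any odd multiplier A has an inverse modulo 2^64; the value below is only an example.
    #define A 0xd1342543de82ef95ull
    #define C 0x0000000000000001ull

    static inline uint64_t lcg_fwd( uint64_t x ) { return x * A + C; }

    static inline uint64_t inv64( uint64_t a ) {           // multiplicative inverse of odd a mod 2^64
        uint64_t x = a;                                     // correct to 3 bits for odd a
        for ( int i = 0; i < 5; i += 1 ) x *= 2 - a * x;    // Newton: doubles the correct bits each round
        return x;
    }
    static inline uint64_t lcg_bck( uint64_t x ) { return (x - C) * inv64( A ); }

    struct proc_rng { uint64_t F, B; };                     // per-\proc forward and backward states

    static inline unsigned pick_push( struct proc_rng * r, unsigned n ) {
        r->F = lcg_fwd( r->F );                             // advance forward
        return r->F % n;                                    // subqueue to enqueue to; on success: r->B = r->F
    }
    static inline unsigned pick_pop( struct proc_rng * r, unsigned n ) {
        unsigned i = r->B % n;                              // most recently enqueued-to subqueue first
        r->B = lcg_bck( r->B );                             // step backwards through past picks
        return i;
    }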
     171
     172Tests showed this approach performs better than relaxed-FIFO in many cases.
     173However, it is still not competitive with work-stealing algorithms.
    158174The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
    159 This becomes problematic both because the scheduler is likely to get cache misses on internal data-structures and because migration become very frequent.
    160 Therefore since the approach of modifying to relaxed-FIFO algorithm to behave more like work stealing does not seem to pan out, the alternative is to do it the other way around.
     175This becomes problematic both because the scheduler is likely to get cache misses on internal data-structures and because migrations become frequent.
     176Therefore, the attempt to modify the relaxed-FIFO algorithm to behave more like work stealing did not pan out.
     177The alternative is to do it the other way around.
    161178
    162179\section{Work Stealing++}
    163 To add stronger fairness guarantees to workstealing a few changes.
     180To add stronger fairness guarantees to work stealing, a few changes are needed.
    164181First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
    165 Therefore the workstealing algorithm must be prepended with some monitoring.
    166 Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to make sure remote queues are not being neglected.
    167 To make this possible, \procs must be able to determie which \at has been on the ready-queue the longest.
    168 Which is the second aspect that much be added.
    169 The relaxed-FIFO approach uses timestamps for each \at and this is also what is done here.
     182Therefore, the work-stealing algorithm must be prepended with some monitoring.
     183Before attempting to dequeue from a \proc's subqueue, the \proc must make some effort to ensure other subqueues are not being neglected.
     184To make this possible, \procs must be able to determine which \at has been on the ready queue the longest.
     185Second, as in the relaxed-FIFO approach, a timestamp is added to each \at to make this possible.
    170186
    171187\begin{figure}
    172188        \centering
    173189        \input{base.pstex_t}
    174         \caption[Base \CFA design]{Base \CFA design \smallskip\newline A Pool of sub-ready queues offers the sharding, two per \glspl{proc}. Each \gls{proc} have local subqueues, however \glspl{proc} can access any of the sub-queues. Each \at is timestamped when enqueued.}
     190        \caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of subqueues offers the sharding, two per \gls{proc}. Each \gls{proc} can access all of the subqueues. Each \at is timestamped when enqueued.}
    175191        \label{fig:base}
    176192\end{figure}
    177 The algorithm is structure as shown in Figure~\ref{fig:base}.
    178 This is very similar to classic workstealing except the local queues are placed in an array so \procs can access eachother's queue in constant time.
    179 Sharding width can be adjusted based on need.
    180 When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeue from the remote queue if needed.
    181 
    182 Implemented as as naively state above, this approach has some obvious performance problems.
     193
     194Figure~\ref{fig:base} shows the algorithm structure.
     195This structure is similar to classic work-stealing except the subqueues are placed in an array so \procs can access them in constant time.
     196Sharding width can be adjusted based on contention.
     197Note, as an optimization, the TS of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
     198This organization keeps the highly accessed front TSs close together in the array.
     199When a \proc attempts to dequeue a \at, it first picks a random remote subqueue and compares its timestamp to the timestamps of its local subqueue(s).
     200The oldest waiting \at (possibly within some range) is dequeued to provide global fairness.
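
A sketch of this helping dequeue, with the same hypothetical head_ts/pop_head helpers as before, is:

    #include <stddef.h>
    #include <stdlib.h>

    struct thrd;
    struct subqueue;
    extern unsigned long long head_ts( struct subqueue * );   // TS of the front \at, ULLONG_MAX if empty
    extern struct thrd * pop_head( struct subqueue * );       // NULL if empty

    struct thrd * pop_with_help( struct subqueue * queues[], size_t n, size_t local ) {
        size_t remote = rand() % n;                            // one random remote subqueue per attempt
        size_t victim = head_ts( queues[remote] ) < head_ts( queues[local] )
                      ? remote : local;                        // help only if the remote \at is older
        return pop_head( queues[victim] );                     // NULL: retry with a new random pick
    }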
     201
     202However, this na\"ive implementation has performance problems.
    183203First, it is necessary to have some damping effect on helping.
    184 Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
    185 The effect of these bursts would be to cause more migrations than needed and make this workstealing approach slowdown to the match the relaxed-FIFO approach.
     204Random effects like cache misses and preemption can add spurious but short bursts of latency, negating the attempt to help.
     205These bursts cause unnecessary migrations and make this work-stealing approach slow down to the level of relaxed-FIFO.
    186206
    187207\begin{figure}
     
    192212\end{figure}
    193213
    194 A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead if the raw timestamps, shown in Figure~\ref{fig:base-ma}.
    195 Note that this is slightly more complex than it sounds because since the \at at the head of a subqueue is still waiting, its wait time has not ended.
    196 Therefore the exponential moving average is actually an exponential moving average of how long each already dequeued \at have waited.
    197 To compare subqueues, the timestamp at the head must be compared to the current time, yielding the bestcase wait time for the \at at the head of the queue.
     214A simple solution to this problem is to use an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} (MA) instead of raw timestamps, as shown in Figure~\ref{fig:base-ma}.
     215Note, this is more complex because the \at at the head of a subqueue is still waiting, so its wait time has not ended.
     216Therefore, the exponential moving average is actually an exponential moving average of how long each dequeued \at has waited.
     217To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait-time for the \at at the head of the queue.
    198218This new wait time is averaged with the stored average.
    199 To limit even more the amount of unnecessary migration, a bias can be added to the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
    200 None of the experimentation that I have run with these scheduler seem to indicate that the choice of the weight for the moving average or the choice of bis is particularly important.
    201 Weigths and biases of similar \emph{magnitudes} have similar effects.
    202 
    203 With these additions to workstealing, scheduling can be made as fair as the relaxed-FIFO approach, well avoiding the majority of unnecessary migrations.
    204 Unfortunately, the performance of this approach does suffer in the cases with no risks of starvation.
    205 The problem is that the constant polling of remote subqueues generally entail a cache miss.
    206 To make things worst, remote subqueues that are very active, \ie \ats are frequently enqueued and dequeued from them, the higher the chances are that polling will incurr a cache-miss.
    207 Conversly, the active subqueues do not benefit much from helping since starvation is already a non-issue.
    208 This puts this algorithm in an akward situation where it is paying for a cost, but the cost itself suggests the operation was unnecessary.
     219To further limit migration, a bias can be added to a local subqueue, where a remote subqueue is helped only if its moving average is more than $X$ times the local subqueue's average.
     220Tests for this approach indicate the choice of the weight for the moving average or the bias is not important, \ie weights and biases of similar \emph{magnitudes} have similar effects.
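
The moving average and bias can be sketched as follows; the weight and bias values are illustrative, head_wait is a hypothetical helper, and the sketch assumes both subqueues are nonempty:

    #include <stdbool.h>

    #define WEIGHT 0.125       // exponential moving-average weight (illustrative)
    #define BIAS   4.0         // help only if the remote subqueue looks this much older (illustrative)

    struct subqueue_stats { double ma; };                             // moving average of completed waits
    extern double head_wait( struct subqueue_stats *, double now );   // now - TS of the still-waiting front \at

    // Called with the wait of each \at as it is dequeued.
    static inline void update_ma( struct subqueue_stats * q, double wait ) {
        q->ma = (1.0 - WEIGHT) * q->ma + WEIGHT * wait;
    }

    // Fold the best-case wait of the front \at into the stored average, then
    // help the remote subqueue only if it looks sufficiently older than the local one.
    static inline double age( struct subqueue_stats * q, double now ) {
        return (1.0 - WEIGHT) * q->ma + WEIGHT * head_wait( q, now );
    }
    static inline bool should_help( struct subqueue_stats * local, struct subqueue_stats * remote, double now ) {
        return age( remote, now ) > BIAS * age( local, now );
    }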
     221
     222With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
     223Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term or no starvation.
     224The problem is that the constant polling (reading) of remote subqueues generally entails a cache miss because the TSs are constantly being updated (written).
     225To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued and dequeued from it, the higher the chance that polling incurs a cache miss.
     226Conversely, the active subqueues do not benefit much from helping since starvation is already a non-issue.
     227This puts this algorithm in the awkward situation of paying for a cost that is largely unnecessary.
    209228The good news is that this problem can be mitigated.
    210229
    211230\subsection{Redundant Timestamps}
    212 The problem with polling remote queues is due to a tension between the consistency requirement on the subqueue.
    213 For the subqueues, correctness is critical. There must be a consensus among \procs on which subqueues hold which \ats.
    214 Since the timestamps are use for fairness, it is alco important to have consensus and which \at is the oldest.
    215 However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
    216 Since the only need is that a subqueue will eventually be polled, some data staleness can be acceptable.
    217 This leads to a tension where stale timestamps are only problematic in some cases.
    218 Furthermore, stale timestamps can be somewhat desirable since lower freshness requirements means less tension on the cache coherence protocol.
    219 
    220 
    221 \begin{figure}
    222         \centering
    223         % \input{base_ts2.pstex_t}
    224         \caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline A array is added containing a copy of the timestamps. These timestamps are written to with relaxed atomics, without fencing, leading to fewer cache invalidations.}
    225         \label{fig:base-ts2}
    226 \end{figure}
    227 A solution to this is to create a second array containing a copy of the timestamps and average.
     231The problem with polling remote subqueues is that correctness is critical.
     232There must be a consensus among \procs on which subqueues hold which \ats, as the \ats are in constant motion.
     233Furthermore, since timestamps are used for fairness, it is critical to have consensus on which \at is the oldest.
     234However, when deciding if a remote subqueue is worth polling, correctness is less of a problem.
     235Since the only requirement is that a subqueue is eventually polled, some data staleness is acceptable.
     236This leads to a situation where stale timestamps are only problematic in some cases.
     237Furthermore, stale timestamps can be desirable since lower freshness requirements mean fewer cache invalidations.
     238
     239Figure~\ref{fig:base-ts2} shows a solution with a second array containing a copy of the timestamps and average.
    228240This copy is updated \emph{after} the subqueue's critical sections using relaxed atomics.
    229241\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
    230 The result is that since there is no fencing, the writes can be buffered and cause fewer cache invalidations.
    231 
    232 The correctness argument here is somewhat subtle.
     242The result is that since there is no fencing, the writes can be buffered in the hardware and cause fewer cache invalidations.
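
Using C11 atomics, the redundant-timestamp array and its relaxed accesses might be sketched as follows (names and sharding width are illustrative):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NQUEUES 64                                     // illustrative sharding width

    // Shadow copy of the head timestamps, one array entry per subqueue.
    static _Atomic uint64_t ts_copy[NQUEUES];

    // Called after the subqueue critical section: no fence, so the store can sit
    // in the store buffer and several updates may fold into one cache invalidation.
    static inline void publish_ts( size_t i, uint64_t ts ) {
        atomic_store_explicit( &ts_copy[i], ts, memory_order_relaxed );
    }

    // Remote \procs decide whether to poll from the copy; a stale value only makes
    // the subqueue look older than it is, which is safe for fairness.
    static inline uint64_t peek_ts( size_t i ) {
        return atomic_load_explicit( &ts_copy[i], memory_order_relaxed );
    }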
     243
     244\begin{figure}
     245        \centering
     246        \input{base_ts2.pstex_t}
     247        \caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written to with relaxed atomics, so there is no order among concurrent memory accesses, leading to fewer cache invalidations.}
     248        \label{fig:base-ts2}
     249\end{figure}
     250
     251The correctness argument is somewhat subtle.
    233252The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
    234 Therefore, it is acceptable if stale data make queues appear older than they really are but not fresher.
    235 For the timestamps, this means that missing writes to the timestamp is acceptable since they will make the head \at look older.
    236 For the moving average, as long as the operation are RW-safe, the average is guaranteed to yield a value that is between the oldest and newest values written.
    237 Therefore this unprotected read of the timestamp and average satisfy the limited correctness that is required.
     253Therefore, it is acceptable if stale data makes queues appear older than they really are but not fresher.
     254For the timestamps, this means a missed write to a timestamp is acceptable since it makes the head \at look older.
     255For the moving average, as long as the operations are just atomic reads/writes, the average is guaranteed to yield a value that is between the oldest and newest values written.
     256Therefore, this unprotected read of the timestamp and average satisfy the limited correctness that is required.
     257
     258With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on most machines.
     259The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
     260For example, on machines with a CPU containing multiple hyperthreads and cores and multiple CPU sockets, cache misses can be satisfied from the caches on the same (local) CPU, or by a CPU on a different (remote) socket.
     261Cache misses satisfied by a remote CPU have significantly higher latency than from the local CPU.
     262However, these delays are not specific to systems with multiple CPUs.
     263Depending on the cache structure, cache misses can have different latency on the same CPU, \eg the AMD EPYC 7662 CPUs used in Chapter~\ref{microbench}.
    238264
    239265\begin{figure}
    240266        \centering
    241267        \input{cache-share.pstex_t}
    242         \caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A very simple CPU with 4 \glspl{hthrd}. L1 and L2 are private to each \gls{hthrd} but the L3 is shared across to entire core.}
     268        \caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A CPU with 4 cores, where caches L1 and L2 are private to each core, and the L3 cache is shared across all cores.}
    243269        \label{fig:cache-share}
    244 \end{figure}
    245 
    246 \begin{figure}
    247         \centering
     270
     271        \vspace{25pt}
     272
    248273        \input{cache-noshare.pstex_t}
    249         \caption[CPU design with a narrower L3 sharing]{CPU design with a narrower L3 sharing \smallskip\newline A different CPU design, still with 4 \glspl{hthrd}. L1 and L2 are still private to each \gls{hthrd} but the L3 is shared some of the CPU but there is still two distinct L3 instances.}
     274        \caption[CPU design with a narrower L3 sharing]{CPU design with a narrow L3 sharing \smallskip\newline A CPU with 4 cores, where caches L1 and L2 are private to each core, and the L3 cache is shared across a pair of cores.}
    250275        \label{fig:cache-noshare}
    251276\end{figure}
    252277
    253 With redundant tiemstamps this scheduling algorithm achieves both the fairness and performance requirements, on some machines.
    254 The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
    255 For example, on machines where the motherboard holds multiple CPU, cache misses can be satisfied from a cache that belongs to the CPU that missed, the \emph{local} CPU, or by a different CPU, a \emph{remote} one.
    256 Cache misses that are satisfied by a remote CPU will have higher latency than if it is satisfied by the local CPU.
    257 However, this is not specific to systems with multiple CPUs.
    258 Depending on the cache structure, cache-misses can have different latency for the same CPU.
    259 The AMD EPYC 7662 CPUs that is described in Chapter~\ref{microbench} is an example of that.
    260 Figure~\ref{fig:cache-share} and Figure~\ref{fig:cache-noshare} show two different cache topologies with highlight this difference.
    261 In Figure~\ref{fig:cache-share}, all cache instances are either private to a \gls{hthrd} or shared to the entire system, this means latency due to cache-misses are likely fairly consistent.
    262 By comparison, in Figure~\ref{fig:cache-noshare} misses in the L2 cache can be satisfied by a hit in either instance of the L3.
    263 However, the memory access latency to the remote L3 instance will be notably higher than the memory access latency to the local L3.
    264 The impact of these different design on this algorithm is that scheduling will scale very well on architectures similar to Figure~\ref{fig:cache-share}, both will have notably worst scalling with many narrower L3 instances.
    265 This is simply because as the number of L3 instances grow, so two does the chances that the random helping will cause significant latency.
    266 The solution is to have the scheduler be aware of the cache topology.
     278Figures~\ref{fig:cache-share} and~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference.
     279In Figure~\ref{fig:cache-share}, every cache level is either private to a core or shared by all cores.
     280This means latency due to cache misses is fairly consistent.
     281In contrast, in Figure~\ref{fig:cache-noshare} misses in the L2 cache can be satisfied by either instance of L3 cache.
     282However, the memory-access latency to the remote L3 is higher than the memory-access latency to the local L3.
     283The impact of these different designs on this algorithm is that scheduling only scales well on architectures with a wide L3 cache, similar to Figure~\ref{fig:cache-share}, and less well on architectures with many narrower L3 cache instances, similar to Figure~\ref{fig:cache-noshare}.
     284Hence, as the number of L3 instances grow, so too does the chance that the random helping causes significant cache latency.
     285The solution is for the scheduler to be aware of the cache topology.
    267286
    268287\subsection{Per CPU Sharding}
    269 Building a scheduler that is aware of cache topology poses two main challenges: discovering cache topology and matching \procs to cache instance.
    270 Sadly, there is no standard portable way to discover cache topology in C.
    271 Therefore, while this is a significant portability challenge, it is outside the scope of this thesis to design a cross-platform cache discovery mechanisms.
    272 The rest of this work assumes discovering the cache topology based on Linux's \texttt{/sys/devices/system/cpu} directory.
    273 This leaves the challenge of matching \procs to cache instance, or more precisely identifying which subqueues of the ready queue are local to which cache instance.
    274 Once this matching is available, the helping algorithm can be changed to add bias so that \procs more often help subqueues local to the same cache instance
    275 \footnote{Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuinng.}.
    276 
    277 The obvious approach to mapping cache instances to subqueues is to statically tie subqueues to CPUs.
    278 Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each \glspl{hthrd} up front.
    279 Then \procs dequeue and enqueue by first asking which CPU id they are local to, in order to identify which subqueues are the local ones.
     288
     289Building a scheduler that is cache aware poses two main challenges: discovering the cache topology and matching \procs to this cache structure.
     290Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
     291This work uses the cache topology information from Linux's \texttt{/sys/devices/system/cpu} directory.
     292This leaves the challenge of matching \procs to cache structure, or more precisely identifying which subqueues of the ready queue are local to which subcomponents of the cache structure.
     293Once a matching is generated, the helping algorithm is changed to add bias so that \procs more often help subqueues local to the same cache substructure.\footnote{
     294Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}
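
As an illustration only, the per-CPU L3 grouping can be read from this directory roughly as follows; the exact index entry holding the L3 information varies between machines, so the level file is checked explicitly:

    #include <stddef.h>
    #include <stdio.h>

    // Return, in buf, the shared_cpu_list of the L3 cache of `cpu` (e.g. "0-7"),
    // or -1 if no L3 entry is found.
    static int l3_siblings( int cpu, char buf[], size_t len ) {
        for ( int idx = 0;; idx += 1 ) {
            char path[128]; int level = 0;
            snprintf( path, sizeof(path), "/sys/devices/system/cpu/cpu%d/cache/index%d/level", cpu, idx );
            FILE * f = fopen( path, "r" );
            if ( ! f ) return -1;                              // ran out of cache entries
            if ( fscanf( f, "%d", &level ) != 1 ) level = 0;
            fclose( f );
            if ( level != 3 ) continue;                        // skip the L1/L2 entries
            snprintf( path, sizeof(path), "/sys/devices/system/cpu/cpu%d/cache/index%d/shared_cpu_list", cpu, idx );
            f = fopen( path, "r" );
            if ( ! f || ! fgets( buf, (int)len, f ) ) { if ( f ) fclose( f ); return -1; }
            fclose( f );
            return 0;
        }
    }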
     295
     296The simplest approach for mapping subqueues to cache structure is to statically tie subqueues to CPUs.
     297Instead of having each subqueue local to a specific \proc (kernel thread), the system is initialized with subqueues for each hardware hyperthread/core up front.
     298Then \procs dequeue and enqueue by first asking which CPU id they are executing on, in order to identify which subqueues are the local ones.
    280299\Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.
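
A minimal sketch of this static mapping, assuming one subqueue was allocated per \gls{hthrd} at startup:

    #define _GNU_SOURCE
    #include <sched.h>

    struct subqueue;

    // The executing CPU id selects the local subqueue.  sched_getcpu() is only a
    // hint: the \proc can be preempted and migrated right after the call, which
    // is tolerable because any subqueue remains accessible to any \proc.
    static inline struct subqueue * local_subqueue( struct subqueue * queues[] ) {
        int cpu = sched_getcpu();
        return cpu < 0 ? queues[0] : queues[cpu];
    }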
    281300
    282 This approach solves the performance problems on systems with topologies similar to Figure~\ref{fig:cache-noshare}.
    283 However, it actually causes some subtle fairness problems in some systems, specifically systems with few \procs and many \glspl{hthrd}.
    284 In these cases, the large number of subqueues and the bias agains subqueues tied to different cache instances make it so it is very unlikely any single subqueue is picked.
    285 To make things worst, the small number of \procs mean that few helping attempts will be made.
    286 This combination of few attempts and low chances make it so a \at stranded on a subqueue that is not actively dequeued from may wait very long before it gets randomly helped.
     301This approach solves the performance problems on systems with narrow-L3 topologies, similar to Figure~\ref{fig:cache-noshare}.
     302However, it can still cause some subtle fairness problems in systems with few \procs and many \glspl{hthrd}.
     303In this case, the large number of subqueues and the bias against subqueues tied to different cache substructures make it unlikely that every subqueue is picked.
     304To make things worst, the small number of \procs mean that few helping attempts are made.
     305This combination of low selection and few helping attempts allow a \at to become stranded on a subqueue for a long time until it gets randomly helped.
    287306On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can actually take multiple seconds for a \at to get dequeued from a remote queue.
    288307Therefore, a more dynamic matching of subqueues to cache instance is needed.
    289308
    290309\subsection{Topological Work Stealing}
    291 The approach that is used in the \CFA scheduler is to have per-\proc subqueue, but have an excplicit data-structure track which cache instance each subqueue is tied to.
    292 This is requires some finess because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
     310
     311The approach used in the \CFA scheduler is to have per-\proc subqueues, but have an explicit data-structure track which cache substructure each subqueue is tied to.
     312This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
    293313A key element however is that, like the timestamps for helping, reading the cache instance mapping only needs to give the correct result \emph{often enough}.
    294 Therefore the algorithm can be built as follows: Before enqueuing or dequeing a \at, each \proc queries the CPU id and the corresponding cache instance.
     314Therefore the algorithm can be built as follows: before enqueueing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
    295315Since subqueues are tied to \procs, each \proc can then update the cache instance mapped to the local subqueue(s).
    296316To avoid unnecessary cache line invalidation, the map is only written to if the mapping changes.
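
A sketch of this map and its change-only update is shown below; cpu_to_cache stands in for the topology information gathered at startup, and all names are illustrative:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdatomic.h>
    #include <stddef.h>

    #define NQUEUES 64                                   // illustrative sharding width

    extern int cpu_to_cache( int cpu );                  // from the topology read at startup
    static _Atomic int cache_of[NQUEUES];                // cache instance last seen owning each subqueue

    // Called by a \proc before enqueue/dequeue on its own subqueue: the map is
    // only written when the mapping actually changed, so readers' cache lines
    // are rarely invalidated.
    static inline void refresh_map( size_t my_queue ) {
        int inst = cpu_to_cache( sched_getcpu() );
        if ( atomic_load_explicit( &cache_of[my_queue], memory_order_relaxed ) != inst )
            atomic_store_explicit( &cache_of[my_queue], inst, memory_order_relaxed );
    }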