Changeset 6db62fa for doc/theses/thierry_delisle_PhD/thesis/text
- Timestamp: Apr 9, 2022, 2:51:24 PM (3 years ago)
- Branches: ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
- Children: 2a77817
- Parents: 11a1240
- Location: doc/theses/thierry_delisle_PhD/thesis/text
- Files: 3 edited
doc/theses/thierry_delisle_PhD/thesis/text/core.tex
Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state. For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers. In short, the system is neither overloaded nor underloaded.

It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state tend to be pervasive in all states.

\section{Design Goals}
…
It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.

Similarly, the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show that the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall the programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
More precisely, the scheduler should be:
…
\end{itemize}

\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
An important performance factor in modern architectures is cache locality. Waiting for data from lower levels of the cache hierarchy, or for data not present in the cache at all, can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.

For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler, versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as a consequence cache lines, to a \gls{hthrd} that is currently available.

However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally; Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.
…
\end{figure}

\subsection{Performance Challenges}\label{pref:challenge}
While there exists a multitude of potential scheduling algorithms, they generally all must contend with the same performance challenges. Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.

\subsubsection{Scalability}
The most basic performance challenge of a scheduler is scalability.
Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structures of the scheduler can lead to smaller improvements.
While the ready-queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
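As an illustration of this kind of sharding, the following C sketch keeps one padded, per-\proc counter of ready \ats; the names and constants are hypothetical, not taken from the \CFA runtime. Writers touch only their own cache line, so the counter itself is contention-free; readers pay the cost of summing every slot to obtain an approximate total.

\begin{lstlisting}
#include <stdatomic.h>

#define NPROCS 64
#define CACHE_LINE 128

// One counter per proc, padded to avoid false sharing.
struct padded_count {
	_Alignas(CACHE_LINE) atomic_long count;
};
static struct padded_count ready[NPROCS];

// Each proc updates only its own slot: no contention.
void ready_inc(int proc) {
	atomic_fetch_add_explicit(&ready[proc].count, 1, memory_order_relaxed);
}
void ready_dec(int proc) {
	atomic_fetch_sub_explicit(&ready[proc].count, 1, memory_order_relaxed);
}

// Readers sum every slot, yielding an approximate snapshot.
long ready_total(void) {
	long sum = 0;
	for (int i = 0; i < NPROCS; i++)
		sum += atomic_load_explicit(&ready[i].count, memory_order_relaxed);
	return sum;
}
\end{lstlisting}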
\subsubsection{Migration Cost}
Another important source of latency in scheduling is migration.
An \at is said to have migrated if it is executed by two different \procs consecutively, which is the process discussed in \ref{fairnessvlocal}.
Migrations can have many different causes, but in certain programs it can be all but impossible to limit them.
Chapter~\ref{microbench}, for example, has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating more often than not.
Because of this, it is important to design the internal data structures of the scheduler to limit the latency penalty of migrations.

\section{Inspirations}
In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is that adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue: create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.

Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions which served as the inspiration for the scheduler in this thesis.

\subsection{Work-Stealing}

As mentioned in \ref{existing:workstealing}, a popular pattern for sharding the ready-queue is work-stealing.
In this pattern, each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
The interesting aspect of work stealing happens in the easier scheduling cases, \ie enough work for everyone but no more and no load balancing needed.
In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.

Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
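The pattern can be outlined in a few lines of C. The sketch below assumes hypothetical primitives (\texttt{pop\_front}, \texttt{steal}, \texttt{my\_id}, \texttt{rand\_proc}), \eg over Chase-Lev deques; the point is that remote queues are only ever touched once the local queue runs dry.

\begin{lstlisting}
typedef struct thread_s thread_t;
typedef struct deque_s deque_t;

// Assumed primitives over per-proc deques.
thread_t * pop_front(deque_t *);  // owner end; NULL if empty
thread_t * steal(deque_t *);      // thief end; NULL if empty or lost race
int my_id(void);                  // index of the calling proc
int rand_proc(int nprocs);        // uniformly random victim index

// Work locally until empty, then steal from random victims.
thread_t * next_thread(deque_t * queues, int nprocs) {
	thread_t * t = pop_front(&queues[my_id()]);
	while (t == NULL) {                 // only on local exhaustion
		int victim = rand_proc(nprocs);
		if (victim == my_id()) continue;
		t = steal(&queues[victim]);     // may fail, loop and retry
	}
	return t;
}
\end{lstlisting}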
\subsection{Relaxed-Fifo}
An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgoes any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick.
\Glspl{proc} choose ready-queues at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeuing the oldest element.
All subqueues are protected by try-locks and \procs simply pick a different subqueue if they fail to acquire the try-lock.
The result is a queue that has both decent scalability and sufficient fairness.
The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
…

While the fairness of this scheme is good, it does suffer in terms of performance.
It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.
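The dequeue side of this scheme can be sketched as follows in C; the layout and the \texttt{pop\_head} helper are hypothetical. The essential ingredients are the two random picks, the comparison of head timestamps, and try-locks so that a locked subqueue is simply skipped.

\begin{lstlisting}
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct thread_s thread_t;
typedef struct {
	pthread_mutex_t lock;  // only ever acquired with trylock
	uint64_t head_ts;      // timestamp of the head element
	// ... intrusive FIFO of threads, omitted
} subqueue_t;

thread_t * pop_head(subqueue_t *);  // assumed; NULL if empty

// Pick two random subqueues and dequeue from the one with the
// oldest head; retry if it is locked or turns out to be empty.
thread_t * relaxed_dequeue(subqueue_t * queues, int width) {
	for (;;) {
		subqueue_t * a = &queues[rand() % width];
		subqueue_t * b = &queues[rand() % width];
		if (b->head_ts < a->head_ts) a = b;  // keep the older head
		if (pthread_mutex_trylock(&a->lock) != 0) continue;
		thread_t * t = pop_head(a);
		pthread_mutex_unlock(&a->lock);
		if (t != NULL) return t;
	}
}
\end{lstlisting}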
\section{Relaxed-FIFO++}
Since it has inherent fairness qualities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears to be a good candidate to form the basis of a scheduler.
The most obvious problem is workloads where the number of \ats is barely greater than the number of \procs.
In these situations, the wide sharding means most of the sub-queues from which the relaxed queue is formed will be empty.
The consequence is that when a dequeue operation attempts to pick a sub-queue at random, it is likely to pick an empty sub-queue and have to pick again.
This problem can repeat an unbounded number of times.

As this is the most obvious challenge, it is worth addressing first.
The obvious solution is to supplement each subqueue with some sharded data structure that keeps track of which subqueues are empty.
This data structure can take many forms, for example, a simple bitmask or a binary tree that tracks which branches are empty.
Following a binary tree on each pick has fairly good big-O complexity and many modern architectures have powerful bitmask-manipulation instructions.
However, precisely tracking which sub-queues are empty is actually fundamentally problematic.
The reason is that each subqueue is already a form of sharding and the sharding width has presumably already been chosen to avoid contention.
If the tracking mechanism uses denser sharding than the sub-queues, it will invariably create a new source of contention.
But if the tracking mechanism is not denser than the sub-queues, then it will generally not be useful, because reading this new data structure risks being as costly as simply picking a sub-queue at random.
Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.

The exception to this rule is local tracking.
If each \proc keeps track locally of which sub-queues are empty, then this can be done with a very dense data structure without introducing a new source of contention.
The consequence of local tracking, however, is that the information is not complete.
Each \proc is only aware of the last state it saw for each subqueue but does not have any information about freshness.
Even on systems with a low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
This is due in part to the cost of maintaining this information and to its poor quality.

However, using a very low-cost approach to local tracking may actually be beneficial.
If the local tracking is no more costly than the random pick, then \emph{any} improvement to the success rate, however small, leads to a performance benefit.
This leads to the following approach:

\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
Luckily, bidirectional \glsxtrshort{prng} algorithms do exist; for example, some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
This particular \glsxtrshort{prng} can be used as follows:

Each \proc maintains two \glsxtrshort{prng} states, which will be referred to as \texttt{F} and \texttt{B}.

When a \proc attempts to dequeue a \at, it picks the subqueue by running \texttt{B} backwards.
When a \proc attempts to enqueue a \at, it runs \texttt{F} forward to pick the subqueue to enqueue to.
If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.

The result is that each \proc tends to dequeue \ats that it has itself enqueued.
When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.
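The following C sketch makes this concrete using a linear congruential generator. The multiplier and increment are Knuth's well-known MMIX constants; since the multiplier is odd, its inverse modulo $2^{64}$ exists and can be computed with Newton's iteration, so the generator can be stepped in both directions. The function and structure names are illustrative only.

\begin{lstlisting}
#include <stdint.h>

#define A 6364136223846793005ULL  // Knuth's MMIX multiplier (odd)
#define C 1442695040888963407ULL  // Knuth's MMIX increment

// Inverse of an odd number mod 2^64: each Newton step doubles
// the number of correct low bits, so five steps cover 64 bits.
static uint64_t inv64(uint64_t a) {
	uint64_t x = a;  // correct to 3 bits for any odd a
	for (int i = 0; i < 5; i++) x *= 2 - a * x;
	return x;
}

static uint64_t forward (uint64_t s) { return s * A + C; }
static uint64_t backward(uint64_t s) { return (s - C) * inv64(A); }

typedef struct { uint64_t f, b; } proc_rng_t;

// Enqueue: run F forward; on success, overwrite B with F.
unsigned pick_enqueue(proc_rng_t * r, unsigned width) {
	r->f = forward(r->f);
	return (unsigned)(r->f % width);
}
void enqueue_success(proc_rng_t * r) { r->b = r->f; }

// Dequeue: run B backwards, revisiting recently used subqueues.
unsigned pick_dequeue(proc_rng_t * r, unsigned width) {
	unsigned q = (unsigned)(r->b % width);
	r->b = backward(r->b);
	return q;
}
\end{lstlisting}

Because \texttt{backward(forward(s)) == s}, running \texttt{B} backwards replays, from most recent to oldest, exactly the subqueues that \texttt{F} picked.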
However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become very frequent.
Therefore, since modifying the relaxed-FIFO algorithm to behave more like work stealing does not seem to pan out, the alternative is to do it the other way around.

\section{Work Stealing++}
To add stronger fairness guarantees to work stealing, a few changes are needed.
First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
Therefore the work-stealing algorithm must be prepended with some monitoring.
Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to ensure remote queues are not being neglected.
To make this possible, \procs must be able to determine which \at has been on the ready-queue the longest, which is the second aspect that must be added.
The relaxed-FIFO approach uses timestamps for each \at, and this is also what is done here.

\begin{figure}
\centering
\input{base.pstex_t}
\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-ready-queues offers the sharding, two per \gls{proc}. Each \gls{proc} has local subqueues; however, \glspl{proc} can access any of the sub-queues. Each \at is timestamped when enqueued.}
\label{fig:base}
\end{figure}
The algorithm is structured as shown in Figure~\ref{fig:base}.
This is very similar to classic work stealing, except the local queues are placed in an array so \procs can access each other's queues in constant time.
The sharding width can be adjusted based on need.
When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeuing from the remote queue if needed.

Implemented as naively as stated above, this approach has some obvious performance problems.
First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
The effect of these bursts would be to cause more migrations than needed and make this work-stealing approach slow down to match the relaxed-FIFO approach.

\begin{figure}
\centering
\input{base_avg.pstex_t}
\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each subqueue.}
\label{fig:base-ma}
\end{figure}

A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead of the raw timestamps, as shown in Figure~\ref{fig:base-ma}.
Note that this is slightly more complex than it sounds: because the \at at the head of a subqueue is still waiting, its wait time has not ended.
Therefore, the exponential moving average is actually an exponential moving average of how long each already dequeued \at waited.
To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
This new wait time is averaged with the stored average.
To further limit the amount of unnecessary migration, a bias can be added to the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
None of the experiments run with this scheduler indicate that the choice of weight for the moving average or the choice of bias is particularly important.
Weights and biases of similar \emph{magnitudes} have similar effects.
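The comparison can be sketched in C as follows; \texttt{now()} stands in for whatever cheap monotonic clock the runtime uses, and the weight and bias are placeholder values, since similar magnitudes behave similarly.

\begin{lstlisting}
#include <stdint.h>

uint64_t now(void);  // assumed monotonic clock

typedef struct {
	uint64_t head_ts;  // timestamp of the at at the head
	uint64_t avg;      // moving average of completed wait times
} subq_stats_t;

// On dequeue, fold the head's (now complete) wait into the average.
void update_avg(subq_stats_t * q) {
	uint64_t wait = now() - q->head_ts;
	q->avg = (7 * q->avg + wait) / 8;  // weight 1/8, placeholder
}

#define BIAS 4  // the X factor from the text, placeholder

// Help a remote subqueue only if, counting its head's best-case
// wait, it looks substantially older than the local queue.
int should_help(subq_stats_t * local, subq_stats_t * remote) {
	uint64_t best_case = now() - remote->head_ts;
	uint64_t remote_avg = (7 * remote->avg + best_case) / 8;
	return remote_avg > BIAS * local->avg;
}
\end{lstlisting}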
With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, while avoiding the majority of unnecessary migrations.
Unfortunately, the performance of this approach does suffer in the cases with no risk of starvation.
The problem is that the constant polling of remote subqueues generally entails a cache miss.
To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued and dequeued from it, the higher the chance that polling incurs a cache miss.
Conversely, the active subqueues do not benefit much from helping, since starvation is already a non-issue.
This puts this algorithm in an awkward situation where it is paying a cost, but the cost itself suggests the operation was unnecessary.
The good news is that this problem can be mitigated.

\subsection{Redundant Timestamps}
The problem with polling remote queues stems from a tension between the consistency requirements on the subqueues.
For the subqueues themselves, correctness is critical.
There must be a consensus among \procs on which subqueues hold which \ats.
Since the timestamps are used for fairness, it is also important to have consensus on which \at is the oldest.
However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
Since the only requirement is that a subqueue is eventually polled, some staleness in the data is acceptable.
This leads to a tension where stale timestamps are only problematic in some cases.
Furthermore, stale timestamps can be somewhat desirable, since lower freshness requirements mean less tension on the cache-coherence protocol.

\begin{figure}
\centering
% \input{base_ts2.pstex_t}
\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written to with relaxed atomics, without fencing, leading to fewer cache invalidations.}
\label{fig:base-ts2}
\end{figure}
A solution to this is to create a second array containing a copy of the timestamps and averages.
This copy is updated \emph{after} the subqueue's critical sections, using relaxed atomics.
\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
The result is that, since there is no fencing, the writes can be buffered and cause fewer cache invalidations.

The correctness argument here is somewhat subtle.
The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
Therefore, it is acceptable if stale data makes queues appear older than they really are, but not fresher.
For the timestamps, this means that missed writes are acceptable, since they make the head \at look older.
For the moving average, as long as the operations are RW-safe, the average is guaranteed to yield a value that is between the oldest and newest values written.
Therefore, this unprotected read of the timestamp and average satisfies the limited correctness that is required.
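A C sketch of the scheme follows; the separation between the authoritative fields and their published copies is the point, while the names are illustrative.

\begin{lstlisting}
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	// Authoritative values, only written inside the subqueue's
	// critical section (lock omitted).
	uint64_t ts;
	uint64_t avg;
} subqueue_t;

typedef struct {
	// Redundant copies, read by remote procs deciding whether to poll.
	_Atomic uint64_t ts;
	_Atomic uint64_t avg;
} ts_copy_t;

// After the critical section: publish with relaxed stores.  With no
// fences, the stores can stay buffered and invalidate fewer lines.
void publish(subqueue_t * q, ts_copy_t * c) {
	atomic_store_explicit(&c->ts,  q->ts,  memory_order_relaxed);
	atomic_store_explicit(&c->avg, q->avg, memory_order_relaxed);
}

// Polling decisions read only the (possibly stale) copies.
uint64_t peek_ts(ts_copy_t * c) {
	return atomic_load_explicit(&c->ts, memory_order_relaxed);
}
\end{lstlisting}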
\subsection{Per CPU Sharding}

\subsection{Topological Work Stealing}

doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
The first step of evaluation is always to test out small controlled cases, to ensure that the basics are working properly.
This section presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All of these benchmarks are run on two distinct hardware environments, an AMD and an Intel machine.

\paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
These EPYCs have 64 cores per CPU and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
The CPUs each have 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.

\paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
These Xeon Platinums have 24 cores per CPU and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
The CPUs each have 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.

This limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine. Indeed, while on both architectures L2 cache misses that are served by an L3 cache on a different CPU incur a significant latency, on AMD it is also the case that cache misses served by a different L3 instance on the same CPU still incur high latency.

\section{Cycling latency}
…
\end{figure}

To avoid this benchmark being dominated by the idle-sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
Beyond this point, adding more rings serves to further mitigate the idle-sleep handling.
This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

The actual benchmark is more complicated in order to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition.

\begin{lstlisting}
Thread.main() {
…
\end{lstlisting}
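The termination pattern just described can be sketched in C with POSIX semaphores; this is a hypothetical illustration, not the benchmark's actual code. Posting to the successor \emph{before} checking the stop condition guarantees that every node in the ring is eventually woken and can observe the stop flag.

\begin{lstlisting}
#include <semaphore.h>
#include <stdbool.h>

// Hypothetical ring node: each thread waits on its own semaphore
// and wakes the next node in the ring.
typedef struct node {
	sem_t sem;
	struct node * next;
	long count;
} node_t;

extern volatile bool stop;  // set once the desired run time elapses

void * cycle_main(void * arg) {
	node_t * n = arg;
	long count = 0;
	for (;;) {
		sem_wait(&n->sem);        // P: block until woken by predecessor
		sem_post(&n->next->sem);  // V: wake successor before the check,
		count++;                  //    so no node is left blocked
		if (stop) break;
	}
	n->count = count;
	return NULL;
}
\end{lstlisting}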
\begin{figure}
\centering
\input{result.cycle.jax.ops.pstex_t}
\vspace*{-10pt}
\label{fig:cycle:ns:jax}
\end{figure}

\section{Yield}

doc/theses/thierry_delisle_PhD/thesis/text/io.tex
\paragraph{Pending Allocations} can be more complicated to handle.
If the arbiter has available instances, the arbiter can attempt to directly hand over the instance and satisfy the request.
Otherwise, it must hold onto the list of threads until SQEs are made available again.
This handling becomes that much more complex if pending allocations require more than one SQE, since the arbiter must decide between satisfying requests in FIFO order or satisfying requests for fewer SQEs first.

While this arbiter has the potential to solve many of the problems mentioned above, it also introduces a significant amount of complexity.
Tracking which processors are borrowing which instances and which instances have SQEs available ends up adding a significant synchronization prelude to any I/O operation.
Any submission must start with a handshake that pins the currently borrowed instance, if available.
An attempt to allocate is then made, but the arbiter can concurrently be attempting to allocate from the same instance from a different \gls{hthrd}.
Once the allocation is completed, the submission must still check that the instance is still borrowed before attempting to flush.
These extra synchronization steps end up having a similar cost to the multiple shared instances approach.
Furthermore, if the number of instances does not match the number of processors actively submitting I/O, the system can fall into a state where instances are constantly being revoked and cycled among the processors, which leads to significant cache deterioration.
For these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.
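The synchronization prelude can be sketched as follows; every name here is hypothetical, made up for illustration rather than taken from the \CFA runtime. The sketch only shows the ordering of the steps described above: pin, allocate, and re-check the borrow before flushing.

\begin{lstlisting}
typedef struct proc_s proc_t;
typedef struct instance_s instance_t;
typedef struct sqe_s sqe_t;

// Assumed operations on the arbiter and instances.
instance_t * pin_instance(proc_t *);     // NULL if none borrowed
instance_t * arbiter_request(proc_t *);  // may block for an instance
sqe_t * try_alloc_sqe(instance_t *);     // races with the arbiter
int still_borrowed(proc_t *, instance_t *);
void unpin_instance(proc_t *);
void flush(instance_t *);

sqe_t * submit_prologue(proc_t * p, instance_t ** out) {
	instance_t * inst = pin_instance(p);  // handshake: prevent revocation
	if (inst == NULL) inst = arbiter_request(p);
	*out = inst;
	sqe_t * sqe = try_alloc_sqe(inst);    // arbiter may race on this instance
	if (sqe == NULL) unpin_instance(p);   // failed: give the instance back
	return sqe;
}

void submit_epilogue(proc_t * p, instance_t * inst) {
	if (still_borrowed(p, inst))          // instance may have been revoked
		flush(inst);
	unpin_instance(p);
}
\end{lstlisting}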
\subsubsection{Private Instances V2}