Changeset 62424af2 for doc/theses/thierry_delisle_PhD
- Timestamp:
- Sep 9, 2022, 3:44:44 PM (2 years ago)
- Branches:
- ADT, ast-experimental, master, pthread-emulation
- Children:
- 4a2d728, d895e32
- Parents:
- 264f6c9
- Location:
- doc/theses/thierry_delisle_PhD/thesis/text
- Files:
- 4 edited
doc/theses/thierry_delisle_PhD/thesis/text/core.tex
\section{Design Goals}
As with most of the design decisions behind \CFA, an important goal is to match the expectation of the programmer according to their execution mental model.
To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental model, the system also respects this model.

For threading, a simple and common execution mental model is the ``Ideal multi-tasking CPU'':

\begin{displayquote}[Linux CFS\cite{MAN:linux/cfs}]
{[The]} ``Ideal multi-tasking CPU'' is a (non-existent :-)) CPU that has 100\% physical power and which can run each task at precise equal speed, in parallel, each at [an equal fraction of the] speed. For example: if there are 2 running tasks, then it runs each at 50\% physical power --- i.e., actually in parallel.
\label{q:LinuxCFS}
\end{displayquote}
…
This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models.
Recall programmer expectation is that the impact of the scheduler can be ignored.
Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
More precisely, the scheduler should be:
\begin{itemize}
…
In any running system, a \proc can stop dequeuing \ats if it starts running a \at that never blocks.
Without preemption, traditional work-stealing schedulers do not have starvation freedom in this case.
Now, this requirement raises the question: what about preemption?
Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.

\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
For interactive applications that need to run at 60, 90 or 120 frames per second, \ats having to wait for several milliseconds to run are effectively starved.
Therefore, load balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
With that said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats and the general \gls{load} of the system.
…
For a scheduler, having good locality, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness.
Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \at, and as a consequence cache lines, to a \gls{hthrd} that is currently available.
Note that this section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler, versus \emph{external locality}, \ie, how scheduling affects the locality of the application's data.
External locality is a much more complicated subject and is discussed in the next section.

However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally.
Figure~\ref{fig:fair} shows a visual representation of this behaviour.
As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental model.

\begin{figure}
…
\input{fairness.pstex_t}
\vspace*{-10pt}
\caption[Fairness vs Locality graph]{Rule of thumb Fairness vs Locality graph \smallskip\newline The importance of Fairness and Locality while a ready \at awaits running is shown: as the time the ready \at waits increases (Ready Time), the chances that its data is still in cache decrease (Locality).
At the same time, the need for fairness increases since other \ats may have the chance to run many times, breaking the fairness model.
Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
…
Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structures of the scheduler can limit this improvement.
While the ready queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.

\subsubsection{Migration Cost}
…
The problem is a single point of contention when adding/removing \ats.
As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}.
The solution to this problem is to shard the ready queue: create multiple \emph{sub-queues} forming the logical ready queue, where the sub-queues are accessed by multiple \glspl{hthrd} without interfering.

Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as the inspiration for the scheduler in this thesis.
…
As mentioned in \ref{existing:workstealing}, a popular sharding approach for the ready queue is work-stealing.
In this approach, each \gls{proc} has its own local sub-queue and \glspl{proc} only access each other's sub-queue if they run out of work on their local ready queue.
The interesting aspect of work stealing happens in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
In this case, work stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
…
Chapter~\ref{microbench} shows that in pathological cases work stealing can lead to indefinite starvation.

Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, but load balance proactively when the need arises.

\subsection{Relaxed-FIFO}
A different scheduling approach is to create a ``relaxed-FIFO'' queue, as in \cite{alistarh2018relaxed}.
This approach forgoes any ownership between \gls{proc} and sub-queue, and simply creates a pool of ready queues from which \glspl{proc} pick.
Scheduling is performed as follows (a sketch of the dequeue follows the list):
\begin{itemize}
\item
All sub-queues are protected by TryLocks.
\item
Timestamps are added to each element of a sub-queue.
\item
A \gls{proc} randomly tests ready queues until it has acquired one or two queues.
\item
If two queues are acquired, the older of the two \ats is dequeued from the front of the acquired queues.
\item
Otherwise, the \at from the single queue is dequeued.
\end{itemize}
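As a concrete illustration of these steps, a minimal sketch of the dequeue in plain C follows; @trylock@, @unlock@, @head_ts@, and @pop@ are assumed helper routines, not the interface from \cite{alistarh2018relaxed}.
\begin{lstlisting}[language=C]
// Sketch of the relaxed-FIFO dequeue summarized above.
#include <stdbool.h>
#include <stdlib.h>

struct subqueue;                                  // FIFO protected by a TryLock
extern struct subqueue * queues[];                // the shared pool of sub-queues
extern unsigned nqueues;

extern bool trylock( struct subqueue * );
extern void unlock( struct subqueue * );
extern unsigned long long head_ts( struct subqueue * );  // enqueue time of the front element
extern void * pop( struct subqueue * );                  // remove the front element, NULL if empty

static void * relaxed_fifo_pop( void ) {
	for ( ;; ) {
		// randomly test sub-queues until one or two are acquired
		struct subqueue * a = queues[ rand() % nqueues ];
		if ( ! trylock( a ) ) continue;
		struct subqueue * b = queues[ rand() % nqueues ];
		if ( b != a && trylock( b ) ) {
			// two queues acquired: keep the one whose front element is older
			if ( head_ts( b ) < head_ts( a ) ) { struct subqueue * t = a; a = b; b = t; }
			unlock( b );
		}
		void * thrd = pop( a );                   // dequeue from the chosen (possibly single) queue
		unlock( a );
		if ( thrd ) return thrd;                  // retry if the chosen queue was empty
	}
}
\end{lstlisting}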
The result is a queue that has both good scalability and sufficient fairness.
The lack of ownership ensures that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely any element will delay longer than any other element.
This guarantee contrasts with work-stealing, where a \gls{proc} with a long sub-queue results in unfairness for its \ats in comparison to a \gls{proc} with a short sub-queue.
This unfairness persists until a \gls{proc} runs out of work and steals.

An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been in the queue.
However, \glspl{proc} eagerly search for these older elements instead of focusing on specific queues, which negatively affects locality.
…

\section{Relaxed-FIFO++}
The inherent fairness and good performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
The problem case is workloads where the number of \ats is barely greater than the number of \procs.
In these situations, the wide sharding of the ready queue means most of its sub-queues are empty.
Furthermore, the non-empty sub-queues are unlikely to hold more than one item.
The consequence is that a random dequeue operation is likely to pick an empty sub-queue, resulting in an unbounded number of selections.
This state is generally unstable: each sub-queue is likely to frequently toggle between being empty and nonempty.
Indeed, when the number of \ats is \emph{equal} to the number of \procs, every pop operation is expected to empty a sub-queue and every push is expected to add to an empty sub-queue.
In the worst case, a check of the sub-queues sees all are empty or full.

As this is the most obvious challenge, it is worth addressing first.
The obvious solution is to supplement each sharded sub-queue with data that indicates if the queue is empty/nonempty to simplify finding nonempty queues, \ie ready \glspl{at}.
This sharded data can be organized in different forms, \eg a bitmask or a binary tree that tracks the nonempty sub-queues.
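As a rough sketch of the bitmask variant, assuming at most 64 sub-queues and the gcc/clang builtin @__builtin_ctzll@, the global tracking could look as follows; as discussed next, this kind of dense shared structure ends up becoming a contention point of its own.
\begin{lstlisting}[language=C]
// Hedged sketch of a global nonempty-queue bitmask (illustrative only).
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t nonempty;                 // bit i set when sub-queue i holds ready threads

static void mark_nonempty( unsigned i ) { atomic_fetch_or ( &nonempty, 1ULL << i ); }
static void mark_empty   ( unsigned i ) { atomic_fetch_and( &nonempty, ~( 1ULL << i ) ); }

// Return the index of some nonempty sub-queue, or -1 if the snapshot is empty.
static int find_nonempty( void ) {
	uint64_t snap = atomic_load( &nonempty );
	return snap ? __builtin_ctzll( snap ) : -1;   // a single instruction on most modern CPUs
}
\end{lstlisting}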
Specifically, many modern architectures have powerful bitmask manipulation instructions, and searching a binary tree has good Big-O complexity.
However, precisely tracking nonempty sub-queues is problematic.
The reason is that the sub-queues are initially sharded with a width presumably chosen to avoid contention.
However, tracking which ready queue is nonempty is only useful if the tracking data is dense, \ie denser than the sharded sub-queues.
Otherwise, it does not provide useful information because reading this new data structure risks being as costly as simply picking a sub-queue at random.
But if the tracking mechanism \emph{is} denser than the sharded sub-queues, then constant updates invariably create a new source of contention.
Early experiments with this approach showed that randomly picking, even with low success rates, is often faster than bit manipulations or tree walks.

The exception to this rule is using local tracking.
If each \proc locally keeps track of empty sub-queues, then this can be done with a very dense data structure without introducing a new source of contention.
However, the consequence of local tracking is that the information is incomplete.
Each \proc is only aware of the last state it saw about each sub-queue, so this information quickly becomes stale.
Even on systems with low \gls{hthrd} count, \eg 4 or 8, this approach can quickly lead to the local information being no better than the random pick.
This result is due in part to the cost of maintaining the information and its poor quality.

However, using a very low-cost but inaccurate approach for local tracking can still be beneficial.
If the local tracking is no more costly than a random pick, then \emph{any} improvement to the success rate, however low it is, leads to a performance benefit.
This suggests the following approach:

\subsection{Dynamic Entropy}\cite{xkcd:dynamicentropy}
The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
While this concept is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the sub-queues that were accessed.
Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, \eg some Linear Congruential Generators\cite{wiki:lcg} support running the algorithm backwards while offering good quality and performance.
This particular \glsxtrshort{prng} can be used as follows (a sketch follows the list):
\begin{itemize}
\item
Each \proc maintains two \glsxtrshort{prng} states, referred to as $F$ and $B$.
\item
When a \proc attempts to dequeue a \at, it picks a sub-queue by running $B$ backwards.
\item
When a \proc attempts to enqueue a \at, it runs $F$ forward, picking a sub-queue to enqueue to.
If the enqueue is successful, state $B$ is overwritten with the content of $F$.
\end{itemize}
The result is that each \proc tends to dequeue \ats that it has itself enqueued.
When most sub-queues are empty, this technique increases the odds of finding \ats at a very low cost, while also offering an improvement on locality in many cases.
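A minimal sketch of such a bidirectional generator, using Knuth's well-known MMIX LCG constants and computing the multiplier's inverse modulo $2^{64}$ with Newton's iteration, is shown below; the function names and the per-\proc structure are illustrative only.
\begin{lstlisting}[language=C]
// Sketch of a reversible 64-bit LCG used as the forward/backward PRNG pair.
// Constants are Knuth's MMIX multiplier and increment; any odd multiplier works.
#include <stdint.h>

#define A 6364136223846793005ULL
#define C 1442695040888963407ULL

// Modular inverse of the odd multiplier modulo 2^64, via Newton's iteration.
static uint64_t inv_A( void ) {
	uint64_t x = A;                     // correct to 3 bits for an odd multiplier
	for ( int i = 0; i < 5; i += 1 ) x *= 2 - A * x;   // correct bits double each step
	return x;
}

static uint64_t forward ( uint64_t s ) { return s * A + C; }
static uint64_t backward( uint64_t s ) { return ( s - C ) * inv_A(); }

struct proc_rng { uint64_t F, B; };     // per-proc forward and backward states

static unsigned pick_enqueue( struct proc_rng * r, unsigned nqueues ) {
	r->F = forward( r->F );             // on success the caller then copies r->B = r->F
	return r->F % nqueues;              // sub-queue to push to
}

static unsigned pick_dequeue( struct proc_rng * r, unsigned nqueues ) {
	unsigned idx = r->B % nqueues;      // replay the most recent enqueue target
	r->B = backward( r->B );            // step back for the next dequeue attempt
	return idx;
}
\end{lstlisting}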
Tests showed this approach performs better than relaxed-FIFO in many cases.
However, it is still not competitive with work-stealing algorithms.
The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become frequent.
Therefore, the attempt to modify the relaxed-FIFO algorithm to behave more like work stealing did not pan out.
The alternative is to do it the other way around.
…
\section{Work Stealing++}\label{helping}
To add stronger fairness guarantees to work stealing, a few changes are needed.
First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all sub-queues.
Therefore, the work-stealing algorithm must be prepended with some monitoring.
Before attempting to dequeue from a \proc's sub-queue, the \proc must make some effort to ensure other sub-queues are not being neglected.
To make this possible, \procs must be able to determine which \at has been on the ready queue the longest.
Second, the relaxed-FIFO approach needs timestamps for each \at to make this possible.

\begin{figure}
\centering
\input{base.pstex_t}
\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-queues offers the sharding, two per \proc.
Each \gls{proc} can access all of the sub-queues.
Each \at is timestamped when enqueued.}
\label{fig:base}
\end{figure}

Figure~\ref{fig:base} shows the algorithm structure.
This structure is similar to classic work-stealing except the sub-queues are placed in an array so \procs can access them in constant time.
Sharding width can be adjusted based on contention.
Note, as an optimization, the TS of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
This organization keeps the highly accessed front TSs directly in the array.
When a \proc attempts to dequeue a \at, it first picks a random remote sub-queue and compares its timestamp to the timestamps of its local sub-queue(s).
The oldest waiting \at is dequeued to provide global fairness.
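A hedged sketch of this dequeue policy is shown below; the names are illustrative and the actual \CFA implementation differs in detail (TryLocks, intrusive lists, and the two-per-\proc layout are elided).
\begin{lstlisting}[language=C]
// Sketch of the helping check in the base design.
#include <stdint.h>
#include <stdlib.h>

struct subqueue { uint64_t head_ts; /* TryLock, intrusive list of threads, ... */ };

extern struct subqueue queues[];                  // sharded array, two sub-queues per proc
extern unsigned nqueues;

// Compare the local sub-queue's head timestamp with one random remote sub-queue
// and dequeue from whichever holds the older (smaller timestamp) thread.
static struct subqueue * choose_queue( unsigned local ) {
	unsigned remote = rand() % nqueues;
	struct subqueue * l = &queues[ local ];
	struct subqueue * r = &queues[ remote ];
	return ( r->head_ts < l->head_ts ) ? r : l;   // help the queue with the oldest head
}
\end{lstlisting}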
However, this na\"ive implementation has performance problems.
First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency, negating the attempt to help.
These bursts can cause increased migrations and make this work-stealing approach slow down to the level of relaxed-FIFO.

\begin{figure}
\centering
\input{base_avg.pstex_t}
\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each sub-queue.}
\label{fig:base-ma}
\end{figure}

A simple solution to this problem is to use an exponential moving average\cite{wiki:ma} (MA) instead of a raw timestamp, as shown in Figure~\ref{fig:base-ma}.
Note that this is more complex because the \at at the head of a sub-queue is still waiting, so its wait time has not ended.
Therefore, the exponential moving average is an average of how long each dequeued \at has waited.
To compare sub-queues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
This new wait is averaged with the stored average.
To further limit \glslink{atmig}{migrations}, a bias can be added to a local sub-queue, where a remote sub-queue is helped only if its moving average is more than $X$ times the local sub-queue's average.
Tests for this approach indicate the choice of the weight for the moving average or the bias is not important, \ie weights and biases of similar \emph{magnitudes} have similar effects.
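The following sketch shows one plausible form of the moving-average bookkeeping and the biased helping test; @ALPHA@ and @BIAS@ are stand-ins for the weight and the $X$ factor, and, as noted above, neither needs precise tuning.
\begin{lstlisting}[language=C]
// Sketch of the moving-average bookkeeping and biased helping test (illustrative names).
#include <stdint.h>
#include <stdbool.h>

#define ALPHA 0.5
#define BIAS  4.0                       // help a remote queue only if it looks BIAS times older

struct subq_stats {
	volatile uint64_t head_ts;          // enqueue time of the thread at the head
	volatile double   avg_wait;         // exponential moving average of completed waits
};

// Fold the wait of a just-dequeued thread into the stored average.
static void update_avg( struct subq_stats * q, uint64_t now, uint64_t enq_ts ) {
	double wait = (double)( now - enq_ts );
	q->avg_wait = ALPHA * wait + (1.0 - ALPHA) * q->avg_wait;
}

// Best-case wait of the current head, averaged with the stored average.
static double projected_wait( struct subq_stats * q, uint64_t now ) {
	double head_wait = (double)( now - q->head_ts );
	return ALPHA * head_wait + (1.0 - ALPHA) * q->avg_wait;
}

// Help the remote sub-queue only when it looks significantly older than the local one.
static bool should_help( struct subq_stats * local, struct subq_stats * remote, uint64_t now ) {
	return projected_wait( remote, now ) > BIAS * projected_wait( local, now );
}
\end{lstlisting}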
With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term or no starvation.
The problem is that the constant polling, \ie reads, of remote sub-queues generally entails cache misses because the TSs are constantly being updated, \ie, writes.
To make things worse, remote sub-queues that are very active, \ie \ats are frequently enqueued and dequeued from them, lead to higher chances that polling will incur a cache miss.
Conversely, the active sub-queues do not benefit much from helping since starvation is already a non-issue.
This puts this algorithm in the awkward situation of paying for a largely unnecessary cost.
The good news is that this problem can be mitigated.

\subsection{Redundant Timestamps}\label{relaxedtimes}
The problem with polling remote sub-queues is that correctness is critical.
There must be a consensus among \procs on which sub-queues hold which \ats, as the \ats are in constant motion.
Furthermore, since timestamps are used for fairness, it is critical to have a consensus on which \at is the oldest.
However, when deciding if a remote sub-queue is worth polling, correctness is less of a problem.
Since the only requirement is that a sub-queue is eventually polled, some data staleness is acceptable.
This leads to a situation where stale timestamps are only problematic in some cases.
Furthermore, stale timestamps can be desirable since lower freshness requirements mean fewer cache invalidations.

Figure~\ref{fig:base-ts2} shows a solution with a second array containing a copy of the timestamps and average.
This copy is updated \emph{after} the sub-queue's critical sections using relaxed atomics.
\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
The result is that since there is no fencing, the writes can be buffered in the hardware and cause fewer cache invalidations.
…
The correctness argument is somewhat subtle.
The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
Therefore, it is acceptable if stale data makes queues appear older than they are, but appearing fresher can be a problem.
For the timestamps, this means it is acceptable to miss writes to the timestamp since they make the head \at look older.
For the moving average, as long as the operations are just atomic reads/writes, the average is guaranteed to yield a value that is between the oldest and newest values written.
Therefore, this unprotected read of the timestamp and average satisfies the limited correctness that is required.

With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on most machines.
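A minimal sketch of the redundant copy, assuming C11 relaxed atomics rather than the actual \CFA primitives, is given below; only the fields relevant to the helping decision are shown.
\begin{lstlisting}[language=C]
// Sketch of the redundant-timestamp copy using C11 relaxed atomics (illustrative names).
#include <stdatomic.h>
#include <stdint.h>

struct subq {                           // authoritative data, touched only inside the critical section
	uint64_t head_ts;
	double   avg_wait;
};

struct subq_copy {                      // stale-tolerant copy used only for the helping decision
	_Atomic uint64_t head_ts;
	_Atomic double   avg_wait;
};

// Publish the copy *after* leaving the critical section; without fencing the
// hardware may buffer these writes, causing fewer cache invalidations.
static void publish( const struct subq * q, struct subq_copy * c ) {
	atomic_store_explicit( &c->head_ts, q->head_ts, memory_order_relaxed );
	atomic_store_explicit( &c->avg_wait, q->avg_wait, memory_order_relaxed );
}

// Remote procs poll the copy; a missed update only makes the queue look older.
static uint64_t peek_head_ts( struct subq_copy * c ) {
	return atomic_load_explicit( &c->head_ts, memory_order_relaxed );
}
\end{lstlisting}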
The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
For example, on machines with a CPU containing multiple hyperthreads and cores, and multiple CPU sockets, cache misses can be satisfied from the caches on the same (local) CPU, or by a CPU on a different (remote) socket.
Cache misses satisfied by a remote CPU have significantly higher latency than from the local CPU.
However, these delays are not specific to systems with multiple CPUs.
…
In Figure~\ref{fig:cache-share}, all cache misses are either private to a CPU or shared with another CPU.
This means latency due to cache misses is fairly consistent.
In contrast, in Figure~\ref{fig:cache-noshare} misses in the L2 cache can be satisfied by either instance of the L3 cache.
However, the memory-access latency to the remote L3 is higher than the memory-access latency to the local L3.
The impact of these different designs on this algorithm is that scheduling only scales well on architectures with a wide L3 cache, similar to Figure~\ref{fig:cache-share}, and less well on architectures with many narrower L3 cache instances, similar to Figure~\ref{fig:cache-noshare}.
Hence, as the number of L3 instances grows, so too does the chance that the random helping causes significant cache latency.
The solution is for the scheduler to be aware of the cache topology.

\subsection{Per CPU Sharding}
…
Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
This work uses the cache topology information from Linux's @/sys/devices/system/cpu@ directory.
This leaves the challenge of matching \procs to cache structure, or more precisely identifying which sub-queues of the ready queue are local to which subcomponents of the cache structure.
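One plausible way to derive a CPU-to-L3 mapping from sysfs is sketched below; the @index3@ path and the @id@ attribute are assumptions about the running kernel, and a robust version would scan @index*/level@ instead of hard-coding the index.
\begin{lstlisting}[language=C]
// Hedged sketch: derive which L3 instance a CPU belongs to from Linux sysfs.
#include <stdio.h>

// Returns the L3 instance id for a CPU, or -1 if it cannot be determined.
static int llc_id_of( int cpu ) {
	char path[ 128 ];
	snprintf( path, sizeof( path ),
	          "/sys/devices/system/cpu/cpu%d/cache/index3/id", cpu );
	FILE * f = fopen( path, "r" );
	if ( ! f ) return -1;
	int id = -1;
	if ( fscanf( f, "%d", &id ) != 1 ) id = -1;
	fclose( f );
	return id;
}
\end{lstlisting}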
Once a match is generated, the helping algorithm is changed to add bias so that \procs more often help sub-queues local to the same cache substructure.\footnote{
Note that like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}

The simplest approach for mapping sub-queues to cache structure is to statically tie sub-queues to CPUs.
Instead of having each sub-queue local to a specific \proc, the system is initialized with sub-queues for each hardware hyperthread/core up front.
Then \procs dequeue and enqueue by first asking which CPU id they are executing on, to identify which sub-queues are the local ones.
\Glspl{proc} can get the CPU id from @sched_getcpu@ or @librseq@.

This approach solves the performance problems on systems with topologies with narrow L3 caches, similar to Figure~\ref{fig:cache-noshare}.
However, it can still cause some subtle fairness problems in systems with few \procs and many \glspl{hthrd}.
In this case, the large number of sub-queues and the bias against sub-queues tied to different cache substructures make it unlikely that every sub-queue is picked.
To make things worse, the small number of \procs means that few helping attempts are made.
This combination of low selection and few helping attempts allows a \at to become stranded on a sub-queue for a long time until it gets randomly helped.
On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can take multiple seconds for a \at to get dequeued from a remote queue.
Therefore, a more dynamic match of sub-queues to cache instances is needed.

\subsection{Topological Work Stealing}
\label{s:TopologicalWorkStealing}
Therefore, the approach used in the \CFA scheduler is to have per-\proc sub-queues, but have an explicit data structure to track which cache substructure each sub-queue is tied to.
This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
A key element however is that, like the timestamps for helping, reading the cache instance mapping only needs to give the correct result \emph{often enough}.
Therefore, the algorithm can be built as follows: before enqueueing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
Since sub-queues are tied to \procs, each \proc can then update the cache instance mapped to the local sub-queue(s).
To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes.
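Putting these pieces together, a hedged sketch of the per-\proc update is shown below; @cpu_to_llc@, @subqueue_llc@, and @note_topology@ are hypothetical names, not the \CFA runtime API.
\begin{lstlisting}[language=C]
// Hedged sketch of the lazy topology update.
#define _GNU_SOURCE
#include <sched.h>

extern int cpu_to_llc[];                // CPU id -> L3 instance, read once from sysfs
extern int subqueue_llc[];              // sub-queue -> last published cache instance

static void note_topology( int my_subqueue ) {
	int cpu = sched_getcpu();           // which CPU is this proc running on right now?
	int llc = cpu_to_llc[ cpu ];        // the cache instance that CPU belongs to
	if ( subqueue_llc[ my_subqueue ] != llc ) {
		subqueue_llc[ my_subqueue ] = llc;   // write only when the mapping changes
	}
}
\end{lstlisting}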
This scheduler is used in the remainder of the thesis for managing CPU execution, but additional scheduling is needed to handle long-term blocking and unblocking, such as I/O.

doc/theses/thierry_delisle_PhD/thesis/text/front.tex
This thesis analyses multiple scheduler systems, where each system attempts to fulfill the requirements for user-level threading.
The predominant technique for managing high levels of concurrency is sharding the ready queue with one queue per \gls{kthrd} and using some form of work stealing/sharing to dynamically rebalance workload shifts.
Preventing kernel blocking is accomplished by transforming kernel locks and I/O operations into user-level operations that do not block the kernel thread or spin up new kernel threads to manage the blocking.
Fairness is handled through preemption and/or ad-hoc solutions, which leads to coarse-grained fairness with some pathological cases.
doc/theses/thierry_delisle_PhD/thesis/text/intro.tex
This thesis analyses multiple scheduler systems, where each system attempts to fulfill the requirements for \gls{uthrding}.
The predominant technique for managing high levels of concurrency is sharding the ready queue with one queue per kernel-thread and using some form of work stealing/sharing to dynamically rebalance workload shifts.
Preventing kernel blocking is accomplished by transforming kernel locks and I/O operations into user-level operations that do not block the kernel thread or spin up new kernel threads to manage the blocking.
Fairness is handled through preemption and/or ad-hoc solutions, which leads to coarse-grained fairness with some pathological cases.
doc/theses/thierry_delisle_PhD/thesis/text/runtime.tex
\section{C Threading}

\Celeven introduced threading features, such as the @_Thread_local@ storage class, and the libraries @stdatomic.h@ and @threads.h@.
Interestingly, almost a decade after the \Celeven standard, the most recent versions of gcc, clang, and msvc do not support the \Celeven include @threads.h@, indicating no interest in the C11 concurrency approach (possibly because of the recent effort to add concurrency to \CC).
While the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}, as for \CC.
…
\section{M:N Threading}\label{prev:model}

Threading in \CFA is based on \Gls{uthrding}, where \ats are the representation of a unit of work. As such, \CFA programmers should expect these units to be fairly inexpensive, \ie programmers should be able to create a large number of \ats and switch among \ats liberally without many performance concerns.

The \CFA M:N threading model is implemented using many user-level threads mapped onto fewer \glspl{kthrd}.
The user-level threads have the same semantic meaning as \glspl{kthrd} in the 1:1 model: they represent an independent thread of execution with its own stack.
The difference is that user-level threads do not have a corresponding object in the kernel; they are handled by the runtime in user space and scheduled onto \glspl{kthrd}, referred to as \glspl{proc} in this document. \Glspl{proc} run a \at until it context-switches out; the \gls{proc} then chooses a different \at to run.

\section{Clusters}
…
In theory, this should not be a problem, even if the second \at waits, because the first \at is still ready to run and should be able to get CPU time to send the request.
With M:N threading, while the first \at is ready, the lone \gls{proc} \emph{cannot} run the first \at if it is blocked in the \glsxtrshort{io} operation of the second \at.
If this happens, the system is in a synchronization deadlock\footnote{In this example, the deadlock could be resolved if the server sends unprompted messages to the client.
However, this solution is neither general nor appropriate even in this simple case.}.
\end{quote}

Therefore, one of the objectives of this work is to introduce \emph{User-Level \glsxtrshort{io}}, which like \glslink{uthrding}{User-Level \emph{Threading}}, blocks \ats rather than \glspl{proc} when doing \glsxtrshort{io} operations.
This feature entails multiplexing the \glsxtrshort{io} operations of many \ats onto fewer \glspl{proc}.
The multiplexing requires a single \gls{proc} to execute multiple \glsxtrshort{io} operations in parallel.
…
All functions defined by this volume of POSIX.1-2017 shall be thread-safe, except that the following functions need not be thread-safe. ... (list of 70+ excluded functions)
\end{quote}
Only UNIX @man@ pages identify whether or not a library function is thread-safe, and hence, may block on a pthreads lock or system call; therefore, interoperability with UNIX library functions is a challenge for an M:N threading model.

Languages like Go and Java, which have strict interoperability with C\cite{wiki:jni,go:cgo}, can control operations in C by ``sandboxing'' them, \eg a blocking function may be delegated to a \gls{kthrd}. Sandboxing may help towards guaranteeing that the kind of deadlock mentioned above does not occur.
…
Therefore, it is possible that calls to an unknown library function can block a \gls{kthrd}, leading to deadlocks in \CFA's M:N threading model, which would not occur in a traditional 1:1 threading model.
Currently, all M:N thread systems interacting with UNIX without sandboxing suffer from this problem but manage to work very well in the majority of applications.
Therefore, a complete solution to this problem is outside the scope of this thesis.\footnote{\CFA does provide a pthreads emulation, so any library function using embedded pthreads locks is redirected to \CFA user-level locks. This capability further reduces the chances of blocking a \gls{kthrd}.}