Timestamp: Jul 4, 2022, 7:44:54 PM
Author: Thierry Delisle <tdelisle@…>
Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children: 1931bb01, 25404c7
Parents: 3e3bee2
Message: Merge Peter's changes for core
File: 1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/core.tex

r3e3bee2 → r9c6443e
\chapter{Scheduling Core}\label{core}

Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state.
For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers.
In short, the system is neither overloaded nor underloaded.

It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected.
As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers.
Therefore, flaws in scheduling the steady state tend to be pervasive in all states.

\section{Design Goals}
As with most of the design decisions behind \CFA, an important goal is to match the expectation of the programmer according to their execution mental-model.
To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental-model, the system also respects this model.

For threading, a simple and common execution mental-model is the ``Ideal multi-tasking CPU'':

…

Applied to threads, this model states that every ready \gls{thrd} immediately runs in parallel with all other ready \glspl{thrd}.
While a strict implementation of this model is not feasible, programmers still have expectations about scheduling that come from this model.

In general, the expectation at the center of this model is that ready \glspl{thrd} do not interfere with each other but simply share the hardware.
This assumption makes it easier to reason about threading because ready \glspl{thrd} can be thought of in isolation and the effect of the scheduler can be virtually ignored.
This expectation of \gls{thrd} independence means the scheduler is expected to offer two guarantees:
\begin{enumerate}
	\item A fairness guarantee: a \gls{thrd} that is ready to run is not prevented from doing so by another thread.
…
\end{enumerate}

It is important to note that these guarantees are expected only up to a point.
\Glspl{thrd} that are ready to run should not be prevented from doing so, but they still share the limited hardware resources.
Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.

Similar to the performance guarantee, the lack of interference among threads is only relevant up to a point.
Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention.
How much is an acceptable cost is obviously highly variable.
For this document, the performance experimentation attempts to show that the cost of scheduling is at worst equivalent to existing algorithms used in popular languages.
This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models.
Recall that the programmer expectation is that the impact of the scheduler can be ignored.
Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
More precisely, the scheduler should be:
\begin{itemize}
…
	\item Faster than other schedulers that have equal or better fairness.
\end{itemize}

\subsection{Fairness Goals}
For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.

\paragraph{True starvation freedom} means that as long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually, \ie, eventual progress.
In any running system, a \proc can stop dequeuing \ats if it starts running a \at that never blocks.
Without preemption, traditional work-stealing schedulers do not have starvation freedom in this case.
…

\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
An important performance factor in modern architectures is cache locality.
Waiting for data from lower levels of the cache hierarchy, or for data not present in the cache at all, can have a major impact on performance.
Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on.
It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.

For a scheduler, having good locality, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness.
Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as a consequence its cache lines, to a \gls{hthrd} that is currently available.
Note that this section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler, versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling.
External locality is a much more complicated subject and is discussed in the next section.

However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally.
Figure~\ref{fig:fair} shows a visual representation of this behaviour.
As mentioned, some unfairness is acceptable; therefore, it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.

\begin{figure}
	\centering
	\input{fairness.pstex_t}
	\vspace*{-10pt}
	\caption[Fairness vs Locality graph]{Rule-of-thumb Fairness vs Locality graph \smallskip\newline The importance of fairness and locality while a ready \gls{thrd} awaits running: as the time the ready \gls{thrd} waits (Ready Time) increases, the chance that its data is still in cache (Locality) decreases.
	At the same time, the need for fairness increases since other \glspl{thrd} may have the chance to run many times, breaking the fairness model.
	Since the actual values and curves of this graph can be highly variable, the graph is an idealized representation of the two opposing goals.}
	\label{fig:fair}
\end{figure}

\subsection{Performance Challenges}\label{pref:challenge}
While there exists a multitude of potential scheduling algorithms, they generally always have to contend with the same performance challenges.
Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.

\subsubsection{Scalability}
…

\section{Inspirations}
In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance.
The problem is a single point of contention when adding/removing \ats.
As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}.
The solution to this problem is to shard the ready-queue: create multiple \emph{subqueues} that together form the logical ready-queue, where the subqueues are accessed by multiple \glspl{hthrd} without interfering.

Before going into the design of \CFA's scheduler, it is relevant to discuss two sharding solutions that served as inspiration for the scheduler in this thesis.

\subsection{Work-Stealing}

As mentioned in \ref{existing:workstealing}, a popular sharding approach for the ready-queue is work-stealing.
In this approach, each \gls{proc} has its own local subqueue and \glspl{proc} only access each other's subqueue if they run out of work on their local ready-queue.
The interesting aspect of work stealing happens in the steady-state scheduling case, \ie all \glspl{proc} have work and no load balancing is needed.
In this case, work stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
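
The following minimal C sketch illustrates the pattern; the queue operations and the \texttt{random\_remote\_queue} helper are illustrative assumptions, not the interface of the \CFA runtime.
\begin{verbatim}
struct thread;                                     // a ready user-level thread
struct queue;                                      // FIFO subqueue of ready threads
extern struct thread * pop( struct queue * );      // NULL if the queue is empty
extern struct queue * local_queue( void );         // this proc's own subqueue
extern struct queue * random_remote_queue( void ); // some other proc's subqueue

// Work-stealing dequeue: stay on the local subqueue while it has work,
// and only touch another proc's subqueue when the local one runs dry.
struct thread * next_thread( void ) {
    struct thread * t = pop( local_queue() );      // common case: perfect locality
    if ( ! t ) t = pop( random_remote_queue() );   // steal only when out of work
    return t;                                      // NULL means retry
}
\end{verbatim}
In the steady state the steal path never executes, so each \gls{proc} touches only its own subqueue.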
…

\subsection{Relaxed-FIFO}
A different scheduling approach is to create a ``relaxed-FIFO'' queue, as in \todo{cite Trevor's paper}.
This approach forgoes any ownership between \gls{proc} and subqueue, and simply creates a pool of ready-queues from which \glspl{proc} pick.
Scheduling is performed as follows:
\begin{itemize}
\item
All subqueues are protected by TryLocks.
\item
Timestamps are added to each element of a subqueue.
\item
A \gls{proc} randomly tests ready queues until it has acquired one or two queues.
\item
If two queues are acquired, the older of the two \ats at the front of the acquired queues is dequeued.
\item
Otherwise, the \at from the single queue is dequeued.
\end{itemize}
The result is a queue that has both good scalability and sufficient fairness.
The lack of ownership ensures that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely any element will delay longer than any other element.
This guarantee contrasts with work-stealing, where a \gls{proc} with a long subqueue results in unfairness for its \ats in comparison to a \gls{proc} with a short subqueue.
This unfairness persists until a \gls{proc} runs out of work and steals.
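
The following C sketch shows this dequeue under simplifying assumptions: the subqueue operations are illustrative helpers, an empty subqueue is assumed to report a maximal front timestamp, and the single-queue fallback described above is elided.
\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

struct thread;
struct subqueue;                                  // TryLock-protected FIFO
extern bool try_lock( struct subqueue * );        // non-blocking acquire
extern void unlock( struct subqueue * );
extern uint64_t front_ts( struct subqueue * );    // front timestamp; maximal if empty
extern struct thread * pop( struct subqueue * );  // NULL if empty
extern struct subqueue * random_subqueue( void );

// Relaxed-FIFO dequeue: acquire two random subqueues and take the front
// element that has waited longest, approximating global FIFO order.
struct thread * dequeue( void ) {
    struct subqueue * a, * b;
    do { a = random_subqueue(); } while ( ! try_lock( a ) );
    do { b = random_subqueue(); } while ( b == a || ! try_lock( b ) );
    struct subqueue * oldest = front_ts( a ) <= front_ts( b ) ? a : b;
    struct thread * t = pop( oldest );            // NULL only if both are empty
    unlock( a );  unlock( b );
    return t;
}
\end{verbatim}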

An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been on the queue.
However, \glspl{proc} eagerly search for these older elements instead of focusing on specific queues, which negatively affects locality.

While this scheme has good fairness, its performance suffers.
…
The inherent fairness and good performance with many \ats make the relaxed-FIFO queue a good candidate to form the basis of a new scheduler.
The problem case is workloads where the number of \ats is barely greater than the number of \procs.
In these situations, the wide sharding of the ready queue means most of its subqueues are empty.
Furthermore, the non-empty subqueues are unlikely to hold more than one item.
The consequence is that a random dequeue operation is likely to pick an empty subqueue, resulting in an unbounded number of selections.
…

\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
The Relaxed-FIFO approach can be made to handle the case of mostly empty subqueues by tweaking the \glsxtrlong{prng}.
The \glsxtrshort{prng} state can be seen as containing a list of all the future subqueues that will be accessed.
While this concept is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
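
As a concrete example, each step of a xorshift generator is an invertible linear operation, so the whole step can be undone in reverse order; the following sketch assumes xorshift64, chosen here purely for illustration.
\begin{verbatim}
#include <stdint.h>

// Invert y = x ^ (x << s): the shift is nilpotent, so the inverse is
// y ^ (y << s) ^ (y << 2s) ^ ... until the shift falls off the word.
static uint64_t unshift_left ( uint64_t y, unsigned s ) {
    uint64_t x = 0;
    for ( unsigned i = 0; i < 64; i += s ) x ^= y << i;
    return x;
}
static uint64_t unshift_right( uint64_t y, unsigned s ) {
    uint64_t x = 0;
    for ( unsigned i = 0; i < 64; i += s ) x ^= y >> i;
    return x;
}

// Forward step: the returned value selects the next subqueue to access.
uint64_t prng_next( uint64_t * state ) {
    uint64_t x = *state;
    x ^= x << 13;  x ^= x >> 7;  x ^= x << 17;
    return *state = x;
}

// Backward step: undo the operations in reverse order, recovering from
// the current state alone the subqueues accessed in the past.
uint64_t prng_prev( uint64_t * state ) {
    uint64_t x = *state;
    x = unshift_left ( x, 17 );
    x = unshift_right( x, 7 );
    x = unshift_left ( x, 13 );
    return *state = x;
}
\end{verbatim}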
…
	\centering
	\input{base.pstex_t}
	\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of subqueues offers the sharding, two per \gls{proc}.
	Each \gls{proc} can access all of the subqueues.
	Each \at is timestamped when enqueued.}
	\label{fig:base}
\end{figure}
…
This structure is similar to classic work-stealing except the subqueues are placed in an array so \procs can access them in constant time.
Sharding width can be adjusted based on contention.
Note, as an optimization, the TS of a \at is stored in the \at in front of it, so the first TS is in the array and the last \at has no TS.
This organization keeps the highly accessed front TSs directly in the array.
When a \proc attempts to dequeue a \at, it first picks a random remote subqueue and compares its timestamp to the timestamps of its local subqueue(s).
The oldest waiting \at is dequeued to provide global fairness.
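
A sketch of this timestamp placement and the resulting dequeue comparison follows; all names are illustrative rather than the actual \CFA runtime code.
\begin{verbatim}
#include <stdint.h>

struct thread {
    struct thread * next;  // next thread in the subqueue
    uint64_t next_ts;      // timestamp of the thread behind this one
    // ... rest of the thread descriptor
};
struct subqueue {          // element of the shared subqueue array
    struct thread * front;
    uint64_t front_ts;     // the first timestamp lives in the array itself
};
extern struct subqueue * local_subqueue( void );
extern struct subqueue * random_remote_subqueue( void );
extern struct thread * try_pop( struct subqueue * );  // NULL on race or empty

// Dequeue with helping: take from the remote subqueue only if its front
// thread has waited longer (smaller timestamp) than the local front,
// providing global fairness.
struct thread * next_thread( void ) {
    struct subqueue * local  = local_subqueue();
    struct subqueue * remote = random_remote_subqueue();
    struct subqueue * pick =
        remote->front_ts < local->front_ts ? remote : local;
    return try_pop( pick );
}
\end{verbatim}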

However, this na\"ive implementation has performance problems.
First, it is necessary to have some damping effect on helping.
Random effects like cache misses and preemption can add spurious but short bursts of latency, negating the attempt to help.
These bursts can cause increased migrations and make this work-stealing approach slow down to the level of relaxed-FIFO.

\begin{figure}
…
With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term or no starvation.
The problem is that the constant polling, \ie reads, of remote subqueues generally entails a cache miss because the TSs are constantly being updated, \ie, writes.
To make things worse, remote subqueues that are very active, \ie \ats are frequently enqueued and dequeued from them, lead to higher chances that polling will incur a cache miss.
Conversely, the active subqueues do not benefit much from helping since starvation is already a non-issue.
This puts this algorithm in the awkward situation of paying a cost that is largely unnecessary.
…
	\centering
	\input{base_ts2.pstex_t}
	\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps.
	These timestamps are written to with relaxed atomics, so there is no order among concurrent memory accesses, leading to fewer cache invalidations.}
	\label{fig:base-ts2}
\end{figure}
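
A sketch of this redundant-timestamp array using C11 relaxed atomics follows; the sharding width and function names are illustrative.
\begin{verbatim}
#include <stdatomic.h>
#include <stdint.h>

#define NSUBQUEUES 64                        // illustrative sharding width
static _Atomic uint64_t ts_copy[NSUBQUEUES]; // dense copy of the front TSs

// Writer: after an enqueue/dequeue changes the front of subqueue i,
// publish its new timestamp; relaxed ordering is sufficient because a
// stale read only makes the subqueue look older, never fresher.
void publish_ts( unsigned i, uint64_t front_ts ) {
    atomic_store_explicit( &ts_copy[i], front_ts, memory_order_relaxed );
}

// Reader: poll the copy instead of the subqueue nodes themselves, so
// the frequently written queue data is not dragged into this proc's cache.
uint64_t poll_ts( unsigned i ) {
    return atomic_load_explicit( &ts_copy[i], memory_order_relaxed );
}
\end{verbatim}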
     
The correctness argument is somewhat subtle.
The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
Therefore, it is acceptable if stale data makes queues appear older than they really are, but appearing fresher can be a problem.
For the timestamps, this means missed writes to the timestamp are acceptable since they make the head \at look older.
For the moving average, as long as the operations are just atomic reads/writes, the average is guaranteed to yield a value that is between the oldest and newest values written.
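
For instance, an exponentially weighted moving average maintained with single-writer atomic reads and writes preserves this property; the weight of 1/16 below is an arbitrary illustrative choice.
\begin{verbatim}
#include <stdatomic.h>
#include <stdint.h>

// Single-writer update of a per-subqueue moving average of wait times.
// The store is a plain atomic write, so a concurrent reader sees either
// the old or the new average, both between the oldest and newest samples.
void update_avg( _Atomic uint64_t * avg, uint64_t sample ) {
    uint64_t old = atomic_load_explicit( avg, memory_order_relaxed );
    uint64_t new = old - old / 16 + sample / 16;   // EWMA with weight 1/16
    atomic_store_explicit( avg, new, memory_order_relaxed );
}
\end{verbatim}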
…

\subsection{Per CPU Sharding}
Building a scheduler that is cache aware poses two main challenges: discovering the cache topology and matching \procs to this cache structure.
Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
…

The simplest approach for mapping subqueues to cache structure is to statically tie subqueues to CPUs.
Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each hardware hyperthread/core up front.
Then \procs dequeue and enqueue by first asking which CPU id they are executing on, in order to identify which subqueues are the local ones.
\Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.
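
For example, using \texttt{sched\_getcpu} (a Linux extension), a \proc can locate its local subqueues as follows; the layout constant is a hypothetical name.
\begin{verbatim}
#define _GNU_SOURCE
#include <sched.h>                 // sched_getcpu, Linux-specific

#define SUBQUEUES_PER_CPU 2        // hypothetical layout constant

// Index of the first subqueue tied to the CPU this proc is running on.
// The result can be stale by the time it is used, because the proc may
// be preempted and migrated between the call and the queue access.
unsigned local_subqueue_base( void ) {
    int cpu = sched_getcpu();      // hardware thread id, 0 .. ncpus-1
    return (unsigned)cpu * SUBQUEUES_PER_CPU;
}
\end{verbatim}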
…

\subsection{Topological Work Stealing}
Therefore, the approach used in the \CFA scheduler is to have per-\proc subqueues, but to use an explicit data structure to track which cache substructure each subqueue is tied to.
This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.