Timestamp:
Jun 29, 2022, 4:15:33 PM (7 weeks ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
master, pthread-emulation
Children:
06bdba4, 1ed3fe7c
Parents:
d7af839
Message:

Updated intro

Location:
doc/theses/thierry_delisle_PhD/thesis
Files:
3 edited

  • doc/theses/thierry_delisle_PhD/thesis/local.bib

    rd7af839 radf03a6  
    701701  note = "[Online; accessed 12-April-2022]"
    702702}
     703@misc{wiki:binpak,
     704  author = "{Wikipedia contributors}",
     705  title = "Bin packing problem --- {W}ikipedia{,} The Free Encyclopedia",
     706  year = "2022",
     707  url = "https://en.wikipedia.org/wiki/Bin_packing_problem",
     708  note = "[Online; accessed 29-June-2022]"
     709}
    703710
    704711% RMR notes :
  • doc/theses/thierry_delisle_PhD/thesis/text/existing.tex

    rd7af839 radf03a6  
    1414
    1515\section{Naming Convention}
    16 Scheduling has been studied by various communities concentrating on different incarnation of the same problems. As a result, there are no standard naming conventions for scheduling that is respected across these communities. This document uses the term \newterm{\Gls{at}} to refer to the abstract objects being scheduled and the term \newterm{\Gls{proc}} to refer to the concrete objects executing these \glspl{at}.
 16Scheduling has been studied by various communities concentrating on different incarnations of the same problem. As a result, there is no standard naming convention for scheduling that is respected across these communities. This document uses the term \newterm{\Gls{at}} to refer to the abstract objects being scheduled and the term \newterm{\Gls{proc}} to refer to the concrete objects executing these \ats.
    1717
    1818\section{Static Scheduling}
    19 \newterm{Static schedulers} require \gls{at} dependencies and costs be explicitly and exhaustively specified prior to scheduling.
     19\newterm{Static schedulers} require \ats dependencies and costs be explicitly and exhaustively specified prior to scheduling.
    2020The scheduler then processes this input ahead of time and produces a \newterm{schedule} the system follows during execution.
    2121This approach is popular in real-time systems since the need for strong guarantees justifies the cost of determining and supplying this information.
     
    2525
    2626\section{Dynamic Scheduling}
    27 \newterm{Dynamic schedulers} determine \gls{at} dependencies and costs during scheduling, if at all.
    28 Hence, unlike static scheduling, \gls{at} dependencies are conditional and detected at runtime. This detection takes the form of observing new \gls{at}(s) in the system and determining dependencies from their behaviour, including suspending or halting a \gls{at} that dynamically detects unfulfilled dependencies.
    29 Furthermore, each \gls{at} has the responsibility of adding dependent \glspl{at} back into the system once dependencies are fulfilled.
    30 As a consequence, the scheduler often has an incomplete view of the system, seeing only \glspl{at} with no pending dependencies.
     27\newterm{Dynamic schedulers} determine \ats dependencies and costs during scheduling, if at all.
 28Hence, unlike static scheduling, \ats dependencies are conditional and detected at runtime. This detection takes the form of observing new \ats in the system and determining dependencies from their behaviour, including suspending or halting a \ats that dynamically detects unfulfilled dependencies.
     29Furthermore, each \ats has the responsibility of adding dependent \ats back into the system once dependencies are fulfilled.
     30As a consequence, the scheduler often has an incomplete view of the system, seeing only \ats with no pending dependencies.
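To make this hand-off concrete, the following C sketch (with invented names, not the \CFA runtime interface) keeps a count of unfulfilled dependencies per \ats; whichever predecessor fulfills the last dependency re-adds the dependent \ats, so the scheduler only ever sees \ats with no pending dependencies.
\begin{verbatim}
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical sketch, not the actual runtime interface: a task carries a count
   of unfulfilled dependencies and is handed back to the scheduler by whichever
   predecessor fulfills the last one. */
typedef struct task {
    atomic_int pending;              /* dependencies not yet fulfilled */
    const char * name;               /* for illustration only */
} task;

static void ready_enqueue( task * t ) {  /* stand-in for the real ready queue */
    printf( "%s is now ready\n", t->name );
}

/* Called by a predecessor when it fulfills one dependency of t; the scheduler
   therefore only ever sees tasks with zero pending dependencies. */
static void dependency_fulfilled( task * t ) {
    if ( atomic_fetch_sub( &t->pending, 1 ) == 1 ) ready_enqueue( t );
}

int main() {
    task t = { 2, "T" };             /* T waits on two predecessors */
    dependency_fulfilled( &t );      /* first dependency fulfilled: still waiting */
    dependency_fulfilled( &t );      /* second fulfilled: T re-enters the system */
}
\end{verbatim}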
    3131
    3232\subsection{Explicitly Informed Dynamic Schedulers}
    33 While dynamic schedulers may not have an exhaustive list of dependencies for a \gls{at}, some information may be available about each \gls{at}, \eg expected duration, required resources, relative importance, \etc.
    34 When available, a scheduler can then use this information to direct the scheduling decisions. \cit{Examples of schedulers with more information} 
     33While dynamic schedulers may not have an exhaustive list of dependencies for a \ats, some information may be available about each \ats, \eg expected duration, required resources, relative importance, \etc.
     34When available, a scheduler can then use this information to direct the scheduling decisions. \cit{Examples of schedulers with more information}
    3535However, most programmers do not determine or even \emph{predict} this information;
    36 at best, the scheduler has only some imprecise information provided by the programmer, \eg, indicating a \glspl{at} takes approximately 3--7 seconds to complete, rather than exactly 5 seconds.
    37 Providing this kind of information is a significant programmer burden especially if the information does not scale with the number of \glspl{at} and their complexity.
    38 For example, providing an exhaustive list of files read by 5 \glspl{at} is an easier requirement then providing an exhaustive list of memory addresses accessed by 10,000 independent \glspl{at}.
     36at best, the scheduler has only some imprecise information provided by the programmer, \eg, indicating a \ats takes approximately 3--7 seconds to complete, rather than exactly 5 seconds.
     37Providing this kind of information is a significant programmer burden especially if the information does not scale with the number of \ats and their complexity.
 38For example, providing an exhaustive list of files read by 5 \ats is an easier requirement than providing an exhaustive list of memory addresses accessed by 10,000 independent \ats.
    3939
    4040Since the goal of this thesis is to provide a scheduler as a replacement for \CFA's existing \emph{uninformed} scheduler, explicitly informed schedulers are less relevant to this project. Nevertheless, some strategies are worth mentioning.
    4141
    4242\subsubsection{Priority Scheduling}
    43 Common information used by schedulers to direct their algorithm is priorities. 
    44 Each \gls{at} is given a priority and higher-priority \glspl{at} are preferred to lower-priority ones.
    45 The simplest priority scheduling algorithm is to require that every \gls{at} have a distinct pre-established priority and always run the available \gls{at} with the highest priority.
    46 Asking programmers to provide an exhaustive set of unique priorities can be prohibitive when the system has a large number of \glspl{at}.
    47 It can therefore be desirable for schedulers to support \glspl{at} with identical priorities and/or automatically setting and adjusting priorities for \glspl{at}.
    48 Most common operating systems use some variant on priorities with overlaps and dynamic priority adjustments. 
     43Common information used by schedulers to direct their algorithm is priorities.
     44Each \ats is given a priority and higher-priority \ats are preferred to lower-priority ones.
     45The simplest priority scheduling algorithm is to require that every \ats have a distinct pre-established priority and always run the available \ats with the highest priority.
     46Asking programmers to provide an exhaustive set of unique priorities can be prohibitive when the system has a large number of \ats.
 47It can therefore be desirable for schedulers to support \ats with identical priorities and/or to automatically set and adjust priorities for \ats.
     48Most common operating systems use some variant on priorities with overlaps and dynamic priority adjustments.
    4949For example, Microsoft Windows uses a pair of priorities
    5050\cit{https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities,https://docs.microsoft.com/en-us/windows/win32/taskschd/taskschedulerschema-priority-settingstype-element}, one specified by users out of ten possible options and one adjusted by the system.
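As a minimal sketch of the simplest scheme described above, the following C fragment (names invented for illustration) always selects the runnable \ats with the highest fixed priority.
\begin{verbatim}
#include <stddef.h>
#include <stdio.h>

/* Minimal sketch of strict priority scheduling with pre-established priorities;
   not the implementation of any particular operating system or runtime. */
typedef struct { int priority; const char * name; int runnable; } task;

/* Always pick the runnable task with the highest priority (largest value here). */
static task * pick_next( task tasks[], size_t n ) {
    task * best = NULL;
    for ( size_t i = 0; i < n; i += 1 )
        if ( tasks[i].runnable && ( ! best || tasks[i].priority > best->priority ) )
            best = &tasks[i];
    return best;
}

int main() {
    task tasks[] = { { 1, "logger", 1 }, { 7, "ui", 1 }, { 3, "worker", 1 } };
    printf( "next: %s\n", pick_next( tasks, 3 )->name );  /* prints "next: ui" */
}
\end{verbatim}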
    5151
    5252\subsection{Uninformed and Self-Informed Dynamic Schedulers}
    53 Several scheduling algorithms do not require programmers to provide additional information on each \gls{at}, and instead make scheduling decisions based solely on internal state and/or information implicitly gathered by the scheduler.
     53Several scheduling algorithms do not require programmers to provide additional information on each \ats, and instead make scheduling decisions based solely on internal state and/or information implicitly gathered by the scheduler.
    5454
    5555
    5656\subsubsection{Feedback Scheduling}
    57 As mentioned, schedulers may also gather information about each \glspl{at} to direct their decisions.
    58 This design effectively moves the scheduler into the realm of \newterm{Control Theory}~\cite{wiki:controltheory}. 
    59 This information gathering does not generally involve programmers, and as such, does not increase programmer burden the same way explicitly provided information may. 
    60 However, some feedback schedulers do allow programmers to offer additional information on certain \glspl{at}, in order to direct scheduling decisions.
     57As mentioned, schedulers may also gather information about each \ats to direct their decisions.
     58This design effectively moves the scheduler into the realm of \newterm{Control Theory}~\cite{wiki:controltheory}.
     59This information gathering does not generally involve programmers, and as such, does not increase programmer burden the same way explicitly provided information may.
     60However, some feedback schedulers do allow programmers to offer additional information on certain \ats, in order to direct scheduling decisions.
6161The important distinction is whether or not the scheduler can function without this additional information.
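As one hypothetical example of such implicitly gathered information, the following C sketch applies a rule in the spirit of multilevel feedback queues: the scheduler observes whether a \ats consumed its entire time quantum and adjusts its priority accordingly, with no input from the programmer. The rule and names are illustrative, not drawn from any system discussed here.
\begin{verbatim}
#include <stdio.h>

/* Toy feedback rule in the spirit of multilevel feedback queues: the only input
   is behaviour observed by the scheduler itself. Illustrative only. */
enum { LEVELS = 3 };                           /* level 0 = highest priority */
typedef struct { const char * name; int level; } task;

/* Demote tasks that look CPU-bound, promote tasks that block early. */
static void observe( task * t, int used_full_quantum ) {
    if ( used_full_quantum && t->level < LEVELS - 1 ) t->level += 1;
    else if ( ! used_full_quantum && t->level > 0 ) t->level -= 1;
}

int main() {
    task t = { "worker", 1 };
    observe( &t, 1 );         /* used its whole quantum: demoted to level 2 */
    observe( &t, 0 );         /* blocked early next time: promoted back to 1 */
    printf( "%s ends at level %d\n", t.name, t.level );
}
\end{verbatim}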
    6262
    6363
    6464\section{Work Stealing}\label{existing:workstealing}
    65 One of the most popular scheduling algorithm in practice (see~\ref{existing:prod}) is work stealing. 
    66 This idea, introduce by \cite{DBLP:conf/fpca/BurtonS81}, effectively has each worker process its local \glspl{at} first, but allows the possibility for other workers to steal local \glspl{at} if they run out of \glspl{at}.
    67 \cite{DBLP:conf/focs/Blumofe94} introduced the more familiar incarnation of this, where each workers has a queue of \glspl{at} and workers without \glspl{at} steal \glspl{at} from random workers\footnote{The Burton and Sleep algorithm had trees of \glspl{at} and steal only among neighbours.}.
 65One of the most popular scheduling algorithms in practice (see~\ref{existing:prod}) is work stealing.
 66This idea, introduced by \cite{DBLP:conf/fpca/BurtonS81}, effectively has each worker process its local \ats first, but allows the possibility for other workers to steal local \ats if they run out of \ats.
 67\cite{DBLP:conf/focs/Blumofe94} introduced the more familiar incarnation of this, where each worker has a queue of \ats and workers without \ats steal \ats from random workers\footnote{The Burton and Sleep algorithm had trees of \ats and stole only among neighbours.}.
    6868Blumofe and Leiserson also prove worst case space and time requirements for well-structured computations.
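The core decision can be sketched in C as follows, in a deliberately simplified single-threaded form: pop from the local queue first and probe random victims only when it is empty. Real implementations need concurrent deques and careful synchronization; all names here are illustrative.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

/* Single-threaded sketch of the work-stealing decision only; a real scheduler
   needs concurrent queues and synchronization. */
enum { WORKERS = 4, CAP = 64 };
typedef struct { int items[CAP]; int count; } queue;
static queue local[WORKERS];

static int next_task( int self ) {
    if ( local[self].count > 0 )                        /* local work first */
        return local[self].items[--local[self].count];
    for ( int tries = 0; tries < WORKERS; tries += 1 ) {  /* otherwise try to steal */
        int victim = rand() % WORKERS;
        if ( victim != self && local[victim].count > 0 )
            return local[victim].items[--local[victim].count];
    }
    return -1;                                          /* no work found anywhere */
}

int main() {
    local[2].items[local[2].count++] = 42;              /* seed one task on worker 2 */
    /* prints 42 if a random probe hits worker 2, otherwise -1 */
    printf( "worker 0 ran task %d\n", next_task( 0 ) );
}
\end{verbatim}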
    6969
    7070Many variations of this algorithm have been proposed over the years~\cite{DBLP:journals/ijpp/YangH18}, both optimizations of existing implementations and approaches that account for new metrics.
    7171
    72 \paragraph{Granularity} A significant portion of early work-stealing research concentrated on \newterm{Implicit Parallelism}~\cite{wiki:implicitpar}. 
     72\paragraph{Granularity} A significant portion of early work-stealing research concentrated on \newterm{Implicit Parallelism}~\cite{wiki:implicitpar}.
    7373Since the system is responsible for splitting the work, granularity is a challenge that cannot be left to programmers, as opposed to \newterm{Explicit Parallelism}\cite{wiki:explicitpar} where the burden can be left to programmers.
    74 In general, fine granularity is better for load balancing and coarse granularity reduces communication overhead. 
    75 The best performance generally means finding a middle ground between the two. 
     74In general, fine granularity is better for load balancing and coarse granularity reduces communication overhead.
     75The best performance generally means finding a middle ground between the two.
    7676Several methods can be employed, but I believe these are less relevant for threads, which are generally explicit and more coarse grained.
    7777
    78 \paragraph{Task Placement} Since modern computers rely heavily on cache hierarchies\cit{Do I need a citation for this}, migrating \glspl{at} from one core to another can be .  \cite{DBLP:journals/tpds/SquillanteL93}
 78\paragraph{Task Placement} Since modern computers rely heavily on cache hierarchies\cit{Do I need a citation for this}, migrating \ats from one core to another can be costly~\cite{DBLP:journals/tpds/SquillanteL93}.
    7979
    8080\todo{The survey is not great on this subject}
     
    8383
    8484\subsection{Theoretical Results}
    85 There is also a large body of research on the theoretical aspects of work stealing. These evaluate, for example, the cost of migration~\cite{DBLP:conf/sigmetrics/SquillanteN91,DBLP:journals/pe/EagerLZ86}, how affinity affects performance~\cite{DBLP:journals/tpds/SquillanteL93,DBLP:journals/mst/AcarBB02,DBLP:journals/ipl/SuksompongLS16} and theoretical models for heterogeneous systems~\cite{DBLP:journals/jpdc/MirchandaneyTS90,DBLP:journals/mst/BenderR02,DBLP:conf/sigmetrics/GastG10}. 
    86 \cite{DBLP:journals/jacm/BlellochGM99} examines the space bounds of work stealing and \cite{DBLP:journals/siamcomp/BerenbrinkFG03} shows that for under-loaded systems, the scheduler completes its computations in finite time, \ie is \newterm{stable}. 
    87 Others show that work stealing is applicable to various scheduling contexts~\cite{DBLP:journals/mst/AroraBP01,DBLP:journals/anor/TchiboukdjianGT13,DBLP:conf/isaac/TchiboukdjianGTRB10,DBLP:conf/ppopp/AgrawalLS10,DBLP:conf/spaa/AgrawalFLSSU14}. 
    88 \cite{DBLP:conf/ipps/ColeR13} also studied how randomized work-stealing affects false sharing among \glspl{at}.
    89 
    90 However, as \cite{DBLP:journals/ijpp/YangH18} highlights, it is worth mentioning that this theoretical research has mainly focused on ``fully-strict'' computations, \ie workloads that can be fully represented with a direct acyclic graph. 
     85There is also a large body of research on the theoretical aspects of work stealing. These evaluate, for example, the cost of migration~\cite{DBLP:conf/sigmetrics/SquillanteN91,DBLP:journals/pe/EagerLZ86}, how affinity affects performance~\cite{DBLP:journals/tpds/SquillanteL93,DBLP:journals/mst/AcarBB02,DBLP:journals/ipl/SuksompongLS16} and theoretical models for heterogeneous systems~\cite{DBLP:journals/jpdc/MirchandaneyTS90,DBLP:journals/mst/BenderR02,DBLP:conf/sigmetrics/GastG10}.
     86\cite{DBLP:journals/jacm/BlellochGM99} examines the space bounds of work stealing and \cite{DBLP:journals/siamcomp/BerenbrinkFG03} shows that for under-loaded systems, the scheduler completes its computations in finite time, \ie is \newterm{stable}.
     87Others show that work stealing is applicable to various scheduling contexts~\cite{DBLP:journals/mst/AroraBP01,DBLP:journals/anor/TchiboukdjianGT13,DBLP:conf/isaac/TchiboukdjianGTRB10,DBLP:conf/ppopp/AgrawalLS10,DBLP:conf/spaa/AgrawalFLSSU14}.
     88\cite{DBLP:conf/ipps/ColeR13} also studied how randomized work-stealing affects false sharing among \ats.
     89
 90However, as \cite{DBLP:journals/ijpp/YangH18} highlights, it is worth mentioning that this theoretical research has mainly focused on ``fully-strict'' computations, \ie workloads that can be fully represented with a directed acyclic graph.
9191It is unclear how well these distributions represent workloads in real-world scenarios.
    9292
    9393\section{Preemption}
    94 One last aspect of scheduling is preemption since many schedulers rely on it for some of their guarantees. 
    95 Preemption is the idea of interrupting \glspl{at} that have been running too long, effectively injecting suspend points into the application.
    96 There are multiple techniques to achieve this effect but they all aim to guarantee that the suspend points in a \gls{at} are never further apart than some fixed duration.
    97 While this helps schedulers guarantee that no \glspl{at} unfairly monopolizes a worker, preemption can effectively be added to any scheduler.
     94One last aspect of scheduling is preemption since many schedulers rely on it for some of their guarantees.
     95Preemption is the idea of interrupting \ats that have been running too long, effectively injecting suspend points into the application.
     96There are multiple techniques to achieve this effect but they all aim to guarantee that the suspend points in a \ats are never further apart than some fixed duration.
     97While this helps schedulers guarantee that no \ats unfairly monopolizes a worker, preemption can effectively be added to any scheduler.
    9898Therefore, the only interesting aspect of preemption for the design of scheduling is whether or not to require it.
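One common technique, sketched below under the assumption of a POSIX environment, arms a periodic timer whose signal handler merely marks the running \ats; the \ats then yields at its next safe point, which bounds how far apart suspend points can be. This is only one possible mechanism, not the specific implementation of any runtime discussed in this document.
\begin{verbatim}
#define _XOPEN_SOURCE 700
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

/* Sketch of timer-driven preemption: the handler only raises a flag and the
   running task yields at its next safe point. Illustrative only. */
static volatile sig_atomic_t preempt_requested = 0;

static void on_tick( int sig ) { (void)sig; preempt_requested = 1; }

int main() {
    struct sigaction sa;
    memset( &sa, 0, sizeof(sa) );
    sa.sa_handler = on_tick;
    sigaction( SIGALRM, &sa, NULL );

    struct itimerval quantum = { { 0, 10000 }, { 0, 10000 } };  /* 10 ms period */
    setitimer( ITIMER_REAL, &quantum, NULL );

    for ( ;; ) {                      /* stand-in for a CPU-bound task */
        if ( preempt_requested ) {    /* safe point, reached at most ~10 ms apart */
            preempt_requested = 0;
            printf( "preempt: yield to the scheduler here\n" );
            break;                    /* a real runtime would context switch instead */
        }
    }
}
\end{verbatim}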
    9999
     
    106106
    107107\subsection{Operating System Schedulers}
    108 Operating System Schedulers tend to be fairly complex as they generally support some amount of real-time, aim to balance interactive and non-interactive \glspl{at} and support multiple users sharing hardware without requiring these users to cooperate.
    109 Here are more details on a few schedulers used in the common operating systems: Linux, FreeBSD, Microsoft Windows and Apple's OS X. 
 108Operating System Schedulers tend to be fairly complex as they generally support some amount of real-time scheduling, aim to balance interactive and non-interactive \ats, and support multiple users sharing hardware without requiring these users to cooperate.
 109Here are more details on a few schedulers used in common operating systems: Linux, FreeBSD, Microsoft Windows, and Apple's OS X.
    110110The information is less complete for operating systems with closed source.
    111111
    112112\paragraph{Linux's CFS}
    113 The default scheduler used by Linux, the Completely Fair Scheduler~\cite{MAN:linux/cfs,MAN:linux/cfs2}, is a feedback scheduler based on CPU time. 
    114 For each processor, it constructs a Red-Black tree of \glspl{at} waiting to run, ordering them by the amount of CPU time used.
    115 The \gls{at} that has used the least CPU time is scheduled.
     113The default scheduler used by Linux, the Completely Fair Scheduler~\cite{MAN:linux/cfs,MAN:linux/cfs2}, is a feedback scheduler based on CPU time.
     114For each processor, it constructs a Red-Black tree of \ats waiting to run, ordering them by the amount of CPU time used.
     115The \ats that has used the least CPU time is scheduled.
    116116It also supports the concept of \newterm{Nice values}, which are effectively multiplicative factors on the CPU time used.
    117 The ordering of \glspl{at} is also affected by a group based notion of fairness, where \glspl{at} belonging to groups having used less CPU time are preferred to \glspl{at} belonging to groups having used more CPU time.
 117The ordering of \ats is also affected by a group-based notion of fairness, where \ats belonging to groups having used less CPU time are preferred to \ats belonging to groups having used more CPU time.
    118118Linux achieves load-balancing by regularly monitoring the system state~\cite{MAN:linux/cfs/balancing} and using some heuristic on the load, currently CPU time used in the last millisecond plus a decayed version of the previous time slots~\cite{MAN:linux/cfs/pelt}.
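The following C sketch caricatures only the selection rule, under heavy simplification (a flat array instead of a red-black tree, no groups, and an invented nice-to-weight mapping): the \ats that has accumulated the least weighted CPU time runs next.
\begin{verbatim}
#include <stddef.h>
#include <stdio.h>

/* Caricature of the CFS selection rule; only the ordering idea (run whoever has
   used the least weighted CPU time) is taken from the description above. */
typedef struct { const char * name; double vruntime; double weight; } task;

static task * pick_next( task tasks[], size_t n ) {
    task * least = &tasks[0];
    for ( size_t i = 1; i < n; i += 1 )
        if ( tasks[i].vruntime < least->vruntime ) least = &tasks[i];
    return least;
}

/* Nice values act as multiplicative factors on the CPU time charged to a task. */
static void charge( task * t, double cpu_ms ) { t->vruntime += cpu_ms * t->weight; }

int main() {
    task tasks[] = { { "batch", 0.0, 1.5 }, { "editor", 0.0, 1.0 } };
    for ( int i = 0; i < 5; i += 1 ) {
        task * t = pick_next( tasks, 2 );
        printf( "run %s\n", t->name );
        charge( t, 10.0 );   /* the editor is charged less, so it runs more often */
    }
}
\end{verbatim}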
    119119
120120\cite{DBLP:conf/eurosys/LoziLFGQF16} shows that Linux's CFS also does work stealing to balance the workload of each processor, but the paper argues this aspect can be improved significantly.
    121 The issues highlighted stem from Linux's need to support fairness across \glspl{at} \emph{and} across users\footnote{Enforcing fairness across users means that given two users, one with a single \gls{at} and the other with one thousand \glspl{at}, the user with a single \gls{at} does not receive one thousandth of the CPU time.}, increasing the complexity.
    122 
    123 Linux also offers a FIFO scheduler, a real-time scheduler, which runs the highest-priority \gls{at}, and a round-robin scheduler, which is an extension of the FIFO-scheduler that adds fixed time slices. \cite{MAN:linux/sched}
     121The issues highlighted stem from Linux's need to support fairness across \ats \emph{and} across users\footnote{Enforcing fairness across users means that given two users, one with a single \ats and the other with one thousand \ats, the user with a single \ats does not receive one thousandth of the CPU time.}, increasing the complexity.
     122
 123Linux also offers a FIFO real-time scheduler, which runs the highest-priority \ats, and a round-robin scheduler, which extends the FIFO scheduler with fixed time slices~\cite{MAN:linux/sched}.
    124124
    125125\paragraph{FreeBSD}
     
    131131Microsoft's Operating System's Scheduler~\cite{MAN:windows/scheduler} is a feedback scheduler with priorities.
    132132It supports 32 levels of priorities, some of which are reserved for real-time and privileged applications.
    133 It schedules \glspl{at} based on the highest priorities (lowest number) and how much CPU time each \gls{at} has used.
     133It schedules \ats based on the highest priorities (lowest number) and how much CPU time each \ats has used.
134134The scheduler may also temporarily adjust priorities after certain events, like the completion of I/O requests.
    135135
     
    149149
    150150\subsection{User-Level Schedulers}
    151 By comparison, user level schedulers tend to be simpler, gathering fewer metrics and avoid complex notions of fairness. Part of the simplicity is due to the fact that all \glspl{at} have the same user, and therefore cooperation is both feasible and probable.
 151By comparison, user-level schedulers tend to be simpler, gathering fewer metrics and avoiding complex notions of fairness. Part of the simplicity is due to the fact that all \ats have the same user, and therefore cooperation is both feasible and probable.
    152152
    153153\paragraph{Go}\label{GoSafePoint}
     
    167167
    168168\paragraph{Erlang}
    169 Erlang is a functional language that supports concurrency in the form of processes: threads that share no data. 
     169Erlang is a functional language that supports concurrency in the form of processes: threads that share no data.
    170170It uses a kind of round-robin scheduler, with a mix of work sharing and stealing to achieve load balancing~\cite{:erlang}, where under-loaded workers steal from other workers, but overloaded workers also push work to other workers.
171171This migration logic is directed by monitoring logic that evaluates the load a few times per second.
     
    173173\paragraph{Intel\textregistered ~Threading Building Blocks}
    174174\newterm{Thread Building Blocks} (TBB) is Intel's task parallelism \cite{wiki:taskparallel} framework.
    175 It runs \newterm{jobs}, which are uninterruptable \glspl{at} that must always run to completion, on a pool of worker threads.
 175It runs \newterm{jobs}, which are uninterruptible \ats that must always run to completion, on a pool of worker threads.
    176176TBB's scheduler is a variation of randomized work-stealing that also supports higher-priority graph-like dependencies~\cite{MAN:tbb/scheduler}.
    177 It schedules \glspl{at} as follows (where \textit{t} is the last \gls{at} completed):
     177It schedules \ats as follows (where \textit{t} is the last \ats completed):
    178178\begin{displayquote}
    179179        \begin{enumerate}
     
    196196
    197197\paragraph{Grand Central Dispatch}
    198 An Apple\cit{Official GCD source} API that offers task parallelism~\cite{wiki:taskparallel}. 
     198An Apple\cit{Official GCD source} API that offers task parallelism~\cite{wiki:taskparallel}.
    199199Its distinctive aspect is multiple ``Dispatch Queues'', some of which are created by programmers.
    200 Each queue has its own local ordering guarantees, \eg \glspl{at} on queue $A$ are executed in \emph{FIFO} order.
     200Each queue has its own local ordering guarantees, \eg \ats on queue $A$ are executed in \emph{FIFO} order.
    201201
    202202\todo{load balancing and scheduling}
     
    207207
    208208\paragraph{LibFibre}
    209 LibFibre~\cite{DBLP:journals/pomacs/KarstenB20} is a light-weight user-level threading framework developed at the University of Waterloo. 
    210 Similarly to Go, it uses a variation of work stealing with a global queue that is higher priority than stealing. 
     209LibFibre~\cite{DBLP:journals/pomacs/KarstenB20} is a light-weight user-level threading framework developed at the University of Waterloo.
     210Similarly to Go, it uses a variation of work stealing with a global queue that is higher priority than stealing.
    211211Unlike Go, it does not have the high-priority next ``chair'' and does not use randomized work-stealing.
  • doc/theses/thierry_delisle_PhD/thesis/text/intro.tex

    rd7af839 radf03a6  
    11\chapter{Introduction}\label{intro}
    2 \todo{A proper intro}
     2\section{\CFA programming language}
    33
    4 Any shared system needs scheduling.
    5 Computer systems share multiple resources across many threads of execution, even on single user computers like a laptop.
     4The \CFA programming language~\cite{cfa:frontpage,cfa:typesystem} extends the C programming language by adding modern safety and productivity features, while maintaining backwards compatibility.
     5Among its productivity features, \CFA supports user-level threading~\cite{Delisle21} allowing programmers to write modern concurrent and parallel programs.
 6My previous master's thesis on concurrency in \CFA focused on features and interfaces.
     7This Ph.D.\ thesis focuses on performance, introducing \glsxtrshort{api} changes only when required by performance considerations.
     8Specifically, this work concentrates on scheduling and \glsxtrshort{io}.
 9Prior to this work, the \CFA runtime used a strict \glsxtrshort{fifo} \gls{rQ} and no \glsxtrshort{io} capabilities at the user-thread level\footnote{C supports \glsxtrshort{io} capabilities at the kernel level, which means blocking operations block kernel threads, whereas blocking user-level threads would be more appropriate for \CFA.}.
     10
     11As a research project, this work builds exclusively on newer versions of the Linux operating-system and gcc/clang compilers.
     12While \CFA is released, supporting older versions of Linux ($<$~Ubuntu 16.04) and gcc/clang compilers ($<$~gcc 6.0) is not a goal of this work.
     13
     14\section{Scheduling}
 15Computer systems share multiple resources across many threads of execution, even on single-user computers like laptops or smartphones.
     16On a computer system with multiple processors and work units, there exists the problem of mapping work onto processors in an efficient manner, called \newterm{scheduling}.
     17These systems are normally \newterm{open}, meaning new work arrives from an external source or is spawned from an existing work unit.
     18On a computer system, the scheduler takes a sequence of work requests in the form of threads and attempts to complete the work, subject to performance objectives, such as resource utilization.
     19A general-purpose dynamic-scheduler for an open system cannot anticipate future work requests, so its performance is rarely optimal.
 20Even with complete knowledge of arrival order and work, creating an optimal solution still effectively requires solving the bin-packing problem~\cite{wiki:binpak}.
     21However, optimal solutions are often not required.
 22Schedulers do produce excellent solutions, without needing optimality, by taking advantage of regularities in work patterns.
     23
624Scheduling occurs at discrete points when there are transitions in a system.
    725For example, a thread cycles through the following transitions during its execution.
     
    2139\item
2240normal completion or error, \ie segmentation fault (running $\rightarrow$ halted)
     41\item
     42scheduler assigns a thread to a resource (ready $\rightarrow$ running)
    2343\end{itemize}
    2444Key to scheduling is that a thread cannot bypass the ``ready'' state during a transition so the scheduler maintains complete control of the system.
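For concreteness, these transitions can be viewed as a small state machine; the C sketch below is purely illustrative (the names are not those of the \CFA runtime) and encodes the property that a thread reaches the running state only from the ready state.
\begin{verbatim}
#include <stdio.h>

/* Illustrative state machine for the transitions listed above; the key property
   is that every path to RUNNING passes through READY, so the scheduler observes
   every transition. */
typedef enum { READY, RUNNING, BLOCKED, HALTED } thread_state;

static const char * name( thread_state s ) {
    static const char * names[] = { "ready", "running", "blocked", "halted" };
    return names[s];
}

static thread_state unblock( thread_state s ) { return s == BLOCKED ? READY : s; }
static thread_state schedule( thread_state s ) { return s == READY ? RUNNING : s; }

int main() {
    thread_state s = BLOCKED;
    s = unblock( s );    /* blocked -> ready: cannot skip directly to running */
    s = schedule( s );   /* ready -> running: the scheduler assigns a processor */
    printf( "%s\n", name( s ) );
}
\end{verbatim}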
    2545
    26 In detail, in a computer system with multiple processors and work units, there exists the problem of mapping work onto processors in an efficient manner, called \newterm{scheduling}.
    27 These systems are normally \newterm{open}, meaning new work arrives from an external source or spawned from an existing work unit.
    28 Scheduling is a zero-sum game as computer processors normally have a fixed, maximum number of cycles per unit time.
    29 (Frequency scaling and turbot boost add a degree of complexity that can be ignored in this discussion without loss of generality.)
    30 
    31 On a computer system, the scheduler takes a sequence of work requests in the form of threads and attempts to complete the work, subject to performance objectives, such as resource utilization.
    32 A general-purpose dynamic-scheduler for an open system cannot anticipate future work requests, so its performance is rarely optimal.
    33 Even with complete knowledge of arrive order and work, an optimal solution is NP (bin packing).
    34 However, schedulers do take advantage of regularities in work patterns to produce excellent solutions.
    35 Nevertheless, schedulers are a series of compromises, occasionally with some static or dynamic tuning parameters to enhance specific patterns.
    36 
    3746When the workload exceeds the capacity of the processors, \ie work cannot be executed immediately, it is placed on a queue for subsequent service, called a \newterm{ready queue}.
    38 (In real-life, a work unit (person) can \newterm{balk}, and leave the system rather than wait.)
    3947Ready queues organize threads for scheduling, which indirectly organizes the work to be performed.
    40 The structure of ready queues forms a spectrum.
    41 At one end is the single-queue multi-server (SIMS) and at the other end is the multi-queue multi-server (MOMS).
     48The structure of ready queues can take many different forms.
 49Simple examples include the single-queue multi-server (SQMS) and the multi-queue multi-server (MQMS).
    4250\begin{center}
    4351\begin{tabular}{l|l}
    44 \multicolumn{1}{c|}{\textbf{SIMS}} & \multicolumn{1}{c}{\textbf{MOMS}} \\
     52\multicolumn{1}{c|}{\textbf{SQMS}} & \multicolumn{1}{c}{\textbf{MQMS}} \\
    4553\hline
    4654\raisebox{0.5\totalheight}{\input{SQMS.pstex_t}} & \input{MQMSG.pstex_t}
    4755\end{tabular}
    4856\end{center}
    49 (While \newterm{pipeline} organizations also exist, \ie chaining schedulers together, they do not provide an advantage in this context.)
 57Beyond these two schedulers lies a host of options, \eg adding an optional global, shared queue to MQMS.
    5058
    5159The three major optimization criteria for a scheduler are:
     
    6068
    6169\noindent
    62 Essentially, all multi-processor computers have non-uniform memory access (NUMB), with one or more quantized steps to access data at different levels (steps) in the memory hierarchy.
    63 When a system has a large number of independently executing threads, affinity is impossible because of \newterm{thread churn}.
     70Essentially, all multi-processor computers have non-uniform memory access (NUMA), with one or more quantized steps to access data at different levels in the memory hierarchy.
     71When a system has a large number of independently executing threads, affinity becomes difficult because of \newterm{thread churn}.
6472That is, threads must be scheduled on multiple processors to obtain high processor utilization because the number of threads $\ggg$ processors.
    6573
    6674\item
    67 \newterm{contention}: safe access of shared objects by multiple processors requires mutual exclusion in the form of locking\footnote{
    68 Lock-free data-structures is still locking because all forms of locking invoke delays.}
     75\newterm{contention}: safe access of shared objects by multiple processors requires mutual exclusion in some form, generally locking\footnote{
 76Lock-free data-structures do not involve locking but incur similar costs to achieve mutual exclusion.}
    6977
    7078\noindent
    71 Locking cost and latency increases significantly with the number of processors accessing a shared object.
 79Mutual exclusion cost and latency increase significantly with the number of processors accessing a shared object.
    7280\end{enumerate}
    7381
    74 SIMS has perfect load-balancing but poor affinity and high contention by the processors, because of the single queue.
    75 MOMS has poor load-balancing but perfect affinity and no contention, because each processor has its own queue.
    76 Between these two schedulers are a host of options, \ie adding an optional global, shared ready-queue to MOMS.
     82Nevertheless, schedulers are a series of compromises, occasionally with some static or dynamic tuning parameters to enhance specific patterns.
 83Scheduling is a zero-sum game as computer processors normally have a fixed, maximum number of cycles per unit time\footnote{Frequency scaling and turbo boost add a degree of complexity that can be ignored in this discussion without loss of generality.}.
     84SQMS has perfect load-balancing but poor affinity and high contention by the processors, because of the single queue.
     85MQMS has poor load-balancing but perfect affinity and no contention, because each processor has its own queue.
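The structural difference can be made concrete with a small C sketch of the two layouts, ignoring synchronization and using invented type names: SQMS shares one ready queue among all processors, while MQMS gives each processor its own.
\begin{verbatim}
#include <stddef.h>

/* Illustrative data layouts only; locking, sizing and memory placement are omitted. */
enum { NPROC = 8 };
typedef struct { int * items; size_t head, tail; } ready_queue;

/* SQMS: one shared queue, so load balance is perfect but every processor
   contends on the same queue and affinity is lost. */
typedef struct { ready_queue shared; } sqms_scheduler;

/* MQMS: one queue per processor, so there is no contention and affinity is
   perfect, but load can become unbalanced across the queues. */
typedef struct { ready_queue per_proc[NPROC]; } mqms_scheduler;

int main() { sqms_scheduler a; mqms_scheduler b; (void)a; (void)b; }
\end{verbatim}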
     86
    7787Significant research effort has also looked at load sharing/stealing among queues, when a ready queue is too long or short, respectively.
    7888These approaches attempt to perform better load-balancing at the cost of affinity and contention.
7989Load sharing/stealing schedulers attempt to push/pull work units to/from other ready queues.
    8090
    81 Note, a computer system is often lightly loaded (or has no load);
    82 any scheduler can handle this situation, \ie all schedulers are equal in this context.
    83 As the workload increases, there is a point where some schedulers begin to perform better than others based on the above criteria.
    84 A poorer scheduler might saturate for some reason and not be able to assign available work to idle processors, \ie processors are spinning when work is available.
 91Note, however, that while any change comes at a cost (hence the zero-sum game), not all compromises are equivalent.
 92Some schedulers perform very well only in specific workload scenarios, while others offer acceptable performance across a wider range of workloads.
 93Since \CFA attempts to improve the safety and productivity of C, the scheduler presented in this thesis attempts to achieve the same goals.
 94More specifically, safety and productivity for scheduling mean supporting a wide range of workloads so that programmers can rely on progress guarantees (safety) and more easily achieve acceptable performance (productivity).
    8595
    8696
    87 \section{\CFA programming language}
    88 
    89 \todo{Brief discussion on \CFA features used in the thesis.}
    90 
    91 The \CFA programming language~\cite{cfa:frontpage,cfa:typesystem} extends the C programming language by adding modern safety and productivity features, while maintaining backwards compatibility. Among its productivity features, \CFA supports user-level threading~\cite{Delisle21} allowing programmers to write modern concurrent and parallel programs.
    92 My previous master's thesis on concurrent in \CFA focused on features and interfaces.
    93 This Ph.D.\ thesis focuses on performance, introducing \glsxtrshort{api} changes only when required by performance considerations. Specifically, this work concentrates on scheduling and \glsxtrshort{io}. Prior to this work, the \CFA runtime used a strict \glsxtrshort{fifo} \gls{rQ} and  no non-blocking I/O capabilities at the user-thread level.
    94 
    95 As a research project, this work builds exclusively on newer versions of the Linux operating-system and gcc/clang compilers. While \CFA is released, supporting older versions of Linux ($<$~Ubuntu 16.04) and gcc/clang compilers ($<$~gcc 6.0) is not a goal of this work.
    96 
    97 
    98 \section{Contributions}
    99 \label{s:Contributions}
    100 
     97\section{Contributions}\label{s:Contributions}
    10198This work provides the following contributions in the area of user-level scheduling in an advanced programming-language runtime-system:
    10299\begin{enumerate}[leftmargin=*]
    103100\item
    104 Guarantee no thread or set of threads can conspire to prevent other threads from executing.
    105 \begin{itemize}
     101A scalable scheduling algorithm that offers progress guarantees.
    106102\item
    107 There must exist some form of preemption to prevent CPU-bound threads from gaining exclusive access to one or more processors.
     103An algorithm for load-balancing and idle sleep of processors, including NUMA awareness.
    108104\item
    109 There must be a scheduler guarantee that threads are executed after CPU-bound threads are preempted.
    110 Otherwise, CPU-bound threads can immediately rescheduled, negating the preemption.
    111 \end{itemize}
    112 Hence, the runtime/scheduler provides a notion of fairness that is impossible for a programmer to violate.
    113 \item
    114 Provide comprehensive scheduling across all forms of preemption and blocking, where threads move from the running to ready or blocked states.
    115 
    116 Once a thread stops running, the processor executing it must be rescheduled, if possible.
    117 Knowing what threads are in the ready state for scheduling is difficult because blocking operations complete asynchronous and with poor notification.
    118 \item
    119 Efficiently deal with unbalanced workloads among processors.
    120 
    121 Simpler to
    122 \item
    123 Efficiently deal with idle processors when there is less work than the available computing capacity.
     105Support for user-level \glsxtrshort{io} capabilities based on Linux's \texttt{io\_uring}.
    124106\end{enumerate}