Changeset c5af4f9 for doc/theses/thierry_delisle_PhD/thesis/text
- Timestamp: Feb 25, 2022, 12:28:44 PM
- Branches: ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
- Children: 41870a5
- Parents: 08ed947
- Location: doc/theses/thierry_delisle_PhD/thesis/text
- Files: 2 edited
Legend: unmodified context lines are shown unprefixed; added lines are prefixed with "+"; removed lines are prefixed with "-".
doc/theses/thierry_delisle_PhD/thesis/text/existing.tex
(r08ed947 → rc5af4f9)

 \chapter{Previous Work}\label{existing}
-Scheduling is a topic with a very long history, predating its use in computer science. As such, early work in computed science was inspired from other fields and focused principally on solving scheduling upfront rather that as the system is running.
+Scheduling is the process of assigning resources to incoming requests.
+A very common form of this is assigning available workers to work-requests.
+The need for scheduling is pervasive in Computer Science, \eg operating systems and hypervisors schedule available CPUs and NICs schedule available bandwidth, but it is also common in other fields.
+For example, an assembly line is a form of scheduling where parts needing assembly are assigned to line workers.
+
+In all these cases, the choice of a scheduling algorithm generally depends first and foremost on how much information is available to the scheduler.
+Workloads that are well-known, consistent, and homogeneous can benefit from a scheduler optimized to use this information, while ill-defined, inconsistent, heterogeneous workloads require more general algorithms.
+A secondary aspect is how much information can be gathered by the scheduler versus how much must be given as part of the input.
+There is therefore a spectrum of scheduling algorithms, going from static schedulers that are well informed from the start, to schedulers that gather most of the information they need, to schedulers that can only rely on very limited information.
+Note that this description includes both information about each request, \eg time to complete or resources needed, and information about the relationships between requests, \eg whether some requests must be completed before others start.
+
+Scheduling physical resources, for example in assembly lines, is generally amenable to well-informed scheduling, since information can be gathered much faster than the physical resources can be assigned and workloads are likely to stay stable for long periods of time.
+When a faster pace is needed and changes are more frequent, gathering information on workloads, up front or live, can become much more limiting, and more general schedulers are needed.

 \section{Naming Convention}
 …
 \section{Static Scheduling}
-Static schedulers require that programmers explicitly and exhaustively specify dependencies among tasks in order to schedule them. The scheduler then processes this input ahead of time and producess a \newterm{schedule} to which the system can later adhere. An example application for these schedulers
+Static schedulers require that tasks have their dependencies and costs explicitly and exhaustively specified prior to scheduling.
+The scheduler then processes this input ahead of time and produces a \newterm{schedule} to which the system can later adhere.
+This approach is generally popular in real-time systems, since the need for strong guarantees justifies the cost of supplying this information.
 In general, static schedulers are less relevant to this project since they require input from the programmers that \CFA does not have as part of its concurrency semantics.
-\todo{Rate-monotonic scheduling}
+Specifying this information explicitly can add a significant burden on the programmers and reduces flexibility; for this reason, the \CFA scheduler does not require this information.

 \section{Dynamic Scheduling}
-It may be difficult to fulfill the requirements of static scheduler if dependencies are beconditionnal. In this case, it may be preferable to detect dependencies at runtime.
-This detection effectively takes the form of halting or suspending a task with unfulfilled dependencies and adding one or more new task(s) to the system. The new task(s) have the responsability of adding the dependent task back in the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only tasks we no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.
+It may be difficult to fulfill the requirements of a static scheduler if dependencies are conditional. In this case, it may be preferable to detect dependencies at runtime. This detection effectively takes the form of halting or suspending a task with unfulfilled dependencies and adding one or more new task(s) to the system. The new task(s) have the responsibility of adding the dependent task back into the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only tasks with no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.

 \subsection{Explicitly Informed Dynamic Schedulers}
 …
 \subsubsection{Feedback Scheduling}
 As mentioned, schedulers may also gather information about each task to direct their decisions. This design effectively moves the scheduler, to some extent, into the realm of \newterm{Control Theory}\cite{wiki:controltheory}. This gathering does not generally involve programmers and as such does not increase programmer burden the way explicitly provided information may. However, some feedback schedulers do offer programmers the option to provide additional information on certain tasks, in order to direct scheduling decisions. The important distinction is whether or not the scheduler can function without this additional information.
-
-Feedback scheduler
doc/theses/thierry_delisle_PhD/thesis/text/io.tex
(r08ed947 → rc5af4f9)

 \chapter{User Level \io}
 As mentioned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations.
-Different operating systems offer various forms of asynchronous operations and as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system.
+Different operating systems offer various forms of asynchronous operations and, as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system.

 \section{Kernel Interface}
 …
 Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continuously
 \footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant CPU usage.}.
+Note that once an operation completes, there is nothing that ties it to the @io_uring@ instance that handled it.
+There is nothing preventing a new operation with, for example, the same file descriptors from being submitted to a different @io_uring@ instance.

 A complicating aspect of submission is @io_uring@'s support for chains of operations, where the completion of an operation triggers the submission of the next operation on the link.
 …
 To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring.}
 , greatly simplifying both allocation and submission.
-In this design, allocation and submission form a ring partitionned ring buffer as shown in Figure~\ref{fig:pring}.
+In this design, allocation and submission form a partitioned ring buffer as shown in Figure~\ref{fig:pring}.
 Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to do the system call.
-Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of threads\glspl{thrd}, etc.
+Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of \glspl{thrd}, etc.

 \begin{figure}