Timestamp:
Feb 25, 2022, 12:28:44 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
Children:
41870a5
Parents:
08ed947
Message:

Small fixes to io.tex and reworked start of existing work.

Location:
doc/theses/thierry_delisle_PhD/thesis/text
Files:
2 edited

Legend:

Lines prefixed with - were removed, lines prefixed with + were added; other lines are unchanged context.
  • doc/theses/thierry_delisle_PhD/thesis/text/existing.tex

r08ed947 → rc5af4f9

  \chapter{Previous Work}\label{existing}
- Scheduling is a topic with a very long history, predating its use in computer science. As such, early work in computed science was inspired from other fields and focused principally on solving scheduling upfront rather that as the system is running.
+ Scheduling is the process of assigning resources to incoming requests.
+ A very common form of this is assigning available workers to work-requests.
+ The need for scheduling is very common in Computer Science, \eg Operating Systems and Hypervisors schedule available CPUs and NICs schedule available bandwidth, but it is also common in other fields.
+ For example, assembly lines are an example of scheduling where parts needing assembly are assigned to line workers.
+
+ In all these cases, the choice of a scheduling algorithm generally depends first and foremost on how much information is available to the scheduler.
+ Workloads that are well-known, consistent and homogeneous can benefit from a scheduler optimized to use this information, while ill-defined, inconsistent, heterogeneous workloads require general algorithms.
+ A secondary aspect is how much information can be gathered versus how much information must be given as part of the input.
+ There is therefore a spectrum of scheduling algorithms, going from static schedulers that are well informed from the start, to schedulers that gather most of the information needed, to schedulers that can only rely on very limited information.
+ Note that this description includes both information about each request, \eg time to complete or resources needed, and information about the relationships between requests, \eg whether or not some request must be completed before another request starts.
+
+ Scheduling physical resources, for example in assembly lines, is generally amenable to very well informed scheduling, since information can be gathered much faster than the physical resources can be assigned and workloads are likely to stay stable for long periods of time.
+ When a faster pace is needed and changes are much more frequent, gathering information on workloads, up-front or live, can become much more limiting and more general schedulers are needed.

  \section{Naming Convention}


  \section{Static Scheduling}
- Static schedulers require that programmers explicitly and exhaustively specify dependencies among tasks in order to schedule them. The scheduler then processes this input ahead of time and producess a \newterm{schedule} to which the system can later adhere. An example application for these schedulers
-
+ Static schedulers require that tasks have their dependencies and costs explicitly and exhaustively specified prior to scheduling.
+ The scheduler then processes this input ahead of time and produces a \newterm{schedule} to which the system can later adhere.
+ This approach is generally popular in real-time systems, since the need for strong guarantees justifies the cost of supplying this information.
  In general, static schedulers are less relevant to this project since they require input from the programmers that \CFA does not have as part of its concurrency semantics.
- \todo{Rate-monotonic scheduling}
+ Specifying this information explicitly adds a significant burden on programmers and reduces flexibility; for this reason, the \CFA scheduler does not require this information.
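
The static approach described in the added text above (and the rate-monotonic policy named in the removed \todo note) can be sketched briefly: priorities are computed once, before the system runs, from timing information the programmer supplies up front. The task set, periods and worst-case execution times below are invented purely for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* A task description as a static scheduler expects it: all timing
       information is supplied before the system runs. */
    struct task {
        const char * name;
        unsigned period_ms;   /* release period, given up front */
        unsigned wcet_ms;     /* worst-case execution time, given up front */
    };

    static int by_period( const void * a, const void * b ) {
        const struct task * x = a, * y = b;
        return (int)x->period_ms - (int)y->period_ms;
    }

    int main() {
        struct task set[] = {
            { "sensor",   10, 2 },
            { "control",  20, 5 },
            { "logging", 100, 9 },
        };
        size_t n = sizeof(set) / sizeof(set[0]);

        /* The entire "schedule" is decided ahead of time: rate-monotonic
           assignment gives shorter periods higher priority. */
        qsort( set, n, sizeof(set[0]), by_period );

        double util = 0.0;
        for ( size_t i = 0; i < n; i++ ) {
            printf( "priority %zu: %s (period %u ms, wcet %u ms)\n",
                    i, set[i].name, set[i].period_ms, set[i].wcet_ms );
            util += (double)set[i].wcet_ms / set[i].period_ms;
        }
        printf( "total utilization: %.2f\n", util );
    }

Nothing in this sketch adapts at runtime; if the supplied costs are wrong or a task only sometimes exists, the precomputed priorities cannot react, which is the situation the dynamic schedulers below address.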


  \section{Dynamic Scheduling}
- It may be difficult to fulfill the requirements of static scheduler if dependencies are be conditionnal. In this case, it may be preferable to detect dependencies at runtime. This detection effectively takes the form of halting or suspending a task with unfulfilled dependencies and adding one or more new task(s) to the system. The new task(s) have the responsability of adding the dependent task back in the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only tasks we no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.
+ It may be difficult to fulfill the requirements of a static scheduler if dependencies are conditional. In this case, it may be preferable to detect dependencies at runtime. This detection effectively takes the form of halting or suspending a task with unfulfilled dependencies and adding one or more new task(s) to the system. The new task(s) have the responsibility of adding the dependent task back into the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only tasks with no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.
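
A minimal sketch of the runtime detection just described (not the \CFA runtime's actual mechanism, and with invented names): each task carries a count of unfulfilled dependencies, and a completing task is responsible for releasing its dependents, so the scheduler only ever sees tasks with no pending dependencies.

    #include <stdio.h>

    struct task {
        const char * name;
        int pending_deps;              /* dependencies not yet completed */
        struct task * dependents[4];   /* tasks waiting on this one */
        int ndependents;
    };

    /* Stand-in for handing a task to the scheduler: only called once a task
       has no pending dependencies, so the scheduler never sees blocked work. */
    static void schedule( struct task * t ) {
        printf( "ready: %s\n", t->name );
    }

    /* Run when a task completes: release any dependent whose last
       unfulfilled dependency this was. */
    static void on_complete( struct task * t ) {
        for ( int i = 0; i < t->ndependents; i++ ) {
            struct task * d = t->dependents[i];
            if ( --d->pending_deps == 0 ) schedule( d );
        }
    }

    int main() {
        struct task consumer = { "consumer", 1, { 0 }, 0 };
        struct task producer = { "producer", 0, { &consumer }, 1 };

        schedule( &producer );      /* no dependencies, immediately visible */
        on_complete( &producer );   /* completing it releases the consumer */
    }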

  \subsection{Explicitly Informed Dynamic Schedulers}
…
  \subsubsection{Feedback Scheduling}
  As mentioned, schedulers may also gather information about each task to direct their decisions. This design effectively moves the scheduler, to some extent, into the realm of \newterm{Control Theory}\cite{wiki:controltheory}. This gathering does not generally involve programmers and as such does not increase programmer burden the way explicitly provided information may. However, some feedback schedulers do offer programmers the option of providing additional information on certain tasks, in order to direct scheduling decisions. The important distinction is whether or not the scheduler can function without this additional information.
-
- Feedback scheduler
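
As a sketch of the feedback idea, a scheduler can gather its own information without programmer input, for example by tracking each task's recent burst length with an exponentially weighted moving average and letting that estimate direct later decisions. The names and the 0.5 smoothing factor below are illustrative, not taken from any particular scheduler.

    #include <stdio.h>

    struct task_stats {
        const char * name;
        double avg_burst_us;   /* feedback: measured by the runtime itself */
    };

    /* Blend the previous estimate with the latest measurement; no explicit
       hint from the programmer is ever required. */
    static void record_burst( struct task_stats * t, double measured_us ) {
        t->avg_burst_us = 0.5 * t->avg_burst_us + 0.5 * measured_us;
    }

    int main() {
        struct task_stats io_bound  = { "io-bound",  100.0 };
        struct task_stats cpu_bound = { "cpu-bound", 100.0 };

        record_burst( &io_bound,   20.0 );   /* observed: short bursts */
        record_burst( &cpu_bound, 900.0 );   /* observed: long bursts */

        /* A feedback scheduler could now, for example, favour the task with
           the shorter average burst when picking what to run next. */
        printf( "%s: %.0f us, %s: %.0f us\n",
                io_bound.name, io_bound.avg_burst_us,
                cpu_bound.name, cpu_bound.avg_burst_us );
    }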


  • doc/theses/thierry_delisle_PhD/thesis/text/io.tex

r08ed947 → rc5af4f9

  \chapter{User Level \io}
  As mentioned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations.
- Different operating systems offer various forms of asynchronous operations and as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system.
+ Different operating systems offer various forms of asynchronous operations and, as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system.

  \section{Kernel Interface}
…
  Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continuously
  \footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant CPU usage.}.
+ Note that once an operation completes, there is nothing that ties it to the @io_uring@ instance that handled it.
+ Nothing prevents a new operation with, for example, the same file descriptors from being submitted to a different @io_uring@ instance.

  A complicating aspect of submission is @io_uring@'s support for chains of operations, where the completion of an operation triggers the submission of the next operation on the link.
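
For reference, a chain is expressed by setting the IOSQE_IO_LINK flag on a submission entry, which holds back the next entry until the flagged one completes. The sketch below uses the liburing helpers for brevity and an invented file name; it links a write to a following fsync and then reaps both completions.

    #include <liburing.h>
    #include <fcntl.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    int main() {
        struct io_uring ring;
        if ( io_uring_queue_init( 8, &ring, 0 ) < 0 ) return 1;

        int fd = open( "out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644 );
        if ( fd < 0 ) return 1;

        char buf[] = "hello\n";

        /* First link in the chain: the write. */
        struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );
        io_uring_prep_write( sqe, fd, buf, strlen( buf ), 0 );
        sqe->flags |= IOSQE_IO_LINK;   /* completion triggers the next entry */

        /* Second link: the fsync, held back until the write completes. */
        sqe = io_uring_get_sqe( &ring );
        io_uring_prep_fsync( sqe, fd, 0 );

        io_uring_submit( &ring );

        /* Reap both completions, in order. */
        for ( int i = 0; i < 2; i++ ) {
            struct io_uring_cqe * cqe;
            io_uring_wait_cqe( &ring, &cqe );
            printf( "completion %d: res = %d\n", i, cqe->res );
            io_uring_cqe_seen( &ring, cqe );
        }

        io_uring_queue_exit( &ring );
        close( fd );
    }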
     
  To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring.}
  , greatly simplifying both allocation and submission.
- In this design, allocation and submission form a ring partitionned ring buffer as shown in Figure~\ref{fig:pring}.
+ In this design, allocation and submission form a partitioned ring buffer as shown in Figure~\ref{fig:pring}.
  Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regard to when to do the system call.
- Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of threads \glspl{thrd}, etc.
+ Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of \glspl{thrd}, etc.
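
One of those options can be sketched as follows: submission entries accumulate in the ring and the processor only pays for the system call (liburing's io_uring_submit wrapping io_uring_enter) when it runs out of threads to run or after a threshold of threads. The struct proc shown, the threshold of 32 and the flush policy are all invented for illustration, not the runtime's actual design.

    #include <liburing.h>
    #include <stdio.h>

    struct proc {
        struct io_uring ring;
        unsigned pending;       /* SQEs queued since the last flush */
        unsigned threads_run;   /* threads run since the last flush */
    };

    /* One system call flushes whatever batch has accumulated. */
    static void flush_submissions( struct proc * p ) {
        if ( p->pending > 0 ) {
            io_uring_submit( &p->ring );
            p->pending = 0;
        }
        p->threads_run = 0;
    }

    /* Example policy: flush when out of threads or after 32 threads. */
    static void maybe_flush( struct proc * p, int out_of_threads ) {
        if ( out_of_threads || p->threads_run >= 32 ) flush_submissions( p );
    }

    int main() {
        struct proc p = { .pending = 0, .threads_run = 0 };
        if ( io_uring_queue_init( 64, &p.ring, 0 ) < 0 ) return 1;

        /* Queue a no-op without submitting: it just sits in the ring. */
        struct io_uring_sqe * sqe = io_uring_get_sqe( &p.ring );
        io_uring_prep_nop( sqe );
        p.pending++;

        maybe_flush( &p, /* out_of_threads */ 1 );   /* now do the system call */

        struct io_uring_cqe * cqe;
        io_uring_wait_cqe( &p.ring, &cqe );
        printf( "nop completed: res = %d\n", cqe->res );
        io_uring_cqe_seen( &p.ring, cqe );
        io_uring_queue_exit( &p.ring );
    }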

  \begin{figure}