Timestamp:
Aug 5, 2022, 4:18:02 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
62c5a55
Parents:
511a9368
Message:

Filled in several citations and did some of the todos

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/existing.tex

    r511a9368 r8040286  
    1414
    1515\section{Naming Convention}
    16 Scheduling has been studied by various communities concentrating on different incarnation of the same problems. 
    17 As a result, there are no standard naming conventions for scheduling that is respected across these communities. 
     16Scheduling has been studied by various communities concentrating on different incarnations of the same problem.
     17As a result, there is no standard naming convention for scheduling that is respected across these communities.
    1818This document uses the term \newterm{\Gls{at}} to refer to the abstract objects being scheduled and the term \newterm{\Gls{proc}} to refer to the concrete objects executing these \ats.
    1919
     
    2828\section{Dynamic Scheduling}
    2929\newterm{Dynamic schedulers} determine \ats dependencies and costs during scheduling, if at all.
    30 Hence, unlike static scheduling, \ats dependencies are conditional and detected at runtime. 
     30Hence, unlike static scheduling, \ats dependencies are conditional and detected at runtime.
    3131This detection takes the form of observing new \ats in the system and determining dependencies from their behaviour, including suspending or halting \ats that dynamically detect unfulfilled dependencies.
    3232Furthermore, each \at has the responsibility of adding dependent \ats back into the system once dependencies are fulfilled.
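To make this mechanism concrete, the following is a minimal sketch of how a dynamic scheduler might suspend \ats with unfulfilled dependencies and have the fulfilling side re-add them to the ready queue; the sketch is hypothetical C written for this discussion rather than code from any runtime, and synchronization is omitted.
\begin{lstlisting}[language=C]
// Hypothetical sketch, not taken from any real runtime: a task whose
// dependency is unfulfilled when it is scheduled parks itself on the
// dependency's wait list; the side that fulfills the dependency pushes
// the waiters back onto the ready queue.  Synchronization is omitted.
#include <stdio.h>
#include <stdbool.h>

struct task;
struct dependency {
	bool fulfilled;
	struct task * waiters;        // tasks suspended on this dependency
};
struct task {
	const char * name;
	struct task * next;
	struct dependency * needs;    // only examined when the task is scheduled
};

static struct task * ready;       // ready queue (LIFO, for brevity)
static void ready_push( struct task * t ) { t->next = ready; ready = t; }

static void schedule( struct task * t ) {
	// Dynamic detection: the dependency is only discovered here, at runtime.
	if ( t->needs && ! t->needs->fulfilled ) {
		t->next = t->needs->waiters;      // suspend on the dependency
		t->needs->waiters = t;
		return;
	}
	printf( "running %s\n", t->name );
}

static void fulfill( struct dependency * d ) {
	d->fulfilled = true;
	// The fulfilling side re-adds dependent tasks to the system.
	for ( struct task * w = d->waiters; w; ) {
		struct task * n = w->next;
		ready_push( w );
		w = n;
	}
	d->waiters = NULL;
}

int main( void ) {
	struct dependency io = { false, NULL };
	struct task consumer = { "consumer", NULL, &io };
	schedule( &consumer );        // dependency unfulfilled: consumer suspends
	fulfill( &io );               // dependency met: consumer re-enters the ready queue
	for ( struct task * t = ready; t; t = ready ) {
		ready = t->next;
		schedule( t );
	}
	return 0;
}
\end{lstlisting}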
     
    5151Most common operating systems use some variant on priorities with overlaps and dynamic priority adjustments.
    5252For example, Microsoft Windows uses a pair of priorities
    53 \cit{https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities,https://docs.microsoft.com/en-us/windows/win32/taskschd/taskschedulerschema-priority-settingstype-element}, one specified by users out of ten possible options and one adjusted by the system.
     53\cite{win:priority}, one specified by users out of ten possible options and one adjusted by the system.
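For illustration, the user-visible half of this pair can be set with the standard Win32 calls shown below; the system-adjusted half (the dynamic boost) is managed by the kernel and can only be disabled, not set. The surrounding program is purely illustrative.
\begin{lstlisting}[language=C]
// Illustrative use of documented Win32 calls for the user-specified half of
// the priority pair; the kernel-adjusted half (priority boosting) cannot be
// set directly, only disabled.
#include <windows.h>
#include <stdio.h>

int main( void ) {
	HANDLE self = GetCurrentThread();

	// User-specified priority, relative to the process's priority class.
	if ( ! SetThreadPriority( self, THREAD_PRIORITY_ABOVE_NORMAL ) )
		fprintf( stderr, "SetThreadPriority failed: %lu\n", GetLastError() );

	// Opt out of the system's dynamic priority adjustments for this thread.
	if ( ! SetThreadPriorityBoost( self, TRUE ) )
		fprintf( stderr, "SetThreadPriorityBoost failed: %lu\n", GetLastError() );

	printf( "base priority is now %d\n", GetThreadPriority( self ) );
	return 0;
}
\end{lstlisting}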
    5454
    5555\subsection{Uninformed and Self-Informed Dynamic Schedulers}
     
    137137The scheduler may also temporarily adjust priorities after certain effects like the completion of I/O requests.
    138138
    139 \todo{load balancing}
     139Chapter 1, section ``Processes, Threads, and Jobs'', of~\cite{russinovich2009windows} discusses the scheduling policy in more depth.
     140Multicore scheduling is based on a combination of priorities and preferred \procs.
     141Each \at is assigned an \newterm{ideal} \proc using a round-robin policy.
     142\Ats are distributed among the \procs according to their priority, preferring to match \ats to their ideal \proc and then to the last \proc they ran on.
     143This is similar to a variation of work stealing, where the stealing \proc restores the \at to its original \proc after running it, but with priorities added into the mix.
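The placement policy can be paraphrased roughly as in the following sketch; it is hypothetical C written from the description above, not Windows code, and it ignores affinity masks, priority boosts, and the exact idle-\proc search.
\begin{lstlisting}[language=C]
// Hypothetical paraphrase of the described placement policy, not Windows
// code: tasks carry a priority, an ideal processor assigned round-robin,
// and the processor they last ran on; placement prefers the ideal
// processor, then the last one, then the least loaded.
#include <stddef.h>

enum { NPROCS = 8, NPRIOS = 32 };

struct task {
	int priority;                 // 0 = lowest, NPRIOS - 1 = highest
	int ideal;                    // assigned round-robin at creation
	int last;                     // processor the task last ran on, -1 if none
	struct task * next;
};
struct proc {
	struct task * runq[NPRIOS];   // per-priority queues (heads only, LIFO here)
	int load;
};

static struct proc procs[NPROCS];
static int rr;                    // round-robin counter for ideal assignment

static void new_task( struct task * t, int priority ) {
	t->priority = priority;
	t->ideal = rr++ % NPROCS;     // round-robin ideal processor
	t->last = -1;
	t->next = NULL;
}

// Choose a processor: ideal first, then last, otherwise the least loaded.
// "Idle" is approximated here by an empty run queue (zero load).
static int pick_proc( const struct task * t ) {
	if ( procs[t->ideal].load == 0 ) return t->ideal;
	if ( t->last >= 0 && procs[t->last].load == 0 ) return t->last;
	int best = t->ideal;
	for ( int p = 0; p < NPROCS; p++ )
		if ( procs[p].load < procs[best].load ) best = p;
	return best;
}

static void enqueue( struct task * t ) {
	int p = pick_proc( t );
	t->next = procs[p].runq[t->priority];
	procs[p].runq[t->priority] = t;
	procs[p].load++;
}

int main( void ) {
	struct task t;
	new_task( &t, 10 );
	enqueue( &t );
	return 0;
}
\end{lstlisting}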
    140144
    141145\paragraph{Apple OS X}
     
    156160\paragraph{Go}\label{GoSafePoint}
    157161Go's scheduler uses a randomized work-stealing algorithm that has a global run-queue (\emph{GRQ}) and each processor (\emph{P}) has both a fixed-size run-queue (\emph{LRQ}) and a high-priority next ``chair'' holding a single element~\cite{GITHUB:go,YTUBE:go}.
    158 Preemption is present, but only at safe-points,~\cit{https://go.dev/src/runtime/preempt.go} which are inserted detection code at various frequent access boundaries.
     162Preemption is present, but only at safe-points~\cite{go:safepoints}, which are detection code inserted at various frequently executed boundaries.
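The layout described above, and the order in which a \emph{P} looks for work, can be summarized with the following sketch; it is hypothetical C written from this description, not Go runtime code, and all synchronization and corner cases are omitted.
\begin{lstlisting}[language=C]
// Hypothetical paraphrase of the described layout, not Go runtime code:
// each P has a single-element high-priority "next" slot and a fixed-size
// local run queue (LRQ); a global run queue (GRQ) and randomized stealing
// complete the picture.  Synchronization is omitted.
#include <stdlib.h>

enum { LRQ_SIZE = 256, NPS = 4, GRQ_SIZE = 1024 };

struct g;                            // a goroutine; only pointers are used here

struct p {
	struct g * next;             // high-priority single-element slot
	struct g * lrq[LRQ_SIZE];    // fixed-size local run queue (ring buffer)
	unsigned head, tail;
};

static struct p ps[NPS];

static struct g * grq[GRQ_SIZE];     // global run queue
static unsigned grq_head, grq_tail;
static struct g * grq_pop( void ) {
	return grq_head != grq_tail ? grq[grq_head++ % GRQ_SIZE] : NULL;
}

// Move half of the victim's LRQ to the thief and return one element to run.
static struct g * steal_half( struct p * victim, struct p * thief ) {
	struct g * got = NULL;
	for ( unsigned n = (victim->tail - victim->head) / 2; n > 0; n-- ) {
		struct g * g = victim->lrq[victim->head++ % LRQ_SIZE];
		if ( ! got ) got = g;
		else thief->lrq[thief->tail++ % LRQ_SIZE] = g;
	}
	return got;
}

// Order in which a P looks for work: next slot, LRQ, GRQ, then stealing.
static struct g * find_runnable( struct p * self ) {
	if ( self->next ) {
		struct g * g = self->next;      // 1. the "next" slot
		self->next = NULL;
		return g;
	}
	if ( self->head != self->tail )         // 2. local run queue
		return self->lrq[self->head++ % LRQ_SIZE];
	struct g * g = grq_pop();               // 3. global run queue
	if ( g ) return g;
	struct p * victim = &ps[rand() % NPS];  // 4. randomized stealing
	return victim == self ? NULL : steal_half( victim, self );
}

int main( void ) {
	return find_runnable( &ps[0] ) == NULL ? 0 : 1;
}
\end{lstlisting}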
    159163
    160164The algorithm is as follows:
     
    199203
    200204\paragraph{Grand Central Dispatch}
    201 An Apple\cit{Official GCD source} API that offers task parallelism~\cite{wiki:taskparallel}.
     205An Apple~\cite{apple:gcd} API that offers task parallelism~\cite{wiki:taskparallel}.
    202206Its distinctive aspect is multiple ``Dispatch Queues'', some of which are created by programmers.
    203207Each queue has its own local ordering guarantees, \eg \ats on queue $A$ are executed in \emph{FIFO} order.
    204208
    205 \todo{load balancing and scheduling}
    206 
    207 % http://web.archive.org/web/20090920043909/http://images.apple.com/macosx/technology/docs/GrandCentral_TB_brief_20090903.pdf
    208 
    209 In terms of semantics, the Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics.
     209While the documentation gives only limited insight into the scheduling and load-balancing approach, \cite{apple:gcd2} suggests a fairly classic approach:
     210each \proc has a queue of \newterm{blocks} to run, effectively \ats, and drains its queue in \glsxtrshort{fifo} order.
     211GCD also seems to add the concept of dependent queues with clear ordering, where executing a block ends up scheduling more blocks.
     212In terms of semantics, these Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics.
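For illustration, the short libdispatch program below (using the plain-C @dispatch_async_f@ interface rather than blocks) demonstrates the per-queue FIFO guarantee on a serial queue; the queue label and work function are made up for this example.
\begin{lstlisting}[language=C]
// Minimal libdispatch example: work submitted to a serial dispatch queue is
// executed one item at a time, in FIFO order.  The label and work function
// are purely illustrative.
#include <dispatch/dispatch.h>
#include <stdio.h>

static void work( void * ctx ) {
	printf( "block %ld\n", (long)ctx );
}

int main( void ) {
	dispatch_queue_t q = dispatch_queue_create( "example.serial", DISPATCH_QUEUE_SERIAL );

	for ( long i = 0; i < 4; i++ )
		dispatch_async_f( q, (void *)i, work );   // enqueue and return immediately

	// A synchronous submission runs after the queued work, demonstrating FIFO.
	dispatch_sync_f( q, (void *)99, work );

	dispatch_release( q );
	return 0;
}
\end{lstlisting}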
    210213
    211214\paragraph{LibFibre}