Timestamp:
Jul 21, 2022, 8:26:59 AM (2 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
0809c4e, 18070ee
Parents:
6bf35d1 (diff), e6662f5 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

File:
1 edited

Legend:

old new  Unmodified (line present in both revisions)
old --   Removed (present only in the left revision)
-- new   Added (present only in the right revision)
  • doc/theses/thierry_delisle_PhD/thesis/text/practice.tex

    r6bf35d1 rb9e2b87  
24 24  Therefore, dynamically changing the number of \procs is an appropriate moment to allocate or free resources to match the new state.
25 25  As such, all internal scheduling arrays that are sized based on the number of \procs need to be @realloc@ed.
26 --  This requirement also means any references into these arrays, \eg pointers or indexes, may need to be updated if elements are moved for compaction or any other reason.
27 --  % \footnote{Indexes may still need fixing when shrinking because some indexes are expected to refer to dense contiguous resources and there is no guarantee the resource being removed has the highest index.}
-- 26  This requirement also means any references into these arrays, \eg pointers or indexes, may need to be updated if elements are moved for compaction or for any other reason.
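
A minimal C sketch of the resize described above, with a hypothetical per-\proc array; all names are illustrative rather than taken from the thesis:

/* Illustrative sketch: resizing a per-proc array with realloc and the
 * caveat about references into it; all names are hypothetical. */
#include <stdlib.h>
#include <string.h>

struct proc_data { int id; /* ... per-proc scheduling state ... */ };

struct sched_arrays {
    struct proc_data * procs;   // sized by the current number of procs
    size_t nprocs;
};

/* Grow or shrink the array.  Indexes into the array stay valid (up to the
 * new size), but raw pointers may be invalidated because realloc can move
 * the storage, so such references must be updated afterwards. */
static int resize_procs( struct sched_arrays * s, size_t new_nprocs ) {
    struct proc_data * tmp = realloc( s->procs, new_nprocs * sizeof *tmp );
    if ( tmp == NULL && new_nprocs > 0 ) return -1;       // old array left intact
    if ( new_nprocs > s->nprocs )                         // zero any newly added slots
        memset( tmp + s->nprocs, 0, (new_nprocs - s->nprocs) * sizeof *tmp );
    s->procs = tmp;
    s->nprocs = new_nprocs;
    return 0;
}
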
28 27
29 28  There are no performance requirements, within reason, for resizing since it is expected to be rare.
     
34 33  \subsection{Read-Copy-Update}
35 34  One solution is to use the Read-Copy-Update pattern~\cite{wiki:rcu}.
36 --  In this pattern, resizing is done by creating a copy of the internal data structures (\eg see Figure~\ref{fig:base-ts2}), updating the copy with the desired changes, and then attempt an Indiana Jones Switch to replace the original with the copy.
-- 35  In this pattern, resizing is done by creating a copy of the internal data structures, \eg see Figure~\ref{fig:base-ts2}, updating the copy with the desired changes, and then attempting an Indiana Jones Switch to replace the original with the copy.
37 36  This approach has the advantage that it may not need any synchronization to do the switch.
38 37  However, there is a race where \procs still use the original data structure after the copy is switched.
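
A rough C sketch of the copy-and-swap step, assuming a hypothetical @ready_queue@ type; it deliberately stops at the switch, since the race described above means the old copy cannot simply be freed:

/* Illustrative sketch of the copy-and-swap ("Indiana Jones Switch") step,
 * with a hypothetical ready_queue type. */
#include <stdatomic.h>
#include <stdlib.h>

struct ready_queue { size_t nprocs; /* ... arrays sized by nprocs ... */ };

static _Atomic(struct ready_queue *) current_rq;   // set at startup to the initial queue (not shown)

static void rq_resize( size_t new_nprocs ) {
    struct ready_queue * old  = atomic_load( &current_rq );
    struct ready_queue * copy = malloc( sizeof *copy );
    *copy = *old;                         // 1. copy the internal data structure
    copy->nprocs = new_nprocs;            // 2. update the copy with the desired changes
    atomic_store( &current_rq, copy );    // 3. switch the original for the copy
    // the race remains: a proc may still be reading 'old', so it cannot
    // simply be freed here
}
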
     
100 99  \section{Idle-Sleep}
101 100  While manual resizing of \procs is expected to be rare, the number of \ats can vary significantly over an application's lifetime, which means there are times when there are too few or too many \procs.
102 --  For this work, it is the programer's responsibility to manually create \procs, so if there a too few \procs, the application must address this issue.
-- 101  For this work, it is the programmer's responsibility to manually create \procs, so if there are too few \procs, the application must address this issue.
103 102  This leaves too many \procs when there are not enough \ats for all the \procs to be useful.
104 103  These idle \procs cannot be removed because their lifetime is controlled by the application, and only the application knows when the number of \ats may increase or decrease.
105 --  While idle \procs can spin until work appears, this approach wastes the processor (from other applications), energy and heat.
-- 104  While idle \procs can spin until work appears, this approach wastes energy, unnecessarily produces heat, and prevents other applications from using the processor.
106 105  Therefore, idle \procs are put into an idle state, called \newterm{Idle-Sleep}, where the \gls{kthrd} is blocked until the scheduler deems it is needed.
107 106
     
129 128  This generally takes the form of creating a file descriptor, \eg, dummy file, pipe, or event fd, and using that file descriptor when \procs need to wake each other.
130 129  This leads to additional complexity because there can be a race between these artificial \io and genuine \io operations.
131 --  If not handled correctly, this can lead to artificial files getting delaying too long behind genuine files, resulting in longer latency.
-- 130  If not handled correctly, this can lead to artificial files getting delayed too long behind genuine files, resulting in longer latency.
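
A minimal sketch of this wake-up mechanism using a Linux event fd, one of the options listed above; error handling is elided and the names are illustrative:

/* Illustrative sketch: a Linux event fd as the artificial-I/O wake-up
 * mechanism; error handling elided. */
#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

static int idle_fd;

static void idle_init( void ) {
    idle_fd = eventfd( 0, 0 );                // counter starts at zero
}

/* Idle proc: blocks until some notifier writes to the fd. */
static void idle_wait( void ) {
    uint64_t val;
    read( idle_fd, &val, sizeof val );        // blocks while the counter is zero
}

/* Notifier: wake a sleeping proc. */
static void idle_notify( void ) {
    uint64_t one = 1;
    write( idle_fd, &one, sizeof one );       // increments the counter, unblocking a reader
}

An event fd is convenient here because each write only increments a counter and a single read consumes the whole counter, so multiple wake-ups coalesce.
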
132 131
133 132  \subsection{Event FDs}
     
148 147  As a result, improper handling of this race leads to all \procs going to sleep when there are ready \ats and the system deadlocks.
149 148
-- 149  The handshake closing the race is done with both the notifier and the idle \proc executing two ordered steps.
-- 150  The notifier first makes sure the newly ready \at is visible to \procs searching for \ats, and then attempts to notify an idle \proc.
-- 151  On the other side, \procs make themselves visible as idle \procs and then search for any \ats they may have missed.
-- 152  Unlike regular work-stealing, this search must be exhaustive to make sure that no pre-existing \at is missed.
-- 153  These steps from both sides ensure that if the search misses a newly ready \at, then the notifier is guaranteed to see at least one idle \proc.
-- 154  Conversely, if the notifier does not see any idle \proc, then a \proc is guaranteed to find the new \at in its exhaustive search.
-- 155
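
A compact sketch of the two ordered steps on each side, using sequentially consistent C11 atomics; the two counters merely stand in for the real ready queue and idle list:

/* Illustrative sketch of the ordering that closes the race; the counters
 * stand in for the real ready queue and idle list. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int ready_ats  = 0;      // number of visible ready ats
static atomic_int idle_procs = 0;      // number of procs visible as idle

/* Notifier: make the at visible first, then look for an idle proc. */
static void on_new_at( void ) {
    atomic_fetch_add( &ready_ats, 1 );            // step 1: publish the ready at
    if ( atomic_load( &idle_procs ) > 0 ) {
        // step 2: notify one idle proc, e.g. write to its event fd
    }
}

/* Idle proc: become visible as idle first, then search exhaustively. */
static bool should_sleep( void ) {
    atomic_fetch_add( &idle_procs, 1 );           // step 1: visible as idle
    if ( atomic_load( &ready_ats ) > 0 ) {        // step 2: exhaustive search
        atomic_fetch_sub( &idle_procs, 1 );       // found an at: do not sleep
        return false;
    }
    return true;    // safe to block: any later at sees this proc as idle
}
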
150 156  Furthermore, the ``Race-to-Idle'' approach means that there may be contention on the data structure tracking sleepers.
151 157  Contention can be tolerated for \procs attempting to sleep or wake up because these \procs are not doing useful work, and therefore, not contributing to overall performance.
     
154 160  \subsection{Sleepers List}
155 161  Each cluster maintains a list of idle \procs, organized as a stack.
156 --  This ordering allows \procs at the head of the list to stay constantly active and those at the tail to stay in idle sleep for extended period of times.
-- 162  This ordering allows \procs at the tail to stay in idle sleep for extended periods of time while those at the head of the list wake up for bursts of activity.
157 163  Because of unbalanced performance requirements, the algorithm tracking sleepers is designed to have idle \procs handle as much of the work as possible.
158 164  The idle \procs maintain the stack of sleepers among themselves, and notifying a sleeping \proc takes as little work as possible.
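
A sketch of a per-cluster sleeper stack where the idle \procs push and pop themselves; the node and lock types are assumptions for illustration, not the thesis implementation:

/* Illustrative sketch of the per-cluster sleeper stack; idle procs push and
 * pop themselves, keeping most of the work on the idle side. */
#include <pthread.h>
#include <stddef.h>

struct idle_node {
    int event_fd;                     // this proc's wake-up fd
    struct idle_node * next;
};

struct cluster_idle {
    pthread_mutex_t lock;
    struct idle_node * head;          // most recently idled proc (top of stack)
};

/* A proc going idle pushes itself, so head procs wake for bursts of activity
 * while the tail stays asleep. */
static void idle_push( struct cluster_idle * cl, struct idle_node * me ) {
    pthread_mutex_lock( &cl->lock );
    me->next = cl->head;
    cl->head = me;
    pthread_mutex_unlock( &cl->lock );
}

/* A woken proc, sitting at the head, removes itself. */
static void idle_pop( struct cluster_idle * cl ) {
    pthread_mutex_lock( &cl->lock );
    if ( cl->head != NULL ) cl->head = cl->head->next;
    pthread_mutex_unlock( &cl->lock );
}
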
     
184 190  The contention from the \procs attempting to go to sleep can be mitigated slightly by using @try_acquire@, so the \procs simply busy wait, searching again for \ats, if the lock is held.
185 191  This trick cannot be used when waking \procs since the waker needs to return immediately to what it was doing.
-- 192
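
A sketch of the @try_acquire@ mitigation on the sleep path; @search_for_ats@ is a hypothetical stand-in for the search loop:

/* Illustrative sketch of the try-acquire mitigation: a proc that finds the
 * sleeper-list lock held keeps busy waiting by searching for ats instead of
 * blocking on the lock; search_for_ats is a hypothetical stand-in. */
#include <pthread.h>
#include <stdbool.h>

extern bool search_for_ats( void );

/* Returns true once the lock is held, false if work was found meanwhile. */
static bool acquire_or_search( pthread_mutex_t * idle_lock ) {
    while ( pthread_mutex_trylock( idle_lock ) != 0 ) {
        if ( search_for_ats() ) return false;    // found an at: abandon going idle
    }
    return true;    // lock held: proceed to push onto the sleeper stack
}
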
186 193  Interestingly, general notification, \ie waking any idle processor versus a specific one, does not strictly require modifying the list.
187 194  Here, contention can be reduced notably by having notifiers avoid the lock entirely by adding a pointer to the event @fd@ of the first idle \proc, as in Figure~\ref{fig:idle2}.
188 195  To avoid contention among notifiers, notifiers atomically exchange this pointer to @NULL@ so only one notifier contends on the system call.
189 --  \todo{Expand explanation of how a notification works.}
-- 196  The first notifier succeeds in the atomic exchange and obtains the @fd@ of an idle \proc.
-- 197  The notifier then writes to the @fd@ normally, which wakes a \proc.
-- 198  The woken \proc can then update the atomic pointer while it is updating the head of the list.
-- 199  Notifiers that obtain @NULL@ in the exchange simply move on, knowing that another notifier is already waking a \proc.
-- 200  This behaviour is equivalent to having multiple notifiers write to the @fd@ since reads consume all previous writes.
190 201
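
A sketch of this lock-free notification path, assuming an atomic pointer to the head sleeper's event @fd@; names are illustrative:

/* Illustrative sketch of lock-free general notification through an atomic
 * pointer to the head sleeper's event fd; names are not from the thesis. */
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>
#include <unistd.h>

/* Event fd of the proc at the head of the idle list, or NULL when no proc is
 * idle or a notification is already in flight. */
static _Atomic(int *) first_idle_fd = NULL;

/* Notifier: claim the fd with an atomic exchange; only the winner pays for
 * the system call, later notifiers see NULL and simply move on. */
static void notify_any( void ) {
    int * fd = atomic_exchange( &first_idle_fd, NULL );
    if ( fd == NULL ) return;
    uint64_t one = 1;
    write( *fd, &one, sizeof one );
}

/* Called by an idle proc (under the list lock) when it becomes the new head
 * of the sleeper stack, republishing a target for future notifications. */
static void publish_head( int * head_fd ) {
    atomic_store( &first_idle_fd, head_fd );
}
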
191 202  \begin{figure}[t]
     
196 207  \end{figure}
197 208
198 --  The next optimization is to avoid the latency of the event @fd@, which can be done by adding what is effectively a benaphore\cit{benaphore} in front of the event @fd@.
-- 209  The next optimization is to avoid the latency of the event @fd@, which can be done by adding what is effectively a binary benaphore\cit{benaphore} in front of the event @fd@.
199 210  A simple three-state flag is added beside the event @fd@ to avoid unnecessary system calls, as shown in Figure~\ref{fig:idle:state}.
200 --  In Topological Work Stealing (see Section~\ref{s:TopologicalWorkStealing}), a \proc without \ats begins searching by setting the state flag to @SEARCH@.
201 --  If no \ats can be found to steal, the \proc then confirms it is going to sleep by atomically swapping the state to @SLEEP@.
-- 211  A \proc begins its idle sleep by adding itself to the idle list before searching for an \at.
-- 212  In the process of adding itself to the idle list, it sets the state flag to @SEARCH@.
-- 213  If no \ats can be found during the search, the \proc then confirms it is going to sleep by atomically swapping the state to @SLEEP@.
202 214  If the previous state is still @SEARCH@, then the \proc does read the event @fd@.
203 215  Meanwhile, notifiers atomically exchange the state to @AWAKE@.
204 216  If the previous state is @SLEEP@, then the notifier must write to the event @fd@.
205 --  However, if the notify arrives almost immediately after the \proc marks itself sleeping (idle), then both reads and writes on the event @fd@ can be omitted, which reduces latency notably.
-- 217  However, if the notification arrives almost immediately after the \proc marks itself idle, then both reads and writes on the event @fd@ can be omitted, which reduces latency notably.
206 218  These extensions lead to the final data structure shown in Figure~\ref{fig:idle}.
207 --  \todo{You never talk about the Beaphore. What is its purpose and when is it used?}
208 219
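
A sketch of the three-state flag (the binary benaphore) in front of the event @fd@, following the protocol described above; the names and the final reset to @AWAKE@ are assumptions rather than the thesis code:

/* Illustrative sketch of the three-state flag (binary benaphore) guarding the
 * event fd; names and the final reset to AWAKE are assumptions. */
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

enum { SEARCH, SLEEP, AWAKE };

struct idle_flag {
    atomic_int state;
    int fd;                                    // this proc's event fd
};

static void flag_init( struct idle_flag * f ) {
    atomic_init( &f->state, AWAKE );
    f->fd = eventfd( 0, 0 );
}

/* Proc side: set SEARCH while adding itself to the idle list; the exhaustive
 * search happens between these two calls. */
static void flag_begin_search( struct idle_flag * f ) {
    atomic_store( &f->state, SEARCH );
}

static void flag_confirm_sleep( struct idle_flag * f ) {
    if ( atomic_exchange( &f->state, SLEEP ) == SEARCH ) {
        uint64_t v;
        read( f->fd, &v, sizeof v );           // still SEARCH: block on the event fd
    }
    // otherwise a notifier already set AWAKE, so both its write and this
    // read are omitted
    atomic_store( &f->state, AWAKE );          // running again
}

/* Notifier side: only pay for the system call if the proc actually slept. */
static void flag_notify( struct idle_flag * f ) {
    if ( atomic_exchange( &f->state, AWAKE ) == SLEEP ) {
        uint64_t one = 1;
        write( f->fd, &one, sizeof one );
    }
}
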
209 220  \begin{figure}