\chapter{Scheduling in practice}\label{practice}
The scheduling algorithm described in Chapter~\ref{core} addresses scheduling in a stable state.
However, it does not address problems that occur when the system changes state.
Indeed, the \CFA runtime supports expanding and shrinking the number of \procs, both manually and, to some extent, automatically.
This means the scheduling algorithm must support these transitions.

More precisely, \CFA supports adding \procs using the RAII object @processor@.
These objects can be created and destroyed at any time.
They are normally created as automatic stack variables, but this is not a requirement.

The consequence is that the scheduler and \io subsystems must support \procs coming in and out of existence.
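For example, a program can add a \proc for the duration of a scope; the following is a minimal sketch of this usage, where @do_parallel_work@ is a hypothetical placeholder:
\begin{lstlisting}
{
	processor p;          // RAII: spawn an extra kernel thread for this scope
	do_parallel_work();   // hypothetical: runs with one more proc available
}                         // p is destroyed here, removing the kernel thread
\end{lstlisting}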
\section{Manual Resizing}
The consequence of dynamically changing the number of \procs is that all internal arrays that are sized based on the number of \procs need to be \texttt{realloc}ed.
This also means that any references into these arrays, whether pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing because there is no guarantee the \proc causing the shrink had the highest index. Therefore, indexes need to be reassigned to preserve contiguous indexes.}.

There are no performance requirements, within reason, for resizing, since it is usually considered part of setup and teardown.
However, this operation has strict correctness requirements, since shrinking combined with idle sleep can easily lead to deadlocks.
It should also avoid, as much as possible, any effect on performance when the number of \procs remains constant.
This latter requirement prohibits simple solutions, like adding a global lock to these arrays.
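To illustrate what shrinking entails, the following sketch removes a \proc from a dense array while keeping the indexes contiguous; the names and layout are hypothetical, not the actual \CFA data structures:
\begin{lstlisting}
// Hypothetical layout: a dense array of proc pointers, where
// each proc caches its own index into the array.
void remove_proc( struct proc * procs[], size_t * count, size_t i ) {
	size_t last = *count - 1;
	procs[i] = procs[last];   // move the last proc into the hole
	procs[i]->idx = i;        // reassign its index to stay contiguous
	*count = last;            // the array can now be shrunk with realloc
}
\end{lstlisting}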
\subsection{Read-Copy-Update}
One solution is to use the Read-Copy-Update\cite{wiki:rcu} pattern.
In this pattern, resizing is done by creating a copy of the internal data structures, updating the copy with the desired changes, and then attempting an Indiana Jones switch to replace the original with the copy.
This approach potentially has the advantage that it may not need any synchronization to do the switch.
However, the switch definitely implies a race where \procs could still use the previous, original, data structure after the copy is switched in.
The important question then becomes whether or not this race can be recovered from.
If the changes that arrive late can be transferred from the original to the copy, then this solution works.

For linked lists, dequeueing is somewhat of a problem.
Dequeueing from the original does not necessarily update the copy, which could lead to multiple \procs dequeueing the same \at.
Fixing this requires making the array contain pointers to the subqueues rather than the subqueues themselves.

Another challenge is that the original must be kept until all \procs have witnessed the change.
This is a straightforward memory-reclamation challenge, but it does mean that every operation needs \emph{some} form of synchronization.
If each of these operations needs synchronization, then it is possible a simpler solution achieves the same performance.
Moreover, in addition to the classic challenge of memory reclamation, transferring the late changes from the original to the copy before reclaiming it poses additional challenges, especially merging subqueues while having a minimal impact on fairness and locality.
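The following is a minimal sketch of this pattern applied to the array of subqueue pointers; the helpers are hypothetical and the sketch illustrates the idea rather than an actual implementation:
\begin{lstlisting}
#include <stdatomic.h>
#include <stdlib.h>

// make_copy, wait_for_quiescence and transfer_leftovers are
// hypothetical helpers, named only for illustration.
struct queues {
	size_t count;              // number of subqueues
	struct subqueue ** data;   // pointers to subqueues, shared with the copy
};
_Atomic(struct queues *) current;

void resize( size_t new_count ) {
	struct queues * old = atomic_load( &current );
	struct queues * cpy = make_copy( old, new_count ); // copy the pointers, not the subqueues
	atomic_store( &current, cpy );    // the Indiana Jones switch
	wait_for_quiescence( old );       // wait until no proc still reads the original
	transfer_leftovers( old, cpy );   // merge subqueues dropped by a shrink
	free( old );
}
\end{lstlisting}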
\subsection{Readers-Writer Lock}
A simpler approach is to use a \newterm{Readers-Writer Lock}\cite{wiki:rwlock}, where resizing requires acquiring the lock as a writer, while simply enqueueing/dequeueing \ats requires acquiring the lock as a reader.
Using a readers-writer lock solves the problem of dynamically resizing and leaves the challenge of finding or building a lock with sufficiently good read-side performance.
Since this is not a very complex challenge, and an ad hoc solution is perfectly acceptable, building a readers-writer lock is the path taken.

To maximize reader scalability, readers should not contend with each other when attempting to acquire and release the critical section.
This effectively requires that each reader have its own piece of memory to mark as locked and unlocked.
Readers acquire the lock by waiting for writers to finish the critical section and then acquiring their local spinlock.
Writers acquire the global lock, which guarantees mutual exclusion among writers, and then acquire each of the local reader locks.
Acquiring all the local locks guarantees mutual exclusion between the readers and the writer, while the wait on the read side prevents readers from continuously starving the writer.
The listings below sketch the read-side and write-side operations.
\begin{lstlisting}
void read_lock() {
	// Step 1 : wait while a writer holds or wants the lock;
	// this also prevents readers from continuously starving writers
	while( write_lock ) { Pause(); }

	// A memory fence may be needed here, depending on the
	// ordering guarantees of the atomic exchange below

	// Step 2 : acquire the local lock; if a writer slipped in
	// after Step 1, it holds this lock and the reader waits
	while( atomic_xchg( &tls.lock ) ) {
		Pause();
	}
}

void read_unlock() {
	// release the local lock; no global interaction needed
	tls.lock = false;
}
\end{lstlisting}

\begin{lstlisting}
void write_lock() {
	// Step 1 : acquire the global lock, guaranteeing mutual
	// exclusion among writers
	while( atomic_xchg( &write_lock ) ) {
		Pause();
	}

	// Step 2 : acquire every local reader lock, guaranteeing
	// mutual exclusion with readers already in a critical section
	for( t : all_tls ) {
		while( atomic_xchg( &t.lock ) ) {
			Pause();
		}
	}
}

void write_unlock() {
	// Step 1 : release the local locks
	for( t : all_tls ) {
		t.lock = false;
	}

	// Step 2 : release the global lock
	write_lock = false;
}
\end{lstlisting}
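To illustrate the intended usage, the following sketch shows how the two sides fit together, with hypothetical function names:
\begin{lstlisting}
// Resizing is rare: pay the full cost of the writer side.
void add_proc() {
	write_lock();
	// no reader is in a critical section: arrays can be
	// realloc'ed and indexes fixed safely here
	write_unlock();
}

// Scheduling is hot: the reader side only touches local memory.
void push_ready( struct thread * thrd ) {
	read_lock();
	// enqueue thrd onto the ready queue, concurrently with other readers
	read_unlock();
}
\end{lstlisting}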
\section{Idle-Sleep}
In addition to users manually changing the number of \procs, it is desirable to support ``removing'' \procs when there are not enough \ats for all the \procs to be useful.
While manual resizing is expected to be rare, the number of \ats is expected to vary much more, which means \procs may need to be ``removed'' for only short periods of time.
Furthermore, race conditions that spuriously lead to the impression that no \ats are ready are actually common in practice.
Therefore, \procs should not be actually \emph{removed} but simply put into an idle state where the \gls{kthrd} is blocked until more \ats become ready.
This state is referred to as \newterm{Idle-Sleep}.

Idle sleep effectively encompasses several challenges.
First, some data structure needs to keep track of all \procs that are in idle sleep.
Because idle sleep can be spurious, this data structure has strict performance requirements, in addition to the strict correctness requirements.
Next, some tool must be used to block \glspl{kthrd}, \eg \texttt{pthread\_cond\_wait} or pthread semaphores.
The challenge here is to support \at parking and unparking, timers, \io operations and all other \CFA features with minimal complexity.
Finally, idle sleep also includes a heuristic to determine the appropriate number of \procs to be in idle sleep at any given time.
This third challenge is, however, outside the scope of this thesis because developing a general heuristic is involved enough to justify its own work.
The \CFA scheduler simply follows the ``Race-to-Idle''\cit{https://doi.org/10.1137/1.9781611973099.100} approach, where a sleeping \proc is woken any time a \at becomes ready and \procs go to idle sleep any time they run out of work.
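As an illustration of such a tool, blocking and unblocking a \gls{kthrd} can be done with a POSIX semaphore, which has the convenient property that a post arriving before the matching wait is not lost; this sketch only shows the primitive, not the full solution:
\begin{lstlisting}
#include <semaphore.h>

// One semaphore per proc (hypothetical field). A sem_post that
// happens before the sem_wait is remembered, so a wakeup sent
// while the proc is still going to sleep is not lost.
void proc_block( sem_t * idle ) { sem_wait( idle ); }
void proc_wake ( sem_t * idle ) { sem_post( idle ); }
\end{lstlisting}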
\section{Tracking Sleepers}
Tracking which \procs are in idle sleep requires a data structure holding all the sleeping \procs, but more importantly, it requires a concurrent \emph{handshake} so that no \at is stranded on a ready queue with no active \proc.
The classic challenge occurs when a \at is made ready while a \proc is going to sleep: there is a race where the new \at may not see the sleeping \proc and the sleeping \proc may not see the ready \at.
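A standard way to close this race is for the \proc to advertise itself as idle \emph{before} re-checking the ready queue one last time, while wakers operate in the mirrored order. The following sketch uses hypothetical helpers, reuses the blocking primitive sketched above, and elides memory-ordering details:
\begin{lstlisting}
void proc_sleep( struct proc * this ) {
	// Step 1 : publish this proc on the idle list *before* re-checking
	idle_list_push( this );
	// Step 2 : re-check the ready queue; a thread made ready before
	// Step 1 became visible is found here, closing the race
	if( !ready_queue_empty() ) {
		idle_list_remove( this );
		return;   // abort the spurious sleep
	}
	// Step 3 : block; the primitive must not lose a wakeup that
	// arrives between Step 2 and here, e.g., a semaphore or eventfd
	proc_block( &this->idle );
}

// Run by any proc that makes a thread ready:
void notify_one() {
	struct proc * p = idle_list_pop();
	if( p ) proc_wake( &p->idle );
}
\end{lstlisting}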
Furthermore, the ``Race-to-Idle'' approach means that there is some
\section{Sleeping}

\subsection{Event FDs}

\subsection{Epoll}

\subsection{\texttt{io\_uring}}

\section{Reducing Latency}