\chapter{Scheduling in practice}\label{practice}
The scheduling algorithm described in Chapter~\ref{core} addresses scheduling in a stable state.
However, it does not address problems that occur when the system changes state.
Indeed, the \CFA runtime supports expanding and shrinking the number of \procs, both manually and, to some extent, automatically.
This entails that the scheduling algorithm must support these transitions.

More precisely, \CFA supports adding \procs using the RAII object @processor@.
These objects can be created and destroyed at any time.
They are normally created as automatic stack variables, but this is not a requirement.

The consequence is that the scheduler and \io subsystems must support \procs coming in and out of existence.

\section{Manual Resizing}
The consequence of dynamically changing the number of \procs is that all internal arrays sized based on the number of \procs need to be \texttt{realloc}ed.
This also means that any references into these arrays, whether pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing because there is no guarantee the \proc causing the shrink had the highest index; therefore, indexes need to be reassigned to preserve contiguous indexing.}.
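As a concrete illustration, the following sketch shows the kind of fix-up shrinking requires; the @procs@ array, @nprocs@ count, and @id@ field are hypothetical stand-ins for the runtime's internal bookkeeping, not the actual \CFA data structures.
\begin{lstlisting}
#include <stdlib.h>

// Hypothetical bookkeeping: illustrative names only.
struct proc { size_t id; /* ... */ };
static struct proc ** procs;   // array sized by the number of procs
static size_t nprocs;

void remove_proc( size_t idx ) {
	size_t last = nprocs - 1;
	if( idx != last ) {
		// The leaving proc need not have the highest index: move the
		// last proc into the hole and fix its cached index so that
		// indexing stays contiguous.
		procs[ idx ] = procs[ last ];
		procs[ idx ]->id = idx;
	}
	nprocs = last;
	// Shrink the storage; pointers into the old array may now be stale.
	procs = realloc( procs, nprocs * sizeof( procs[0] ) );
}
\end{lstlisting}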
There are no performance requirements, within reason, for resizing, since it is usually considered part of setup and teardown.
However, this operation has strict correctness requirements, since shrinking and idle sleep can easily lead to deadlocks.
It should also avoid, as much as possible, any effect on performance while the number of \procs remains constant.
This latter requirement prohibits simple solutions, such as adding a global lock around these arrays.

\subsection{Read-Copy-Update}
One solution is to use the Read-Copy-Update\cite{wiki:rcu} pattern.
In this pattern, resizing is done by creating a copy of the internal data structures, updating the copy with the desired changes, and then attempting an Indiana Jones switch to replace the original with the copy.
This approach has the potential advantage that the switch itself may not need any synchronization.
The switch does, however, imply a race where \procs could still be using the original data structure after the copy is switched in.
The important question then becomes whether this race can be recovered from.
If the changes that arrive late can be transferred from the original to the copy, then this solution works.
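The following sketch shows the general shape of such a switch, assuming a hypothetical @cluster_data@ structure holding the arrays and hypothetical @duplicate@ and @apply_resize@ helpers; it is a minimal illustration of the pattern, not the \CFA implementation. Note that the array holds pointers to the subqueues, for the reason discussed below.
\begin{lstlisting}
#include <stdatomic.h>
#include <stddef.h>

// Hypothetical structures and helpers: illustrative names only.
struct subqueue;
struct cluster_data {
	size_t nprocs;
	struct subqueue ** queues;   // pointers to subqueues, not subqueues
};
struct cluster_data * duplicate( struct cluster_data * );
void apply_resize( struct cluster_data *, size_t );

static _Atomic(struct cluster_data *) current;

void resize( size_t new_nprocs ) {
	struct cluster_data * old  = atomic_load( &current );
	struct cluster_data * copy = duplicate( old );    // copy the arrays
	apply_resize( copy, new_nprocs );                 // update the copy
	atomic_store( &current, copy );                   // the switch
	// 'old' cannot be reclaimed yet: some procs may still be using it,
	// and anything dequeued from it after the switch must be transferred.
}
\end{lstlisting}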
For linked lists, dequeuing is somewhat of a problem.
Dequeuing from the original does not necessarily update the copy, which could lead to multiple \procs dequeuing the same \at.
Fixing this requires making the array contain pointers to the subqueues rather than the subqueues themselves.

Another challenge is that the original must be kept until all \procs have witnessed the change.
This is a straightforward memory-reclamation challenge, but it does mean that every operation needs \emph{some} form of synchronization.
If each of these operations needs synchronization, then it is possible a simpler solution achieves the same performance.
Furthermore, in addition to the classic challenge of memory reclamation, transferring the data remaining in the original to the copy before reclaiming the original poses additional challenges, especially merging subqueues while having a minimal impact on fairness and locality.
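To make the reclamation constraint concrete, the following sketch shows one well-known scheme, epoch-based reclamation, using hypothetical per-\proc epoch counters; even the read side must periodically update its counter, which is exactly the \emph{some} form of synchronization mentioned above.
\begin{lstlisting}
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

// Hypothetical epoch-based reclamation: illustrative names only.
static _Atomic size_t global_epoch;
struct proc_tls { _Atomic size_t epoch; };   // one per proc

void quiescent( struct proc_tls * tls ) {
	// Each proc periodically publishes the epoch it has witnessed:
	// a cost paid on every operation, even when no resize is happening.
	atomic_store( &tls->epoch, atomic_load( &global_epoch ) );
}

bool safe_to_reclaim( struct proc_tls * all[], size_t n, size_t target ) {
	// The original can be reclaimed once every proc has witnessed
	// an epoch at least as recent as the switch.
	for( size_t i = 0; i < n; i++ )
		if( atomic_load( &all[i]->epoch ) < target ) return false;
	return true;
}
\end{lstlisting}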
\subsection{Readers-Writer Lock}
A simpler approach is to use a \newterm{Readers-Writer Lock}\cite{wiki:rwlock}, where resizing requires acquiring the lock as a writer, while simply enqueuing or dequeuing \ats requires acquiring the lock as a reader.
Using a readers-writer lock solves the problem of dynamically resizing and leaves the challenge of finding or building a lock with sufficiently good read-side performance.
Since this is not a very complex challenge, and an ad hoc solution is perfectly acceptable, building a readers-writer lock was the path taken.

To maximize reader scalability, the readers should not contend with each other when attempting to acquire and release the critical section.
This effectively requires that each reader have its own piece of memory to mark as locked and unlocked.
To acquire the lock, readers wait for writers to finish the critical section and then acquire their local spinlock.
Writers acquire the global lock first, giving them mutual exclusion among themselves, and then acquire each of the local reader locks.
Acquiring all the local locks guarantees mutual exclusion between the readers and the writer, while the wait on the read side prevents readers from continuously starving the writer.
The listings below show the reader-side and writer-side operations of this lock.
\begin{lstlisting}
void read_lock() {
	// Step 1 : wait while a writer holds the lock; this wait
	// prevents readers from continuously starving the writer.
	while( write_lock ) Pause();

	// May need a memory fence here

	// Step 2 : acquire our local lock; this gives mutual
	// exclusion with any concurrent writer.
	while( atomic_xchg( &tls.lock ) ) Pause();
}

void read_unlock() {
	// Release the local lock, unblocking any waiting writer.
	tls.lock = false;
}
\end{lstlisting}
\begin{lstlisting}
void write_lock() {
	// Step 1 : acquire the global lock; this gives mutual
	// exclusion among writers.
	while( atomic_xchg( &write_lock ) ) Pause();

	// Step 2 : acquire each of the local reader locks; this
	// gives mutual exclusion with all readers.
	for( t in all_tls ) {
		while( atomic_xchg( &t.lock ) ) Pause();
	}
}

void write_unlock() {
	// Step 1 : release the local locks, unblocking readers.
	for( t in all_tls ) {
		t.lock = false;
	}

	// Step 2 : release the global lock, unblocking writers.
	write_lock = false;
}
\end{lstlisting}
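For illustration, the resizing and scheduling paths would then use the lock as follows; @grow_arrays@, @choose_subqueue@, and @push@ are hypothetical helpers standing in for the actual operations.
\begin{lstlisting}
// Hypothetical usage sketch: names are illustrative only.
void add_proc( struct proc * p ) {
	write_lock();                    // exclude all readers and writers
	grow_arrays( p );                // safe to realloc and fix references
	write_unlock();
}

void schedule( struct task * t ) {
	read_lock();                     // uncontended among readers
	push( choose_subqueue(), t );    // arrays cannot move while held
	read_unlock();
}
\end{lstlisting}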
\section{Idle-Sleep}

\subsection{Tracking Sleepers}

\subsection{Event FDs}

\subsection{Epoll}

\subsection{\texttt{io\_uring}}

\subsection{Reducing Latency}