Changeset c4072d8e for doc/theses/thierry_delisle_PhD
- Timestamp: Jul 23, 2022, 5:35:22 PM
- Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
- Children: 8b3de2a
- Parents: 0809c4e
- File: 1 edited
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test out small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket, with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket, with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is spread across as few NUMA nodes as possible.
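For example, on the AMD machine a run needing 64 \glspl{hthrd} might be launched as @taskset -c 0-63 ./cycle@; the exact core list depends on the machine's \gls{hthrd} numbering, and @./cycle@ is only a stand-in for the benchmark binary.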
The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures L2 cache misses served by an L3 cache on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.

…

\section{Cycle}
\begin{figure}
…
\label{fig:cycle}
\end{figure}

The most basic evaluation of any ready queue is to measure the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a @yield@ operation, many systems use yield as the most basic benchmark.
However, yielding can be treated as a special case and optimized away (dead code) since the number of ready \glspl{at} does not change.
Not all systems perform this optimization, but those that do gain an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \glspl{at} into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \gls{at} unparks the next \gls{at} before parking itself.
Unparking the next \gls{at} pushes that \gls{at} onto the ready queue, while the ensuing park causes the runtime to pop another \gls{at} from the ready queue.
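As a sketch of the setup phase, in the same pseudo code as the figures (the @threads@ array and @next@ field are assumptions of the sketch), the ring is closed by linking each \gls{at} to its successor before any \gls{at} starts running:

\begin{cfa}
ring_setup() {
	n := len(threads)
	for i in 0 .. n-1 {
		threads[i].next = threads[(i + 1) % n]   // last thread links back to the first
	}
}
\end{cfa}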
Hence, the underlying runtime cannot rely on the number of ready \glspl{at} staying constant over the duration of the experiment.
In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking.
That is, the runtime cannot anticipate that the current \gls{at} will immediately park.
As well, the size of the cycle is decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \gls{at} parks because of time-slicing or multiple \procs.
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt at silently omitting ready-queue operations, the ring of \glspl{at} is made big enough so the \glspl{at} have time to fully park before being unparked again.
(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

To avoid this benchmark being affected by idle-sleep handling, the number of rings is kept several times greater than the number of \glspl{proc}.
This design avoids the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.
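For illustration, one possible shape for this termination handling is the following sketch, where each \gls{at} owns a binary semaphore @sem@ (an assumed field) and the @V@ is placed before the loop-condition check so a terminating \gls{at} still releases its successor:

\begin{cfa}
Thread.main() {
	count := 0
	for {
		this.sem.P()        // replaces wait(); a pending V means P does not block
		this.next.sem.V()   // replaces this.next.wake()
		count ++
		if must_stop() { break }   // the V above runs even on the last iteration
	}
	global.count += count
}
\end{cfa}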
\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@wait()@
		@this.next.wake()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
\end{figure}

\subsection{Results}
Figure~\ref{fig:cycle:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
…
	\label{fig:cycle:jax:low:ns}
}
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput as a function of \proc count with 100 cycles per \proc and 5 \ats per cycle.}
\label{fig:cycle:jax}
\end{figure}

\todo{results discussion}

\section{Yield}
For completion, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \glspl{at} that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \glspl{at} per \gls{proc}, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows the pseudo code for this benchmark, where the @wait/next.wake@ pair is replaced by @yield@.
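For reference, a common form of such a shortcut is sketched below (hypothetical runtime code, not taken from \CFA): when no other \gls{at} is ready, the push/pop pair is skipped entirely and the yield becomes a nop.

\begin{cfa}
yield() {
	if len(ready_queue) == 0 { return }   // no other ready thread: skip push/pop
	ready_queue.push(running)             // otherwise, perform the real pair
	run( ready_queue.pop() )
}
\end{cfa}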
\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
\end{figure}

\subsection{Results}
Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count, where each run uses 100 \ats per \proc.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
…
\label{fig:yield:jax}
\end{figure}

\todo{results discussion}

\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} upfront and none of the \glspl{at} communicate with each other.

The Churn benchmark represents a more chaotic execution, where there is no relation between the last \gls{proc} on which a \gls{at} ran and blocked and the \gls{proc} that subsequently unblocks it.
With processor-specific ready-queues, when a \gls{at} is unblocked by a different \gls{proc} than the one it last ran on, the unblocking \gls{proc} must either ``steal'' the \gls{at} from another processor or find it on a global queue.
This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \gls{at} data structure.
In either case, this benchmark aims to highlight how each scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \gls{at} picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the same semaphore.
This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows the pseudo code for this benchmark, where the @yield@ of the previous benchmark is replaced by @V@ and @P@.
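A sketch of the corresponding setup makes this constraint explicit (@spots@ is the semaphore array from Figure~\ref{fig:churn:code}; @nthreads@, @nsems@ and @nprocs@ are assumed parameters, and the initial semaphore count of 0 is an assumption):

\begin{cfa}
churn_setup( nthreads, nsems, nprocs ) {
	// with fewer threads, all could block inside P with no thread left to V
	assert( nthreads >= nsems + nprocs )
	spots = new_semaphore_array( nsems )   // counting semaphores, initial count 0
}
\end{cfa}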
\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@
		@spots[r].P()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
\end{figure}

\subsection{Results}
Figure~\ref{fig:churn:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
	\resizebox{0.5\linewidth}{!}{
		\input{result.churn.jax.ops.pstex_t}
	}
	\label{fig:churn:jax:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
	\resizebox{0.5\linewidth}{!}{
		\input{result.churn.low.jax.ops.pstex_t}
	}
	\label{fig:churn:jax:low:ops}
}

\subfloat[][Latency, 100 \ats per \proc]{
	\resizebox{0.5\linewidth}{!}{
		\input{result.churn.jax.ns.pstex_t}
	}
	\label{fig:churn:jax:ns}
}
\subfloat[][Latency, 1 \at per \proc]{
	\resizebox{0.5\linewidth}{!}{
		\input{result.churn.low.jax.ns.pstex_t}
	}
	\label{fig:churn:jax:low:ns}
}
\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine.
Throughput is the total operations per second across all cores. Latency is the duration of each operation.}
\label{fig:churn:jax}
\end{figure}

\todo{results discussion}

\section{Locality}

\todo{code, setup, results}

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one of the \glspl{at} is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership.
The leader \gls{at} then picks a new \gls{at} to be the ``spinner'' and the cycle repeats.
The benchmark comes in two flavours for the non-leader \glspl{at}:
once they have acknowledged the leader, they either block on a semaphore or spin yielding.

The experiment is designed to evaluate the short-term load balancing of a scheduler.
Indeed, schedulers where the runnable \glspl{at} are partitioned over the \glspl{proc} may need to balance the \glspl{at} for this experiment to terminate.
This problem occurs because the spinning \gls{at} effectively prevents the \gls{proc} from running any other \glspl{thrd}.
In the semaphore flavour, the number of runnable \glspl{at} eventually dwindles down to only the leader.
This scenario is the simpler case for schedulers to handle, since \glspl{proc} eventually run out of work.
In the yielding flavour, the number of runnable \glspl{at} stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

\todo{code, setup, results}

Figure~\ref{fig:transfer:code} shows the pseudo code for this benchmark.

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done := true
		return
	}

	// Wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}

	// pick next leader
	leader := threads[ prng() % len(threads) ]

	// wake everyone
	if !exhaust {
		for t in threads {
			if t != me { t.wake() }
		}
	}
}

Thread.wait() {
	this.idx_seen = lead_idx
	if exhaust { wait() }
	else { yield() }
}

Thread.main() {
	while !done {
		if leader == me { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}

\subsection{Results}
Figure~\ref{fig:transfer:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.

\todo{results discussion}