\chapter{Micro-Benchmarks}\label{microbench}

The first step in evaluating this work is to test out small controlled cases to ensure the basics work properly.
This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All benchmarks are run on two distinct hardware platforms.
\begin{description}
\item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.

\item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches, respectively.
Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
\end{description}

For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.
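For example, a hypothetical invocation such as @taskset --cpu-list 0-63 ./benchmark@ restricts a run to CPUs 0 through 63; the actual CPU lists depend on how Linux numbers the \glspl{hthrd} on each machine.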

The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.


\section{Cycling latency}
\begin{figure}
\centering
\input{cycle.pstex_t}
\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \gls{at} unparks the next \gls{at} in the cycle before parking itself.}
\label{fig:cycle}
\end{figure}
The most basic evaluation of any ready queue is to measure the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a @yield@ operation, many systems use yielding as the most basic benchmark.
However, yielding can be treated as a special case by optimizing it away since the number of ready \glspl{at} does not change.
Not all systems perform this optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
This benchmark arranges a number of \glspl{at} into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
At runtime, each \gls{at} unparks the next \gls{at} before parking itself.
Unparking the next \gls{at} pushes that \gls{at} onto the ready queue, while the ensuing park results in a \gls{at} being popped from the ready queue.

Hence, the underlying runtime cannot rely on the number of ready \glspl{at} staying constant over the duration of the experiment.
In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking.
That is, the runtime cannot anticipate that the current \gls{at} will immediately park.
As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \gls{at} parks because of time-slicing or multiple \procs.
Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
To prevent any attempt at silently omitting ready-queue operations, the ring of \glspl{at} is made big enough so the \glspl{at} have time to fully park before being unparked again.
(Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.

To avoid this benchmark being affected by idle-sleep handling, the number of rings is several times greater than the number of \glspl{proc}.
This design avoids the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@wait()@
		@this.next.wake()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
\label{fig:cycle:code}
\end{figure}
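
To make the ring mechanics concrete, the following is a small stand-alone sketch of the cycle benchmark written in Go, one of the runtimes compared below.
It is an illustrative approximation rather than the harness used in the experiments: a single ring is used instead of many, a capacity-one channel per \gls{at} stands in for the @park@/@unpark@ semaphore, and the ring size and run duration are arbitrary.
\begin{cfa}
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type node struct {
	sem  chan struct{} // capacity-1 channel standing in for the park/unpark semaphore
	next *node
}

func main() {
	const ringSize = 1000 // assumed: large enough that every node parks before being woken again
	var total int64       // global.count
	var stop int32
	var wg sync.WaitGroup

	// Build the circular singly-linked list.
	ring := make([]node, ringSize)
	for i := range ring {
		ring[i].sem = make(chan struct{}, 1)
		ring[i].next = &ring[(i+1)%ringSize]
	}

	for i := range ring {
		wg.Add(1)
		go func(n *node) { // Thread.main()
			defer wg.Done()
			count := int64(0)
			for {
				<-n.sem                  // wait()
				n.next.sem <- struct{}{} // this.next.wake(); never blocks: one token per ring
				count++
				if atomic.LoadInt32(&stop) != 0 { // must_stop()
					break
				}
			}
			atomic.AddInt64(&total, count) // global.count += count
		}(&ring[i])
	}

	ring[0].sem <- struct{}{}          // inject the single token that starts the cycle
	time.Sleep(100 * time.Millisecond) // assumed run duration
	atomic.StoreInt32(&stop, 1)
	wg.Wait()
	fmt.Println("operations:", total)
}
\end{cfa}
Because a single wake token circulates around the ring, termination is trivial in this sketch: once the stop flag is set, the token makes one last pass and every \gls{at} exits; the real benchmark handles termination more carefully, as noted above.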

\subsection{Results}
\begin{figure}
\subfloat[][Throughput, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.jax.ops.pstex_t}
}
\label{fig:cycle:jax:ops}
}
\subfloat[][Throughput, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.jax.ops.pstex_t}
}
\label{fig:cycle:jax:low:ops}
}

\subfloat[][Scalability, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.jax.ns.pstex_t}
}
\label{fig:cycle:jax:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.jax.ns.pstex_t}
}
\label{fig:cycle:jax:low:ns}
}
\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For Throughput higher is better, for Scalability lower is better.}
\label{fig:cycle:jax}
\end{figure}

\begin{figure}
\subfloat[][Throughput, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.nasus.ops.pstex_t}
}
\label{fig:cycle:nasus:ops}
}
\subfloat[][Throughput, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.nasus.ops.pstex_t}
}
\label{fig:cycle:nasus:low:ops}
}

\subfloat[][Scalability, 100 cycles per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.nasus.ns.pstex_t}
}
\label{fig:cycle:nasus:ns}
}
\subfloat[][Scalability, 1 cycle per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.cycle.low.nasus.ns.pstex_t}
}
\label{fig:cycle:nasus:low:ns}
}
\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, with 5 \ats per cycle and different cycle counts. For Throughput higher is better, for Scalability lower is better.}
\label{fig:cycle:nasus}
\end{figure}
Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the throughput as a function of \proc count on Intel and AMD respectively, where each cycle has 5 \ats.
The graphs show traditional throughput on the top row and \newterm{scalability} on the bottom row, where scalability uses the same data but the Y axis is calculated as throughput over the number of \procs.
In this representation, perfect scalability should appear as a horizontal line, \eg, if doubling the number of \procs doubles the throughput, then the plotted value stays the same.
The left column shows results for 100 cycles per \proc, enough cycles to always keep every \proc busy.
The right column shows results for only 1 cycle per \proc, where the ready queues are expected to be near empty at all times.
The distinction is meaningful because the idle-sleep subsystem is expected to matter only in the right column, where spurious effects can cause a \proc to run out of work temporarily.

The performance goal of \CFA is to obtain performance equivalent to other, less fair schedulers, and that is what the results show.
Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show very good throughput and scalability for all runtimes.
The experimental setup prioritizes running on 2 \glspl{hthrd} per core before running on multiple sockets.
The effect of that setup is seen from 25 to 48 \procs, running on 24 cores with 2 \glspl{hthrd} per core.
This effect is repeated from 73 to 96 \procs, where it happens on the second CPU.
When running only a single cycle per \proc, most runtimes achieve lower throughput because of the idle-sleep mechanism.

Figure~\ref{fig:cycle:nasus} shows effectively the same story happening on AMD as on Intel.
The performance bumps due to cache topology appear at different \proc counts and there is a little more variability.
However, in all cases \CFA is still competitive with the other runtimes.


\section{Yield}
For completeness, the classic yield benchmark is included.
This benchmark is simpler than the cycle test: it creates many \glspl{at} that call @yield@.
As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
The only interesting variable in this benchmark is the number of \glspl{at} per \glspl{proc}, where ratios close to 1 mean the ready queue(s) can be empty.
This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
Figure~\ref{fig:yield:code} shows the pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		@yield()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
\label{fig:yield:code}
\end{figure}
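
For comparison, the following is a minimal stand-alone sketch of the same loop in Go, using @runtime.Gosched()@ as the yield; the \at count, run duration, and stopping mechanism are illustrative assumptions rather than the experimental parameters.
\begin{cfa}
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	nthreads := 100 * runtime.GOMAXPROCS(0) // assumed: 100 ats per proc
	var total int64                         // global.count
	var stop int32
	var wg sync.WaitGroup

	for i := 0; i < nthreads; i++ {
		wg.Add(1)
		go func() { // Thread.main()
			defer wg.Done()
			count := int64(0)
			for {
				runtime.Gosched() // yield()
				count++
				if atomic.LoadInt32(&stop) != 0 { // must_stop()
					break
				}
			}
			atomic.AddInt64(&total, count) // global.count += count
		}()
	}

	time.Sleep(100 * time.Millisecond) // assumed run duration
	atomic.StoreInt32(&stop, 1)
	wg.Wait()
	fmt.Println("yields:", total)
}
\end{cfa}
Note that @runtime.Gosched()@ places the yielding goroutine on the global run queue, which matches the Go behaviour discussed in the results below.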

\subsection{Results}
\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.jax.ops.pstex_t}
}
\label{fig:yield:jax:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ops.pstex_t}
}
\label{fig:yield:jax:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.jax.ns.pstex_t}
}
\label{fig:yield:jax:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.jax.ns.pstex_t}
}
\label{fig:yield:jax:low:ns}
}
\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and Scalability as a function of \proc count, using 100 \ats per \proc and 1 \at per \proc. For Throughput higher is better, for Scalability lower is better.}
\label{fig:yield:jax}
\end{figure}

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.nasus.ops.pstex_t}
}
\label{fig:yield:nasus:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.nasus.ops.pstex_t}
}
\label{fig:yield:nasus:low:ops}
}

\subfloat[][Scalability, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.nasus.ns.pstex_t}
}
\label{fig:yield:nasus:ns}
}
\subfloat[][Scalability, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.yield.low.nasus.ns.pstex_t}
}
\label{fig:yield:nasus:low:ns}
}
\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and Scalability as a function of \proc count, using 100 \ats per \proc and 1 \at per \proc. For Throughput higher is better, for Scalability lower is better.}
\label{fig:yield:nasus}
\end{figure}

Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count, where each run uses 100 \ats per \proc.
These results make it fairly obvious why I claim this benchmark is more artificial.
The throughput is dominated by the mechanism used to handle the @yield@.
\CFA does not have special handling for @yield@ and achieves very similar performance to the cycle benchmark.
Libfibre uses the fact that @yield@ does not change the number of ready fibres and bypasses the idle-sleep mechanism entirely, producing significantly better throughput.
Go puts yielding goroutines on a secondary global ready-queue, giving them lower priority.
The result is that multiple \glspl{hthrd} contend for the global queue and performance suffers drastically.
Based on the scalability, Tokio obtains the same poor performance and therefore likely handles @yield@ in a similar fashion.

When the number of \ats is reduced to 1 per \proc, the cost of idle sleep also comes into play in a very significant way.
If anything causes a \at migration, where two \ats end up on the same ready-queue, work-stealing starts occurring and causes every \at to shuffle around.
In the process, several \procs can go to sleep transiently if they fail to find where the \ats were shuffled to.
In \CFA, spurious bursts of latency can trick a \proc into helping, triggering this effect.
However, since user-level threading with an equal number of \ats and \procs is a somewhat degenerate case, especially when context switching very often, this result is not particularly meaningful and is only included for completeness.

Again, Figure~\ref{fig:yield:nasus} shows effectively the same story happening on AMD as it does on Intel.
\CFA fares slightly better with many \ats per \proc, but the performance is satisfactory on both architectures.

Since \CFA obtains the same satisfactory performance as the previous benchmark, this is still a success, albeit a less meaningful one.


\section{Churn}
The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} upfront and none of the \glspl{at} communicate with each other.

The Churn benchmark represents a more chaotic execution, where there is no relation between the last \gls{proc} on which a \gls{at} ran and blocked and the \gls{proc} that subsequently unblocks it.
With processor-specific ready-queues, when a \gls{at} is unblocked by a different \gls{proc}, the unblocking \gls{proc} must either ``steal'' the \gls{at} from another processor or find it on a global queue.
This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \gls{at} data structure.
In either case, this benchmark aims to highlight how each scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

This benchmark uses a fixed-size array of counting semaphores.
Each \gls{at} picks a random semaphore, @V@s it to unblock any waiting \at, and then @P@s on the semaphore.
This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}.
Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
Figure~\ref{fig:churn:code} shows the pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.

\begin{figure}
\begin{cfa}
Thread.main() {
	count := 0
	for {
		r := random() % len(spots)
		@spots[r].V()@
		@spots[r].P()@
		count ++
		if must_stop() { break }
	}
	global.count += count
}
\end{cfa}
\caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
\label{fig:churn:code}
\end{figure}
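
As with the previous benchmarks, the following is a minimal stand-alone sketch of the churn loop in Go; buffered channels stand in for the counting semaphores, and the semaphore count, \at count, and run duration are illustrative assumptions.
The shutdown handling also differs from the real benchmark: a select on a stop channel prevents a \at from blocking forever on a drained semaphore once the run ends.
\begin{cfa}
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	nprocs := runtime.GOMAXPROCS(0)
	const nspots = 10           // assumed number of semaphores
	nthreads := nspots + nprocs // #ats >= #semaphores + #procs, as required
	var total int64             // global.count
	stop := make(chan struct{})
	var wg sync.WaitGroup

	// Each buffered channel acts as a counting semaphore: send == V, receive == P.
	spots := make([]chan struct{}, nspots)
	for i := range spots {
		spots[i] = make(chan struct{}, nthreads)
	}

	for i := 0; i < nthreads; i++ {
		wg.Add(1)
		go func(seed int64) { // Thread.main()
			defer wg.Done()
			prng := rand.New(rand.NewSource(seed))
			count := int64(0)
			for {
				r := prng.Intn(nspots) // r := random() % len(spots)
				spots[r] <- struct{}{} // spots[r].V()
				select {
				case <-spots[r]: // spots[r].P()
				case <-stop:     // run is over: avoid blocking on a drained semaphore
					atomic.AddInt64(&total, count)
					return
				}
				count++
				select {
				case <-stop: // must_stop()
					atomic.AddInt64(&total, count)
					return
				default:
				}
			}
		}(int64(i))
	}

	time.Sleep(100 * time.Millisecond) // assumed run duration
	close(stop)
	wg.Wait()
	fmt.Println("operations:", total)
}
\end{cfa}
Giving each channel a capacity equal to the number of \ats ensures a @V@ never blocks, so the semaphore count can exceed 1 as in the pseudo code.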

\subsection{Results}
Figure~\ref{fig:churn:jax} shows the throughput as a function of \proc count, where each run uses 100 \ats per \proc.

\begin{figure}
\subfloat[][Throughput, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.jax.ops.pstex_t}
}
\label{fig:churn:jax:ops}
}
\subfloat[][Throughput, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.jax.ops.pstex_t}
}
\label{fig:churn:jax:low:ops}
}

\subfloat[][Latency, 100 \ats per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.jax.ns.pstex_t}
}
\label{fig:churn:jax:ns}
}
\subfloat[][Latency, 1 \at per \proc]{
\resizebox{0.5\linewidth}{!}{
\input{result.churn.low.jax.ns.pstex_t}
}
\label{fig:churn:jax:low:ns}
}
\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine.
Throughput is the total number of operations per second across all cores. Latency is the duration of each operation.}
\label{fig:churn:jax}
\end{figure}

\todo{results discussion}

\section{Locality}

\todo{code, setup, results}

\section{Transfer}
The last benchmark is more of an experiment than a benchmark.
It tests the behaviour of the schedulers for a misbehaved workload.
In this workload, one \gls{at} is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership.
The leader \gls{at} then picks a new \gls{at} to be the next leader (the ``spinner'') and the cycle repeats.
The benchmark comes in two flavours for the non-leader \glspl{at}:
once they have acknowledged the leader, they either block on a semaphore or spin while yielding.

The experiment is designed to evaluate the short-term load-balancing of a scheduler.
Indeed, schedulers where the runnable \glspl{at} are partitioned on the \glspl{proc} may need to balance the \glspl{at} for this experiment to terminate.
This problem occurs because the spinning \gls{at} is effectively preventing the \gls{proc} from running any other \glspl{thrd}.
In the semaphore flavour, the number of runnable \glspl{at} eventually dwindles down to only the leader.
This scenario is a simpler case to handle for schedulers since \glspl{proc} eventually run out of work.
In the yielding flavour, the number of runnable \glspl{at} stays constant.
This scenario is a harder case to handle because corrective measures must be taken even when work is available.
Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

\todo{code, setup, results}

\begin{figure}
\begin{cfa}
Thread.lead() {
	this.idx_seen = ++lead_idx
	if lead_idx > stop_idx {
		done := true
		return
	}

	// Wait for everyone to acknowledge my leadership
	start := timeNow()
	for t in threads {
		while t.idx_seen != lead_idx {
			asm pause
			if (timeNow() - start) > 5 seconds { error() }
		}
	}

	// pick next leader
	leader := threads[ prng() % len(threads) ]

	// wake everyone
	if exhaust {
		for t in threads {
			if t != me { t.wake() }
		}
	}
}

Thread.wait() {
	this.idx_seen := lead_idx
	if exhaust { wait() }
	else { yield() }
}

Thread.main() {
	while !done {
		if leader == me { this.lead() }
		else { this.wait() }
	}
}
\end{cfa}
\caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
\label{fig:transfer:code}
\end{figure}
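
To make the leader/follower handshake concrete, the following is a rough stand-alone sketch of the transfer experiment in Go.
The \at count, round count, and flavour flag are arbitrary assumptions, and two details deviate from the pseudo code: the leader re-wakes a \at whose acknowledgement is stale instead of reporting an error after 5 seconds, and @runtime.Gosched()@ stands in for @asm pause@.
\begin{cfa}
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
)

const (
	nthreads = 8    // assumed number of ats
	stopIdx  = 1000 // assumed number of leadership rounds
	exhaust  = true // true: followers block on a semaphore; false: followers spin yielding
)

var (
	leadIdx int64                   // current leadership round
	leader  int64                   // index of the current leader
	done    int32
	idxSeen [nthreads]int64         // per-at acknowledgement of the current round
	sems    [nthreads]chan struct{} // per-at binary semaphores for the blocking flavour
)

func wake(t int) {
	select {
	case sems[t] <- struct{}{}:
	default: // already has a pending wake-up
	}
}

func lead(me int) {
	round := atomic.AddInt64(&leadIdx, 1)
	atomic.StoreInt64(&idxSeen[me], round)
	if round > stopIdx {
		atomic.StoreInt32(&done, 1)
		for t := 0; t < nthreads; t++ { // release any blocked followers
			wake(t)
		}
		return
	}
	// Spin until every other at has acknowledged this round; re-wake stale
	// acknowledgements so the handshake cannot wedge in this sketch.
	for t := 0; t < nthreads; t++ {
		for atomic.LoadInt64(&idxSeen[t]) != round {
			if exhaust {
				wake(t)
			}
			runtime.Gosched() // stand-in for `asm pause`
		}
	}
	// Pick the next leader, then wake everyone in the blocking flavour.
	atomic.StoreInt64(&leader, int64(rand.Intn(nthreads)))
	if exhaust {
		for t := 0; t < nthreads; t++ {
			if t != me {
				wake(t)
			}
		}
	}
}

func wait(me int) {
	atomic.StoreInt64(&idxSeen[me], atomic.LoadInt64(&leadIdx)) // acknowledge
	if exhaust {
		<-sems[me] // block until the leader wakes us
	} else {
		runtime.Gosched() // spin yielding
	}
}

func main() {
	var wg sync.WaitGroup
	for i := range sems {
		sems[i] = make(chan struct{}, 1)
	}
	for i := 0; i < nthreads; i++ {
		wg.Add(1)
		go func(me int) { // Thread.main()
			defer wg.Done()
			for atomic.LoadInt32(&done) == 0 {
				if atomic.LoadInt64(&leader) == int64(me) {
					lead(me)
				} else {
					wait(me)
				}
			}
		}(i)
	}
	wg.Wait()
	fmt.Println("rounds completed:", atomic.LoadInt64(&leadIdx)-1)
}
\end{cfa}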

\subsection{Results}
Figure~\ref{fig:transfer:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.

\todo{results discussion}