\chapter{Micro-Benchmarks}\label{microbench}

The first step of evaluation is always to test out small controlled cases, to ensure that the basics are working properly.
This section presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All of these benchmarks are run on two distinct hardware environments, an AMD and an Intel machine.

For all benchmarks, \texttt{taskset} is used to limit the experiment to 1 NUMA node with no hyperthreading.
If more \glspl{hthrd} are needed, then 1 NUMA node with hyperthreading is used.
If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA nodes as needed.


\paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux kernel 5.8.0-55.
These EPYCs have 64 cores per CPU and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
The CPUs each have 4 MB, 64 MB and 512 MB of L1, L2 and L3 cache respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.

\paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux kernel 5.8.0-55.
These Xeon Platinums have 24 cores per CPU and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
The CPUs each have 3 MB, 96 MB and 132 MB of L1, L2 and L3 cache respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.

This limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
Indeed, while on both architectures an L2 cache miss served by the L3 cache of a different CPU incurs significant latency, on AMD a cache miss served by a different L3 instance on the same CPU also incurs high latency.


\section{Cycle}
\begin{figure}
	\centering
	\input{cycle.pstex_t}
	\caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \gls{at} unparks the next \gls{at} in the cycle before parking itself.}
	\label{fig:cycle}
\end{figure}
The most basic evaluation of any ready queue is to measure the latency needed to push and pop one element from the ready queue.
Since these two operations also describe a \texttt{yield} operation, many systems use this as the most basic benchmark.
However, yielding can be treated as a special case, since it also carries the information that the number of ready \glspl{at} will not change.
Not all systems use this information, but those which do may appear to have better performance than they would for disconnected push/pop pairs.
For this reason, I chose a different first benchmark, which I call the Cycle Benchmark.
This benchmark arranges many \glspl{at} into multiple rings of \glspl{at}.
Each ring is effectively a circular singly-linked list.
At runtime, each \gls{at} unparks the next \gls{at} before parking itself.
This corresponds to the desired pair of ready-queue operations.
Unparking the next \gls{at} requires pushing that \gls{at} onto the ready queue, and the ensuing park causes the runtime to pop a \gls{at} from the ready queue.
Figure~\ref{fig:cycle} shows a visual representation of this arrangement.

The goal of this ring is that the underlying runtime cannot rely on the guarantee that the number of ready \glspl{at} stays constant over the duration of the experiment.
In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking.
The size of the cycle is also decided based on this race: cycles that are too small may see the chain of unparks go full circle before the first \gls{at} can park.
While this is not a correctness problem, since every runtime system must handle that race, it could lead to pushes and pops being optimized away.
Since silently omitting ready-queue operations would throw off the measurement of these operations, the ring of \glspl{at} must be big enough that the \glspl{at} have time to fully park before they are unparked.
Note that this problem is only present on SMP machines and is significantly mitigated by the fact that there are multiple rings in the system.

To prevent this benchmark from being dominated by idle-sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
Beyond this point, adding more rings serves to further mitigate the idle-sleep handling.
This avoids the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

The actual benchmark is more complicated in order to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} operations with respect to the loop condition.
Figure~\ref{fig:cycle:code} shows pseudo code for this benchmark.

\begin{figure}
	\begin{lstlisting}
		Thread.main() {
			count := 0
			for {
				wait()                   // park until unparked by the previous at in the cycle
				this.next.wake()         // unpark the next at in the cycle
				count ++
				if must_stop() { break }
			}
			global.count += count
		}
	\end{lstlisting}
	\caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
	\label{fig:cycle:code}
\end{figure}
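
To make the termination handling concrete, the following is a minimal runnable sketch of the cycle experiment, written in Go purely as a stand-in for the \CFA code: goroutines play the role of \glspl{at}, and a buffered channel of capacity one per \gls{at} stands in for the binary semaphore that replaces raw \texttt{park}/\texttt{unpark}.
The ring size, the number of rings and the one-second measurement window are illustrative choices rather than the parameters of the actual benchmark harness.
\begin{lstlisting}
// Sketch of the Cycle benchmark; Go stands in for CFA and all constants are illustrative.
// A capacity-one channel per at acts as the park/unpark binary semaphore.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

const atsPerRing = 5 // size of each cycle

func main() {
	procs := runtime.GOMAXPROCS(0)
	rings := 100 * procs // at least one ring per proc
	var stop atomic.Bool
	var total atomic.Uint64
	var wg sync.WaitGroup

	for r := 0; r < rings; r++ {
		sems := make([]chan struct{}, atsPerRing)
		for i := range sems {
			sems[i] = make(chan struct{}, 1)
		}
		for i := 0; i < atsPerRing; i++ {
			wg.Add(1)
			go func(me, next int) {
				defer wg.Done()
				count := uint64(0)
				for {
					<-sems[me]               // park: wait to be woken
					sems[next] <- struct{}{} // unpark the next at in the ring
					count++
					if stop.Load() {
						break
					}
				}
				total.Add(count)
			}(i, (i+1)%atsPerRing)
		}
		sems[0] <- struct{}{} // seed one wake-up so the ring starts turning
	}

	time.Sleep(time.Second) // measurement window
	stop.Store(true)
	wg.Wait()
	fmt.Printf("%d procs: %d ready-queue push/pop pairs\n", procs, total.Load())
}
\end{lstlisting}
Seeding exactly one wake-up token per ring means at most one token is ever queued per channel, so the capacity-one channels never block on the \texttt{V} side, while the race between unparking the next \gls{at} and parking the current one is preserved.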


\subsection{Results}
\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ops.pstex_t}
		}
		\label{fig:cycle:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:ns}
	}
	\subfloat[][Latency, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.cycle.low.jax.ns.pstex_t}
		}
		\label{fig:cycle:jax:low:ns}
	}
	\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and latency as a function of \proc count, using 100 cycles per \proc and 5 \ats per cycle.}
	\label{fig:cycle:jax}
\end{figure}
Figure~\ref{fig:cycle:jax} shows the throughput as a function of \proc count, with the following constants:
each run uses 100 cycles per \proc and 5 \ats per cycle.

\todo{results discussion}

\section{Yield}
For completeness, I also include the yield benchmark.
This benchmark is much simpler than the cycle tests: it simply creates many \glspl{at} that call \texttt{yield}.
As mentioned in the previous section, this benchmark may be less representative of usages that only make limited use of \texttt{yield}, due to potential shortcuts in the routine.
Its only interesting variable is the number of \glspl{at} per \gls{proc}, where ratios close to 1 mean the ready queue(s) could be empty.
This sometimes puts more strain on the idle-sleep handling, compared to scenarios where there is clearly plenty of work to be done.
Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the ``wait/wake-next'' pair is simply replaced by a yield.

\begin{figure}
	\begin{lstlisting}
		Thread.main() {
			count := 0
			for {
				yield()                  // reschedule this at on the ready queue
				count ++
				if must_stop() { break }
			}
			global.count += count
		}
	\end{lstlisting}
	\caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
	\label{fig:yield:code}
\end{figure}
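
Following the same pattern, a possible Go rendering of the yield benchmark simply replaces the channel operations of the previous sketch with \texttt{runtime.Gosched()}, Go's equivalent of a bare yield; as before, the \gls{at} count and measurement window are illustrative choices and Go merely stands in for the \CFA code.
\begin{lstlisting}
// Sketch of the Yield benchmark; Go stands in for CFA and all constants are illustrative.
// The wait/wake-next pair is replaced by a bare yield back to the scheduler.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	procs := runtime.GOMAXPROCS(0)
	ats := 100 * procs // 100 ats per proc; a ratio of 1 stresses idle sleep instead
	var stop atomic.Bool
	var total atomic.Uint64
	var wg sync.WaitGroup

	for i := 0; i < ats; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count := uint64(0)
			for !stop.Load() {
				runtime.Gosched() // yield: reschedule this at
				count++
			}
			total.Add(count)
		}()
	}

	time.Sleep(time.Second) // measurement window
	stop.Store(true)
	wg.Wait()
	fmt.Printf("%d procs: %d yields\n", procs, total.Load())
}
\end{lstlisting}
As noted above, a runtime may special-case its yield primitive, so results from such a sketch are not directly comparable to the cycle version.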

\subsection{Results}
\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ops.pstex_t}
		}
		\label{fig:yield:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:ns}
	}
	\subfloat[][Latency, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.yield.low.jax.ns.pstex_t}
		}
		\label{fig:yield:jax:low:ns}
	}
	\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and latency as a function of \proc count, for 100 \ats per \proc and 1 \at per \proc.}
	\label{fig:yield:jax}
\end{figure}
Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count, with the following constants:
each run uses 100 \ats per \proc.

\todo{results discussion}


\section{Churn}
The Cycle and Yield benchmarks represent an ``easy'' scenario for a scheduler, \eg, an embarrassingly parallel application.
In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} up-front and none of the \glspl{at} communicate with each other.

The Churn benchmark represents more chaotic usages, where there is no relation between the last \gls{proc} on which a \gls{at} ran and the \gls{proc} that unblocked it.
When a \gls{at} is unblocked from a different \gls{proc} than the one on which it last ran, the unblocking \gls{proc} must either ``steal'' the \gls{at} or place it on a remote queue.
This can result in either contention on the remote queue or \glspl{rmr} on the \gls{at} data structure.
In either case, this benchmark aims to highlight how each scheduler handles these cases, since both can lead to performance degradation if not handled correctly.

To achieve this, the benchmark uses a fixed-size array of semaphores.
Each \gls{at} picks a random semaphore, \texttt{V}s it to unblock any waiting \at and then \texttt{P}s on the same semaphore.
This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
For this benchmark to work, however, the number of \glspl{at} must be greater than or equal to the number of semaphores plus the number of \glspl{proc}.
Note that the nature of these semaphores means the counter can go beyond 1, which could lead to calls to \texttt{P} not blocking.

\todo{code, setup, results}
\begin{lstlisting}
	Thread.main() {
		count := 0
		for {
			r := random() % len(spots)
			spots[r].V()             // unblock any at waiting on this spot
			spots[r].P()             // then wait to be unblocked in turn
			count ++
			if must_stop() { break }
		}
		global.count += count
	}
\end{lstlisting}
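
A possible runnable rendering of this benchmark, again in Go as a stand-in for the \CFA code, uses buffered channels as the counting semaphores; the channel capacity is chosen so a \texttt{V} never blocks, and a \texttt{done} channel avoids hanging on the final \texttt{P} at termination.
The spot count, \gls{at} count and measurement window below are illustrative choices.
\begin{lstlisting}
// Sketch of the Churn benchmark; Go stands in for CFA and all constants are illustrative.
// Buffered channels act as the counting semaphores ("spots").
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	procs := runtime.GOMAXPROCS(0)
	ats := 100 * procs                    // satisfies: ats >= len(spots) + procs
	spots := make([]chan struct{}, procs) // fixed-size array of semaphores
	for i := range spots {
		// Capacity ats: a V (send) never blocks, and the counter can
		// exceed one, so a later P may find a token and not block.
		spots[i] = make(chan struct{}, ats)
	}

	done := make(chan struct{})
	var total atomic.Uint64
	var wg sync.WaitGroup

	for i := 0; i < ats; i++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			prng := rand.New(rand.NewSource(seed))
			count := uint64(0)
			for {
				r := prng.Intn(len(spots))
				spots[r] <- struct{}{} // V: unblock an at waiting on this spot
				select {
				case <-spots[r]: // P: wait to be unblocked in turn
					count++
				case <-done: // measurement over: do not wait for a token
					total.Add(count)
					return
				}
			}
		}(int64(i))
	}

	time.Sleep(time.Second) // measurement window
	close(done)
	wg.Wait()
	fmt.Printf("%d procs: %d V/P pairs\n", procs, total.Load())
}
\end{lstlisting}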

\begin{figure}
	\subfloat[][Throughput, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:ops}
	}
	\subfloat[][Throughput, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ops.pstex_t}
		}
		\label{fig:churn:jax:low:ops}
	}

	\subfloat[][Latency, 100 \ats per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:ns}
	}
	\subfloat[][Latency, 1 \at per \proc]{
		\resizebox{0.5\linewidth}{!}{
			\input{result.churn.low.jax.ns.pstex_t}
		}
		\label{fig:churn:jax:low:ns}
	}
	\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine. Throughput is the total number of operations per second across all cores. Latency is the duration of each operation.}
	\label{fig:churn:jax}
\end{figure}

\section{Locality}

\todo{code, setup, results}

\section{Transfer}
The last benchmark is more accurately characterized as an experiment than a benchmark.
It tests the behavior of the schedulers for a particularly misbehaved workload.
In this workload, one of the \glspl{at} is selected at random to be the leader.
The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership.
The leader \gls{at} then picks a new \gls{at} to be the next leader and the cycle repeats.

The benchmark comes in two flavours for the behavior of the non-leader \glspl{at}:
once they have acknowledged the leader, they either block on a semaphore or yield repeatedly.

This experiment is designed to evaluate the short-term load balancing of the scheduler.
Indeed, schedulers where the runnable \glspl{at} are partitioned over the \glspl{proc} may need to rebalance the \glspl{at} for this experiment to terminate.
This is because the spinning \gls{at} effectively prevents its \gls{proc} from running any other \glspl{at}.
In the semaphore flavour, the number of runnable \glspl{at} eventually dwindles down to only the leader.
This is a simpler case to handle for schedulers since \glspl{proc} eventually run out of work.
In the yielding flavour, the number of runnable \glspl{at} stays constant.
This is a harder case to handle because corrective measures must be taken even if work is still available.
Note that languages with mandatory preemption circumvent this problem by forcing the spinner to yield.

\todo{code, setup, results}
\begin{lstlisting}
	Thread.lead() {
		this.idx_seen = ++lead_idx
		if lead_idx > stop_idx {
			done := true
			return
		}

		// Wait for everyone to acknowledge my leadership
		start := timeNow()
		for t in threads {
			while t.idx_seen != lead_idx {
				asm pause
				if (timeNow() - start) > 5 seconds { error() }
			}
		}

		// pick next leader
		leader := threads[ prng() % len(threads) ]

		// wake everyone (needed in the blocking flavour, where waiters block on the semaphore)
		if exhaust {
			for t in threads {
				if t != me { t.wake() }
			}
		}
	}

	Thread.wait() {
		this.idx_seen := lead_idx
		if exhaust { wait() }
		else { yield() }
	}

	Thread.main() {
		while !done  {
			if leader == me { this.lead() }
			else { this.wait() }
		}
	}
\end{lstlisting}
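
The sketch below renders only the yielding flavour in Go (once more as a stand-in for the \CFA code); the blocking flavour is omitted because it depends on the careful ordering of \texttt{P} and \texttt{V} discussed for the cycle benchmark.
The number of \glspl{at}, the number of hand-offs and the 5-second timeout are illustrative choices.
\begin{lstlisting}
// Sketch of the Transfer experiment, yielding flavour only; Go stands in for CFA
// and all constants are illustrative. One at is the leader and spins until every
// other at acknowledges its round, then hands leadership off.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

const rounds = 1000 // number of leadership hand-offs

func main() {
	n := 2 * runtime.GOMAXPROCS(0) // more ats than procs
	idxSeen := make([]atomic.Int64, n)
	var leader, leadIdx atomic.Int64
	var done atomic.Bool
	var wg sync.WaitGroup

	lead := func(me int) {
		idx := leadIdx.Add(1) // start a new round and acknowledge it myself
		idxSeen[me].Store(idx)
		if idx > rounds {
			done.Store(true)
			return
		}
		start := time.Now()
		for t := 0; t < n; t++ { // spin until everyone has seen this round
			for idxSeen[t].Load() != idx {
				if time.Since(start) > 5*time.Second {
					panic("transfer: acknowledgement timed out")
				}
			}
		}
		leader.Store(idx % int64(n)) // pick the next leader (the benchmark uses a prng)
	}

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(me int) {
			defer wg.Done()
			for !done.Load() {
				if leader.Load() == int64(me) {
					lead(me)
				} else {
					idxSeen[me].Store(leadIdx.Load()) // acknowledge the current round
					runtime.Gosched()                 // yielding flavour: stay runnable
				}
			}
		}(i)
	}
	wg.Wait()
	fmt.Printf("transfer: %d hand-offs on %d procs\n", rounds, runtime.GOMAXPROCS(0))
}
\end{lstlisting}
Because recent versions of Go preempt long-running goroutines, the spinning leader cannot starve the other \glspl{at} indefinitely, which is exactly the mandatory-preemption escape hatch noted above; a scheduler without preemption must instead rely on load balancing for this experiment to terminate.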