The first step of evaluation is always to test out small controlled cases, to ensure that the basics are working properly.
This section presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

\section{Benchmark Environment}
All of these benchmarks are run on two distinct hardware environments: an AMD machine and an Intel machine.

\paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
These EPYCs have 64 cores per CPU and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 cache respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.

\paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
These Xeon Platinums have 24 cores per CPU and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 cache respectively.
Each L1 and L2 instance is shared only by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.

This limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine. Indeed, while on both architectures L2 cache misses that are served by an L3 cache on a different CPU incur significant latency, on AMD cache misses served by a different L3 instance on the same CPU also incur high latency.
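
This asymmetry means that thread placement can affect the measured latency: two \glspl{proc} may or may not share an L3 instance depending on which \glspl{hthrd} they run on. A minimal sketch of how placement can be controlled during such measurements, assuming Linux's \texttt{pthread\_setaffinity\_np} and hypothetical \gls{hthrd} ids (the exact numbering is machine-specific):
\begin{lstlisting}
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single hardware thread, so that whether
// two threads share an L3 instance is fixed by the chosen ids.
static void pin_to( int hthrd ) {
	cpu_set_t mask;
	CPU_ZERO( &mask );
	CPU_SET( hthrd, &mask );
	pthread_setaffinity_np( pthread_self(), sizeof(mask), &mask );
}
\end{lstlisting}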

\section{Cycling latency}
     
\end{figure}

\todo{check term ``idle sleep handling''}
To prevent this benchmark from being dominated by the idle-sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
Beyond this point, adding more rings further mitigates the idle-sleep handling.
This avoids the case where one of the worker \glspl{at} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.

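Concretely, sizing the experiment only requires clamping the ring count; a minimal sketch, using the hypothetical names \texttt{nprocs} and \texttt{rings\_requested}, in the same pseudocode style as the listing below:
\begin{lstlisting}
	nrings := max( nprocs, rings_requested )	// never fewer rings than procs
	for r in 0 .. nrings {
		create_ring( r )	// each ring starts with a fixed number of ready threads
	}
\end{lstlisting}
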
The actual benchmark is more complicated in order to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} operations with respect to the loop condition.

\todo{code, setup, results}
\begin{lstlisting}
	Thread.main() {
		count := 0
		for {
			park()			// block until unparked by the previous thread in the ring
			this.next.unpark()	// make the next thread in the ring ready
			count ++
			if count >= Duration { break }
		}
	}
\end{lstlisting}
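
As an illustration of this termination handling, a minimal sketch, assuming a hypothetical binary semaphore per thread with \texttt{P}/\texttt{V} operations: each thread \texttt{V}s its successor \emph{before} checking the loop condition, so that when a thread exits, its successor has already been released.
\begin{lstlisting}
	Thread.main() {
		count := 0
		for {
			P( this.sem )		// replaces the raw park
			V( this.next.sem )	// wake successor before possibly terminating
			count ++
			if count >= Duration { break }
		}
	}
\end{lstlisting}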

\begin{figure}
	\centering
	\input{result.cycle.jax.ops.pstex_t}
	\vspace*{-10pt}
	\caption{\todo{caption}}
	\label{fig:cycle:ns:jax}
\end{figure}

\section{Yield}