Changes in / [5ce7f4a:e01d2f6]


Location: doc
Files: 5 edited

Legend: unchanged lines are unmarked; lines added in e01d2f6 are prefixed with "+"; lines removed from 5ce7f4a are prefixed with "-".
  • doc/bibliography/pl.bib

    r5ce7f4a re01d2f6  
  @manual{C++20Coroutine19,
      keywords    = {coroutine},
+     key         = {Coroutines},
      contributer = {pabuhr@plg},
      title       = {Coroutines (C++20)},
      organization= {cppreference.com},
-     month       = apr,
-     year        = 2019,
+     month       = jun,
+     year        = 2022,
      note        = {\href{https://en.cppreference.com/w/cpp/language/coroutines}{https://\-en.cppreference.com/\-w/\-cpp/\-language/\-coroutines}},
  }
  ...
  % S

+ @inproceedings{Imam14,
+     keywords    = {actor model, performance comparison, java actor libraries, benchmark suite},
+     contributer = {pabuhr@plg},
+     author      = {Shams M. Imam and Vivek Sarkar},
+     title       = {Savina - An Actor Benchmark Suite: Enabling Empirical Evaluation of Actor Libraries},
+     year        = {2014},
+     publisher   = {ACM},
+     address     = {New York, NY, USA},
+     booktitle   = {Proceedings of the 4th International Workshop on Programming Based on Actors Agents \& Decentralized Control},
+     pages       = {67-80},
+     numpages    = {14},
+     location    = {Portland, Oregon, USA},
+     series      = {AGERE! '14}
+ }
+
  @manual{Scala,
      keywords    = {Scala programming language},
  • doc/theses/thierry_delisle_PhD/thesis/fig/cycle.fig

    r5ce7f4a re01d2f6  
    [xfig drawing data not shown: the cycle figure is redrawn in this revision. The five circles and the arc arrows between them are repositioned and resized, and the node labels change from "Thread 1" ... "Thread 5" to the subscripted "Thread$_1$" ... "Thread$_5$" form used in the text; each arc keeps its "Unpark" label.]
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    r5ce7f4a re01d2f6  
  \chapter{Micro-Benchmarks}\label{microbench}

- The first step of evaluation is always to test-out small controlled cases, to ensure that the basics are working properly.
- This sections presents five different experimental setup, evaluating some of the basic features of \CFA's scheduler.
+ The first step in evaluating this work is to test out small controlled cases to ensure the basics work properly.
+ This chapter presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.

  \section{Benchmark Environment}
- All of these benchmarks are run on two distinct hardware environment, an AMD and an INTEL machine.
+ All benchmarks are run on two distinct hardware platforms.
+ \begin{description}
+ \item[AMD] is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
+ The EPYC CPU has 64 cores with 2 \glspl{hthrd} per core, for 128 \glspl{hthrd} per socket with 2 sockets for a total of 256 \glspl{hthrd}.
+ Each CPU has 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches, respectively.
+ Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
+ The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+
+ \item[Intel] is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
+ The Xeon CPU has 24 cores with 2 \glspl{hthrd} per core, for 48 \glspl{hthrd} per socket with 4 sockets for a total of 192 \glspl{hthrd}.
+ Each CPU has 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches respectively.
+ Each L1 and L2 instance is only shared by \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
+ The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+ \end{description}

  For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA Node with no hyper threading.
  If more \glspl{hthrd} are needed, then 1 NUMA Node with hyperthreading is used.
- If still more \glspl{hthrd} are needed then the experiment is limited to as few NUMA Nodes as needed.
-
-
- \paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
- The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
- These EPYCs have 64 cores per CPUs and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
- The cpus each have 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches respectively.
- Each L1 and L2 instance are only shared by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
-
- \paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
- The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
- These Xeon Platinums have 24 cores per CPUs and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
- The cpus each have 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches respectively.
- Each L1 and L2 instance are only shared by \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
-
- This limited sharing of the last level cache on the AMD machine is markedly different than the Intel machine. Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different cpu incurr a significant latency, on AMD it is also the case that cache misses served by a different L3 instance on the same cpu still incur high latency.
+ If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA Nodes as needed.
+
+ The limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine.
+ Indeed, while on both architectures L2 cache misses that are served by L3 caches on a different CPU incur a significant latency, on the AMD it is also the case that cache misses served by a different L3 instance on the same CPU still incur high latency.


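To make the pinning step concrete, the following is a minimal C sketch of restricting a benchmark process to one NUMA node, equivalent in spirit to the @taskset@ usage described above (roughly @taskset -c 0-63 ./benchmark@). The hardware-thread numbering (0-63 for the first node, hyper-threaded siblings numbered elsewhere) is an assumption for illustration; the real mapping must be taken from @lscpu@ or @numactl --hardware@, and this is not the harness actually used in the thesis.

// Illustrative only: pin the current process to hardware threads 0-63,
// i.e. one NUMA node without its hyper-threaded siblings.  The actual CPU
// numbering must be checked with lscpu or numactl --hardware.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main( void ) {
    cpu_set_t set;
    CPU_ZERO( &set );
    for ( int cpu = 0; cpu < 64; cpu += 1 ) CPU_SET( cpu, &set );
    if ( sched_setaffinity( 0, sizeof(set), &set ) != 0 ) {
        perror( "sched_setaffinity" );
        return 1;
    }
    // ... run the benchmark here, restricted to the selected hardware threads
    return 0;
}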
     
      \label{fig:cycle}
  \end{figure}
- The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready-queue.
- Since these two operation also describe a @yield@ operation, many systems use this as the most basic benchmark.
- However, yielding can be treated as a special case, since it also carries the information that the number of the ready \glspl{at} will not change.
- Not all systems use this information, but those which do may appear to have better performance than they would for disconnected push/pop pairs.
- For this reason, I chose a different first benchmark, which I call the Cycle Benchmark.
- This benchmark arranges many \glspl{at} into multiple rings of \glspl{at}.
- Each ring is effectively a circular singly-linked list.
+ The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready queue.
+ Since these two operations also describe a @yield@ operation, many systems use this operation as the most basic benchmark.
+ However, yielding can be treated as a special case by optimizing it away (dead code) since the number of ready \glspl{at} does not change.
+ Not all systems perform this optimization, but those that do have an artificial performance benefit because the yield becomes a \emph{nop}.
+ For this reason, I chose a different first benchmark, called the \newterm{Cycle Benchmark}.
+ This benchmark arranges a number of \glspl{at} into a ring, as seen in Figure~\ref{fig:cycle}, where the ring is a circular singly-linked list.
  At runtime, each \gls{at} unparks the next \gls{at} before parking itself.
- This corresponds to the desired pair of ready queue operations.
- Unparking the next \gls{at} requires pushing that \gls{at} onto the ready queue and the ensuing park will cause the runtime to pop a \gls{at} from the ready-queue.
- Figure~\ref{fig:cycle} shows a visual representation of this arrangement.
-
- The goal of this ring is that the underlying runtime cannot rely on the guarantee that the number of ready \glspl{at} will stay constant over the duration of the experiment.
+ Unparking the next \gls{at} pushes that \gls{at} onto the ready queue, while the ensuing park pops a \gls{at} from it.
+
+ Hence, the underlying runtime cannot rely on the number of ready \glspl{at} staying constant over the duration of the experiment.
  In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking.
- The size of the cycle is also decided based on this race: cycles that are too small may see the chain of unparks go full circle before the first \gls{at} can park.
- While this would not be a correctness problem, every runtime system must handle that race, it could lead to pushes and pops being optimized away.
- Since silently omitting ready-queue operations would throw off the measuring of these operations, the ring of \glspl{at} must be big enough so the \glspl{at} have the time to fully park before they are unparked.
- Note that this problem is only present on SMP machines and is significantly mitigated by the fact that there are multiple rings in the system.
-
- To avoid this benchmark from being dominated by the idle sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
- Beyond this point, adding more rings serves to mitigate even more the idle sleep handling.
- This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation on the number of ready \glspl{at} mentionned above.
-
- The actual benchmark is more complicated to handle termination, but that simply requires using a binary semphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.
- Figure~\ref{fig:cycle:code} shows pseudo code for this benchmark.
-
- \begin{figure}
-     \begin{lstlisting}
-         Thread.main() {
-             count := 0
-             for {
-                 wait()
-                 this.next.wake()
-                 count ++
-                 if must_stop() { break }
-             }
-             global.count += count
-         }
-     \end{lstlisting}
-     \caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
-     \label{fig:cycle:code}
- \end{figure}
-
-
+ That is, the runtime cannot anticipate that the current task will immediately park.
+ As well, the size of the cycle is also decided based on this race, \eg a small cycle may see the chain of unparks go full circle before the first \gls{at} parks because of time-slicing or multiple \procs.
+ Every runtime system must handle this race and cannot optimize away the ready-queue pushes and pops.
+ To prevent any attempt at silently omitting ready-queue operations, the ring of \glspl{at} is made big enough so the \glspl{at} have time to fully park before being unparked again.
+ (Note, an unpark is like a V on a semaphore, so the subsequent park (P) may not block.)
+ Finally, to further mitigate any underlying push/pop optimizations, especially on SMP machines, multiple rings are created in the experiment.
+
+ To avoid this benchmark being affected by idle-sleep handling, the number of rings is several times greater than the number of \glspl{proc}.
+ This design avoids the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.
+
+ Figure~\ref{fig:cycle:code} shows the pseudo code for this benchmark.
+ There is additional complexity to handle termination (not shown), which requires a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.
+
+ \begin{figure}
+ \begin{cfa}
+ Thread.main() {
+     count := 0
+     for {
+         @wait()@
+         @this.next.wake()@
+         count ++
+         if must_stop() { break }
+     }
+     global.count += count
+ }
+ \end{cfa}
+ \caption[Cycle Benchmark : Pseudo Code]{Cycle Benchmark : Pseudo Code}
+ \label{fig:cycle:code}
+ \end{figure}
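For readers who want to see the ring mechanics outside the \CFA runtime, here is a small self-contained C approximation of one cycle, mirroring the pseudo code above with POSIX semaphores standing in for @park@/@unpark@. The fixed iteration count, ring size, and all names are illustrative simplifications, not the thesis harness (which handles termination with a binary semaphore or channel as described above).

// Illustrative only: one ring of the Cycle benchmark.  Each thread waits to be
// "unparked" (sem_wait), unparks the next thread in the ring (sem_post), and
// counts one operation, as in the pseudo-code figure.
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

enum { RING = 8, OPS = 100000 };       // ring size and iterations per thread

static sem_t park[RING];               // one semaphore per thread, initially 0
static atomic_ulong global_count;

static void * worker( void * arg ) {
    unsigned long id = (uintptr_t)arg, count = 0;
    for ( ;; ) {
        sem_wait( &park[id] );                  // park(): wait to be unparked
        sem_post( &park[(id + 1) % RING] );     // unpark the next thread in the ring
        count += 1;
        if ( count >= OPS ) break;              // simplified termination
    }
    atomic_fetch_add( &global_count, count );
    return NULL;
}

int main( void ) {
    pthread_t t[RING];
    for ( int i = 0; i < RING; i += 1 ) sem_init( &park[i], 0, 0 );
    for ( int i = 0; i < RING; i += 1 )
        pthread_create( &t[i], NULL, worker, (void *)(uintptr_t)i );
    sem_post( &park[0] );                       // seed the single token that circulates
    for ( int i = 0; i < RING; i += 1 ) pthread_join( t[i], NULL );
    printf( "%lu operations\n", atomic_load( &global_count ) );
    return 0;
}

Because only one token circulates per ring, most threads are blocked at any given moment, so each iteration exercises a genuine ready-queue push (the post) and pop (the wake-up from the wait), which is the property the benchmark is designed to measure.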

  \subsection{Results}
+ Figure~\ref{fig:cycle:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.
+
  \begin{figure}
      \subfloat[][Throughput, 100 \ats per \proc]{
     
          \label{fig:cycle:jax:low:ns}
      }
-     \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput as a function of \proc count, using 100 cycles per \proc, 5 \ats per cycle.}
+     \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput as a function of \proc count with 100 cycles per \proc and 5 \ats per cycle.}
      \label{fig:cycle:jax}
  \end{figure}
- Figure~\ref{fig:cycle:jax} shows the throughput as a function of \proc count, with the following constants:
- Each run uses 100 cycles per \proc, 5 \ats per cycle.

  \todo{results discussion}

  \section{Yield}
- For completion, I also include the yield benchmark.
- This benchmark is much simpler than the cycle tests, it simply creates many \glspl{at} that call @yield@.
- As mentionned in the previous section, this benchmark may be less representative of usages that only make limited use of @yield@, due to potential shortcuts in the routine.
- Its only interesting variable is the number of \glspl{at} per \glspl{proc}, where ratios close to 1 means the ready queue(s) could be empty.
- This sometimes puts more strain on the idle sleep handling, compared to scenarios where there is clearly plenty of work to be done.
- Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, the ``wait/wake-next'' is simply replaced by a yield.
-
- \begin{figure}
-     \begin{lstlisting}
-         Thread.main() {
-             count := 0
-             for {
-                 yield()
-                 count ++
-                 if must_stop() { break }
-             }
-             global.count += count
-         }
-     \end{lstlisting}
-     \caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
-     \label{fig:yield:code}
+ For completion, the classic yield benchmark is included.
+ This benchmark is simpler than the cycle test: it creates many \glspl{at} that call @yield@.
+ As mentioned, this benchmark may not be representative because of optimization shortcuts in @yield@.
+ The only interesting variable in this benchmark is the number of \glspl{at} per \glspl{proc}, where ratios close to 1 mean the ready queue(s) can be empty.
+ This scenario can put a strain on the idle-sleep handling compared to scenarios where there is plenty of work.
+ Figure~\ref{fig:yield:code} shows pseudo code for this benchmark, where the @wait/next.wake@ is replaced by @yield@.
+
+ \begin{figure}
+ \begin{cfa}
+ Thread.main() {
+     count := 0
+     for {
+         @yield()@
+         count ++
+         if must_stop() { break }
+     }
+     global.count += count
+ }
+ \end{cfa}
+ \caption[Yield Benchmark : Pseudo Code]{Yield Benchmark : Pseudo Code}
+ \label{fig:yield:code}
  \end{figure}

  \subsection{Results}
+
+ Figure~\ref{fig:yield:jax} shows the throughput as a function of \proc count, where each run uses 100 \ats per \proc.
+
  \begin{figure}
      \subfloat[][Throughput, 100 \ats per \proc]{
     
      \label{fig:yield:jax}
  \end{figure}
- Figure~\ref{fig:yield:ops:jax} shows the throughput as a function of \proc count, with the following constants:
- Each run uses 100 \ats per \proc.

  \todo{results discussion}

-
  \section{Churn}
- The Cycle and Yield benchmark represents an ``easy'' scenario for a scheduler, \eg, an embarrassingly parallel application.
- In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} up-front and none of the \glspl{at} communicate with each other.
-
- The Churn benchmark represents more chaotic usages, where there is no relation between the last \gls{proc} on which a \gls{at} ran and the \gls{proc} that unblocked it.
- When a \gls{at} is unblocked from a different \gls{proc} than the one on which it last ran, the unblocking \gls{proc} must either ``steal'' the \gls{at} or place it on a remote queue.
- This results can result in either contention on the remote queue or \glspl{rmr} on \gls{at} data structure.
- In either case, this benchmark aims to highlight how each scheduler handles these cases, since both cases can lead to performance degradation if they are not handled correctly.
-
- To achieve this the benchmark uses a fixed size array of semaphores.
- Each \gls{at} picks a random semaphore, @V@s it to unblock a \at waiting and then @P@s on the semaphore.
+ The Cycle and Yield benchmarks represent an \emph{easy} scenario for a scheduler, \eg an embarrassingly parallel application.
+ In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} upfront and none of the \glspl{at} communicate with each other.
+
+ The Churn benchmark represents more chaotic execution, where there is no relation between the last \gls{proc} on which a \gls{at} ran and blocked and the \gls{proc} that subsequently unblocks it.
+ With processor-specific ready-queues, when a \gls{at} is unblocked by a different \gls{proc}, the unblocking \gls{proc} must either ``steal'' the \gls{at} from another processor or find it on a global queue.
+ This dequeuing results in contention on the remote queue and/or \glspl{rmr} on the \gls{at} data structure.
+ In either case, this benchmark aims to highlight how each scheduler handles these situations, since both can lead to performance degradation if not handled correctly.
+
+ This benchmark uses a fixed-size array of counting semaphores.
+ Each \gls{at} picks a random semaphore, @V@s it to unblock any \at waiting, and then @P@s on the semaphore.
  This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
- For this benchmark to work however, the number of \glspl{at} must be equal or greater to the number of semaphores plus the number of \glspl{proc}.
- Note that the nature of these semaphores mean the counter can go beyond 1, which could lead to calls to @P@ not blocking.
+ For this benchmark to work, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}.
+ Note, the nature of these semaphores means the counter can go beyond 1, which can lead to nonblocking calls to @P@.
+ Figure~\ref{fig:churn:code} shows pseudo code for this benchmark, where the @yield@ is replaced by @V@ and @P@.
+
+ \begin{figure}
+ \begin{cfa}
+ Thread.main() {
+     count := 0
+     for {
+         r := random() % len(spots)
+         @spots[r].V()@
+         @spots[r].P()@
+         count ++
+         if must_stop() { break }
+     }
+     global.count += count
+ }
+ \end{cfa}
+ \caption[Churn Benchmark : Pseudo Code]{Churn Benchmark : Pseudo Code}
+ \label{fig:churn:code}
+ \end{figure}
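The same idea outside the runtime: a small C approximation of the churn loop above, using a fixed-size array of POSIX counting semaphores. The stop-flag plus drain used for termination, the array and thread counts, and all names are illustrative simplifications, not the thesis harness.

// Illustrative only: each thread Vs then Ps a randomly chosen semaphore, as in
// the pseudo code above.  Because each V precedes the matching P, sem_wait may
// not block when the semaphore's count is still above zero.
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

enum { SPOTS = 16, THREADS = 32 };

static sem_t spots[SPOTS];             // counting semaphores, initially 0
static atomic_bool stop;
static atomic_ulong global_count;

static void * worker( void * arg ) {
    unsigned seed = (unsigned)(uintptr_t)arg;
    unsigned long count = 0;
    while ( ! atomic_load( &stop ) ) {
        unsigned r = rand_r( &seed ) % SPOTS;
        sem_post( &spots[r] );         // V: unblock any thread waiting on this spot
        sem_wait( &spots[r] );         // P: take a token, possibly the one just added
        count += 1;
    }
    atomic_fetch_add( &global_count, count );
    return NULL;
}

int main( void ) {
    pthread_t t[THREADS];
    for ( int i = 0; i < SPOTS; i += 1 ) sem_init( &spots[i], 0, 0 );
    for ( int i = 0; i < THREADS; i += 1 )
        pthread_create( &t[i], NULL, worker, (void *)(uintptr_t)(i + 1) );
    sleep( 5 );                        // measurement window
    atomic_store( &stop, 1 );
    for ( int i = 0; i < SPOTS; i += 1 )        // drain: enough tokens so any thread
        for ( int j = 0; j < THREADS; j += 1 )  // still blocked in its final P wakes up
            sem_post( &spots[i] );
    for ( int i = 0; i < THREADS; i += 1 ) pthread_join( t[i], NULL );
    printf( "%lu operations\n", atomic_load( &global_count ) );
    return 0;
}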
+
+ \subsection{Results}
+ Figure~\ref{fig:churn:jax} shows the throughput as a function of \proc count, where each run uses 100 \ats per \proc.
+
+ \begin{figure}
+     \subfloat[][Throughput, 100 \ats per \proc]{
+         \resizebox{0.5\linewidth}{!}{
+             \input{result.churn.jax.ops.pstex_t}
+         }
+         \label{fig:churn:jax:ops}
+     }
+     \subfloat[][Throughput, 1 \ats per \proc]{
+         \resizebox{0.5\linewidth}{!}{
+             \input{result.churn.low.jax.ops.pstex_t}
+         }
+         \label{fig:churn:jax:low:ops}
+     }
+
+     \subfloat[][Latency, 100 \ats per \proc]{
+         \resizebox{0.5\linewidth}{!}{
+             \input{result.churn.jax.ns.pstex_t}
+         }
+
+     }
+     \subfloat[][Latency, 1 \ats per \proc]{
+         \resizebox{0.5\linewidth}{!}{
+             \input{result.churn.low.jax.ns.pstex_t}
+         }
+         \label{fig:churn:jax:low:ns}
+     }
+     \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn benchmark on the Intel machine.
+     Throughput is the total operations per second across all cores. Latency is the duration of each operation.}
+     \label{fig:churn:jax}
+ \end{figure}
+
+ \todo{results discussion}
+
+ \section{Locality}

  \todo{code, setup, results}
- \begin{lstlisting}
-     Thread.main() {
-         count := 0
-         for {
-             r := random() % len(spots)
-             spots[r].V()
-             spots[r].P()
-             count ++
-             if must_stop() { break }
-         }
-         global.count += count
-     }
- \end{lstlisting}
-
- \begin{figure}
-     \subfloat[][Throughput, 100 \ats per \proc]{
-         \resizebox{0.5\linewidth}{!}{
-             \input{result.churn.jax.ops.pstex_t}
-         }
-         \label{fig:churn:jax:ops}
-     }
-     \subfloat[][Throughput, 1 \ats per \proc]{
-         \resizebox{0.5\linewidth}{!}{
-             \input{result.churn.low.jax.ops.pstex_t}
-         }
-         \label{fig:churn:jax:low:ops}
-     }
-
-     \subfloat[][Latency, 100 \ats per \proc]{
-         \resizebox{0.5\linewidth}{!}{
-             \input{result.churn.jax.ns.pstex_t}
-         }
-
-     }
-     \subfloat[][Latency, 1 \ats per \proc]{
-         \resizebox{0.5\linewidth}{!}{
-             \input{result.churn.low.jax.ns.pstex_t}
-         }
-         \label{fig:churn:jax:low:ns}
-     }
-     \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and latency of the Churn on the benchmark on the Intel machine. Throughput is the total operation per second across all cores. Latency is the duration of each opeartion.}
-     \label{fig:churn:jax}
- \end{figure}
-
- \section{Locality}
-
- \todo{code, setup, results}

  \section{Transfer}
- The last benchmark is more exactly characterize as an experiment than a benchmark.
- It tests the behavior of the schedulers for a particularly misbehaved workload.
+ The last benchmark is more of an experiment than a benchmark.
+ It tests the behaviour of the schedulers for a misbehaved workload.
  In this workload, one of the \gls{at} is selected at random to be the leader.
  The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership.
  The leader \gls{at} then picks a new \gls{at} to be the ``spinner'' and the cycle repeats.
-
- The benchmark comes in two flavours for the behavior of the non-leader \glspl{at}:
- once they acknowledged the leader, they either block on a semaphore or yield repeatadly.
-
- This experiment is designed to evaluate the short term load balancing of the scheduler.
- Indeed, schedulers where the runnable \glspl{at} are partitioned on the \glspl{proc} may need to balance the \glspl{at} for this experient to terminate.
- This is because the spinning \gls{at} is effectively preventing the \gls{proc} from runnning any other \glspl{thrd}.
- In the semaphore flavour, the number of runnable \glspl{at} will eventually dwindle down to only the leader.
- This is a simpler case to handle for schedulers since \glspl{proc} eventually run out of work.
+ The benchmark comes in two flavours for the non-leader \glspl{at}:
+ once they acknowledge the leader, they either block on a semaphore or spin yielding.
+
+ The experiment is designed to evaluate the short-term load-balancing of a scheduler.
+ Indeed, schedulers where the runnable \glspl{at} are partitioned on the \glspl{proc} may need to balance the \glspl{at} for this experiment to terminate.
+ This problem occurs because the spinning \gls{at} is effectively preventing the \gls{proc} from running any other \glspl{thrd}.
+ In the semaphore flavour, the number of runnable \glspl{at} eventually dwindles down to only the leader.
+ This scenario is a simpler case to handle for schedulers since \glspl{proc} eventually run out of work.
  In the yielding flavour, the number of runnable \glspl{at} stays constant.
- This is a harder case to handle because corrective measures must be taken even if work is still available.
- Note that languages that have mandatory preemption do circumvent this problem by forcing the spinner to yield.
+ This scenario is a harder case to handle because corrective measures must be taken even when work is available.
+ Note, runtime systems with preemption circumvent this problem by forcing the spinner to yield.

  \todo{code, setup, results}
- \begin{lstlisting}
-     Thread.lead() {
-         this.idx_seen = ++lead_idx
-         if lead_idx > stop_idx {
-             done := true
-             return
-         }
-
-         // Wait for everyone to acknowledge my leadership
-         start: = timeNow()
+
+ \begin{figure}
+ \begin{cfa}
+ Thread.lead() {
+     this.idx_seen = ++lead_idx
+     if lead_idx > stop_idx {
+         done := true
+         return
+     }
+
+     // Wait for everyone to acknowledge my leadership
+     start := timeNow()
+     for t in threads {
+         while t.idx_seen != lead_idx {
+             asm pause
+             if (timeNow() - start) > 5 seconds { error() }
+         }
+     }
+
+     // pick next leader
+     leader := threads[ prng() % len(threads) ]
+
+     // wake every one
+     if ! exhaust {
          for t in threads {
-             while t.idx_seen != lead_idx {
-                 asm pause
-                 if (timeNow() - start) > 5 seconds { error() }
-             }
-         }
-
-         // pick next leader
-         leader := threads[ prng() % len(threads) ]
-
-         // wake every one
-         if !exhaust {
-             for t in threads {
-                 if t != me { t.wake() }
-             }
-         }
-     }
-
-     Thread.wait() {
-         this.idx_seen := lead_idx
-         if exhaust { wait() }
-         else { yield() }
-     }
-
-     Thread.main() {
-         while !done  {
-             if leader == me { this.lead() }
-             else { this.wait() }
-         }
-     }
- \end{lstlisting}
+             if t != me { t.wake() }
+         }
+     }
+ }
+
+ Thread.wait() {
+     this.idx_seen := lead_idx
+     if exhaust { wait() }
+     else { yield() }
+ }
+
+ Thread.main() {
+     while !done {
+         if leader == me { this.lead() }
+         else { this.wait() }
+     }
+ }
+ \end{cfa}
+ \caption[Transfer Benchmark : Pseudo Code]{Transfer Benchmark : Pseudo Code}
+ \label{fig:transfer:code}
+ \end{figure}
+
+ \subsection{Results}
+ Figure~\ref{fig:transfer:jax} shows the throughput as a function of \proc count, where each run uses 100 cycles per \proc and 5 \ats per cycle.
+
+ \todo{results discussion}
  • doc/theses/thierry_delisle_PhD/thesis/text/io.tex

    r5ce7f4a re01d2f6  
  The following sections discuss some useful options using @read@ as an example.
  The standard Linux interface for C is:
- \begin{lstlisting}
+ \begin{cfa}
  ssize_t read(int fd, void *buf, size_t count);
- \end{lstlisting}
+ \end{cfa}

  \subsection{Replacement}
  ...
  Another interface option is to offer an interface different in name only.
  For example:
- \begin{lstlisting}
+ \begin{cfa}
  ssize_t cfa_read(int fd, void *buf, size_t count);
- \end{lstlisting}
+ \end{cfa}
  This approach is feasible and still familiar to C programmers.
  It comes with the caveat that any code attempting to use it must be recompiled, which is a problem considering the amount of existing legacy C binaries.
  ...
  \subsection{Asynchronous Extension}
  A fairly traditional way of providing asynchronous interactions is using a future mechanism~\cite{multilisp}, \eg:
- \begin{lstlisting}
+ \begin{cfa}
  future(ssize_t) read(int fd, void *buf, size_t count);
- \end{lstlisting}
+ \end{cfa}
  where the generic @future@ is fulfilled when the read completes and it contains the number of bytes read, which may be less than the number of bytes requested.
  The data read is placed in @buf@.
  ...
  Hence, the buffer cannot be reused until the operation completes but the synchronization does not cover the buffer.
  A classical asynchronous API is:
- \begin{lstlisting}
+ \begin{cfa}
  future([ssize_t, void *]) read(int fd, size_t count);
- \end{lstlisting}
+ \end{cfa}
  where the future tuple covers the components that require synchronization.
  However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned.
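To make the future-returning signature concrete, here is a minimal C sketch of the first variant (caller-supplied buffer), where the future is backed by a worker thread. This is only to illustrate the shape of the interface; it is not how the \CFA runtime implements it (the runtime uses @io_uring@, discussed next), and the @future_ssize_t@, @future_read@, and @future_get@ names are hypothetical.

// Illustrative only: a "future" for read() backed by a worker thread.
#include <pthread.h>
#include <unistd.h>

typedef struct {
    pthread_t worker;
    int fd; void * buf; size_t count;
    ssize_t result;
} future_ssize_t;

static void * do_read( void * arg ) {
    future_ssize_t * f = arg;
    f->result = read( f->fd, f->buf, f->count );   // fulfil the future
    return NULL;
}

// start the asynchronous read
void future_read( future_ssize_t * f, int fd, void * buf, size_t count ) {
    f->fd = fd; f->buf = buf; f->count = count;
    pthread_create( &f->worker, NULL, do_read, f );
}

// block until the read completes and return its result
ssize_t future_get( future_ssize_t * f ) {
    pthread_join( f->worker, NULL );
    return f->result;
}

Usage follows the familiar future pattern: call @future_read@, overlap other work, then call @future_get@ to synchronize and obtain the byte count; as the surrounding text notes, the buffer cannot be reused until then.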
     
  \subsection{Direct \lstinline{io_uring} Interface}
  The last interface directly exposes the underlying @io_uring@ interface, \eg:
- \begin{lstlisting}
+ \begin{cfa}
  array(SQE, want) cfa_io_allocate(int want);
  void cfa_io_submit( const array(SQE, have) & );
- \end{lstlisting}
+ \end{cfa}
  where the generic @array@ contains an array of SQEs with a size that may be less than the request.
  This offers more flexibility to users wanting to fully utilize all of the @io_uring@ features.
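For readers unfamiliar with the interface being wrapped, the following is a minimal sketch of a single read performed through raw @io_uring@ via liburing: allocate an SQE, fill it with a read request, submit, then reap the completion. It assumes liburing is installed (link with @-luring@) and a kernel recent enough to support @io_uring_prep_read@; error handling is kept minimal.

// Illustrative only: one read submitted and reaped through io_uring.
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main( void ) {
    struct io_uring ring;
    if ( io_uring_queue_init( 8, &ring, 0 ) < 0 ) return 1;

    int fd = open( "/etc/hostname", O_RDONLY );
    if ( fd < 0 ) return 1;
    char buf[256];

    struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );   // allocate an SQE
    io_uring_prep_read( sqe, fd, buf, sizeof(buf), 0 );      // fill it with a read request
    io_uring_submit( &ring );                                // submit to the kernel

    struct io_uring_cqe * cqe;
    io_uring_wait_cqe( &ring, &cqe );                        // wait for the completion
    printf( "read returned %d\n", cqe->res );                // res mirrors read(2)'s return value
    io_uring_cqe_seen( &ring, cqe );

    io_uring_queue_exit( &ring );
    return 0;
}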
  • doc/theses/thierry_delisle_PhD/thesis/text/practice.tex

    r5ce7f4a re01d2f6  

  In detail, \CFA supports adding \procs using the type @processor@, in both RAII and heap coding scenarios.
- \begin{lstlisting}
+ \begin{cfa}
  {
      processor p[4]; // 4 new kernel threads
  ...
      ... // execute on 4 processors
  } // delete 4 kernel threads
- \end{lstlisting}
+ \end{cfa}
  Dynamically allocated processors can be deleted at any time, \ie their lifetime exceeds the block of creation.
  The consequence is that the scheduler and \io subsystems must know when these \procs come in and out of existence and roll them into the appropriate scheduling algorithms.
  ...

  \begin{figure}
- \begin{lstlisting}
+ \begin{cfa}
  void read_lock() {
      // Step 1 : make sure no writers in
  ...
      write_lock = false;
  }
- \end{lstlisting}
+ \end{cfa}
  \caption{Specialized Readers-Writer Lock}
  \label{f:SpecializedReadersWriterLock}
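The body of the figure is elided in this changeset, so as a rough sketch of the kind of lock the caption names: a reader-preference readers-writer lock where each reader publishes a per-thread flag and a single writer raises a global @write_lock@ flag, consistent with the two visible fragments ("make sure no writers in" and clearing @write_lock@). The structure, the single-writer assumption, and all names below are guesses for illustration only, not the actual \CFA runtime code.

// Illustrative only: per-reader flags plus a global writer flag.  Sequentially
// consistent atomics order the "publish my flag, then check the other side"
// steps on both the reader and writer paths.  Assumes at most one writer at a time.
#include <stdatomic.h>
#include <sched.h>

#define MAX_READERS 128
static _Atomic(_Bool) write_lock;
static _Atomic(_Bool) reader_flag[MAX_READERS];

void read_lock( unsigned id ) {
    for ( ;; ) {
        atomic_store( &reader_flag[id], 1 );            // Step 1: announce this reader
        if ( ! atomic_load( &write_lock ) ) return;     // Step 2: make sure no writer is in
        atomic_store( &reader_flag[id], 0 );            // writer present: retract and wait
        while ( atomic_load( &write_lock ) ) sched_yield();
    }
}

void read_unlock( unsigned id ) { atomic_store( &reader_flag[id], 0 ); }

void write_lock_acquire( void ) {
    atomic_store( &write_lock, 1 );                     // block new readers
    for ( unsigned i = 0; i < MAX_READERS; i += 1 )     // wait for current readers to drain
        while ( atomic_load( &reader_flag[i] ) ) sched_yield();
}

void write_unlock( void ) { atomic_store( &write_lock, 0 ); }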