Changeset d6d2136


Timestamp: Jul 12, 2023, 12:49:05 PM
Author: caparsons <caparson@…>
Branches: master
Children: cc28153d
Parents: e0069bd
Message: Made changes in response to Peter's comments

File (1 edited):
  • doc/theses/colby_parsons_MMAth/text/actors.tex

    re0069bd rd6d2136  
    10021002The longest-victim heuristic maintains a timestamp per executor thread that is updated every time a worker attempts to steal work.
    10031003The timestamps are generated using @rdtsc@~\cite{IntelManual} and are stored in a shared array, with one index per worker.
    1004 Thieves then attempt to steal from the worker with the oldest timestamp.
    1005 \PAB{How does a thief find the oldest timestamp?}
     1004Thieves then attempt to steal from the worker with the oldest timestamp, which is found by performing a linear search across the array of timestamps.
    10061005The intuition behind this heuristic is that the slowest worker receives help via work stealing until it becomes a thief, which indicates that it has caught up to the pace of the rest of the workers.
    10071006This heuristic should ideally result in lower latency for message sends to victim workers that are overloaded with work.
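The selection logic is small; the following is a minimal sketch in plain C, where names such as @steal_timestamps@ and @choose_victim@ are illustrative rather than the actual \CFA implementation.
\begin{cfa}
#include <stddef.h>
#include <stdint.h>
#include <x86intrin.h>								// __rdtsc

extern volatile uint64_t steal_timestamps[];		// one timestamp slot per worker (shared array)
extern size_t num_workers;

// Illustrative sketch, not the actual Cforall implementation: a thief publishes
// its own attempt time, then linearly searches for the worker whose last steal
// attempt is oldest, i.e., the worker that has gone longest without stealing.
size_t choose_victim( size_t self ) {
	steal_timestamps[self] = __rdtsc();				// updated on every steal attempt
	size_t victim = self;
	uint64_t oldest = UINT64_MAX;
	for ( size_t w = 0; w < num_workers; w += 1 ) {
		if ( w != self && steal_timestamps[w] < oldest ) {
			oldest = steal_timestamps[w];
			victim = w;
		}
	}
	return victim;
}
\end{cfa}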
     
    12121211Finally, \CFA runs consistently on both the AMD and Intel, and is faster than \uC on the AMD, but slightly slower on the Intel.
    12131212This benchmark is a pathological case for work stealing actor systems, as the majority of work is being performed by the single actor conducting the scatter/gather.
    1214 The impact of work stealing on this benchmark are discussed further in Section~\ref{s:steal_perf}.
     1213The impact of work stealing on this benchmark is discussed further in Section~\ref{s:steal_perf}.
    12151214Here, gains from using the copy queue are much less apparent, due to the costs of stealing.
    12161215
     
    12871286
    12881287Two pathological unbalanced cases are created and compared using vanilla and randomized work stealing in \CFA.
    1289 These benchmarks adversarially takes advantage of the round-robin assignment of actors to workers by isolating the receive and send actors on different cores.
     1288These benchmarks adversarially take advantage of the round-robin assignment of actors to workers to place loaded actors only on specific cores (there is one worker per core).
    12901289The workload on the loaded cores is the same as the executor benchmark described in Section~\ref{s:executorPerf}, but with fewer rounds.
    12911290
     
    13271326On Intel in Figure~\ref{f:BalanceOneIntel}, above 32 cores the performance gets worse for all variants due to hyperthreading.
    13281327Here, the longest-victim and random heuristics perform the same.
    1329 Note, the non-stealing variation of balance-one slows down (no decrease in graph) as the cores increase \PAB{not sure I understand this part: due to having to create more dummy actors on the inactive cores during startup}.
     1328Note, the non-stealing variation of balance-one slows down slightly (no decrease in the graph) as the number of cores increases, since a few ``dummy'' actors must be created for each extra core beyond the first to adversarially place all loaded actors on the first core.
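For concreteness, the following minimal sketch shows this adversarial layout, assuming round-robin assignment starting at worker 0; the helpers @spawn_loaded_actor@ and @spawn_dummy_actor@ are hypothetical names, not the benchmark's actual functions.
\begin{cfa}
#include <stddef.h>

extern void spawn_loaded_actor( void );
extern void spawn_dummy_actor( void );

// Illustrative sketch: with round-robin assignment and one worker per core,
// creating (num_workers - 1) dummy actors after each loaded actor wraps the
// round-robin cycle so every loaded actor lands on worker 0.
void layout_loaded_actors( size_t num_workers, size_t num_loaded ) {
	for ( size_t i = 0; i < num_loaded; i += 1 ) {
		spawn_loaded_actor();						// assigned to worker 0
		for ( size_t d = 0; d < num_workers - 1; d += 1 ) {
			spawn_dummy_actor();					// pad out the rest of the cycle
		}
	}
}
\end{cfa}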
    13301329
    13311330For the balance-multi benchmark in Figures~\ref{f:BalanceMultiAMD} and~\ref{f:BalanceMultiIntel}, the random heuristic outperforms the longest victim.
     
    13621361\end{figure}
    13631362
    1364 \PAB{Something is wrong here because there are no graphs containing all the benchmarks.
    13651363Figures~\ref{f:cfaRepeatAMD} and~\ref{f:cfaRepeatIntel} show the effects of the stealing heuristics for the repeat benchmark.
    1366 % Here, the no-stealing version of \CFA performs better than both stealing variations.
    13671364As mentioned, the repeat benchmark is a pathological case for work stealing systems since there is one actor with the majority of the work, and not enough other work to go around.
    1368 The worst-case scenario is if the actor doing the majority of work or its mail queue is stolen by the work stealing system, as this a huge cost to move the work and refill the local cache.
     1365The worst-case scenario is when the actor doing the majority of the work, or its mail queue, is stolen by the work-stealing system, as this incurs a huge cost to move the work and refill the local cache.
    13691366This worst-case steal is likely to happen since there is little other work in the system between scatter/gather rounds.
    13701367
    1371 In particular on the Intel machine in Figure~\ref{f:cfaRepeatIntel}, the cost of stealing is higher, which can be seen in the vertical shift of Akka, CAF and \CFA results in Figure~\ref{f:RepeatIntel} (\uC and ProtoActor do not have work stealing).
    1372 The shift for CAF is particularly large, which further supports the hypothesis that CAF's work stealing is particularly eager.
     1368All \CFA variants are comparable in performance on the repeat benchmark.
     1369This is surprising, since the repeat benchmark is a pathological case for work stealing, and one would expect the No-Stealing variant to perform better.
     1370The \CFA No-Stealing variant is likely slightly affected by other design decisions in the \CFA actor system that were made with work stealing in mind.
     1371Work-stealing performance in the repeat benchmark is examined further in Figures~\ref{f:RepeatAMD} and~\ref{f:RepeatIntel}.
     1372
     1373In particular, on the Intel machine the cost of stealing is higher, which can be seen in the vertical shift of the Akka, CAF and \CFA results when comparing Figure~\ref{f:RepeatAMD} with Figure~\ref{f:RepeatIntel} (\uC and ProtoActor do not have work stealing).
     1374The shift for CAF is especially large, which supports the hypothesis that CAF's work stealing is particularly eager.
    13731375In both the executor and the repeat benchmarks, CAF performs poorly.
    13741376It is hypothesized that CAF has an aggressive work-stealing algorithm that eagerly attempts to steal.
    13751377This results in poor performance in benchmarks with small messages containing little work per message.
    1376 On the other hand, in \ref{f:MatrixAMD} CAF performs much better since each message has a large amount of work, and few messages are sent, so the eager work stealing allows for the clean up of loose ends to occur faster.
     1378In comparison with the other systems, \uC does well on the repeat benchmark since it does not have work stealing.
     1379The client of this experiment is long-running and maintains a lot of state, as it needs to know the handles of all the servers.
     1380Stealing the client, or its mail queue in \CFA's inverted model, incurs a high cost due to cache invalidation.
     1381As such, stealing the client can result in a performance hit.
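As a rough illustration of why the client is expensive to move, the following sketch shows the shape of such a scatter/gather client; the types and the helpers @send_request@ and @on_reply@ are hypothetical, not the benchmark's actual code.
\begin{cfa}
#include <stddef.h>

typedef struct server server;						// opaque handle to a server actor

struct client {
	server ** servers;								// handles to every server: large, frequently touched state
	size_t num_servers;
	size_t outstanding;								// replies still expected this round
	size_t rounds_left;
};

extern void send_request( server * s, struct client * c );

void start_round( struct client * c ) {				// scatter: one message per server
	c->outstanding = c->num_servers;
	for ( size_t i = 0; i < c->num_servers; i += 1 )
		send_request( c->servers[i], c );
}

void on_reply( struct client * c ) {				// gather: count replies
	if ( --c->outstanding == 0 && --c->rounds_left > 0 )
		start_round( c );							// all replies received: begin the next round
}
\end{cfa}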
     1382
     1383Finally, Figures~\ref{f:cfaMatrixAMD} and~\ref{f:cfaMatrixIntel} show the effects of the stealing heuristics for the matrix-multiply benchmark.
     1384Here, there is negligible performance difference across stealing heuristics, likely due to the long-running workload of each message.
     1385
     1386Stealing can still improve performance marginally in the matrix-multiply benchmark.
     1387In Figure~\ref{f:MatrixAMD}, CAF performs much better; few messages are sent and each carries a large amount of work, so eager work stealing cleans up loose ends faster.
    13771388This hypothesis stems from experimentation with \CFA.
    13781389CAF uses a randomized work stealing heuristic.
    13791390Tuning the \CFA actor system to steal work much more eagerly with randomized victim selection heuristics provided similar results to what CAF achieved in the matrix benchmark.
    1380 This experimental tuning performed much worse on all other microbenchmarks that we present, since they all perform a small amount of work per message.
    1381 
    1382 In comparison with the other systems \uC does well on the repeat benchmark since it does not have work stealing.
    1383 The client of this experiment is long running and maintains a lot of state, as it needs to know the handles of all the servers.
    1384 When stealing the client or its respective queue (in \CFA's inverted model), moving the client incurs a high cost due to cache invalidation.
    1385 As such stealing the client can result in a hit in performance.}
    1386 
    1387 Finally, Figures~\ref{f:cfaMatrixAMD} and~\ref{f:cfaMatrixIntel} show the effects of the stealing heuristics for the matrix-multiple benchmark.
    1388 Here, there is negligible performance difference across stealing heuristics.
     1391This experimental tuning performed much worse on all other microbenchmarks that we present, since they all perform a small amount of work per message, which may partially explain CAF's poor performance on other benchmarks.
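A randomized victim selection of the kind CAF and the tuned \CFA variant use can be sketched as follows, again with illustrative names rather than CAF's or \CFA's actual code.
\begin{cfa}
#include <stdlib.h>

// Illustrative sketch: pick a victim uniformly at random, avoiding self.
// A real implementation would use a per-thread PRNG rather than rand().
size_t choose_random_victim( size_t self, size_t num_workers ) {
	size_t victim = rand() % num_workers;
	if ( victim == self ) victim = ( victim + 1 ) % num_workers;
	return victim;
}
\end{cfa}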
    13891392
    13901393\begin{figure}