Timestamp:
Aug 30, 2022, 6:30:32 PM (2 years ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
4858a88
Parents:
a8dd247 (diff), 01ba701 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc/theses/thierry_delisle_PhD/thesis
Files:
2 edited

  • doc/theses/thierry_delisle_PhD/thesis/Makefile

    ra8dd247 ra0dbf20  
    198198churn_low_nasus_ns_FLAGS = --MaxY=5000
    199199
     200locality_share_jax_ops_FLAGS = --MaxY=40000000
     201locality_noshare_jax_ops_FLAGS = --MaxY=40000000
     202locality_share_jax_ns_FLAGS = --MaxY=10000
     203locality_noshare_jax_ns_FLAGS = --MaxY=10000
     204
     205locality_share_nasus_ops_FLAGS = --MaxY=60000000
     206locality_noshare_nasus_ops_FLAGS = --MaxY=60000000
     207locality_share_nasus_ns_FLAGS = --MaxY=10000
     208locality_noshare_nasus_ns_FLAGS = --MaxY=10000
     209
    200210build/result.%.ns.svg : data/% Makefile ../../../../benchmark/plot.py | ${Build}
    201211        ../../../../benchmark/plot.py -f $< -o $@ -y "ns per ops/procs" $($(subst .,_,$*)_ns_FLAGS)
  • doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex

    ra8dd247 ra0dbf20  
    132132        \caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle count.
    133133        For throughput, higher is better, for scalability, lower is better.
    134         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     134        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    135135        \label{fig:cycle:jax}
    136136\end{figure}
     
    164164        \caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
    165165        For throughput, higher is better, for scalability, lower is better.
    166         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     166        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    167167        \label{fig:cycle:nasus}
    168168\end{figure}
     
    170170\subsection{Results}
    171171
    172 Figure~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
    173 Looking at the left column on Intel first, Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns}, which shows the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
     172Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
      173Looking at the left column on Intel first, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns}, which show the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
    174174\CFA, Go and Tokio all obtain effectively the same throughput performance.
    175175Libfibre is slightly behind in this case but still scales decently.
     
     178178As expected, this pattern repeats between \proc counts 72 and 96.
    179179
    180 Looking next at the right column on Intel, Figure~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns}, which shows the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
      180Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns}, which show the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
     181181\CFA and Tokio obtain very similar results overall, but Tokio shows more variation in the results.
     182182Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
     
     188188Looking now at the results for the AMD architecture, Figure~\ref{fig:cycle:nasus}, the story is overall similar to Intel, with close to double the performance but slightly increased variation and some differences in the details.
     189189Note that the maximums of the Y-axes on Intel and AMD differ significantly.
    190 Looking at the left column first, Figure~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
     190Looking at the left column first, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
    191191However, as the number of \procs grows higher, the results on AMD show notably more variability than on Intel.
    192192The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
    193 Looking next at the right column, Figure~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
     193Looking next at the right column, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
     194194This differs from Intel, where Tokio behaved like \CFA rather than like Go.
    195195Again, the same performance increase for libfibre is visible when running fewer \ats.
     
    253253        \caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
    254254        For throughput, higher is better, for scalability, lower is better.
    255         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     255        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    256256        \label{fig:yield:jax}
    257257\end{figure}
     
    259259\subsection{Results}
    260260
    261 Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the results for the yield experiment.
    262 Looking at the left column on Intel first, Figure~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc.
     261Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment.
      262Looking at the left column on Intel first, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns}, which show the results for many \ats, in this case 100 \ats for each \proc.
     263263Note that the Y-axis on this graph is twice as large as on the Intel cycle graph.
     264264A glance at the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
     
    276276This lack of communication is probably why the plateaus due to topology are not present.
    277277
    278 Lookking next at the right column on Intel, Figure~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc.
      278Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns}, which show the results for few threads, in this case 1 \at for each \proc.
    279279As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
     280280This is to be expected since fewer \ats means \procs are more likely to run out of work.
     
    312312        \caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
    313313        For throughput, higher is better, for scalability, lower is better.
    314         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     314        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    315315        \label{fig:yield:nasus}
    316316\end{figure}
     
     318318Looking now at the results for the AMD architecture, Figure~\ref{fig:yield:nasus}, the story is again overall similar to Intel, with increased variation and some differences in the details.
     319319Note that the maximum of the Y-axis on Intel and AMD differs less in @yield@ than in @cycle@.
    320 Looking at the left column first, Figure~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
     320Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
     321321Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
     322322This suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine cannot once hyperthreading is needed.
     323323Go and Tokio still display the same performance collapse as on Intel.
    324 Looking next at the right column, Figure~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime effectively behave the same as they did on the Intel machine.
      324Looking next at the right column, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtimes effectively behave the same as they did on the Intel machine.
     325325At high \at counts, the only difference was Libfibre's scaling, and this difference disappears on the right column.
     326326This suggests that whatever communication overhead it encountered on the left is completely circumvented on the right.
     
     395395        \caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn benchmark on the Intel machine.
    396396        For throughput, higher is better, for scalability, lower is better.
    397         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     397        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    398398        \label{fig:churn:jax}
    399399\end{figure}
     
    402402
     403403Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the results for the churn experiment.
    404 Looking at the left column on Intel first, Figure~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc, all runtime obtain fairly similar throughput for most \proc counts.
      404Looking at the left column on Intel first, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which show the results for many \ats, in this case 100 \ats for each \proc, all runtimes obtain fairly similar throughput for most \proc counts.
    405405\CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
    406406As expected it scales decently up to 48 \procs and then basically plateaus.
     
     422422In this particular benchmark, the inherent chaos, in addition to the small memory footprint, means neither approach wins over the other.
    423423
    424 Looking next at the right column on Intel, Figure~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc, many of the differences between the runtime disappear.
      424Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which show the results for few threads, in this case 1 \at for each \proc, many of the differences between the runtimes disappear.
    425425\CFA outperforms other runtimes by a minuscule margin.
    426426Libfibre follows very closely behind with basically the same performance and scaling.
     
     461461        \caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn benchmark on the AMD machine.
    462462        For throughput, higher is better, for scalability, lower is better.
    463         Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
     463        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    464464        \label{fig:churn:nasus}
    465465\end{figure}
     
    467467
     468468Looking now at the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
    469 Looking at the left column first, Figure~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
     469Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
     470470\CFA suffers particularly from larger variations at higher \proc counts, but almost all runs still outperform the other runtimes.
    471471Go still produces intriguing results in this case and even more intriguingly, the results have fairly low variation.
     
    484484I did not further investigate what causes these unusual results.
    485485
    486 Looking next at the right column, Figure~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, like for Intel all runtime obtain overall similar throughput between the left and right column.
      486Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, as on Intel, all runtimes obtain overall similar throughput between the left and right columns.
    487487\CFA, Libfibre and Tokio all have very close results.
    488488Go still suffers from poor scalability but is now unusual in a different way.
     
    592592                \label{fig:locality:jax:noshare:ns}
    593593        }
    594         \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     594        \caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
     595        For throughput, higher is better, for scalability, lower is better.
     596        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    595597        \label{fig:locality:jax}
    596598\end{figure}
     
    621623                \label{fig:locality:nasus:noshare:ns}
    622624        }
    623         \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
     625        \caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
     626        For throughput, higher is better, for scalability, lower is better.
     627        Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the minimums.}
    624628        \label{fig:locality:nasus}
    625629\end{figure}
    626630
    627 Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} shows the results on Intel and AMD respectively.
     631Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment.
     628632In both cases, the graphs on the left column show the results for the @share@ variation and the graphs on the right column show the results for the @noshare@ variation.
    629 
    630 On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
    631 On the left of the figure showing the results for the shared variation, where \CFA and Tokio slightly outperform libfibre as expected.
    632 And correspondingly on the right, we see the expected performance inversion where libfibre now outperforms \CFA and Tokio.
      633Looking at the left column on Intel first, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns}, which show the results for the @share@ variation.
     634\CFA and Tokio slightly outperform libfibre, as expected based on their \ats placement approach.
     635\CFA and Tokio both unpark locally and do not suffer cache misses on the transferred array.
     636Libfibre on the other hand unparks remotely, and as such the unparked \at is likely to miss on the shared data.
     637Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
    633638Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
    634 Presumably the reason why Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.
    635 
    636 Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
    637 \todo{why is cfa slower?}
    638 Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.
      639As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary.
     640
      641Looking next at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns}, which show the results for the @noshare@ variation.
      642The graphs show the expected performance inversion, where libfibre now outperforms \CFA and Tokio.
     643Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array.
      644This leaves the \at data structure and the remote queue as the only likely sources of cache misses.
      645Results show both are amortized fairly well in this case.
     646\CFA and Tokio both unpark locally and as a result suffer a marginal performance degradation from the cache miss on the array.
     647
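To make the mechanism behind the two variations concrete, the following is a minimal sketch in plain C, with POSIX semaphores standing in for the runtimes' park and unpark operations; the @at_t@ type, the array size, and the two-\at ping-pong in @main@ are illustrative assumptions and not the thesis benchmark code. In the @share@ case the array pointer travels with the wakeup, so the handoff cost depends on whether the unparked \at runs where the array is already hot in cache; in the @noshare@ case each \at keeps its own array and only the \at metadata moves.
\begin{verbatim}
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

#define ARRAY_SZ 1024                      // small working set that fits in cache

typedef struct at {
    sem_t park;                            // parked until another at posts (unparks) us
    int * data;                            // array processed on every handoff
    struct at * next;                      // at to unpark after processing
} at_t;

static volatile long sink;                 // keep the array reads from being optimized away

static void process( int * a ) {           // touch every element of the array
    long sum = 0;
    for ( int i = 0; i < ARRAY_SZ; i++ ) sum += a[i];
    sink = sum;
}

// One handoff: park until woken, process the array, optionally hand the
// array to the next at (the "share" variation), then unpark it.
static void step( at_t * self, int share ) {
    sem_wait( &self->park );
    process( self->data );
    if ( share ) self->next->data = self->data;
    sem_post( &self->next->park );
}

static void * worker( void * arg ) {
    at_t * self = arg;
    for ( int i = 0; i < 1000; i++ ) step( self, 1 /* share */ );
    return NULL;
}

int main( void ) {
    static int arr[2][ARRAY_SZ];
    at_t a = { .data = arr[0] }, b = { .data = arr[1] };
    a.next = &b;  b.next = &a;
    sem_init( &a.park, 0, 1 );             // 'a' starts with the token
    sem_init( &b.park, 0, 0 );
    pthread_t ta, tb;
    pthread_create( &ta, NULL, worker, &a );
    pthread_create( &tb, NULL, worker, &b );
    pthread_join( ta, NULL );
    pthread_join( tb, NULL );
    return 0;
}
\end{verbatim}
Pinning the two kernel threads to cores that do or do not share a cache and timing the @share@ and @noshare@ settings reproduces, at the kernel-thread level, the cache-transfer cost this benchmark is designed to expose.
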
      648Looking now at the results for the AMD architecture, Figure~\ref{fig:locality:nasus}, the story is overall similar to the results on Intel.
      649Again, overall performance is higher and slightly more variation is visible.
     650Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
      651This is expected from the AMD server, which has smaller, narrower caches that magnify the cost of processing the array.
     652Go still sees the same poor performance as on Intel.
     653
      654Finally, looking at the right column, Figures~\ref{fig:locality:nasus:noshare:ops} and \ref{fig:locality:nasus:noshare:ns}, the same performance inversion as on Intel is present between libfibre and \CFA/Tokio.
     655Go still sees the same poor performance.
     656
     657Overall, this experiment mostly demonstrates the two options available when unparking a \at.
     658Depending on the workload, either of these options can be the appropriate one.
      659Since it is prohibitively difficult to detect which approach is appropriate, all runtimes must choose one of the two and live with the consequences.
     660
      661Once again, this demonstrates that \CFA achieves performance equivalent to the other runtimes, in this case matching the faster Tokio rather than Go, which trails behind.
    639662
    640663\section{Transfer}
     
    723746\end{tabular}
    724747\end{centering}
    725 \caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
     748\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
      749DNC stands for ``did not complete'', meaning that 5 seconds after a new leader was decided, some \ats still had not acknowledged it.}
    726750\label{fig:transfer:res}
    727751\end{figure}
     
     733757The semaphore variation is denoted ``Park'', where the number of \ats dwindles as the new leader is acknowledged.
    734758The yielding variation is denoted ``Yield''.
    735 The experiment was only run for the extremes of the number of cores since the scaling per core behaves like previous experiments.
      759The experiment was run only for the extremes of the \proc count, since scaling is not the focus of this experiment.
     760
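To make the ``Park''/``Yield'' distinction concrete, the following is a minimal runnable sketch of the ``Yield'' variation in plain C, where kernel threads and @sched_yield@ stand in for the user-level \ats and their scheduler, and a fixed leader repeats rounds instead of the rotating leadership of the real benchmark; thread and round counts are illustrative assumptions. Because the kernel scheduler is preemptive and fair, this sketch always completes; it only illustrates the acknowledgement pattern, not the fairness problem the benchmark probes.
\begin{verbatim}
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

enum { NATS = 4 };                         // 1 leader + 3 followers

static atomic_long round_id = 1;           // current round announced by the leader
static atomic_long acked[NATS];            // last round acknowledged by each at
static atomic_int done;

// Follower ("Yield" variation): acknowledge the current round whenever this
// at gets to run, then yield.  If the scheduler never runs a follower, the
// leader below spins forever -- the DNC case in the results table.
static void * follower( void * arg ) {
    long id = (long)arg;
    while ( ! atomic_load( &done ) ) {
        atomic_store( &acked[id], atomic_load( &round_id ) );
        sched_yield();
    }
    return NULL;
}

// Leader: announce a new round, spin (yielding, never blocking) until every
// other at has acknowledged it, and report how long that took.
static void * leader( void * arg ) {
    (void)arg;
    for ( int r = 0; r < 100; r++ ) {
        long cur = atomic_fetch_add( &round_id, 1 ) + 1;
        struct timespec t0, t1;
        clock_gettime( CLOCK_MONOTONIC, &t0 );
        for ( int i = 1; i < NATS; i++ )
            while ( atomic_load( &acked[i] ) < cur )
                sched_yield();
        clock_gettime( CLOCK_MONOTONIC, &t1 );
        printf( "round %d: %ld ns\n", r,
            (long)((t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec)) );
    }
    atomic_store( &done, 1 );
    return NULL;
}

int main( void ) {
    pthread_t th[NATS];
    pthread_create( &th[0], NULL, leader, NULL );
    for ( long i = 1; i < NATS; i++ )
        pthread_create( &th[i], NULL, follower, (void *)i );
    for ( int i = 0; i < NATS; i++ )
        pthread_join( th[i], NULL );
    return 0;
}
\end{verbatim}
The ``Park'' variation would instead have each follower block on a semaphore after acknowledging and be unparked when the next round starts, so the number of runnable \ats dwindles over a round, as described above.
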
      761The first two columns show the results for the semaphore variation on Intel.
      762While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve results that are fairly close.
      763Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered close.
     764
      765Looking at the next two columns, the results for the yield variation on Intel, the story is very different.
     766\CFA achieves better latencies, presumably due to the lack of synchronization on the semaphore.
      767Neither Libfibre nor Tokio completes the experiment.
      768Both runtimes use classical work-stealing scheduling; since none of the work queues is ever emptied, no load balancing occurs.
     769Go does complete the experiment, but with drastically higher latency:
     770latency at 2 \procs is $350\times$ higher than \CFA and $70\times$ higher at 192 \procs.
      771This is because Go also has a classic work-stealing scheduler, but it adds preemption, which interrupts the spinning leader after a period.
     772
      773Looking now at the AMD architecture, the results show effectively the same story.
      774The first two columns show all runtimes obtaining results well within $10\times$ of each other.
     775The next two columns again show \CFA producing low latencies while Libfibre and Tokio do not complete the experiment.
     776Go still has notably higher latency but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
     777
     736778This experiment clearly demonstrates that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
    737779The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.