Index: doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision c702d21da497a112fe437d8cffcac3dcfb50bd9a)
+++ doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision 50ff1d060ac447f17f9341318d7d3af00edb9a87)
@@ -132,5 +132,5 @@
 	\caption[Cycle Benchmark on Intel]{Cycle Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle count.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:cycle:jax}
 \end{figure}
@@ -164,5 +164,5 @@
 	\caption[Cycle Benchmark on AMD]{Cycle Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count, 5 \ats per cycle, and different cycle counts.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:cycle:nasus}
 \end{figure}
@@ -170,6 +170,6 @@
 \subsection{Results}
 
-Figure~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
-Looking at the left column on Intel first, Figure~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns}, which shows the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
+Figures~\ref{fig:cycle:jax} and \ref{fig:cycle:nasus} show the results for the cycle experiment.
+Looking at the left column on Intel first, Figures~\ref{fig:cycle:jax:ops} and \ref{fig:cycle:jax:ns} show the results for many \ats, in this case 100 cycles of 5 \ats for each \proc.
 \CFA, Go and Tokio all obtain effectively the same throughput performance.
 Libfibre is slightly behind in this case but still scales decently.
@@ -178,5 +178,5 @@
 As expected, this pattern repeats between \proc counts 72 and 96.
 
-Looking next at the right column on Intel, Figure~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns}, which shows the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
+Looking next at the right column on Intel, Figures~\ref{fig:cycle:jax:low:ops} and \ref{fig:cycle:jax:low:ns} show the results for few threads, in this case 1 cycle of 5 \ats for each \proc.
 \CFA and Tokio obtain very similar results overall, but Tokio shows more variations in the results.
 Go achieves slightly better performance than \CFA and Tokio, but all three display significantly worse performance compared to the left column.
@@ -188,8 +188,8 @@
 Looking now at the results for the AMD architecture, Figure~\ref{fig:cycle:nasus}, the results show a story that is overall similar to the results on the Intel, with close to double the performance overall but with slightly increased variation and some differences in the details.
 Note that the maximum of the Y-axis on Intel and AMD differ significantly.
-Looking at the left column first, Figure~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
+Looking at the left column first, Figures~\ref{fig:cycle:nasus:ops} and \ref{fig:cycle:nasus:ns}, unlike Intel, on AMD all 4 runtimes achieve very similar throughput and scalability.
 However, as the number of \procs grows higher, the results on AMD show notably more variability than on Intel.
 The different performance improvements and plateaus are due to cache topology and appear at the expected \proc counts of 64, 128 and 192, for the same reasons as on Intel.
-Looking next at the right column, Figure~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
+Looking next at the right column, Figures~\ref{fig:cycle:nasus:low:ops} and \ref{fig:cycle:nasus:low:ns}, Tokio and Go have the same throughput performance, while \CFA is slightly slower.
 This is different than on Intel, where Tokio behaved like \CFA rather than behaving like Go.
 Again, the same performance increase for libfibre is visible when running fewer \ats.
@@ -253,5 +253,5 @@
 	\caption[Yield Benchmark on Intel]{Yield Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:yield:jax}
 \end{figure}
@@ -259,6 +259,6 @@
 \subsection{Results}
 
-Figures~\ref{fig:yield:jax} and~\ref{fig:yield:nasus} show the results for the yield experiment.
-Looking at the left column on Intel first, Figure~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc.
+Figures~\ref{fig:yield:jax} and \ref{fig:yield:nasus} show the results for the yield experiment.
+Looking at the left column on Intel first, Figures~\ref{fig:yield:jax:ops} and \ref{fig:yield:jax:ns} show the results for many \ats, in this case 100 \ats for each \proc.
 Note that the Y-axis on this graph is twice as large as that of the Intel cycle graph.
 A visual glance between the left columns of the cycle and yield graphs confirms my claim that the yield benchmark is unreliable.
@@ -276,5 +276,5 @@
 This lack of communication is probably why the plateaus due to topology are not present.
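+To make this absence of communication concrete, the following sketch is a Go analogue of the per-\at loop in this benchmark (illustrative only, not the \CFA code used in the experiments): each \at repeatedly yields back to its scheduler and increments a private counter, so \ats never touch shared data or synchronize with one another.
+\begin{verbatim}
+// Illustrative Go analogue of a yield-benchmark worker (not the thesis code):
+// each worker only yields and counts, so no data is shared between workers.
+package main
+
+import (
+    "fmt"
+    "runtime"
+    "sync/atomic"
+    "time"
+)
+
+func main() {
+    counters := make([]atomic.Uint64, runtime.GOMAXPROCS(0)) // one per worker
+    for i := range counters {
+        go func(c *atomic.Uint64) {
+            for {
+                runtime.Gosched() // yield back to the scheduler
+                c.Add(1)          // private counter: no communication
+            }
+        }(&counters[i])
+    }
+    time.Sleep(time.Second) // measurement window
+    var total uint64
+    for i := range counters {
+        total += counters[i].Load()
+    }
+    fmt.Println("yields:", total)
+}
+\end{verbatim}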
 
-Lookking next at the right column on Intel, Figure~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc.
+Looking next at the right column on Intel, Figures~\ref{fig:yield:jax:low:ops} and \ref{fig:yield:jax:low:ns} show the results for few threads, in this case 1 \at for each \proc.
 As for @cycle@, \CFA's cost of idle sleep comes into play in a very significant way in Figure~\ref{fig:yield:jax:low:ns}, where the scaling is not flat.
 This is to be expected since fewer \ats means \procs are more likely to run out of work.
@@ -312,5 +312,5 @@
 	\caption[Yield Benchmark on AMD]{Yield Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:yield:nasus}
 \end{figure}
@@ -318,9 +318,9 @@
 Looking now at the results for the AMD architecture, Figure~\ref{fig:yield:nasus}, the results again show a story that is overall similar to the results on the Intel, with increased variation and some differences in the details.
 Note that the maximum of the Y-axis on Intel and AMD differ less in @yield@ than @cycle@.
-Looking at the left column first, Figure~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
+Looking at the left column first, Figures~\ref{fig:yield:nasus:ops} and \ref{fig:yield:nasus:ns}, \CFA achieves very similar throughput and scaling.
 Libfibre still outpaces all other runtimes, but it encounters a performance hit at 64 \procs.
 This suggests some amount of communication between the \procs that the Intel machine is able to mask but the AMD machine is not once hyperthreading is needed.
 Go and Tokio still display the same performance collapse as on Intel.
-Looking next at the right column, Figure~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtime effectively behave the same as they did on the Intel machine.
+Looking next at the right column, Figures~\ref{fig:yield:nasus:low:ops} and \ref{fig:yield:nasus:low:ns}, all runtimes effectively behave the same as they did on the Intel machine.
 At high \at counts, the only difference was Libfibre's scaling, and this difference disappears on the right column.
 This suggests that whatever communication bottleneck libfibre encountered on the left is completely circumvented on the right.
@@ -395,5 +395,5 @@
 	\caption[Churn Benchmark on Intel]{\centering Churn Benchmark on Intel\smallskip\newline Throughput and scalability of the Churn benchmark on the Intel machine.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:churn:jax}
 \end{figure}
@@ -402,5 +402,5 @@
 
 Figures~\ref{fig:churn:jax} and \ref{fig:churn:nasus} show the results for the churn experiment.
-Looking at the left column on Intel first, Figure~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which shows the results for many \ats, in this case 100 \ats for each \proc, all runtime obtain fairly similar throughput for most \proc counts.
+Looking at the left column on Intel first, Figures~\ref{fig:churn:jax:ops} and \ref{fig:churn:jax:ns}, which show the results for many \ats, in this case 100 \ats for each \proc, all runtimes obtain fairly similar throughput for most \proc counts.
 \CFA does very well on a single \proc but quickly loses its advantage over the other runtimes.
 As expected it scales decently up to 48 \procs and then basically plateaus.
@@ -422,5 +422,5 @@
 In this particular benchmark, the inherent chaos, in addition to the small memory footprint, means neither approach wins over the other.
 
-Looking next at the right column on Intel, Figure~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which shows the results for few threads, in this case 1 \at for each \proc, many of the differences between the runtime disappear.
+Looking next at the right column on Intel, Figures~\ref{fig:churn:jax:low:ops} and \ref{fig:churn:jax:low:ns}, which show the results for few threads, in this case 1 \at for each \proc, many of the differences between the runtimes disappear.
 \CFA outperforms other runtimes by a minuscule margin.
 Libfibre follows very closely behind with basically the same performance and scaling.
@@ -461,5 +461,5 @@
 	\caption[Churn Benchmark on AMD]{\centering Churn Benchmark on AMD\smallskip\newline Throughput and scalability of the Churn benchmark on the AMD machine.
 	For throughput, higher is better, for scalability, lower is better.
-	Each series represent 15 independent runs, the dashed lines are maximums of each series while the solid lines are the median and the dotted lines are the mnimums.}
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:churn:nasus}
 \end{figure}
@@ -467,5 +467,5 @@
 
 Looking now at the results for the AMD architecture, Figure~\ref{fig:churn:nasus}, the results show a somewhat different story.
-Looking at the left column first, Figure~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
+Looking at the left column first, Figures~\ref{fig:churn:nasus:ops} and \ref{fig:churn:nasus:ns}, \CFA, Libfibre and Tokio all produce decent scalability.
 \CFA suffers particularly from larger variations at higher \proc counts, but almost all runs still outperform the other runtimes.
 Go still produces intriguing results in this case and even more intriguingly, the results have fairly low variation.
@@ -484,5 +484,5 @@
 I did not further investigate what causes these unusual results.
 
-Looking next at the right column, Figure~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, like for Intel all runtime obtain overall similar throughput between the left and right column.
+Looking next at the right column, Figures~\ref{fig:churn:nasus:low:ops} and \ref{fig:churn:nasus:low:ns}, like for Intel, all runtimes obtain overall similar throughput between the left and right columns.
 \CFA, Libfibre and Tokio all have very close results.
 Go still suffers from poor scalability but is now unusual in a different way.
@@ -592,5 +592,7 @@
 		\label{fig:locality:jax:noshare:ns}
 	}
-	\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
+	\caption[Locality Benchmark on Intel]{Locality Benchmark on Intel\smallskip\newline Throughput and scalability as a function of \proc count.
+	For throughput, higher is better, for scalability, lower is better.
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:locality:jax}
 \end{figure}
@@ -621,20 +623,41 @@
 		\label{fig:locality:nasus:noshare:ns}
 	}
-	\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count. For throughput, higher is better, for scalability, lower is better. Each series represent 15 independent runs, the dotted lines are extremes while the solid line is the medium.}
+	\caption[Locality Benchmark on AMD]{Locality Benchmark on AMD\smallskip\newline Throughput and scalability as a function of \proc count.
+	For throughput, higher is better, for scalability, lower is better.
+	Each series represents 15 independent runs; the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.}
 	\label{fig:locality:nasus}
 \end{figure}
 
-Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} shows the results on Intel and AMD respectively.
+Figures~\ref{fig:locality:jax} and \ref{fig:locality:nasus} show the results for the locality experiment.
 In both cases, the graphs on the left column show the results for the @share@ variation and the graphs on the right column show the results for the @noshare@ variation.
-
-On Intel, Figure~\ref{fig:locality:jax} shows Go trailing behind the 3 other runtimes.
-On the left of the figure showing the results for the shared variation, where \CFA and Tokio slightly outperform libfibre as expected.
-And correspondingly on the right, we see the expected performance inversion where libfibre now outperforms \CFA and Tokio.
+Looking at the left column on Intel first, Figures~\ref{fig:locality:jax:share:ops} and \ref{fig:locality:jax:share:ns} show the results for the @share@ variation.
+\CFA and Tokio slightly outperform libfibre, as expected based on their \ats placement approach.
+\CFA and Tokio both unpark locally and do not suffer cache misses on the transferred array.
+Libfibre on the other hand unparks remotely, and as such the unparked \at is likely to miss on the shared data.
+Go trails behind in this experiment, presumably for the same reasons that were observable in the churn benchmark.
 Otherwise the results are similar to the churn benchmark, with lower throughput due to the array processing.
-Presumably the reason why Go trails behind are the same as in Figure~\ref{fig:churn:nasus}.
-
-Figure~\ref{fig:locality:nasus} shows the same experiment on AMD.
-\todo{why is cfa slower?}
-Again, we see the same story, where Tokio and libfibre swap places and Go trails behind.
+As for most previous results, all runtimes suffer a performance hit after 48 \procs, which is the socket boundary.
+
+Looking next at the right column on Intel, Figures~\ref{fig:locality:jax:noshare:ops} and \ref{fig:locality:jax:noshare:ns} show the results for the @noshare@ variation.
+The graphs show the expected performance inversion where libfibre now outperforms \CFA and Tokio.
+Indeed, in this case, unparking remotely means the unparked \at is less likely to suffer a cache miss on the array.
+This leaves the \at data structure and the remote queue as the only likely sources of cache misses.
+Results show both are amortized fairly well in this case.
+\CFA and Tokio both unpark locally and as a result suffer a marginal performance degradation from the cache miss on the array.
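+
+To make the two unparking scenarios concrete, the following sketch shows a Go analogue of the workload structure (illustrative only, not the \CFA code used in the experiments, and with the hand-off arranged as a simple ring, which is a simplification): each worker processes an array, wakes a waiting worker, and then waits itself; in the @share@ variation the array travels with the wake-up, while in the @noshare@ variation each worker keeps processing its own array.
+Which \proc the woken worker resumes on, and therefore whether the array it touches is still warm in cache, is decided by the runtime's unparking policy; the sketch only shows the workload structure, while the policy is precisely what separates \CFA and Tokio from libfibre in this experiment.
+\begin{verbatim}
+// Illustrative sketch of the locality workload (not the thesis code).
+// Each worker processes an array, wakes the next waiting worker, then waits.
+// "share": the array is handed off with the wake-up.
+// "noshare": each worker keeps processing its own private array.
+package main
+
+import (
+    "fmt"
+    "runtime"
+    "sync/atomic"
+    "time"
+)
+
+const (
+    arraySize = 4096
+    share     = true // flip to false for the "noshare" variation
+)
+
+func worker(in, out chan []int, own []int, ops *uint64) {
+    for handed := range in { // block ("park") until a predecessor wakes us
+        a := own
+        if share {
+            a = handed // work on the array the waker just touched
+        }
+        for i := range a { // this access pattern is where locality matters
+            a[i]++
+        }
+        atomic.AddUint64(ops, 1)
+        out <- a // wake ("unpark") the next worker, then park again
+    }
+}
+
+func main() {
+    procs := runtime.GOMAXPROCS(0)
+    n := procs * 4 // a few workers per processor
+    chans := make([]chan []int, n)
+    for i := range chans {
+        chans[i] = make(chan []int, 1)
+    }
+    var ops uint64
+    for i := 0; i < n; i++ {
+        go worker(chans[i], chans[(i+1)%n], make([]int, arraySize), &ops)
+    }
+    for p := 0; p < procs; p++ {
+        chans[p*4] <- make([]int, arraySize) // roughly one token per processor
+    }
+    time.Sleep(time.Second) // measurement window
+    fmt.Println("hand-offs:", atomic.LoadUint64(&ops))
+}
+\end{verbatim}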
+
+Looking now at the results for the AMD architecture, Figure~\ref{fig:locality:nasus}, the results show a story that is overall similar to the results on the Intel.
+Again overall performance is higher and slightly more variation is visible.
+Looking at the left column first, Figures~\ref{fig:locality:nasus:share:ops} and \ref{fig:locality:nasus:share:ns}, \CFA and Tokio still outperform libfibre, this time more significantly.
+This is expected from the AMD server, which has smaller and narrower caches that magnify the costs of processing the array.
+Go still sees the same poor performance as on Intel.
+
+Finally looking at the right column, Figures~\ref{fig:locality:nasus:noshare:ops} and \ref{fig:locality:nasus:noshare:ns}, like on Intel, the same performance inversion is present between libfibre and \CFA/Tokio.
+Go still sees the same poor performance.
+
+Overall, this experiment mostly demonstrates the two options available when unparking a \at.
+Depending on the workload, either of these options can be the appropriate one.
+Since it is prohibitively difficult to detect which approach is appropriate, all runtimes must choose one of the two and live with the consequences.
+
+Once again, this demonstrates that \CFA achieves performance equivalent to the other runtimes, in this case matching the faster Tokio rather than the trailing Go.
 
 \section{Transfer}
@@ -723,5 +746,6 @@
 \end{tabular}
 \end{centering}
-\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at. DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
+\caption[Transfer Benchmark on Intel and AMD]{Transfer Benchmark on Intel and AMD\smallskip\newline Average measurement of how long it takes for all \ats to acknowledge the leader \at.
+DNC stands for ``did not complete'', meaning that after 5 seconds of a new leader being decided, some \ats still had not acknowledged the new leader.}
 \label{fig:transfer:res}
 \end{figure}
@@ -733,5 +757,23 @@
 The semaphore variation is denoted ``Park'', where the number of \ats dwindles down as the new leader is acknowledged.
 The yielding variation is denoted ``Yield''.
-The experiment was only run for the extremes of the number of cores since the scaling per core behaves like previous experiments.
+The experiment was only run for the extremes of the number of \procs since the scaling is not the focus of this experiment.
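+
+To make the two variations concrete, the following sketch shows a Go analogue of the benchmark (illustrative only, not the \CFA code used in the experiments, and the exact hand-over protocol is an assumption): leadership rotates among the workers, the leader spins until every other worker has acknowledged the current round, and the followers either park until the leader wakes them (``Park'') or repeatedly yield until the round advances (``Yield'').
+\begin{verbatim}
+// Illustrative sketch of the transfer benchmark (not the thesis code).
+// Each round one worker leads: it spins until all others acknowledge the
+// round. Followers either block until the leader wakes them ("Park") or
+// repeatedly yield until the round advances ("Yield").
+package main
+
+import (
+    "fmt"
+    "runtime"
+    "sync"
+    "sync/atomic"
+    "time"
+)
+
+const (
+    rounds = 1000
+    doPark = false // true: "Park" variation, false: "Yield" variation
+)
+
+func main() {
+    n := int64(runtime.GOMAXPROCS(0)) // one worker per processor
+    round := int64(1)                 // current round, shared by all workers
+    var acks int64                    // acknowledgments received this round
+    wake := make([]chan struct{}, n)
+    for i := range wake {
+        wake[i] = make(chan struct{}, 1)
+    }
+    var wg sync.WaitGroup
+    start := time.Now()
+    for id := int64(0); id < n; id++ {
+        wg.Add(1)
+        go func(id int64) {
+            defer wg.Done()
+            for r := int64(1); r <= rounds; r++ {
+                if r%n == id { // leader: spin until everyone acknowledged
+                    for atomic.LoadInt64(&acks) != n-1 {
+                        // busy-wait: whether the followers behind this
+                        // spinning leader ever run is what the benchmark probes
+                    }
+                    atomic.StoreInt64(&acks, 0)
+                    atomic.StoreInt64(&round, r+1) // publish the next round
+                    if doPark {
+                        for i := int64(0); i < n; i++ {
+                            if i != id {
+                                wake[i] <- struct{}{} // unpark the followers
+                            }
+                        }
+                    }
+                } else { // follower: acknowledge the leader, then wait
+                    for atomic.LoadInt64(&round) != r {
+                        runtime.Gosched()
+                    }
+                    atomic.AddInt64(&acks, 1)
+                    if doPark {
+                        <-wake[id] // park until the next leader wakes us
+                    } else {
+                        for atomic.LoadInt64(&round) == r {
+                            runtime.Gosched() // yield until the round advances
+                        }
+                    }
+                }
+            }
+        }(id)
+    }
+    wg.Wait()
+    fmt.Println("average hand-over latency:", time.Since(start)/rounds)
+}
+\end{verbatim}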
+
+The first two columns show the results for the semaphore variation on Intel.
+While there are some differences in latencies, with \CFA consistently the fastest and Tokio the slowest, all runtimes achieve results that are fairly close.
+Again, this experiment is meant to highlight major differences, so latencies within $10\times$ of each other are considered close.
+
+Looking at the next two columns, the results for the yield variation on Intel, the story is very different.
+\CFA achieves better latencies, presumably due to the lack of synchronization on the semaphore.
+Neither Libfibre nor Tokio completes the experiment.
+Both runtimes use classical work-stealing scheduling, and since none of the work-queues are ever emptied, no load balancing occurs.
+Go does complete the experiment, but with drastically higher latency:
+latency at 2 \procs is $350\times$ higher than \CFA and $70\times$ higher at 192 \procs.
+This is because Go also has a classic work-stealing scheduler, but it adds preemption which interrupts the spinning leader after a period.
+
+Looking now at the results for the AMD architecture, the results show effectively the same story.
+The first two columns show all runtimes obtaining results well within $10\times$ of each other.
+The next two columns again show \CFA producing low latencies while Libfibre and Tokio do not complete the experiment.
+Go still has notably higher latency but the difference is less drastic on 2 \procs, where it produces a $15\times$ difference as opposed to a $100\times$ difference on 256 \procs.
+
 This experiment clearly demonstrates that while the other runtimes achieve similar performance in previous benchmarks, here \CFA achieves significantly better fairness.
 The semaphore variation serves as a control group, where all runtimes are expected to transfer leadership fairly quickly.
