Index: doc/theses/thierry_delisle_PhD/thesis/Makefile
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/Makefile	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/Makefile	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -29,4 +29,5 @@
 PICTURES = ${addsuffix .pstex, \
 	base \
+	base_avg \
 	empty \
 	emptybit \
@@ -38,4 +39,5 @@
 	system \
 	cycle \
+	result.cycle.jax.ops \
 }
 
@@ -112,4 +114,10 @@
 	python3 $< $@
 
+build/result.%.ns.svg : data/% | ${Build}
+	../../../../benchmark/plot.py -f $< -o $@ -y "ns per ops"
+
+build/result.%.ops.svg : data/% | ${Build}
+	../../../../benchmark/plot.py -f $< -o $@ -y "Ops per second"
+
 ## pstex with inverted colors
 %.dark.pstex : fig/%.fig Makefile | ${Build}
Index: doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
+++ doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -0,0 +1,1 @@
+[["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 43606897.0, "Ops per second": 8720908.73, "ns per ops": 114.67, "Ops per threads": 2180344.0, "Ops per procs": 10901724.0, "Ops/sec/procs": 2180227.18, "ns per ops/procs": 458.67}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5010.922033, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 93993568.0, "Total blocks": 93993209.0, "Ops per second": 18757739.07, "ns per ops": 53.31, "Ops per threads": 1174919.0, "Ops per procs": 5874598.0, "Ops/sec/procs": 1172358.69, "ns per ops/procs": 852.98}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 136763517.0, "Ops per second": 27351079.35, "ns per ops": 36.56, "Ops per threads": 1709543.0, "Ops per procs": 8547719.0, "Ops/sec/procs": 1709442.46, "ns per ops/procs": 584.99}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 27778961.0, "Ops per second": 5555545.09, "ns per ops": 180.0, "Ops per threads": 5555792.0, "Ops per procs": 27778961.0, "Ops/sec/procs": 5555545.09, "ns per ops/procs": 180.0}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5009.290878, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 43976310.0, "Total blocks": 43976217.0, "Ops per second": 8778949.17, "ns per ops": 113.91, "Ops per threads": 2198815.0, "Ops per procs": 10994077.0, "Ops/sec/procs": 2194737.29, "ns per ops/procs": 455.64}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5009.151542, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44132300.0, "Total blocks": 44132201.0, "Ops per second": 8810334.37, "ns per ops": 113.5, "Ops per threads": 2206615.0, "Ops per procs": 11033075.0, "Ops/sec/procs": 2202583.59, "ns per ops/procs": 454.01}],["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 46353896.0, "Ops per second": 9270294.11, "ns per ops": 107.87, "Ops per threads": 2317694.0, "Ops per procs": 11588474.0, "Ops/sec/procs": 2317573.53, "ns per ops/procs": 431.49}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 27894379.0, "Ops per second": 5578591.58, "ns per ops": 179.26, "Ops per threads": 5578875.0, "Ops per procs": 27894379.0, "Ops/sec/procs": 5578591.58, "ns per ops/procs": 179.26}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.743463, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32825528.0, "Total blocks": 32825527.0, "Ops per second": 6553645.29, "ns per ops": 152.59, "Ops per threads": 6565105.0, "Ops per procs": 32825528.0, "Ops/sec/procs": 6553645.29, "ns per ops/procs": 152.59}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", 
{"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 138213098.0, "Ops per second": 27640977.5, "ns per ops": 36.18, "Ops per threads": 1727663.0, "Ops per procs": 8638318.0, "Ops/sec/procs": 1727561.09, "ns per ops/procs": 578.85}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5007.914168, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44109513.0, "Total blocks": 44109419.0, "Ops per second": 8807961.06, "ns per ops": 113.53, "Ops per threads": 2205475.0, "Ops per procs": 11027378.0, "Ops/sec/procs": 2201990.27, "ns per ops/procs": 454.13}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5012.121876, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 94130673.0, "Total blocks": 94130291.0, "Ops per second": 18780603.37, "ns per ops": 53.25, "Ops per threads": 1176633.0, "Ops per procs": 5883167.0, "Ops/sec/procs": 1173787.71, "ns per ops/procs": 851.94}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 140936367.0, "Ops per second": 28185668.38, "ns per ops": 35.48, "Ops per threads": 1761704.0, "Ops per procs": 8808522.0, "Ops/sec/procs": 1761604.27, "ns per ops/procs": 567.66}],["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44279585.0, "Ops per second": 8855475.01, "ns per ops": 112.92, "Ops per threads": 2213979.0, "Ops per procs": 11069896.0, "Ops/sec/procs": 2213868.75, "ns per ops/procs": 451.7}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.37392, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32227534.0, "Total blocks": 32227533.0, "Ops per second": 6434730.02, "ns per ops": 155.41, "Ops per threads": 6445506.0, "Ops per procs": 32227534.0, "Ops/sec/procs": 6434730.02, "ns per ops/procs": 155.41}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5011.019789, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 90600569.0, "Total blocks": 90600173.0, "Ops per second": 18080265.66, "ns per ops": 55.31, "Ops per threads": 1132507.0, "Ops per procs": 5662535.0, "Ops/sec/procs": 1130016.6, "ns per ops/procs": 884.94}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.52474, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32861776.0, "Total blocks": 32861775.0, "Ops per second": 6561168.75, "ns per ops": 152.41, "Ops per threads": 6572355.0, "Ops per procs": 32861776.0, "Ops/sec/procs": 6561168.75, "ns per ops/procs": 152.41}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 28097680.0, "Ops per second": 5619274.9, "ns per ops": 177.96, "Ops per threads": 5619536.0, "Ops per procs": 28097680.0, "Ops/sec/procs": 5619274.9, "ns per ops/procs": 177.96}]]
Index: doc/theses/thierry_delisle_PhD/thesis/fig/base.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/base.fig	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/base.fig	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -89,4 +89,10 @@
 	 5700 5210 5550 4950 5250 4950 5100 5210 5250 5470 5550 5470
 	 5700 5210
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3600 5700 3600 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4800 5700 4800 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 6000 5700 6000 1200
 4 2 -1 50 -1 0 12 0.0000 2 135 630 2100 3075 Threads\001
 4 2 -1 50 -1 0 12 0.0000 2 165 450 2100 2850 Ready\001
Index: doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -0,0 +1,107 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+6 6750 4125 7050 4275
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6825 4200 20 20 6825 4200 6845 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6900 4200 20 20 6900 4200 6920 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6975 4200 20 20 6975 4200 6995 4200
+-6
+6 6375 5100 6675 5250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6450 5175 20 20 6450 5175 6470 5175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6525 5175 20 20 6525 5175 6545 5175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6600 5175 20 20 6600 5175 6620 5175
+-6
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3900 2400 300 300 3900 2400 4200 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3900 3300 300 300 3900 3300 4200 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 1500 300 300 5100 1500 5400 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 2400 300 300 5100 2400 5400 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 3300 300 300 5100 3300 5400 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6300 2400 300 300 6300 2400 6600 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6300 3300 300 300 6300 3300 6600 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 4509 3302 300 300 4509 3302 4809 3302
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2700 3300 300 300 2700 3300 3000 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2700 2400 300 300 2700 2400 3000 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3000 3900 3000 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 3900 3600 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4200 3900 4200 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4800 3900 4800 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5400 3900 5400 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 6000 3900 6000 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 6600 3900 6600 4500
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2400 3900 7200 3900 7200 4500 2400 4500 2400 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2700 3300 2700 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3900 3300 3900 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3900 3975 3900 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 2475 5100 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 3300 5100 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 3975 5100 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 6300 3300 6300 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 6300 3975 6300 3600
+2 1 0 1 -1 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2700 3975 2700 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 4275 3000 4275
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4500 3975 4500 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 3375 3000 3375
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 2475 3000 2475
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 3300 5210 3150 4950 2850 4950 2700 5210 2850 5470 3150 5470
+	 3300 5210
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 4500 5210 4350 4950 4050 4950 3900 5210 4050 5470 4350 5470
+	 4500 5210
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 5700 5210 5550 4950 5250 4950 5100 5210 5250 5470 5550 5470
+	 5700 5210
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3600 5700 3600 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4800 5700 4800 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 6000 5700 6000 1200
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 4050 3000 4050
+4 2 -1 50 -1 0 12 0.0000 2 135 630 2100 3075 Threads\001
+4 2 -1 50 -1 0 12 0.0000 2 165 450 2100 2850 Ready\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 4450 MA\001
+4 2 -1 50 -1 0 12 0.0000 2 165 720 2100 4200 Array of\001
+4 2 -1 50 -1 0 12 0.0000 2 150 540 2100 4425 Queues\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 3550 TS\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 2650 TS\001
+4 2 -1 50 -1 0 12 0.0000 2 135 900 2100 5175 Processors\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 4200 TS\001
Index: doc/theses/thierry_delisle_PhD/thesis/text/core.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -3,5 +3,5 @@
 Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state. For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers. In short, the system is neither overloaded nor underloaded.
 
-I believe it is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state can to be pervasive in all states.
+It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state tend to be pervasive in all states.
 
 \section{Design Goals}
@@ -25,5 +25,5 @@
 It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented to do so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.
 
-Similarly the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is equivalent to or lower than other popular languages, I consider the guarantee achieved.
+Similarly, the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
 
 More precisely the scheduler should be:
@@ -33,8 +33,8 @@
 \end{itemize}
 
-\subsection{Fairness vs Scheduler Locality}
+\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
 An important performance factor in modern architectures is cache locality. Waiting for data at lower levels or not present in the cache can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.
 
-For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in part~\ref{Evaluation} on evaluation.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
+For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
 
 However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally, where Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.
@@ -48,23 +48,43 @@
 \end{figure}
 
-\section{Design}
+\subsection{Performance Challenges}\label{pref:challenge}
+While there exists a multitude of potential scheduling algorithms, they generally have to contend with the same performance challenges. Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.
+
+\subsubsection{Scalability}
+The most basic performance challenge of a scheduler is scalability.
+Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
+One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structures of the scheduler can lead to worse-than-linear improvements.
+While the ready-queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
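+
+As a simple illustration of this kind of sharding, the following sketch, which is not the \CFA implementation and uses hypothetical names, counts ready \ats with one counter per \proc so that updates do not contend on a single cache line:
+\begin{lstlisting}
+// Illustrative sketch only: a sharded counter of ready threads.
+// Each processor increments its own cache-line-aligned slot; readers sum all slots.
+#define NPROCS 16
+struct __attribute__((aligned(64))) shard { long count; };
+static struct shard ready_count[NPROCS];
+
+static inline void ready_inc(int proc_id) {
+	__atomic_fetch_add(&ready_count[proc_id].count, 1, __ATOMIC_RELAXED);
+}
+static inline long ready_total(void) {   // approximate total, but contention free
+	long sum = 0;
+	for (int i = 0; i < NPROCS; i++)
+		sum += __atomic_load_n(&ready_count[i].count, __ATOMIC_RELAXED);
+	return sum;
+}
+\end{lstlisting}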
+
+\subsubsection{Migration Cost}
+Another important source of latency in scheduling is migration.
+An \at is said to have migrated if it is executed by two different \procs consecutively, which is the process discussed in Section~\ref{fairnessvlocal}.
+Migrations can have many different causes, but in certain programs it can be all but impossible to limit them.
+Chapter~\ref{microbench}, for example, has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating more often than not.
+Because of this, it is important to design the internal data structures of the scheduler to limit the latency penalty of migrations.
+
+
+\section{Inspirations}
 In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue : create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.
 
-Before going into the design of \CFA's scheduler proper, I want to discuss two sharding solutions which served as the inspiration scheduler in this thesis.
+Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions which served as inspiration for the scheduler in this thesis.
 
 \subsection{Work-Stealing}
 
-As I mentioned in \ref{existing:workstealing}, a popular pattern shard the ready-queue is work-stealing. As mentionned, in this pattern each \gls{proc} has its own ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work.
-The interesting aspect of workstealing happen in easier scheduling cases, \ie enough work for everyone but no more and no load balancing needed. In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
+As mentioned in Section~\ref{existing:workstealing}, a popular pattern to shard the ready-queue is work-stealing.
+In this pattern, each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local ready-queue.
+The interesting aspect of work-stealing happens in the easier scheduling cases, \ie enough work for everyone but no more and no load balancing needed.
+In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
 On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
-This means that the scheduler may never balance unfairness that does not result in a \gls{proc} running out of work.
+This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
 Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.
 
 
-Based on these observation, I conclude that \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
+Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
 
 \subsection{Relaxed-Fifo}
 An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgos any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick from.
 \Glspl{proc} choose ready-queus at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeing the oldest element.
+All subqueues are protected by TryLocks and \procs simply pick a different subqueue if they fail to acquire the TryLock.
 The result is a queue that has both decent scalability and sufficient fairness.
 The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
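+
+The following sketch shows one possible shape of this dequeue operation; it is a simplification with hypothetical names, not the implementation from the cited work:
+\begin{lstlisting}
+// Illustrative sketch: pick two subqueues at random, prefer the one whose head
+// is oldest, and retry on a failed try-lock or an empty pick.
+#include <pthread.h>
+#include <stdlib.h>
+
+struct node { struct node * next; unsigned long long ts; };   // ts set on enqueue
+struct subqueue { pthread_mutex_t lock; struct node * head; };
+extern struct subqueue queues[];
+extern unsigned nqueues;
+
+struct node * relaxed_fifo_pop(void) {
+	for (;;) {
+		struct subqueue * a = &queues[rand() % nqueues];
+		struct subqueue * b = &queues[rand() % nqueues];
+		// Unlocked peek at both heads; keep the candidate with the oldest timestamp.
+		struct node * ha = a->head, * hb = b->head;
+		struct subqueue * pick = !ha ? b : !hb ? a : (ha->ts <= hb->ts ? a : b);
+		if (pthread_mutex_trylock(&pick->lock) != 0) continue;   // contended: pick again
+		struct node * n = pick->head;                            // re-read under the lock
+		if (n) pick->head = n->next;
+		pthread_mutex_unlock(&pick->lock);
+		if (n) return n;                                         // empty pick: retry
+	}
+}
+\end{lstlisting}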
@@ -75,87 +95,135 @@
 
 While the fairness, of this scheme is good, it does suffer in terms of performance.
-It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and the randomness means locality can suffer significantly and finding non-empty queues can be difficult.
-
-\section{\CFA}
-The \CFA is effectively attempting to merge these two approaches, keeping the best of both.
-It is based on the
+It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.
+
+\section{Relaxed-FIFO++}
+Since it has inherent fairness qualities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears to be a good candidate to form the basis of a scheduler.
+The most obvious problem is workloads where the number of \ats is barely greater than the number of \procs.
+In these situations, the wide sharding means most of the sub-queues from which the relaxed queue is formed will be empty.
+The consequence is that when a dequeue operation attempts to pick a sub-queue at random, it is likely to pick an empty sub-queue and have to pick again.
+This problem can repeat an unbounded number of times.
+
+As this is the most obvious challenge, it is worth addressing first.
+The obvious solution is to supplement each subqueue with some sharded data structure that keeps track of which subqueues are empty.
+This data structure can take many forms, for example a simple bitmask or a binary tree that tracks which branches are empty.
+Following a binary tree on each pick has fairly good Big-O complexity, and many modern architectures have powerful bitmask manipulation instructions.
+However, precisely tracking which sub-queues are empty is actually fundamentally problematic.
+The reason is that each subqueue is already a form of sharding, and the sharding width has presumably already been chosen to avoid contention.
+If the tracking mechanism uses denser sharding than the sub-queues, it invariably creates a new source of contention.
+But if the tracking mechanism is not denser than the sub-queues, then it is generally not useful because reading this new data structure risks being as costly as simply picking a sub-queue at random.
+Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.
+
+The exception to this rule is using local tracking.
+If each \proc keeps track locally of which sub-queues are empty, then this can be done with a very dense data structure without introducing a new source of contention.
+The consequence of local tracking, however, is that the information is not complete.
+Each \proc is only aware of the last state it saw for each subqueue and does not have any information about freshness.
+Even on systems with low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
+This is due in part to the cost of maintaining this information and in part to its poor quality.
+
+However, using a very low-cost approach to local tracking may actually be beneficial.
+If the local tracking is no more costly than a random pick, then \emph{any} improvement to the success rate, however small, leads to a performance benefit.
+This leads to the following approach:
+
+\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
+The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
+The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
+While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
+Luckily, bidirectional \glsxtrshort{prng} algorithms do exist; for example, some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
+Such a \glsxtrshort{prng} can be used as follows:
+
+Each \proc maintains two \glsxtrshort{prng} states, which will be referred to as \texttt{F} and \texttt{B}.
+
+When a \proc attempts to dequeue a \at, it picks a subqueue by running \texttt{B} backwards.
+When a \proc attempts to enqueue a \at, it runs \texttt{F} forward to pick the subqueue to enqueue to.
+If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.
+
+The result is that each \proc will tend to dequeue \ats that it has itself enqueued.
+When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.
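+
+A minimal sketch of this scheme is shown below; the constants are Knuth's MMIX LCG parameters and the names are illustrative, so the generator used by the actual runtime may differ:
+\begin{lstlisting}
+// Illustrative sketch of the forward/backward PRNG trick.
+#include <stdint.h>
+
+#define A 6364136223846793005ULL   // multiplier (odd, so invertible modulo 2^64)
+#define C 1442695040888963407ULL   // increment
+
+static inline uint64_t lcg_fwd(uint64_t * s) { *s = A * (*s) + C; return *s; }
+
+static inline uint64_t lcg_bck(uint64_t * s) {
+	uint64_t out = *s;                       // last value produced going forward
+	uint64_t inv = A;                        // inverse of A modulo 2^64, by Newton iteration
+	for (int i = 0; i < 5; i++) inv *= 2 - A * inv;
+	*s = inv * (*s - C);                     // undo one forward step
+	return out;
+}
+
+// Per-processor state: F for enqueues, B (a saved copy of F) for dequeues.
+struct proc_rng { uint64_t F, B; };
+
+static inline unsigned pick_enqueue(struct proc_rng * p, unsigned nqueues) {
+	return lcg_fwd(&p->F) % nqueues;         // on a successful enqueue: p->B = p->F;
+}
+static inline unsigned pick_dequeue(struct proc_rng * p, unsigned nqueues) {
+	return lcg_bck(&p->B) % nqueues;         // revisits recently used subqueues, newest first
+}
+\end{lstlisting}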
+
+However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
+The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
+This becomes problematic both because the scheduler is likely to get cache misses on internal data-structures and because migrations become very frequent.
+Therefore, since the approach of modifying the relaxed-FIFO algorithm to behave more like work stealing does not seem to pan out, the alternative is to do it the other way around.
+
+\section{Work Stealing++}
+To add stronger fairness guarantees to work-stealing, a few changes are needed.
+First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
+Therefore the work-stealing algorithm must be prepended with some monitoring.
+Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to ensure remote queues are not being neglected.
+To make this possible, \procs must be able to determine which \at has been on the ready-queue the longest.
+This is the second aspect that must be added.
+The relaxed-FIFO approach uses timestamps for each \at and this is also what is done here.
+
 \begin{figure}
 	\centering
 	\input{base.pstex_t}
-	\caption[Base \CFA design]{Base \CFA design \smallskip\newline A list of sub-ready queues offers the sharding, two per \glspl{proc}. However, \glspl{proc} can access any of the sub-queues.}
+	\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-ready queues offers the sharding, two per \gls{proc}. Each \gls{proc} has local subqueues; however, \glspl{proc} can access any of the sub-queues. Each \at is timestamped when enqueued.}
 	\label{fig:base}
 \end{figure}
-
-
-
-% The common solution to the single point of contention is to shard the ready-queue so each \gls{hthrd} can access the ready-queue without contention, increasing performance.
-
-% \subsection{Sharding} \label{sec:sharding}
-% An interesting approach to sharding a queue is presented in \cit{Trevors paper}. This algorithm presents a queue with a relaxed \glsxtrshort{fifo} guarantee using an array of strictly \glsxtrshort{fifo} sublists as shown in Figure~\ref{fig:base}. Each \emph{cell} of the array has a timestamp for the last operation and a pointer to a linked-list with a lock. Each node in the list is marked with a timestamp indicating when it is added to the list. A push operation is done by picking a random cell, acquiring the list lock, and pushing to the list. If the cell is locked, the operation is simply retried on another random cell until a lock is acquired. A pop operation is done in a similar fashion except two random cells are picked. If both cells are unlocked with non-empty lists, the operation pops the node with the oldest timestamp. If one of the cells is unlocked and non-empty, the operation pops from that cell. If both cells are either locked or empty, the operation picks two new random cells and tries again.
-
-% \begin{figure}
-% 	\centering
-% 	\input{base.pstex_t}
-% 	\caption[Relaxed FIFO list]{Relaxed FIFO list \smallskip\newline List at the base of the scheduler: an array of strictly FIFO lists. The timestamp is in all nodes and cell arrays.}
-% 	\label{fig:base}
-% \end{figure}
-
-% \subsection{Finding threads}
-% Once threads have been distributed onto multiple queues, identifying empty queues becomes a problem. Indeed, if the number of \glspl{thrd} does not far exceed the number of queues, it is probable that several of the cell queues are empty. Figure~\ref{fig:empty} shows an example with 2 \glspl{thrd} running on 8 queues, where the chances of getting an empty queue is 75\% per pick, meaning two random picks yield a \gls{thrd} only half the time. This scenario leads to performance problems since picks that do not yield a \gls{thrd} are not useful and do not necessarily help make more informed guesses.
-
-% \begin{figure}
-% 	\centering
-% 	\input{empty.pstex_t}
-% 	\caption[``More empty'' Relaxed FIFO list]{``More empty'' Relaxed FIFO list \smallskip\newline Emptier state of the queue: the array contains many empty cells, that is strictly FIFO lists containing no elements.}
-% 	\label{fig:empty}
-% \end{figure}
-
-% There are several solutions to this problem, but they ultimately all have to encode if a cell has an empty list. My results show the density and locality of this encoding is generally the dominating factor in these scheme. Classic solutions to this problem use one of three techniques to encode the information:
-
-% \paragraph{Dense Information} Figure~\ref{fig:emptybit} shows a dense bitmask to identify the cell queues currently in use. This approach means processors can often find \glspl{thrd} in constant time, regardless of how many underlying queues are empty. Furthermore, modern x86 CPUs have extended bit manipulation instructions (BMI2) that allow searching the bitmask with very little overhead compared to the randomized selection approach for a filled ready queue, offering good performance even in cases with many empty inner queues. However, this technique has its limits: with a single word\footnote{Word refers here to however many bits can be written atomically.} bitmask, the total amount of ready-queue sharding is limited to the number of bits in the word. With a multi-word bitmask, this maximum limit can be increased arbitrarily, but the look-up time increases. Finally, a dense bitmap, either single or multi-word, causes additional contention problems that reduces performance because of cache misses after updates. This central update bottleneck also means the information in the bitmask is more often stale before a processor can use it to find an item, \ie mask read says there are available \glspl{thrd} but none on queue when the subsequent atomic check is done.
-
-% \begin{figure}
-% 	\centering
-% 	\vspace*{-5pt}
-% 	{\resizebox{0.75\textwidth}{!}{\input{emptybit.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with bitmask]{Underloaded queue with bitmask indicating array cells with items.}
-% 	\label{fig:emptybit}
-
-% 	\vspace*{10pt}
-% 	{\resizebox{0.75\textwidth}{!}{\input{emptytree.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with binary search-tree]{Underloaded queue with binary search-tree indicating array cells with items.}
-% 	\label{fig:emptytree}
-
-% 	\vspace*{10pt}
-% 	{\resizebox{0.95\textwidth}{!}{\input{emptytls.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with per processor bitmask]{Underloaded queue with per processor bitmask indicating array cells with items.}
-% 	\label{fig:emptytls}
-% \end{figure}
-
-% \paragraph{Sparse Information} Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention and has been shown to work in similar cases~\cite{ellen2007snzi}. However, this approach may lead to poorer performance due to the inherent pointer chasing cost while still allowing significant contention on the nodes of the tree if the tree is shallow.
-
-% \paragraph{Local Information} Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but each \gls{hthrd} keeps its own independent copy. While this approach can offer good scalability \emph{and} low latency, the liveliness and discovery of the information can become a problem. This case is made worst in systems with few processors where even blind random picks can find \glspl{thrd} in a few tries.
-
-% I built a prototype of these approaches and none of these techniques offer satisfying performance when few threads are present. All of these approach hit the same 2 problems. First, randomly picking sub-queues is very fast. That speed means any improvement to the hit rate can easily be countered by a slow-down in look-up speed, whether or not there are empty lists. Second, the array is already sharded to avoid contention bottlenecks, so any denser data structure tends to become a bottleneck. In all cases, these factors meant the best cases scenario, \ie many threads, would get worst throughput, and the worst-case scenario, few threads, would get a better hit rate, but an equivalent poor throughput. As a result I tried an entirely different approach.
-
-% \subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
-% In the worst-case scenario there are only few \glspl{thrd} ready to run, or more precisely given $P$ \glspl{proc}\footnote{For simplicity, this assumes there is a one-to-one match between \glspl{proc} and \glspl{hthrd}.}, $T$ \glspl{thrd} and $\epsilon$ a very small number, than the worst case scenario can be represented by $T = P + \epsilon$, with $\epsilon \ll P$. It is important to note in this case that fairness is effectively irrelevant. Indeed, this case is close to \emph{actually matching} the model of the ``Ideal multi-tasking CPU'' on page \pageref{q:LinuxCFS}. In this context, it is possible to use a purely internal-locality based approach and still meet the fairness requirements. This approach simply has each \gls{proc} running a single \gls{thrd} repeatedly. Or from the shared ready-queue viewpoint, each \gls{proc} pushes to a given sub-queue and then pops from the \emph{same} subqueue. The challenge is for the the scheduler to achieve good performance in both the $T = P + \epsilon$ case and the $T \gg P$ case, without affecting the fairness guarantees in the later.
-
-% To handle this case, I use a \glsxtrshort{prng}\todo{Fix missing long form} in a novel way. There exist \glsxtrshort{prng}s that are fast, compact and can be run forward \emph{and} backwards.  Linear congruential generators~\cite{wiki:lcg} are an example of \glsxtrshort{prng}s of such \glsxtrshort{prng}s. The novel approach is to use the ability to run backwards to ``replay'' the \glsxtrshort{prng}. The scheduler uses an exclusive \glsxtrshort{prng} instance per \gls{proc}, the random-number seed effectively starts an encoding that produces a list of all accessed subqueues, from latest to oldest. Replaying the \glsxtrshort{prng} to identify cells accessed recently and which probably have data still cached.
-
-% The algorithm works as follows:
-% \begin{itemize}
-% 	\item Each \gls{proc} has two \glsxtrshort{prng} instances, $F$ and $B$.
-% 	\item Push and Pop operations occur as discussed in Section~\ref{sec:sharding} with the following exceptions:
-% 	\begin{itemize}
-% 		\item Push operations use $F$ going forward on each try and on success $F$ is copied into $B$.
-% 		\item Pop operations use $B$ going backwards on each try.
-% 	\end{itemize}
-% \end{itemize}
-
-% The main benefit of this technique is that it basically respects the desired properties of Figure~\ref{fig:fair}. When looking for work, a \gls{proc} first looks at the last cell they pushed to, if any, and then move backwards through its accessed cells. As the \gls{proc} continues looking for work, $F$ moves backwards and $B$ stays in place. As a result, the relation between the two becomes weaker, which means that the probablisitic fairness of the algorithm reverts to normal. Chapter~\ref{proofs} discusses more formally the fairness guarantees of this algorithm.
-
-% \section{Details}
+The algorithm is structured as shown in Figure~\ref{fig:base}.
+This is very similar to classic work-stealing except the local queues are placed in an array so \procs can access each other's queues in constant time.
+The sharding width can be adjusted based on need.
+When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeuing from the remote queue if needed.
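+
+A sketch of this dequeue decision, in its naive raw-timestamp form and with illustrative names, looks as follows:
+\begin{lstlisting}
+// Illustrative sketch: helping check on raw timestamps (naive version).
+#include <stdint.h>
+#include <stdlib.h>
+
+struct subqueue { uint64_t head_ts; /* timestamp of the oldest thread, ... */ };
+extern struct subqueue queues[];
+extern unsigned nqueues;
+
+// Return the index to dequeue from: the local subqueue, unless a randomly
+// chosen remote subqueue holds an older (smaller timestamp) thread.
+unsigned pick_queue(unsigned local) {
+	unsigned remote = rand() % nqueues;
+	if (queues[remote].head_ts < queues[local].head_ts)
+		return remote;                        // help the neglected remote queue
+	return local;
+}
+\end{lstlisting}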
+
+Implemented as naively stated above, this approach has some obvious performance problems.
+First, it is necessary to have some damping effect on helping.
+Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
+The effect of these bursts would be to cause more migrations than needed and make this work-stealing approach slow down to match the relaxed-FIFO approach.
+
+\begin{figure}
+	\centering
+	\input{base_avg.pstex_t}
+	\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each subqueue.}
+	\label{fig:base-ma}
+\end{figure}
+
+A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead of the raw timestamps, as shown in Figure~\ref{fig:base-ma}.
+Note that this is slightly more complex than it sounds because, since the \at at the head of a subqueue is still waiting, its wait time has not ended.
+Therefore, the exponential moving average is actually an exponential moving average of how long each already-dequeued \at waited.
+To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
+This new wait time is averaged with the stored average.
+To further limit unnecessary migration, a bias can be added towards the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
+None of the experiments run with this scheduler indicate that the choice of weight for the moving average or the choice of bias is particularly important.
+Weights and biases of similar \emph{magnitudes} have similar effects.
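+
+A sketch of the helping decision with the moving average and bias follows; the weight and bias values, as well as all names, are illustrative rather than the runtime's actual choices:
+\begin{lstlisting}
+// Illustrative sketch: exponential moving average of wait times plus a bias
+// toward the local queue. The weight W and bias X are placeholders.
+#include <stdint.h>
+
+#define W 0.125   // weight of the newest sample in the moving average
+#define X 2.0     // help a remote queue only if it looks X times older
+
+struct subqueue { uint64_t head_ts; double avg_wait; /* ... */ };
+
+// Called when a thread is dequeued: fold its measured wait into the average.
+static inline void update_avg(struct subqueue * q, uint64_t now) {
+	q->avg_wait = W * (double)(now - q->head_ts) + (1.0 - W) * q->avg_wait;
+}
+
+// Best-case wait of the thread currently at the head, folded into the stored average.
+static inline double projected_avg(const struct subqueue * q, uint64_t now) {
+	return W * (double)(now - q->head_ts) + (1.0 - W) * q->avg_wait;
+}
+
+static inline int should_help(const struct subqueue * local,
+                              const struct subqueue * remote, uint64_t now) {
+	return projected_avg(remote, now) > X * projected_avg(local, now);
+}
+\end{lstlisting}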
+
+With these additions to work-stealing, scheduling can be made as fair as the relaxed-FIFO approach, while avoiding the majority of unnecessary migrations.
+Unfortunately, the performance of this approach does suffer in the cases with no risk of starvation.
+The problem is that the constant polling of remote subqueues generally entails a cache miss.
+To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued and dequeued from it, the higher the chance that polling incurs a cache miss.
+Conversely, the active subqueues do not benefit much from helping since starvation is already a non-issue.
+This puts the algorithm in an awkward situation where it is paying a cost, but the cost itself suggests the operation was unnecessary.
+The good news is that this problem can be mitigated.
+
+\subsection{Redundant Timestamps}
+The problem with polling remote queues stems from a tension in the consistency requirements on the subqueues.
+For the subqueues, correctness is critical: there must be consensus among \procs on which subqueues hold which \ats.
+Since the timestamps are used for fairness, it is also important to have consensus on which \at is the oldest.
+However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
+Since the only need is that a subqueue will eventually be polled, some data staleness is acceptable.
+This leads to a tension where stale timestamps are only problematic in some cases.
+Furthermore, stale timestamps can be somewhat desirable since lower freshness requirements mean less pressure on the cache-coherence protocol.
+
+
+\begin{figure}
+	\centering
+	% \input{base_ts2.pstex_t}
+	\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written with relaxed atomics, without fencing, leading to fewer cache invalidations.}
+	\label{fig:base-ts2}
+\end{figure}
+A solution to this is to create a second array containing a copy of the timestamps and averages.
+This copy is updated \emph{after} the subqueue's critical sections using relaxed atomics.
+\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
+The result is that since there is no fencing, the writes can be buffered and cause fewer cache invalidations.
+
+The correctness argument here is somewhat subtle.
+The data used for deciding whether or not to poll a queue can be stale as long as it does not cause starvation.
+Therefore, it is acceptable if stale data makes queues appear older than they really are, but not fresher.
+For the timestamps, this means that missed writes to the timestamp are acceptable since they make the head \at look older.
+For the moving average, as long as the operations are RW-safe, reading the average is guaranteed to yield a value between the oldest and newest values written.
+Therefore, these unprotected reads of the timestamp and average satisfy the limited correctness that is required.
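+
+A sketch of this redundant copy using C11 relaxed atomics follows; the layout and names are illustrative rather than the actual runtime implementation:
+\begin{lstlisting}
+// Illustrative sketch: a second array of timestamp/average copies, written with
+// relaxed atomics after the subqueue's critical section and read when polling.
+#include <stdatomic.h>
+#include <stdint.h>
+
+struct subqueue { uint64_t head_ts; double avg_wait; /* protected by the subqueue lock */ };
+struct ts_copy  { _Atomic uint64_t head_ts; _Atomic double avg_wait; };   // no lock, no fences
+
+extern struct subqueue queues[];
+extern struct ts_copy  copies[];
+
+// Publish possibly stale copies; buffered writes cause fewer cache invalidations.
+static inline void publish(unsigned i) {
+	atomic_store_explicit(&copies[i].head_ts,  queues[i].head_ts,  memory_order_relaxed);
+	atomic_store_explicit(&copies[i].avg_wait, queues[i].avg_wait, memory_order_relaxed);
+}
+
+// Polling decisions read only the copies, never the subqueue itself.
+static inline uint64_t peek_ts(unsigned i) {
+	return atomic_load_explicit(&copies[i].head_ts, memory_order_relaxed);
+}
+\end{lstlisting}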
+
+\subsection{Per CPU Sharding}
+
+\subsection{Topological Work Stealing}
+
+
Index: doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -3,4 +3,22 @@
 The first step of evaluation is always to test-out small controlled cases, to ensure that the basics are working properly.
 This sections presents five different experimental setup, evaluating some of the basic features of \CFA's scheduler.
+
+\section{Benchmark Environment}
+All of these benchmarks are run on two distinct hardware environments, an AMD and an Intel machine.
+
+\paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
+The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+These EPYCs have 64 cores per CPU and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
+The CPUs each have 4 MB, 64 MB and 512 MB of L1, L2 and L3 caches respectively.
+Each L1 and L2 instance is shared only by \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
+
+\paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
+The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+These Xeon Platinums have 24 cores per CPU and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
+The CPUs each have 3 MB, 96 MB and 132 MB of L1, L2 and L3 caches respectively.
+Each L1 and L2 instance is shared only by \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
+
+This limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine. Indeed, while on both architectures L2 cache misses that are served by an L3 cache on a different CPU incur significant latency, on the AMD machine cache misses served by a different L3 instance on the same CPU also incur high latency.
+
 
 \section{Cycling latency}
@@ -31,12 +49,10 @@
 \end{figure}
 
-\todo{check term ``idle sleep handling''}
 To avoid this benchmark from being dominated by the idle sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
 Beyond this point, adding more rings serves to mitigate even more the idle sleep handling.
-This is to avoid the case where one of the worker \glspl{at} runs out of work because of the variation on the number of ready \glspl{at} mentionned above.
+This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.
 
 The actual benchmark is more complicated to handle termination, but that simply requires using a binary semphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition.
 
-\todo{code, setup, results}
 \begin{lstlisting}
 	Thread.main() {
@@ -52,4 +68,10 @@
 \end{lstlisting}
 
+\begin{figure}
+	\centering
+	\input{result.cycle.jax.ops.pstex_t}
+	\vspace*{-10pt}
+	\caption[Cycle benchmark: ops per second]{Cycle benchmark: operations per second.}
+	\label{fig:cycle:ns:jax}
+\end{figure}
 
 \section{Yield}
Index: doc/theses/thierry_delisle_PhD/thesis/text/io.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -330,5 +330,18 @@
 \paragraph{Pending Allocations} can be more complicated to handle.
 If the arbiter has available instances, the arbiter can attempt to directly hand over the instance and satisfy the request.
-Otherwise
+Otherwise it must hold onto the list of threads until SQEs are made available again.
+This handling becomes that much more complex if pending allocations require more than one SQE, since the arbiter must decide between satisfying requests in FIFO order or satisfying requests for fewer SQEs first.
+
+While this arbiter has the potential to solve many of the problems mentioned above, it also introduces a significant amount of complexity.
+Tracking which processors are borrowing which instances and which instances have SQEs available ends up adding a significant synchronization prelude to any I/O operation.
+Any submission must start with a handshake that pins the currently borrowed instance, if available.
+An attempt to allocate is then made, but the arbiter can concurrently be attempting to allocate from the same instance on behalf of a different \gls{hthrd}.
+Once the allocation is completed, the submission must still check that the instance is still borrowed before attempting to flush.
+These extra synchronization steps end up having a cost similar to the multiple shared instances approach.
+Furthermore, if the number of instances does not match the number of processors actively submitting I/O, the system can fall into a state where instances are constantly being revoked, cycling among the processors and leading to significant cache deterioration.
+For these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.
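+
+To make the cost of this prelude concrete, the submission path could be sketched as follows; every name here is hypothetical and this is not the actual \io subsystem interface:
+\begin{lstlisting}
+// Hypothetical sketch of the synchronization prelude; names are illustrative only.
+int submit(struct processor * proc, struct io_request * req) {
+	struct io_instance * inst = pin_borrowed_instance(proc);  // handshake: pin the instance
+	if (!inst) return 0;                                       // nothing currently borrowed
+	struct sqe * sqe = try_allocate(inst, req->count);         // may race with the arbiter
+	if (!sqe) { unpin(proc, inst); return 0; }
+	if (!still_borrowed(proc, inst)) {                         // instance may have been revoked
+		unpin(proc, inst); return 0;
+	}
+	fill_and_flush(inst, sqe, req);
+	unpin(proc, inst);
+	return 1;
+}
+\end{lstlisting}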
+
+\subsubsection{Private Instances V2}
+
 
 
Index: doc/theses/thierry_delisle_PhD/thesis/thesis.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision ee4b77bee13f8761af7a9f179b46ad1073f6c7bd)
+++ doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision 2a77817e25131a0403de85f1bf87cc0ef91d72a3)
@@ -202,4 +202,8 @@
 
 \newcommand\io{\glsxtrshort{io}\xspace}%
+\newcommand\at{\gls{at}\xspace}%
+\newcommand\ats{\glspl{at}\xspace}%
+\newcommand\proc{\gls{proc}\xspace}%
+\newcommand\procs{\glspl{proc}\xspace}%
 
 %======================================================================
