Index: doc/theses/thierry_delisle_PhD/thesis/fig/io_uring.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/io_uring.fig	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/io_uring.fig	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -8,88 +8,88 @@
 -2
 1200 2
-6 180 3240 2025 3510
+6 675 3105 2520 3375
 2 1 0 1 0 7 40 -1 -1 0.000 0 0 -1 0 0 2
-	 720 3240 720 3510
+	 1215 3105 1215 3375
 2 1 0 1 0 7 40 -1 -1 0.000 0 0 -1 0 0 2
-	 450 3240 450 3510
+	 945 3105 945 3375
 2 2 0 1 0 7 45 -1 20 0.000 0 0 -1 0 0 5
-	 180 3240 1260 3240 1260 3510 180 3510 180 3240
+	 675 3105 1755 3105 1755 3375 675 3375 675 3105
 2 1 0 1 0 7 40 -1 -1 0.000 0 0 -1 0 0 2
-	 990 3240 990 3510
-4 0 0 40 -1 0 12 0.0000 2 165 990 1035 3420 {\\small S3}\001
-4 0 0 40 -1 0 12 0.0000 2 165 990 765 3420 {\\small S2}\001
-4 0 0 40 -1 0 12 0.0000 2 165 990 225 3420 {\\small S0}\001
-4 0 0 40 -1 0 12 0.0000 2 165 990 495 3420 {\\small S1}\001
+	 1485 3105 1485 3375
+4 0 0 40 -1 0 12 0.0000 2 165 930 1530 3285 {\\small S3}\001
+4 0 0 40 -1 0 12 0.0000 2 165 930 1260 3285 {\\small S2}\001
+4 0 0 40 -1 0 12 0.0000 2 165 930 720 3285 {\\small S0}\001
+4 0 0 40 -1 0 12 0.0000 2 165 930 990 3285 {\\small S1}\001
 -6
-6 1530 2610 3240 4140
-5 1 0 1 0 7 35 -1 -1 0.000 0 1 1 0 2455.714 3375.000 1890 2700 1575 3375 1890 4050
+6 2025 2475 3735 4005
+5 1 0 1 0 7 35 -1 -1 0.000 0 1 1 0 2950.714 3240.000 2385 2565 2070 3240 2385 3915
 	1 1 1.00 60.00 120.00
-1 3 0 1 0 7 40 -1 20 0.000 1 0.0000 2475 3375 315 315 2475 3375 2790 3375
-1 3 0 1 0 7 50 -1 20 0.000 1 0.0000 2475 3375 765 765 2475 3375 3240 3375
+1 3 0 1 0 7 40 -1 20 0.000 1 0.0000 2970 3240 315 315 2970 3240 3285 3240
+1 3 0 1 0 7 50 -1 20 0.000 1 0.0000 2970 3240 765 765 2970 3240 3735 3240
 2 1 0 1 0 7 45 -1 -1 0.000 0 0 -1 0 0 2
-	 2475 3375 2133 2690
+	 2970 3240 2628 2555
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 2475 3375 1769 3093
+	 2970 3240 2264 2958
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 2475 3375 1769 3661
+	 2970 3240 2264 3526
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 2475 3375 2133 4057
+	 2970 3240 2628 3922
 2 1 1 1 0 7 35 -1 0 4.000 0 0 -1 0 0 2
-	 2205 3375 2745 3375
+	 2700 3240 3240 3240
 -6
-6 585 2250 1485 2610
-4 2 0 50 -1 0 12 0.0000 2 135 900 1485 2385 Submission\001
-4 2 0 50 -1 0 12 0.0000 2 165 360 1485 2580 Ring\001
+6 1080 2115 1980 2475
+4 2 0 50 -1 0 12 0.0000 2 135 945 1980 2250 Submission\001
+4 2 0 50 -1 0 12 0.0000 2 180 405 1980 2445 Ring\001
 -6
-6 3600 2610 5265 4140
-5 1 0 1 0 7 35 -1 -1 0.000 0 1 1 0 4384.000 3375.000 4950 4050 5265 3375 4950 2700
+6 4095 2475 5760 4005
+5 1 0 1 0 7 35 -1 -1 0.000 0 1 1 0 4879.000 3240.000 5445 3915 5760 3240 5445 2565
 	1 1 1.00 60.00 120.00
-1 3 0 1 0 7 40 -1 20 0.000 1 3.1416 4365 3375 315 315 4365 3375 4050 3375
-1 3 0 1 0 7 50 -1 20 0.000 1 3.1416 4365 3375 765 765 4365 3375 3600 3375
+1 3 0 1 0 7 40 -1 20 0.000 1 3.1416 4860 3240 315 315 4860 3240 4545 3240
+1 3 0 1 0 7 50 -1 20 0.000 1 3.1416 4860 3240 765 765 4860 3240 4095 3240
 2 1 0 1 0 7 45 -1 -1 0.000 0 0 -1 0 0 2
-	 4365 3375 4707 4060
+	 4860 3240 5202 3925
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 4365 3375 5071 3657
+	 4860 3240 5566 3522
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 4365 3375 5071 3089
+	 4860 3240 5566 2954
 2 1 0 1 0 7 45 -1 -1 4.000 0 0 -1 0 0 2
-	 4365 3375 4707 2693
+	 4860 3240 5202 2558
 2 1 1 1 0 7 35 -1 0 4.000 0 0 -1 0 0 2
-	 4635 3375 4095 3375
+	 5130 3240 4590 3240
 -6
-6 5355 2250 6255 2610
-4 0 0 50 -1 0 12 0.0000 2 165 360 5355 2580 Ring\001
-4 0 0 50 -1 0 12 0.0000 2 165 900 5355 2385 Completion\001
+6 5850 2115 6750 2475
+4 0 0 50 -1 0 12 0.0000 2 180 405 5850 2445 Ring\001
+4 0 0 50 -1 0 12 0.0000 2 180 975 5850 2250 Completion\001
 -6
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 60.00 120.00
-	 2925 2025 2550 2486
+	 3420 1890 3045 2351
 2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
 	1 1 1.00 60.00 120.00
-	 4275 2475 3825 2025
+	 4770 2340 4320 1890
 2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
 	1 1 1.00 60.00 120.00
-	 2751 4268 3066 4538
+	 3060 4095 3600 4410
 2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
 	1 1 1.00 60.00 120.00
-	 3780 4545 4275 4230
+	 4275 4410 4770 4095
 2 1 1 1 0 7 55 -1 -1 4.000 0 0 -1 0 0 2
-	 0 3375 6255 3375
-4 0 0 35 -1 0 12 0.0000 2 165 1170 1845 3060 {\\small \\&S2}\001
-4 0 0 35 -1 0 12 0.0000 2 165 1170 1755 3420 {\\small \\&S3}\001
-4 0 0 35 -1 0 12 0.0000 2 165 1170 1890 3735 {\\small \\&S0}\001
-4 0 0 50 -1 0 12 0.0000 6 135 360 2790 2565 Push\001
-4 0 0 50 -1 0 12 0.0000 6 165 270 2880 4230 Pop\001
-4 0 0 50 -1 0 12 0.0000 6 135 360 2025 4275 Head\001
-4 0 0 50 -1 0 12 0.0000 6 135 360 2025 2565 Tail\001
-4 0 0 35 -1 0 12 0.0000 2 165 990 4635 3060 {\\small C0}\001
-4 0 0 35 -1 0 12 0.0000 2 165 990 4815 3420 {\\small C1}\001
-4 0 0 35 -1 0 12 0.0000 2 165 990 4635 3780 {\\small C2}\001
-4 0 0 50 -1 0 12 0.0000 4 135 360 4725 4275 Tail\001
-4 0 0 50 -1 0 12 0.0000 6 135 360 4590 2565 Head\001
-4 0 0 50 -1 0 12 0.0000 2 135 990 5535 3285 Kernel Line\001
-4 1 0 50 -1 0 12 0.0000 2 180 1350 3375 4815 {\\Large Kernel}\001
-4 1 0 50 -1 0 12 0.0000 2 180 1800 3375 1845 {\\Large Application}\001
-4 0 0 50 -1 0 12 0.0000 6 165 270 3690 2565 Pop\001
-4 0 0 50 -1 0 12 0.0000 4 135 360 3465 4230 Push\001
-4 0 0 50 -1 0 12 0.0000 2 135 90 0 3285 S\001
+	 495 3240 6750 3240
+4 0 0 35 -1 0 12 0.0000 2 165 1140 2340 2925 {\\small \\&S2}\001
+4 0 0 50 -1 0 12 0.0000 6 135 390 3285 2430 Push\001
+4 0 0 50 -1 0 12 0.0000 6 135 330 2520 2430 Tail\001
+4 0 0 35 -1 0 12 0.0000 2 165 960 5130 2925 {\\small C0}\001
+4 0 0 35 -1 0 12 0.0000 2 165 960 5310 3285 {\\small C1}\001
+4 0 0 35 -1 0 12 0.0000 2 165 960 5130 3645 {\\small C2}\001
+4 0 0 50 -1 0 12 0.0000 4 135 330 5220 4140 Tail\001
+4 0 0 50 -1 0 12 0.0000 6 135 420 5085 2430 Head\001
+4 0 0 50 -1 0 12 0.0000 2 135 960 6030 3150 Kernel Line\001
+4 0 0 50 -1 0 12 0.0000 2 135 105 495 3150 S\001
+4 0 0 35 -1 0 12 0.0000 2 165 1140 2385 3645 {\\small \\&S0}\001
+4 0 0 50 -1 0 12 0.0000 6 135 420 2340 4140 Head\001
+4 0 0 35 -1 0 12 0.0000 2 165 1140 2250 3285 {\\small \\&S3}\001
+4 2 0 50 -1 0 12 0.0000 4 135 390 4500 4140 Push\001
+4 1 0 50 -1 0 12 0.0000 2 180 1290 3915 4680 {\\Large Kernel}\001
+4 0 0 50 -1 0 12 0.0000 6 180 315 3285 4140 Pop\001
+4 1 0 50 -1 0 12 0.0000 2 180 1725 3915 1755 {\\Large Application}\001
+4 2 0 50 -1 0 12 0.0000 6 180 315 4545 2430 Pop\001
Index: doc/theses/thierry_delisle_PhD/thesis/local.bib
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/local.bib	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/local.bib	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -2,5 +2,5 @@
 % Cforall
 @misc{cfa:frontpage,
-  url = {https://cforall.uwaterloo.ca/}
+  howpublished = {\href{https://cforall.uwaterloo.ca}{https://\-cforall.uwaterloo.ca}}
 }
 @article{cfa:typesystem,
@@ -481,5 +481,5 @@
 @misc{MAN:linux/cfs,
   title = {{CFS} Scheduler - The Linux Kernel documentation},
-  url = {https://www.kernel.org/doc/html/latest/scheduler/sched-design-CFS.html}
+  howpublished = {\href{https://www.kernel.org/doc/html/latest/scheduler/sched-design-CFS.html}{https://\-www.kernel.org/\-doc/\-html/\-latest/\-scheduler/\-sched-design-CFS.html}}
 }
 
@@ -489,5 +489,5 @@
   year = {2019},
   month = {February},
-  url = {https://opensource.com/article/19/2/fair-scheduling-linux}
+  howpublished = {\href{https://opensource.com/article/19/2/fair-scheduling-linux}{https://\-opensource.com/\-article/\-19/\-2/\-fair-scheduling-linux}}
 }
 
@@ -523,5 +523,5 @@
   title = {Mach Scheduling and Thread Interfaces - Kernel Programming Guide},
   organization = {Apple Inc.},
-  url = {https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html}
+  howpublished = {\href{https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html}{https://\-developer.apple.com/\-library/\-archive/\-documentation/\-Darwin/\-Conceptual/\-KernelProgramming/\-scheduler/\-scheduler.html}}
 }
 
@@ -536,5 +536,5 @@
   month = {June},
   series = {Developer Reference},
-  url = {https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}
+  howpublished = {\href{https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}{https://\-www.microsoftpressstore.com/\-articles/\-article.aspx?p=2233328\&seqNum=7\#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}}
 }
 
@@ -542,5 +542,5 @@
   title = {GitHub - The Go Programming Language},
   author = {The Go Programming Language},
-  url = {https://github.com/golang/go},
+  howpublished = {\href{https://github.com/golang/go}{https://\-github.com/\-golang/\-go}},
   version = {Change-Id: If07f40b1d73b8f276ee28ffb8b7214175e56c24d}
 }
@@ -551,5 +551,5 @@
   year = {2019},
   booktitle = {Hydra},
-  url = {https://www.youtube.com/watch?v=-K11rY57K7k&ab_channel=Hydra}
+  howpublished = {\href{https://www.youtube.com/watch?v=-K11rY57K7k&ab_channel=Hydra}{https://\-www.youtube.com/\-watch?v=-K11rY57K7k\&ab\_channel=Hydra}}
 }
 
@@ -559,5 +559,5 @@
   year = {2008},
   booktitle = {Erlang User Conference},
-  url = {http://www.erlang.se/euc/08/euc_smp.pdf}
+  howpublished = {\href{http://www.erlang.se/euc/08/euc_smp.pdf}{http://\-www.erlang.se/\-euc/\-08/\-euc\_smp.pdf}}
 }
 
@@ -567,5 +567,5 @@
   title = {Scheduling Algorithm - Intel{\textregistered} Threading Building Blocks Developer Reference},
   organization = {Intel{\textregistered}},
-  url = {https://www.threadingbuildingblocks.org/docs/help/reference/task_scheduler/scheduling_algorithm.html}
+  howpublished = {\href{https://www.threadingbuildingblocks.org/docs/help/reference/task_scheduler/scheduling_algorithm.html}{https://\-www.threadingbuildingblocks.org/\-docs/\-help/\-reference/\-task\_scheduler/\-scheduling\_algorithm.html}}
 }
 
@@ -573,12 +573,12 @@
   title = {Quasar Core - Quasar User Manual},
   organization = {Parallel Universe},
-  url = {https://docs.paralleluniverse.co/quasar/}
+  howpublished = {\href{https://docs.paralleluniverse.co/quasar}{https://\-docs.paralleluniverse.co/\-quasar}}
 }
 @misc{MAN:project-loom,
-  url = {https://www.baeldung.com/openjdk-project-loom}
+  howpublished = {\href{https://www.baeldung.com/openjdk-project-loom}{https://\-www.baeldung.com/\-openjdk-project-loom}}
 }
 
 @misc{MAN:java/fork-join,
-  url = {https://www.baeldung.com/java-fork-join}
+  howpublished = {\href{https://www.baeldung.com/java-fork-join}{https://\-www.baeldung.com/\-java-fork-join}}
 }
 
@@ -633,5 +633,5 @@
   month   = "March",
   version = {0,4},
-  howpublished = {\url{https://kernel.dk/io_uring.pdf}}
+  howpublished = {\href{https://kernel.dk/io_uring.pdf}{https://\-kernel.dk/\-io\_uring.pdf}}
 }
 
@@ -642,5 +642,5 @@
   title = "Control theory --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2020",
-  url = "https://en.wikipedia.org/wiki/Task_parallelism",
+  howpublished = {\href{https://en.wikipedia.org/wiki/Control_theory}{https://\-en.wikipedia.org/\-wiki/\-Control\_theory}},
   note = "[Online; accessed 22-October-2020]"
 }
@@ -650,5 +650,5 @@
   title = "Task parallelism --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2020",
-  url = "https://en.wikipedia.org/wiki/Control_theory",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Control_theory}{https://\-en.wikipedia.org/\-wiki/\-Control\_theory}",
   note = "[Online; accessed 22-October-2020]"
 }
@@ -658,5 +658,5 @@
   title = "Implicit parallelism --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2020",
-  url = "https://en.wikipedia.org/wiki/Implicit_parallelism",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Implicit_parallelism}{https://\-en.wikipedia.org/\-wiki/\-Implicit\_parallelism}",
   note = "[Online; accessed 23-October-2020]"
 }
@@ -666,5 +666,5 @@
   title = "Explicit parallelism --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2017",
-  url = "https://en.wikipedia.org/wiki/Explicit_parallelism",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Explicit_parallelism}{https://\-en.wikipedia.org/\-wiki/\-Explicit\_parallelism}",
   note = "[Online; accessed 23-October-2020]"
 }
@@ -674,5 +674,5 @@
   title = "Linear congruential generator --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2020",
-  url = "https://en.wikipedia.org/wiki/Linear_congruential_generator",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Linear_congruential_generator}{https://en.wikipedia.org/wiki/Linear\_congruential\_generator}",
   note = "[Online; accessed 2-January-2021]"
 }
@@ -682,5 +682,5 @@
   title = "Futures and promises --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2020",
-  url = "https://en.wikipedia.org/wiki/Futures_and_promises",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Futures_and_promises}{https://\-en.wikipedia.org/\-wiki/Futures\_and\_promises}",
   note = "[Online; accessed 9-February-2021]"
 }
@@ -690,5 +690,5 @@
   title = "Read-copy-update --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2022",
-  url = "https://en.wikipedia.org/wiki/Linear_congruential_generator",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Linear_congruential_generator}{https://\-en.wikipedia.org/\-wiki/\-Linear\_congruential\_generator}",
   note = "[Online; accessed 12-April-2022]"
 }
@@ -698,5 +698,5 @@
   title = "Readers-writer lock --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2021",
-  url = "https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Readers-writer_lock}{https://\-en.wikipedia.org/\-wiki/\-Readers-writer\_lock}",
   note = "[Online; accessed 12-April-2022]"
 }
@@ -705,5 +705,5 @@
   title = "Bin packing problem --- {W}ikipedia{,} The Free Encyclopedia",
   year = "2022",
-  url = "https://en.wikipedia.org/wiki/Bin_packing_problem",
+  howpublished = "\href{https://en.wikipedia.org/wiki/Bin_packing_problem}{https://\-en.wikipedia.org/\-wiki/\-Bin\_packing\_problem}",
   note = "[Online; accessed 29-June-2022]"
 }
@@ -712,13 +712,13 @@
 % [05/04, 12:36] Trevor Brown
 %     i don't know where rmr complexity was first introduced, but there are many many many papers that use the term and define it
-% ​[05/04, 12:37] Trevor Brown
+% [05/04, 12:37] Trevor Brown
 %     here's one paper that uses the term a lot and links to many others that use it... might trace it to something useful there https://drops.dagstuhl.de/opus/volltexte/2021/14832/pdf/LIPIcs-DISC-2021-30.pdf
-% ​[05/04, 12:37] Trevor Brown
+% [05/04, 12:37] Trevor Brown
 %     another option might be to cite a textbook
-% ​[05/04, 12:42] Trevor Brown
+% [05/04, 12:42] Trevor Brown
 %     but i checked two textbooks in the area i'm aware of and i don't see a definition of rmr complexity in either
-% ​[05/04, 12:42] Trevor Brown
+% [05/04, 12:42] Trevor Brown
 %     this one has a nice statement about the prevelance of rmr complexity, as well as some rough definition
-% ​[05/04, 12:42] Trevor Brown
+% [05/04, 12:42] Trevor Brown
 %     https://dl.acm.org/doi/pdf/10.1145/3465084.3467938
 
@@ -728,2 +728,28 @@
 %
 % https://doi.org/10.1137/1.9781611973099.100
+
+
+@misc{AIORant,
+  author = "Linus Torvalds",
+  title = "Re: [PATCH 09/13] aio: add support for async openat()",
+  year = "2016",
+  month = jan,
+  howpublished = "\href{https://lwn.net/Articles/671657}{https://\-lwn.net/\-Articles/671657}",
+  note = "[Online; accessed 6-June-2022]"
+}
+
+@misc{apache,
+  key = {Apache Software Foundation},
+  title = {{T}he {A}pache Web Server},
+  howpublished = {\href{http://httpd.apache.org}{http://\-httpd.apache.org}},
+  note = "[Online; accessed 6-June-2022]"
+}
+
+@misc{SeriallyReusable,
+  author = {IBM},
+  title = {Serially reusable programs},
+  month = mar,
+  howpublished = {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://\-www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-\-serially-\-reusable-programs}},
+  year = {2021},
+}
+
Index: doc/theses/thierry_delisle_PhD/thesis/text/core.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -322,5 +322,5 @@
 Building a scheduler that is cache aware poses two main challenges: discovering the cache topology and matching \procs to this cache structure.
 Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem.
-This work uses the cache topology information from Linux's \texttt{/sys/devices/system/cpu} directory.
+This work uses the cache topology information from Linux's @/sys/devices/system/cpu@ directory.
 This leaves the challenge of matching \procs to cache structure, or more precisely identifying which subqueues of the ready queue are local to which subcomponents of the cache structure.
 Once a matching is generated, the helping algorithm is changed to add bias so that \procs more often help subqueues local to the same cache substructure.\footnote{
@@ -330,5 +330,5 @@
 Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each hardware hyperthread/core up front.
 Then \procs dequeue and enqueue by first asking which CPU id they are executing on, in order to identify which subqueues are the local ones.
-\Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.
+\Glspl{proc} can get the CPU id from @sched_getcpu@ or @librseq@.
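+
+As an illustration, a minimal sketch of the discovery step follows; the @sched_getcpu@ call is standard, while the @sysfs@ path and the assumption that @index3@ denotes the L3 cache are platform-specific:
+\begin{lstlisting}[language=C]
+#define _GNU_SOURCE
+#include <sched.h>
+#include <stdio.h>
+
+int main() {
+    int cpu = sched_getcpu();  // hardware thread this kernel thread runs on
+    // CPUs sharing a cache with this CPU; index3 is typically the L3
+    char path[128], list[256];
+    snprintf( path, sizeof(path),
+              "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list", cpu );
+    FILE * f = fopen( path, "r" );
+    if ( f && fgets( list, sizeof(list), f ) )
+        printf( "cpu %d shares its L3 with cpus %s", cpu, list );
+    if ( f ) fclose( f );
+}
+\end{lstlisting}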
 
 This approach solves the performance problems on systems with topologies with narrow L3 caches, similar to Figure \ref{fig:cache-noshare}.
Index: doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -7,5 +7,5 @@
 All of these benchmarks are run on two distinct hardware environments, an AMD and an INTEL machine.
 
-For all benchmarks, \texttt{taskset} is used to limit the experiment to 1 NUMA Node with no hyper threading.
+For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA Node with no hyperthreading.
 If more \glspl{hthrd} are needed, then 1 NUMA Node with hyperthreading is used.
 If still more \glspl{hthrd} are needed then the experiment is limited to as few NUMA Nodes as needed.
@@ -35,5 +35,5 @@
 \end{figure}
 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready-queue.
-Since these two operation also describe a \texttt{yield} operation, many systems use this as the most basic benchmark.
+Since these two operations also describe a @yield@ operation, many systems use this as the most basic benchmark.
 However, yielding can be treated as a special case, since it also carries the information that the number of ready \glspl{at} will not change.
 Not all systems use this information, but those which do may appear to have better performance than they would for disconnected push/pop pairs.
@@ -57,5 +57,5 @@
 This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.
 
-The actual benchmark is more complicated to handle termination, but that simply requires using a binary semphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition.
+The actual benchmark is more complicated in order to handle termination, but that simply requires using a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition.
 Figure~\ref{fig:cycle:code} shows pseudo code for this benchmark.
 
@@ -116,6 +116,6 @@
 \section{Yield}
 For completeness, I also include the yield benchmark.
-This benchmark is much simpler than the cycle tests, it simply creates many \glspl{at} that call \texttt{yield}.
-As mentionned in the previous section, this benchmark may be less representative of usages that only make limited use of \texttt{yield}, due to potential shortcuts in the routine.
+This benchmark is much simpler than the cycle tests: it simply creates many \glspl{at} that call @yield@.
+As mentioned in the previous section, this benchmark may be less representative of usages that only make limited use of @yield@, due to potential shortcuts in the routine.
 Its only interesting variable is the number of \glspl{at} per \gls{proc}, where ratios close to 1 mean the ready queue(s) could be empty.
 This sometimes puts more strain on the idle sleep handling, compared to scenarios where there is clearly plenty of work to be done.
@@ -184,8 +184,8 @@
 
 To achieve this the benchmark uses a fixed size array of semaphores.
-Each \gls{at} picks a random semaphore, \texttt{V}s it to unblock a \at waiting and then \texttt{P}s on the semaphore.
+Each \gls{at} picks a random semaphore, @V@s it to unblock a waiting \at, and then @P@s on the semaphore.
 This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves.
 For this benchmark to work however, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}.
-Note that the nature of these semaphores mean the counter can go beyond 1, which could lead to calls to \texttt{P} not blocking.
+Note that the nature of these semaphores means the counter can go beyond 1, which could lead to calls to @P@ not blocking.
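+
+A hedged sketch of the churn kernel follows, using POSIX counting semaphores as stand-ins for the runtime's semaphores; the array size and loop bound are illustrative:
+\begin{lstlisting}[language=C]
+#include <semaphore.h>
+#include <stdlib.h>
+
+enum { NSEM = 64 };
+sem_t sems[NSEM];  // fixed-size array of semaphores, initialized elsewhere
+
+void churn( int rounds ) {
+    for ( int r = 0; r < rounds; r += 1 ) {
+        unsigned int i = rand() % NSEM;
+        sem_post( &sems[i] );  // V: unblock some waiting thread
+        sem_wait( &sems[i] );  // P: may not block if the counter is above 0
+    }
+}
+\end{lstlisting}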
 
 \todo{code, setup, results}
Index: doc/theses/thierry_delisle_PhD/thesis/text/existing.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/existing.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/existing.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -178,5 +178,5 @@
 \begin{displayquote}
 	\begin{enumerate}
-		\item The task returned by \textit{t}\texttt{.execute()}
+		\item The task returned by \textit{t}@.execute()@
 		\item The successor of t if \textit{t} was its last completed predecessor.
 		\item A task popped from the end of the thread's own deque.
@@ -193,5 +193,5 @@
 \paragraph{Quasar/Project Loom}
 Java has two projects, Quasar~\cite{MAN:quasar} and Project Loom~\cite{MAN:project-loom}\footnote{It is unclear if these are distinct projects.}, that are attempting to introduce lightweight thread\-ing in the form of Fibers.
-Both projects seem to be based on the \texttt{ForkJoinPool} in Java, which appears to be a simple incarnation of randomized work-stealing~\cite{MAN:java/fork-join}.
+Both projects seem to be based on the @ForkJoinPool@ in Java, which appears to be a simple incarnation of randomized work-stealing~\cite{MAN:java/fork-join}.
 
 \paragraph{Grand Central Dispatch}
@@ -204,5 +204,5 @@
 % http://web.archive.org/web/20090920043909/http://images.apple.com/macosx/technology/docs/GrandCentral_TB_brief_20090903.pdf
 
-In terms of semantics, the Dispatch Queues seem to be very similar to Intel\textregistered ~TBB \texttt{execute()} and predecessor semantics.
+In terms of semantics, the Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics.
 
 \paragraph{LibFibre}
Index: doc/theses/thierry_delisle_PhD/thesis/text/intro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/intro.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/intro.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -103,4 +103,4 @@
 An algorithm for load-balancing and idle sleep of processors, including NUMA awareness.
 \item
-Support for user-level \glsxtrshort{io} capabilities based on Linux's \texttt{io\_uring}.
+Support for user-level \glsxtrshort{io} capabilities based on Linux's @io_uring@.
 \end{enumerate}
Index: doc/theses/thierry_delisle_PhD/thesis/text/io.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -1,8 +1,8 @@
 \chapter{User Level \io}
-As mentioned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations.
+As mentioned in Section~\ref{prev:io}, user-level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations.
 Different operating systems offer various forms of asynchronous operations and, as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system.
 
 \section{Kernel Interface}
-Since this work fundamentally depends on operating-system support, the first step of any design is to discuss the available interfaces and pick one (or more) as the foundations of the non-blocking \io subsystem.
+Since this work fundamentally depends on operating-system support, the first step of this design is to discuss the available interfaces and pick one (or more) as the foundation for the non-blocking \io subsystem.
 
 \subsection{\lstinline{O_NONBLOCK}}
@@ -10,33 +10,77 @@
 In this mode, ``Neither the @open()@ nor any subsequent \io operations on the [opened file descriptor] will cause the calling process to wait''~\cite{MAN:open}.
 This feature can be used as the foundation for the non-blocking \io subsystem.
-However, for the subsystem to know when an \io operation completes, @O_NONBLOCK@ must be use in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it does not cause the process to wait
-\footnote{In this context, ready means \emph{some} operation can be performed without blocking.
+However, for the subsystem to know when an \io operation completes, @O_NONBLOCK@ must be used in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it does not cause the process to wait.\footnote{
+In this context, ready means \emph{some} operation can be performed without blocking.
 It does not mean an operation returning \lstinline{EAGAIN} succeeds on the next try.
-For example, a ready read may only return a subset of bytes and the read must be issues again for the remaining bytes, at which point it may return \lstinline{EAGAIN}.}.
+For example, a ready read may only return a subset of requested bytes and the read must be issued again for the remaining bytes, at which point it may return \lstinline{EAGAIN}.}
 This mechanism is also crucial in determining when all \glspl{thrd} are blocked and the application \glspl{kthrd} can now block.
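+
+A minimal sketch of this mode follows; error handling is elided and the retry logic is only suggested:
+\begin{lstlisting}[language=C]
+#include <errno.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+ssize_t try_read( int fd, char * buf, size_t len ) {
+    int flags = fcntl( fd, F_GETFL, 0 );
+    fcntl( fd, F_SETFL, flags | O_NONBLOCK );  // opt in to non-blocking mode
+    ssize_t n = read( fd, buf, len );
+    if ( n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK) ) {
+        // not ready: park the thread and retry once a readiness
+        // notification for fd arrives
+    }
+    return n;
+}
+\end{lstlisting}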
 
-There are three options to monitor file descriptors in Linux
-\footnote{For simplicity, this section omits \lstinline{pselect} and \lstinline{ppoll}.
+There are three options to monitor file descriptors in Linux:\footnote{
+For simplicity, this section omits \lstinline{pselect} and \lstinline{ppoll}.
 The difference between these system calls and \lstinline{select} and \lstinline{poll}, respectively, is not relevant for this discussion.}
 @select@~\cite{MAN:select}, @poll@~\cite{MAN:poll} and @epoll@~\cite{MAN:epoll}.
 All three of these options offer a system call that blocks a \gls{kthrd} until at least one of many file descriptors becomes ready.
-The group of file descriptors being waited is called the \newterm{interest set}.
-
-\paragraph{\lstinline{select}} is the oldest of these options, it takes as an input a contiguous array of bits, where each bits represent a file descriptor of interest.
-On return, it modifies the set in place to identify which of the file descriptors changed status.
-This destructive change means that calling select in a loop requires re-initializing the array each time and the number of file descriptors supported has a hard limit.
-Another limit of @select@ is that once the call is started, the interest set can no longer be modified.
-Monitoring a new file descriptor generally requires aborting any in progress call to @select@
-\footnote{Starting a new call to \lstinline{select} is possible but requires a distinct kernel thread, and as a result is not an acceptable multiplexing solution when the interest set is large and highly dynamic unless the number of parallel calls to \lstinline{select} can be strictly bounded.}.
-
-\paragraph{\lstinline{poll}} is an improvement over select, which removes the hard limit on the number of file descriptors and the need to re-initialize the input on every call.
-It works using an array of structures as an input rather than an array of bits, thus allowing a more compact input for small interest sets.
-Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed while the call is blocked.
-
-\paragraph{\lstinline{epoll}} further improves these two functions by allowing the interest set to be dynamically added to and removed from while a \gls{kthrd} is blocked on an @epoll@ call.
+The group of file descriptors being waited on is called the \newterm{interest set}.
+
+\paragraph{\lstinline{select}} is the oldest of these options, and takes as input a contiguous array of bits, where each bit represents a file descriptor of interest.
+Hence, the array must be at least as long as the largest FD number currently of interest.
+On return, it overwrites the set in place to identify which of the file descriptors changed state.
+This destructive change means calling @select@ in a loop requires re-initializing the array for each iteration.
+Another limit of @select@ is that calls from different \glspl{kthrd} sharing FDs are independent.
+Hence, if one \gls{kthrd} is managing the @select@ calls, other \glspl{kthrd} can only add/remove FDs via synchronized updates to the manager's interest set.
+However, these changes are only reflected when the manager makes its next call to @select@.
+Note, it is possible for the manager thread to never unblock if its current interest set never changes, \eg the sockets/pipes/ttys it is waiting on never get data again.
+Often the \io manager has a timeout, polls, or is sent a signal on changes to mitigate this problem.
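+
+A short sketch of a manager loop illustrates the destructive interest set; the single-FD case is illustrative:
+\begin{lstlisting}[language=C]
+#include <sys/select.h>
+
+void manager( int fd ) {
+    for ( ;; ) {
+        fd_set rfds;
+        FD_ZERO( &rfds );  // re-initialize: select overwrites the set
+        FD_SET( fd, &rfds );
+        int n = select( fd + 1, &rfds, 0, 0, 0 );  // block until ready
+        if ( n > 0 && FD_ISSET( fd, &rfds ) ) {
+            // fd is ready: unpark the thread(s) waiting on it
+        }
+    }
+}
+\end{lstlisting}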
+
+\begin{comment}
+From: Tim Brecht <brecht@uwaterloo.ca>
+Subject: Re: FD sets
+Date: Wed, 6 Jul 2022 00:29:41 +0000
+
+Large number of open files
+--------------------------
+
+In order to be able to use more than the default number of open file
+descriptors you may need to:
+
+o increase the limit on the total number of open files /proc/sys/fs/file-max
+  (on Linux systems)
+
+o increase the size of FD_SETSIZE
+  - the way I often do this is to figure out which include file __FD_SETSIZE
+    is defined in, copy that file into an appropriate directory in ./include,
+    and then modify it so that if you use -DBIGGER_FD_SETSIZE the larger size
+    gets used
+
+  For example on a RH 9.0 distribution I've copied
+  /usr/include/bits/typesizes.h into ./include/i386-linux/bits/typesizes.h
+
+  Then I modify typesizes.h to look something like:
+
+  #ifdef BIGGER_FD_SETSIZE
+  #define __FD_SETSIZE            32767
+  #else
+  #define __FD_SETSIZE            1024
+  #endif
+
+  Note that the since I'm moving and testing the userver on may different
+  machines the Makefiles are set up to use -I ./include/$(HOSTTYPE)
+
+  This way if you redefine the FD_SETSIZE it will get used instead of the
+  default original file.
+\end{comment}
+
+\paragraph{\lstinline{poll}} is the next oldest option, and takes as input an array of structures containing the FD numbers rather than their position in an array of bits, allowing a more compact input for interest sets that contain widely spaced FDs.
+(For small interest sets with densely packed FDs, the @select@ bit mask can take less storage, and hence, copy less information into the kernel.)
+Furthermore, @poll@ is non-destructive, so the array of structures does not have to be re-initialized on every call.
+Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed by other \glspl{kthrd} while a manager thread is blocked in @poll@.
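+
+The corresponding sketch for @poll@ shows the non-destructive input array; two FDs are illustrative:
+\begin{lstlisting}[language=C]
+#include <poll.h>
+
+void manager( int fd0, int fd1 ) {
+    struct pollfd pfds[2] = {  // FDs named directly, not by bit position
+        { .fd = fd0, .events = POLLIN },
+        { .fd = fd1, .events = POLLIN },
+    };
+    for ( ;; ) {
+        int n = poll( pfds, 2, -1 );  // block until at least one FD is ready
+        for ( int i = 0; n > 0 && i < 2; i += 1 ) {
+            if ( pfds[i].revents & POLLIN ) {
+                // pfds[i].fd is ready; the input array is untouched
+            }
+        }
+    }
+}
+\end{lstlisting}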
+
+\paragraph{\lstinline{epoll}} follows after @poll@, and places the interest set in the kernel rather than the application, where it is managed by an internal \gls{kthrd}.
+There are two separate functions: one to add to the interest set and another to check for FDs with state changes.
 This dynamic capability is accomplished by creating an \emph{epoll instance} with a persistent interest set, which is used across multiple calls.
-This capability significantly reduces synchronization overhead on the part of the caller (in this case the \io subsystem), since the interest set can be modified when adding or removing file descriptors without having to synchronize with other \glspl{kthrd} potentially calling @epoll@.
-
-However, all three of these system calls have limitations.
+As the interest set is modified, the changes are implicitly picked up by a manager \gls{kthrd} blocked in an @epoll@ call.
+This capability significantly reduces synchronization between \glspl{kthrd} and the manager calling @epoll@.
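+
+A sketch of the same loop with @epoll@ shows the persistent, kernel-side interest set; note that @epoll_ctl@ may be called by other \glspl{kthrd} while the manager is blocked in @epoll_wait@:
+\begin{lstlisting}[language=C]
+#include <sys/epoll.h>
+
+void manager( int fd ) {
+    int epfd = epoll_create1( 0 );  // create the epoll instance
+    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
+    epoll_ctl( epfd, EPOLL_CTL_ADD, fd, &ev );  // grow the interest set
+    for ( ;; ) {
+        struct epoll_event ready[64];
+        int n = epoll_wait( epfd, ready, 64, -1 );
+        for ( int i = 0; i < n; i += 1 ) {
+            // ready[i].data.fd had a state change
+        }
+    }
+}
+\end{lstlisting}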
+
+However, all three of these \io systems have limitations.
 The @man@ page for @O_NONBLOCK@ mentions that ``[@O_NONBLOCK@] has no effect for regular files and block devices'', which means none of these three system calls are viable multiplexing strategies for these types of \io operations.
 Furthermore, @epoll@ has been shown to have problems with pipes and ttys~\cit{Peter's examples in some fashion}.
@@ -53,5 +97,5 @@
 It also supports batching multiple operations in a single system call.
 
-AIO offers two different approach to polling: @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed.
+AIO offers two different approaches to polling: @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed.
 For the purpose of \io multiplexing, @aio_suspend@ is the best interface.
 However, even if AIO requests can be submitted concurrently, @aio_suspend@ suffers from the same limitation as @select@ and @poll@, \ie, the interest set cannot be dynamically changed while a call to @aio_suspend@ is in progress.
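+
+A hedged sketch of this interface follows; error handling is elided and the program must link against the AIO library (\eg @-lrt@ on glibc):
+\begin{lstlisting}[language=C]
+#include <aio.h>
+#include <sys/types.h>
+
+ssize_t async_read( int fd, char * buf, size_t len ) {
+    struct aiocb cb = {
+        .aio_fildes = fd, .aio_buf = buf, .aio_nbytes = len, .aio_offset = 0
+    };
+    aio_read( &cb );  // submit the request
+    const struct aiocb * list[1] = { &cb };
+    aio_suspend( list, 1, 0 );  // wait for completion, like select/poll
+    return aio_return( &cb );  // collect the result
+}
+\end{lstlisting}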
@@ -70,5 +114,5 @@
 
 	\begin{flushright}
-		-- Linus Torvalds\cit{https://lwn.net/Articles/671657/}
+		-- Linus Torvalds~\cite{AIORant}
 	\end{flushright}
 \end{displayquote}
@@ -85,5 +129,5 @@
 A very recent addition to Linux, @io_uring@~\cite{MAN:io_uring}, is a framework that aims to solve many of the problems listed in the above interfaces.
 Like AIO, it represents \io operations as entries added to a queue.
-But like @epoll@, new requests can be submitted while a blocking call waiting for requests to complete is already in progress.
+But like @epoll@, new requests can be submitted while a blocking call waiting for requests to complete is already in progress.
 The @io_uring@ interface uses two ring buffers (referred to simply as rings) at its core: a submit ring to which programmers push \io requests and a completion ring from which programmers poll for completion.
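+
+While the runtime described here manipulates the rings directly, the @liburing@ helper library wraps the same two rings; the following hedged sketch issues a single read, and creating a ring per operation is for illustration only:
+\begin{lstlisting}[language=C]
+#include <liburing.h>
+
+ssize_t uring_read( int fd, char * buf, unsigned len ) {
+    struct io_uring ring;
+    io_uring_queue_init( 64, &ring, 0 );  // create submit and completion rings
+    struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );  // allocate an SQE
+    io_uring_prep_read( sqe, fd, buf, len, 0 );  // fill in the operation
+    io_uring_sqe_set_data( sqe, buf );  // user_data to match the completion
+    io_uring_submit( &ring );  // push to the submit ring and enter the kernel
+    struct io_uring_cqe * cqe;
+    io_uring_wait_cqe( &ring, &cqe );  // poll the completion ring
+    ssize_t res = cqe->res;
+    io_uring_cqe_seen( &ring, cqe );  // mark the CQE as consumed
+    io_uring_queue_exit( &ring );
+    return res;
+}
+\end{lstlisting}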
 
@@ -97,5 +141,5 @@
 In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading.
 However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run.
-This approach is used by languages like Go\cit{Go} and frameworks like libuv\cit{libuv}, since it has the advantage that it can easily be used across multiple operating systems.
+This approach is used by languages like Go\cit{Go}, frameworks like libuv\cit{libuv}, and web servers like Apache~\cite{apache} and Nginx~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems.
 This advantage is especially relevant for languages like Go, which offer a homogeneous \glsxtrshort{api} across all platforms.
 This contrasts with C, which has a very limited standard \glsxtrshort{api} for \io; \eg, the C standard library has no networking.
@@ -111,5 +155,5 @@
 \section{Event-Engine}
 An event engine's responsibility is to use the kernel interface to multiplex many \io operations onto few \glspl{kthrd}.
-In concrete terms, this means \glspl{thrd} enter the engine through an interface, the event engines then starts the operation and parks the calling \glspl{thrd}, returning control to the \gls{proc}.
+In concrete terms, this means \glspl{thrd} enter the engine through an interface; the event engine then starts the requested operations and parks the calling \glspl{thrd}, returning control to the \gls{proc}.
 The parked \glspl{thrd} are then rescheduled by the event engine once the desired operation has completed.
 
@@ -134,14 +178,17 @@
 \begin{enumerate}
 \item
-An SQE is allocated from the pre-allocated array (denoted \emph{S} in Figure~\ref{fig:iouring}).
+An SQE is allocated from the pre-allocated array \emph{S}.
 This array is created at the same time as the @io_uring@ instance, is in kernel-locked memory visible by both the kernel and the application, and has a fixed size determined at creation.
-How these entries are allocated is not important for the functioning of @io_uring@, the only requirement is that no entry is reused before the kernel has consumed it.
+How these entries are allocated is not important for the functioning of @io_uring@;
+the only requirement is that no entry is reused before the kernel has consumed it.
 \item
 The SQE is filled according to the desired operation.
-This step is straight forward, the only detail worth mentioning is that SQEs have a @user_data@ field that must be filled in order to match submission and completion entries.
+This step is straightforward.
+The only detail worth mentioning is that SQEs have a @user_data@ field that must be filled in order to match submission and completion entries.
 \item
 The SQE is submitted to the submission ring by appending the index of the SQE to the ring following regular ring buffer steps: \lstinline{buffer[head] = item; head++}.
 Since the head is visible to the kernel, some memory barriers may be required to prevent the compiler from reordering these operations.
 Since the submission ring is a regular ring buffer, more than one SQE can be added at once and the head is updated only after all entries are updated.
+Note, SQEs can be filled and submitted in any order, \eg in Figure~\ref{fig:iouring}, the submission order is S0, S3, S2, while S1 has not been submitted.
 \item
 The kernel is notified of the change to the ring using the system call @io_uring_enter@.
@@ -161,86 +208,46 @@
 The @io_uring_enter@ system call is protected by a lock inside the kernel.
 This protection means that concurrent calls to @io_uring_enter@ using the same instance are possible, but there is no performance gained from parallel calls to @io_uring_enter@.
-It is possible to do the first three submission steps in parallel, however, doing so requires careful synchronization.
+It is possible to do the first three submission steps in parallel;
+however, doing so requires careful synchronization.
 
 @io_uring@ also introduces constraints on the number of simultaneous operations that can be ``in flight''.
-Obviously, SQEs are allocated from a fixed-size array, meaning that there is a hard limit to how many SQEs can be submitted at once.
-In addition, the @io_uring_enter@ system call can fail because ``The  kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can  have  pending.''.
+First, SQEs are allocated from a fixed-size array, meaning that there is a hard limit to how many SQEs can be submitted at once.
+Second, the @io_uring_enter@ system call can fail because ``The  kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can have pending.''.
 This restriction means \io request bursts may have to be subdivided and submitted in chunks at a later time.
 
 \subsection{Multiplexing \io: Submission}
+
 The submission side is the most complicated aspect of @io_uring@ and the completion side effectively follows from the design decisions made in the submission side.
-While it is possible to do the first steps of submission in parallel, the duration of the system call scales with number of entries submitted.
+While there is freedom in designing the submission side, there are some realities of @io_uring@ that must be taken into account.
+It is possible to do the first steps of submission in parallel;
+however, the duration of the system call scales with the number of entries submitted.
 The consequence is that the amount of parallelism used to prepare submissions for the next system call is limited.
 Beyond this limit, the length of the system call is the throughput limiting factor.
-I concluded from early experiments that preparing submissions seems to take at most as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}.
-Therefore the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances.
-Similarly to scheduling, this sharding can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two.
-Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continously
-\footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant cpu usage.}.
+I concluded from early experiments that preparing submissions seems to take almost as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}.
+Therefore, the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances.
+Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continuously\footnote{
+As described in Chapter~\ref{practice}, this does not translate into constant CPU usage.}.
 Note that once an operation completes, there is nothing that ties it to the @io_uring@ instance that handled it.
-There is nothing preventing a new operation with, for example, the same file descriptors to a different @io_uring@ instance.
+There is nothing preventing a new operation, \eg with the same file descriptors, from being submitted to a different @io_uring@ instance.
 
 A complicating aspect of submission is @io_uring@'s support for chains of operations, where the completion of an operation triggers the submission of the next operation on the link.
 SQEs forming a chain must be allocated from the same instance and must be contiguous in the Submission Ring (see Figure~\ref{fig:iouring}).
-The consequence of this feature is that filling SQEs can be arbitrarly complex and therefore users may need to run arbitrary code between allocation and submission.
-Supporting chains is a requirement of the \io subsystem, but it is still valuable.
-Support for this feature can be fulfilled simply to supporting arbitrary user code between allocation and submission.
-
-\subsubsection{Public Instances}
-One approach is to have multiple shared instances.
-\Glspl{thrd} attempting \io operations pick one of the available instances and submit operations to that instance.
-Since there is no coupling between \glspl{proc} and @io_uring@ instances in this approach, \glspl{thrd} running on more than one \gls{proc} can attempt to submit to the same instance concurrently.
-Since @io_uring@ effectively sets the amount of sharding needed to avoid contention on its internal locks, performance in this approach is based on two aspects: the synchronization needed to submit does not induce more contention than @io_uring@ already does and the scheme to route \io requests to specific @io_uring@ instances does not introduce contention.
-This second aspect has an oversized importance because it comes into play before the sharding of instances, and as such, all \glspl{hthrd} can contend on the routing algorithm.
-
-Allocation in this scheme can be handled fairly easily.
-Free SQEs, \ie, SQEs that aren't currently being used to represent a request, can be written to safely and have a field called @user_data@ which the kernel only reads to copy to @cqe@s.
-Allocation also requires no ordering guarantee as all free SQEs are interchangeable.
-This requires a simple concurrent bag.
-The only added complexity is that the number of SQEs is fixed, which means allocation can fail.
-
-Allocation failures need to be pushed up to a routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available.
-Furthermore, the routing algorithm should block operations up-front if none of the instances have available SQEs.
-
-Once an SQE is allocated, \glspl{thrd} can fill them normally, they simply need to keep track of the SQE index and which instance it belongs to.
-
-Once an SQE is filled in, what needs to happen is that the SQE must be added to the submission ring buffer, an operation that is not thread-safe on itself, and the kernel must be notified using the @io_uring_enter@ system call.
-The submission ring buffer is the same size as the pre-allocated SQE buffer, therefore pushing to the ring buffer cannot fail
-\footnote{This is because it is invalid to have the same \lstinline{sqe} multiple times in the ring buffer.}.
-However, as mentioned, the system call itself can fail with the expectation that it will be retried once some of the already submitted operations complete.
-Since multiple SQEs can be submitted to the kernel at once, it is important to strike a balance between batching and latency.
-Operations that are ready to be submitted should be batched together in few system calls, but at the same time, operations should not be left pending for long period of times before being submitted.
-This can be handled by either designating one of the submitting \glspl{thrd} as the being responsible for the system call for the current batch of SQEs or by having some other party regularly submitting all ready SQEs, \eg, the poller \gls{thrd} mentioned later in this section.
-
-In the case of designating a \gls{thrd}, ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests would be batched together and one of the \glspl{thrd} would do the system call on behalf of the others, referred to as the \newterm{submitter}.
-In practice however, it is important that the \io requests are not left pending indefinitely and as such, it may be required to have a ``next submitter'' that guarentees everything that is missed by the current submitter is seen by the next one.
-Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call will include their request.
-Once the system call is done, the submitter must also free SQEs so that the allocator can reused them.
-
-Finally, the completion side is much simpler since the @io_uring@ system call enforces a natural synchronization point.
-Polling simply needs to regularly do the system call, go through the produced CQEs and communicate the result back to the originating \glspl{thrd}.
-Since CQEs only own a signed 32 bit result, in addition to the copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}.
-If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events.
-A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled.
-
-With this pool of instances approach, the big advantage is that it is fairly flexible.
-It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions.
-It also can gracefully handle running out of ressources, SQEs or the kernel returning @EBUSY@.
-The down side to this is that many of the steps used for submitting need complex synchronization to work properly.
-The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs and handle SQEs being freed.
-The submission side needs to safely append SQEs to the ring buffer, correctly handle chains, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused and handle the kernel returning @EBUSY@.
-All this synchronization may have a significant cost and, compared to the next approach presented, this synchronization is entirely overhead.
+The consequence of this feature is that filling SQEs can be arbitrarily complex, and therefore, users may need to run arbitrary code between allocation and submission.
+Supporting chains is not a requirement of the \io subsystem, but it is still valuable.
+Support for this feature can be fulfilled simply by supporting arbitrary user code between allocation and submission.
+
+Similar to scheduling, sharding @io_uring@ instances can be done privately, \ie, one instance per \gls{proc}, in decoupled pools, \ie, a pool of \glspl{proc} uses a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two.
+These three sharding approaches are analyzed.
 
 \subsubsection{Private Instances}
-Another approach is to simply create one ring instance per \gls{proc}.
-This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps.
-This is effectively the same requirement as using @thread_local@ variables.
-Since SQEs that are allocated must be submitted to the same ring, on the same \gls{proc}, this effectively forces the application to submit SQEs in allocation order
-\footnote{The actual requirement is that \glspl{thrd} cannot context switch between allocation and submission.
-This requirement means that from the subsystem's point of view, the allocation and submission are sequential.
-To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring.}
-, greatly simplifying both allocation and submission.
-In this design, allocation and submission form a partitionned ring buffer as shown in Figure~\ref{fig:pring}.
-Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to do the system call.
+The private approach creates one ring instance per \gls{proc}, \ie one-to-one coupling.
+This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not time-sliced during submission steps.
+This requirement is the same as accessing @thread_local@ variables, where a \gls{thrd} is accessing kernel-thread data, is time-sliced, and continues execution on another kernel thread but is now accessing the wrong data.
+This failure is the serially reusable problem~\cite{SeriallyReusable}.
+Hence, allocated SQEs must be submitted to the same ring on the same \gls{proc}, which effectively forces the application to submit SQEs in allocation order.\footnote{
+To remove this requirement, a \gls{thrd} needs the ability to ``yield to a specific \gls{proc}'', \ie, park with the guarantee it unparks on a specific \gls{proc}, \ie the \gls{proc} attached to the correct ring.}
+From the subsystem's point of view, the allocation and submission are sequential, greatly simplifying both.
+In this design, allocation and submission form a partitioned ring buffer as shown in Figure~\ref{fig:pring}.
+Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to perform the system call.
 Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of \glspl{thrd}, etc.
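+
+The bookkeeping for this partitioned ring can be sketched as follows; the field names and region layout are hypothetical, chosen only to mirror Figure~\ref{fig:pring}:
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+
+// One ring per proc: indices partition the ring into free entries,
+// entries allocated but not yet submitted, and entries handed to the
+// kernel. Names and sizes are illustrative, not the implementation.
+struct pring {
+    uint32_t free;       // next SQE index handed out by allocation
+    uint32_t ready;      // first entry filled in but not yet submitted
+    uint32_t submitted;  // first entry already pushed to the kernel
+    uint32_t sqes[64];   // ring of indices into the fixed SQE array
+};
+\end{lstlisting}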
 
@@ -254,62 +261,109 @@
 \end{figure}
 
-This approach has the advantage that it does not require much of the synchronization needed in the shared approach.
-This comes at the cost that \glspl{thrd} submitting \io operations have less flexibility, they cannot park or yield, and several exceptional cases are handled poorly.
-Instances running out of SQEs cannot run \glspl{thrd} wanting to do \io operations, in such a case the \gls{thrd} needs to be moved to a different \gls{proc}, the only current way of achieving this would be to @yield()@ hoping to be scheduled on a different \gls{proc}, which is not guaranteed.
-
-A more involved version of this approach can seem to solve most of these problems, using a pattern called \newterm{helping}.
-\Glspl{thrd} that wish to submit \io operations but cannot do so
-\footnote{either because of an allocation failure or because they were migrate to a different \gls{proc} between allocation and submission}
-create an object representing what they wish to achieve and add it to a list somewhere.
-For this particular problem, one solution would be to have a list of pending submissions per \gls{proc} and a list of pending allocations, probably per cluster.
-The problem with these ``solutions'' is that they are still bound by the strong coupling between \glspl{proc} and @io_uring@ instances.
-These data structures would allow moving \glspl{thrd} to a specific \gls{proc} when the current \gls{proc} cannot fulfill the \io request.
-
-Imagine a simple case with two \glspl{thrd} on two \glspl{proc}, one \gls{thrd} submits an \io operation and then sets a flag, the other \gls{thrd} spins until the flag is set.
-If the first \gls{thrd} is preempted between allocation and submission and moves to the other \gls{proc}, the original \gls{proc} could start running the spinning \gls{thrd}.
-If this happens, the helping ``solution'' is for the \io \gls{thrd}to added append an item to the submission list of the \gls{proc} where the allocation was made.
+This approach has the advantage that it does not require much of the synchronization needed in a shared approach.
+However, this benefit means \glspl{thrd} submitting \io operations have less flexibility: they cannot park or yield, and several exceptional cases are handled poorly.
+Instances running out of SQEs cannot run \glspl{thrd} wanting to do \io operations.
+In this case, the \io \gls{thrd} needs to be moved to a different \gls{proc}, and the only current way of achieving this is to @yield()@, hoping to be scheduled on a different \gls{proc} with free SQEs, which is not guaranteed.
+
+A more involved version of this approach tries to solve these problems using a pattern called \newterm{helping}.
+\Glspl{thrd} that cannot submit \io operations, either because of an allocation failure or migration to a different \gls{proc} between allocation and submission, create an \io object and add it to the appropriate list: a list of pending submissions per \gls{proc} or a list of pending allocations, probably per cluster.
+While there is still a strong coupling between \glspl{proc} and @io_uring@ instances, these data structures allow moving \glspl{thrd} to a specific \gls{proc} when the current \gls{proc} cannot fulfill the \io request.
+
+Imagine a simple scenario with two \glspl{thrd} on two \glspl{proc}, where one \gls{thrd} submits an \io operation and then sets a flag, while the other \gls{thrd} spins until the flag is set.
+Assume both \glspl{thrd} are running on the same \gls{proc}; the \io \gls{thrd} is preempted between allocation and submission and moved to the second \gls{proc}, while the original \gls{proc} starts running the spinning \gls{thrd}.
+In this case, the helping solution has the \io \gls{thrd} append an \io object to the submission list of the first \gls{proc}, where the allocation was made.
 No other \gls{proc} can help the \gls{thrd} since @io_uring@ instances are strongly coupled to \glspl{proc}.
-However, in this case, the \gls{proc} is unable to help because it is executing the spinning \gls{thrd} mentioned when first expression this case
-\footnote{This particular example is completely artificial, but in the presence of many more \glspl{thrd}, it is not impossible that this problem would arise ``in the wild''.
-Furthermore, this pattern is difficult to reliably detect and avoid.}
-resulting in a deadlock.
-Once in this situation, the only escape is to interrupted the execution of the \gls{thrd}, either directly or due to regular preemption, only then can the \gls{proc} take the time to handle the pending request to help.
-Interrupting \glspl{thrd} for this purpose is far from desireable, the cost is significant and the situation may be hard to detect.
-However, a more subtle reason why interrupting the \gls{thrd} is not a satisfying solution is that the \gls{proc} is not actually using the instance it is tied to.
-If it were to use it, then helping could be done as part of the usage.
+However, the first \gls{proc} is unable to help because it is executing the spinning \gls{thrd}, resulting in a deadlock.
+While this example is artificial, in the presence of many \glspl{thrd}, it is possible for this problem to arise ``in the wild''.
+Furthermore, this pattern is difficult to reliably detect and avoid.
+Once in this situation, the only escape is to interrupt the spinning \gls{thrd}, either directly or via some regular preemption (\eg time slicing).
+Having to interrupt \glspl{thrd} for this purpose is costly, the latency between interrupts can be large, and the situation may be hard to detect.
+% However, a more important reason why interrupting the \gls{thrd} is not a satisfying solution is that the \gls{proc} is using the instance it is tied to.
+% If it were to use it, then helping could be done as part of the usage.
 Interrupts are needed here entirely because the \gls{proc} is tied to an instance it is not using.
-Therefore a more satisfying solution would be for the \gls{thrd} submitting the operation to simply notice that the instance is unused and simply go ahead and use it.
-This is the approach presented next.
+Therefore, a more satisfying solution is for the \gls{thrd} submitting the operation to notice that the instance is unused and simply go ahead and use it.
+This approach is presented shortly.
+
+\subsubsection{Public Instances}
+The public approach creates decoupled pools of @io_uring@ instances and processors, \ie without one-to-one coupling.
+\Glspl{thrd} attempting an \io operation pick one of the available instances and submit the operation to that instance.
+Since there is no coupling between @io_uring@ instances and \glspl{proc} in this approach, \glspl{thrd} running on more than one \gls{proc} can attempt to submit to the same instance concurrently.
+Because @io_uring@ effectively sets the amount of sharding needed to avoid contention on its internal locks, performance in this approach is based on two aspects:
+\begin{itemize}
+\item
+The synchronization needed to submit does not induce more contention than @io_uring@ already does.
+\item
+The scheme to route \io requests to specific @io_uring@ instances does not introduce contention.
+This aspect has an outsized importance because it comes into play before the sharding of instances, and as such, all \glspl{hthrd} can contend on the routing algorithm.
+\end{itemize}
+
+Allocation in this scheme is fairly easy.
+Free SQEs, \ie, SQEs that are not currently being used to represent a request, can be written to safely and have a field called @user_data@ that the kernel only reads to copy to @cqe@s.
+Allocation also requires no ordering guarantee as all free SQEs are interchangeable.
+% This requires a simple concurrent bag.
+The only added complexity is that the number of SQEs is fixed, which means allocation can fail.
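+
+Because free SQEs are interchangeable and allocation can fail, the allocator reduces to a concurrent bag with a failure mode.
+A minimal lock-free sketch for a 64-SQE instance, using an atomic bitmask, could look as follows; the structure is illustrative, not necessarily what the runtime uses:
+\begin{lstlisting}
+#include <stdatomic.h>
+#include <stdint.h>
+static _Atomic uint64_t free_mask = ~0ull;  // one bit per SQE, all free initially
+
+int sqe_alloc( void ) {
+	uint64_t mask = atomic_load( &free_mask );
+	while ( mask != 0 ) {  // retry while some SQE appears free
+		int idx = __builtin_ctzll( mask );  // lowest free SQE index
+		if ( atomic_compare_exchange_weak( &free_mask, &mask, mask & ~(1ull << idx) ) )
+			return idx;  // claimed SQE idx
+	}
+	return -1;  // failure: no free SQEs in this instance
+}
+void sqe_free( int idx ) { atomic_fetch_or( &free_mask, 1ull << idx ); }
+\end{lstlisting}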
+
+Allocation failures need to be pushed to a routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available.
+Furthermore, the routing algorithm should block operations up-front if none of the instances have available SQEs.
+
+Once an SQE is allocated, \glspl{thrd} insert the \io request information and keep track of the SQE index and the instance it belongs to.
+
+Once an SQE is filled in, it is added to the submission ring buffer, an operation that is not thread-safe, and then the kernel must be notified using the @io_uring_enter@ system call.
+The submission ring buffer is the same size as the pre-allocated SQE buffer; therefore, pushing to the ring buffer cannot fail because it is invalid to have the same @sqe@ multiple times in a ring buffer.
+However, as mentioned, the system call itself can fail with the expectation that it can be retried once some submitted operations complete.
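+
+For concreteness, the push-and-notify step can be sketched as follows, where the @sq@ field names follow the @io_uring@ mmap layout, while the surrounding @ring@ structure and error handling are illustrative:
+\begin{lstlisting}
+#include <sys/syscall.h>
+#include <unistd.h>
+#include <errno.h>
+// push one filled-in SQE and notify the kernel; the push is not thread-safe
+void ring_push_and_enter( struct ring * r, unsigned sqe_idx ) {
+	unsigned tail = *r->sq.ktail;  // only this side writes the tail
+	r->sq.array[ tail & *r->sq.kring_mask ] = sqe_idx;
+	__atomic_store_n( r->sq.ktail, tail + 1, __ATOMIC_RELEASE );  // publish to kernel
+	// notify the kernel; EBUSY means retry once completions drain
+	if ( syscall( __NR_io_uring_enter, r->fd, 1, 0, 0, NULL, 0 ) < 0
+			&& errno == EBUSY ) { /* defer and retry later */ }
+}
+\end{lstlisting}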
+
+Since multiple SQEs can be submitted to the kernel at once, it is important to strike a balance between batching and latency.
+Operations that are ready to be submitted should be batched together in few system calls, but at the same time, operations should not be left pending for long periods of time before being submitted.
+Balancing submission can be handled either by designating one of the submitting \glspl{thrd} as responsible for the system call for the current batch of SQEs or by having some other party regularly submit all ready SQEs, \eg, the poller \gls{thrd} mentioned later in this section.
+
+Ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests should be batched together and one of the \glspl{thrd} is designated to do the system call on behalf of the others, called the \newterm{submitter}.
+However, in practice, \io requests must be handled promptly, so there is a need to guarantee everything missed by the current submitter is seen by the next one.
+Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call includes their request.
+Once the system call is done, the submitter must also free SQEs so that the allocator can reuse them.
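+
+One way to provide this ``next submitter'' guarantee is a shared counter used in the style of a combining lock.
+The sketch below assumes requests are already staged thread-safely before being announced and that @flush_ring@ performs the system call; both names are illustrative:
+\begin{lstlisting}
+#include <stdatomic.h>
+static _Atomic unsigned pending = 0;  // requests staged but not yet flushed
+
+void announce( struct ring * r ) {  // called after staging a request
+	if ( atomic_fetch_add( &pending, 1 ) != 0 ) return;  // a submitter exists
+	unsigned seen = atomic_load( &pending );  // we are the submitter
+	for ( ;; ) {
+		flush_ring( r );  // system call on behalf of all staged requests
+		if ( atomic_compare_exchange_strong( &pending, &seen, 0 ) ) break;
+		// failure means new requests arrived; seen was updated, flush again
+	}
+}
+\end{lstlisting}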
+
+Finally, the completion side is much simpler since the @io_uring@ system call enforces a natural synchronization point.
+Polling simply needs to regularly do the system call, go through the produced CQEs and communicate the result back to the originating \glspl{thrd}.
+Since a CQE contains only a signed 32-bit result, in addition to a copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}.
+If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events.
+A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled.
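+
+Such a poller loop can be sketched as follows; the @cq@ field names follow the @io_uring@ mmap layout, while the @future@ type, @fulfil@, and @yield@ are assumed helpers:
+\begin{lstlisting}
+void poller_main( struct ring * r ) {
+	for ( ;; ) {  // runs as an ordinary thrd, rescheduled between passes
+		unsigned head = *r->cq.khead;
+		unsigned tail = __atomic_load_n( r->cq.ktail, __ATOMIC_ACQUIRE );
+		for ( ; head != tail; head++ ) {
+			struct io_uring_cqe * cqe = &r->cq.cqes[ head & *r->cq.kring_mask ];
+			struct future * f = (struct future *)(uintptr_t)cqe->user_data;  // set at submission
+			fulfil( f, cqe->res );  // wake the originating thrd with the result
+		}
+		__atomic_store_n( r->cq.khead, head, __ATOMIC_RELEASE );  // consume CQEs
+		yield();  // let other thrds run before the next pass
+	}
+}
+\end{lstlisting}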
+
+The big advantage of this shared pool of @io_uring@ instances is that it is fairly flexible.
+It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions.
+It can also gracefully handle running out of resources, \ie, SQEs, or the kernel returning @EBUSY@.
+The downside to this approach is that many of the steps used for submitting need complex synchronization to work properly.
+The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs, and handle SQEs being freed.
+The submission side needs to safely append SQEs to the ring buffer, correctly handle chains, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused, and handle the kernel returning @EBUSY@.
+All this synchronization has a significant cost, and compared to the private-instance approach, this synchronization is entirely overhead.
 
 \subsubsection{Instance borrowing}
-Both of the approaches presented above have undesirable aspects that stem from too loose or too tight coupling between @io_uring@ and \glspl{proc}.
-In the first approach, loose coupling meant that all operations have synchronization overhead that a tighter coupling can avoid.
-The second approach on the other hand suffers from tight coupling causing problems when the \gls{proc} do not benefit from the coupling.
-While \glspl{proc} are continously issuing \io operations tight coupling is valuable since it avoids synchronization costs.
-However, in unlikely failure cases or when \glspl{proc} are not making use of their instance, tight coupling is no longer advantageous.
-A compromise between these approaches would be to allow tight coupling but have the option to revoke this coupling dynamically when failure cases arise.
-I call this approach ``instance borrowing''\footnote{While it looks similar to work-sharing and work-stealing, I think it is different enough from either to warrant a different verb to avoid confusion.}.
-
-In this approach, each cluster owns a pool of @io_uring@ instances managed by an arbiter.
+Both of the prior approaches have undesirable aspects that stem from tight or loose coupling between @io_uring@ and \glspl{proc}.
+The first approach suffers from tight coupling causing problems when a \gls{proc} does not benefit from the coupling.
+The second approach suffers from loose coupling causing operations to have synchronization overhead, which tighter coupling avoids.
+When \glspl{proc} are continuously issuing \io operations, tight coupling is valuable since it avoids synchronization costs.
+However, in unlikely failure cases or when \glspl{proc} are not using their instances, tight coupling is no longer advantageous.
+A compromise between these approaches is to allow tight coupling but have the option to revoke the coupling dynamically when failure cases arise.
+I call this approach \newterm{instance borrowing}.\footnote{
+While instance borrowing looks similar to work sharing and stealing, I think it is different enough to warrant a different verb to avoid confusion.}
+
+In this approach, each cluster (see Figure~\ref{fig:system}) owns a pool of @io_uring@ instances managed by an \newterm{arbiter}.
 When a \gls{thrd} attempts to issue an \io operation, it asks for an instance from the arbiter and issues requests to that instance.
-However, in doing so it ties to the instance to the \gls{proc} it is currently running on.
-This coupling is kept until the arbiter decides to revoke it, taking back the instance and reverting the \gls{proc} to its initial state with respect to \io.
-This tight coupling means that synchronization can be minimal since only one \gls{proc} can use the instance at any given time, akin to the private instances approach.
-However, where it differs is that revocation from the arbiter means this approach does not suffer from the deadlock scenario described above.
+This instance is now bound to the \gls{proc} the \gls{thrd} is running on.
+This binding is kept until the arbiter decides to revoke it, taking back the instance and reverting the \gls{proc} to its initial state with respect to \io.
+This tight coupling means that synchronization can be minimal since only one \gls{proc} can use the instance at a time, akin to the private instances approach.
+However, it differs in that revocation by the arbiter (an interrupt) means this approach does not suffer from the deadlock scenario described above.
 
 Arbitration is needed in the following cases:
 \begin{enumerate}
-	\item The current \gls{proc} does not currently hold an instance.
+	\item The current \gls{proc} does not hold an instance.
 	\item The current instance does not have sufficient SQEs to satisfy the request.
-	\item The current \gls{proc} has the wrong instance, this happens if the submitting \gls{thrd} context-switched between allocation and submission.
-	I will refer to these as \newterm{External Submissions}.
+	\item The current \gls{proc} has the wrong instance, which happens if the submitting \gls{thrd} context-switched between allocation and submission, called \newterm{external submissions}.
 \end{enumerate}
-However, even when the arbiter is not directly needed, \glspl{proc} need to make sure that their ownership of the instance is not being revoked.
-This can be accomplished by a lock-less handshake\footnote{Note that the handshake is not Lock-\emph{Free} since it lacks the proper progress guarantee.}.
+However, even when the arbiter is not directly needed, \glspl{proc} need to make sure that their instance ownership is not being revoked, which is accomplished by a lock-\emph{less} handshake.\footnote{
+Note the handshake is not lock \emph{free} since it lacks the proper progress guarantee.}
 A \gls{proc} raises a local flag before using its borrowed instance and checks if the instance is marked as revoked or if the arbiter has raised its flag.
-If not it proceeds, otherwise it delegates the operation to the arbiter.
+If not, it proceeds; otherwise, it delegates the operation to the arbiter.
 Once the operation is completed, the \gls{proc} lowers its local flag.
 
-Correspondingly, before revoking an instance the arbiter marks the instance and then waits for the \gls{proc} using it to lower its local flag.
+Correspondingly, before revoking an instance, the arbiter marks the instance and then waits for the \gls{proc} using it to lower its local flag.
 Only then does it reclaim the instance and potentially assign it to an other \gls{proc}.
 
@@ -323,32 +377,29 @@
 
 \paragraph{External Submissions} are handled by the arbiter by revoking the appropriate instance and adding the submission to the submission ring.
-There is no need to immediately revoke the instance however.
+However, there is no need to immediately revoke the instance.
 External submissions must simply be added to the ring before the next system call, \ie, when the submission ring is flushed.
-This means that whoever is responsible for the system call first checks if the instance has any external submissions.
-If it is the case, it asks the arbiter to revoke the instance and add the external submissions to the ring.
-
-\paragraph{Pending Allocations} can be more complicated to handle.
-If the arbiter has available instances, the arbiter can attempt to directly hand over the instance and satisfy the request.
-Otherwise it must hold onto the list of threads until SQEs are made available again.
-This handling becomes that much more complex if pending allocation require more than one SQE, since the arbiter must make a decision between statisfying requests in FIFO ordering or satisfy requests for fewer SQEs first.
-
-While this arbiter has the potential to solve many of the problems mentionned in above, it also introduces a significant amount of complexity.
+This means whoever is responsible for the system call first checks if the instance has any external submissions.
+If so, it asks the arbiter to revoke the instance and add the external submissions to the ring.
+
+\paragraph{Pending Allocations} are handled by the arbiter when it has available instances and can directly hand over the instance and satisfy the request.
+Otherwise, it must hold onto the list of threads until SQEs are made available again.
+This handling is more complex when an allocation requires multiple SQEs, since the arbiter must decide between satisfying requests in FIFO order or satisfying requests for fewer SQEs first.
+
+While an arbiter has the potential to solve many of the problems mentioned above, it also introduces a significant amount of complexity.
 Tracking which processors are borrowing which instances and which instances have SQEs available ends up adding a significant synchronization prelude to any I/O operation.
 Any submission must start with a handshake that pins the currently borrowed instance, if available.
 An attempt to allocate is then made, but the arbiter can concurrently be attempting to allocate from the same instance on a different \gls{hthrd}.
-Once the allocation is completed, the submission must still check that the instance is still burrowed before attempt to flush.
-These extra synchronization steps end-up having a similar cost to the multiple shared instances approach.
+Once the allocation is completed, the submission must check that the instance is still borrowed before attempting to flush.
+These synchronization steps turn out to have a similar cost to the multiple shared-instances approach.
 Furthermore, if the number of instances does not match the number of processors actively submitting I/O, the system can fall into a state where instances are constantly being revoked and end up cycling among the processors, which leads to significant cache deterioration.
-Because of these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.
+For these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.
 
 \subsubsection{Private Instances V2}
 
-
-
 % Verbs of this design
 
 % Allocation: obtaining an sqe from which to fill in the io request, enforces the io instance to use since it must be the one which provided the sqe. Must interact with the arbiter if the instance does not have enough sqe for the allocation. (Typical allocation will ask for only one sqe, but chained sqe must be allocated from the same context so chains of sqe must be allocated in bulks)
 
-% Submition: simply adds the sqe(s) to some data structure to communicate that they are ready to go. This operation can't fail because there are as many spots in the submit buffer than there are sqes. Must interact with the arbiter only if the thread was moved between the allocation and the submission.
+% Submission: simply adds the sqe(s) to some data structure to communicate that they are ready to go. This operation can't fail because there are as many spots in the submit buffer than there are sqes. Must interact with the arbiter only if the thread was moved between the allocation and the submission.
 
 % Flushing: Taking all the sqes that were submitted and making them visible to the kernel, also counting them in order to figure out what to_submit should be. Must be thread-safe with submission. Has to interact with the Arbiter if there are external submissions. Can't simply use a protected queue because adding to the array is not safe if the ring is still available for submitters. Flushing must therefore: check if there are external pending requests if so, ask the arbiter to flush otherwise use the fast flush operation.
@@ -357,6 +408,4 @@
 
 % Handle: process all the produced cqe. No need to interact with any of the submission operations or the arbiter.
-
-
 
 
@@ -404,52 +453,59 @@
 
 \section{Interface}
-Finally, the last important part of the \io subsystem is it's interface. There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. The new \io subsystem can replace the C runtime's API or extend it. And in the later case the interface can go from very similar to vastly different. The following sections discuss some useful options using @read@ as an example. The standard Linux interface for C is :
-
-@ssize_t read(int fd, void *buf, size_t count);@
+
+The last important part of the \io subsystem is its interface.
+There are multiple approaches that can be offered to programmers, each with advantages and disadvantages.
+The new \io subsystem can replace the C runtime API or extend it, and in the latter case, the interface can go from very similar to vastly different.
+The following sections discuss some useful options using @read@ as an example.
+The standard Linux interface for C is:
+\begin{lstlisting}
+ssize_t read(int fd, void *buf, size_t count);
+\end{lstlisting}
 
 \subsection{Replacement}
 Replacing the C \glsxtrshort{api} is the more intrusive and draconian approach.
 The goal is to convince the compiler and linker to replace any calls to @read@ to direct them to the \CFA implementation instead of glibc's.
-This has the advantage of potentially working transparently and supporting existing binaries without needing recompilation.
+This rerouting has the advantage of working transparently and supporting existing binaries without needing recompilation.
 It also offers a, presumably, well known and familiar API that C programmers can simply continue to work with.
-However, this approach also entails a plethora of subtle technical challenges which generally boils down to making a perfect replacement.
+However, this approach also entails a plethora of subtle technical challenges, which generally boil down to making a perfect replacement.
 If the \CFA interface replaces only \emph{some} of the calls to glibc, then this can easily lead to esoteric concurrency bugs.
-Since the gcc ecosystems does not offer a scheme for such perfect replacement, this approach was rejected as being laudable but infeasible.
+Since the gcc ecosystem does not offer a scheme for perfect replacement, this approach was rejected as being laudable but infeasible.
 
 \subsection{Synchronous Extension}
-An other interface option is to simply offer an interface that is different in name only. For example:
-
-@ssize_t cfa_read(int fd, void *buf, size_t count);@
-
-\noindent This is much more feasible but still familiar to C programmers.
-It comes with the caveat that any code attempting to use it must be recompiled, which can be a big problem considering the amount of existing legacy C binaries.
+Another interface option is to offer an interface different in name only.
+For example:
+\begin{lstlisting}
+ssize_t cfa_read(int fd, void *buf, size_t count);
+\end{lstlisting}
+This approach is feasible and still familiar to C programmers.
+It comes with the caveat that any code attempting to use it must be recompiled, which is a problem considering the amount of existing legacy C binaries.
 However, it has the advantage of implementation simplicity.
+Finally, there is a certain irony to using a blocking synchronous interface for a feature often referred to as ``non-blocking'' \io.
 
 \subsection{Asynchronous Extension}
-It is important to mention that there is a certain irony to using only synchronous, therefore blocking, interfaces for a feature often referred to as ``non-blocking'' \io.
-A fairly traditional way of doing this is using futures\cit{wikipedia futures}.
-As simple way of doing so is as follows:
-
-@future(ssize_t) read(int fd, void *buf, size_t count);@
-
-\noindent Note that this approach is not necessarily the most idiomatic usage of futures.
-The definition of read above ``returns'' the read content through an output parameter which cannot be synchronized on.
-A more classical asynchronous API could look more like:
-
-@future([ssize_t, void *]) read(int fd, size_t count);@
-
-\noindent However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned.
-Because of the performance implications of this, the first approach is considered preferable as it is more familiar to C programmers.
-
-\subsection{Interface directly to \lstinline{io_uring}}
-Finally, an other interface that can be relevant is to simply expose directly the underlying \texttt{io\_uring} interface. For example:
-
-@array(SQE, want) cfa_io_allocate(int want);@
-
-@void cfa_io_submit( const array(SQE, have) & );@
-
-\noindent This offers more flexibility to users wanting to fully use all of the \texttt{io\_uring} features.
+A fairly traditional way of providing asynchronous interactions is using a future mechanism~\cite{multilisp}, \eg:
+\begin{lstlisting}
+future(ssize_t) read(int fd, void *buf, size_t count);
+\end{lstlisting}
+where the generic @future@ is fulfilled when the read completes and it contains the number of bytes read, which may be less than the number of bytes requested.
+The data read is placed in @buf@.
+The problem is that the operation's result is both the byte count and the data in the buffer, yet only the byte count is covered by the future's synchronization.
+Hence, the buffer cannot be reused until the operation completes, but the synchronization does not enforce this constraint.
+A classical asynchronous API is:
+\begin{lstlisting}
+future([ssize_t, void *]) read(int fd, size_t count);
+\end{lstlisting}
+where the future tuple covers the components that require synchronization.
+However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned.
+Because of the performance implications of this API, the first approach is considered preferable as it is more familiar to C programmers.
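+
+For illustration, a usage sketch of the first form, assuming a @get@ operation that blocks until the future is fulfilled:
+\begin{lstlisting}
+char buf[4096];
+future(ssize_t) f = read( fd, buf, sizeof(buf) );  // submit without blocking
+do_other_work();  // overlap computation with the in-flight read
+ssize_t n = get( f );  // block until the read completes
+// only now is buf guaranteed to hold the data and be safe to reuse
+\end{lstlisting}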
+
+\subsection{Direct \lstinline{io_uring} Interface}
+The last interface directly exposes the underlying @io_uring@ interface, \eg:
+\begin{lstlisting}
+array(SQE, want) cfa_io_allocate(int want);
+void cfa_io_submit( const array(SQE, have) & );
+\end{lstlisting}
+where the generic @array@ contains an array of SQEs with a size that may be less than the request.
+This offers more flexibility to users wanting to fully utilize all of the @io_uring@ features.
 However, it is not the most user-friendly option.
+It obviously imposes a strong dependency between user code and @io_uring@, while at the same time restricting users to usages that are compatible with how \CFA internally uses @io_uring@.
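+
+As a usage sketch (the SQE field names follow the @io_uring@ ABI; everything else is illustrative):
+\begin{lstlisting}
+array(SQE, 1) sqes = cfa_io_allocate( 1 );  // may return fewer than requested
+SQE & sqe = sqes[0];
+sqe.opcode = IORING_OP_READ;  // fill in the raw request by hand
+sqe.fd = fd;
+sqe.addr = (uintptr_t)buf;
+sqe.len = count;
+cfa_io_submit( sqes );  // queued; flushed on a later ring flush
+\end{lstlisting}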
-
-
+It obviously imposes a strong dependency between user code and @io_uring@ but at the same time restricting users to usages that are compatible with how \CFA internally uses @io_uring@.
Index: doc/theses/thierry_delisle_PhD/thesis/text/practice.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/practice.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/text/practice.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -15,5 +15,5 @@
 Programmers are mostly expected to resize clusters on startup or teardown.
 Therefore dynamically changing the number of \procs is an appropriate moment to allocate or free resources to match the new state.
-As such all internal arrays that are sized based on the number of \procs need to be \texttt{realloc}ed.
+As such, all internal arrays that are sized based on the number of \procs need to be @realloc@ed.
 This also means that any references into these arrays, pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing when shrinking because some indexes are expected to refer to dense contiguous resources and there is no guarantee the resource being removed has the highest index.}.
 
@@ -107,5 +107,5 @@
 First, some data structure needs to keep track of all \procs that are in idle sleep.
 Because idle sleep can be spurious, this data structure has strict performance requirements in addition to the strict correctness requirements.
-Next, some tool must be used to block kernel threads \glspl{kthrd}, \eg \texttt{pthread\_cond\_wait}, pthread semaphores.
+Next, some tool must be used to block kernel threads (\glspl{kthrd}), \eg @pthread_cond_wait@ or pthread semaphores.
 The complexity here is to support \at parking and unparking, timers, \io operations and all other \CFA features with minimal complexity.
 Finally, idle sleep also includes a heuristic to determine the appropriate number of \procs to be in idle sleep at any given time.
@@ -117,12 +117,12 @@
 In terms of blocking a \gls{kthrd} until some event occurs, the Linux kernel has many available options:
 
-\paragraph{\texttt{pthread\_mutex}/\texttt{pthread\_cond}}
-The most classic option is to use some combination of \texttt{pthread\_mutex} and \texttt{pthread\_cond}.
-These serve as straight forward mutual exclusion and synchronization tools and allow a \gls{kthrd} to wait on a \texttt{pthread\_cond} until signalled.
-While this approach is generally perfectly appropriate for \glspl{kthrd} waiting after eachother, \io operations do not signal \texttt{pthread\_cond}s.
-For \io results to wake a \proc waiting on a \texttt{pthread\_cond} means that a different \glspl{kthrd} must be woken up first, and then the \proc can be signalled.
-
-\subsection{\texttt{io\_uring} and Epoll}
-An alternative is to flip the problem on its head and block waiting for \io, using \texttt{io\_uring} or even \texttt{epoll}.
+\paragraph{\lstinline{pthread_mutex}/\lstinline{pthread_cond}}
+The most classic option is to use some combination of @pthread_mutex@ and @pthread_cond@.
+These serve as straightforward mutual exclusion and synchronization tools and allow a \gls{kthrd} to wait on a @pthread_cond@ until signalled.
+While this approach is generally perfectly appropriate for \glspl{kthrd} waiting on each other, \io operations do not signal @pthread_cond@s.
+For \io results to wake a \proc waiting on a @pthread_cond@ means that a different \gls{kthrd} must be woken up first, and then the \proc can be signalled.
+
+\subsection{\lstinline{io_uring} and Epoll}
+An alternative is to flip the problem on its head and block waiting for \io, using @io_uring@ or even @epoll@.
 This creates the inverse situation, where \io operations directly wake sleeping \procs, but waking a \proc from a running \gls{kthrd} must use an indirect scheme.
 This generally takes the form of creating a file descriptor, \eg, a dummy file, a pipe or an event fd, and using that file descriptor when \procs need to wake each other.
@@ -132,10 +132,11 @@
 \subsection{Event FDs}
 Another interesting approach is to use an event file descriptor\cit{eventfd}.
-This is a Linux feature that is a file descriptor that behaves like \io, \ie, uses \texttt{read} and \texttt{write}, but also behaves like a semaphore.
+This Linux feature is a file descriptor that behaves like \io, \ie, uses @read@ and @write@, but also behaves like a semaphore.
 Indeed, all reads and writes must use 64-bit values\footnote{On 64-bit Linux; a 32-bit Linux would use 32-bit values.}.
-Writes add their values to the buffer, that is arithmetic addition and not buffer append, and reads zero out the buffer and return the buffer values so far\footnote{This is without the \texttt{EFD\_SEMAPHORE} flag. This flags changes the behavior of \texttt{read} but is not needed for this work.}.
+Writes add their value to the buffer, \ie arithmetic addition rather than buffer append, and reads zero out the buffer and return its accumulated value\footnote{
+This is without the \lstinline{EFD_SEMAPHORE} flag, which changes the behavior of \lstinline{read} but is not needed for this work.}.
 If a read is made while the buffer is already 0, the read blocks until a non-0 value is added.
-What makes this feature particularly interesting is that \texttt{io\_uring} supports the \texttt{IORING\_REGISTER\_EVENTFD} command, to register an event fd to a particular instance.
-Once that instance is registered, any \io completion will result in \texttt{io\_uring} writing to the event FD.
+What makes this feature particularly interesting is that @io_uring@ supports the @IORING_REGISTER_EVENTFD@ command, to register an event fd to a particular instance.
+Once that instance is registered, any \io completion results in @io_uring@ writing to the event FD.
 This means that a \proc waiting on the event FD can be \emph{directly} woken up by either other \procs or incoming \io.
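+
+A minimal sketch of the registration and the idle-sleep read follows; the system calls are real, while the surrounding structure is illustrative:
+\begin{lstlisting}
+#include <linux/io_uring.h>
+#include <stdint.h>
+#include <sys/eventfd.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+
+void idle_sleep( int ring_fd ) {
+	int efd = eventfd( 0, 0 );  // counter starts at 0
+	// tie the eventfd to the io_uring instance: completions now bump it
+	syscall( __NR_io_uring_register, ring_fd, IORING_REGISTER_EVENTFD, &efd, 1 );
+	// an idle proc blocks here; a completion or another proc writing wakes it
+	uint64_t val;
+	read( efd, &val, sizeof(val) );  // blocks while the counter is zero
+}
+\end{lstlisting}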
 
@@ -172,5 +173,5 @@
 This means that whichever entity removes idle \procs from the sleeper list must be able to do so in any order.
 Using a simple lock over this data structure makes the removal much simpler than using a lock-free data structure.
-The notification process then simply needs to wake-up the desired idle \proc, using \texttt{pthread\_cond\_signal}, \texttt{write} on an fd, etc., and the \proc will handle the rest.
+The notification process then simply needs to wake up the desired idle \proc, using @pthread_cond_signal@, @write@ on an fd, etc., and the \proc will handle the rest.
 
 \subsection{Reducing Latency}
@@ -190,9 +191,9 @@
 The contention is mostly due to the lock on the list needing to be held to get to the head \proc.
 That lock can be contended by \procs attempting to go to sleep, \procs waking or notification attempts.
-The contentention from the \procs attempting to go to sleep can be mitigated slightly by using \texttt{try\_acquire} instead, so the \procs simply continue searching for \ats if the lock is held.
+The contention from the \procs attempting to go to sleep can be mitigated slightly by using @try_acquire@ instead, so the \procs simply continue searching for \ats if the lock is held.
 This trick cannot be used for waking \procs since they are not in a state where they can run \ats.
 However, it is worth noting that notification does not strictly require accessing the list or the head \proc.
 Therefore, contention can be reduced notably by having notifiers avoid the lock entirely and adding a pointer to the event fd of the first idle \proc, as in Figure~\ref{fig:idle2}.
-To avoid contention between the notifiers, instead of simply reading the atomic pointer, notifiers atomically exchange it to \texttt{null} so only only notifier will contend on the system call.
+To avoid contention between the notifiers, instead of simply reading the atomic pointer, notifiers atomically exchange it to @null@ so only one notifier will contend on the system call.
 
 \begin{figure}
@@ -206,9 +207,9 @@
 This can be done by adding what is effectively a benaphore\cit{benaphore} in front of the event fd.
 A simple three-state flag is added beside the event fd to avoid unnecessary system calls, as shown in Figure~\ref{fig:idle:state}.
-The flag starts in state \texttt{SEARCH}, while the \proc is searching for \ats to run.
-The \proc then confirms the sleep by atomically swaping the state to \texttt{SLEEP}.
-If the previous state was still \texttt{SEARCH}, then the \proc does read the event fd.
-Meanwhile, notifiers atomically exchange the state to \texttt{AWAKE} state.
-if the previous state was \texttt{SLEEP}, then the notifier must write to the event fd.
+The flag starts in state @SEARCH@, while the \proc is searching for \ats to run.
+The \proc then confirms the sleep by atomically swapping the state to @SLEEP@.
+If the previous state was still @SEARCH@, then the \proc does read the event fd.
+Meanwhile, notifiers atomically exchange the state to @AWAKE@.
+If the previous state was @SLEEP@, then the notifier must write to the event fd.
 However, if the notify arrives almost immediately after the \proc marks itself idle, then both reads and writes on the event fd can be omitted, which reduces latency notably.
 This leads to the final data structure shown in Figure~\ref{fig:idle}.
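+
+Both sides of this protocol can be sketched as follows, with the state names from the text and the event fd as above:
+\begin{lstlisting}
+#include <stdatomic.h>
+#include <stdint.h>
+#include <unistd.h>
+enum { SEARCH, SLEEP, AWAKE };
+_Atomic int state = SEARCH;  // per idle slot, reset to SEARCH on wake-up
+
+void proc_sleep( int efd ) {  // proc side, after failing to find ats
+	if ( atomic_exchange( &state, SLEEP ) == SEARCH ) {
+		uint64_t v;
+		read( efd, &v, sizeof(v) );  // actually block on the event fd
+	}  // else a notifier already stored AWAKE: skip the read entirely
+}
+void notify( int efd ) {  // notifier side
+	if ( atomic_exchange( &state, AWAKE ) == SLEEP ) {
+		uint64_t one = 1;
+		write( efd, &one, sizeof(one) );  // proc committed to sleep: wake it
+	}  // else the proc has not confirmed sleep: no system call needed
+}
+\end{lstlisting}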
Index: doc/theses/thierry_delisle_PhD/thesis/thesis.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision 9c6443e73df608a521d9c1d6a1fad2fe729a79e6)
+++ doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision 847bb6fe1321be72f2e6a488bae6a3d8574fe886)
@@ -108,5 +108,6 @@
 	citecolor=OliveGreen,   % color of links to bibliography
 	filecolor=magenta,      % color of file links
-	urlcolor=cyan           % color of external links
+	urlcolor=blue,           % color of external links
+	breaklinks=true
 }
 \ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
