Index: doc/theses/thierry_delisle_PhD/thesis/text/core.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision 4407b7ee22a177cb0cd564fd7ea6335606b27dc3)
+++ doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision d489da80335483f120776a4061b19c68ca8e988b)
@@ -13,5 +13,5 @@
 To match expectations, the design must offer the programmer sufficient guarantees so that, as long as they respect the execution mental model, the system also respects this model.
 
-For threading, a simple and common execution mental model is the ``ideal multitasking CPU'' :
+For threading, a simple and common execution mental model is the ``ideal multitasking CPU'':
 
 \begin{displayquote}[Linux CFS\cite{MAN:linux/cfs}]
@@ -120,5 +120,5 @@
 On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
 This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
-Chapter~\ref{microbench} shows that pathological cases work stealing can lead to indefinite starvation.
+Chapter~\ref{microbench} shows that, in pathological cases, work stealing can lead to indefinite starvation.
 
 Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave similarly to work-stealing in the steady-state case, but load balance proactively when the need arises.
@@ -126,5 +126,5 @@
 \subsection{Relaxed-FIFO}
 A different scheduling approach is to create a ``relaxed-FIFO'' queue, as in \cite{alistarh2018relaxed}.
-This approach forgoes any ownership between \gls{proc} and sub-queue, and simply creates a pool of ready queues from which \glspl{proc} pick.
+This approach forgoes any ownership between \gls{proc} and sub-queue, and simply creates a pool of sub-queues from which \glspl{proc} pick.
 Scheduling is performed as follows:
 \begin{itemize}
@@ -134,5 +134,5 @@
 Timestamps are added to each element of a sub-queue.
 \item
-A \gls{proc} randomly tests ready queues until it has acquired one or two queues.
+A \gls{proc} randomly tests sub-queues until it has acquired one or two queues.
 \item
 If two queues are acquired, the older of the two \ats is dequeued from the front of the acquired queues.
@@ -254,5 +254,5 @@
 
 With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, avoiding the majority of unnecessary migrations.
-Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term or no starvation.
+Unfortunately, the work to achieve fairness has a performance cost, especially when the workload is inherently fair, and hence, there is only short-term unfairness or no starvation.
 The problem is that the constant polling, \ie reads, of remote sub-queues generally entails cache misses because the TSs are constantly being updated, \ie writes.
 To make things worse, remote sub-queues that are very active, \ie ones where \ats are frequently enqueued and dequeued, lead to a higher chance that polling incurs a cache miss.
@@ -342,5 +342,5 @@
 \subsection{Topological Work Stealing}
 \label{s:TopologicalWorkStealing}
-Therefore, the approach used in the \CFA scheduler is to have per-\proc sub-queues, but have an explicit data structure to track which cache substructure each sub-queue is tied to.
+The approach used in the \CFA scheduler is to have per-\proc sub-queues, but have an explicit data structure to track which cache substructure each sub-queue is tied to.
 This tracking requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
 A key element, however, is that, like the timestamps for helping, reading the cache instance mapping only needs to give the correct result \emph{often enough}.
