Index: doc/theses/thierry_delisle_PhD/thesis/text/front.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/front.tex	(revision 30159e5e29aba584a347c2ecaf2cb5eacb8d5856)
+++ doc/theses/thierry_delisle_PhD/thesis/text/front.tex	(revision 4e21942973cb669449a4cfc55bb1e0fabb7f5de8)
@@ -106,5 +106,5 @@
 % D E C L A R A T I O N   P A G E
 % -------------------------------
-% The following is a sample Delaration Page as provided by the GSO
+% The following is a sample Declaration Page as provided by the GSO
 % December 13th, 2006.  It is designed for an electronic thesis.
 \noindent
@@ -124,27 +124,27 @@
 
 User-Level threading (M:N) is gaining popularity over kernel-level threading (1:1) in many programming languages.
-The user-level approach is often a better mechanism to express complex concurrent applications by efficiently running 10,000+ threads on multi-core systems.
-Indeed, over-partitioning into small work-units significantly eases load balancing while providing user threads for each unit of work offers greater freedom to the programmer.
+The user threading approach is often a better mechanism to express complex concurrent applications by efficiently running 10,000+ threads on multi-core systems.
+Indeed, over-partitioning into small work-units with user threading significantly eases load bal\-ancing, while simultaneously providing advanced synchronization and mutual exclusion mechanisms.
 To manage these high levels of concurrency, the underlying runtime must efficiently schedule many user threads across a few kernel threads;
-which begs of the question of how many kernel threads are needed and when should the need be re-evaliated.
-Furthermore, the scheduler must prevent kernel threads from blocking, otherwise user-thread parallelism drops, and put idle kernel-threads to sleep to avoid wasted resources.
+which raises the question of how many kernel threads are needed and whether that number should be dynamically reevaluated.
+Furthermore, scheduling must prevent kernel threads from blocking, otherwise user-thread parallelism drops.
+When user-thread parallelism does drop, how and when should idle kernel-threads be put to sleep to avoid wasting CPU resources?
 Finally, the scheduling system must provide fairness to prevent a user thread from monopolizing a kernel thread;
-otherwise other user threads can experience short/long term starvation or kernel threads can deadlock waiting for events to occur.
+otherwise other user threads can experience short/long term starvation or kernel threads can deadlock waiting for events to occur on busy kernel threads.
 
 This thesis analyses multiple scheduler systems, where each system attempts to fulfill the necessary requirements for user-level threading.
-The predominant technique for manage high levels of concurrency is sharding the ready-queue with one queue per kernel-threads and using some form of work stealing/sharing to dynamically rebalance workload shifts.
-Fairness can be handled through preemption or ad-hoc solutions, which leads to coarse-grained fairness and pathological cases.
+The predominant technique for managing high levels of concurrency is sharding the ready-queue with one queue per kernel-thread and using some form of work stealing/sharing to dynamically rebalance workload shifts.
-Preventing kernel blocking is accomplish by transforming kernel locks and I/O operations into user-level operations that do not block the kernel thread or spin up new kernel threads to manage the blocking.
+Preventing kernel blocking is accomplished by transforming kernel locks and I/O operations into user-level operations that do not block the kernel thread, or by spinning up new kernel threads to manage the blocking.
-
-After selecting specific approaches to these scheduling issues, a complete implementation was created and tested in the \CFA (C-for-all) runtime system.
+Fairness is handled through preemption and/or ad-hoc solutions, which leads to coarse-grained fairness with some pathological cases.
+
+After testing and selecting specific approaches to these scheduling issues, a complete implementation was created and tested in the \CFA (C-for-all) runtime system.
 \CFA is a modern extension of C using user-level threading as its fundamental threading model.
 As one of its primary goals, \CFA aims to offer increased safety and productivity without sacrificing performance.
 The new scheduler achieves this goal by demonstrating equivalent performance to work-stealing schedulers while offering better fairness.
-This is achieved through several optimization that successfully eliminate the cost of the additional fairness, some of these optimization relying on interesting hardware optimizations present on most modern cpus.
-This work also includes support for user-level \io, allowing programmers to have many more user-threads blocking on \io operations than there are \glspl{kthrd}.
+The implementation uses several optimizations that successfully balance the cost of fairness against performance;
+some of these optimizations rely on interesting hardware features present on modern CPUs.
+The new scheduler also includes support for implicit nonblocking \io, allowing applications to have more user-threads blocking on \io operations than there are \glspl{kthrd}.
 The implementation is based on @io_uring@, a recent addition to the Linux kernel, and achieves the same performance and fairness.
-To complete the picture, the idle sleep mechanism that goes along is presented.
-
-
+To complete the scheduler, an idle-sleep mechanism is implemented that significantly reduces wasted CPU cycles, making them available outside the application.
 
 \cleardoublepage
