Index: doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex	(revision e5c04b99195bbe87492bf8f6370fc4f500485135)
+++ doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex	(revision 38e4c5ca154e6e38b98994e851af6694912e841d)
@@ -180,4 +180,16 @@
 
 \subsection{NGINX threading}
+NGINX is a high-performance, \emph{full-service}, event-driven webserver.
+It can handle both static and dynamic web content, as well as serve as a reverse proxy and a load balancer~\cite{reese2008nginx}.
+This wealth of features comes with a wide variety of configuration options, which dictate both the available features and the resulting performance.
+The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
+When running as a static webserver, NGINX uses an event-driven architecture to serve incoming requests.
+Incoming connections are assigned a \emph{stackless} HTTP state-machine and each worker process can potentially handle thousands of these state machines.
+For the following experiment, NGINX was configured to use epoll to listen for events on these state machines and have each worker process independently accept new connections.
+Because of the realities of Linux, see Subsection~\ref{ononblock}, NGINX also maintains a pool of auxiliary threads to handle blocking \io.
+The configuration can be used to set the number of worker processes desired, as well as the size of the auxiliary pool.
+However, for the following experiments, NGINX was configured to let the master process decide the appropriate number of threads.
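A configuration fragment selecting these options might look as follows; the directives are standard NGINX ones, but the values shown are illustrative sketches, not the exact settings used in the experiments:

```
worker_processes auto;            # master process picks the worker count
thread_pool default threads=32;   # auxiliary pool for blocking I/O

events {
    use epoll;                    # event-notification mechanism
    accept_mutex off;             # each worker accepts connections independently
    worker_connections 1024;      # connection state machines per worker
}

http {
    sendfile on;                  # serve static files with sendfile
    aio threads=default;          # offload blocking disk I/O to the pool
}
```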
+
+
 % Like memcached, NGINX can be made to use multiple \glspl{kthrd}.
 % It has a very similar architecture to the memcached architecture described in Section~\ref{memcd:thrd}, where multiple \glspl{kthrd} each run a mostly independent network logic.
@@ -185,22 +197,22 @@
 % Each worker thread handles multiple connections exclusively, effectively dividing the connections into distinct sets.
 % Again, this is effectively the \emph{event-based server} approach.
-% 
+%
 % \cit{https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/}
 
-NGINX 1.13.7 is an high-performance, \emph{full-service}, event-driven, with multiple operating-system processes or multiple kernel-threads within a process to handle blocking I/O.
-It can also serve as a reverse proxy and a load balancer~\cite{reese2008nginx}.
-Concurrent connections are handled using a complex event-driven architecture.
-The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
-NGINX uses a disk-based cache for performance, and assigns a dedicated process to manage the cache.
-This process, known as the \textit{cache manager}, is spun-off by the master process.
-Additionally, there can be many \textit{worker processes}, each handling network connections, reading and writing disk files, and communicating with upstream servers, such as reverse proxies or databases.
-
-A worker is a single-threaded process, running independently of other workers.
-The worker process handles new incoming connections and processes them.
-Workers communicate using shared memory for shared cache and session data, and other shared resources.
-Each worker assigns incoming connections to an HTTP state-machine.
-As in a typical event-driven architecture, the worker listens for events from the clients, and responds immediately without blocking.
-Memory use in NGINX is very conservative, because it does not spin up a new process or thread per connection, like Apache~\cite{apache} or \CFA.
-All operations are asynchronous -- implemented using event notifications, callback functions and fine-tuned timers.
+% NGINX 1.13.7 is an high-performance, \emph{full-service}, event-driven, with multiple operating-system processes or multiple kernel-threads within a process to handle blocking I/O.
+% It can also .
+% Concurrent connections are handled using a complex event-driven architecture.
+% The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
+% NGINX uses a disk-based cache for performance, and assigns a dedicated process to manage the cache.
+% This process, known as the \textit{cache manager}, is spun-off by the master process.
+% Additionally, there can be many \textit{worker processes}, each handling network connections, reading and writing disk files, and communicating with upstream servers, such as reverse proxies or databases.
+
+% A worker is a single-threaded process, running independently of other workers.
+% The worker process handles new incoming connections and processes them.
+% Workers communicate using shared memory for shared cache and session data, and other shared resources.
+% Each worker assigns
+% As in a typical event-driven architecture, the worker listens for events from the clients, and responds immediately without blocking.
+% Memory use in NGINX is very conservative, because it does not spin up a new process or thread per connection, like Apache~\cite{apache} or \CFA.
+% All operations are asynchronous -- implemented using event notifications, callback functions and fine-tuned timers.
 
 \subsection{\CFA webserver}
@@ -227,5 +239,5 @@
 This effect results in a negative feedback where more timeouts lead to more @sendfile@ calls running out of resources.
 
-Normally, this problem is address by using @select@/@epoll@ to wait for sockets to have sufficient resources.
+Normally, this problem is addressed by using @select@/@epoll@ to wait for sockets to have sufficient resources.
 However, since @io_uring@ does not support @sendfile@ but does respect non\-blocking semantics, marking all sockets as non-blocking effectively circumvents the @io_uring@ subsystem entirely:
 all calls simply immediately return @EAGAIN@ and all asynchronicity is lost.
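The classic readiness-based pattern being circumvented here can be sketched in C; this is an illustrative sketch with a hypothetical helper name, not the webserver's actual code. With @O_NONBLOCK@ set on the socket, @sendfile@ fails with @EAGAIN@ when the send buffer is full, and the caller falls back to @epoll@ to wait until the socket is writable again:

```c
#include <errno.h>
#include <sys/epoll.h>
#include <sys/sendfile.h>
#include <sys/types.h>

/* Sketch: transfer len bytes of a file over a non-blocking socket.
 * When the socket's send buffer is full, sendfile fails with EAGAIN,
 * so fall back to epoll and wait until the socket is writable again. */
static int send_file_nonblock( int epfd, int sock, int file, off_t * off, size_t len ) {
    while ( len > 0 ) {
        ssize_t n = sendfile( sock, file, off, len );   // advances *off on success
        if ( n > 0 ) {
            len -= (size_t)n;
        } else if ( n < 0 && errno == EAGAIN ) {
            // Send buffer full: block on epoll until EPOLLOUT fires.
            struct epoll_event ev;
            if ( epoll_wait( epfd, &ev, 1, -1 ) < 0 ) return -1;
        } else {
            return -1;          // genuine error (or unexpected zero-byte transfer)
        }
    }
    return 0;
}
```

Under @io_uring@, submitting operations on such non-blocking sockets yields the same immediate @EAGAIN@ completions, which is why the asynchrony is lost.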
@@ -363,3 +375,2 @@
 However, I felt this kind of modification moves too far away from my goal of evaluating the \CFA runtime, \ie it begins writing another runtime system;
 hence, I decided to forgo experiments on low-memory performance.
-% The implementation of the webserver itself is simply too impactful to be an interesting evaluation of the underlying runtime.
