Changeset 38e4c5c


Timestamp: Aug 31, 2022, 3:00:59 PM (2 years ago)
Author: Thierry Delisle <tdelisle@…>
Branches: ADT, ast-experimental, master, pthread-emulation
Children: 9d67a6d
Parents: 594e1db
Message: Re-wrote nginx threading section and fixed typo

File: 1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

    r594e1db r38e4c5c  
    180  180  
    181  181  \subsection{NGINX threading}
         182  NGINX is a high-performance, \emph{full-service}, event-driven webserver.
         183  It can handle both static and dynamic web content, as well as serve as a reverse proxy and a load balancer~\cite{reese2008nginx}.
         184  This wealth of features comes with a variety of possible configurations, dictating both available features and performance.
         185  The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
         186  When running as a static webserver, NGINX uses an event-driven architecture to serve incoming requests.
         187  Incoming connections are assigned a \emph{stackless} HTTP state-machine, and worker processes can potentially handle thousands of these state machines.
         188  For the following experiment, NGINX was configured to use epoll to listen for events on these state machines and to have each worker process independently accept new connections.
         189  Because of the realities of Linux, discussed in Subsection~\ref{ononblock}, NGINX also maintains a pool of auxiliary threads to handle blocking \io.
         190  The configuration can be used to set the number of worker processes desired, as well as the size of the auxiliary pool.
         191  However, for the following experiments, NGINX was configured to let the master process decide the appropriate number of threads.
         192  
         193  
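As a rough illustration of the architecture just described, the following C code is a minimal sketch (not NGINX's actual implementation) of an event-driven worker loop: one kernel thread uses epoll to multiplex a listening socket and many connections, with each connection represented by a small stackless state machine. The port number, the conn structure, and the elided request/response handling are placeholder assumptions.

#define _GNU_SOURCE
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdlib.h>

enum conn_state { READING_REQUEST, WRITING_RESPONSE };

struct conn {                        // the per-connection "stackless" state machine
	int fd;
	enum conn_state state;
};

int main(void) {
	// listening socket, marked non-blocking so accepting never blocks the loop
	int listen_fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	struct sockaddr_in addr = { 0 };
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);         // placeholder port
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(listen_fd, SOMAXCONN);

	int ep = epoll_create1(0);
	struct epoll_event ev;
	ev.events = EPOLLIN;
	ev.data.ptr = NULL;                  // NULL marks the listening socket
	epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

	struct epoll_event events[64];
	for (;;) {
		int n = epoll_wait(ep, events, 64, -1);
		for (int i = 0; i < n; i += 1) {
			if (events[i].data.ptr == NULL) {
				// new connection: create its state machine and register it
				int cfd = accept4(listen_fd, NULL, NULL, SOCK_NONBLOCK);
				if (cfd < 0) continue;
				struct conn * c = malloc(sizeof(struct conn));
				c->fd = cfd;
				c->state = READING_REQUEST;
				struct epoll_event cev;
				cev.events = EPOLLIN;
				cev.data.ptr = c;
				epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
			} else {
				// existing connection: advance its state machine without blocking,
				// re-arming the socket for EPOLLIN or EPOLLOUT as needed
				struct conn * c = events[i].data.ptr;
				(void)c;             // request parsing and response writing elided
			}
		}
	}
}

A real server additionally handles partial reads and writes, connection teardown, timeouts, and error paths, all omitted here.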
    182  194  % Like memcached, NGINX can be made to use multiple \glspl{kthrd}.
    183  195  % It has a very similar architecture to the memcached architecture described in Section~\ref{memcd:thrd}, where multiple \glspl{kthrd} each run a mostly independent network logic.
     
    185  197  % Each worker thread handles multiple connections exclusively, effectively dividing the connections into distinct sets.
    186  198  % Again, this is effectively the \emph{event-based server} approach.
    187       % 
         199  %
    188  200  % \cit{https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/}
    189  201  
    190       NGINX 1.13.7 is an high-performance, \emph{full-service}, event-driven, with multiple operating-system processes or multiple kernel-threads within a process to handle blocking I/O.
    191       It can also serve as a reverse proxy and a load balancer~\cite{reese2008nginx}.
    192       Concurrent connections are handled using a complex event-driven architecture.
    193       The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
    194       NGINX uses a disk-based cache for performance, and assigns a dedicated process to manage the cache.
    195       This process, known as the \textit{cache manager}, is spun-off by the master process.
    196       Additionally, there can be many \textit{worker processes}, each handling network connections, reading and writing disk files, and communicating with upstream servers, such as reverse proxies or databases.
    197       
    198       A worker is a single-threaded process, running independently of other workers.
    199       The worker process handles new incoming connections and processes them.
    200       Workers communicate using shared memory for shared cache and session data, and other shared resources.
    201       Each worker assigns incoming connections to an HTTP state-machine.
    202       As in a typical event-driven architecture, the worker listens for events from the clients, and responds immediately without blocking.
    203       Memory use in NGINX is very conservative, because it does not spin up a new process or thread per connection, like Apache~\cite{apache} or \CFA.
    204       All operations are asynchronous -- implemented using event notifications, callback functions and fine-tuned timers.
         202  % NGINX 1.13.7 is an high-performance, \emph{full-service}, event-driven, with multiple operating-system processes or multiple kernel-threads within a process to handle blocking I/O.
         203  % It can also .
         204  % Concurrent connections are handled using a complex event-driven architecture.
         205  % The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
         206  % NGINX uses a disk-based cache for performance, and assigns a dedicated process to manage the cache.
         207  % This process, known as the \textit{cache manager}, is spun-off by the master process.
         208  % Additionally, there can be many \textit{worker processes}, each handling network connections, reading and writing disk files, and communicating with upstream servers, such as reverse proxies or databases.
         209  
         210  % A worker is a single-threaded process, running independently of other workers.
         211  % The worker process handles new incoming connections and processes them.
         212  % Workers communicate using shared memory for shared cache and session data, and other shared resources.
         213  % Each worker assigns
         214  % As in a typical event-driven architecture, the worker listens for events from the clients, and responds immediately without blocking.
         215  % Memory use in NGINX is very conservative, because it does not spin up a new process or thread per connection, like Apache~\cite{apache} or \CFA.
         216  % All operations are asynchronous -- implemented using event notifications, callback functions and fine-tuned timers.
    205  217  
    206  218  \subsection{\CFA webserver}
     
    227  239  This effect results in a feedback loop where more timeouts lead to more @sendfile@ calls running out of resources.
    228  240  
    229       Normally, this problem is address by using @select@/@epoll@ to wait for sockets to have sufficient resources.
         241  Normally, this problem is addressed by using @select@/@epoll@ to wait for sockets to have sufficient resources.
    230  242  However, since @io_uring@ does not support @sendfile@ but does respect non\-blocking semantics, marking all sockets as non-blocking effectively circumvents the @io_uring@ subsystem entirely:
    231  243  all calls simply return @EAGAIN@ immediately and all asynchronicity is lost.
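The following C code is a minimal sketch of the traditional workaround described above, assuming an already-connected socket and an open file descriptor; the function name and the omission of error handling on the setup calls are illustrative only. Marking the socket non-blocking makes @sendfile@ return @EAGAIN@ when the socket buffer is full, at which point the caller waits for writability with epoll before retrying.

#include <sys/sendfile.h>
#include <sys/epoll.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

// Send 'count' bytes of 'file_fd', starting at 'offset', over the
// already-connected socket 'sock'. Returns 0 on success, -1 on error.
int send_whole_file(int sock, int file_fd, off_t offset, size_t count) {
	// mark the socket non-blocking: sendfile now returns EAGAIN instead of blocking
	fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

	int ep = epoll_create1(0);
	struct epoll_event ev = { .events = EPOLLOUT };
	epoll_ctl(ep, EPOLL_CTL_ADD, sock, &ev);

	while (count > 0) {
		ssize_t sent = sendfile(sock, file_fd, &offset, count);
		if (sent > 0) {
			count -= (size_t)sent;          // progress: the kernel advanced 'offset'
		} else if (sent < 0 && errno == EAGAIN) {
			struct epoll_event out;         // socket buffer full: block here, not in sendfile,
			epoll_wait(ep, &out, 1, -1);    // until the socket is writable again
		} else {
			close(ep);
			return -1;                      // genuine error (or unexpected end of file)
		}
	}
	close(ep);
	return 0;
}

Here epoll serves only as a readiness signal for retrying; @select@ or @poll@ would serve equally well.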
     
    363  375  However, I felt this kind of modification moves too far away from my goal of evaluating the \CFA runtime, \ie it begins writing another runtime system;
    364  376  hence, I decided to forgo experiments on low-memory performance.
    365       % The implementation of the webserver itself is simply too impactful to be an interesting evaluation of the underlying runtime.