Timestamp:
Sep 6, 2022, 4:05:00 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, ast-experimental, master, pthread-emulation
Children:
a44514e
Parents:
9f99799
Message:

Merged peter's last changes and filled in most of the TODOs

File:
1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

r9f99799 → r7a0f798b

@@ -166,5 +166,10 @@
                \label{fig:memcd:updt:vanilla:lat}
        }
-       \caption[Throughput and Latency results at different update rates (percentage of writes).]{Throughput and Latency results at different update rates (percentage of writes).\smallskip\newline \todo{Description}}
+       \caption[Throughput and Latency results at different update rates (percentage of writes).]{Throughput and Latency results at different update rates (percentage of writes).\smallskip\newline On the left, throughput as Desired vs.\ Actual query rate.
+       Target QPS is the query rate that the clients attempt to maintain, and Actual QPS is the rate at which the server is able to respond.
+       On the right, tail latency, \ie the 99th percentile of the response latency, as a function of the \emph{desired} query rate.
+       For throughput, higher is better; for tail latency, lower is better.
+       Each series represents 15 independent runs: the dashed lines are the maximums of each series, the solid lines are the medians, and the dotted lines are the minimums.
+       All runs have 15,360 client connections.}
        \label{fig:memcd:updt}
 \end{figure}
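In the caption above, the tail latency plotted on the right is the 99th percentile of the per-request response latencies, and each series is summarized over its 15 runs by the minimum, median, and maximum. A minimal sketch of such summary statistics, assuming a nearest-rank percentile definition and made-up latency values (illustrative C, not the thesis's benchmark harness):

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare doubles for qsort. */
    static int cmp_double(const void * a, const void * b) {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Nearest-rank percentile of a sorted array (p in [0,100]). */
    static double percentile(const double * sorted, size_t n, double p) {
        size_t rank = (size_t)((p / 100.0) * n + 0.5);  /* round to nearest rank */
        if (rank == 0) rank = 1;
        if (rank > n) rank = n;
        return sorted[rank - 1];
    }

    int main(void) {
        /* Hypothetical response latencies (microseconds) from one run. */
        double lat[] = { 180, 210, 195, 2300, 205, 190, 220, 185, 215, 200 };
        size_t n = sizeof(lat) / sizeof(lat[0]);

        qsort(lat, n, sizeof(double), cmp_double);

        /* Summary statistics: min, median, max, and 99th-percentile tail latency. */
        printf("min    = %.0f us\n", lat[0]);
        printf("median = %.0f us\n", percentile(lat, n, 50));
        printf("max    = %.0f us\n", lat[n - 1]);
        printf("p99    = %.0f us\n", percentile(lat, n, 99));
        return 0;
    }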
     
@@ -194,29 +199,4 @@
 However, for the following experiments, NGINX is configured to let the master process decide the appropriate number of threads.
 
-
-% Like memcached, NGINX can be made to use multiple \glspl{kthrd}.
-% It has a very similar architecture to the memcached architecture described in Section~\ref{memcd:thrd}, where multiple \glspl{kthrd} each run a mostly independent network logic.
-% While it does not necessarily use a dedicated listening thread, each connection is arbitrarily assigned to one of the \newterm{worker} threads.
-% Each worker thread handles multiple connections exclusively, effectively dividing the connections into distinct sets.
-% Again, this is effectively the \emph{event-based server} approach.
-%
-% \cit{https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/}
-
-% NGINX 1.13.7 is an high-performance, \emph{full-service}, event-driven, with multiple operating-system processes or multiple kernel-threads within a process to handle blocking I/O.
-% It can also .
-% Concurrent connections are handled using a complex event-driven architecture.
-% The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
-% NGINX uses a disk-based cache for performance, and assigns a dedicated process to manage the cache.
-% This process, known as the \textit{cache manager}, is spun-off by the master process.
-% Additionally, there can be many \textit{worker processes}, each handling network connections, reading and writing disk files, and communicating with upstream servers, such as reverse proxies or databases.
-
-% A worker is a single-threaded process, running independently of other workers.
-% The worker process handles new incoming connections and processes them.
-% Workers communicate using shared memory for shared cache and session data, and other shared resources.
-% Each worker assigns
-% As in a typical event-driven architecture, the worker listens for events from the clients, and responds immediately without blocking.
-% Memory use in NGINX is very conservative, because it does not spin up a new process or thread per connection, like Apache~\cite{apache} or \CFA.
-% All operations are asynchronous -- implemented using event notifications, callback functions and fine-tuned timers.
-
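The removed comments above describe NGINX's event-based worker approach: each worker owns a distinct set of connections and reacts to readiness events without blocking. A minimal sketch of such a worker, assuming Linux epoll and an echo handler as a stand-in for request processing (illustrative C, not NGINX's actual worker code; error handling omitted):

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    /* One event-based worker: it owns its own epoll instance, so its
     * connections form a distinct set, and it never blocks on a single
     * connection -- it only reacts to readiness events. */
    static void worker_loop(int listen_fd) {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event events[MAX_EVENTS];
        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* New connection: add it to this worker's set. */
                    int conn = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                } else {
                    /* Readiness event: handle it without blocking the loop. */
                    char buf[4096];
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0) { close(fd); continue; }
                    write(fd, buf, (size_t)len);  /* echo stands in for request handling */
                }
            }
        }
    }

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8080),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);
        worker_loop(listen_fd);  /* a real server would run one worker per core */
        return 0;
    }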
 \subsection{\CFA webserver}
 The \CFA webserver is a straightforward thread-per-connection webserver, where a fixed number of \ats are created upfront.
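For contrast with the event-based approach, a thread-per-connection server with a fixed number of threads created upfront can be sketched as follows. This is a plain C/pthreads sketch in which the \ats (user-level threads) are approximated by kernel threads, not the actual \CFA implementation; the port, thread count, and echo handler are assumptions:

    #include <pthread.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    #define NTHREADS 256  /* fixed number of threads, created upfront */

    /* Each thread repeatedly accepts one connection and serves it to
     * completion, blocking as needed; concurrency comes from the pool size. */
    static void * conn_thread(void * arg) {
        int listen_fd = *(int *)arg;
        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn < 0) continue;
            char buf[4096];
            ssize_t len;
            while ((len = read(conn, buf, sizeof(buf))) > 0)
                write(conn, buf, (size_t)len);  /* echo stands in for request handling */
            close(conn);
        }
        return NULL;
    }

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8080),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 128);

        pthread_t tid[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)  /* create the fixed pool upfront */
            pthread_create(&tid[i], NULL, conn_thread, &listen_fd);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }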