Timestamp: Sep 13, 2022, 3:07:25 PM
Author: Thierry Delisle <tdelisle@…>
Branches: ADT, ast-experimental, master, pthread-emulation
Children: 3606fe4
Parents: 1c334d1
Message: Ran second spell/grammar checker and now the cows have come home
File: 1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex

    r1c334d1 rfc96890  
    13  13   Memcached~\cite{memcached} is an in-memory key-value store used in many production environments, \eg \cite{atikoglu2012workload}.
    14  14   The Memcached server is so popular there exists a full-featured front-end for performance testing, called @mutilate@~\cite{GITHUB:mutilate}.
    15      -Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well the \io subsystem for sockets.
        15  +Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well as the \io subsystem for sockets.
    16  16   Note that this experiment does not exercise the \io subsystem with regard to disk operations because Memcached is an in-memory server.
    17  17
     
    48  48   For UDP connections, all the threads listen to a single UDP socket for incoming requests.
    49  49   Threads that are not currently dealing with another request ignore the incoming packet.
    50      -One of the remaining, nonbusy, threads reads the request and sends the response.
        50  +One of the remaining, non-busy, threads reads the request and sends the response.
    51  51   This implementation can lead to increased CPU \gls{load} as threads wake from sleep to potentially process the request.
    52  52   \end{itemize}
     
    110 110  Again, each experiment is run 15 times with the median, maximum and minimum plotted with different lines.
    111 111  As expected, the latency starts low and increases as the server gets close to saturation, at which point, the latency increases dramatically because the web servers cannot keep up with the connection rate so client requests are disproportionally delayed.
    112     -Because of this dramatic increase, the Y axis is presented using a log scale.
        112 +Because of this dramatic increase, the Y-axis is presented using a log scale.
    113 113  Note that the graph shows the \emph{target} query rate, the actual response rate is given in Figure~\ref{fig:memcd:rate:qps} as this is the same underlying experiment.
    114 114
     
    117 117  Vanilla Memcached achieves the lowest latency until 600K, after which all the web servers are struggling to respond to client requests.
    118 118  \CFA begins to decline at 600K, indicating some bottleneck after saturation.
    119     -Overall, all three web servers achieve micro-second latencies and the increases in latency mostly follow each other.
        119 +Overall, all three web servers achieve microsecond latencies and the increases in latency mostly follow each other.
    120 120
    121 121  \subsection{Update rate}
     
    269 269  \begin{itemize}
    270 270  \item
    271     -A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load-generator to run on a separate CPU.
        271 +A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load generator to run on a separate CPU.
    272 272  \item
    273 273  It has two 2.8 GHz Xeon CPUs, and four one-gigabit Ethernet cards.
     
    285 285  To measure web server throughput, the server computer is loaded with 21,600 files, sharded across 650 directories, occupying about 2.2GB of disk, distributed over the server's RAID-5 4-drives to achieve high throughput for disk I/O.
    286 286  The clients run httperf~\cite{httperf} to request a set of static files.
    287     -The httperf load-generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
        287 +The httperf load generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
    288 288  This permits httperf to produce overload conditions, generate multiple requests from persistent HTTP/1.1 connections, and include both active and inactive off periods to model browser processing times and user think times~\cite{Barford98}.
    289 289
    290 290  The experiments are run with 16 clients, each running a copy of httperf (one copy per CPU), requiring a set of 16 log files with requests conforming to a Zipf distribution.
    291 291  This distribution is representative of users accessing static data through a web browser.
    292     -Each request reads a file name from its trace, establishes a connection, performs an HTTP get-request for the file name, receives the file data, closes the connection, and repeats the process.
        292 +Each request reads a file name from its trace, establishes a connection, performs an HTTP GET request for the file name, receives the file data, closes the connection, and repeats the process.
    293 293  Some trace elements have multiple file names that are read across a persistent connection.
    294 294  A client times out if the server does not complete a request within 10 seconds.