- Timestamp: Sep 13, 2022, 3:07:25 PM (2 years ago)
- Branches: ADT, ast-experimental, master, pthread-emulation
- Children: 3606fe4
- Parents: 1c334d1
- Files: 1 edited
doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex
r1c334d1 → rfc96890

 13  Memcached~\cite{memcached} is an in-memory key-value store used in many production environments, \eg \cite{atikoglu2012workload}.
 14  The Memcached server is so popular there exists a full-featured front-end for performance testing, called @mutilate@~\cite{GITHUB:mutilate}.
-15  Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well the \io subsystem for sockets.
+15  Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well as the \io subsystem for sockets.
 16  Note that this experiment does not exercise the \io subsystem with regard to disk operations because Memcached is an in-memory server.

 48  For UDP connections, all the threads listen to a single UDP socket for incoming requests.
 49  Threads that are not currently dealing with another request ignore the incoming packet.
-50  One of the remaining, non busy, threads reads the request and sends the response.
+50  One of the remaining, non-busy, threads reads the request and sends the response.
 51  This implementation can lead to increased CPU \gls{load} as threads wake from sleep to potentially process the request.
 52  \end{itemize}

 110 Again, each experiment is run 15 times with the median, maximum and minimum plotted with different lines.
 111 As expected, the latency starts low and increases as the server gets close to saturation, at which point, the latency increases dramatically because the web servers cannot keep up with the connection rate so client requests are disproportionally delayed.
-112 Because of this dramatic increase, the Y axis is presented using a log scale.
+112 Because of this dramatic increase, the Y-axis is presented using a log scale.
 113 Note that the graph shows the \emph{target} query rate, the actual response rate is given in Figure~\ref{fig:memcd:rate:qps} as this is the same underlying experiment.
 117 Vanilla Memcached achieves the lowest latency until 600K, after which all the web servers are struggling to respond to client requests.
 118 \CFA begins to decline at 600K, indicating some bottleneck after saturation.
-119 Overall, all three web servers achieve micro-second latencies and the increases in latency mostly follow each other.
+119 Overall, all three web servers achieve microsecond latencies and the increases in latency mostly follow each other.

 121 \subsection{Update rate}

 269 \begin{itemize}
 270 \item
-271 A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load-generator to run on a separate CPU.
+271 A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load generator to run on a separate CPU.
 272 \item
 273 It has two 2.8 GHz Xeon CPUs, and four one-gigabit Ethernet cards.

 285 To measure web server throughput, the server computer is loaded with 21,600 files, sharded across 650 directories, occupying about 2.2GB of disk, distributed over the server's RAID-5 4-drives to achieve high throughput for disk I/O.
 286 The clients run httperf~\cite{httperf} to request a set of static files.
-287 The httperf load-generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
+287 The httperf load generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
 288 This permits httperf to produce overload conditions, generate multiple requests from persistent HTTP/1.1 connections, and include both active and inactive off periods to model browser processing times and user think times~\cite{Barford98}.

 290 The experiments are run with 16 clients, each running a copy of httperf (one copy per CPU), requiring a set of 16 log files with requests conforming to a Zipf distribution.
 291 This distribution is representative of users accessing static data through a web browser.
-292 Each request reads a file name from its trace, establishes a connection, performs an HTTP get-request for the file name, receives the file data, closes the connection, and repeats the process.
+292 Each request reads a file name from its trace, establishes a connection, performs an HTTP GET request for the file name, receives the file data, closes the connection, and repeats the process.
 293 Some trace elements have multiple file names that are read across a persistent connection.
 294 A client times out if the server does not complete a request within 10 seconds.