\chapter{Macro-Benchmarks}\label{macrobench}
The previous chapter demonstrated that the \CFA scheduler achieves its goal of equivalent performance in small and controlled \at-scheduling scenarios.
The next step is to demonstrate that this performance holds in more realistic and complete scenarios.
Therefore, this chapter exercises both \at and I/O scheduling using two flavours of webserver, demonstrating that \CFA performs competitively with production environments.

Webservers are chosen because they are fairly simple applications that perform complex I/O, both network and disk, and are useful as standalone products.
Furthermore, webservers are generally amenable to parallelization since their workloads are mostly homogeneous.
Therefore, webservers offer a stringent performance benchmark for \CFA.
Indeed, existing webservers have close to optimal performance, while the homogeneity of the workload means fairness may not be a problem.
As such, these experiments should highlight any \CFA fairness cost (overhead) in realistic scenarios.

\section{Memcached}
Memcached~\cite{memcached} is an in-memory key-value store used in many production environments, \eg \cite{atikoglu2012workload}.
In fact, the Memcached server is so popular there exists a full-featured front-end for performance testing, called @mutilate@~\cite{GITHUB:mutilate}.
Experimenting on Memcached allows for a simple test of the \CFA runtime as a whole, exercising the scheduler, the idle-sleep mechanism, as well as the \io subsystem for sockets.
Note, this experiment does not exercise the \io subsystem with regard to disk operations because Memcached is an in-memory server.

\subsection{Benchmark Environment}
The Memcached experiments are run on a cluster of homogeneous Supermicro SYS-6017R-TDF compute nodes with the following characteristics.
\begin{itemize}
\item
The server runs Ubuntu 20.04.3 LTS on top of Linux Kernel 5.11.0-34.
\item
Each node has two Intel(R) Xeon(R) E5-2620 v2 CPUs running at 2.10GHz.
\item
These CPUs have 6 cores per CPU and 2 \glspl{hthrd} per core, for a total of 24 \glspl{hthrd}.
\item
The CPUs each have 384 KB, 3 MB and 30 MB of L1, L2 and L3 caches respectively.
\item
Each node is connected to the network through a Mellanox 10 Gigabit Ethernet port.
\item
Network routing is performed by a Mellanox SX1012 10/40 Gigabit Ethernet switch.
\end{itemize}

\subsection{Memcached threading}
Memcached can be built to use multiple threads in addition to its @libevent@ subsystem to handle requests.
When enabled, the threading implementation operates as follows~\cite{https://docs.oracle.com/cd/E17952_01/mysql-5.6-en/ha-memcached-using-threads.html}:
\begin{itemize}
\item
Threading is handled by wrapping functions within the code to provide basic protection from updating the same global structures at the same time.
\item
Each thread uses its own instance of @libevent@ to help improve performance.
\item
TCP/IP connections are handled with a single thread listening on the TCP/IP socket.
Each connection is then distributed to one of the active threads on a simple round-robin basis (see the sketch following this list).
Each connection then operates solely within this thread while the connection remains open.
\item
For UDP connections, all the threads listen to a single UDP socket for incoming requests.
Threads that are currently dealing with another request ignore the incoming packet.
One of the remaining, nonbusy, threads reads the request and sends the response.
This implementation can lead to increased CPU load as threads wake from sleep to potentially process the request.
\end{itemize}
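
To make the TCP hand-off concrete, the following is a minimal C sketch of this dispatch pattern.
It is illustrative only, \emph{not} Memcached's actual code: the worker count, @worker_pipe@, and @handle_connection@ are invented for the example, and error handling is elided.
\begin{lstlisting}
#include <pthread.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

enum { NWORKERS = 4 };
static int worker_pipe[NWORKERS][2];        // one hand-off pipe per worker

static void handle_connection( int fd ) {   // stub: the real code runs a
	/* read request(s), write response(s) */ // per-thread libevent loop here
	close( fd );
}

static void * worker( void * arg ) {        // one of the active threads
	int id = (int)(long)arg, conn;
	// a worker only sees connections handed to it, so per-connection state
	// needs no cross-thread synchronization
	while ( read( worker_pipe[id][0], &conn, sizeof(conn) ) == sizeof(conn) )
		handle_connection( conn );
	return NULL;
}

int main( void ) {                          // single listener thread
	pthread_t tid;
	for ( long i = 0; i < NWORKERS; i += 1 ) {
		pipe( worker_pipe[i] );
		pthread_create( &tid, NULL, worker, (void *)i );
	}
	int fd = socket( AF_INET, SOCK_STREAM, 0 );
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons( 11211 ) };
	bind( fd, (struct sockaddr *)&addr, sizeof(addr) );
	listen( fd, 128 );
	for ( int next = 0;; next = (next + 1) % NWORKERS ) {  // round robin
		int conn = accept( fd, NULL, NULL );
		if ( conn >= 0 )
			write( worker_pipe[next][1], &conn, sizeof(conn) );  // hand off the fd
	}
}
\end{lstlisting}
Because a connection never migrates between threads, this design needs no locking on per-connection state, but its fairness depends on connections imposing similar loads.
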
Here, Memcached is based on an event-based webserver architecture~\cite{Pai99Flash}, using \gls{kthrd}ing to run multiple (largely) independent event engines, and if needed, spinning up additional kernel threads to handle blocking I/O.
Alternative webserver architectures are:
\begin{itemize}
\item
pipeline~\cite{Welsh01}, where the event engine is subdivided into multiple stages and the stages are connected with asynchronous buffers, where the final stage has multiple threads to handle blocking I/O.
\item
thread-per-connection~\cite{apache,Behren03}, where each incoming connection is served by a single \at in a strict 1-to-1 pairing, using the thread stack to hold the event state and folding the event engine implicitly into the threading runtime with its nonblocking I/O mechanism.
\end{itemize}
Both pipelining and thread-per-connection add flexibility to the implementation, as the serving logic can now block without halting the event engine~\cite{Harji12}.

However, \gls{kthrd}ing in Memcached is not amenable to this work, which is based on \gls{uthrding}.
While it is feasible to layer one user thread per kernel thread, it is not meaningful as it fails to exercise the user runtime;
it simply adds extra scheduling overhead over the kernel threading.
Hence, there is no direct way to compare Memcached using a kernel-level runtime with a user-level runtime.

Fortunately, there exists a recent port of Memcached to \gls{uthrding} based on the libfibre~\cite{DBLP:journals/pomacs/KarstenB20} \gls{uthrding} library.
This port did all the heavy lifting, making it straightforward to replace the libfibre user-threading with the \gls{uthrding} in \CFA.
It is now possible to compare the original kernel-threading Memcached with both user-threading runtimes, libfibre and \CFA.

As such, this Memcached experiment compares 3 different variations of Memcached:
\begin{itemize}
\item \emph{vanilla}: the official release of Memcached, version~1.6.9.
\item \emph{fibre}: a modification of vanilla using the thread-per-connection model on top of the libfibre runtime.
\item \emph{cfa}: a modification of the fibre webserver that replaces the libfibre runtime with \CFA.
\end{itemize}

\subsection{Throughput} \label{memcd:tput}
This experiment is done by having the clients establish 15,360 total connections, which persist for the duration of the experiment.
The clients then send read and write queries with only 3\% writes (updates), attempting to follow a desired query rate, and the server responds to the desired rate as best as possible.
Figure~\ref{fig:memcd:rate:qps} shows the 3 server versions at different client rates, ``Target \underline{Q}ueries \underline{P}er \underline{S}econd'', and the actual rate, ``Actual QPS'', for all three webservers.

Like the experimental setup in Chapter~\ref{microbench}, each experiment is run 15 times, and for each client rate, the measured webserver rate is plotted.
The solid line represents the median while the dashed and dotted lines represent the maximum and minimum respectively.
For rates below 500K queries per second, all three webservers match the client rate.
Beyond 500K, the webservers cannot match the client rate.
During this interval, vanilla Memcached achieves the highest webserver throughput, with libfibre and \CFA achieving slightly lower but very similar throughput.
Overall, the performance of all three webservers is very similar, especially considering that at 500K the servers have reached saturation, which is discussed more in the next section.

\begin{figure}
\centering
\resizebox{0.83\linewidth}{!}{\input{result.memcd.rate.qps.pstex_t}}
\caption[Memcached Benchmark: Throughput]{Memcached Benchmark: Throughput\smallskip\newline Desired vs Actual query rate for 15,360 connections. Target QPS is the query rate that the clients are attempting to maintain and Actual QPS is the rate at which the server is able to respond.}
\label{fig:memcd:rate:qps}
%\end{figure}
\bigskip
%\begin{figure}
\centering
\resizebox{0.83\linewidth}{!}{\input{result.memcd.rate.99th.pstex_t}}
\caption[Memcached Benchmark: 99th Percentile Latency]{Memcached Benchmark: 99th Percentile Latency\smallskip\newline 99th percentile of the response latency as a function of \emph{desired} query rate for 15,360 connections.}
\label{fig:memcd:rate:tail}
\end{figure}

\subsection{Tail Latency}
Another popular performance metric is \newterm{tail} latency, which indicates some notion of fairness among requests across the experiment, \ie do some requests wait longer than other requests for service.
Since many web applications rely on a combination of different queries made in parallel, the latency of the slowest response, \ie tail latency, can dictate a performance perception.
Figure~\ref{fig:memcd:rate:tail} shows the 99th percentile latency results for the same Memcached experiment.

Again, each experiment is run 15 times with the median, maximum and minimum plotted with different lines.
As expected, the latency starts low and increases as the server gets close to saturation, at which point the latency increases dramatically because the webservers cannot keep up with the connection rate, so client requests are disproportionately delayed.
Because of this dramatic increase, the Y axis is presented using a log scale.
Note that the graph shows the \emph{target} query rate; the actual response rate is given in Figure~\ref{fig:memcd:rate:qps}, as this is the same underlying experiment.

For all three servers, the saturation point is reached before 500K queries per second, which is when throughput starts to decline among the webservers.
In this experiment, the three webservers are much more distinguishable than in the throughput experiment.
Vanilla Memcached achieves the lowest latency until 600K, after which all the webservers struggle to respond to client requests.
\CFA begins to decline at 600K, indicating some bottleneck after saturation.
Overall, all three webservers achieve microsecond latencies and the increases in latency mostly follow each other.

\subsection{Update rate}
Since Memcached is effectively a simple database, the cached information can be written to concurrently by multiple queries.
Because writes can significantly affect performance, it is interesting to see how varying the update rate affects performance.
Figure~\ref{fig:memcd:updt} shows the results for the same experiment as the throughput and latency experiments, but with the update percentage increased to 5\%, 10\% and 50\%, respectively, versus the original 3\%.

\begin{figure}
\subfloat[][\CFA: Throughput]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.forall.qps.pstex_t}
}
\label{fig:memcd:updt:forall:qps}
}
\subfloat[][\CFA: Latency]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.forall.lat.pstex_t}
}
\label{fig:memcd:updt:forall:lat}
}

\subfloat[][LibFibre: Throughput]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.fibre.qps.pstex_t}
}
\label{fig:memcd:updt:fibre:qps}
}
\subfloat[][LibFibre: Latency]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.fibre.lat.pstex_t}
}
\label{fig:memcd:updt:fibre:lat}
}

\subfloat[][Vanilla: Throughput]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.vanilla.qps.pstex_t}
}
\label{fig:memcd:updt:vanilla:qps}
}
\subfloat[][Vanilla: Latency]{
\resizebox{0.5\linewidth}{!}{
\input{result.memcd.vanilla.lat.pstex_t}
}
\label{fig:memcd:updt:vanilla:lat}
}
\caption[Throughput and Latency results at different update rates (percentage of writes).]{Throughput and Latency results at different update rates (percentage of writes).\smallskip\newline Description}
\label{fig:memcd:updt}
\end{figure}

In the end, this experiment mostly demonstrates that the performance of Memcached is affected very little by the update rate.
Indeed, since the values read or written can be bigger than what can be read or written atomically, a lock must be acquired while the value is accessed.
Hence, I believe the underlying locking pattern for reads and writes is fairly similar, if not the same.
These results suggest Memcached does not attempt to optimize reads/writes using a readers-writer lock to protect each value, and instead just relies on having a sufficient number of keys to limit contention.
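
To make this argument concrete, the following is a minimal C sketch of the hypothesized pattern: every key hashes to one mutex in a fixed array of locks, so reads and writes acquire locks identically and contention drops as the key space grows.
This is illustrative only, \emph{not} Memcached's implementation; the names and stripe count are invented.
\begin{lstlisting}
#include <pthread.h>
#include <stdint.h>

enum { NSTRIPES = 1024 };                  // more stripes => fewer collisions
static pthread_mutex_t stripe[NSTRIPES];   // initialized with pthread_mutex_init

static uint32_t hash_key( const char * key ) {  // FNV-1a string hash
	uint32_t h = 2166136261u;
	for ( ; *key != '\0'; key += 1 ) h = (h ^ (uint8_t)*key) * 16777619u;
	return h;
}

// Reads and writes take the same plain mutex, so the locking pattern, and
// hence performance, is largely independent of the update rate.
static void value_access( const char * key, void (*op)( void * ), void * arg ) {
	pthread_mutex_t * l = &stripe[ hash_key( key ) % NSTRIPES ];
	pthread_mutex_lock( l );
	op( arg );                             // read or write the non-atomic value
	pthread_mutex_unlock( l );
}
\end{lstlisting}
With enough stripes relative to hot keys, a plain mutex is expected to perform comparably to a readers-writer lock, which is consistent with the flat results above.
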
Overall, the update experiment shows that \CFA achieves equivalent performance.

\section{Static Web-Server}
The Memcached experiment does not exercise two key aspects of the \io subsystem: accept\-ing new connections and interacting with disks.
On the other hand, a webserver servicing static web-pages does stress both accepting connections and disk \io by accepting tens of thousands of client requests per second, where these requests return static data serviced from the file-system cache or disk.\footnote{
Webservers servicing dynamic requests, which read from multiple locations and construct a response, are not as interesting since creating the response takes more time and does not exercise the runtime in a meaningfully different way.}
The static webserver experiment compares NGINX~\cite{nginx} with a custom \CFA-based webserver developed for this experiment.

\subsection{\CFA webserver}
The \CFA webserver is a straightforward thread-per-connection webserver, where a fixed number of \ats are created upfront (a tuning parameter).
Each \at calls @accept@, through @io_uring@, on the listening port and handles the incoming connection once accepted.
Most of the implementation is fairly straightforward;
however, the inclusion of file \io uncovered an @io_uring@ problem that required an unfortunate workaround.
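
For reference, the following is a minimal C sketch of the per-\at accept loop; the names @listen_fd@ and @handle_connection@ are invented, and error handling is elided.
\begin{lstlisting}
#include <sys/socket.h>
#include <unistd.h>

static void handle_connection( int fd ) {    // parse GET request, send file
}

// One of N workers created upfront (the tuning parameter). In the actual
// server each worker is a user-level thread and accept/read/write are routed
// through the io_uring subsystem, so blocking here parks the user thread,
// not the underlying kernel thread.
static void worker_main( int listen_fd ) {
	for ( ;; ) {
		int conn = accept( listen_fd, NULL, NULL );  // wait for a connection
		if ( conn < 0 ) continue;                    // transient error: retry
		handle_connection( conn );
		close( conn );
	}
}
\end{lstlisting}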

Normally, webservers use @sendfile@~\cite{MAN:sendfile} to send files over a socket because it performs a direct move in the kernel from the file-system cache to the NIC, eliminating reading/writing the file into the webserver.
While @io_uring@ does not support @sendfile@, it does support @splice@~\cite{MAN:splice}, which is strictly more powerful.
However, because of how Linux implements file \io, see Subsection~\ref{ononblock}, @io_uring@ must delegate splice calls to worker threads inside the kernel.
As of Linux 5.13, @io_uring@ had no mechanism to restrict the number of worker threads, and therefore, when tens of thousands of splice requests are made, it correspondingly creates tens of thousands of internal \glspl{kthrd}.
Such a high number of \glspl{kthrd} slows Linux significantly.
Rather than abandon the experiment, the \CFA webserver was switched to nonblocking @sendfile@.
However, when the nonblocking @sendfile@ returns @EAGAIN@, the \CFA server cannot block the \at because its I/O subsystem uses @io_uring@.
Therefore, the \at must spin performing the @sendfile@, yielding if the call returns @EAGAIN@.
This workaround works up to the saturation point, at which point other problems occur.

At saturation, latency increases, so some client connections time out.
As these clients close their connections, the server must close its corresponding side without delay so the OS can reclaim the resources used by these connections.
Indeed, until the server connection is closed, the connection lingers in the CLOSE-WAIT TCP state~\cite{rfc:tcp} and the TCP buffers are preserved.
However, this poses a problem using nonblocking @sendfile@ calls:
the call can still block if there is insufficient memory, which can be caused by having too many connections in the CLOSE-WAIT state.\footnote{
\lstinline{sendfile} can always block even in nonblocking mode if the file to be sent is not in the file-system cache, because Linux does not provide nonblocking disk I/O.}
When @sendfile@ blocks, the \proc rather than the \at blocks, preventing other connections from closing their sockets.
This effect results in a negative feedback loop where more timeouts lead to more @sendfile@ calls running out of resources.

Normally, this problem is addressed by using @select@/@epoll@ to wait for sockets to have sufficient resources.
However, since @io_uring@ respects nonblocking semantics, marking all sockets as nonblocking effectively circumvents the @io_uring@ subsystem entirely.
For this reason, the \CFA webserver sets and resets the @O_NONBLOCK@ flag before and after any calls to @sendfile@.
Normally, @epoll@ would also be used when these calls to @sendfile@ return @EAGAIN@, but since this would not help in the evaluation of the \CFA runtime, the \CFA webserver simply yields and retries in these cases.
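
The resulting send path is compact; the following is a minimal C sketch of it, assuming a @yield()@ call provided by the user-level runtime; the helper name and structure are invented, and error handling is simplified.
\begin{lstlisting}
#include <sys/sendfile.h>
#include <sys/types.h>
#include <fcntl.h>
#include <errno.h>

extern void yield( void );     // user-level runtime yield (assumed available)

// Send 'count' bytes of file 'in_fd' over socket 'out_fd'. O_NONBLOCK is set
// only around the sendfile calls, so the socket's other I/O still flows
// through io_uring with blocking semantics.
static int send_reply( int out_fd, int in_fd, off_t offset, size_t count ) {
	int flags = fcntl( out_fd, F_GETFL );
	fcntl( out_fd, F_SETFL, flags | O_NONBLOCK );      // enter nonblocking mode
	while ( count > 0 ) {
		ssize_t sent = sendfile( out_fd, in_fd, &offset, count );
		if ( sent > 0 ) { count -= sent; continue; }   // partial progress
		if ( sent == 0 ) break;                        // end of file
		if ( errno == EAGAIN ) { yield(); continue; }  // spin: run other threads
		break;                                         // hard error: give up
	}
	fcntl( out_fd, F_SETFL, flags );                   // restore blocking mode
	return count == 0 ? 0 : -1;
}
\end{lstlisting}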

Interestingly, Linux 5.15 @io_uring@ introduces the ability to limit the number of worker threads that are created, through the @IORING_REGISTER_IOWQ_MAX_WORKERS@ option.
However, as of writing this document, Ubuntu does not have a stable release of Linux 5.15.
There exist versions of the kernel that are currently under testing, but these caused unrelated but nevertheless prohibitive issues in this experiment.
Presumably, the new kernel would remove the need for the workaround described above, as it would allow connections in the CLOSE-WAIT state to be closed even while calls to @splice@/@sendfile@ are underway.
However, since this could not be tested, this is purely a conjecture at this point.
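
For completeness, the following sketch shows how the limit would be set using liburing (2.1 or later); as stated above, this could not be tested, so both the call and the chosen limits are unverified in this experiment.
\begin{lstlisting}
#include <liburing.h>

// Cap the kernel worker threads io_uring may create for this ring:
// values[0] bounds workers for bounded (disk-like) work, values[1] for
// unbounded (network-like) work; a value of 0 leaves that limit unchanged.
static int cap_iowq_workers( struct io_uring * ring ) {
	unsigned int values[2] = { 16, 16 };   // illustrative limits
	return io_uring_register_iowq_max_workers( ring, values );
}
\end{lstlisting}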

\subsection{Benchmark Environment}
Unlike the Memcached experiment, the webserver experiment is run on a heterogeneous environment.
\begin{itemize}
\item
The server runs Ubuntu 20.04.4 LTS on top of Linux Kernel 5.13.0-52.
\item
It has an AMD Opteron(tm) Processor 6380 running at 2.5GHz.
\item
Each CPU has 64 KB, 256 KB and 8 MB of L1, L2 and L3 caches respectively.
\item
The computer is booted with only 8 CPUs enabled, which is sufficient to achieve line rate.
\item
The computer is booted with only 25GB of memory to restrict the file-system cache.
\end{itemize}
There are 8 client machines.
\begin{itemize}
\item
A client runs a 2.6.11-1 SMP Linux kernel, which permits each client load-generator to run on a separate CPU.
\item
It has two 2.8 GHz Xeon CPUs, and four one-gigabit Ethernet cards.
\item
\todo{switch}
\item
A client machine runs two copies of the workload generator.
\end{itemize}
The clients and network are sufficiently provisioned to drive the server to saturation and beyond.
Hence, any server effects are attributable solely to the runtime system and webserver.
Finally, without restricting the server hardware resources, it is impossible to determine if a runtime system or the webserver using it has any specific design restrictions, \eg using space to reduce time.
Trying to determine these restrictions with large numbers of processors or memory simply means running equally large experiments, which take longer and are harder to set up.

\subsection{Throughput}
To measure webserver throughput, the server computer is loaded with 21,600 files, sharded across 650 directories, occupying about 2.2GB of disk, distributed over the server's 4-drive RAID-5 array to achieve high throughput for disk I/O.
The clients run httperf~\cite{httperf} to request a set of static files.
The httperf load-generator is used with session files to simulate a large number of users and to implement a partially open-loop system.
This permits httperf to produce overload conditions, generate multiple requests from persistent HTTP/1.1 connections, and include both active and inactive off periods to model browser processing times and user think times~\cite{Barford98}.

The experiments are run with 16 clients, each running a copy of httperf (one copy per CPU), requiring a set of 16 log files with requests conforming to a Zipf distribution.
This distribution is representative of users accessing static data through a web-browser.
Each request reads a file name from its trace, establishes a connection, performs an HTTP get-request for the file name, receives the file data, closes the connection, and repeats the process.
Some trace elements have multiple file names that are read across a persistent connection.
A client times out if the server does not complete a request within 10 seconds.

An experiment consists of running a server with request rates ranging from 10,000 to 70,000 requests per second;
each rate takes about 5 minutes to complete.
There are 20 seconds of idle time between rates and between experiments to allow connections in the TIME-WAIT state to clear.
Server throughput is measured both at peak and after saturation (\ie after peak).
Peak indicates the level of client requests the server can handle and after peak indicates if a server degrades gracefully.
Throughput is measured by aggregating the results from httperf for all the clients.

Two workload scenarios are created by reconfiguring the server with different amounts of memory: 4 GB and 2 GB.
The two workloads correspond to in-memory (4 GB) and disk-I/O (2 GB).
Due to the Zipf distribution, only a small amount of memory is needed to service a significant percentage of requests.
Table~\ref{t:CumulativeMemory} shows the cumulative memory required to satisfy the specified percentage of requests; \eg 95\% of the requests come from 126.5 MB of the file set and 95\% of the requests are for files less than or equal to 51,200 bytes.
Interestingly, with 2 GB of memory, significant disk-I/O occurs.

\begin{table}
\caption{Cumulative memory for requests by file size}
\label{t:CumulativeMemory}
\begin{tabular}{r|rrrrrrrr}
\% Requests   & 10  & 30  & 50    & 70    & 80    & 90     & \textbf{95}     & 100 \\
Memory (MB)   & 0.5 & 1.5 & 8.4   & 12.2  & 20.1  & 94.3   & \textbf{126.5}  & 2,291.6 \\
File Size (B) & 409 & 716 & 4,096 & 5,120 & 7,168 & 40,960 & \textbf{51,200} & 921,600
\end{tabular}
\end{table}

Figure~\ref{fig:swbsrv} shows the results comparing \CFA to NGINX in terms of throughput.
These results are fairly straightforward.
Both servers achieve the same throughput until around 57,500 requests per second.
Since the clients are asking for the same files, the fact that the throughput matches exactly is expected as long as both servers are able to serve the desired rate.
Once the saturation point is reached, both servers are still very close.
NGINX achieves slightly better throughput.
However, Figure~\ref{fig:swbsrv:err} shows the rate of errors, a gross approximation of tail latency, where \CFA achieves notably fewer errors once the machine reaches saturation.
This suggests that \CFA is slightly more fair and NGINX may sacrifice some fairness for improved throughput.
These results demonstrate that the \CFA webserver described above is able to match the performance of NGINX up to and beyond the saturation point of the machine.

292 | \begin{figure} |
---|
293 | \subfloat[][Throughput]{ |
---|
294 | \resizebox{0.85\linewidth}{!}{\input{result.swbsrv.25gb.pstex_t}} |
---|
295 | \label{fig:swbsrv:ops} |
---|
296 | } |
---|
297 | |
---|
298 | \subfloat[][Rate of Errors]{ |
---|
299 | \resizebox{0.85\linewidth}{!}{\input{result.swbsrv.25gb.err.pstex_t}} |
---|
300 | \label{fig:swbsrv:err} |
---|
301 | } |
---|
302 | \caption[Static Webserver Benchmark : Throughput]{Static Webserver Benchmark : Throughput\smallskip\newline Throughput vs request rate for short lived connections connections.} |
---|
303 | \label{fig:swbsrv} |
---|
304 | \end{figure} |
---|
305 | |
---|
\subsection{Disk Operations}
The throughput experiment above used a server with 25GB of memory, which is sufficient to hold the entire file set in addition to all the code and data needed to run the webserver and the rest of the machine.
Previous work, like \cit{Cite Ashif's stuff}, demonstrates that an interesting follow-up experiment is to rerun the same throughput experiment but with significantly less memory on the machine.
If the machine is constrained enough, the OS is forced to evict files from the file cache, causing calls to @sendfile@ to read from disk.
However, what these low-memory experiments demonstrate is how the memory footprint of the webserver affects performance.
Since the goal of this thesis is to evaluate the \CFA runtime, I decided to forgo experiments on a low-memory server.
The implementation of the webserver itself has too much impact for such experiments to be an interesting evaluation of the underlying runtime.
---|