% source: doc/theses/mubeen_zulfiqar_MMath/benchmarks.tex @ f6e6a55
% Last change ba897d21 by m3zulfiq: added benchmark and evaluations chapter to thesis
\chapter{Benchmarks}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The aim of the micro benchmark suite is to create a set of programs that can evaluate a memory allocator based on the
performance metrics described in (FIX ME: local cite). These programs can be taken as a standard to benchmark an
allocator's basic goals. They report an allocator's memory overhead and speed under a certain allocation pattern. The
speed of the allocator is benchmarked in different ways, and false sharing caused by an allocator is likewise measured
in multiple ways. The benchmarks evaluate the allocator under an allocation pattern that is configurable through a few
knobs, allowing observation of an allocator's performance under a desired allocation pattern.

The micro benchmark suite benchmarks an allocator's performance by allocating dynamic objects and then measuring
specific metrics. The suite evaluates an allocator under a certain allocation pattern. The benchmarks have different
knobs that can be used to change the allocation pattern and evaluate an allocator under desired conditions. These knobs
are set by passing command-line arguments to a benchmark on execution.

\section{Current Benchmarks} Multiple benchmarks exist that were built individually and evaluate different aspects of
a memory allocator. However, there is no single set of benchmarks that can be used to evaluate multiple aspects of
memory allocators.

\subsection{threadtest}(FIX ME: cite benchmark and hoard) Each thread repeatedly allocates and then deallocates 100,000
objects. The runtime of the benchmark is used to evaluate its efficiency.

\subsection{shbench}(FIX ME: cite benchmark and hoard) Each thread allocates and randomly frees a number of random-sized
objects. It is a stress test that also uses runtime to determine the efficiency of the allocator.

\subsection{larson}(FIX ME: cite benchmark and hoard) Larson simulates a server environment. Multiple threads are
created, and each thread allocates and frees a number of objects within a size range. Some objects are passed from
threads to child threads to be freed there. It calculates memory operations per second as an indicator of the memory
allocator's performance.

\section{Memory Benchmark} The memory benchmark measures the memory overhead of an allocator. It allocates a number of
dynamic objects and then, by reading /proc/self/maps, obtains the total memory that the allocator has requested from
the OS. It calculates the memory overhead as the difference between the memory the allocator has requested from the OS
and the memory that the program has allocated.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-memory.eps}
\caption{Benchmark Memory Overhead}
\label{fig:benchMemoryFig}
\end{figure}

Figure \ref{fig:benchMemoryFig} gives the flow of the memory benchmark. It creates a producer-consumer scenario with K
producers, where each producer has M consumers. A producer has a separate buffer for each consumer. The producer
allocates N objects of random sizes following the given distribution for each consumer, and the consumer frees those
objects. After every memory operation, the program's memory usage is recorded throughout the runtime. This data can
then be used to visualize the memory usage and consumption of the program.

Different knobs can be adjusted to set a certain thread model.\\
-threadA :  sets number of alloc threads (producers) for mem benchmark\\
-consumeS:  sets production and consumption round size\\
-threadF :  sets number of free threads (consumers) for mem benchmark

Object allocation size can be changed using the knobs:\\
-maxS    :  sets max object size\\
-minS    :  sets min object size\\
-stepS   :  sets object size increment\\
-distroS :  sets object size distribution\\
-objN    :  sets number of objects per thread

\section{Speed Benchmark} The speed benchmark measures the runtime speed of an allocator's individual memory allocation
routines (FIX ME: cite allocator routines). It also considers different allocation chains to measure the performance of
the allocator when multiple allocation routines are combined in a chain. It uses the following chains and measures the
allocator's runtime speed for each:
\begin{itemize}
\item malloc 0
\item free NULL
\item malloc
\item realloc
\item free
\item calloc
\item malloc-free
\item realloc-free
\item calloc-free
\item malloc-realloc
\item calloc-realloc
\item malloc-realloc-free
\item calloc-realloc-free
\item malloc-realloc-free-calloc
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-speed.eps}
\caption{Benchmark Speed}
\label{fig:benchSpeedFig}
\end{figure}

As laid out in figure \ref{fig:benchSpeedFig}, each chain is measured separately. Each routine in the chain is called
for N objects, and those allocated objects are then used when calling the next routine in the allocation chain. This
way, the complete latency of the memory allocator can be measured when multiple routines are chained together, e.g.,
malloc-realloc-free-calloc gives the whole picture of the major allocation routines combined in a chain.

For each chain, the time taken is recorded, which can then be used to visualize the performance of a memory allocator
on each chain.

The number of worker threads can be adjusted using the command-line argument -threadN.

\section{Churn Benchmark} The churn benchmark measures the overall runtime speed of an allocator in a multi-threaded
scenario where each thread extensively allocates and frees dynamic memory.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-churn.eps}
\caption{Benchmark Churn}
\label{fig:benchChurnFig}
\end{figure}

Figure \ref{fig:benchChurnFig} illustrates the churn benchmark. This benchmark creates a buffer with M spots and starts
K threads. Each thread randomly picks a spot out of the M spots, frees the object currently at that spot, and allocates
a new object for that spot. Each thread repeats this cycle N times. The main thread measures the total time taken for
the whole benchmark, and that time is used to evaluate the memory allocator's performance.

Only malloc and free are used to allocate and free an object, to eliminate any extra cost such as the memcpy in
realloc. Using only malloc/free measures the latency of memory allocation alone, without paying any extra cost. Churn
simulates a memory-intensive program and can be tuned to create different scenarios.

The following command-line arguments can be used to tune the benchmark.\\
-threadN :  sets number of threads, K\\
-cSpots  :  sets number of spots for churn, M\\
-objN    :  sets number of objects per thread, N\\
-maxS    :  sets max object size\\
-minS    :  sets min object size\\
-stepS   :  sets object size increment\\
-distroS :  sets object size distribution

\section{Cache Thrash}\label{sec:benchThrashSec} The cache thrash benchmark measures allocator-induced active false
sharing in an allocator, as illustrated in figure \ref{f:AllocatorInducedActiveFalseSharing}. If a memory allocator
allocates memory for multiple threads on the same cache line, program performance can suffer. If two threads sharing a
cache line frequently and concurrently read/write their objects on that line, every access may cause a cache miss,
because the other thread may have written to its memory location on the same cache line.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-cache-thrash.eps}
\caption{Benchmark Allocator Induced Active False Sharing}
\label{fig:benchThrashFig}
\end{figure}

Cache thrash tries to create a scenario that leads to allocator-induced false sharing if the underlying memory
allocator places dynamic memory for multiple threads on the same cache lines. Ideally, a memory allocator should
distance the dynamic memory region of one thread from those of other threads. If the allocator does not distance the
memory among different threads, having multiple threads allocate small objects simultaneously should cause it to place
objects for multiple threads on the same cache line.

Figure \ref{fig:benchThrashFig} lays out the flow of the cache thrash benchmark. It creates K worker threads. Each
worker thread allocates an object and intensively reads/writes it M times, invalidating cache lines frequently to slow
down other threads that might be sharing a cache line with it. Each thread repeats this N times. The main thread
measures the total time taken for all worker threads to complete. Worker threads sharing cache lines with each other
will take longer.

Different cache access scenarios can be created using the following command-line arguments.\\
-threadN :  sets number of threads, K\\
-cacheIt :  sets iterations for cache benchmark, N\\
-cacheRep:  sets repetitions for cache benchmark, M\\
-cacheObj:  sets object size for cache benchmark

\section{Cache Scratch} The cache scratch benchmark measures allocator-induced passive false sharing in an allocator.
An allocator can unintentionally induce false sharing depending on its management of freed objects, as described in
figure \ref{f:AllocatorInducedPassiveFalseSharing}. If a thread A allocates multiple objects together, they will
possibly be allocated on the same cache line by the memory allocator. If thread A then passes one of these objects to
another thread B, the two threads share the same cache line, but this scenario is not induced by the allocator;
instead, the program induced it. Now, if thread B frees this object and then allocates an object of the same size, the
allocator may return the same object, which sits on a cache line shared with thread A. This false sharing is now caused
by the memory allocator, although it was started by the program.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-cache-scratch.eps}
\caption{Benchmark Program Induced Passive False Sharing}
\label{fig:benchScratchFig}
\end{figure}

The cache scratch main thread induces false sharing and creates a scenario in which a memory allocator preserves the
program-induced false sharing if it does not return a freed object to its owner thread and, instead, reuses it
instantly. An allocator using object ownership, as described in section \ref{s:Ownership}, is less susceptible to
allocator-induced passive false sharing: if the object is returned to the thread that owns, i.e., originally allocated,
it, then thread B gets a new object that is less likely to be on the same cache line as thread A's.

As in figure \ref{fig:benchScratchFig}, cache scratch allocates K dynamic objects together, one for each of the K
worker threads, possibly causing the memory allocator to allocate these objects on the same cache line. It then creates
the K worker threads and passes one of the K allocated objects to each thread. Each worker thread frees the object
passed by the main thread. Then, it allocates an object and reads/writes it repetitively M times, causing frequent
cache invalidations. Each worker repeats this N times.

If the allocator does not return the initial object back to its owner (the main thread), each thread allocating an
object after freeing the object passed by the main thread should cause the memory allocator to return the same object
that was initially allocated by the main thread. Then, intensive read/write on the shared cache line by multiple
threads should slow down the worker threads due to high cache invalidations and misses. The main thread measures the
total time taken for all the workers to complete.

Similar to the cache thrash benchmark in section \ref{sec:benchThrashSec}, different cache access scenarios can be
created using the following command-line arguments.\\
-threadN :  sets number of threads, K\\
-cacheIt :  sets iterations for cache benchmark, N\\
-cacheRep:  sets repetitions for cache benchmark, M\\
-cacheObj:  sets object size for cache benchmark