Changeset fb6691a for doc/theses/mubeen_zulfiqar_MMath/benchmarks.tex
Timestamp: May 20, 2022, 2:48:24 PM
Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children: 598dc68
Parents: 25fa20a
Files: 1 edited
doc/theses/mubeen_zulfiqar_MMath/benchmarks.tex
--- r25fa20a
+++ rfb6691a
@@ -48,4 +48,4 @@
 There is no interaction among threads, \ie no object sharing.
 Each thread repeatedly allocates 100,000 \emph{8-byte} objects then deallocates them in the order they were allocated.
-\PAB{Execution time of the benchmark evaluates its efficiency.}
+The execution time of the benchmark evaluates its efficiency.
 
@@ -75,5 +75,5 @@
 \label{s:ChurnBenchmark}
 
-The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenerio, where each thread extensively allocates and frees dynamic memory.
+The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenario, where each thread extensively allocates and frees dynamic memory.
 Only @malloc@ and @free@ are used to eliminate any extra cost, such as @memcpy@ in @calloc@ or @realloc@.
 Churn simulates a memory intensive program and can be tuned to create different scenarios.
@@ -133,5 +133,5 @@
 When threads share a cache line, frequent reads/writes to their cache-line object causes cache misses, which cause escalating delays as cache distance increases.
 
-Cache thrash tries to create a scenerio that leads to false sharing, if the underlying memory allocator is allocating dynamic memory to multiple threads on the same cache lines.
+Cache thrash tries to create a scenario that leads to false sharing, if the underlying memory allocator is allocating dynamic memory to multiple threads on the same cache lines.
 Ideally, a memory allocator should distance the dynamic memory region of one thread from another.
 Having multiple threads allocating small objects simultaneously can cause a memory allocator to allocate objects on the same cache line, if its not distancing the memory among different threads.
@@ -201,5 +201,5 @@
 Cache scratch tries to create a scenario that leads to false sharing and should make the memory allocator preserve the program-induced false sharing, if it does not return a freed object to its owner thread and, instead, re-uses it instantly.
 An allocator using object ownership, as described in section \VRef{s:Ownership}, is less susceptible to allocator-induced passive false-sharing.
-\PAB{If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line.}
+If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line.
 
 \VRef[Figure]{fig:benchScratchFig} shows the pseudo code for the cache-scratch micro-benchmark.
@@ -245,5 +245,5 @@
 
 Similar to benchmark cache thrash in section \VRef{sec:benchThrashSec}, different cache access scenarios can be created using the following command-line arguments.
-\begin{description}[itemsep=0pt,parsep=0pt]
+\begin{description}[topsep=0pt,itemsep=0pt,parsep=0pt]
 \item[threads:]
 number of threads (K).
@@ -259,7 +259,8 @@
 \subsection{Speed Micro-Benchmark}
 \label{s:SpeedMicroBenchmark}
+\vspace*{-4pt}
 
 The speed benchmark measures the runtime speed of individual and sequences of memory allocation routines:
-\begin{enumerate}[itemsep=0pt,parsep=0pt]
+\begin{enumerate}[topsep=-5pt,itemsep=0pt,parsep=0pt]
 \item malloc
 \item realloc