Timestamp:
May 20, 2022, 2:48:24 PM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
598dc68
Parents:
25fa20a
Message:

final proofread of Mubeen's MMath thesis

File:
1 edited

  • doc/theses/mubeen_zulfiqar_MMath/benchmarks.tex

    r25fa20a → rfb6691a

     48  48   There is no interaction among threads, \ie no object sharing.
     49  49   Each thread repeatedly allocates 100,000 \emph{8-byte} objects then deallocates them in the order they were allocated.
     50      -\PAB{Execution time of the benchmark evaluates its efficiency.}
         50  +The execution time of the benchmark evaluates its efficiency.
     51  51
     52  52
     
     75  75   \label{s:ChurnBenchmark}
     76  76
     77      -The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenerio, where each thread extensively allocates and frees dynamic memory.
         77  +The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenario, where each thread extensively allocates and frees dynamic memory.
     78  78   Only @malloc@ and @free@ are used to eliminate any extra cost, such as @memcpy@ in @calloc@ or @realloc@.
     79  79   Churn simulates a memory intensive program and can be tuned to create different scenarios.
     
    133 133   When threads share a cache line, frequent reads/writes to their cache-line object causes cache misses, which cause escalating delays as cache distance increases.
    134 134
    135      -Cache thrash tries to create a scenerio that leads to false sharing, if the underlying memory allocator is allocating dynamic memory to multiple threads on the same cache lines.
        135  +Cache thrash tries to create a scenario that leads to false sharing, if the underlying memory allocator is allocating dynamic memory to multiple threads on the same cache lines.
    136 136   Ideally, a memory allocator should distance the dynamic memory region of one thread from another.
    137 137   Having multiple threads allocating small objects simultaneously can cause a memory allocator to allocate objects on the same cache line, if its not distancing the memory among different threads.
     
    201 201   Cache scratch tries to create a scenario that leads to false sharing and should make the memory allocator preserve the program-induced false sharing, if it does not return a freed object to its owner thread and, instead, re-uses it instantly.
    202 202   An allocator using object ownership, as described in section \VRef{s:Ownership}, is less susceptible to allocator-induced passive false-sharing.
    203      -\PAB{If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line.}
        203  +If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line.
    204 204
    205 205   \VRef[Figure]{fig:benchScratchFig} shows the pseudo code for the cache-scratch micro-benchmark.
     
    245 245
    246 246   Similar to benchmark cache thrash in section \VRef{sec:benchThrashSec}, different cache access scenarios can be created using the following command-line arguments.
    247      -\begin{description}[itemsep=0pt,parsep=0pt]
        247  +\begin{description}[topsep=0pt,itemsep=0pt,parsep=0pt]
    248 248   \item[threads:]
    249 249   number of threads (K).
     
    259 259   \subsection{Speed Micro-Benchmark}
    260 260   \label{s:SpeedMicroBenchmark}
        261  +\vspace*{-4pt}
    261 262
    262 263   The speed benchmark measures the runtime speed of individual and sequences of memory allocation routines:
    263      -\begin{enumerate}[itemsep=0pt,parsep=0pt]
        264  +\begin{enumerate}[topsep=-5pt,itemsep=0pt,parsep=0pt]
    264 265   \item malloc
    265 266   \item realloc