\chapter{Benchmarks}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The aim of the micro benchmark suite is to create a set of programs that can evaluate a memory allocator based on the performance metrics described in (FIX ME: local cite). These programs can be taken as a standard to benchmark an allocator's basic goals: they give details of an allocator's memory overhead and speed under a certain allocation pattern. The speed of the allocator is benchmarked in different ways, and false sharing occurring in an allocator is likewise measured in multiple ways. Each benchmark evaluates the allocator under a configurable allocation pattern: the pattern can be changed using a few knobs, set as command-line arguments on execution, to observe an allocator's performance under the desired conditions.

\section{Current Benchmarks}
There are multiple benchmarks that are built individually and evaluate different aspects of a memory allocator.
However, there is no single set of benchmarks that can be used to evaluate multiple aspects of memory allocators.

\subsection{threadtest}
(FIX ME: cite benchmark and hoard) Each thread repeatedly allocates and then deallocates 100,000 objects. The runtime of the benchmark evaluates its efficiency.

\subsection{shbench}
(FIX ME: cite benchmark and hoard) Each thread allocates and randomly frees a number of random-sized objects. It is a stress test that also uses runtime to determine the efficiency of the allocator.

\subsection{larson}
(FIX ME: cite benchmark and hoard) Larson simulates a server environment. Multiple threads are created, where each thread allocates and frees a number of objects within a size range. Some objects are passed from threads to child threads to be freed. It calculates memory operations per second as an indicator of the memory allocator's performance.

\section{Memory Benchmark}
The memory benchmark measures the memory overhead of an allocator. It allocates a number of dynamic objects and then, by reading /proc/self/maps, obtains the total memory that the allocator has requested from the OS. It calculates the memory overhead as the difference between the memory the allocator has requested from the OS and the memory the program has allocated.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-memory.eps}
\caption{Benchmark Memory Overhead}
\label{fig:benchMemoryFig}
\end{figure}

Figure \ref{fig:benchMemoryFig} gives the flow of the memory benchmark. It creates a producer-consumer scenario with K producers, where each producer has M consumers. Each producer has a separate buffer for each consumer. The producer allocates N objects of random sizes following the given distribution for each consumer, and the consumer frees those objects. After every memory operation, the program's memory usage is recorded throughout the runtime. This data can then be used to visualize the memory usage and consumption of the program.
Different knobs can be adjusted to set a certain thread model.\\
-threadA : sets number of alloc threads (producers) for mem benchmark\\
-consumeS: sets production and consumption round size\\
-threadF : sets number of free threads (consumers) for each producer for mem benchmark\\
Object allocation size can be changed using the knobs:\\
-maxS : sets max object size\\
-minS : sets min object size\\
-stepS : sets object size increment\\
-distroS : sets object size distribution\\
-objN : sets number of objects per thread\\

\section{Speed Benchmark}
The speed benchmark measures the runtime speed of an allocator (FIX ME: cite allocator routines). It measures the runtime speed of individual memory allocation routines and also considers different allocation chains, measuring the performance of the allocator when multiple allocation routines are combined in a chain. It uses the following chains and measures the allocator's runtime speed against them:
\begin{itemize}
\item malloc
\item realloc
\item free
\item calloc
\item malloc-free
\item realloc-free
\item calloc-free
\item malloc-realloc
\item calloc-realloc
\item malloc-realloc-free
\item calloc-realloc-free
\item malloc-realloc-free-calloc
\end{itemize}

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-speed.eps}
\caption{Benchmark Speed}
\label{fig:benchSpeedFig}
\end{figure}

As laid out in figure \ref{fig:benchSpeedFig}, each chain is measured separately. Each routine in the chain is called for N objects, and those allocated objects are then used when calling the next routine in the allocation chain. This way, the complete latency of the memory allocator can be measured when multiple routines are chained together, e.g., malloc-realloc-free-calloc gives the whole picture of the major allocation routines combined in a chain. For each chain, the time taken is recorded, which can then be used to visualize the performance of a memory allocator against each chain.
The following knobs can be adjusted to tune memory usage.\\
-maxS : sets max object size\\
-minS : sets min object size\\
-stepS : sets object size increment\\
-distroS : sets object size distribution\\
-objN : sets number of objects per thread\\
-threadN : sets number of worker threads\\

\section{Churn Benchmark}
The churn benchmark measures the overall runtime speed of an allocator in a multi-threaded scenario where each thread extensively allocates and frees dynamic memory.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-churn.eps}
\caption{Benchmark Churn}
\label{fig:benchChurnFig}
\end{figure}

Figure \ref{fig:benchChurnFig} illustrates the churn benchmark. The benchmark creates a buffer with M spots and starts K threads. Each thread randomly picks one of the M spots, frees the object currently at that spot, and allocates a new object for that spot. Each thread repeats this cycle N times. The main thread measures the total time taken for the whole benchmark, and that time is used to evaluate the memory allocator's performance. Only malloc and free are used to allocate and free an object, eliminating any extra cost, such as the memcpy in realloc, so that only the latency of memory allocation itself is measured. Churn simulates a memory-intensive program and can be tuned to create different scenarios. The following command-line arguments can be used to tune the benchmark.\\
-threadN : sets number of threads, K\\
-cSpots : sets number of spots for churn, M\\
-objN : sets number of objects per thread, N\\
-maxS : sets max object size\\
-minS : sets min object size\\
-stepS : sets object size increment\\
-distroS : sets object size distribution

\section{Cache Thrash}\label{sec:benchThrashSec}
The Cache Thrash benchmark measures allocator-induced active false sharing in an allocator, as illustrated in figure \ref{f:AllocatorInducedActiveFalseSharing}.
If the memory allocator allocates memory for multiple threads on the same cache line, program performance can suffer. If two threads that share a cache line frequently and concurrently read/write their objects on that line, every access may cause a cache miss, since the other thread may have written to its memory location on the same cache line.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-cache-thrash.eps}
\caption{Benchmark Allocator Induced Active False Sharing}
\label{fig:benchThrashFig}
\end{figure}

Cache thrash creates a scenario that should lead to allocator-induced false sharing if the underlying memory allocator is allocating dynamic memory for multiple threads on the same cache lines. Ideally, a memory allocator should distance the dynamic memory region of one thread from those of other threads. If the allocator does not distance memory among threads, multiple threads allocating small objects simultaneously should cause it to place objects for different threads on the same cache line. Figure \ref{fig:benchThrashFig} lays out the flow of the cache thrash benchmark. It creates K worker threads. Each worker thread allocates an object and intensively reads/writes it M times, invalidating cache lines frequently to slow down other threads that might be sharing a cache line with it. Each thread repeats this N times. The main thread measures the total time taken for all worker threads to complete; worker threads sharing cache lines with each other will take longer. Different cache access scenarios can be created using the following command-line arguments.\\
-threadN : sets number of threads, K\\
-cacheIt : iterations for cache benchmark, N\\
-cacheRep: repetitions for cache benchmark, M\\
-cacheObj: object size for cache benchmark

\section{Cache Scratch}
The Cache Scratch benchmark measures allocator-induced passive false sharing in an allocator.
An allocator can unintentionally induce false sharing depending on its management of freed objects, as described in figure \ref{f:AllocatorInducedPassiveFalseSharing}. If a thread A allocates multiple objects together, they will possibly be allocated on the same cache line by the memory allocator. If thread A now passes one of these objects to another thread B, the two of them will be sharing the same cache line, but this scenario is not induced by the allocator; the program induced it. However, if thread B frees this object and then allocates an object of the same size, the allocator may return the same object, which is on a cache line shared with thread A. This false sharing is now being caused by the memory allocator, although it was started by the program.

\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/bench-cache-scratch.eps}
\caption{Benchmark Program Induced Passive False Sharing}
\label{fig:benchScratchFig}
\end{figure}

The cache scratch main thread induces false sharing, creating a scenario in which the memory allocator preserves the program-induced false sharing if it does not return a freed object to its owner thread but instead re-uses it instantly. An allocator using object ownership, as described in section \ref{s:Ownership}, would be less susceptible to allocator-induced passive false sharing: if the object is returned to the thread that owns it, i.e., originally allocated it, then thread B will get a new object that is less likely to be on the same cache line as thread A's. As in figure \ref{fig:benchScratchFig}, cache scratch allocates K dynamic objects together, one for each of the K worker threads, possibly causing the memory allocator to allocate these objects on the same cache line. Then it creates K worker threads and passes one of the K allocated objects to each thread. Each worker thread frees the object passed by the main thread.
Then, it allocates an object and reads/writes it repetitively M times, causing frequent cache invalidations. Each worker repeats this N times. If the allocator did not return the initial object back to its owner (the main thread), each thread allocating an object after freeing the original object passed by the main thread should cause the memory allocator to return that same object, which was initially allocated by the main thread. Intensive read/write on the shared cache line by multiple threads should then slow down the worker threads due to high cache invalidations and misses. The main thread measures the total time taken for all the workers to complete. Similar to the cache thrash benchmark in section \ref{sec:benchThrashSec}, different cache access scenarios can be created using the following command-line arguments.\\
-threadN : sets number of threads, K\\
-cacheIt : iterations for cache benchmark, N\\
-cacheRep: repetitions for cache benchmark, M\\
-cacheObj: object size for cache benchmark