source: doc/papers/llheap/Paper.tex@ 8136df1

Last change on this file since 8136df1 was 8e90fd6, checked in by Peter A. Buhr <pabuhr@…>, 8 months ago

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

  • Property mode set to 100644
File size: 176.5 KB
Line 
1\documentclass[AMA,STIX1COL]{WileyNJD-v2}
2
3% Latex packages used in the document.
4
5\usepackage{comment}
6\usepackage{epic,eepic}
7\usepackage{upquote} % switch curled `'" to straight
8\usepackage{relsize}
9\usepackage{xspace}
10\usepackage{calc}
11\usepackage[scaled=0.88]{helvet} % descent Helvetica font and scale to times size
12\usepackage[T1]{fontenc}
13\usepackage{listings} % format program code
14\usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig}
15\renewcommand{\thesubfigure}{(\alph{subfigure})}
16\usepackage{enumitem}
17
18\hypersetup{breaklinks=true}
19
20\usepackage[pagewise]{lineno}
21\renewcommand{\linenumberfont}{\scriptsize\sffamily}
22
23\usepackage{varioref} % extended references
24% adjust varioref package with default "section" and "page" titles, and optional title with faraway page numbers
25% \VRef{label} => Section 2.7, \VPageref{label} => page 17
26% \VRef[Figure]{label} => Figure 3.4, \VPageref{label} => page 17
27% \renewcommand{\reftextfaceafter}{\unskip}
28% \renewcommand{\reftextfacebefore}{\unskip}
29% \renewcommand{\reftextafter}{\unskip}
30% \renewcommand{\reftextbefore}{\unskip}
31% \renewcommand{\reftextfaraway}[1]{\unskip, p.~\pageref{#1}}
32% \renewcommand{\reftextpagerange}[2]{\unskip, pp.~\pageref{#1}--\pageref{#2}}
33% \newcommand{\VRef}[2][Section]{\ifx#1\@empty\else{#1}\nobreakspace\fi\vref{#2}}
34% \newcommand{\VRefrange}[3][Sections]{\ifx#1\@empty\else{#1}\nobreakspace\fi\vrefrange{#2}{#3}}
35% \newcommand{\VPageref}[2][page]{\ifx#1\@empty\else{#1}\nobreakspace\fi\pageref{#2}}
36% \newcommand{\VPagerefrange}[3][pages]{\ifx#1\@empty\else{#1}\nobreakspace\fi\pageref{#2}{#3}}
37
38\makeatletter
39\newcommand{\abbrevFont}{\textit} % set empty for no italics
40\newcommand{\CheckCommaColon}{\@ifnextchar{,}{}{\@ifnextchar{:}{}{,\xspace}}}
41\newcommand{\CheckPeriod}{\@ifnextchar{.}{}{.\xspace}}
42\newcommand{\EG}{\abbrevFont{e}.\abbrevFont{g}.}
43\newcommand{\eg}{\EG\CheckCommaColon}
44\newcommand{\IE}{\abbrevFont{i}.\abbrevFont{e}.}
45\newcommand{\ie}{\IE\CheckCommaColon}
46\newcommand{\ETC}{\abbrevFont{etc}}
47\newcommand{\etc}{\ETC\CheckPeriod}
48\newcommand{\VS}{\abbrevFont{vs}}
49\newcommand{\vs}{\VS\CheckPeriod}
50
51\newcommand{\newtermFont}{\emph}
52\newcommand{\newterm}[1]{\newtermFont{#1}}
53
54\newcommand{\CFAIcon}{\textsf{C}\raisebox{\depth}{\rotatebox{180}{\textsf{A}}}\xspace} % Cforall symbolic name
55\newcommand{\CFA}{\protect\CFAIcon} % safe for section/caption
56\newcommand{\CFL}{\textrm{Cforall}\xspace} % Cforall symbolic name
57\newcommand{\CCIcon}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}} % C++ icon
58\newcommand{\CC}[1][]{\protect\CCIcon{#1}\xspace} % C++ symbolic name
59\newcommand{\uC}{$\mu$\CC}
60\newcommand{\Csharp}{C\raisebox{-0.7ex}{\relsize{2}$^\sharp$}\xspace} % C# symbolic name
61
62\newcommand{\LstBasicStyle}[1]{{\lst@basicstyle{#1}}}
63\newcommand{\LstKeywordStyle}[1]{{\lst@basicstyle{\lst@keywordstyle{#1}}}}
64\newcommand{\LstCommentStyle}[1]{{\lst@basicstyle{\lst@commentstyle{#1}}}}
65\newcommand{\LstStringStyle}[1]{{\lst@basicstyle{\lst@stringstyle{#1}}}}
66
67\newlength{\parindentlnth}
68\setlength{\parindentlnth}{\parindent}
69\newlength{\gcolumnposn} % temporary hack because lstlisting does not handle tabs correctly
70\newlength{\columnposn}
71\setlength{\gcolumnposn}{3.25in}
72\setlength{\columnposn}{\gcolumnposn}
73\newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
74\newcommand{\CRT}{\global\columnposn=\gcolumnposn}
75\makeatother
76
77\lstset{
78columns=fullflexible,
79basicstyle=\linespread{0.9}\sf, % reduce line spacing and use sanserif font
80stringstyle=\small\tt, % use typewriter font
81tabsize=5, % N space tabbing
82xleftmargin=\parindentlnth, % indent code to paragraph indentation
83escapechar=\$, % LaTeX escape in CFA code
84mathescape=false, % disable LaTeX math escape in CFA code $...$
85keepspaces=true, %
86showstringspaces=false, % do not show spaces with cup
87showlines=true, % show blank lines at end of code
88aboveskip=4pt, % spacing above/below code block
89belowskip=2pt,
90numberstyle=\footnotesize\sf, % numbering style
91moredelim=**[is][\color{red}]{@}{@},
92% replace/adjust listing characters that look bad in sanserif
93literate=
94% {-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.75ex}{0.1ex}}}}1
95 {-}{\raisebox{-1pt}{\ttfamily-}}1
96 {^}{\raisebox{0.6ex}{\(\scriptstyle\land\,\)}}1
97 {~}{\raisebox{0.3ex}{\(\scriptstyle\sim\,\)}}1
98 {'}{\ttfamily'\hspace*{-0.4ex}}1
99 {`}{\ttfamily\upshape\hspace*{-0.3ex}`}1
100 {<-}{$\leftarrow$}2
101 {=>}{$\Rightarrow$}2
102% {->}{\raisebox{-1pt}{\texttt{-}}\kern-0.1ex\textgreater}2,
103}% lstset
104
105% CFA programming language, based on ANSI C (with some gcc additions)
106\lstdefinelanguage{CFA}[ANSI]{C}{
107 morekeywords={
108 _Alignas, _Alignof, __alignof, __alignof__, and, asm, __asm, __asm__, _Atomic, __attribute, __attribute__,
109 __auto_type, basetypeof, _Bool, catch, catchResume, choose, coerce, _Complex, __complex, __complex__, __const, __const__,
110 coroutine, _Decimal32, _Decimal64, _Decimal128, disable, enable, exception, __extension__, fallthrough, finally, fixup,
111 __float80, float80, __float128, float128, _Float16, _Float32, _Float32x, _Float64, _Float64x, _Float128, _Float128x,
112 forall, fortran, generator, _Generic, _Imaginary, __imag, __imag__, inline, __inline, __inline__, int128, __int128, __int128_t,
113 __label__, monitor, mutex, _Noreturn, __builtin_offsetof, one_t, or, recover, report, restrict, __restrict, __restrict__,
114 __signed, __signed__, _Static_assert, suspend, thread, __thread, _Thread_local, throw, throwResume, timeout, trait, try,
115 typeof, __typeof, __typeof__, typeid, __uint128_t, __builtin_va_arg, __builtin_va_list, virtual, __volatile, __volatile__,
116 vtable, waitfor, waituntil, when, with, zero_t,
117 },
118 moredirectives={defined,include_next},
119}
120
121% uC++ programming language, based on ANSI C++
122\lstdefinelanguage{uC++}[GNU]{C++}{
123 morekeywords={
124 _Accept, _AcceptReturn, _AcceptWait, _Actor, _At, _Catch, _CatchResume, _CorActor, _Cormonitor, _Coroutine,
125 _Disable, _Else, _Enable, _Event, _Exception, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
126 _Resume, _ResumeTop, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
127}
128
129% Go programming language: https://github.com/julienc91/listings-golang/blob/master/listings-golang.sty
130\lstdefinelanguage{Golang}{
131 morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
132 morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
133 bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
134 morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
135 morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
136 morekeywords=[5]{Println,Printf,Error,},
137 sensitive=true,
138 morecomment=[l]{//},
139 morecomment=[s]{/*}{*/},
140 morestring=[b]',
141 morestring=[b]",
142 morestring=[s]{`}{`},
143}
144
145\lstnewenvironment{cfa}[1][]{\lstset{language=CFA,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
146\lstnewenvironment{C++}[1][]{\lstset{language=C++,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
147\lstnewenvironment{uC++}[1][]{\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
148\lstnewenvironment{Go}[1][]{\lstset{language=Golang,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
149\lstnewenvironment{python}[1][]{\lstset{language=python,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
150\lstnewenvironment{java}[1][]{\lstset{language=java,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}{}
151
152% inline code @...@
153\lstMakeShortInline@%
154
155% \let\OLDthebibliography\thebibliography
156% \renewcommand\thebibliography[1]{
157% \OLDthebibliography{#1}
158% \setlength{\parskip}{0pt}
159% \setlength{\itemsep}{4pt plus 0.3ex}
160% }
161
162\newsavebox{\myboxA}
163\newsavebox{\myboxB}
164\newsavebox{\myboxC}
165\newsavebox{\myboxD}
166
167%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
168
169\articletype{RESEARCH ARTICLE}%
170
171% Referees
172% Doug Lea, dl@cs.oswego.edu, SUNY Oswego
173% Herb Sutter, hsutter@microsoft.com, Microsoft Corp
174% Gor Nishanov, gorn@microsoft.com, Microsoft Corp
175% James Noble, kjx@ecs.vuw.ac.nz, Victoria University of Wellington, School of Engineering and Computer Science
176
177\received{XXXXX}
178\revised{XXXXX}
179\accepted{XXXXX}
180
181\raggedbottom
182
183\title{High-Performance Concurrent Memory Allocation}
184
185\author[1]{Mubeen Zulfiqar}
186\author[1]{Ayelet Wasik}
187\author[1]{Peter A. Buhr*}
188\author[2]{Bryan Chan}
189\author[3]{Dave Dice}
190\authormark{ZULFIQAR \textsc{et al.}}
191
192\address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, ON}, \country{Canada}}}
193\address[2]{\orgdiv{Huawei Compiler Lab}, \orgname{Huawei}, \orgaddress{\state{Markham, ON}, \country{Canada}}}
194\address[3]{\orgdiv{Oracle Labs}, \orgname{Oracle}, \orgaddress{\state{Burlington, MA}, \country{USA}}}
195
196
197\corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}
198
199% \fundingInfo{Natural Sciences and Engineering Research Council of Canada}
200
201\abstract[Summary]{%
202A new C-based concurrent memory-allocator is presented, called llheap (low latency).
203It can be used standalone in C/\CC applications with multiple kernel threads, or embedded into high-performance user-threading programming languages.
204llheap extends the feature set of existing C allocation by remembering zero-filled (\lstinline{calloc}) and aligned properties (\lstinline{memalign}) in an allocation.
205These properties can be queried, allowing programmers to write safer programs by preserving these properties in future allocations.
206As well, \lstinline{realloc}/\lstinline{reallocarray} preserve these properties when adjusting storage size, again increasing future allocation safety.
207llheap also extends the C allocation API with \lstinline{aalloc}, \lstinline{amemalign}, \lstinline{cmemalign}, \lstinline{resize}, and extended \lstinline{realloc}, providing orthogonal access to allocation features;
208hence, programmers do have to code missing combinations.
209The llheap allocator also provides a contention-free statistics gathering mode, and a debugging mode for dynamically checking allocation pre/post conditions and invariants.
210These modes are invaluable for understanding and debugging a program's dynamic allocation behaviour, with low enough cost to be used in production code.
211The llheap API is further extended with the \CFA advanced type-system, providing a single type-safe allocation routine using named arguments, increasing safety and simplifying usage.
212Finally, performance results across a number of benchmarks show llheap is competitive with the best memory allocators.
213}% abstract
214
215% While not as powerful as the \lstinline{valgrind} interpreter, a large number of allocations mistakes are detected.
216% A micro-benchmark test-suite is started for comparing allocators, rather than relying on a suite of arbitrary programs. It has been an interesting challenge.
217% These micro-benchmarks have adjustment knobs to simulate allocation patterns hard-coded into arbitrary test programs.
218% Existing memory allocators, glibc, dlmalloc, hoard, jemalloc, ptmalloc3, rpmalloc, tbmalloc, and the new allocator llheap are all compared using the new micro-benchmark test-suite.
219
220\keywords{memory allocation, (user-level) concurrency, type-safety, statistics, debugging, high performance}
221
222
223\begin{document}
224%\linenumbers % comment out to turn off line numbering
225
226\maketitle
227
228
229\section{Introduction}
230
231Memory management services a series of program allocation/deallocation requests and attempts to satisfy them from a variable-sized block(s) of memory, while minimizing total memory usage.
232A general-purpose dynamic-allocation algorithm cannot anticipate allocation requests so its time and space performance is rarely optimal (bin packing).
233However, allocators take advantage of allocation patterns in typical programs (heuristics) to produce excellent results, both in time and space (similar to LRU paging).
234Allocators use similar techniques, but each optimizes specific allocation patterns.
235Nevertheless, allocators are a series of compromises, occasionally with some static or dynamic tuning parameters to optimize specific request patterns.
236
237
238\subsection{Memory Structure}
239\label{s:MemoryStructure}
240
241Figure~\ref{f:ProgramAddressSpace} shows the typical layout of a program's address space (high to low) divided into a number of zones, with free memory surrounding the dynamic code/data~\cite{memlayout}.
242Static code and data are placed into memory at load time from the executable and are fixed-sized at runtime.
243Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex especially in a multi-threaded program~\cite{Huang06}.
244However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
245Stack memory is managed by the program call/return-mechanism using a LIFO technique, which works well for sequential programs.
246For stackful coroutines and user threads, a new stack is commonly created in the dynamic-allocation memory.
247The dynamic-allocation memory is often a contiguous area (can be memory mapped as multiple areas), which starts empty and grows/shrinks as the program creates/deletes variables with independent lifetime.
248The programming-language's runtime manages this area, where management complexity is a function of the mechanism for deleting variables.
249This work focuses solely on management of the dynamic-allocation memory.
250
251\begin{figure}
252\centering
253\input{AddressSpace.pstex_t}
254\vspace{-5pt}
255\caption{Program Address Space Divided into Zones}
256\label{f:ProgramAddressSpace}
257\end{figure}
258
259
260\subsection{Dynamic Memory-Management}
261\label{s:DynamicMemoryManagement}
262
263Modern programming languages manage dynamic memory in different ways.
264Some languages, such as Lisp~\cite{CommonLisp}, Java~\cite{Java}, Haskell~\cite{Haskell}, Go~\cite{Go}, provide explicit allocation but \emph{implicit} deallocation of data through garbage collection~\cite{Wilson92}.
265In general, garbage collection supports memory compaction, where dynamic (live) data is moved during runtime to better utilize space.
266However, moving data requires finding and updating pointers to it to reflect the new data locations.
267Programming languages such as C~\cite{C}, \CC~\cite{C++}, and Rust~\cite{Rust} provide the programmer with explicit allocation \emph{and} deallocation of data.
268These languages cannot find and subsequently move live data because pointers can be created to any storage zone, including internal components of allocated objects, and may contain temporary invalid values generated by pointer arithmetic.
269Attempts have been made to perform quasi garbage collection in C/\CC~\cite{Boehm88}, but it is a compromise.
270This work only examines dynamic management with \emph{explicit} deallocation.
271While garbage collection and compaction are not part this work, many of the results are applicable to the allocation phase in any memory-management approach.
272
273Most programs use a general-purpose allocator, usually the one provided by the programming-language's runtime.
274In certain languages, programmers can write specialize allocators for specific needs.
275C and \CC allow easy replacement of the default memory allocator through a standard API.
276Jikes RVM MMTk~\cite{MMTk} provides a similar generalization for the Java virtual machine.
277As well, new languages support concurrency (kernel and/or user threading), which must be safely handled by the allocator.
278Hence, several alternative allocators exist for C/\CC with the goal of scaling in a multi-threaded program~\cite{Berger00,mtmalloc,streamflow,tcmalloc}.
279This work examines the design of high-performance allocators for use by kernel and user multi-threaded applications written in C/\CC.
280
281
282\subsection{Contributions}
283\label{s:Contributions}
284
285This work provides the following contributions in the area of explicit concurrent dynamic-allocation:
286\begin{enumerate}[leftmargin=*,itemsep=0pt]
287\item
288Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,400 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions for the concurrent languages \uC~\cite{uC++} and \CFA~\cite{Moss18,Delisle21} using user-level threads running on multiple kernel threads (M:N threading).
289
290\item
291Extend the standard C heap functionality by preserving with each allocation its request size, the amount allocated, whether it is zero fill, and its alignment.
292
293\item
294Use the preserved zero fill and alignment as \emph{sticky} properties for @realloc@ and @reallocarray@ to zero-fill and align when storage is extended or copied.
295Without this extension, it is unsafe to @realloc@ storage these allocations if the properties are not preserved when copying.
296This silent problem is unintuitive to programmers and difficult to locate because it is transient.
297
298\item
299Provide additional heap operations to make allocation properties orthogonally accessible.
300\begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt]
301\item
302@aalloc( dimension, elemSize )@ same as @calloc@ except memory is \emph{not} zero filled, which is significantly faster than @calloc@.
303\item
304@amemalign( alignment, dimension, elemSize )@ same as @aalloc@ with memory alignment.
305\item
306@cmemalign( alignment, dimension, elemSize )@ same as @calloc@ with memory alignment.
307\item
308@resize( oaddr, size )@ re-purpose an old allocation for a new type \emph{without} preserving fill or alignment.
309\item
310@aligned_resize( oaddr, alignment, size )@ re-purpose an old allocation with new alignment but \emph{without} preserving fill.
311\item
312@aligned_realloc( oaddr, alignment, size )@ same as @realloc@ but adding or changing alignment.
313\item
314@aligned_reallocarray( oaddr, alignment, dimension, elemSize )@ same as @reallocarray@ but adding or changing alignment.
315\end{itemize}
316
317\item
318Provide additional query operations to access information about an allocation:
319\begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt]
320\item
321@malloc_alignment( addr )@ returns the alignment of the allocation.
322If the allocation is not aligned or @addr@ is @NULL@, the minimal alignment is returned.
323\item
324@malloc_zero_fill( addr )@ returns a boolean result indicating if the memory is allocated with zero fill, \eg by @calloc@/@cmemalign@.
325\item
326@malloc_size( addr )@ returns the size of the memory allocation.
327\item
328@malloc_usable_size( addr )@ returns the usable (total) size of the memory, \ie the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
329\end{itemize}
330
331\item
332Provide optional extensive, fast, and contention-free allocation statistics to understand allocation behaviour, accessed by:
333\begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt]
334\item
335@malloc_stats()@ print memory-allocation statistics on the file-descriptor set by @malloc_stats_fd@ (default @stderr@).
336\item
337@malloc_info( options, stream )@ print memory-allocation statistics as an XML string on the specified file-descriptor set by @malloc_stats_fd@ (default @stderr@).
338\item
339@malloc_stats_fd( fd )@ set file-descriptor number for printing memory-allocation statistics (default @stderr@).
340This file descriptor is used implicitly by @malloc_stats@ and @malloc_info@.
341\end{itemize}
342
343\item
344Provide extensive runtime checks to validate allocation operations and identify the amount of unfreed storage at program termination.
345
346\item
347Build 8 different versions of the allocator: static or dynamic linking, with or without statistics or debugging.
348A program may link to any of these 8 versions of the allocator often without recompilation (@LD_PRELOAD@).
349
350\item
351Provide additional heap wrapper functions in \CFA creating a more usable set of allocation operations and properties.
352
353\item
354A micro-benchmark test-suite for comparing allocators rather than relying on a suite of arbitrary programs.
355These micro-benchmarks have adjustment knobs to simulate allocation patterns hard-coded into arbitrary test programs
356\end{enumerate}
357
358
359\section{Background}
360
361The following is a quick overview of allocator design options that affect memory usage and performance (see~\cite{Zulfiqar22} for more details).
362Dynamic acquires and releases obtain storage for a program variable, called an \newterm{object}, through calls such as @malloc@/@new@ and @free@/@delete@ in C/\CC.
363A \newterm{memory allocator} contains a complex data-structure and code that manages the layout of objects in the dynamic-allocation zone.
364The management goals are to make allocation/deallocation operations as fast as possible while densely packing objects to make efficient use of memory.
365Since objects in C/\CC cannot be moved to aid the packing process, only adjacent free storage can be \newterm{coalesced} into larger free areas.
366The allocator grows or shrinks the dynamic-allocation zone to obtain storage for objects and reduce memory usage via OS calls, such as @mmap@ or @sbrk@ in UNIX.
367
368
369\subsection{Allocator Components}
370\label{s:AllocatorComponents}
371
372Figure~\ref{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
373The \newterm{management data} is a data structure located at a known memory address and contains fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
374For multi-threaded programs, additional management data may exist in \newterm{thread-local storage} (TLS) for each kernel thread executing the program.
375The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
376Allocated objects (light grey) are variable sized, and are allocated and maintained by the program;
377\ie only the program knows the location of allocated storage.
378Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating location of new allocations.
379Reserved memory (dark grey) is one or more blocks of memory obtained from the \newterm{operating system} (OS) but not yet allocated to the program;
380if there are multiple reserved blocks, they are also chained together.
381
382\begin{figure}
383\centering
384\input{AllocatorComponents}
385\caption{Allocator Components (Heap)}
386\label{f:AllocatorComponents}
387\end{figure}
388
389In many allocator designs, allocated objects and reserved blocks have management data embedded within them (see also Section~\ref{s:ObjectContainers}).
390Figure~\ref{f:AllocatedObject} shows an allocated object with a header, trailer, and optional spacing around the object.
391The header contains information about the object, \eg size, type, etc.
392The trailer may be used to simplify coalescing and/or for security purposes to mark the end of an object.
393An object may be preceded by padding to ensure proper alignment.
394Some algorithms quantize allocation requests, resulting in additional space after an object less than the quantized value.
395When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
396
397A free object often contains management data, \eg size, pointers, etc.
398Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
399For internal chaining, the amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, allocation requests less than 16 bytes are rounded up.
400Often the minimum storage alignment and free-node size are the same.
401The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new program data and/or management information.
402
403\begin{figure}
404\centering
405\input{AllocatedObject}
406\caption{Allocated Object}
407\label{f:AllocatedObject}
408\end{figure}
409
410
411\subsection{Single-Threaded Memory-Allocator}
412\label{s:SingleThreadedMemoryAllocator}
413
414In a sequential (single threaded) program, the program thread performs all allocation operations and concurrency issues do not exist.
415However, interrupts logically introduce concurrency, if the signal handler performs allocation/deallocation (serially reusable problem~\cite{SeriallyReusable}).
416In general, the primary issues in a single-threaded allocator are fragmentation and locality.
417
418\subsubsection{Fragmentation}
419\label{s:Fragmentation}
420
421Fragmentation is memory requested from the OS but not used allocated objects in by the program.
422Figure~\ref{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: \emph{internal} or \emph{external}.
423
424\begin{figure}
425\centering
426\input{IntExtFragmentation}
427\caption{Internal and External Fragmentation}
428\label{f:InternalExternalFragmentation}
429\end{figure}
430
431\newterm{Internal fragmentation} is unaccessible allocated memory, such as headers, trailers, padding, and spacing around an allocated object.
432Internal fragmentation is problematic when management space becomes a significant proportion of an allocated object, \eg for objects $<$16 bytes, memory usage doubles.
433An allocator strives to keep internal management information to a minimum.
434
435\newterm{External fragmentation} is memory not allocated in the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
436This memory is problematic in two ways: heap blowup and highly fragmented memory.
437\newterm{Heap blowup} occurs when freed memory cannot be reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
438Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks are to small to service requests.
439% Figure~\ref{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
440Heap blowup occurs with allocator policies that are too restrictive in reusing freed memory, \eg the allocated size cannot use a larger free block and/or no coalescing of free storage.
441% Blocks of free memory become smaller and non-contiguous making them less useful in serving allocation requests.
442% Memory is highly fragmented when most free blocks are unusable because of their sizes.
443% For example, Figure~\ref{f:Contiguous} and Figure~\ref{f:HighlyFragmented} have the same quantity of external fragmentation, but Figure~\ref{f:HighlyFragmented} is highly fragmented.
444% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the OS.
445
446% \begin{figure}
447% \centering
448% \input{MemoryFragmentation}
449% \caption{Memory Fragmentation}
450% \label{f:MemoryFragmentation}
451% \vspace{10pt}
452% \subfloat[Contiguous]{
453% \input{ContigFragmentation}
454% \label{f:Contiguous}
455% } % subfloat
456% \subfloat[Highly Fragmented]{
457% \input{NonContigFragmentation}
458% \label{f:HighlyFragmented}
459% } % subfloat
460% \caption{Fragmentation Quality}
461% \label{f:FragmentationQuality}
462% \end{figure}
463
464For a single-threaded memory allocator, three basic approaches for controlling fragmentation are identified~\cite{Johnstone99}.
465The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
466Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
467Any storage larger than the request can become spacing after the object or split into a smaller free object.
468% The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.
469
470The second approach is a \newterm{segregated} or \newterm{binning algorithm} with a set of lists for different sized freed objects.
471When an object is allocated, the requested size is rounded up to the nearest bin-size, often leading to space after the object.
472A binning algorithm is fast at finding free memory of the appropriate size and allocating it, since the first free object on the free list is used.
473Fewer bin sizes means a faster search to find a matching bin, but larger differences between allocation and bin size, which increases unusable space after objects (internal fragmentation).
474More bin sizes means a slower search but smaller differences matching between allocation and bin size resulting in less internal fragmentation but more external fragmentation if larger bins cannot service smaller requests.
475Allowing larger bins to service smaller allocations when the matching bin is empty means the freed object can be returned to the matching or larger bin (some advantages to either scheme).
476% For example, with bin sizes of 8 and 16 bytes, a request for 12 bytes allocates only 12 bytes, but when the object is freed, it is placed on the 8-byte bin-list.
477% For subsequent requests, the bin free-lists contain objects of different sizes, ranging from one bin-size to the next (8-16 in this example), and a sequential-fit algorithm may be used to find an object large enough for the requested size on the associated bin list.
478
479The third approach is a \newterm{splitting} and \newterm{coalescing} algorithms.
480When an object is allocated, if there is no matching free storage, a larger free object is split into two smaller objects, one matching the allocation size.
481For example, in the \newterm{buddy system}, a block of free memory is split into equal chunks, splitting continues until a minimal block is created that fits the allocation.
482When an object is deallocated, it is coalesced with the objects immediately before/after it in memory, if they are free, turning them into a larger block.
483Coalescing can be done eagerly at each deallocation or lazily when an allocation cannot be fulfilled.
484However, coalescing increases allocation latency (unbounded delays), both for allocation and deallocation.
485While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
486% Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
487
488
489\subsubsection{Locality}
490\label{s:Locality}
491
492The principle of locality recognizes that programs tend to reference a small set of data, called a \newterm{working set}, for a certain period of time, composed of temporal and spatial accesses~\cite{Denning05}.
493% Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
494% Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
495Hardware takes advantage of the working set through multiple levels of caching and paging, \ie memory hierarchy.
496% When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
497For example, entire cache lines are transferred between cache and memory, and entire virtual-memory pages are transferred between memory and disk.
498% A program exhibiting good locality has better performance due to fewer cache misses and page faults\footnote{With the advent of large RAM memory, paging is becoming less of an issue in modern programming.}.
499
500Temporal locality is largely controlled by program accesses to its variables~\cite{Feng05}.
501An allocator has only indirect influence on temporal locality but largely dictates spatial locality.
502For temporal locality, an allocator tries to return recently freed storage for new allocations, as this memory is still \emph{warm} in the memory hierarchy.
503For spatial locality, an allocator places objects used together close together in memory, so the working set of the program fits into the fewest possible cache lines and pages.
504% However, usage patterns are different for every program as is the underlying hardware memory architecture;
505% hence, no general-purpose memory-allocator can provide ideal locality for every program on every computer.
506
507An allocator can easily degrade locality by increasing the working set.
508An allocator can access an unbounded number of free objects when matching an allocation or coalescing, causing multiple cache or page misses~\cite{Grunwald93}.
509An allocator can spatially separate related data by binning free storage anywhere in memory, so the related objects are highly separated.
510
511
512\subsection{Multi-Threaded Memory-Allocator}
513\label{s:MultiThreadedMemoryAllocator}
514
515In a concurrent (multi-threaded) program, multiple program threads performs allocation operations and all concurrency issues arise.
516Along with fragmentation and locality issues, a multi-threaded allocator must deal with mutual exclusion, false sharing, and additional forms of heap blowup.
517
518
519\subsubsection{Mutual Exclusion}
520\label{s:MutualExclusion}
521
522\newterm{Mutual exclusion} provides sequential access to the shared-management data of the heap.
523There are two performance issues for mutual exclusion.
524First is the cost of performing at least one hardware atomic operation every time a shared resource is accessed.
525Second is \emph{contention} on simultaneous access, so some threads must wait until the resource is released.
526Contention can be reduced in a number of ways:
5271) Using multiple fine-grained locks versus a single lock to spread the contention across the locks.
5282) Using trylock and generating new storage if the lock is busy (classic space versus time tradeoff).
5293) Using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
530However, all approaches have degenerate cases where program contention to the heap is high, which is beyond the allocator's control.
531
532
533\subsubsection{False Sharing}
534\label{s:FalseSharing}
535
536False sharing occurs when two or more threads simultaneously modify different objects sharing a cache line.
537Changes now invalidate each thread's cache, even though the threads may be uninterested in the other modified object.
538False sharing can occur three ways:
5391) Thread T$_1$ allocates objects O$_1$ and O$_2$ on the same cache line and passes O$_2$'s reference to thread T$_2$;
540both threads now simultaneously modifying the objects on the same cache line.
5412) Objects O$_1$ and O$_2$ are allocated on the same cache line by thread T$_3$ and their references are passed to T$_1$ and T$_2$, which simultaneously modify the objects.
5423) T$_2$ deallocates O$_2$, T$_1$ allocates O$_1$ on the same cache line as O$_2$, and T$_2$ reallocated O$_2$ while T$_1$ is using O$_1$.
543In all three cases, the allocator performs a hidden and possibly transient (non-determinism) operation, making it extremely difficult to find and fix the issue.
544
545
546\subsubsection{Heap Blowup}
547\label{s:HeapBlowup}
548
549In a multi-threaded program, heap blowup occurs when memory freed by one thread is inaccessible to other threads due to the allocation strategy.
550Specific examples are presented in later subsections.
551
552
553\subsection{Multi-Threaded Allocator Features}
554\label{s:MultiThreadedAllocatorFeatures}
555
556The following features are used in the construction of multi-threaded allocators.
557
558\subsubsection{Multiple Heaps}
559\label{s:MultipleHeaps}
560
561Figure~\ref{f:ThreadHeapRelationship} shows how a multi-threaded allocator reduced contention by subdividing a single heap into multiple heaps.
562
563\begin{figure}
564\centering
565\subfloat[T:1]{
566% \input{SingleHeap.pstex_t}
567 \input{SingleHeap}
568 \label{f:SingleHeap}
569} % subfloat
570\vrule
571\subfloat[T:H]{
572% \input{MultipleHeaps.pstex_t}
573 \input{SharedHeaps}
574 \label{f:SharedHeaps}
575} % subfloat
576\vrule
577\subfloat[1:1]{
578% \input{MultipleHeapsGlobal.pstex_t}
579 \input{PerThreadHeap}
580 \label{f:PerThreadHeap}
581} % subfloat
582\caption{Multiple Heaps, Thread:Heap Relationship}
583\label{f:ThreadHeapRelationship}
584\end{figure}
585
586\begin{description}[leftmargin=*]
587\item[T:1 model (Figure~\ref{f:SingleHeap})] is all threads (T) sharing a single heap (1).
588The arrows indicate memory movement for allocation/deallocation operations.
589Memory is obtained from freed objects, reserved memory, or the OS;
590freed memory can be returned to the OS.
591To handle concurrency, a single lock is used for all heap operations or fine-grained locking if operations can be made independent.
592As threads perform large numbers of allocations, a single heap becomes a significant source of contention.
593
594\item[T:H model (Figure~\ref{f:SharedHeaps})] is multiple threads (T) sharing multiple heaps (H).
595The allocator independently allocates/deallocates heaps and assigns threads to heaps based on dynamic contention pressure.
596Locking is required within each heap, but contention is reduced because fewer threads access a specific heap.
597The goal is minimal heaps (storage) and contention per heap (time).
598A worst case is more heaps than threads, \eg many threads at startup create a large number of heaps and then the threads reduce.
599
600% For example, multiple heaps are managed in a pool, starting with a single or a fixed number of heaps that increase\-/decrease depending on contention\-/space issues.
601% At creation, a thread is associated with a heap from the pool.
602% In some implementations of this model, when the thread attempts an allocation and its associated heap is locked (contention), it scans for an unlocked heap in the pool.
603% If an unlocked heap is found, the thread changes its association and uses that heap.
604% If all heaps are locked, the thread may create a new heap, use it, and then place the new heap into the pool;
605% or the thread can block waiting for a heap to become available.
606% While the heap-pool approach often minimizes the number of extant heaps, the worse case can result in more heaps than threads;
607% \eg if the number of threads is large at startup with many allocations creating a large number of heaps and then the number of threads reduces.
608
609% Threads using multiple heaps need to determine the specific heap to access for an allocation/deallocation, \ie association of thread to heap.
610% A number of techniques are used to establish this association.
611% The simplest approach is for each thread to have a pointer to its associated heap (or to administrative information that points to the heap), and this pointer changes if the association changes.
612% For threading systems with thread-local storage, the heap pointer is created using this mechanism;
613% otherwise, the heap routines must simulate thread-local storage using approaches like hashing the thread's stack-pointer or thread-id to find its associated heap.
614
615% The storage management for multiple heaps is more complex than for a single heap (see Figure~\ref{f:AllocatorComponents}).
616% Figure~\ref{f:MultipleHeapStorage} illustrates the general storage layout for multiple heaps.
617% Allocated and free objects are labelled by the thread or heap they are associated with.
618% (Links between free objects are removed for simplicity.)
619% The management information for multiple heaps in the static zone must be able to locate all heaps.
620% The management information for the heaps must reside in the dynamic-allocation zone if there are a variable number.
621% Each heap in the dynamic zone is composed of a list of free objects and a pointer to its reserved memory.
622% An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
623% Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
624% Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the OS.
625
626% \begin{figure}
627% \centering
628% \input{MultipleHeapsStorage}
629% \caption{Multiple-Heap Storage}
630% \label{f:MultipleHeapStorage}
631% \end{figure}
632
633Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup.
634The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory.
635Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the OS (see Section~\ref{s:Ownership}).
636Returning storage to the OS may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
637% In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
638
639Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space.
640Now, each heap obtains and returns storage to/from the global heap rather than the OS.
641Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold.
642Similarly, the global heap buffers this memory, obtaining and returning storage to/from the OS as necessary.
643The global heap does not have its own thread and makes no internal allocation requests;
644instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations.
645Hence, the worst-case cost of a memory operation includes all these steps.
646With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the OS to achieve the same goal and is independent of the mechanism used by the OS to present dynamic memory to an address space.
647However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking.
648A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention.
649In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap.
650
651\item[1:1 model (Figure~\ref{f:PerThreadHeap})] is each thread (1) has a heap (1), eliminating most contention and locking if threads seldom access another thread's heap (see Section~\ref{s:Ownership}).
652A thread's objects are consolidated in its heap, better utilizing the cache and paging during thread execution.
653In contrast, the T:H model can spread thread objects over a larger area in different heaps.
654Thread heaps can also reduces false-sharing, unless there are overlapping memory boundaries from another thread's heap.
655%For example, assume page boundaries coincide with cache line boundaries, if a thread heap always acquires pages of memory then no two threads share a page or cache line unless pointers are passed among them.
656
657When a thread terminates, it can free its heap objects to the global heap, or the thread heap is retained as-is and reused for a new thread in the future.
658Destroying a heap can reduce external fragmentation sooner, since all free objects in the global heap are available for immediate reuse.
659Alternatively, reusing a heap can aid the inheriting thread, if it has a similar allocation pattern because the heap in primed with unfreed storage of the right sizes.
660\end{description}
661
662
663\subsubsection{User-Level Threading}
664
665It is possible to use any of the heap models with user-level (M:N) threading.
666However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the OS, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
667It is difficult to retain this goal, if the user-threading model is directly involved with the heap model.
668Figure~\ref{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.
669Hence, a user thread allocates/deallocates from/to the heap of the kernel thread on which it is currently executing.
670
671\begin{figure}
672\centering
673\input{UserKernelHeaps}
674\caption{User-Level Kernel Heaps}
675\label{f:UserLevelKernelHeaps}
676\end{figure}
677
678Adopting user threading results in a subtle problem with shared heaps.
679With kernel threading, an operation started by a kernel thread is always completed by that thread.
680For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap, even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
681However, this correctness property is not preserved for user-level threading.
682A user thread can start an allocation/deallocation on one kernel thread, be preempted (time slice), and continue running on a different kernel thread to complete the operation~\cite{Dice02}.
683When the user thread continues on the new kernel thread, it may have pointers into the previous kernel-thread's heap and hold locks associated with it.
684To get the same kernel-thread safety, time slicing must be disabled/\-enabled around these operations, so the user thread cannot jump to another kernel thread.
685However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is infrequent (milliseconds).
686Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
687Occasional ignoring of a preemption should be benign, but a persistent lack of preemption can result in starvation;
688techniques like rolling forward the preemption to the next context switch can be used.
689
690
691\subsubsection{Ownership}
692\label{s:Ownership}
693
694\newterm{Ownership} defines which heap an object is returned-to on deallocation.
695If a thread returns an object to the heap it was originally allocated from, a heap has ownership of its objects.
696Alternatively, a thread can return an object to the heap it is currently associated with, which can be any heap accessible during a thread's lifetime.
697Figure~\ref{f:HeapsOwnership} shows an example of multiple heaps (minus the global heap) with and without ownership.
698Again, the arrows indicate the direction memory conceptually moves for each kind of operation.
699For the 1:1 thread:heap relationship, a thread only allocates from its own heap, and without ownership, a thread only frees objects to its own heap, which means the heap is private to its owner thread and does not require any locking, called a \newterm{private heap}.
700For the T:1/T:H models with or without ownership or the 1:1 model with ownership, a thread may free objects to different heaps, which makes each heap publicly accessible to all threads, called a \newterm{public heap}.
701
702\begin{figure}
703\centering
704\subfloat[Ownership]{
705 \input{MultipleHeapsOwnership}
706} % subfloat
707\hspace{0.25in}
708\subfloat[No Ownership]{
709 \input{MultipleHeapsNoOwnership}
710} % subfloat
711\caption{Heap Ownership}
712\label{f:HeapsOwnership}
713\end{figure}
714
715% Figure~\ref{f:MultipleHeapStorageOwnership} shows the effect of ownership on storage layout.
716% (For simplicity, assume the heaps all use the same size of reserves storage.)
717% In contrast to Figure~\ref{f:MultipleHeapStorage}, each reserved area used by a heap only contains free storage for that particular heap because threads must return free objects back to the owner heap.
718% Passive false-sharing may still occur, if delayed ownership is used (see below).
719
720% \begin{figure}
721% \centering
722% \input{MultipleHeapsOwnershipStorage.pstex_t}
723% \caption{Multiple-Heap Storage with Ownership}
724% \label{f:MultipleHeapStorageOwnership}
725% \end{figure}
726
727The main advantage of ownership is preventing heap blowup by returning storage for reuse by the owner heap.
728Ownership prevents the classical problem where one thread performs allocations from one heap, passes the object to another thread, and the receiving thread deallocates the object to another heap, hence draining the initial heap of storage.
729Because multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur.
730The exception is for the 1:1 model if reserved memory does not overlap a cache-line because all allocated storage within a used area is associated with a single thread.
731In this case, there is no allocator-induced active false-sharing because two adjacent allocated objects used by different threads cannot share a cache-line.
732Finally, there is no allocator-induced passive false-sharing because two adjacent allocated objects used by different threads cannot occur as free objects are returned to the owner heap.
733% For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, the deallocation by Thread$_2$ returns Object$_2$ back to Thread$_1$'s heap;
734% hence a subsequent allocation by Thread$_2$ cannot return this storage.
735The disadvantage of ownership is deallocating to another thread's heap so heaps are no longer private and require locks to provide safe concurrent access.
736
737Object ownership can be immediate or delayed, meaning free objects may be batched on a separate free list either by the returning or receiving thread.
738While the returning thread can batch objects, batching across multiple heaps is complex and there is no obvious time when to push back to the owner heap.
739It is better for returning threads to immediately return to the receiving thread's batch list as the receiving thread has better knowledge when to incorporate the batch list into its free pool.
740Batching leverages the fact that most allocation patterns use the contention-free fast-path, so locking on the batch list is rare for both the returning and receiving threads.
741Finally, it is possible for heaps to temporarily steal owned objects rather than return them immediately and then reallocate these objects again.
742It is unclear whether the complexity of this approach is worthwhile.
743% However, stealing can result in passive false-sharing.
744% For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially.
745% If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
746
747For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps.% (see~Figure~\ref{f:HybridPrivatePublicHeap}).
748The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
749In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
750Similarly, a thread first deallocates an object to its private heap, and second to the public heap.
751Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
752% Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
753Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which does allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.
754
755% \begin{figure}
756% \centering
757% \input{PrivatePublicHeaps.pstex_t}
758% \caption{Hybrid Private/Public Heap for Per-thread Heaps}
759% \label{f:HybridPrivatePublicHeap}
760% \vspace{10pt}
761% \input{RemoteFreeList.pstex_t}
762% \caption{Remote Free-List}
763% \label{f:RemoteFreeList}
764% \end{figure}
765
766% As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
767% For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
768% To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
769% Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
770% Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
771
772% If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
773% In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
774% If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
775
776% The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
777% Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
778% In many case, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
779% For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
780
781
782\begin{figure}
783\centering
784\subfloat[Object Headers]{
785 \input{ObjectHeaders}
786 \label{f:ObjectHeaders}
787} % subfloat
788\subfloat[Object Container]{
789 \input{Container}
790 \label{f:ObjectContainer}
791} % subfloat
792\caption{Header Placement}
793\label{f:HeaderPlacement}
794\end{figure}
795
796
797\subsubsection{Object Containers}
798\label{s:ObjectContainers}
799
800Associating header data with every allocation can result in significant internal fragmentation, as shown in Figure~\ref{f:ObjectHeaders}.
801Especially if the headers contain redundant data, \eg object size may be the same for many objects because programs only allocate a small set of object sizes.
802As well, the redundant data can result in poor cache usage, since only a portion of the cache line is holding useful data from the program's perspective.
803Spatial locality can also be negatively affected leading to poor cache locality~\cite{Feng05}.
804While the header and object are spatially together in memory, they are generally not accessed temporarily together;
805\eg an object is accessed by the program after it is allocated, while the header is accessed by the allocator after it is free.
806
807An alternative approach factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks}~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
808The header for the container holds information necessary for all objects in the container;
809a trailer may also be used at the end of the container.
810Similar to the approach described for thread heaps in Section~\ref{s:MultipleHeaps}, if container boundaries do not overlap with memory of another container at crucial boundaries and all objects in a container are allocated to the same thread, allocator-induced active false-sharing is avoided.
811
812The difficulty with object containers lies in finding the object header/trailer given only the object address, since that is normally the only information passed to the deallocation operation.
813One way is to start containers on aligned addresses in memory, then truncate the lower bits of the object address to obtain the header address (or round up and subtract the trailer size to obtain the trailer address).
814For example, if an object at address 0xFC28\,EF08 is freed and containers are aligned on 64\,KB (0x0001\,0000) addresses, then the container header is at 0xFC28\,0000.
815
816Normally, a container has homogeneous objects, \eg object size and ownership.
817This approach greatly reduces internal fragmentation since far fewer headers are required, and potentially increases spatial locality as a cache line or page holds more objects since the objects are closer together.
818However, different sized objects are further apart in separate containers.
819Depending on the program, this may or may not improve locality.
820If the program uses several objects from a small number of containers in its working set, then locality is improved since fewer cache lines and pages are required.
821If the program uses many containers, there is poor locality, as both caching and paging increase.
822Another drawback is that external fragmentation may be increased since containers reserve space for objects that may never be allocated, \ie there are often multiple containers for each size only partially full.
823However, external fragmentation can be reduced by using small containers.
824
825Containers with heterogeneous objects implies different headers describing them, which complicates the problem of locating a specific header solely by an address.
826A couple of solutions can be used to implement containers with heterogeneous objects.
827However, the problem with allowing objects of different sizes is that the number of objects, and therefore headers, in a single container is unpredictable.
828One solution allocates headers at one end of the container, while allocating objects from the other end of the container;
829when the headers meet the objects, the container is full.
830Freed objects cannot be split or coalesced since this causes the number of headers to change.
831The difficulty in this strategy remains in finding the header for a specific object;
832in general, a search is necessary to find the object's header among the container headers.
833A second solution combines the use of container headers and individual object headers.
834Each object header stores the object's heterogeneous information, such as its size, while the container header stores the homogeneous information, such as the owner when using ownership.
835This approach allows containers to hold different types of objects, but does not completely separate headers from objects.
836% The benefit of the container in this case is to reduce some redundant information that is factored into the container header.
837
838% In summary, object containers trade off internal fragmentation for external fragmentation by isolating common administration information to remove/reduce internal fragmentation, but at the cost of external fragmentation as some portion of a container may not be used and this portion is unusable for other kinds of allocations.
839% A consequence of this tradeoff is its effect on spatial locality, which can produce positive or negative results depending on program access-patterns.
840
841
842\paragraph{Container Ownership}
843\label{s:ContainerOwnership}
844
845Without ownership, objects in a container are deallocated to the heap currently associated with the thread that frees the object.
846Thus, different objects in a container may be on different heap free-lists. % (see Figure~\ref{f:ContainerNoOwnershipFreelist}).
847With ownership, all objects in a container belong to the same heap,
848% (see Figure~\ref{f:ContainerOwnershipFreelist}),
849so ownership of an object is determined by the container owner.
850If multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur.
851Only with the 1:1 model and ownership is active and passive false-sharing avoided (see Section~\ref{s:Ownership}).
852Passive false-sharing may still occur, if delayed ownership is used.
853Finally, a completely free container can become reserved storage and be reset to allocate objects of a new size or freed to the global heap.
854
855% \begin{figure}
856% \centering
857% \subfloat[No Ownership]{
858% \input{ContainerNoOwnershipFreelist}
859% \label{f:ContainerNoOwnershipFreelist}
860% } % subfloat
861% \vrule
862% \subfloat[Ownership]{
863% \input{ContainerOwnershipFreelist}
864% \label{f:ContainerOwnershipFreelist}
865% } % subfloat
866% \caption{Free-list Structure with Container Ownership}
867% \end{figure}
868
869When a container changes ownership, the ownership of all objects within it change as well.
870Moving a container involves moving all objects on the heap's free-list in that container to the new owner.
871This approach can reduce contention for the global heap, since each request for objects from the global heap returns a container rather than individual objects.
872
873Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
874For example, if a container changes ownership through the global heap, then a thread allocating from the newly acquired container is actively false-sharing even though no objects are passed among threads.
875Note, once the thread frees the object, no more false sharing can occur until the container changes ownership again.
876To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
877One implementation approach that increases the freedom to return a free container to the OS involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the OS.
878
879% \begin{figure}
880% \centering
881% \subfloat[]{
882% \input{ContainerFalseSharing1}
883% \label{f:ContainerFalseSharing1}
884% } % subfloat
885% \subfloat[]{
886% \input{ContainerFalseSharing2}
887% \label{f:ContainerFalseSharing2}
888% } % subfloat
889% \caption{Active False-Sharing using Containers}
890% \label{f:ActiveFalseSharingContainers}
891% \end{figure}
892
893Using containers with ownership increases external fragmentation since a new container for a requested object size must be allocated separately for each thread requesting it.
894% In Figure~\ref{f:ExternalFragmentationContainerOwnership}, using object ownership allocates 80\% more space than without ownership.
895
896% \begin{figure}
897% \centering
898% \subfloat[No Ownership]{
899% \input{ContainerNoOwnership}
900% } % subfloat
901% \\
902% \subfloat[Ownership]{
903% \input{ContainerOwnership}
904% } % subfloat
905% \caption{External Fragmentation with Container Ownership}
906% \label{f:ExternalFragmentationContainerOwnership}
907% \end{figure}
908
909
910\paragraph{Container Size}
911\label{s:ContainerSize}
912
913One way to control the external fragmentation caused by allocating a large container for a small number of requested objects is to vary the size of the container.
914As described earlier, container boundaries need to be aligned on addresses that are a power of two to allow easy location of the header (by truncating lower bits).
915Aligning containers in this manner also determines the size of the container.
916However, the size of the container has different implications for the allocator.
917
918The larger the container, the fewer containers are needed, and hence, the fewer headers need to be maintained in memory, improving both internal fragmentation and potentially performance.
919However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation.
920With smaller containers, not only are there more containers, but a second new problem arises where objects are larger than the container.
921In general, large objects, \eg greater than 64\,KB, are allocated directly from the OS and are returned immediately to the OS to reduce long-term external fragmentation.
922If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate.
923Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header.
924
925In order to find the container header when using different sized containers, a super container is used (see~Figure~\ref{f:SuperContainers}).
926The super container spans several containers, contains a header with information for finding each container header, and starts on an aligned address.
927Super-container headers are found using the same method used to find container headers by dropping the lower bits of an object address.
928The containers within a super container may be different sizes or all the same size.
929If the containers in the super container are different sizes, then the super-container header must be searched to determine the specific container for an object given its address.
930If all containers in the super container are the same size, \eg 16KB, then a specific container header can be found by a simple calculation.
931The free space at the end of a super container is used to allocate new containers.
932
933\begin{figure}
934\centering
935\input{SuperContainers}
936% \includegraphics{diagrams/supercontainer.eps}
937\caption{Super Containers}
938\label{f:SuperContainers}
939\end{figure}
940
941Minimal internal and external fragmentation is achieved by having as few containers as possible, each being as full as possible.
942It is also possible to achieve additional benefit by using larger containers for popular small sizes, as it reduces the number of containers with associated headers.
943However, this approach assumes it is possible for an allocator to determine in advance which sizes are popular.
944Keeping statistics on requested sizes allows the allocator to make a dynamic decision about which sizes are popular.
945For example, after receiving a number of allocation requests for a particular size, that size is considered a popular request size and larger containers are allocated for that size.
946If the decision is incorrect, larger containers than necessary are allocated that remain mostly unused.
947A programmer may be able to inform the allocator about popular object sizes, using a mechanism like @mallopt@, in order to select an appropriate container size for each object size.
948
949
950\paragraph{Container Free-Lists}
951\label{s:containersfreelists}
952
953The container header allows an alternate approach for managing the heap's free-list.
954Rather than maintain a global free-list throughout the heap the containers are linked through their headers and only the local free objects within a container are linked together.
955Note, maintaining free lists within a container assumes all free objects in the container are associated with the same heap;
956thus, this approach only applies to containers with ownership.
957
958This alternate free-list approach can greatly reduce the complexity of moving all freed objects belonging to a container to another heap.
959To move a container using a global free-list, the free list is first searched to find all objects within the container.
960Each object is then removed from the free list and linked together to form a local free-list for the move to the new heap.
961With local free-lists in containers, the container is simply removed from one heap's free list and placed on the new heap's free list.
962Thus, when using local free-lists, the operation of moving containers is reduced from $O(N)$ to $O(1)$.
963However, there is the additional storage cost in the header, which increases the header size, and therefore internal fragmentation.
964
965% \begin{figure}
966% \centering
967% \subfloat[Global Free-List Among Containers]{
968% \input{FreeListAmongContainers}
969% \label{f:GlobalFreeListAmongContainers}
970% } % subfloat
971% \hspace{0.25in}
972% \subfloat[Local Free-List Within Containers]{
973% \input{FreeListWithinContainers}
974% \label{f:LocalFreeListWithinContainers}
975% } % subfloat
976% \caption{Container Free-List Structure}
977% \label{f:ContainerFreeListStructure}
978% \end{figure}
979
980When all objects in the container are the same size, a single free-list is sufficient.
981However, when objects in the container are different size, the header needs a free list for each size class when using a binning allocation algorithm, which can be a significant increase in the container-header size.
982The alternative is to use a different allocation algorithm with a single free-list, such as a sequential-fit allocation-algorithm.
983
984
985\subsubsection{Allocation Buffer}
986\label{s:AllocationBuffer}
987
988An allocation buffer is reserved memory (see Section~\ref{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
989That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
990Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or OS, respectively.
991The allocation buffer reduces contention and the number of global/OS calls.
992For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
993
994Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts (simple bump allocation).
995Furthermore, to prevent heap blowup, objects should be reused before allocating a new allocation buffer.
996Thus, allocation buffers are often allocated more frequently at program/thread start, and then allocations often diminish.
997
998Using an allocation buffer with a thread heap avoids active false-sharing, since all objects in the allocation buffer are allocated to the same thread.
999For example, if all objects sharing a cache line come from the same allocation buffer, then these objects are allocated to the same thread, avoiding active false-sharing.
1000Active false-sharing may still occur if objects are freed to the global heap and reused by another heap.
1001
1002Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated.
1003A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or OS.
1004The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer.
1005
1006The unused part of a container, neither allocated or freed, is an allocation buffer.
1007For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made.
1008This lazy method of constructing objects is beneficial in terms of paging and caching.
1009For example, although an entire container, possibly spanning several pages, is allocated from the OS, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
1010
1011
1012\subsubsection{Lock-Free Operations}
1013\label{s:LockFreeOperations}
1014
1015A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual thread has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}.
1016(A \newterm{wait-free algorithm} puts a bound on the number of steps any thread takes to complete an operation to prevent starvation.)
1017Lock-free operations can be used in an allocator to reduce or eliminate the use of locks.
1018While locks and lock-free data-structures often have equal performance, lock-free has the advantage of not holding a lock across preemption so other threads can continue to make progress.
1019With respect to the heap, these situations are unlikely unless all threads make extremely high use of dynamic-memory allocation, which can be an indication of poor design.
1020Nevertheless, lock-free algorithms can reduce the number of context switches, since a thread does not yield/block while waiting for a lock;
1021on the other hand, a thread may busy-wait for an unbounded period holding a processor.
1022Finally, lock-free implementations have greater complexity and hardware dependency.
1023Lock-free algorithms can be applied most easily to simple free-lists, \eg remote free-list, to allow lock-free insertion and removal from the head of a stack.
1024Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly more complex.
1025Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator.
1026
1027
1028\section{llheap}
1029
1030This section presents our new stand-alone, concurrent, low-latency memory-allocator, called llheap (low-latency heap), fulfilling the GNU C Library allocator API~\cite{GNUallocAPI} for C/\CC programs using kernel threads (1:1 threading), with specialized versions for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
1031The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator delay during an allocator call.
1032Excluded from the low-latency objective are (large) allocations requiring initialization, \eg zero fill, and/or data copying, which are outside the allocator's purview.
1033A direct consequence of this objective is very simple or no storage coalescing;
1034hence, llheap's design is willing to use more storage to lower latency.
1035This objective is apropos because systems research and industrial applications are striving for low latency and modern computers have huge amounts of RAM memory.
1036Finally, llheap's performance should be comparable with the current best allocators, both in space and time (see performance comparison in Section~\ref{c:Performance}).
1037
1038
1039\subsection{Design Choices}
1040
1041llheap's design was reviewed and changed multiple times during its development, with the final choices discussed here.
1042All designs focused on the allocation/free \newterm{fastpath}, \ie the shortest code path for the most common operations, \eg when an allocation can immediately return free storage or returned storage is not coalesced.
1043The model chosen is 1:1, so there is one thread-local heap for each KT.
1044(See Figure~\ref{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
1045Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
1046Therefore, heaps are uncontended for a KTs memory operations as every KT has its own thread-local heap, modulo operations on the global pool and ownership.
1047
1048Problems:
1049\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
1050\item
1051Need to know when a KT starts/terminates to create/delete its heap.
1052
1053\noindent
1054It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
1055\item
1056There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
1057
1058\noindent
1059The classic solution only deletes a heap after all referents are returned, which is complex.
1060The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
1061If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
1062To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
1063In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
1064\item
1065There can be significant external fragmentation as the number of KTs increases.
1066
1067\noindent
1068In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
1069Since the number of CPUs is relatively small, and a heap is also relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
1070\item
1071Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
1072\begin{quote}
1073A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
1074\end{quote}
1075If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
1076Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
1077Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
1078
1079Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.
1080
1081%There is the same serially-reusable problem with UTs migrating across KTs.
1082\end{itemize}
1083Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
1084
1085
1086\vspace{5pt}
1087\noindent
1088The conclusion from this design exercise is: any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
1089For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps might be shared by multiple threads, even when KTs $\le$ N.
1090For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
1091However, T:H=CPU has poor OS support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
1092More OS support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
1093So the 1:1 model had no atomic actions along the fastpath and no special OS support requirements.
1094The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in Section~\ref{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
1095
1096
1097% \begin{itemize}
1098% \item
1099% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
1100% \item
1101% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
1102% \item
1103% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
1104% that because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
1105% \end{itemize}
1106% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
1107
1108% \subsubsection{Advantages of distributed design}
1109%
1110% The distributed design of llheap is concurrent to work in multi-threaded applications.
1111% Some key benefits of the distributed design of llheap are as follows:
1112% \begin{itemize}
1113% \item
1114% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
1115% \item
1116% Low or almost no contention on heap resources.
1117% \item
1118% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
1119% \item
1120% Distributed design avoids unnecessary locks on resources shared across all KTs.
1121% \end{itemize}
1122
1123\subsubsection{Allocation Latency}
1124
1125A primary goal of llheap is low latency, hence the name low-latency heap (llheap).
1126Two forms of latency are internal and external.
1127Internal latency is the time to perform an allocation, while external latency is time to obtain or return storage from or to the OS.
1128Ideally latency is $O(1)$ with a small constant.
1129
1130$O(1)$ internal latency means no open searching on the allocation fastpath, which largely prohibits coalescing.
1131The mitigating factor is that most programs have a small, fixed, allocation pattern, where the majority of allocation operations can be $O(1)$ and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
1132Modern computers have large memories so a slight increase in program footprint is not a problem.
1133
1134$O(1)$ external latency means obtaining one large storage area from the OS and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
1135Excluding real-time OSs, OS operations are unbounded, and hence some external latency is unavoidable.
1136The mitigating factor is that OS calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \pageref{p:malloc_expansion}).
1137Furthermore, while OS calls are unbounded, many are now reasonably fast, so their latency is tolerable because it occurs infrequently.
1138
1139
1140%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1141
1142\subsection{llheap Structure}
1143
1144Figure~\ref{f:llheapStructure} shows the design of llheap, which uses the following features:
11451:1 multiple-heap model to minimize the fastpath,
1146can be built with or without heap ownership,
1147headers per allocation versus containers,
1148no coalescing to minimize latency,
1149global heap memory (pool) obtained from the OS using @mmap@ to create and reuse heaps needed by threads,
1150local reserved memory (pool) per heap obtained from global pool,
1151global reserved memory (pool) obtained from the OS using @sbrk@ call,
1152optional fast-lookup table for converting allocation requests into bucket sizes,
1153optional statistic-counters table for accumulating counts of allocation operations.
1154
1155\begin{figure}
1156\centering
1157% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
1158\input{llheap}
1159\caption{llheap Structure}
1160\label{f:llheapStructure}
1161\end{figure}
1162
1163llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores, that persists for program duration.
1164There is a global bump-pointer to the next free heap in the array.
1165When this array is exhausted, another array of heaps is allocated.
1166There is a global top pointer for a intrusive linked-list to chain free heaps from terminated threads.
1167When statistics are turned on, there is a global top pointer for a intrusive linked-list to chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps using @malloc_stats@.
1168
1169When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
1170When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of number of heaps.
1171The free heaps are stored on stack so hot storage is reused first.
1172Preserving all heaps, created during the program lifetime, solves the storage lifetime problem when ownership is used.
1173This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
1174llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
1175
1176Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
1177All objects in a bucket are of the same size.
1178The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the OS.
1179Each free bucket of a specific size has two lists.
11801) A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
1181The free objects are a stack so hot storage is reused first.
11822) For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
1183When the free stack is empty, the entire ownership stack is removed and becomes the head of the corresponding free stack.
1184
1185Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
1186First, the allocation is divided into small (@sbrk@) or large (@mmap@).
1187For large allocations, the storage is mapped directly from the OS.
1188For small allocations, $S$ is quantized into a bucket size.
1189Quantizing is performed using a binary search over the ordered bucket array.
1190An optional optimization is fast lookup $O(1)$ for sizes < 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
1191The @char@ type restricts the number of bucket sizes to 256.
1192For $S$ > 64K, a binary search is used.
1193Then, the allocation storage is obtained from the following locations (in order), with increasing latency:
1194bucket's free stack,
1195bucket's away stack,
1196heap's local pool,
1197global pool,
1198OS (@sbrk@).
1199
1200\begin{algorithm}
1201\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
1202\begin{algorithmic}[1]
1203\State $\textit{O} \gets \text{NULL}$
1204\If {$S >= \textit{mmap-threshhold}$}
1205 \State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
1206\Else
1207 \State $\textit{B} \gets \text{smallest free-bucket} \geq S$
1208 \If {$\textit{B's free-list is empty}$}
1209 \If {$\textit{B's away-list is empty}$}
1210 \If {$\textit{heap's allocation buffer} < S$}
1211 \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
1212 \EndIf
1213 \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
1214 \Else
1215 \State $\textit{merge B's away-list into free-list}$
1216 \State $\textit{O} \gets \text{pop an object from B's free-list}$
1217 \EndIf
1218 \Else
1219 \State $\textit{O} \gets \text{pop an object from B's free-list}$
1220 \EndIf
1221 \State $\textit{O's owner} \gets \text{B}$
1222\EndIf
1223\State $\Return \textit{ O}$
1224\end{algorithmic}
1225\end{algorithm}
1226
1227\begin{algorithm}
1228\caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn}
1229\begin{algorithmic}[1]
1230\If {$\textit{A mapped allocation}$}
1231 \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
1232\Else
1233 \State $\text{B} \gets \textit{O's owner}$
1234 \If {$\textit{B is thread-local heap's bucket}$}
1235 \State $\text{push A to B's free-list}$
1236 \Else
1237 \State $\text{push A to B's away-list}$
1238 \EndIf
1239\EndIf
1240\end{algorithmic}
1241\end{algorithm}
1242
1243\begin{algorithm}
1244\caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn}
1245\begin{algorithmic}[1]
1246\If {$\textit{A mapped allocation}$}
1247 \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
1248\Else
1249 \State $\text{B} \gets \textit{O's owner}$
1250 \If {$\textit{B is thread-local heap's bucket}$}
1251 \State $\text{push A to B's free-list}$
1252 \Else
1253 \State $\text{C} \gets \textit{thread local heap's bucket with same size as B}$
1254 \State $\text{push A to C's free-list}$
1255 \EndIf
1256\EndIf
1257\end{algorithmic}
1258\end{algorithm}
1259
1260
1261Algorithm~\ref{alg:heapObjectFreeOwn} shows the deallocation (free) outline for an object at address $A$ with ownership.
1262First, the address is divided into small (@sbrk@) or large (@mmap@).
1263For large allocations, the storage is unmapped back to the OS.
1264For small allocations, the bucket associated with the request size is retrieved.
1265If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
1266If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack.
1267
1268Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the deallocation (free) outline for an object at address $A$ without ownership.
1269The algorithm is the same as for ownership except if the bucket is not local to the thread.
1270Then the corresponding bucket of the owner thread is computed for the deallocating thread, and the allocation is pushed onto the deallocating thread's bucket.
1271
1272Finally, the llheap design funnels \label{p:FunnelRoutine} all allocation/deallocation operations through the @malloc@ and @free@ routines, which are the only routines to directly access and manage the internal data structures of the heap.
1273Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated.
1274This design simplifies heap-management code during development and maintenance.
1275
1276
1277\subsubsection{Alignment}
1278
1279Allocators have a different minimum storage alignment from the hardware's basic types.
1280Often the minimum allocator alignment, $M$, is the bus width (32 or 64-bit), the largest register (double, long double), largest atomic instruction (DCAS), or vector data (MMMX).
1281The reason for this larger requirement is the lack of knowledge about the data type occupying the allocation.
1282Hence, an allocator assumes the worst-case scenario for the start of data and the compiler correctly aligns items within this data because it knows their types.
1283Often the minimum storage alignment is an 8/16-byte boundary on a 32/64-bit computer.
1284Alignments larger than $M$ are normally a power of 2, such as page alignment (4/8K).
1285Any alignment less than $M$ is raised to the minimal alignment.
1286
1287llheap aligns its header at the $M$ boundary and its size is $M$;
1288hence, data following the header is aligned at $M$.
1289This pattern means there is no minimal alignment computation along the allocation fastpath, \ie new storage and reused storage is always correctly aligned.
1290An alignment $N$ greater than $M$ is accomplished with a \emph{pessimistic} request for storage that ensures \emph{both} the alignment and size request are satisfied, \eg:
1291\begin{center}
1292\input{Alignment2}
1293\end{center}
1294The amount of storage necessary is $alignment - M + size$, which ensures there is an address, $A$, after the storage returned from @malloc@, $P$, that is a multiple of $alignment$ followed by sufficient storage for the data object.
1295The approach is pessimistic if $P$ happens to have the correct alignment $N$, and the initial allocation has requested sufficient space to move to the next multiple of $N$.
1296In this case, there is $alignment - M$ bytes of unused storage after the data object, which could be used by @realloc@.
1297Note, the address returned by the allocation is $A$, which is subsequently returned to @free@.
1298To correctly free the object, the value $P$ must be computable from $A$, since that is the actual start of the allocation, from which $H$ can be computed $P - M$.
1299Hence, there must be a mechanism to detect when $P$ $\neq$ $A$ and then compute $P$ from $A$.
1300
1301To detect and perform this computation, llheap uses two headers:
1302the \emph{original} header $H$ associated with the allocation, and a \emph{fake} header $F$ within this storage before the alignment boundary $A$, e.g.:
1303\begin{center}
1304\input{Alignment2Impl}
1305\end{center}
1306Since every allocation is aligned at $M$, $P$ $\neq$ $A$ only holds for alignments greater than $M$.
1307When $P$ $\neq$ $A$, the minimum distance between $P$ and $A$ is $M$ bytes, due to the pessimistic storage-allocation.
1308Therefore, there is always room for an $M$-byte fake header before $A$.
1309The fake header must supply an indicator to distinguish it from a normal header and the location of address $P$ generated by the allocation.
1310This information is encoded as an offset from A to P and the initialize alignment (discussed in Section~\ref{s:ReallocStickyProperties}).
1311To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data.
1312\begin{center}
1313\input{FakeHeader}
1314\end{center}
1315
1316
1317\subsubsection{\lstinline{realloc} and Sticky Properties}
1318\label{s:ReallocStickyProperties}
1319
1320The allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data.
1321The realloc pattern is simpler than the suboptimal manually steps.
1322\begin{flushleft}
1323\begin{tabular}{ll}
1324\multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\
1325\begin{lstlisting}
1326T * naddr = realloc( oaddr, newSize );
1327
1328
1329
1330\end{lstlisting}
1331&
1332\begin{lstlisting}
1333T * naddr = (T *)malloc( newSize ); $\C[2in]{// new storage}$
1334memcpy( naddr, addr, oldSize ); $\C{// copy old bytes}$
1335free( addr ); $\C{// free old storage}$
1336addr = naddr; $\C{// change pointer}\CRT$
1337\end{lstlisting}
1338\end{tabular}
1339\end{flushleft}
1340The manual steps are suboptimal because there may be sufficient internal fragmentation at the end of the allocation due to bucket sizes.
1341If this storage is large enough, it eliminates a new allocation and copying.
1342Alternatively, if the storage is made smaller, there may be a reasonable crossover point, where just increasing the internal fragmentation eliminates a new allocation and copying.
1343This pattern should be used more frequently to reduce storage management costs.
1344In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc( newSize)@, and if @newSize@ is 0, @realloc@ does a @free( oaddr )@, so all allocation/deallocation can be done with @realloc@.
1345
1346The hidden problem with this pattern is the effect of zero fill and alignment with respect to reallocation.
1347For safety, we argue these properties should be persistent (``sticky'') and not transient.
1348For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, any subsequent reallocations of this storage must preserve these properties.
1349Currently, allocation properties are not preserved nor is it possible to query an allocation to maintain these properties manually.
1350Hence, subsequent use of @realloc@ storage that assumes any initially properties may cause errors.
1351This silent problem is unintuitive to programmers, can cause catastrophic failure, and is difficult to debug because it is transient.
1352To prevent these problems, llheap preserves initial allocation properties within an allocation, allowing them to be queried, and the semantics of @realloc@ preserve these properties on any storage change.
1353As a result, the realloc pattern is efficient and safe.
1354
1355
1356\subsubsection{Header}
1357
1358To preserve allocation properties requires storing additional information about an allocation.
1359Figure~\ref{f:llheapHeader} shows llheap captures this information in the header, which has two fields (left/right) sized appropriately for 32/64-bit alignment requirements.
1360
1361\begin{figure}
1362\centering
1363\input{Header}
1364\caption{llheap Header}
1365\label{f:llheapHeader}
1366\end{figure}
1367
1368The left field is a union of three values:
1369\begin{description}
1370\item[bucket pointer]
1371is for deallocated of heap storage and points back to the bucket associated with this storage requests (see Figure~\ref{f:llheapStructure} for the fields accessible in a bucket).
1372\item[mapped size]
1373is for deallocation of mapped storage and is the storage size for unmapping.
1374\item[next free block]
1375is for freed storage and is an intrusive pointer chaining same-size free blocks onto a bucket's stack of free objects.
1376\end{description}
1377The low-order 3-bits of this field are unused for any stored values as these values are at least 8-byte aligned.
1378The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
1379Note, the zero-filled/mapped bits are only used in the normal header and the alignment bit in the fake header.
1380This implementation allows a fast test if any of the lower 3-bits are on (@&@ and compare).
1381If no bits are on, it implies a basic allocation, which is handled quickly in the fastpath for allocation and free;
1382otherwise, the bits are analysed and appropriate actions are taken for the complex cases.
1383
1384The right field remembers the request size versus the allocation (bucket) size, \eg request of 42 bytes is rounded up to 64 bytes.
1385Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors and also helps in memory management.
1386
1387
1388\subsection{Statistics and Debugging}
1389
1390llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand dynamic-memory behaviour.
1391Incrementing statistic counters must appear on the allocation fastpath.
1392As noted, any atomic operation along the fastpath produces a significant increase in allocation costs.
1393To make statistics performant enough for use on running systems, each heap has its own set of statistic counters, so heap operations do not require atomic operations.
1394
1395To locate all statistic counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
1396Note, the list is locked to prevent errors traversing an active list;
1397the statistics counters are not locked and can flicker during accumulation.
1398Figure~\ref{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
1399No other memory allocator studied provides as comprehensive statistical information.
1400Finally, these statistics were invaluable during the development of this work for debugging and verifying correctness and should be equally valuable to application developers.
1401
1402\begin{figure}
1403\begin{lstlisting}
1404Heap statistics: (storage request / allocation)
1405 malloc >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes
1406 aalloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1407 calloc >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes
1408 memalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1409 amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1410 cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1411 resize >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1412 realloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
1413 free !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes
1414 away pulls 0; pushes 0; storage 0 / 0 bytes
1415 sbrk calls 1; storage 10,485,760 bytes
1416 mmap calls 10,000; storage 10,000 / 10,035 bytes
1417 munmap calls 10,000; storage 10,000 / 10,035 bytes
1418 threads started 4; exited 3
1419 heaps new 4; reused 0
1420\end{lstlisting}
1421\caption{Statistics Output}
1422\label{f:StatiticsOutput}
1423\end{figure}
1424
1425llheap can also be built with debug checking, which inserts many asserts along all allocation paths.
1426These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruptions because internal values (like header fields) are overwritten.
1427These checks are best effort as opposed to complete allocation checking as in @valgrind@.
1428Nevertheless, the checks detect many allocation problems.
1429There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations have life-time duration, and hence, do not free their storage.
1430For example, @printf@ allocates a 1024-byte buffer on the first call and never deletes this buffer.
1431To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \pageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference.
1432Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.
1433
1434Tests indicate only a 30\% performance decrease when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistic is mitigated by limited calls, often only one at the end of the program.
1435
1436
1437\subsection{User-level Threading Support}
1438\label{s:UserlevelThreadingSupport}
1439
1440The serially-reusable problem (see \pageref{p:SeriallyReusable}) occurs for kernel threads in the ``T:H model, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
1441The solution is to prevent interrupts that can result in a CPU or KT change during operations that are logically critical subsections such as starting a memory operation on one KT and completing it on another.
1442Locking these critical subsections negates any attempt for a quick fastpath and results in high contention.
1443For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
1444Without time slicing, a user thread performing a long computation can prevent the execution of (starve) other threads.
1445To prevent starvation for a memory-allocation-intensive thread, \ie the time slice always triggers in an allocation critical-subsection for one thread so the thread never gets time sliced, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
1446The rollforward flag is tested at the end of each allocation funnel routine (see \pageref{p:FunnelRoutine}), and if set, it is reset and a volunteer yield (context switch) is performed to allow other threads to execute.
1447
1448llheap uses two techniques to detect when execution is in an allocation operation or routine called from allocation operation, to abort any time slice during this period.
1449On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting kernel-thread-local flags so the signal handler aborts immediately.
1450On the fastpath, disabling/enabling interrupts is too expensive as accessing kernel-thread-local storage can be expensive and not user-thread-safe.
1451For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
1452Hence, there is a window between loading the kernel-thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.
1453
1454The fast technique (with lower run time cost) is to define a special code subsection and places all non-interruptible routines in this subsection.
1455The linker places all code in this subsection into a contiguous block of memory, but the order of routines within the block is unspecified.
1456Then, the signal handler compares the program counter at the point of interrupt with the the start and end address of the non-interruptible subsection, and aborts if executing within this subsection and sets the rollforward flag.
1457This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible subsection (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath.
1458Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible subsection.
1459This issue is mitigated by the llheap funnel design so only funnel routines and a few statistics routines are placed in the non-interruptible subsection and their assembler code examined.
1460These techniques are used in both the \uC and \CFA versions of llheap as both of these systems have user-level threading.
1461
1462
1463\subsection{Bootstrapping}
1464
1465There are problems bootstrapping a memory allocator.
1466\begin{enumerate}
1467\item
1468Programs can be statically or dynamically linked.
1469\item
1470The order in which the linker schedules startup code is poorly supported so it cannot be controlled entirely.
1471\item
1472Knowing a KT's start and end independently from the KT code is difficult.
1473\end{enumerate}
1474
1475For static linking, the allocator is loaded with the program.
1476Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program.
1477This approach allows allocator substitution by placing an allocation library before any other in the linked/load path.
1478
1479Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded.
1480As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including from the dynamic loader.
1481Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct.
1482Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations.
1483Testing showed up to a 5\% performance decrease with dynamic linking as compared to static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding.
1484
1485All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap.
1486The problem is getting initialization done before the first allocator call.
1487However, there does not seem to be mechanism to tell either the static or dynamic loader to first perform initialization code before any calls to a loaded library.
1488Also, initialization code of other libraries and the run-time environment may call memory allocation routines such as \lstinline{malloc}.
1489This compounds the situation as there is no mechanism to tell either the static or dynamic loader to first perform the initialization code of the memory allocator before any other initialization that may involve a dynamic memory allocation call.
1490As a result, calls to allocation routines occur without initialization.
1491To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
1492
1493Two other important execution points are program startup and termination, which include prologue or epilogue code to bootstrap a program, which programmers are unaware of.
1494For example, dynamic-memory allocations before/after the application starts should not be considered in statistics because the application does not make these calls.
1495llheap establishes these two points using routines:
1496\begin{lstlisting}
1497__attribute__(( constructor( 100 ) )) static void startup( void ) {
1498 // clear statistic counters
1499 // reset allocUnfreed counter
1500}
1501__attribute__(( destructor( 100 ) )) static void shutdown( void ) {
1502 // sum allocUnfreed for all heaps
1503 // subtract global unfreed storage
1504 // if allocUnfreed > 0 then print warning message
1505}
1506\end{lstlisting}
1507which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority.
1508Application programs may only use global constructor/destructor priorities greater than 100.
1509Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the program terminates but before the program epilogue.
1510By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue.
1511
1512While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines.
1513However, it is essential for the allocator to know when each KT is started/terminated.
1514One approach is to create a thread-local object with a construct/destructor, which is triggered after a new KT starts and before it terminates, respectively.
1515\begin{lstlisting}
1516struct ThreadManager {
1517 volatile bool pgm_thread;
1518 ThreadManager() {} // unusable
1519 ~ThreadManager() { if ( pgm_thread ) heapManagerDtor(); }
1520};
1521static thread_local ThreadManager threadManager;
1522\end{lstlisting}
1523Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor.
1524Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT.
1525Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to call @threadManager@'s constructor, otherwise its destructor is not called.
1526Now when a KT terminates, @~ThreadManager@ is called to chain it onto the global-heap free-stack, where @pgm_thread@ is set to true only for the program KT.
1527The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage.
1528
1529Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage.
1530This recursion is handled with another thread-local flag to prevent double initialization.
1531A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@.
1532In the meantime, the terminated heap has been put on the global-heap free-stack, and may be active by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away list using locking.
1533
1534For user threading systems, the KTs are controlled by the runtime, and hence, start/end pointers are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
1535The following API was created to provide interaction between the language runtime and the allocator.
1536\begin{lstlisting}
1537void startThread(); $\C{// KT starts}$
1538void finishThread(); $\C{// KT ends}$
1539void startup(); $\C{// when application code starts}$
1540void shutdown(); $\C{// when application code ends}$
1541bool traceHeap(); $\C{// enable allocation/free printing for debugging}$
1542bool traceHeapOn(); $\C{// start printing allocation/free calls}$
1543bool traceHeapOff(); $\C{// stop printing allocation/free calls}$
1544\end{lstlisting}
1545This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way.
1546
1547%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1548
1549\subsection{Added Features and Methods}
1550
1551The C dynamic-allocation API (see Figure~\ref{f:CDynamicAllocationAPI}) is neither orthogonal nor complete.
1552For example,
1553\begin{itemize}
1554\item
1555It is possible to zero fill or align an allocation but not both.
1556\item
1557It is \emph{only} possible to zero fill an array allocation.
1558\item
1559It is not possible to resize a memory allocation without data copying.
1560\item
1561@realloc@ does not preserve initial allocation properties.
1562\end{itemize}
1563As a result, programmers must provide these options, which is error prone, resulting in blaming the entire programming language for a poor dynamic-allocation API.
1564Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation.
1565
1566\begin{figure}
1567\begin{lstlisting}
1568void * malloc( size_t size );
1569void * calloc( size_t nmemb, size_t size );
1570void * realloc( void * ptr, size_t size );
1571void * reallocarray( void * ptr, size_t nmemb, size_t size );
1572void free( void * ptr );
1573void * memalign( size_t alignment, size_t size );
1574void * aligned_alloc( size_t alignment, size_t size );
1575int posix_memalign( void ** memptr, size_t alignment, size_t size );
1576void * valloc( size_t size );
1577void * pvalloc( size_t size );
1578
1579struct mallinfo mallinfo( void );
1580int mallopt( int param, int val );
1581int malloc_trim( size_t pad );
1582size_t malloc_usable_size( void * ptr );
1583void malloc_stats( void );
1584int malloc_info( int options, FILE * fp );
1585\end{lstlisting}
1586\caption{C Dynamic-Allocation API}
1587\label{f:CDynamicAllocationAPI}
1588\end{figure}
1589
1590The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap.
1591
1592
1593\subsubsection{Out of Memory}
1594
1595Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory;
1596hence the need to return an alternate value for a zero-sized allocation.
1597A different approach allowed by @C API@ is to abort a program when out of memory and return @nullptr@ for a zero-sized allocation.
1598In theory, notifying the programmer of memory failure allows recovery;
1599in practice, it is almost impossible to gracefully recover when out of memory.
1600Hence, the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen because no pseudo allocation is necessary.
1601
1602
1603\subsubsection{C Interface}
1604
1605For C, it is possible to increase functionality and orthogonality of the dynamic-memory API to make allocation better for programmers.
1606
1607For existing C allocation routines:
1608\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
1609\item
1610@calloc@ sets the sticky zero-fill property.
1611\item
1612@memalign@, @aligned_alloc@, @posix_memalign@, @valloc@ and @pvalloc@ set the sticky alignment property.
1613\item
1614@realloc@ and @reallocarray@ preserve sticky properties.
1615\end{itemize}
1616
1617The C dynamic-memory API is extended with the following routines:
1618
1619\medskip\noindent
1620\lstinline{void * aalloc( size_t dimension, size_t elemSize )}
1621extends @calloc@ for allocating a dynamic array of objects with total size @dim@ $\times$ @elemSize@ but \emph{without} zero-filling the memory.
1622@aalloc@ is significantly faster than @calloc@, which is the only alternative given by the standard memory-allocation routines for array allocation.
1623It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.
1624
1625\medskip\noindent
1626\lstinline{void * resize( void * oaddr, size_t size )}
1627extends @realloc@ for resizing an existing allocation, @oaddr@, to the new @size@ (smaller or larger than previous) \emph{without} copying previous data into the new allocation or preserving sticky properties.
1628@resize@ is significantly faster than @realloc@, which is the only alternative.
1629It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero.
1630
1631\medskip\noindent
1632\lstinline{void * amemalign( size_t alignment, size_t dimension, size_t elemSize )}
1633extends @aalloc@ and @memalign@ for allocating a dynamic array of objects with the starting address on the @alignment@ boundary.
1634Sets sticky alignment property.
1635It returns the address of the aligned dynamic-array or @NULL@ if either @dim@ or @elemSize@ are zero.
1636
1637\medskip\noindent
1638\lstinline{void * cmemalign( size_t alignment, size_t dimension, size_t elemSize )}
1639extends @amemalign@ with zero fill and has the same usage as @amemalign@.
1640Sets sticky zero-fill and alignment property.
1641It returns the address of the aligned, zero-filled dynamic-array or @NULL@ if either @dim@ or @elemSize@ are zero.
1642
1643\medskip\noindent
1644\lstinline{size_t malloc_alignment( void * addr )}
1645returns the object alignment, where objects not allocated with alignment return the minimal allocation alignment.
1646For use in aligning similar allocations.
1647
1648\medskip\noindent
1649\lstinline{bool malloc_zero_fill( void * addr )}
1650returns true if the objects zero-fill sticky property is set and false otherwise.
1651For use in zero filling similar allocations.
1652
1653\medskip\noindent
1654\lstinline{size_t malloc_size( void * addr )}
1655returns the object's request size, which is updated when an object is resized or zero if @addr@ is @NULL@ (see also @malloc_usable_size@).
1656For use in similar allocations.
1657
1658\medskip\noindent
1659\lstinline{int malloc_stats_fd( int fd )}
1660changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@) and returns the previous file descriptor.
1661
1662\medskip\noindent
1663\lstinline{size_t malloc_expansion()}
1664\label{p:malloc_expansion}
1665set the amount (bytes) to extend the heap when there is insufficient free storage to service an allocation request.
1666It returns the heap extension size used throughout a program when requesting more memory from the system using @sbrk@ system-call, \ie called once at heap initialization.
1667
1668\medskip\noindent
1669\lstinline{size_t malloc_mmap_start()}
1670set the crossover between allocations occurring in the @sbrk@ area or separately mapped.
1671It returns the crossover point used throughout a program, \ie called once at heap initialization.
1672
1673\medskip\noindent
1674\lstinline{size_t malloc_unfreed()}
1675\label{p:malloc_unfreed}
1676amount subtracted to adjust for unfreed program storage (debug only).
1677It returns the new subtraction amount and called by @malloc_stats@ (discussed in Section~\ref{}).
1678
1679
1680\subsubsection{\CC Interface}
1681
1682The following extensions take advantage of overload polymorphism in the \CC type-system.
1683
1684\medskip\noindent
1685\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}
1686extends @resize@ with an alignment requirement, @nalign@.
1687It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.
1688
1689\medskip\noindent
1690\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}
1691extends @realloc@ with an alignment requirement, @nalign@.
1692It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.
1693
1694
1695\subsubsection{\CFA Interface}
1696
1697The following extensions take advantage of overload polymorphism in the \CFA type-system.
1698The key safety advantage of the \CFA type system is using the return type to select overloads;
1699hence, a polymorphic routine knows the returned type and its size.
1700This capability is used to remove the object size parameter and correctly cast the return storage to match the result type.
1701For example, the following is the \CFA wrapper for C @malloc@:
1702\begin{cfa}
1703forall( T & | sized(T) ) {
1704 T * malloc( void ) {
1705 if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation
1706 else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation
1707 } // malloc
1708\end{cfa}
1709and is used as follows:
1710\begin{lstlisting}
1711int * i = malloc();
1712double * d = malloc();
1713struct Spinlock { ... } __attribute__(( aligned(128) ));
1714Spinlock * sl = malloc();
1715\end{lstlisting}
1716where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type.
1717This interface removes many of the common allocation errors in C programs.
1718Figure~\ref{f:CFADynamicAllocationAPI} show the \CFA wrappers for the equivalent C/\CC allocation routines with same semantic behaviour.
1719
1720\begin{figure}
1721\begin{lstlisting}
1722T * malloc( void );
1723T * aalloc( size_t dim );
1724T * calloc( size_t dim );
1725T * resize( T * ptr, size_t size );
1726T * realloc( T * ptr, size_t size );
1727T * memalign( size_t align );
1728T * amemalign( size_t align, size_t dim );
1729T * cmemalign( size_t align, size_t dim );
1730T * aligned_alloc( size_t align );
1731int posix_memalign( T ** ptr, size_t align );
1732T * valloc( void );
1733T * pvalloc( void );
1734\end{lstlisting}
1735\caption{\CFA C-Style Dynamic-Allocation API}
1736\label{f:CFADynamicAllocationAPI}
1737\end{figure}
1738
1739In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase orthogonality and usability of dynamic-memory allocation.
1740This interface helps programmers in three ways.
1741\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
1742\item
1743naming: \CFA regular and @ttype@ polymorphism (@ttype@ polymorphism in \CFA is similar to \CC variadic templates) is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations.
1744\item
1745named arguments: individual allocation properties are specified using postfix function call, so the programmers do not have to remember parameter positions in allocation calls.
1746\item
1747object size: like the \CFA's C-interface, programmers do not have to specify object size or cast allocation results.
1748\end{itemize}
1749Note, postfix function call is an alternative call syntax, using backtick @`@, so the argument appears before the function name, \eg
1750\begin{cfa}
1751duration ?@`@h( int h ); // ? denote the position of the function operand
1752duration ?@`@m( int m );
1753duration ?@`@s( int s );
1754duration dur = 3@`@h + 42@`@m + 17@`@s;
1755\end{cfa}
1756
1757The following extensions take advantage of overload polymorphism in the \CC type-system.
1758
1759\medskip\noindent
1760\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dimension, ... )}
1761is overloaded with a variable number of specific allocation operations, or an integer dimension parameter followed by a variable number of specific allocation operations.
1762These allocation operations can be passed as named arguments when calling the \lstinline{alloc} routine.
1763A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
1764A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
1765The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations.
1766The only restriction is for properties @realloc@ and @resize@, which cannot be combined.
1767
1768The allocation property functions are:
1769
1770\medskip\noindent
1771\lstinline{T_align ?`align( size_t alignment )}
1772to align the allocation.
1773The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two.
1774The following example returns a dynamic object and object array aligned on a 4096-byte boundary.
1775\begin{cfa}
1776int * i0 = alloc( @4096`align@ ); sout | i0 | nl;
1777int * i1 = alloc( 3, @4096`align@ ); sout | i1; for (i; 3 ) sout | &i1[i]; sout | nl;
1778
17790x555555572000
17800x555555574000 0x555555574000 0x555555574004 0x555555574008
1781\end{cfa}
1782
1783\medskip\noindent
1784\lstinline{S_fill(T) ?`fill ( /* various types */ )}
1785to initialize storage.
1786There are three ways to fill storage:
1787\begin{enumerate}[itemsep=0pt,parsep=0pt]
1788\item
1789A char fills each byte of each object.
1790\item
1791An object of the returned type fills each object.
1792\item
1793An object array pointer fills some or all of the corresponding object array.
1794\end{enumerate}
1795For example:
1796\begin{cfa}[numbers=left,xleftmargin=2.5\parindentlnth]
1797int * i0 = alloc( @0n`fill@ ); sout | *i0 | nl; // disambiguate 0
1798int * i1 = alloc( @5`fill@ ); sout | *i1 | nl;
1799int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl;
1800int * i3 = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | i3[i]; sout | nl;
1801int * i4 = alloc( 5, @0xdeadbeefN`fill@ ); for ( i; 5 ) sout | hex( i4[i] ); sout | nl;
1802int * i5 = alloc( 5, @i3`fill@ ); for ( i; 5 ) sout | i5[i]; sout | nl;
1803int * i6 = alloc( 5, @[i3, 3]`fill@ ); for ( i; 5 ) sout | i6[i]; sout | nl;
1804\end{cfa}
1805\begin{lstlisting}[numbers=left,xleftmargin=2.5\parindentlnth]
18060
18075
18080xfefefefe
18095 5 5 5 5
18100xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef
18115 5 5 5 5
18125 5 5 -555819298 -555819298 // two undefined values
1813\end{lstlisting}
1814Examples 1 to 3 fill an object with a value or characters.
1815Examples 4 to 7 fill an array of objects with values, another array, or part of an array.
1816
1817\medskip\noindent
1818\lstinline{S_resize(T) ?`resize( void * oaddr )}
1819used to resize, realign, and fill, where the old object data is not copied to the new object.
1820The old object type may be different from the new object type, since the values are not used.
1821For example:
1822\begin{cfa}[numbers=left,xleftmargin=2.5\parindentlnth]
1823int * i = alloc( @5`fill@ ); sout | i | *i;
1824i = alloc( @i`resize@, @256`align@, @7`fill@ ); sout | i | *i;
1825double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ ); sout | d | *d;
1826\end{cfa}
1827\begin{lstlisting}[numbers=left,xleftmargin=2.5\parindentlnth]
18280x55555556d5c0 5
18290x555555570000 7
18300x555555571000 13.5
1831\end{lstlisting}
1832Examples 2 to 3 change the alignment, fill, and size for the initial storage of @i@.
1833
1834\begin{cfa}[numbers=left,xleftmargin=2.5\parindentlnth]
1835int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl;
1836ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
1837sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;;
1838ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
1839\end{cfa}
1840\begin{lstlisting}[numbers=left,xleftmargin=2.5\parindentlnth]
18415 5 5 5 5
18427 7 7 7 7 7 7 7 7 7
18430x55555556d560 0x555555571a00 13 13 13 13 13
18440x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2
1845\end{lstlisting}
1846Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
1847
1848\medskip\noindent
1849\lstinline{S_realloc(T) ?`realloc( T * a ))}
1850used to resize, realign, and fill, where the old object data is copied to the new object.
1851The old object type must be the same as the new object type, since the value is used.
1852Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter.
1853For example:
1854\begin{cfa}[numbers=left,xleftmargin=2.5\parindentlnth]
1855int * i = alloc( @5`fill@ ); sout | i | *i;
1856i = alloc( @i`realloc@, @256`align@ ); sout | i | *i;
1857i = alloc( @i`realloc@, @4096`align@, @13`fill@ ); sout | i | *i;
1858\end{cfa}
1859\begin{lstlisting}[numbers=left,xleftmargin=2.5\parindentlnth]
18600x55555556d5c0 5
18610x555555570000 5
18620x555555571000 5
1863\end{lstlisting}
1864Examples 2 to 3 change the alignment for the initial storage of @i@.
1865The @13`fill@ in example 3 does nothing because no extra space is added.
1866
1867\begin{cfa}[numbers=left,xleftmargin=2.5\parindentlnth]
1868int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl;
1869ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
1870sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;;
1871ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
1872\end{cfa}
1873\begin{lstlisting}[numbers=left,xleftmargin=2.5\parindentlnth]
18745 5 5 5 5
18755 5 5 5 5 7 7 7 7 7
18760x55555556c560 0x555555570a00 5
18770x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2
1878\end{lstlisting}
1879Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
1880The @13`fill@ in example 3 does nothing because no extra space is added.
1881
1882These \CFA allocation features are used extensively in the development of the \CFA runtime.
1883
1884
1885\section{Benchmarks}
1886\label{s:Benchmarks}
1887
1888%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1889%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1890%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite
1891%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1892%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1893
1894There are two basic approaches for evaluating computer software: benchmarks and micro-benchmarks.
1895\begin{description}
1896\item[Benchmarks]
1897are a suite of application programs (SPEC CPU/WEB) that are exercised in a common way (inputs) to find differences among underlying software implementations associated with an application (compiler, memory allocator, web server, \etc).
1898The applications are supposed to represent common execution patterns that need to perform well with respect to an underlying software implementation.
1899Benchmarks are often criticized for having overlapping patterns, insufficient patterns, or extraneous code that masks patterns.
1900\item[Micro-Benchmarks]
1901attempt to extract the common execution patterns associated with an application and run the pattern independently.
1902This approach removes any masking from extraneous application code, allows execution pattern to be very precise, and provides an opportunity for the execution pattern to have multiple independent tuning adjustments (knobs).
1903Micro-benchmarks are often criticized for inadequately representing real-world applications.
1904\end{description}
1905
1906While some crucial software components have standard benchmarks, no standard benchmark exists for testing and comparing memory allocators.
1907In the past, an assortment of applications have been used for benchmarking allocators~\cite{Detlefs93,Berger00,Berger01,berger02reconsidering}: P2C, GS, Espresso/Espresso-2, CFRAC/CFRAC-2, GMake, GCC, Perl/Perl-2, Gawk/Gawk-2, XPDF/XPDF-2, ROBOOP, Lindsay.
1908As well, an assortment of micro-benchmark have been used for benchmarking allocators~\cite{larson99memory,Berger00,streamflow}: threadtest, shbench, Larson, consume, false sharing.
1909Many of these benchmark applications and micro-benchmarks are old and may not reflect current application allocation patterns.
1910
1911This work designs and examines a new set of micro-benchmarks for memory allocators that test a variety of allocation patterns, each with multiple tuning parameters.
1912The aim of the micro-benchmark suite is to create a set of programs that can evaluate a memory allocator based on the key performance metrics such as speed, memory overhead, and cache performance.
1913% These programs can be taken as a standard to benchmark an allocator's basic goals.
1914These programs give details of an allocator's memory overhead and speed under certain allocation patterns.
1915The allocation patterns are configurable (adjustment knobs) to observe an allocator's performance across a spectrum allocation patterns, which is seldom possible with benchmark programs.
1916Each micro-benchmark program has multiple control knobs specified by command-line arguments.
1917
1918The new micro-benchmark suite measures performance by allocating dynamic objects and measuring specific metrics.
1919An allocator's speed is benchmarked in different ways, as are issues like false sharing.
1920
1921
1922\subsection{Prior Multi-Threaded Micro-Benchmarks}
1923
1924Modern memory allocators, such as llheap, must handle multi-threaded programs at the KT and UT level.
1925The following multi-threaded micro-benchmarks are presented to give a sense of prior work~\cite{Berger00} at the KT level.
1926None of the prior work addresses multi-threading at the UT level.
1927
1928
1929\subsubsection{threadtest}
1930
1931This benchmark stresses the ability of the allocator to handle different threads allocating and deallocating independently.
1932There is no interaction among threads, \ie no object sharing.
1933Each thread repeatedly allocates 100,000 \emph{8-byte} objects then deallocates them in the order they were allocated.
1934The execution time of the benchmark evaluates its efficiency.
1935
1936
1937\subsubsection{shbench}
1938
1939This benchmark is similar to threadtest but each thread randomly allocate and free a number of \emph{random-sized} objects.
1940It is a stress test that also uses runtime to determine efficiency of the allocator.
1941
1942
1943\subsubsection{Larson}
1944
1945This benchmark simulates a server environment.
1946Multiple threads are created where each thread allocates and frees a number of random-sized objects within a size range.
1947Before the thread terminates, it passes its array of 10,000 objects to a new child thread to continue the process.
1948The number of thread generations varies depending on the thread speed.
1949It calculates memory operations per second as an indicator of the memory allocator's performance.
1950
1951
1952\subsection{New Multi-Threaded Micro-Benchmarks}
1953
1954The following new benchmarks were created to assess multi-threaded programs at the KT and UT level.
1955For generating random values, two generators are supported: uniform~\cite{uniformPRNG} and fisher~\cite{fisherPRNG}.
1956
1957
1958\subsubsection{Churn Benchmark}
1959\label{s:ChurnBenchmark}
1960
1961The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenario, where each thread extensively allocates and frees dynamic memory.
1962Only @malloc@ and @free@ are used to eliminate any extra cost, such as @memcpy@ in @calloc@ or @realloc@.
1963Churn simulates a memory intensive program and can be tuned to create different scenarios.
1964
1965Figure~\ref{fig:ChurnBenchFig} shows the pseudo code for the churn micro-benchmark.
1966This benchmark creates a buffer with M spots and an allocation in each spot, and then starts K threads.
1967Each thread picks a random spot in M, frees the object currently at that spot, and allocates a new object for that spot.
1968Each thread repeats this cycle N times.
1969The main thread measures the total time taken for the whole benchmark and that time is used to evaluate the memory allocator's performance.
1970
1971\begin{figure}
1972\centering
1973\begin{lstlisting}
1974Main Thread
1975 create worker threads
1976 note time T1
1977 ...
1978 note time T2
1979 churn_speed = (T2 - T1)
1980Worker Thread
1981 initialize variables
1982 ...
1983 for ( N )
1984 R = random spot in array
1985 free R
1986 allocate new object at R
1987\end{lstlisting}
1988%\includegraphics[width=1\textwidth]{figures/bench-churn.eps}
1989\caption{Churn Benchmark}
1990\label{fig:ChurnBenchFig}
1991\end{figure}
1992
1993The adjustment knobs for churn are:
1994\begin{description}[itemsep=0pt,parsep=0pt]
1995\item[thread:]
1996number of threads (K).
1997\item[spots:]
1998number of spots for churn (M).
1999\item[obj:]
2000number of objects per thread (N).
2001\item[max:]
2002maximum object size.
2003\item[min:]
2004minimum object size.
2005\item[step:]
2006object size increment.
2007\item[distro:]
2008object size distribution
2009\end{description}
2010
2011
2012\subsubsection{Cache Thrash}
2013\label{sec:benchThrashSec}
2014
2015The cache-thrash micro-benchmark measures allocator-induced active false-sharing as illustrated in Section~\ref{s:AllocatorInducedActiveFalseSharing}.
2016If memory is allocated for multiple threads on the same cache line, this can significantly slow down program performance.
2017When threads share a cache line, frequent reads/writes to their cache-line object causes cache misses, which cause escalating delays as cache distance increases.
2018
2019Cache thrash tries to create a scenario that leads to false sharing, if the underlying memory allocator is allocating dynamic memory to multiple threads on the same cache lines.
2020Ideally, a memory allocator should distance the dynamic memory region of one thread from another.
2021Having multiple threads allocating small objects simultaneously can cause a memory allocator to allocate objects on the same cache line, if its not distancing the memory among different threads.
2022
2023Figure~\ref{fig:benchThrashFig} shows the pseudo code for the cache-thrash micro-benchmark.
2024First, it creates K worker threads.
2025Each worker thread allocates an object and intensively reads/writes it for M times to possible invalidate cache lines that may interfere with other threads sharing the same cache line.
2026Each thread repeats this for N times.
2027The main thread measures the total time taken for all worker threads to complete.
2028Worker threads sharing cache lines with each other are expected to take longer.
2029
2030\begin{figure}
2031\centering
2032\input{AllocInducedActiveFalseSharing}
2033\medskip
2034\begin{lstlisting}
2035Main Thread
2036 create worker threads
2037 ...
2038 signal workers to allocate
2039 ...
2040 signal workers to free
2041 ...
2042Worker Thread$\(_1\)$
2043 warm up memory in chunks of 16 bytes
2044 ...
2045 For N
2046 malloc an object
2047 read/write the object M times
2048 free the object
2049 ...
2050Worker Thread$\(_2\)$
2051 // same as Worker Thread$\(_1\)$
2052\end{lstlisting}
2053%\input{MemoryOverhead}
2054%\includegraphics[width=1\textwidth]{figures/bench-cache-thrash.eps}
2055\caption{Allocator-Induced Active False-Sharing Benchmark}
2056\label{fig:benchThrashFig}
2057\end{figure}
2058
2059The adjustment knobs for cache access scenarios are:
2060\begin{description}[itemsep=0pt,parsep=0pt]
2061\item[thread:]
2062number of threads (K).
2063\item[iterations:]
2064iterations of cache benchmark (N).
2065\item[cacheRW:]
2066repetitions of reads/writes to object (M).
2067\item[size:]
2068object size.
2069\end{description}
2070
2071
2072\subsubsection{Cache Scratch}
2073\label{s:CacheScratch}
2074
2075The cache-scratch micro-benchmark measures allocator-induced passive false-sharing as illustrated in Section~\ref{s:AllocatorInducedPassiveFalseSharing}.
2076As with cache thrash, if memory is allocated for multiple threads on the same cache line, this can significantly slow down program performance.
2077In this scenario, the false sharing is being caused by the memory allocator although it is started by the program sharing an object.
2078
2079% An allocator can unintentionally induce false sharing depending upon its management of the freed objects.
2080% If thread Thread$_1$ allocates multiple objects together, they may be allocated on the same cache line by the memory allocator.
2081% If Thread$_1$ passes these object to thread Thread$_2$, then both threads may share the same cache line but this scenario is not induced by the allocator;
2082% instead, the program induced this situation.
2083% Now if Thread$_2$ frees this object and then allocate an object of the same size, the allocator may return the same object, which is on a cache line shared with thread Thread$_1$.
2084
2085Cache scratch tries to create a scenario that leads to false sharing and should make the memory allocator preserve the program-induced false sharing, if it does not return a freed object to its owner thread and, instead, re-uses it instantly.
2086An allocator using object ownership, as described in subsection Section~\ref{s:Ownership}, is less susceptible to allocator-induced passive false-sharing.
2087If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line.
2088
2089Figure~\ref{fig:benchScratchFig} shows the pseudo code for the cache-scratch micro-benchmark.
2090First, it allocates K dynamic objects together, one for each of the K worker threads, possibly causing memory allocator to allocate these objects on the same cache line.
2091Then it create K worker threads and passes an object from the K allocated objects to each of the K threads.
2092Each worker thread frees the object passed by the main thread.
2093Then, it allocates an object and reads/writes it repetitively for M times possibly causing frequent cache invalidations.
2094Each worker repeats this N times.
2095
2096\begin{figure}
2097\centering
2098\input{AllocInducedPassiveFalseSharing}
2099\medskip
2100\begin{lstlisting}
2101Main Thread
2102 malloc N objects $for$ each worker $thread$
2103 create worker threads and pass N objects to each worker
2104 ...
2105 signal workers to allocate
2106 ...
2107 signal workers to free
2108 ...
2109Worker Thread$\(_1\)$
2110 warmup memory in chunks of 16 bytes
2111 ...
2112 free the object passed by the Main Thread
2113 For N
2114 malloc new object
2115 read/write the object M times
2116 free the object
2117 ...
2118Worker Thread$\(_2\)$
2119 // same as Worker Thread$\(_1\)$
2120\end{lstlisting}
2121%\includegraphics[width=1\textwidth]{figures/bench-cache-scratch.eps}
2122\caption{Program-Induced Passive False-Sharing Benchmark}
2123\label{fig:benchScratchFig}
2124\end{figure}
2125
2126Each thread allocating an object after freeing the original object passed by the main thread should cause the memory allocator to return the same object that was initially allocated by the main thread if the allocator did not return the initial object back to its owner (main thread).
2127Then, intensive read/write on the shared cache line by multiple threads should slow down worker threads due to to high cache invalidations and misses.
2128Main thread measures the total time taken for all the workers to complete.
2129
2130Similar to benchmark cache thrash in subsection Section~\ref{sec:benchThrashSec}, different cache access scenarios can be created using the following command-line arguments.
2131\begin{description}[topsep=0pt,itemsep=0pt,parsep=0pt]
2132\item[threads:]
2133number of threads (K).
2134\item[iterations:]
2135iterations of cache benchmark (N).
2136\item[cacheRW:]
2137repetitions of reads/writes to object (M).
2138\item[size:]
2139object size.
2140\end{description}
2141
2142
2143\subsubsection{Speed Micro-Benchmark}
2144\label{s:SpeedMicroBenchmark}
2145\vspace*{-4pt}
2146
2147The speed benchmark measures the runtime speed of individual and sequences of memory allocation routines:
2148\begin{enumerate}[topsep=-5pt,itemsep=0pt,parsep=0pt]
2149\item malloc
2150\item realloc
2151\item free
2152\item calloc
2153\item malloc-free
2154\item realloc-free
2155\item calloc-free
2156\item malloc-realloc
2157\item calloc-realloc
2158\item malloc-realloc-free
2159\item calloc-realloc-free
2160\item malloc-realloc-free-calloc
2161\end{enumerate}
2162
2163Figure~\ref{fig:SpeedBenchFig} shows the pseudo code for the speed micro-benchmark.
2164Each routine in the chain is called for N objects and then those allocated objects are used when calling the next routine in the allocation chain.
2165This tests the latency of the memory allocator when multiple routines are chained together, \eg the call sequence malloc-realloc-free-calloc gives a complete picture of the major allocation routines when combined together.
2166For each chain, the time is recorded to visualize performance of a memory allocator against each chain.
2167
2168\begin{figure}
2169\centering
2170\begin{lstlisting}[morekeywords={foreach}]
2171Main Thread
2172 create worker threads
2173 foreach ( allocation chain )
2174 note time T1
2175 ...
2176 note time T2
2177 chain_speed = (T2 - T1) / number-of-worker-threads * N )
2178Worker Thread
2179 initialize variables
2180 ...
2181 foreach ( routine in allocation chain )
2182 call routine N times
2183\end{lstlisting}
2184%\includegraphics[width=1\textwidth]{figures/bench-speed.eps}
2185\caption{Speed Benchmark}
2186\label{fig:SpeedBenchFig}
2187\end{figure}
2188
2189The adjustment knobs for memory usage are:
2190\begin{description}[itemsep=0pt,parsep=0pt]
2191\item[max:]
2192maximum object size.
2193\item[min:]
2194minimum object size.
2195\item[step:]
2196object size increment.
2197\item[distro:]
2198object size distribution.
2199\item[objects:]
2200number of objects per thread.
2201\item[workers:]
2202number of worker threads.
2203\end{description}
2204
2205
2206\subsubsection{Memory Micro-Benchmark}
2207\label{s:MemoryMicroBenchmark}
2208
2209The memory micro-benchmark measures the memory overhead of an allocator.
2210It allocates a number of dynamic objects and reads @/proc/self/proc/maps@ to get the total memory requested by the allocator from the OS.
2211It calculates the memory overhead by computing the difference between the memory the allocator requests from the OS and the memory that the program allocates.
2212This micro-benchmark is like Larson and stresses the ability of an allocator to deal with object sharing.
2213
2214Figure~\ref{fig:MemoryBenchFig} shows the pseudo code for the memory micro-benchmark.
2215It creates a producer-consumer scenario with K producer threads and each producer has M consumer threads.
2216A producer has a separate buffer for each consumer and allocates N objects of random sizes following a configurable distribution for each consumer.
2217A consumer frees these objects.
2218After every memory operation, program memory usage is recorded throughout the runtime.
2219This data is used to visualize the memory usage and consumption for the program.
2220
2221\begin{figure}
2222\centering
2223\begin{lstlisting}
2224Main Thread
2225 print memory snapshot
2226 create producer threads
2227Producer Thread (K)
2228 set free start
2229 create consumer threads
2230 for ( N )
2231 allocate memory
2232 print memory snapshot
2233Consumer Thread (M)
2234 wait while ( allocations < free start )
2235 for ( N )
2236 free memory
2237 print memory snapshot
2238\end{lstlisting}
2239%\includegraphics[width=1\textwidth]{figures/bench-memory.eps}
2240\caption{Memory Footprint Micro-Benchmark}
2241\label{fig:MemoryBenchFig}
2242\end{figure}
2243
2244The global adjustment knobs for this micro-benchmark are:
2245\begin{description}[itemsep=0pt,parsep=0pt]
2246\item[producer (K):]
2247sets the number of producer threads.
2248\item[consumer (M):]
2249sets number of consumers threads for each producer.
2250\item[round:]
2251sets production and consumption round size.
2252\end{description}
2253
2254The adjustment knobs for object allocation are:
2255\begin{description}[itemsep=0pt,parsep=0pt]
2256\item[max:]
2257maximum object size.
2258\item[min:]
2259minimum object size.
2260\item[step:]
2261object size increment.
2262\item[distro:]
2263object size distribution.
2264\item[objects (N):]
2265number of objects per thread.
2266\end{description}
2267
2268
2269\section{Performance}
2270\label{c:Performance}
2271
2272This section uses the micro-benchmarks from Section~\ref{s:Benchmarks} to test a number of current memory allocators, including llheap.
2273The goal is to see if llheap is competitive with the currently popular memory allocators.
2274
2275
2276\subsection{Machine Specification}
2277
2278The performance experiments were run on two different multi-core architectures (x64 and ARM) to determine if there is consistency across platforms:
2279\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
2280\item
2281\textbf{Algol} Huawei ARM TaiShan 2280 V2 Kunpeng 920, 24-core socket $\times$ 4, 2.6 GHz, GCC version 9.4.0
2282\item
2283\textbf{Nasus} AMD EPYC 7662, 64-core socket $\times$ 2, 2.0 GHz, GCC version 9.3.0
2284\end{itemize}
2285
2286
2287\subsection{Existing Memory Allocators}
2288\label{sec:curAllocatorSec}
2289
2290With dynamic allocation being an important feature of C, there are many stand-alone memory allocators that have been designed for different purposes.
2291For this work, 7 of the most popular and widely used memory allocators were selected for comparison, along with llheap.
2292
2293\paragraph{llheap (\textsf{llh})}
2294is the thread-safe allocator from Chapter~\ref{c:Allocator}
2295\\
2296\textbf{Version:} 1.0
2297\textbf{Configuration:} Compiled with dynamic linking, but without statistics or debugging.\\
2298\textbf{Compilation command:} @make@
2299
2300\paragraph{glibc (\textsf{glc})}
2301\cite{glibc} is the default glibc thread-safe allocator.
2302\\
2303\textbf{Version:} Ubuntu GLIBC 2.31-0ubuntu9.7 2.31\\
2304\textbf{Configuration:} Compiled by Ubuntu 20.04.\\
2305\textbf{Compilation command:} N/A
2306
2307\paragraph{dlmalloc (\textsf{dl})}
2308\cite{dlmalloc} is a thread-safe allocator that is single threaded and single heap.
2309It maintains free-lists of different sizes to store freed dynamic memory.
2310\\
2311\textbf{Version:} 2.8.6\\
2312\textbf{Configuration:} Compiled with preprocessor @USE_LOCKS@.\\
2313\textbf{Compilation command:} @gcc -g3 -O3 -Wall -Wextra -fno-builtin-malloc -fno-builtin-calloc@ @-fno-builtin-realloc -fno-builtin-free -fPIC -shared -DUSE_LOCKS -o libdlmalloc.so malloc-2.8.6.c@
2314
2315\paragraph{hoard (\textsf{hrd})}
2316\cite{hoard} is a thread-safe allocator that is multi-threaded and uses a heap layer framework. It has per-thread heaps that have thread-local free-lists, and a global shared heap.
2317\\
2318\textbf{Version:} 3.13\\
2319\textbf{Configuration:} Compiled with hoard's default configurations and @Makefile@.\\
2320\textbf{Compilation command:} @make all@
2321
2322\paragraph{jemalloc (\textsf{je})}
2323\cite{jemalloc} is a thread-safe allocator that uses multiple arenas. Each thread is assigned an arena.
2324Each arena has chunks that contain contagious memory regions of same size. An arena has multiple chunks that contain regions of multiple sizes.
2325\\
2326\textbf{Version:} 5.2.1\\
2327\textbf{Configuration:} Compiled with jemalloc's default configurations and @Makefile@.\\
2328\textbf{Compilation command:} @autogen.sh; configure; make; make install@
2329
2330\paragraph{ptmalloc3 (\textsf{pt3})}
2331\cite{ptmalloc3} is a modification of dlmalloc.
2332It is a thread-safe multi-threaded memory allocator that uses multiple heaps.
2333ptmalloc3 heap has similar design to dlmalloc's heap.
2334\\
2335\textbf{Version:} 1.8\\
2336\textbf{Configuration:} Compiled with ptmalloc3's @Makefile@ using option ``linux-shared''.\\
2337\textbf{Compilation command:} @make linux-shared@
2338
2339\paragraph{rpmalloc (\textsf{rp})}
2340\cite{rpmalloc} is a thread-safe allocator that is multi-threaded and uses per-thread heap.
2341Each heap has multiple size-classes and each size-class contains memory regions of the relevant size.
2342\\
2343\textbf{Version:} 1.4.1\\
2344\textbf{Configuration:} Compiled with rpmalloc's default configurations and ninja build system.\\
2345\textbf{Compilation command:} @python3 configure.py; ninja@
2346
2347\paragraph{tbb malloc (\textsf{tbb})}
2348\cite{tbbmalloc} is a thread-safe allocator that is multi-threaded and uses a private heap for each thread.
2349Each private-heap has multiple bins of different sizes. Each bin contains free regions of the same size.
2350\\
2351\textbf{Version:} intel tbb 2020 update 2, tbb\_interface\_version == 11102\\
2352\textbf{Configuration:} Compiled with tbbmalloc's default configurations and @Makefile@.\\
2353\textbf{Compilation command:} @make@
2354
2355% \subsection{Experiment Environment}
2356% We used our micro benchmark suite (FIX ME: cite mbench) to evaluate these memory allocators Section~\ref{sec:curAllocatorSec} and our own memory allocator uHeap Section~\ref{sec:allocatorSec}.
2357
2358\subsection{Experiments}
2359
2360Each micro-benchmark is configured and run with each of the allocators,
2361The less time an allocator takes to complete a benchmark the better so lower in the graphs is better, except for the Memory micro-benchmark graphs.
2362All graphs use log scale on the Y-axis, except for the Memory micro-benchmark (see Section~\ref{s:MemoryMicroBenchmark}).
2363
2364%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2365%% CHURN
2366%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2367
2368\subsubsection{Churn Micro-Benchmark}
2369
2370Churn tests allocators for speed under intensive dynamic memory usage (see Section~\ref{s:ChurnBenchmark}).
2371This experiment was run with following configurations:
2372\begin{description}[itemsep=0pt,parsep=0pt]
2373\item[thread:]
23741, 2, 4, 8, 16, 32, 48
2375\item[spots:]
237616
2377\item[obj:]
2378100,000
2379\item[max:]
2380500
2381\item[min:]
238250
2383\item[step:]
238450
2385\item[distro:]
2386fisher
2387\end{description}
2388
2389% -maxS : 500
2390% -minS : 50
2391% -stepS : 50
2392% -distroS : fisher
2393% -objN : 100000
2394% -cSpots : 16
2395% -threadN : 1, 2, 4, 8, 16
2396
2397Figure~\ref{fig:churn} shows the results for algol and nasus.
2398The X-axis shows the number of threads;
2399the Y-axis shows the total experiment time.
2400Each allocator's performance for each thread is shown in different colors.
2401
2402\begin{figure}
2403\centering
2404 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/churn} } \\
2405 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/churn} }
2406\caption{Churn}
2407\label{fig:churn}
2408\end{figure}
2409
2410\paragraph{Assessment}
2411All allocators did well in this micro-benchmark, except for \textsf{dl} on the ARM.
2412\textsf{dl}'s is the slowest, indicating some small bottleneck with respect to the other allocators.
2413\textsf{je} is the fastest, with only a small benefit over the other allocators.
2414% llheap is slightly slower because it uses ownership, where many of the allocations have remote frees, which requires locking.
2415% When llheap is compiled without ownership, its performance is the same as the other allocators (not shown).
2416
2417%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2418%% THRASH
2419%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2420
2421\subsubsection{Cache Thrash}
2422\label{sec:cache-thrash-perf}
2423
2424Thrash tests memory allocators for active false sharing (see Section~\ref{sec:benchThrashSec}).
2425This experiment was run with following configurations:
2426\begin{description}[itemsep=0pt,parsep=0pt]
2427\item[threads:]
24281, 2, 4, 8, 16, 32, 48
2429\item[iterations:]
24301,000
2431\item[cacheRW:]
24321,000,000
2433\item[size:]
24341
2435\end{description}
2436
2437% * Each allocator was tested for its performance across different number of threads.
2438% Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
2439
2440Figure~\ref{fig:cacheThrash} shows the results for algol and nasus.
2441The X-axis shows the number of threads;
2442the Y-axis shows the total experiment time.
2443Each allocator's performance for each thread is shown in different colors.
2444
2445\begin{figure}
2446\centering
2447 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache_thrash_0-thrash} } \\
2448 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache_thrash_0-thrash} }
2449\caption{Cache Thrash}
2450\label{fig:cacheThrash}
2451\end{figure}
2452
2453\paragraph{Assessment}
2454All allocators did well in this micro-benchmark, except for \textsf{dl} and \textsf{pt3}.
2455\textsf{dl} uses a single heap for all threads so it is understandable that it generates so much active false-sharing.
2456Requests from different threads are dealt with sequentially by the single heap (using a single lock), which can allocate objects to different threads on the same cache line.
2457\textsf{pt3} uses the T:H model, so multiple threads can use one heap, but the active false-sharing is less than \textsf{dl}.
2458The rest of the memory allocators generate little or no active false-sharing.
2459
2460%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2461%% SCRATCH
2462%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2463
2464\subsubsection{Cache Scratch}
2465
2466Scratch tests memory allocators for program-induced allocator-preserved passive false-sharing (see Section~\ref{s:CacheScratch}).
2467This experiment was run with following configurations:
2468\begin{description}[itemsep=0pt,parsep=0pt]
2469\item[threads:]
24701, 2, 4, 8, 16, 32, 48
2471\item[iterations:]
24721,000
2473\item[cacheRW:]
24741,000,000
2475\item[size:]
24761
2477\end{description}
2478
2479% * Each allocator was tested for its performance across different number of threads.
2480% Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
2481
2482Figure~\ref{fig:cacheScratch} shows the results for algol and nasus.
2483The X-axis shows the number of threads;
2484the Y-axis shows the total experiment time.
2485Each allocator's performance for each thread is shown in different colors.
2486
2487\begin{figure}
2488\centering
2489 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache_scratch_0-scratch} } \\
2490 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache_scratch_0-scratch} }
2491\caption{Cache Scratch}
2492\label{fig:cacheScratch}
2493\end{figure}
2494
2495\paragraph{Assessment}
2496This micro-benchmark divides the allocators into two groups.
2497First is the high-performer group: \textsf{llh}, \textsf{je}, and \textsf{rp}.
2498These memory allocators generate little or no passive false-sharing and their performance difference is negligible.
2499Second is the low-performer group, which includes the rest of the memory allocators.
2500These memory allocators have significant program-induced passive false-sharing, where \textsf{hrd}'s is the worst performing allocator.
2501All of the allocators in this group are sharing heaps among threads at some level.
2502
2503Interestingly, allocators such as \textsf{hrd} and \textsf{glc} performed well in micro-benchmark cache thrash (see Section~\ref{sec:cache-thrash-perf}), but, these allocators are among the low performers in the cache scratch.
2504It suggests these allocators do not actively produce false-sharing, but preserve program-induced passive false sharing.
2505
2506%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2507%% SPEED
2508%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2509
2510\subsubsection{Speed Micro-Benchmark}
2511
2512Speed tests memory allocators for runtime latency (see Section~\ref{s:SpeedMicroBenchmark}).
2513This experiment was run with following configurations:
2514\begin{description}
2515\item[max:]
2516500
2517\item[min:]
251850
2519\item[step:]
252050
2521\item[distro:]
2522fisher
2523\item[objects:]
2524100,000
2525\item[workers:]
25261, 2, 4, 8, 16, 32, 48
2527\end{description}
2528
2529% -maxS : 500
2530% -minS : 50
2531% -stepS : 50
2532% -distroS : fisher
2533% -objN : 1000000
2534% -threadN : \{ 1, 2, 4, 8, 16 \} *
2535
2536%* Each allocator was tested for its performance across different number of threads.
2537%Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
2538
2539Figures~\ref{fig:speed-3-malloc} to~\ref{fig:speed-14-malloc-calloc-realloc-free} show 12 figures, one figure for each chain of the speed benchmark.
2540The X-axis shows the number of threads;
2541the Y-axis shows the total experiment time.
2542Each allocator's performance for each thread is shown in different colors.
2543
2544\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
2545\item Figure~\ref{fig:speed-3-malloc} shows results for chain: malloc
2546\item Figure~\ref{fig:speed-4-realloc} shows results for chain: realloc
2547\item Figure~\ref{fig:speed-5-free} shows results for chain: free
2548\item Figure~\ref{fig:speed-6-calloc} shows results for chain: calloc
2549\item Figure~\ref{fig:speed-7-malloc-free} shows results for chain: malloc-free
2550\item Figure~\ref{fig:speed-8-realloc-free} shows results for chain: realloc-free
2551\item Figure~\ref{fig:speed-9-calloc-free} shows results for chain: calloc-free
2552\item Figure~\ref{fig:speed-10-malloc-realloc} shows results for chain: malloc-realloc
2553\item Figure~\ref{fig:speed-11-calloc-realloc} shows results for chain: calloc-realloc
2554\item Figure~\ref{fig:speed-12-malloc-realloc-free} shows results for chain: malloc-realloc-free
2555\item Figure~\ref{fig:speed-13-calloc-realloc-free} shows results for chain: calloc-realloc-free
2556\item Figure~\ref{fig:speed-14-malloc-calloc-realloc-free} shows results for chain: malloc-realloc-free-calloc
2557\end{itemize}
2558
2559\paragraph{Assessment}
2560This micro-benchmark divides the allocators into two groups: with and without @calloc@.
2561@calloc@ uses @memset@ to set the allocated memory to zero, which dominates the cost of the allocation chain (large performance increase) and levels performance across the allocators.
2562But the difference among the allocators in a @calloc@ chain still gives an idea of their relative performance.
2563
2564All allocators did well in this micro-benchmark across all allocation chains, except for \textsf{dl}, \textsf{pt3}, and \textsf{hrd}.
2565Again, the low-performing allocators are sharing heaps among threads, so the contention causes performance increases with increasing numbers of threads.
2566Furthermore, chains with @free@ can trigger coalescing, which slows the fast path.
2567The high-performing allocators all illustrate low latency across the allocation chains, \ie there are no performance spikes as the chain lengths, that might be caused by contention and/or coalescing.
2568Low latency is important for applications that are sensitive to unknown execution delays.
2569
2570%speed-3-malloc.eps
2571\begin{figure}
2572\centering
2573 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-3-malloc} } \\
2574 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-3-malloc} }
2575\caption{Speed benchmark chain: malloc}
2576\label{fig:speed-3-malloc}
2577\end{figure}
2578
2579%speed-4-realloc.eps
2580\begin{figure}
2581\centering
2582 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-4-realloc} } \\
2583 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-4-realloc} }
2584\caption{Speed benchmark chain: realloc}
2585\label{fig:speed-4-realloc}
2586\end{figure}
2587
2588%speed-5-free.eps
2589\begin{figure}
2590\centering
2591 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-5-free} } \\
2592 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-5-free} }
2593\caption{Speed benchmark chain: free}
2594\label{fig:speed-5-free}
2595\end{figure}
2596
2597%speed-6-calloc.eps
2598\begin{figure}
2599\centering
2600 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-6-calloc} } \\
2601 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-6-calloc} }
2602\caption{Speed benchmark chain: calloc}
2603\label{fig:speed-6-calloc}
2604\end{figure}
2605
2606%speed-7-malloc-free.eps
2607\begin{figure}
2608\centering
2609 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-7-malloc-free} } \\
2610 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-7-malloc-free} }
2611\caption{Speed benchmark chain: malloc-free}
2612\label{fig:speed-7-malloc-free}
2613\end{figure}
2614
2615%speed-8-realloc-free.eps
2616\begin{figure}
2617\centering
2618 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-8-realloc-free} } \\
2619 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-8-realloc-free} }
2620\caption{Speed benchmark chain: realloc-free}
2621\label{fig:speed-8-realloc-free}
2622\end{figure}
2623
2624%speed-9-calloc-free.eps
2625\begin{figure}
2626\centering
2627 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-9-calloc-free} } \\
2628 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-9-calloc-free} }
2629\caption{Speed benchmark chain: calloc-free}
2630\label{fig:speed-9-calloc-free}
2631\end{figure}
2632
2633%speed-10-malloc-realloc.eps
2634\begin{figure}
2635\centering
2636 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-10-malloc-realloc} } \\
2637 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-10-malloc-realloc} }
2638\caption{Speed benchmark chain: malloc-realloc}
2639\label{fig:speed-10-malloc-realloc}
2640\end{figure}
2641
2642%speed-11-calloc-realloc.eps
2643\begin{figure}
2644\centering
2645 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-11-calloc-realloc} } \\
2646 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-11-calloc-realloc} }
2647\caption{Speed benchmark chain: calloc-realloc}
2648\label{fig:speed-11-calloc-realloc}
2649\end{figure}
2650
2651%speed-12-malloc-realloc-free.eps
2652\begin{figure}
2653\centering
2654 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-12-malloc-realloc-free} } \\
2655 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-12-malloc-realloc-free} }
2656\caption{Speed benchmark chain: malloc-realloc-free}
2657\label{fig:speed-12-malloc-realloc-free}
2658\end{figure}
2659
2660%speed-13-calloc-realloc-free.eps
2661\begin{figure}
2662\centering
2663 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-13-calloc-realloc-free} } \\
2664 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-13-calloc-realloc-free} }
2665\caption{Speed benchmark chain: calloc-realloc-free}
2666\label{fig:speed-13-calloc-realloc-free}
2667\end{figure}
2668
2669%speed-14-{m,c,re}alloc-free.eps
2670\begin{figure}
2671\centering
2672 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-14-m-c-re-alloc-free} } \\
2673 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-14-m-c-re-alloc-free} }
2674\caption{Speed benchmark chain: malloc-calloc-realloc-free}
2675\label{fig:speed-14-malloc-calloc-realloc-free}
2676\end{figure}
2677
2678%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2679%% MEMORY
2680%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2681
2682\newpage
2683\subsubsection{Memory Micro-Benchmark}
2684\label{s:MemoryMicroBenchmark}
2685
2686This experiment is run with the following two configurations for each allocator.
2687The difference between the two configurations is the number of producers and consumers.
2688Configuration 1 has one producer and one consumer, and configuration 2 has 4 producers, where each producer has 4 consumers.
2689
2690\noindent
2691Configuration 1:
2692\begin{description}[itemsep=0pt,parsep=0pt]
2693\item[producer (K):]
26941
2695\item[consumer (M):]
26961
2697\item[round:]
2698100,000
2699\item[max:]
2700500
2701\item[min:]
270250
2703\item[step:]
270450
2705\item[distro:]
2706fisher
2707\item[objects (N):]
2708100,000
2709\end{description}
2710
2711% -threadA : 1
2712% -threadF : 1
2713% -maxS : 500
2714% -minS : 50
2715% -stepS : 50
2716% -distroS : fisher
2717% -objN : 100000
2718% -consumeS: 100000
2719
2720\noindent
2721Configuration 2:
2722\begin{description}[itemsep=0pt,parsep=0pt]
2723\item[producer (K):]
27244
2725\item[consumer (M):]
27264
2727\item[round:]
2728100,000
2729\item[max:]
2730500
2731\item[min:]
273250
2733\item[step:]
273450
2735\item[distro:]
2736fisher
2737\item[objects (N):]
2738100,000
2739\end{description}
2740
2741% -threadA : 4
2742% -threadF : 4
2743% -maxS : 500
2744% -minS : 50
2745% -stepS : 50
2746% -distroS : fisher
2747% -objN : 100000
2748% -consumeS: 100000
2749
2750% \begin{table}[b]
2751% \centering
2752% \begin{tabular}{ |c|c|c| }
2753% \hline
2754% Memory Allocator & Configuration 1 Result & Configuration 2 Result\\
2755% \hline
2756% llh & Figure~\ref{fig:mem-1-prod-1-cons-100-llh} & Figure~\ref{fig:mem-4-prod-4-cons-100-llh}\\
2757% \hline
2758% dl & Figure~\ref{fig:mem-1-prod-1-cons-100-dl} & Figure~\ref{fig:mem-4-prod-4-cons-100-dl}\\
2759% \hline
2760% glibc & Figure~\ref{fig:mem-1-prod-1-cons-100-glc} & Figure~\ref{fig:mem-4-prod-4-cons-100-glc}\\
2761% \hline
2762% hoard & Figure~\ref{fig:mem-1-prod-1-cons-100-hrd} & Figure~\ref{fig:mem-4-prod-4-cons-100-hrd}\\
2763% \hline
2764% je & Figure~\ref{fig:mem-1-prod-1-cons-100-je} & Figure~\ref{fig:mem-4-prod-4-cons-100-je}\\
2765% \hline
2766% pt3 & Figure~\ref{fig:mem-1-prod-1-cons-100-pt3} & Figure~\ref{fig:mem-4-prod-4-cons-100-pt3}\\
2767% \hline
2768% rp & Figure~\ref{fig:mem-1-prod-1-cons-100-rp} & Figure~\ref{fig:mem-4-prod-4-cons-100-rp}\\
2769% \hline
2770% tbb & Figure~\ref{fig:mem-1-prod-1-cons-100-tbb} & Figure~\ref{fig:mem-4-prod-4-cons-100-tbb}\\
2771% \hline
2772% \end{tabular}
2773% \caption{Memory benchmark results}
2774% \label{table:mem-benchmark-figs}
2775% \end{table}
2776% Table Section~\ref{table:mem-benchmark-figs} shows the list of figures that contain memory benchmark results.
2777
2778Figures~\ref{fig:mem-1-prod-1-cons-100-llh}{fig:mem-4-prod-4-cons-100-tbb} show 16 figures, two figures for each of the 8 allocators, one for each configuration.
2779Each figure has 2 graphs, one for each experiment environment.
2780Each graph has following 5 subgraphs that show memory usage and statistics throughout the micro-benchmark's lifetime.
2781\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
2782\item \textit{\textbf{current\_req\_mem(B)}} shows the amount of dynamic memory requested and currently in-use of the benchmark.
2783\item \textit{\textbf{heap}}* shows the memory requested by the program (allocator) from the system that lies in the heap (@sbrk@) area.
2784\item \textit{\textbf{mmap\_so}}* shows the memory requested by the program (allocator) from the system that lies in the @mmap@ area.
2785\item \textit{\textbf{mmap}}* shows the memory requested by the program (allocator or shared libraries) from the system that lies in the @mmap@ area.
2786\item \textit{\textbf{total\_dynamic}} shows the total usage of dynamic memory by the benchmark program, which is a sum of \textit{heap}, \textit{mmap}, and \textit{mmap\_so}.
2787\end{itemize}
2788* These statistics are gathered by monitoring a process's @/proc/self/maps@ file.
2789
2790The X-axis shows the time when the memory information is polled.
2791The Y-axis shows the memory usage in bytes.
2792
2793For this experiment, the difference between the memory requested by the benchmark (\textit{current\_req\_mem(B)}) and the memory that the process has received from system (\textit{heap}, \textit{mmap}) should be minimum.
2794This difference is the memory overhead caused by the allocator and shows the level of fragmentation in the allocator.
2795
2796\paragraph{Assessment}
2797First, the differences in the shape of the curves between architectures (top ARM, bottom x64) is small, where the differences are in the amount of memory used.
2798Hence, it is possible to focus on either the top or bottom graph.
2799
2800Second, the heap curve is 0 for four memory allocators: \textsf{hrd}, \textsf{je}, \textsf{pt3}, and \textsf{rp}, indicating these memory allocators only use @mmap@ to get memory from the system and ignore the @sbrk@ area.
2801
2802The total dynamic memory is higher for \textsf{hrd} and \textsf{tbb} than the other allocators.
2803The main reason is the use of superblocks (see Section~\ref{s:ObjectContainers}) containing objects of the same size.
2804These superblocks are maintained throughout the life of the program.
2805
2806\textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program.
2807It makes pt3 the only memory allocator that gives memory back to the OS as it is freed by the program.
2808
2809% FOR 1 THREAD
2810
2811%mem-1-prod-1-cons-100-llh.eps
2812\begin{figure}
2813\centering
2814 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-llh} } \\
2815 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-llh} }
2816\caption{Memory benchmark results with Configuration-1 for llh memory allocator}
2817\label{fig:mem-1-prod-1-cons-100-llh}
2818\end{figure}
2819
2820%mem-1-prod-1-cons-100-dl.eps
2821\begin{figure}
2822\centering
2823 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-dl} } \\
2824 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-dl} }
2825\caption{Memory benchmark results with Configuration-1 for dl memory allocator}
2826\label{fig:mem-1-prod-1-cons-100-dl}
2827\end{figure}
2828
2829%mem-1-prod-1-cons-100-glc.eps
2830\begin{figure}
2831\centering
2832 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-glc} } \\
2833 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-glc} }
2834\caption{Memory benchmark results with Configuration-1 for glibc memory allocator}
2835\label{fig:mem-1-prod-1-cons-100-glc}
2836\end{figure}
2837
2838%mem-1-prod-1-cons-100-hrd.eps
2839\begin{figure}
2840\centering
2841 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-hrd} } \\
2842 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-hrd} }
2843\caption{Memory benchmark results with Configuration-1 for hoard memory allocator}
2844\label{fig:mem-1-prod-1-cons-100-hrd}
2845\end{figure}
2846
2847%mem-1-prod-1-cons-100-je.eps
2848\begin{figure}
2849\centering
2850 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-je} } \\
2851 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-je} }
2852\caption{Memory benchmark results with Configuration-1 for je memory allocator}
2853\label{fig:mem-1-prod-1-cons-100-je}
2854\end{figure}
2855
2856%mem-1-prod-1-cons-100-pt3.eps
2857\begin{figure}
2858\centering
2859 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-pt3} } \\
2860 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-pt3} }
2861\caption{Memory benchmark results with Configuration-1 for pt3 memory allocator}
2862\label{fig:mem-1-prod-1-cons-100-pt3}
2863\end{figure}
2864
2865%mem-1-prod-1-cons-100-rp.eps
2866\begin{figure}
2867\centering
2868 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-rp} } \\
2869 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-rp} }
2870\caption{Memory benchmark results with Configuration-1 for rp memory allocator}
2871\label{fig:mem-1-prod-1-cons-100-rp}
2872\end{figure}
2873
2874%mem-1-prod-1-cons-100-tbb.eps
2875\begin{figure}
2876\centering
2877 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-tbb} } \\
2878 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-tbb} }
2879\caption{Memory benchmark results with Configuration-1 for tbb memory allocator}
2880\label{fig:mem-1-prod-1-cons-100-tbb}
2881\end{figure}
2882
2883% FOR 4 THREADS
2884
2885%mem-4-prod-4-cons-100-llh.eps
2886\begin{figure}
2887\centering
2888 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-llh} } \\
2889 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-llh} }
2890\caption{Memory benchmark results with Configuration-2 for llh memory allocator}
2891\label{fig:mem-4-prod-4-cons-100-llh}
2892\end{figure}
2893
2894%mem-4-prod-4-cons-100-dl.eps
2895\begin{figure}
2896\centering
2897 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-dl} } \\
2898 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-dl} }
2899\caption{Memory benchmark results with Configuration-2 for dl memory allocator}
2900\label{fig:mem-4-prod-4-cons-100-dl}
2901\end{figure}
2902
2903%mem-4-prod-4-cons-100-glc.eps
2904\begin{figure}
2905\centering
2906 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-glc} } \\
2907 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-glc} }
2908\caption{Memory benchmark results with Configuration-2 for glibc memory allocator}
2909\label{fig:mem-4-prod-4-cons-100-glc}
2910\end{figure}
2911
2912%mem-4-prod-4-cons-100-hrd.eps
2913\begin{figure}
2914\centering
2915 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-hrd} } \\
2916 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-hrd} }
2917\caption{Memory benchmark results with Configuration-2 for hoard memory allocator}
2918\label{fig:mem-4-prod-4-cons-100-hrd}
2919\end{figure}
2920
2921%mem-4-prod-4-cons-100-je.eps
2922\begin{figure}
2923\centering
2924 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-je} } \\
2925 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-je} }
2926\caption{Memory benchmark results with Configuration-2 for je memory allocator}
2927\label{fig:mem-4-prod-4-cons-100-je}
2928\end{figure}
2929
2930%mem-4-prod-4-cons-100-pt3.eps
2931\begin{figure}
2932\centering
2933 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-pt3} } \\
2934 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-pt3} }
2935\caption{Memory benchmark results with Configuration-2 for pt3 memory allocator}
2936\label{fig:mem-4-prod-4-cons-100-pt3}
2937\end{figure}
2938
2939%mem-4-prod-4-cons-100-rp.eps
2940\begin{figure}
2941\centering
2942 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-rp} } \\
2943 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-rp} }
2944\caption{Memory benchmark results with Configuration-2 for rp memory allocator}
2945\label{fig:mem-4-prod-4-cons-100-rp}
2946\end{figure}
2947
2948%mem-4-prod-4-cons-100-tbb.eps
2949\begin{figure}
2950\centering
2951 %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-tbb} } \\
2952 %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-tbb} }
2953\caption{Memory benchmark results with Configuration-2 for tbb memory allocator}
2954\label{fig:mem-4-prod-4-cons-100-tbb}
2955\end{figure}
2956
2957
2958\section{Conclusion}
2959
2960% \noindent
2961% ====================
2962%
2963% Writing Points:
2964% \begin{itemize}
2965% \item
2966% Summarize u-benchmark suite.
2967% \item
2968% Summarize @uHeapLmmm@.
2969% \item
2970% Make recommendations on memory allocator design.
2971% \end{itemize}
2972%
2973% \noindent
2974% ====================
2975
2976The goal of this work was to build a low-latency (or high bandwidth) memory allocator for both KT and UT multi-threading systems that is competitive with the best current memory allocators while extending the feature set of existing and new allocator routines.
2977The new llheap memory-allocator achieves all of these goals, while maintaining and managing sticky allocation information without a performance loss.
2978Hence, it becomes possible to use @realloc@ frequently as a safe operation, rather than just occasionally.
2979Furthermore, the ability to query sticky properties and information allows programmers to write safer programs, as it is possible to dynamically match allocation styles from unknown library routines that return allocations.
2980
2981Extending the C allocation API with @resize@, advanced @realloc@, @aalloc@, @amemalign@, and @cmemalign@ means programmers do not have to do these useful allocation operations themselves.
2982The ability to use \CFA's advanced type-system (and possibly \CC's too) to have one allocation routine with completely orthogonal sticky properties shows how far the allocation API can be pushed, which increases safety and greatly simplifies programmer's use of dynamic allocation.
2983
2984Providing comprehensive statistics for all allocation operations is invaluable in understanding and debugging a program's dynamic behaviour.
2985No other memory allocator provides such comprehensive statistics gathering.
2986This capability was used extensively during the development of llheap to verify its behaviour.
2987As well, providing a debugging mode where allocations are checked, along with internal pre/post conditions and invariants, is extremely useful, especially for students.
2988While not as powerful as the @valgrind@ interpreter, a large number of allocation mistakes are detected.
2989Finally, contention-free statistics gathering and debugging have a low enough cost to be used in production code.
2990
2991The ability to compile llheap with static/dynamic linking and optional statistics/debugging provides programers with multiple mechanisms to balance performance and safety.
2992These allocator versions are easy to use because they can be linked to an application without recompilation.
2993
2994Starting a micro-benchmark test-suite for comparing allocators, rather than relying on a suite of arbitrary programs, has been an interesting challenge.
2995The current micro-benchmarks allow some understanding of allocator implementation properties without actually looking at the implementation.
2996For example, the memory micro-benchmark quickly identified how several of the allocators work at the global level.
2997It was not possible to show how the micro-benchmarks adjustment knobs were used to tune to an interesting test point.
2998Many graphs were created and discarded until a few were selected for the work.
2999
3000
3001\subsection{Future Work}
3002
3003A careful walk-though of the allocator fastpath should yield additional optimizations for a slight performance gain.
3004In particular, analysing the implementation of rpmalloc, which is often the fastest allocator,
3005
3006The micro-benchmark project requires more testing and analysis.
3007Additional allocation patterns are needed to extract meaningful information about allocators, and within allocation patterns, what are the most useful tuning knobs.
3008Also, identifying ways to visualize the results of the micro-benchmarks is a work in progress.
3009
3010After llheap is made available on GitHub, interacting with its users to locate problems and improvements will make llbench a more robust memory allocator.
3011As well, feedback from the \uC and \CFA projects, which have adopted llheap for their memory allocator, will provide additional information.
3012
3013
3014
3015\section{Acknowledgements}
3016
3017This research is funded by the NSERC/Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
3018
3019{%
3020\fontsize{9bp}{11.5bp}\selectfont%
3021\bibliography{pl,local}
3022}%
3023
3024\end{document}
3025
3026% Local Variables: %
3027% tab-width: 4 %
3028% fill-column: 120 %
3029% compile-command: "make" %
3030% End: %
Note: See TracBrowser for help on using the repository browser.