\documentclass[AMA,STIX1COL]{WileyNJD-v2}

% Latex packages used in the document.

\usepackage{comment}
\usepackage{epic,eepic}
\usepackage{upquote} % switch curled `'" to straight
\usepackage{relsize}
\usepackage{xspace}
\usepackage{calc}
\usepackage[scaled=0.88]{helvet} % decent Helvetica font and scale to times size
\usepackage[T1]{fontenc}
\usepackage{listings} % format program code
\usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig}
\renewcommand{\thesubfigure}{(\alph{subfigure})}
\usepackage{enumitem}

\hypersetup{breaklinks=true}

\usepackage[pagewise]{lineno}
\renewcommand{\linenumberfont}{\scriptsize\sffamily}

\usepackage{varioref} % extended references
% adjust varioref package with default "section" and "page" titles, and optional title with faraway page numbers
% \VRef{label} => Section 2.7, \VPageref{label} => page 17
% \VRef[Figure]{label} => Figure 3.4, \VPageref{label} => page 17
% \renewcommand{\reftextfaceafter}{\unskip}
% \renewcommand{\reftextfacebefore}{\unskip}
% \renewcommand{\reftextafter}{\unskip}
% \renewcommand{\reftextbefore}{\unskip}
% \renewcommand{\reftextfaraway}[1]{\unskip, p.~\pageref{#1}}
% \renewcommand{\reftextpagerange}[2]{\unskip, pp.~\pageref{#1}--\pageref{#2}}
% \newcommand{\VRef}[2][Section]{\ifx#1\@empty\else{#1}\nobreakspace\fi\vref{#2}}
% \newcommand{\VRefrange}[3][Sections]{\ifx#1\@empty\else{#1}\nobreakspace\fi\vrefrange{#2}{#3}}
% \newcommand{\VPageref}[2][page]{\ifx#1\@empty\else{#1}\nobreakspace\fi\pageref{#2}}
% \newcommand{\VPagerefrange}[3][pages]{\ifx#1\@empty\else{#1}\nobreakspace\fi\pageref{#2}{#3}}

\makeatletter
\newcommand{\abbrevFont}{\textit} % set empty for no italics
\newcommand{\CheckCommaColon}{\@ifnextchar{,}{}{\@ifnextchar{:}{}{,\xspace}}}
\newcommand{\CheckPeriod}{\@ifnextchar{.}{}{.\xspace}}
\newcommand{\EG}{\abbrevFont{e}.\abbrevFont{g}.}
\newcommand{\eg}{\EG\CheckCommaColon}
\newcommand{\IE}{\abbrevFont{i}.\abbrevFont{e}.}
\newcommand{\ie}{\IE\CheckCommaColon}
\newcommand{\ETC}{\abbrevFont{etc}}
\newcommand{\etc}{\ETC\CheckPeriod}
\newcommand{\VS}{\abbrevFont{vs}}
\newcommand{\vs}{\VS\CheckPeriod}

\newcommand{\newtermFont}{\emph}
\newcommand{\newterm}[1]{\newtermFont{#1}}

\newcommand{\CFAIcon}{\textsf{C}\raisebox{\depth}{\rotatebox{180}{\textsf{A}}}\xspace} % Cforall symbolic name
\newcommand{\CFA}{\protect\CFAIcon} % safe for section/caption
\newcommand{\CFL}{\textrm{Cforall}\xspace} % Cforall symbolic name
\newcommand{\CCIcon}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}} % C++ icon
\newcommand{\CC}[1][]{\protect\CCIcon{#1}\xspace} % C++ symbolic name
\newcommand{\uC}{$\mu$\CC}
\newcommand{\Csharp}{C\raisebox{-0.7ex}{\relsize{2}$^\sharp$}\xspace} % C# symbolic name

\newcommand{\LstBasicStyle}[1]{{\lst@basicstyle{#1}}}
\newcommand{\LstKeywordStyle}[1]{{\lst@basicstyle{\lst@keywordstyle{#1}}}}
\newcommand{\LstCommentStyle}[1]{{\lst@basicstyle{\lst@commentstyle{#1}}}}
\newcommand{\LstStringStyle}[1]{{\lst@basicstyle{\lst@stringstyle{#1}}}}

\newlength{\parindentlnth}
\setlength{\parindentlnth}{\parindent}
\newlength{\gcolumnposn} % temporary hack because lstlisting does not handle tabs correctly
\newlength{\columnposn}
\setlength{\gcolumnposn}{3.25in}
\setlength{\columnposn}{\gcolumnposn}
\newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
\newcommand{\CRT}{\global\columnposn=\gcolumnposn}
\makeatother

\lstset{
columns=fullflexible,
basicstyle=\linespread{0.9}\sf, % reduce line spacing and use sanserif font
stringstyle=\tt, % use typewriter font
tabsize=5, % N space tabbing
xleftmargin=\parindentlnth, % indent code to paragraph indentation
%mathescape=true, % LaTeX math escape in CFA code $...$
escapechar=\$, % LaTeX escape in CFA code
keepspaces=true, %
showstringspaces=false, % do not show spaces with cup
showlines=true, % show blank lines at end of code
aboveskip=4pt, % spacing above/below code block
belowskip=3pt,
moredelim=**[is][\color{red}]{`}{`},
}% lstset


% CFA programming language, based on ANSI C (with some gcc additions)
\lstdefinelanguage{CFA}[ANSI]{C}{
	morekeywords={
		_Alignas, _Alignof, __alignof, __alignof__, asm, __asm, __asm__, __attribute, __attribute__,
		auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__,
		coroutine, disable, dtype, enable, exception, __extension__, fallthrough, fallthru, finally,
		__float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
		inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
		otype, restrict, resume, __restrict, __restrict__, __signed, __signed__, _Static_assert, suspend, thread,
		_Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
		virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
	moredirectives={defined,include_next},
	% replace/adjust listing characters that look bad in sanserif
	literate={-}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
		{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
		{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
		{<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
}

% uC++ programming language, based on ANSI C++
\lstdefinelanguage{uC++}[ANSI]{C++}{
	morekeywords={
		_Accept, _AcceptReturn, _AcceptWait, _Actor, _At, _CatchResume, _Cormonitor, _Coroutine, _Disable,
		_Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
		_Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
}

% Go programming language: https://github.com/julienc91/listings-golang/blob/master/listings-golang.sty
\lstdefinelanguage{Golang}{
	morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
	morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
		bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
	morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
	morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
	morekeywords=[5]{Println,Printf,Error,},
	sensitive=true,
	morecomment=[l]{//},
	morecomment=[s]{/*}{*/},
	morestring=[b]',
	morestring=[b]",
	morestring=[s]{`}{`},
	% replace/adjust listing characters that look bad in sanserif
	literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
		{~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
		{<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
		{<-}{\makebox[2ex][c]{\textrm{\textless}\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}}2,
}


\lstnewenvironment{cfa}[1][]
{\lstset{language=CFA,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}
\lstnewenvironment{C++}[1][] % use C++ style
{\lstset{language=C++,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}
\lstnewenvironment{uC++}[1][]
{\lstset{language=uC++,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}
\lstnewenvironment{Go}[1][]
{\lstset{language=Golang,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}
\lstnewenvironment{python}[1][]
{\lstset{language=python,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}
\lstnewenvironment{java}[1][]
{\lstset{language=java,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
{}

% inline code @...@
\lstMakeShortInline@%

% \let\OLDthebibliography\thebibliography
% \renewcommand\thebibliography[1]{
% \OLDthebibliography{#1}
% \setlength{\parskip}{0pt}
% \setlength{\itemsep}{4pt plus 0.3ex}
% }

\newsavebox{\myboxA}
\newsavebox{\myboxB}
\newsavebox{\myboxC}
\newsavebox{\myboxD}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\articletype{RESEARCH ARTICLE}%

% Referees
% Doug Lea, dl@cs.oswego.edu, SUNY Oswego
% Herb Sutter, hsutter@microsoft.com, Microsoft Corp
% Gor Nishanov, gorn@microsoft.com, Microsoft Corp
% James Noble, kjx@ecs.vuw.ac.nz, Victoria University of Wellington, School of Engineering and Computer Science

\received{XXXXX}
\revised{XXXXX}
\accepted{XXXXX}

\raggedbottom

\title{High-Performance Concurrent Memory Allocation}

\author[1]{Mubeen Zulfiqar}
\author[1]{Peter A. Buhr*}
\author[1]{Thierry Delisle}
\author[1]{Ayelet Wasik}
\authormark{ZULFIQAR \textsc{et al.}}

\address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, ON}, \country{Canada}}}

\corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}

% \fundingInfo{Natural Sciences and Engineering Research Council of Canada}

\abstract[Summary]{
A new C-based concurrent memory-allocator is presented, called llheap.
It can be used standalone in C/\CC applications with multiple kernel threads, or embedded into high-performance user-threading programming languages.
llheap extends the feature set of existing C allocation by remembering the zero-filled (\lstinline{calloc}) and aligned (\lstinline{memalign}) properties of an allocation.
These properties can be queried, allowing programmers to write safer programs by preserving these properties in future allocations.
As well, \lstinline{realloc} preserves these properties when enlarging storage requests, again increasing future allocation safety.
llheap also extends the C allocation API with \lstinline{resize}, an extended \lstinline{realloc}, \lstinline{aalloc}, \lstinline{amemalign}, and \lstinline{cmemalign}, providing orthogonal access to allocation properties, so programmers do not make mistakes writing these useful allocation operations.
llheap is competitive with the best current memory allocators, providing low latency without a performance loss.
The ability to use \CFA's advanced type-system (and possibly \CC's too) to combine advanced memory operations into one allocation routine using named arguments shows how far the allocation API can be pushed, which increases safety and greatly simplifies a programmer's use of dynamic allocation.
The llheap allocator also provides comprehensive statistics for all allocation operations, which are invaluable in understanding and debugging a program's dynamic behaviour.
As well, llheap provides a debugging mode where allocations are checked with internal pre/post conditions and invariants;
it is extremely useful, especially for students.
% No other memory allocator examined in the work provides such comprehensive statistics gathering.
% While not as powerful as the \lstinline{valgrind} interpreter, a large number of allocations mistakes are detected.
% Finally, contention-free statistics gathering and debugging have a low enough cost to be used in production code.
%
% A micro-benchmark test-suite is started for comparing allocators, rather than relying on a suite of arbitrary programs. It has been an interesting challenge.
% These micro-benchmarks have adjustment knobs to simulate allocation patterns hard-coded into arbitrary test programs.
% Existing memory allocators, glibc, dlmalloc, hoard, jemalloc, ptmalloc3, rpmalloc, tbmalloc, and the new allocator llheap are all compared using the new micro-benchmark test-suite.
}% abstract

\keywords{C \CFA (Cforall) coroutine concurrency generator monitor parallelism runtime thread}


\begin{document}
%\linenumbers % comment out to turn off line numbering

\maketitle


\section{Introduction}

Memory management takes a sequence of program-generated allocation/deallocation requests and attempts to satisfy them within a fixed-sized block of memory while minimizing the total amount of memory used.
A general-purpose dynamic-allocation algorithm cannot anticipate future allocation requests, so its output is rarely optimal.
However, memory allocators do take advantage of regularities in allocation patterns for typical programs to produce excellent results, both in time and space (similar to LRU paging).
In general, allocators use a number of similar techniques, each optimizing specific allocation patterns.
Nevertheless, memory allocators are a series of compromises, occasionally with some static or dynamic tuning parameters to optimize specific program-request patterns.


\subsection{Memory Structure}
\label{s:MemoryStructure}

Figure~\ref{f:ProgramAddressSpace} shows the typical layout of a program's address space, divided into the following zones (right to left): static code/data, dynamic allocation, dynamic code/data, and stack, with free memory surrounding the dynamic code/data~\cite{memlayout}.
Static code and data are placed into memory at load time from the executable and are fixed-sized at runtime.
Dynamic-allocation memory starts empty and grows/shrinks as the program dynamically creates/deletes variables with independent lifetimes.
The programming-language's runtime manages this area, where management complexity is a function of the mechanism for deleting variables.
Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex, especially in a multi-threaded program~\cite{Huang06}.
However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
Stack memory is managed by the program call/return-mechanism using a LIFO technique, which works well for sequential programs.
For stackful coroutines and user threads, a new stack is commonly created in the dynamic-allocation memory.
This work focuses solely on management of the dynamic-allocation memory.

\begin{figure}
\centering
\input{AddressSpace}
\vspace{-5pt}
\caption{Program Address Space Divided into Zones}
\label{f:ProgramAddressSpace}
\end{figure}


\subsection{Dynamic Memory-Management}
\label{s:DynamicMemoryManagement}

Modern programming languages manage dynamic-allocation memory in different ways.
Some languages, such as Lisp~\cite{CommonLisp}, Java~\cite{Java}, Haskell~\cite{Haskell}, and Go~\cite{Go}, provide explicit allocation but \emph{implicit} deallocation of data through garbage collection~\cite{Wilson92}.
In general, garbage collection supports memory compaction, where dynamic (live) data is moved during runtime to better utilize space.
However, moving data requires finding pointers to it and updating them to reflect new data locations.
Programming languages such as C~\cite{C}, \CC~\cite{C++}, and Rust~\cite{Rust} provide the programmer with explicit allocation \emph{and} deallocation of data.
These languages cannot find and subsequently move live data because pointers can be created to any storage zone, including internal components of allocated objects, and may contain temporary invalid values generated by pointer arithmetic.
Attempts have been made to perform quasi garbage collection in C/\CC~\cite{Boehm88}, but it is a compromise.
This work only examines dynamic memory-management with \emph{explicit} deallocation.
While garbage collection and compaction are not part of this work, many of the results are applicable to the allocation phase in any memory-management approach.

Most programs use a general-purpose allocator, often the one provided implicitly by the programming-language's runtime.
When this allocator proves inadequate, programmers often write specialized allocators for specific needs.
C and \CC allow easy replacement of the default memory allocator with an alternative specialized or general-purpose memory-allocator.
Jikes RVM MMTk~\cite{MMTk} provides a similar generalization for the Java virtual machine.
However, high-performance memory-allocators for kernel and user multi-threaded programs are still being designed and improved.
For this reason, several alternative general-purpose allocators have been written for C/\CC with the goal of scaling in a multi-threaded program~\cite{Berger00,mtmalloc,streamflow,tcmalloc}.
This work examines the design of high-performance allocators for use by kernel and user multi-threaded applications written in C/\CC.


\subsection{Contributions}
\label{s:Contributions}

This work provides the following contributions in the area of explicit concurrent dynamic-allocation:
\begin{enumerate}[leftmargin=*,itemsep=0pt]
\item
Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC~\cite{uC++} and \CFA~\cite{Moss18,Delisle21} using user-level threads running on multiple kernel threads (M:N threading).

\item
Extension of the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, whether an allocation is zero filled, and its alignment.

\item
Use of the preserved zero fill and alignment as \emph{sticky} properties for @realloc@ to zero-fill and align when storage is extended or copied.
Without this extension, it is unsafe to @realloc@ storage initially allocated with zero-fill/alignment as these properties are not preserved when copying.
This silent generation of a problem is unintuitive to programmers and difficult to locate because it is transient.

\item
Additional heap operations to complete programmer expectation with respect to accessing different allocation properties (see the usage sketch following this list).
\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
\item
@resize( oaddr, size )@ re-purposes an old allocation for a new type \emph{without} preserving fill or alignment.
\item
@resize( oaddr, alignment, size )@ re-purposes an old allocation with new alignment but \emph{without} preserving fill.
\item
@realloc( oaddr, alignment, size )@ same as @realloc@ but adding or changing alignment.
\item
@aalloc( dim, elemSize )@ same as @calloc@ except memory is \emph{not} zero filled.
\item
@amemalign( alignment, dim, elemSize )@ same as @aalloc@ with memory alignment.
\item
@cmemalign( alignment, dim, elemSize )@ same as @calloc@ with memory alignment.
\end{itemize}

\item
Additional heap wrapper functions in \CFA creating a more usable set of allocation operations and properties.

\item
Additional query operations to access information about an allocation:
\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
\item
@malloc_alignment( addr )@ returns the alignment of the allocation pointed-to by @addr@.
If the allocation is not aligned or @addr@ is @NULL@, the minimal alignment is returned.
\item
@malloc_zero_fill( addr )@ returns a boolean result indicating if the memory pointed-to by @addr@ is allocated with zero fill, \eg by @calloc@/@cmemalign@.
\item
@malloc_size( addr )@ returns the size of the memory allocation pointed-to by @addr@.
\item
@malloc_usable_size( addr )@ returns the usable (total) size of the memory pointed-to by @addr@, \ie the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
\end{itemize}

\item
Complete, fast, and contention-free allocation statistics to help understand allocation behaviour:
\begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt]
\item
@malloc_stats()@ prints memory-allocation statistics on the file descriptor set by @malloc_stats_fd@.
\item
@malloc_info( options, stream )@ prints memory-allocation statistics as an XML string on the specified @stream@.
\item
@malloc_stats_fd( fd )@ sets the file-descriptor number for printing memory-allocation statistics (default @STDERR_FILENO@).
This file descriptor is used implicitly by @malloc_stats@ and @malloc_info@.
\end{itemize}

\item
Extensive runtime checks to validate allocation operations and identify the amount of unfreed storage at program termination.

\item
Eight different versions of the allocator: static or dynamic linking, with or without statistics or debugging.
A program may link to any of these eight versions, often without recompilation.

\item
A micro-benchmark test-suite for comparing allocators rather than relying on a suite of arbitrary programs.
These micro-benchmarks have adjustment knobs to simulate allocation patterns hard-coded into arbitrary test programs.
\end{enumerate}
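
As a brief illustration, the following sketch combines the extended allocation and query routines above; it assumes llheap's extended prototypes are visible (\eg through its replacement @malloc.h@) and elides error checking.
\begin{cfa}
size_t dim = 100;
int * ia = aalloc( dim, sizeof( int ) );				// like calloc, but not zero filled
double * da = cmemalign( 64, dim, sizeof( double ) );	// zero filled and 64-byte aligned
if ( malloc_zero_fill( da ) && malloc_alignment( da ) == 64 ) {	// query sticky properties
	da = realloc( da, 64, 2 * dim * sizeof( double ) );	// extension is zero filled and aligned
}
ia = resize( ia, 2 * dim * sizeof( int ) );				// re-purpose: fill/alignment not preserved
free( ia );  free( da );
\end{cfa}
Here, @resize@ documents that the old values are not needed, while the extended @realloc@ both grows the storage and preserves its sticky properties.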


\section{Background}

The following discussion is a quick overview of the moving pieces that affect the design of a memory allocator and its performance.
A program dynamically acquires and releases storage for a variable, called an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
Space for each allocated object comes from the dynamic-allocation zone.

A \newterm{memory allocator} contains a complex data-structure and code that manages the layout of objects in the dynamic-allocation zone.
The management goals are to make allocation/deallocation operations as fast as possible while densely packing objects to make efficient use of memory.
Objects in C/\CC cannot be moved to aid the packing process;
only adjacent free storage can be \newterm{coalesced} into larger free areas.
The allocator grows or shrinks the dynamic-allocation zone to obtain storage for objects and reduce memory usage via operating-system calls, such as @mmap@ or @sbrk@ in UNIX.


\subsection{Allocator Components}
\label{s:AllocatorComponents}

Figure~\ref{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
The \newterm{management data} is a data structure located at a known memory address and contains fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
For multi-threaded programs, additional management data may exist in \newterm{thread-local storage} (TLS) for each kernel thread executing the program.
The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
Allocated objects (light grey) are variable sized, and are allocated and maintained by the program;
\ie only the program knows the location of allocated storage, not the memory allocator.
Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
Reserved memory (dark grey) is one or more blocks of memory obtained from the \newterm{operating system} (OS) but not yet allocated to the program;
if there are multiple reserved blocks, they are also chained together.

\begin{figure}
\centering
\input{AllocatorComponents}
\caption{Allocator Components (Heap)}
\label{f:AllocatorComponents}
\end{figure}

In many allocator designs, allocated objects and reserved blocks have management data embedded within them (see also Section~\ref{s:ObjectContainers}).
Figure~\ref{f:AllocatedObject} shows an allocated object with a header, trailer, and optional spacing around the object.
The header contains information about the object, \eg size, type, \etc
The trailer may be used to simplify coalescing and/or for security purposes to mark the end of an object.
An object may be preceded by padding to ensure proper alignment.
Some algorithms quantize allocation requests, resulting in additional space after an object smaller than the quantized value.
% The buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management-area, where each bucket is a top point to the freed objects of that size.
When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.

A free object often contains management data, \eg size, pointers, \etc
Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
For internal chaining, the amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, allocation requests less than 16 bytes are rounded up.
The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new program data and/or management information.
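
To make the embedded management data concrete, the following is a minimal sketch of a header and an internally chained free node; the names, layout, and flag encoding are hypothetical, not llheap's actual representation.
\begin{cfa}
#include <stddef.h>
struct Header {						// management data preceding each allocated object
	size_t size;					// allocation size; low-order bits could encode zero-fill/alignment flags
};
struct FreeNode {					// overlays the storage of a freed object
	struct Header header;			// header information is retained while the object is free
	struct FreeNode * next;			// free-list link placed in the unused object storage
};
// internal chaining: requests smaller than sizeof(struct FreeNode *) are rounded up so the link always fits
\end{cfa}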

\begin{figure}
\centering
\input{AllocatedObject}
\caption{Allocated Object}
\label{f:AllocatedObject}
\end{figure}


\subsection{Single-Threaded Memory-Allocator}
\label{s:SingleThreadedMemoryAllocator}

A single-threaded memory-allocator does not run any threads itself, but is used by a single-threaded program.
Because the memory allocator is only executed by a single thread, concurrency issues do not exist.
The primary issues in designing a single-threaded memory-allocator are fragmentation and locality.


\subsubsection{Fragmentation}
\label{s:Fragmentation}

Fragmentation is memory requested from the OS but not used by the program;
hence, allocated objects are not fragmentation.
Figure~\ref{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.

\begin{figure}
\centering
\input{IntExtFragmentation}
\caption{Internal and External Fragmentation}
\label{f:InternalExternalFragmentation}
\end{figure}

\newterm{Internal fragmentation} is memory space that is allocated to the program, but is not intended to be accessed by the program, such as headers, trailers, padding, and spacing around an allocated object.
Internal fragmentation is problematic when management space is a significant proportion of an allocated object, \eg for small objects ($<$16 bytes), memory usage is doubled.
An allocator should strive to keep internal management information to a minimum.

\newterm{External fragmentation} is all memory space reserved from the OS but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
This memory is problematic in two ways: heap blowup and highly fragmented memory.
\newterm{Heap blowup} occurs when freed memory cannot be reused for future allocations, leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become too small to service requests.
% Figure~\ref{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
Heap blowup can occur due to allocator policies that are too restrictive in reusing freed memory (the allocated size cannot use a larger free block) and/or no coalescing of free storage.
% Blocks of free memory become smaller and non-contiguous making them less useful in serving allocation requests.
% Memory is highly fragmented when most free blocks are unusable because of their sizes.
% For example, Figure~\ref{f:Contiguous} and Figure~\ref{f:HighlyFragmented} have the same quantity of external fragmentation, but Figure~\ref{f:HighlyFragmented} is highly fragmented.
% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the OS.

% \begin{figure}
% \centering
% \input{MemoryFragmentation}
% \caption{Memory Fragmentation}
% \label{f:MemoryFragmentation}
% \vspace{10pt}
% \subfloat[Contiguous]{
% \input{ContigFragmentation}
% \label{f:Contiguous}
% } % subfloat
% \subfloat[Highly Fragmented]{
% \input{NonContigFragmentation}
% \label{f:HighlyFragmented}
% } % subfloat
% \caption{Fragmentation Quality}
% \label{f:FragmentationQuality}
% \end{figure}

For a single-threaded memory allocator, three basic approaches for controlling fragmentation are identified~\cite{Johnstone99}.
The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
Any storage larger than the request can become spacing after the object or be split into a smaller free object.
% The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.

The second approach is a \newterm{segregated} or \newterm{binning algorithm} with a set of lists for different sized freed objects.
When an object is allocated, the requested size is rounded up to the nearest bin-size, often leading to spacing after the object.
A binning algorithm is fast at finding free memory of the appropriate size and allocating it, since the first free object on the free list is used.
The fewer bin sizes, the fewer lists need to be searched and maintained;
however, unusable space after an object increases, leading to more internal fragmentation.
The more bin sizes, the longer the search and the less likely a matching free object is found, leading to more external fragmentation and potentially heap blowup.
A variation of the binning algorithm allows objects to be allocated from larger bin sizes when the matching bin is empty, and the freed object can be returned to the matching or larger bin (there are advantages to either scheme).
% For example, with bin sizes of 8 and 16 bytes, a request for 12 bytes allocates only 12 bytes, but when the object is freed, it is placed on the 8-byte bin-list.
% For subsequent requests, the bin free-lists contain objects of different sizes, ranging from one bin-size to the next (8-16 in this example), and a sequential-fit algorithm may be used to find an object large enough for the requested size on the associated bin list.
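
For example, with power-of-two bin sizes, the size-to-bin mapping is a few instructions; this sketch assumes a hypothetical minimum bin of 16 bytes.
\begin{cfa}
#include <stddef.h>
enum { MinBin = 16 };					// hypothetical smallest bin size
static inline unsigned int bin( size_t request ) {
	size_t size = MinBin;
	unsigned int b = 0;
	while ( size < request ) { size <<= 1; b += 1; }	// bin sizes: 16, 32, 64, ...
	return b;							// allocation takes the first object on free list b
}
\end{cfa}
The spacing after an object is then the difference between its bin size and the request, which is the internal-fragmentation cost of fewer, coarser bins noted above.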

The third approach is \newterm{splitting} and \newterm{coalescing algorithms}.
When an object is allocated, if there are no free objects of the requested size, a larger free object is split into two smaller objects to satisfy the allocation request rather than obtaining more memory from the OS.
For example, in the \newterm{buddy system}, a block of free memory is split into equal chunks, one of those chunks is again split, and so on until a minimal block is created that fits the requested object.
When an object is deallocated, it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger block.
Coalescing can be done eagerly at each deallocation or lazily when an allocation cannot be fulfilled.
In all cases, coalescing increases allocation latency;
hence, some allocations can cause unbounded delays.
While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
% Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
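
For the buddy system specifically, the split/coalesce partner is computed from an object's address rather than searched for; this sketch assumes the managed area starts at a size-aligned @base@ and block sizes are powers of two.
\begin{cfa}
#include <stddef.h>
#include <stdint.h>
// address of the buddy of the block at addr with the given power-of-two size
static inline void * buddy( void * base, void * addr, size_t size ) {
	uintptr_t offset = (uintptr_t)addr - (uintptr_t)base;
	return (char *)base + (offset ^ size);	// flip the bit selecting which half of the parent block
}
// deallocation: while the buddy is also free and the same size, merge the two blocks and double size
\end{cfa}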


\subsubsection{Locality}
\label{s:Locality}

The principle of locality recognizes that programs tend to reference a small set of data, called a \newterm{working set}, for a certain period of time, composed of temporal and spatial accesses~\cite{Denning05}.
% Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
% Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
Hardware takes advantage of the working set through multiple levels of caching, \ie the memory hierarchy.
% When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
For example, entire cache lines are transferred between cache and memory, and entire virtual-memory pages are transferred between memory and disk.
% A program exhibiting good locality has better performance due to fewer cache misses and page faults\footnote{With the advent of large RAM memory, paging is becoming less of an issue in modern programming.}.

Temporal locality is largely controlled by how a program accesses its variables~\cite{Feng05}.
Nevertheless, a memory allocator can have some indirect influence on temporal locality and largely dictates spatial locality.
For temporal locality, an allocator can return storage for new allocations that was just freed, as these memory locations are still \emph{warm} in the memory hierarchy.
For spatial locality, an allocator can place objects used together close together in memory, so the working set of the program fits into the fewest possible cache lines and pages.
% However, usage patterns are different for every program as is the underlying hardware memory architecture;
% hence, no general-purpose memory-allocator can provide ideal locality for every program on every computer.

There are a number of ways a memory allocator can degrade locality by increasing the working set.
For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request, \eg the sequential-fit algorithm, which can perturb the program's memory hierarchy causing multiple cache or page misses~\cite{Grunwald93}.
Another way locality can be degraded is by spatially separating related data.
For example, in a binning allocator, objects of different sizes are allocated from different bins that may be located in different pages of memory.


\subsection{Multi-Threaded Memory-Allocator}
\label{s:MultiThreadedMemoryAllocator}

A multi-threaded memory-allocator does not run any threads itself, but is used by a multi-threaded program.
In addition to the single-threaded design issues of fragmentation and locality, a multi-threaded allocator is simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup.


\subsubsection{Mutual Exclusion}
\label{s:MutualExclusion}

\newterm{Mutual exclusion} provides sequential access to the shared-management data of the heap.
There are two performance issues for mutual exclusion.
First is the overhead necessary to perform (at least) a hardware atomic operation every time a shared resource is accessed.
Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
Contention can be reduced in a number of ways:
1) Using multiple fine-grained locks versus a single lock to spread the contention across a number of locks.
2) Using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff.
3) Using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
However, all of these approaches have degenerate cases when program contention is high, and that contention originates outside of the allocator.
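
A sketch of the second approach follows: the thread makes a non-blocking attempt on its heap's lock and, if the lock is busy, trades space for time by carving new storage; @heap_malloc@ and @fresh_storage@ are hypothetical helpers.
\begin{cfa}
#include <pthread.h>
#include <stddef.h>
struct Heap { pthread_mutex_t lock; /* free lists, reserved memory */ };
extern void * heap_malloc( struct Heap * heap, size_t size );	// hypothetical: allocate holding the lock
extern void * fresh_storage( size_t size );						// hypothetical: carve unshared storage

void * alloc( struct Heap * heap, size_t size ) {
	if ( pthread_mutex_trylock( &heap->lock ) == 0 ) {	// non-blocking attempt
		void * obj = heap_malloc( heap, size );
		pthread_mutex_unlock( &heap->lock );
		return obj;
	}
	return fresh_storage( size );		// lock busy: avoid waiting at the cost of extra storage
}
\end{cfa}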


\subsubsection{False Sharing}
\label{s:FalseSharing}

False sharing is a dynamic phenomenon leading to cache thrashing.
When two or more threads on separate CPUs simultaneously change different objects sharing a cache line, each change invalidates the cache line on the other CPU, even though these threads may be uninterested in the other modified object.
False sharing can occur in three different ways: program induced, allocator-induced active, and allocator-induced passive;
a memory allocator can only affect the latter two.

Specifically, assume two objects, O$_1$ and O$_2$, sharing a cache line, and two threads, T$_1$ and T$_2$.
\newterm{Program-induced false-sharing} occurs when T$_1$ passes a reference to O$_2$ to T$_2$, and then T$_1$ modifies O$_1$ while T$_2$ modifies O$_2$.
% Figure~\ref{f:ProgramInducedFalseSharing} shows when Thread$_1$ passes Object$_2$ to Thread$_2$, a false-sharing situation forms when Thread$_1$ modifies Object$_1$ and Thread$_2$ modifies Object$_2$.
% Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
% \begin{figure}
% \centering
% \subfloat[Program-Induced False-Sharing]{
% \input{ProgramFalseSharing}
% \label{f:ProgramInducedFalseSharing}
% } \\
% \vspace{5pt}
% \subfloat[Allocator-Induced Active False-Sharing]{
% \input{AllocInducedActiveFalseSharing}
% \label{f:AllocatorInducedActiveFalseSharing}
% } \\
% \vspace{5pt}
% \subfloat[Allocator-Induced Passive False-Sharing]{
% \input{AllocInducedPassiveFalseSharing}
% \label{f:AllocatorInducedPassiveFalseSharing}
% } subfloat
% \caption{False Sharing}
% \label{f:FalseSharing}
% \end{figure}
\newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when O$_1$ and O$_2$ are heap allocated and their references are passed to T$_1$ and T$_2$, which modify the objects.
% For example, in Figure~\ref{f:AllocatorInducedActiveFalseSharing}, each thread allocates an object and loads a cache-line of memory into its associated cache.
% Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
\newterm{Allocator-induced passive false-sharing}\label{s:AllocatorInducedPassiveFalseSharing} occurs
% is another form of allocator-induced false-sharing caused by program-induced false-sharing.
% When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing.
when T$_1$ passes O$_2$ to T$_2$, T$_2$ subsequently deallocates O$_2$, and then O$_2$'s storage is reallocated to T$_2$ while T$_1$ is still using O$_1$.
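
A minimal sketch of the allocator-induced active case: two small objects allocated back-to-back may land in one cache line, so writes by different threads thrash that line even though the data is logically disjoint.
\begin{cfa}
#include <pthread.h>
#include <stdlib.h>
enum { Times = 100000000 };
int * o1, * o2;							// may share a cache line if allocated adjacently

void * work1( void * arg ) { for ( int i = 0; i < Times; i += 1 ) *o1 += 1; return NULL; }
void * work2( void * arg ) { for ( int i = 0; i < Times; i += 1 ) *o2 += 1; return NULL; }

int main() {
	o1 = malloc( sizeof( int ) );  *o1 = 0;		// adjacent small allocations
	o2 = malloc( sizeof( int ) );  *o2 = 0;
	pthread_t t1, t2;
	pthread_create( &t1, NULL, work1, NULL );
	pthread_create( &t2, NULL, work2, NULL );
	pthread_join( t1, NULL );  pthread_join( t2, NULL );
}
\end{cfa}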


\subsubsection{Heap Blowup}
\label{s:HeapBlowup}

In a multi-threaded program, heap blowup can occur when memory freed by one thread is inaccessible to other threads due to the allocation strategy.
Specific examples are presented in later subsections.


\subsection{Multi-Threaded Memory-Allocator Features}
\label{s:MultiThreadedMemoryAllocatorFeatures}

The following features are used in the construction of multi-threaded memory-allocators: multiple heaps, user-level threading, ownership, object containers, allocation buffer, and lock-free operations.
The multiple-heaps feature pertains to different kinds of heaps;
the object-containers feature pertains to the organization of objects within the storage area;
the remaining features apply to different parts of the allocator design or implementation.


\subsubsection{Multiple Heaps}
\label{s:MultipleHeaps}

A multi-threaded allocator has potentially multiple threads and heaps.
The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
The spectrum ranges from multiple threads using a single heap, denoted as T:1, to multiple threads sharing multiple heaps, denoted as T:H, to one thread per heap, denoted as 1:1, which is almost back to a single-threaded allocator.

\begin{figure}
\centering
\subfloat[T:1]{
% \input{SingleHeap.pstex_t}
\input{SingleHeap}
\label{f:SingleHeap}
} % subfloat
\vrule
\subfloat[T:H]{
% \input{MultipleHeaps.pstex_t}
\input{SharedHeaps}
\label{f:SharedHeaps}
} % subfloat
\vrule
\subfloat[1:1]{
% \input{MultipleHeapsGlobal.pstex_t}
\input{PerThreadHeap}
\label{f:PerThreadHeap}
} % subfloat
\caption{Multiple Heaps, Thread:Heap Relationship}
\end{figure}
628 | |
---|
629 | \paragraph{T:1 model (see Figure~\ref{f:SingleHeap})} where all threads allocate and deallocate objects from one heap. |
---|
630 | Memory is obtained from the freed objects, or reserved memory in the heap, or from the OS; |
---|
631 | the heap may also return freed memory to the OS. |
---|
632 | The arrows indicate the direction memory conceptually moves for each kind of operation: allocation moves memory along the path from the heap/operating-system to the user application, while deallocation moves memory along the path from the application back to the heap/operating-system. |
---|
633 | To safely handle concurrency, a single lock may be used for all heap operations or fine-grained locking for different operations. |
---|
634 | Regardless, a single heap may be a significant source of contention for programs with a large amount of memory allocation. |
---|
635 | |
---|
636 | \paragraph{T:H model (see Figure~\ref{f:SharedHeaps})} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps. |
---|
637 | The decision on when to create a new heap and which heap a thread allocates from depends on the allocator design. |
---|
638 | To determine which heap to access, each thread must point to its associated heap in some way. |
---|
639 | The performance goal is to reduce the ratio of heaps to threads. |
---|
640 | However, the worse case can result in more heaps than threads, \eg if the number of threads is large at startup with many allocations creating a large number of heaps and then the number of threads reduces. |
---|
641 | Locking is required, since more than one thread may concurrently access a heap during its lifetime, but contention is reduced because fewer threads access a specific heap. |
---|
642 | |
---|
643 | % For example, multiple heaps are managed in a pool, starting with a single or a fixed number of heaps that increase\-/decrease depending on contention\-/space issues. |
---|
644 | % At creation, a thread is associated with a heap from the pool. |
---|
645 | % In some implementations of this model, when the thread attempts an allocation and its associated heap is locked (contention), it scans for an unlocked heap in the pool. |
---|
646 | % If an unlocked heap is found, the thread changes its association and uses that heap. |
---|
647 | % If all heaps are locked, the thread may create a new heap, use it, and then place the new heap into the pool; |
---|
648 | % or the thread can block waiting for a heap to become available. |
---|
649 | % While the heap-pool approach often minimizes the number of extant heaps, the worse case can result in more heaps than threads; |
---|
650 | % \eg if the number of threads is large at startup with many allocations creating a large number of heaps and then the number of threads reduces. |
---|
651 | |
---|
652 | % Threads using multiple heaps need to determine the specific heap to access for an allocation/deallocation, \ie association of thread to heap. |
---|
653 | % A number of techniques are used to establish this association. |
---|
654 | % The simplest approach is for each thread to have a pointer to its associated heap (or to administrative information that points to the heap), and this pointer changes if the association changes. |
---|
655 | % For threading systems with thread-local storage, the heap pointer is created using this mechanism; |
---|
656 | % otherwise, the heap routines must simulate thread-local storage using approaches like hashing the thread's stack-pointer or thread-id to find its associated heap. |
---|
657 | |
---|
658 | % The storage management for multiple heaps is more complex than for a single heap (see Figure~\ref{f:AllocatorComponents}). |
---|
659 | % Figure~\ref{f:MultipleHeapStorage} illustrates the general storage layout for multiple heaps. |
---|
660 | % Allocated and free objects are labelled by the thread or heap they are associated with. |
---|
661 | % (Links between free objects are removed for simplicity.) |
---|
662 | The management information for multiple heaps in the static zone must be able to locate all heaps. |
---|
663 | The management information for the heaps must reside in the dynamic-allocation zone if there are a variable number. |
---|
664 | Each heap in the dynamic zone is composed of a list of free objects and a pointer to its reserved memory. |
---|
665 | An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory. |
---|
666 | Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur. |
---|
667 | Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the OS. |
---|
668 | |
---|
669 | % \begin{figure} |
---|
670 | % \centering |
---|
671 | % \input{MultipleHeapsStorage} |
---|
672 | % \caption{Multiple-Heap Storage} |
---|
673 | % \label{f:MultipleHeapStorage} |
---|
674 | % \end{figure} |
---|
675 | |
---|
676 | Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup. |
---|
677 | The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory. |
---|
678 | Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the OS (see Section~\ref{s:Ownership}). |
---|
679 | Returning storage to the OS may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix. |
---|
680 | % In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused. |
---|
681 | |
---|
682 | Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space. |
---|
683 | Now, each heap obtains and returns storage to/from the global heap rather than the OS. |
---|
684 | Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold. |
---|
685 | Similarly, the global heap buffers this memory, obtaining and returning storage to/from the OS as necessary. |
---|
686 | The global heap does not have its own thread and makes no internal allocation requests; |
---|
687 | instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations. |
---|
688 | Hence, the worst-case cost of a memory operation includes all these steps. |
---|
689 | With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the OS to achieve the same goal and is independent of the mechanism used by the OS to present dynamic memory to an address space. |
---|
690 | However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking. |
---|
691 | A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention. |
---|
692 | In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap. |
---|
693 | |
---|
694 | \paragraph{1:1 model (see Figure~\ref{f:PerThreadHeap})} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see Section~\ref{s:Ownership}). |
---|
695 | An additional benefit of thread heaps is improved locality due to better memory layout. |
---|
696 | As each thread only allocates from its heap, all objects are consolidated in the storage area for that heap, better utilizing each CPUs cache and accessing fewer pages. |
---|
697 | In contrast, the T:H model spreads each thread's objects over a larger area in different heaps. |
---|
698 | Thread heaps can also eliminate allocator-induced active false-sharing, if memory is acquired so it does not overlap at crucial boundaries with memory for another thread's heap. |
---|
699 | For example, assume page boundaries coincide with cache line boundaries, if a thread heap always acquires pages of memory then no two threads share a page or cache line unless pointers are passed among them. |
---|
700 | % Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps. |
---|
701 | |
---|
702 | When a thread terminates, there are two options for handling its thread heap. |
---|
703 | First is to free all objects in the thread heap to the global heap and destroy the thread heap. |
---|
704 | Second is to place the thread heap on a list of available heaps and reuse it for a new thread in the future. |
---|
705 | Destroying the thread heap immediately may reduce external fragmentation sooner, since all free objects are freed to the global heap and may be reused by other threads. |
---|
Alternatively, reusing thread heaps may improve performance if the inheriting thread makes allocation requests similar to those of the thread that previously held the thread heap, because any unfreed storage is immediately accessible.
---|
707 | |
---|
708 | |
---|
709 | \subsubsection{User-Level Threading} |
---|
710 | |
---|
711 | It is possible to use any of the heap models with user-level (M:N) threading. |
---|
However, an important goal of user-level threading is fast operations (creation/termination/context-switching) achieved by not interacting with the OS, which makes it possible to create large numbers of high-performance interacting threads ($>$ 10,000).
---|
It is difficult to retain this goal if the user-threading model is directly involved with the heap model.
---|
714 | Figure~\ref{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime. |
---|
715 | Hence, a user thread allocates/deallocates from/to the heap of the kernel thread on which it is currently executing. |
---|
716 | |
---|
717 | \begin{figure} |
---|
718 | \centering |
---|
719 | \input{UserKernelHeaps} |
---|
720 | \caption{User-Level Kernel Heaps} |
---|
721 | \label{f:UserLevelKernelHeaps} |
---|
722 | \end{figure} |
---|
723 | |
---|
724 | Adopting user threading results in a subtle problem with shared heaps. |
---|
725 | With kernel threading, an operation started by a kernel thread is always completed by that thread. |
---|
726 | For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap, even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption. |
---|
727 | However, this correctness property is not preserved for user-level threading. |
---|
728 | A user thread can start an allocation/deallocation on one kernel thread, be preempted (time slice), and continue running on a different kernel thread to complete the operation~\cite{Dice02}. |
---|
729 | When the user thread continues on the new kernel thread, it may have pointers into the previous kernel-thread's heap and hold locks associated with it. |
---|
730 | To get the same kernel-thread safety, time slicing must be disabled/\-enabled around these operations, so the user thread cannot jump to another kernel thread. |
---|
However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because the cost is paid on every memory operation even though preemption is infrequent (milliseconds).
---|
732 | Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically. |
---|
Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in starvation;
---|
734 | techniques like rolling forward the preemption to the next context switch can be used. |
---|
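To make the mechanism concrete, the following sketch shows a lazy variant: a per-thread flag brackets the memory operation, and the time-slice (signal) handler defers the preemption when the flag is set, rolling it forward once the operation completes.
The flag names, and the use of @malloc@ and @sched_yield@ as stand-ins for the runtime's allocation and context-switch operations, are illustrative assumptions rather than a particular runtime's implementation.
\begin{lstlisting}[language=C]
#include <stdbool.h>
#include <stdlib.h>
#include <sched.h>

static __thread volatile bool inMemoryOp = false;      // operation in progress?
static __thread volatile bool deferredPreempt = false; // preemption arrived meanwhile?

void * alloc_protected( size_t size ) {
	inMemoryOp = true;                // plain store: visible to same-thread signal handler
	void * addr = malloc( size );     // the protected memory operation
	inMemoryOp = false;
	if ( deferredPreempt ) {          // roll the deferred time-slice forward
		deferredPreempt = false;
		sched_yield();
	}
	return addr;
}
// In the time-slice (signal) handler:
//   if ( inMemoryOp ) { deferredPreempt = true; return; }  // abort the preemption
//   ... otherwise perform the user-thread context switch ...
\end{lstlisting}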
735 | |
---|
736 | |
---|
737 | \subsubsection{Ownership} |
---|
738 | \label{s:Ownership} |
---|
739 | |
---|
\newterm{Ownership} defines which heap an object is returned to on deallocation.
---|
If a thread returns an object to the heap from which it was originally allocated, a heap has ownership of its objects.
---|
742 | Alternatively, a thread can return an object to the heap it is currently associated with, which can be any heap accessible during a thread's lifetime. |
---|
743 | Figure~\ref{f:HeapsOwnership} shows an example of multiple heaps (minus the global heap) with and without ownership. |
---|
744 | Again, the arrows indicate the direction memory conceptually moves for each kind of operation. |
---|
For the 1:1 thread:heap relationship, a thread only allocates from its own heap, and without ownership, a thread only frees objects to its own heap, which means the heap is private to its owner thread and does not require any locking; such a heap is called a \newterm{private heap}.
For the T:1/T:H models with or without ownership or the 1:1 model with ownership, a thread may free objects to different heaps, which makes each heap publicly accessible to all threads; such a heap is called a \newterm{public heap}.
---|
747 | |
---|
748 | \begin{figure} |
---|
749 | \centering |
---|
750 | \subfloat[Ownership]{ |
---|
751 | \input{MultipleHeapsOwnership} |
---|
752 | } % subfloat |
---|
753 | \hspace{0.25in} |
---|
754 | \subfloat[No Ownership]{ |
---|
755 | \input{MultipleHeapsNoOwnership} |
---|
756 | } % subfloat |
---|
757 | \caption{Heap Ownership} |
---|
758 | \label{f:HeapsOwnership} |
---|
759 | \end{figure} |
---|
760 | |
---|
761 | % Figure~\ref{f:MultipleHeapStorageOwnership} shows the effect of ownership on storage layout. |
---|
762 | % (For simplicity, assume the heaps all use the same size of reserves storage.) |
---|
763 | % In contrast to Figure~\ref{f:MultipleHeapStorage}, each reserved area used by a heap only contains free storage for that particular heap because threads must return free objects back to the owner heap. |
---|
764 | % Passive false-sharing may still occur, if delayed ownership is used (see below). |
---|
765 | |
---|
766 | % \begin{figure} |
---|
767 | % \centering |
---|
768 | % \input{MultipleHeapsOwnershipStorage.pstex_t} |
---|
769 | % \caption{Multiple-Heap Storage with Ownership} |
---|
770 | % \label{f:MultipleHeapStorageOwnership} |
---|
771 | % \end{figure} |
---|
772 | |
---|
773 | The main advantage of ownership is preventing heap blowup by returning storage for reuse by the owner heap. |
---|
Ownership prevents the classical problem where one thread allocates an object from its heap, passes it to another thread, and the receiving thread deallocates it to a different heap, hence draining the initial heap of storage.
---|
775 | Because multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur. |
---|
776 | The exception is for the 1:1 model if reserved memory does not overlap a cache-line because all allocated storage within a used area is associated with a single thread. |
---|
777 | In this case, there is no allocator-induced active false-sharing because two adjacent allocated objects used by different threads cannot share a cache-line. |
---|
Finally, there is no allocator-induced passive false-sharing because free objects are returned to the owner heap, so two adjacent allocated objects cannot end up being used by different threads.
---|
779 | % For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, the deallocation by Thread$_2$ returns Object$_2$ back to Thread$_1$'s heap; |
---|
780 | % hence a subsequent allocation by Thread$_2$ cannot return this storage. |
---|
The disadvantage of ownership is that deallocation may be to another thread's heap, so heaps are no longer private and require locks to provide safe concurrent access.
---|
782 | |
---|
783 | Object ownership can be immediate or delayed, meaning free objects may be batched on a separate free list either by the returning or receiving thread. |
---|
While the returning thread can batch objects, batching across multiple heaps is complex and there is no obvious time to push a batch back to the owner heap.
---|
It is better for a returning thread to add objects immediately to the receiving thread's batch list, as the receiving thread has better knowledge of when to incorporate the batch list into its free pool.
---|
786 | Batching leverages the fact that most allocation patterns use the contention-free fast-path, so locking on the batch list is rare for both the returning and receiving threads. |
---|
Finally, it is possible for heaps to temporarily steal owned objects, reallocating them rather than returning them immediately.
---|
788 | It is unclear whether the complexity of this approach is worthwhile. |
---|
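As an illustration of receiver-side batching, the following sketch shows a returning thread pushing a foreign object onto the owner heap's lock-protected batch list, and the owner later splicing the whole batch into its unlocked free pool.
The types and function names are hypothetical, and the heap is simplified to a single free list.
\begin{lstlisting}[language=C]
#include <pthread.h>
#include <stddef.h>

typedef struct FreeObject { struct FreeObject * next; } FreeObject;

typedef struct Heap {
	pthread_mutex_t batchLock;   // protects batchList only
	FreeObject * batchList;      // objects returned by non-owner threads
	FreeObject * freeList;       // owner-only free pool, no locking
} Heap;

void remoteFree( Heap * owner, FreeObject * obj ) {  // called by returning thread
	pthread_mutex_lock( &owner->batchLock );
	obj->next = owner->batchList;
	owner->batchList = obj;
	pthread_mutex_unlock( &owner->batchLock );
}

void incorporateBatch( Heap * heap ) {  // called by owner at a convenient time
	pthread_mutex_lock( &heap->batchLock );
	FreeObject * batch = heap->batchList;
	heap->batchList = NULL;
	pthread_mutex_unlock( &heap->batchLock );
	while ( batch != NULL ) {            // splice the batch onto the free pool
		FreeObject * next = batch->next;
		batch->next = heap->freeList;
		heap->freeList = batch;
		batch = next;
	}
}
\end{lstlisting}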
789 | % However, stealing can result in passive false-sharing. |
---|
790 | % For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially. |
---|
791 | % If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur. |
---|
792 | |
---|
793 | For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps.% (see~Figure~\ref{f:HybridPrivatePublicHeap}). |
---|
794 | The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup. |
---|
795 | In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap. |
---|
796 | Similarly, a thread first deallocates an object to its private heap, and second to the public heap. |
---|
797 | Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps. |
---|
798 | % Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap. |
---|
Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or to batch objects for an owner heap into a single return.
---|
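A minimal sketch of the hybrid allocation order follows, where @privateAlloc@, @publicAlloc@, and @globalAlloc@ are hypothetical placeholders for the private-heap, public-heap, and global-heap operations.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Heap Heap;                // opaque heap type
void * privateAlloc( Heap *, size_t );   // hypothetical: lock-free private heap
void * publicAlloc( Heap *, size_t );    // hypothetical: locked public heap
void * globalAlloc( size_t );            // hypothetical: shared global heap

typedef struct Thread { Heap * privateHeap, * publicHeap; } Thread;

void * hybridAlloc( Thread * t, size_t size ) {
	void * addr = privateAlloc( t->privateHeap, size );           // first: no locking
	if ( addr == NULL ) addr = publicAlloc( t->publicHeap, size ); // second: locked
	if ( addr == NULL ) addr = globalAlloc( size );               // last: global heap
	return addr;
}
\end{lstlisting}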
800 | |
---|
801 | % \begin{figure} |
---|
802 | % \centering |
---|
803 | % \input{PrivatePublicHeaps.pstex_t} |
---|
804 | % \caption{Hybrid Private/Public Heap for Per-thread Heaps} |
---|
805 | % \label{f:HybridPrivatePublicHeap} |
---|
806 | % \vspace{10pt} |
---|
807 | % \input{RemoteFreeList.pstex_t} |
---|
808 | % \caption{Remote Free-List} |
---|
809 | % \label{f:RemoteFreeList} |
---|
810 | % \end{figure} |
---|
811 | |
---|
812 | % As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified. |
---|
813 | % For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}. |
---|
814 | % To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage. |
---|
815 | % Since the remote free-list is occasionally cleared during an allocation, this adds to that cost. |
---|
816 | % Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object. |
---|
817 | |
---|
818 | % If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking. |
---|
819 | % In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur. |
---|
820 | % If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations. |
---|
821 | |
---|
822 | % The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator. |
---|
823 | % Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used. |
---|
824 | % In many case, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps. |
---|
825 | % For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach. |
---|
826 | |
---|
827 | |
---|
828 | \begin{figure} |
---|
829 | \centering |
---|
830 | \subfloat[Object Headers]{ |
---|
831 | \input{ObjectHeaders} |
---|
832 | \label{f:ObjectHeaders} |
---|
833 | } % subfloat |
---|
834 | \subfloat[Object Container]{ |
---|
835 | \input{Container} |
---|
836 | \label{f:ObjectContainer} |
---|
837 | } % subfloat |
---|
838 | \caption{Header Placement} |
---|
839 | \label{f:HeaderPlacement} |
---|
840 | \end{figure} |
---|
841 | |
---|
842 | |
---|
843 | \subsubsection{Object Containers} |
---|
844 | \label{s:ObjectContainers} |
---|
845 | |
---|
846 | Associating header data with every allocation can result in significant internal fragmentation, as shown in Figure~\ref{f:ObjectHeaders}. |
---|
Internal fragmentation is especially significant if the headers contain redundant data, \eg the object size may be the same for many objects because programs allocate only a small set of object sizes.
---|
848 | As well, the redundant data can result in poor cache usage, since only a portion of the cache line is holding useful data from the program's perspective. |
---|
Spatial locality can also be negatively affected, leading to poor cache locality~\cite{Feng05}.
---|
While the header and object are spatially together in memory, they are generally not accessed temporally together;
---|
851 | \eg an object is accessed by the program after it is allocated, while the header is accessed by the allocator after it is free. |
---|
852 | |
---|
853 | An alternative approach factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks}~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}. |
---|
854 | The header for the container holds information necessary for all objects in the container; |
---|
855 | a trailer may also be used at the end of the container. |
---|
856 | Similar to the approach described for thread heaps in Section~\ref{s:MultipleHeaps}, if container boundaries do not overlap with memory of another container at crucial boundaries and all objects in a container are allocated to the same thread, allocator-induced active false-sharing is avoided. |
---|
857 | |
---|
858 | The difficulty with object containers lies in finding the object header/trailer given only the object address, since that is normally the only information passed to the deallocation operation. |
---|
859 | One way is to start containers on aligned addresses in memory, then truncate the lower bits of the object address to obtain the header address (or round up and subtract the trailer size to obtain the trailer address). |
---|
860 | For example, if an object at address 0xFC28\,EF08 is freed and containers are aligned on 64\,KB (0x0001\,0000) addresses, then the container header is at 0xFC28\,0000. |
---|
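The following sketch shows this lookup as a single mask operation, assuming 64\,KB-aligned containers; the header fields are illustrative.
\begin{lstlisting}[language=C]
#include <stdint.h>
#include <stddef.h>

#define CONTAINER_SIZE 0x10000u                        // 64 KB, power of two
#define CONTAINER_MASK (~(uintptr_t)(CONTAINER_SIZE - 1))

typedef struct ContainerHeader {
	size_t objectSize;                                 // common data for all objects
	// ... owner heap, free count, etc.
} ContainerHeader;

// truncate the low-order bits of any object address to reach the header
static inline ContainerHeader * containerHeader( void * object ) {
	return (ContainerHeader *)((uintptr_t)object & CONTAINER_MASK);
}
\end{lstlisting}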
861 | |
---|
Normally, a container has homogeneous objects, \eg the same object size and ownership.
---|
863 | This approach greatly reduces internal fragmentation since far fewer headers are required, and potentially increases spatial locality as a cache line or page holds more objects since the objects are closer together. |
---|
However, different-sized objects are further apart in separate containers.
---|
865 | Depending on the program, this may or may not improve locality. |
---|
866 | If the program uses several objects from a small number of containers in its working set, then locality is improved since fewer cache lines and pages are required. |
---|
867 | If the program uses many containers, there is poor locality, as both caching and paging increase. |
---|
Another drawback is that external fragmentation may be increased since containers reserve space for objects that may never be allocated, \ie there are often multiple, only partially full, containers for each size.
---|
869 | However, external fragmentation can be reduced by using small containers. |
---|
870 | |
---|
Containers with heterogeneous objects imply different headers describing them, which complicates the problem of locating a specific header solely by an address.
---|
872 | A couple of solutions can be used to implement containers with heterogeneous objects. |
---|
873 | However, the problem with allowing objects of different sizes is that the number of objects, and therefore headers, in a single container is unpredictable. |
---|
874 | One solution allocates headers at one end of the container, while allocating objects from the other end of the container; |
---|
875 | when the headers meet the objects, the container is full. |
---|
876 | Freed objects cannot be split or coalesced since this causes the number of headers to change. |
---|
877 | The difficulty in this strategy remains in finding the header for a specific object; |
---|
878 | in general, a search is necessary to find the object's header among the container headers. |
---|
879 | A second solution combines the use of container headers and individual object headers. |
---|
880 | Each object header stores the object's heterogeneous information, such as its size, while the container header stores the homogeneous information, such as the owner when using ownership. |
---|
881 | This approach allows containers to hold different types of objects, but does not completely separate headers from objects. |
---|
882 | % The benefit of the container in this case is to reduce some redundant information that is factored into the container header. |
---|
883 | |
---|
884 | % In summary, object containers trade off internal fragmentation for external fragmentation by isolating common administration information to remove/reduce internal fragmentation, but at the cost of external fragmentation as some portion of a container may not be used and this portion is unusable for other kinds of allocations. |
---|
885 | % A consequence of this tradeoff is its effect on spatial locality, which can produce positive or negative results depending on program access-patterns. |
---|
886 | |
---|
887 | |
---|
888 | \paragraph{Container Ownership} |
---|
889 | \label{s:ContainerOwnership} |
---|
890 | |
---|
891 | Without ownership, objects in a container are deallocated to the heap currently associated with the thread that frees the object. |
---|
892 | Thus, different objects in a container may be on different heap free-lists. % (see Figure~\ref{f:ContainerNoOwnershipFreelist}). |
---|
893 | With ownership, all objects in a container belong to the same heap, |
---|
894 | % (see Figure~\ref{f:ContainerOwnershipFreelist}), |
---|
895 | so ownership of an object is determined by the container owner. |
---|
896 | If multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur. |
---|
897 | Only with the 1:1 model and ownership is active and passive false-sharing avoided (see Section~\ref{s:Ownership}). |
---|
898 | Passive false-sharing may still occur, if delayed ownership is used. |
---|
899 | Finally, a completely free container can become reserved storage and be reset to allocate objects of a new size or freed to the global heap. |
---|
900 | |
---|
901 | % \begin{figure} |
---|
902 | % \centering |
---|
903 | % \subfloat[No Ownership]{ |
---|
904 | % \input{ContainerNoOwnershipFreelist} |
---|
905 | % \label{f:ContainerNoOwnershipFreelist} |
---|
906 | % } % subfloat |
---|
907 | % \vrule |
---|
908 | % \subfloat[Ownership]{ |
---|
909 | % \input{ContainerOwnershipFreelist} |
---|
910 | % \label{f:ContainerOwnershipFreelist} |
---|
911 | % } % subfloat |
---|
912 | % \caption{Free-list Structure with Container Ownership} |
---|
913 | % \end{figure} |
---|
914 | |
---|
When a container changes ownership, the ownership of all objects within it changes as well.
---|
916 | Moving a container involves moving all objects on the heap's free-list in that container to the new owner. |
---|
917 | This approach can reduce contention for the global heap, since each request for objects from the global heap returns a container rather than individual objects. |
---|
918 | |
---|
919 | Additional restrictions may be applied to the movement of containers to prevent active false-sharing. |
---|
For example, if a container changes ownership through the global heap, then a thread allocating from the newly acquired container may actively false-share with the previous owner's still-allocated objects, even though no objects are passed among threads.
---|
921 | Note, once the thread frees the object, no more false sharing can occur until the container changes ownership again. |
---|
922 | To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free. |
---|
923 | One implementation approach that increases the freedom to return a free container to the OS involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the OS. |
---|
924 | |
---|
925 | % \begin{figure} |
---|
926 | % \centering |
---|
927 | % \subfloat[]{ |
---|
928 | % \input{ContainerFalseSharing1} |
---|
929 | % \label{f:ContainerFalseSharing1} |
---|
930 | % } % subfloat |
---|
931 | % \subfloat[]{ |
---|
932 | % \input{ContainerFalseSharing2} |
---|
933 | % \label{f:ContainerFalseSharing2} |
---|
934 | % } % subfloat |
---|
935 | % \caption{Active False-Sharing using Containers} |
---|
936 | % \label{f:ActiveFalseSharingContainers} |
---|
937 | % \end{figure} |
---|
938 | |
---|
939 | Using containers with ownership increases external fragmentation since a new container for a requested object size must be allocated separately for each thread requesting it. |
---|
940 | % In Figure~\ref{f:ExternalFragmentationContainerOwnership}, using object ownership allocates 80\% more space than without ownership. |
---|
941 | |
---|
942 | % \begin{figure} |
---|
943 | % \centering |
---|
944 | % \subfloat[No Ownership]{ |
---|
945 | % \input{ContainerNoOwnership} |
---|
946 | % } % subfloat |
---|
947 | % \\ |
---|
948 | % \subfloat[Ownership]{ |
---|
949 | % \input{ContainerOwnership} |
---|
950 | % } % subfloat |
---|
951 | % \caption{External Fragmentation with Container Ownership} |
---|
952 | % \label{f:ExternalFragmentationContainerOwnership} |
---|
953 | % \end{figure} |
---|
954 | |
---|
955 | |
---|
956 | \paragraph{Container Size} |
---|
957 | \label{s:ContainerSize} |
---|
958 | |
---|
959 | One way to control the external fragmentation caused by allocating a large container for a small number of requested objects is to vary the size of the container. |
---|
960 | As described earlier, container boundaries need to be aligned on addresses that are a power of two to allow easy location of the header (by truncating lower bits). |
---|
Aligning containers in this manner also constrains the container size to a power of two.
However, the container size presents tradeoffs for the allocator.
---|
963 | |
---|
964 | The larger the container, the fewer containers are needed, and hence, the fewer headers need to be maintained in memory, improving both internal fragmentation and potentially performance. |
---|
965 | However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation. |
---|
With smaller containers, not only are there more containers, but a second problem arises: objects may be larger than the container.
---|
967 | In general, large objects, \eg greater than 64\,KB, are allocated directly from the OS and are returned immediately to the OS to reduce long-term external fragmentation. |
---|
968 | If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate. |
---|
969 | Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header. |
---|
970 | |
---|
In order to find the container header when using different-sized containers, a super container is used (see~Figure~\ref{f:SuperContainers}).
---|
972 | The super container spans several containers, contains a header with information for finding each container header, and starts on an aligned address. |
---|
973 | Super-container headers are found using the same method used to find container headers by dropping the lower bits of an object address. |
---|
974 | The containers within a super container may be different sizes or all the same size. |
---|
975 | If the containers in the super container are different sizes, then the super-container header must be searched to determine the specific container for an object given its address. |
---|
If all containers in the super container are the same size, \eg 16\,KB, then a specific container header can be found by a simple calculation.
---|
977 | The free space at the end of a super container is used to allocate new containers. |
---|
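The following sketch illustrates the simple calculation for same-size containers, assuming an illustrative 1\,MB super container holding 16\,KB containers.
\begin{lstlisting}[language=C]
#include <stdint.h>

enum { SuperSize = 1 << 20, ContainerSize = 16 * 1024 };  // illustrative sizes

// super container aligned on SuperSize; all containers ContainerSize bytes
// (sketch ignores space reserved for the super-container header itself)
static inline void * containerBase( void * object ) {
	uintptr_t super = (uintptr_t)object & ~(uintptr_t)(SuperSize - 1); // super header
	uintptr_t offset = (uintptr_t)object - super;  // offset within super container
	return (void *)(super + offset / ContainerSize * ContainerSize);
}
\end{lstlisting}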
978 | |
---|
979 | \begin{figure} |
---|
980 | \centering |
---|
981 | \input{SuperContainers} |
---|
982 | % \includegraphics{diagrams/supercontainer.eps} |
---|
983 | \caption{Super Containers} |
---|
984 | \label{f:SuperContainers} |
---|
985 | \end{figure} |
---|
986 | |
---|
987 | Minimal internal and external fragmentation is achieved by having as few containers as possible, each being as full as possible. |
---|
988 | It is also possible to achieve additional benefit by using larger containers for popular small sizes, as it reduces the number of containers with associated headers. |
---|
989 | However, this approach assumes it is possible for an allocator to determine in advance which sizes are popular. |
---|
990 | Keeping statistics on requested sizes allows the allocator to make a dynamic decision about which sizes are popular. |
---|
991 | For example, after receiving a number of allocation requests for a particular size, that size is considered a popular request size and larger containers are allocated for that size. |
---|
992 | If the decision is incorrect, larger containers than necessary are allocated that remain mostly unused. |
---|
993 | A programmer may be able to inform the allocator about popular object sizes, using a mechanism like @mallopt@, in order to select an appropriate container size for each object size. |
---|
994 | |
---|
995 | |
---|
996 | \paragraph{Container Free-Lists} |
---|
997 | \label{s:containersfreelists} |
---|
998 | |
---|
999 | The container header allows an alternate approach for managing the heap's free-list. |
---|
Rather than maintaining a global free-list throughout the heap, the containers are linked through their headers and only the local free objects within a container are linked together.
---|
1001 | Note, maintaining free lists within a container assumes all free objects in the container are associated with the same heap; |
---|
1002 | thus, this approach only applies to containers with ownership. |
---|
1003 | |
---|
1004 | This alternate free-list approach can greatly reduce the complexity of moving all freed objects belonging to a container to another heap. |
---|
1005 | To move a container using a global free-list, the free list is first searched to find all objects within the container. |
---|
These objects are then removed from the free list and linked together to form a local free-list, which moves with the container to the new heap.
---|
1007 | With local free-lists in containers, the container is simply removed from one heap's free list and placed on the new heap's free list. |
---|
1008 | Thus, when using local free-lists, the operation of moving containers is reduced from $O(N)$ to $O(1)$. |
---|
1009 | However, there is the additional storage cost in the header, which increases the header size, and therefore internal fragmentation. |
---|
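The following sketch illustrates the $O(1)$ move using intrusive, doubly-linked container headers; the types are illustrative rather than a specific allocator's.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Container {
	struct Container * prev, * next;   // intrusive links in a heap's container list
	struct FreeObject * freeList;      // local free objects stay with the container
} Container;

typedef struct Heap { Container * containers; } Heap;  // doubly-linked list head

void moveContainer( Heap * from, Heap * to, Container * c ) {
	// unlink from the old owner: O(1), no search of a global free-list
	if ( c->prev != NULL ) c->prev->next = c->next; else from->containers = c->next;
	if ( c->next != NULL ) c->next->prev = c->prev;
	// push onto the new owner: the container's free objects move with it
	c->prev = NULL;
	c->next = to->containers;
	if ( to->containers != NULL ) to->containers->prev = c;
	to->containers = c;
}
\end{lstlisting}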
1010 | |
---|
1011 | % \begin{figure} |
---|
1012 | % \centering |
---|
1013 | % \subfloat[Global Free-List Among Containers]{ |
---|
1014 | % \input{FreeListAmongContainers} |
---|
1015 | % \label{f:GlobalFreeListAmongContainers} |
---|
1016 | % } % subfloat |
---|
1017 | % \hspace{0.25in} |
---|
1018 | % \subfloat[Local Free-List Within Containers]{ |
---|
1019 | % \input{FreeListWithinContainers} |
---|
1020 | % \label{f:LocalFreeListWithinContainers} |
---|
1021 | % } % subfloat |
---|
1022 | % \caption{Container Free-List Structure} |
---|
1023 | % \label{f:ContainerFreeListStructure} |
---|
1024 | % \end{figure} |
---|
1025 | |
---|
1026 | When all objects in the container are the same size, a single free-list is sufficient. |
---|
However, when objects in the container are of different sizes, the header needs a free list for each size class when using a binning allocation algorithm, which can significantly increase the container-header size.
---|
1028 | The alternative is to use a different allocation algorithm with a single free-list, such as a sequential-fit allocation-algorithm. |
---|
1029 | |
---|
1030 | |
---|
1031 | \subsubsection{Allocation Buffer} |
---|
1032 | \label{s:AllocationBuffer} |
---|
1033 | |
---|
1034 | An allocation buffer is reserved memory (see Section~\ref{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty. |
---|
1035 | That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later. |
---|
Any heap may use an allocation buffer, allocating from the buffer before requesting objects (containers) from the global heap or OS.
---|
1037 | The allocation buffer reduces contention and the number of global/operating-system calls. |
---|
With coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
---|
1039 | |
---|
1040 | Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts (simple bump allocation). |
---|
1041 | Furthermore, to prevent heap blowup, objects should be reused before allocating a new allocation buffer. |
---|
1042 | Thus, allocation buffers are often allocated more frequently at program/thread start, and then allocations often diminish. |
---|
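A minimal sketch of bump allocation from a buffer follows; the 16-byte alignment and the refill policy (returning @NULL@ so the caller requests a new buffer from the global heap or OS) are illustrative assumptions.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Buffer {
	char * next;   // next free byte in the buffer
	char * end;    // one past the last byte of the buffer
} Buffer;

static void * bumpAlloc( Buffer * buf, size_t size ) {
	size = (size + 15) & ~(size_t)15;              // align request to 16 bytes
	if ( (size_t)(buf->end - buf->next) < size )
		return NULL;                               // exhausted: caller refills buffer
	void * addr = buf->next;                       // bump the pointer
	buf->next += size;
	return addr;
}
\end{lstlisting}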
1043 | |
---|
1044 | Using an allocation buffer with a thread heap avoids active false-sharing, since all objects in the allocation buffer are allocated to the same thread. |
---|
1045 | For example, if all objects sharing a cache line come from the same allocation buffer, then these objects are allocated to the same thread, avoiding active false-sharing. |
---|
1046 | Active false-sharing may still occur if objects are freed to the global heap and reused by another heap. |
---|
1047 | |
---|
1048 | Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated. |
---|
1049 | A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or OS. |
---|
1050 | The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer. |
---|
1051 | |
---|
The unused part of a container, neither allocated nor freed, is an allocation buffer.
---|
1053 | For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made. |
---|
1054 | This lazy method of constructing objects is beneficial in terms of paging and caching. |
---|
1055 | For example, although an entire container, possibly spanning several pages, is allocated from the OS, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache. |
---|
1056 | |
---|
1057 | |
---|
1058 | \subsubsection{Lock-Free Operations} |
---|
1059 | \label{s:LockFreeOperations} |
---|
1060 | |
---|
1061 | A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual thread has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}. |
---|
1062 | (A \newterm{wait-free algorithm} puts a bound on the number of steps any thread takes to complete an operation to prevent starvation.) |
---|
1063 | Lock-free operations can be used in an allocator to reduce or eliminate the use of locks. |
---|
While locks and lock-free data-structures often have equal performance, lock-free has the advantage of not holding a lock across preemption, so other threads can continue to make progress.
---|
With respect to the heap, such preemption while holding a heap lock is unlikely unless all threads make extremely high use of dynamic-memory allocation, which can be an indication of poor design.
---|
1066 | Nevertheless, lock-free algorithms can reduce the number of context switches, since a thread does not yield/block while waiting for a lock; |
---|
1067 | on the other hand, a thread may busy-wait for an unbounded period holding a processor. |
---|
1068 | Finally, lock-free implementations have greater complexity and hardware dependency. |
---|
1069 | Lock-free algorithms can be applied most easily to simple free-lists, \eg remote free-list, to allow lock-free insertion and removal from the head of a stack. |
---|
1070 | Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly more complex. |
---|
1071 | Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator. |
---|
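For example, the following sketch shows a lock-free (Treiber-stack) push using C11 atomics, suitable for inserting at the head of a remote free-list; a corresponding pop must additionally address the ABA problem, which is elided here.
\begin{lstlisting}[language=C]
#include <stdatomic.h>

typedef struct Node { struct Node * next; } Node;

void lockFreePush( _Atomic(Node *) * top, Node * n ) {
	Node * old = atomic_load_explicit( top, memory_order_relaxed );
	do {
		n->next = old;                 // link new node above the current top
	} while ( ! atomic_compare_exchange_weak_explicit(
				top, &old, n, memory_order_release, memory_order_relaxed ) );
}
\end{lstlisting}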
1072 | |
---|
1073 | |
---|
1074 | \section{Allocator} |
---|
1075 | \label{c:Allocator} |
---|
1076 | |
---|
1077 | This section presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading). |
---|
1078 | The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}. |
---|
1079 | |
---|
1080 | |
---|
1081 | \subsection{llheap} |
---|
1082 | |
---|
The primary design objective for llheap is low latency across all allocator calls, independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
---|
1084 | (Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.) |
---|
1085 | A direct consequence of this objective is very simple or no storage coalescing; |
---|
1086 | hence, llheap's design is willing to use more storage to lower latency. |
---|
This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
---|
1088 | Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in Section~\ref{c:Performance}). |
---|
1089 | |
---|
1090 | % The objective of llheap's new design was to fulfill following requirements: |
---|
1091 | % \begin{itemize} |
---|
1092 | % \item It should be concurrent and thread-safe for multi-threaded programs. |
---|
1093 | % \item It should avoid global locks, on resources shared across all threads, as much as possible. |
---|
1094 | % \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators). |
---|
1095 | % \item It should be a lightweight memory allocator. |
---|
1096 | % \end{itemize} |
---|
1097 | |
---|
1098 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
1099 | |
---|
1100 | \subsection{Design Choices} |
---|
1101 | |
---|
1102 | % Some of the rejected designs are discussed because they show the path to the final design (see discussion in Section~\ref{s:MultipleHeaps}). |
---|
1103 | % Note, a few simple tests for a design choice were compared with the current best allocators to determine the viability of a design. |
---|
1104 | |
---|
1105 | |
---|
1106 | % \paragraph{T:1 model} |
---|
1107 | % Figure~\ref{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are shared among N KTs. |
---|
1108 | % This design leverages the fact that usually the allocation requests are less than 1024 bytes and there are only a few different request sizes. |
---|
1109 | % When KTs $\le$ N, the common bucket sizes are uncontented; |
---|
1110 | % when KTs $>$ N, the free buckets are contented and latency increases significantly. |
---|
1111 | % In all cases, a KT must acquire/release a lock, contented or uncontented, along the fast allocation path because a bucket is shared. |
---|
1112 | % Therefore, while threads are contending for a small number of buckets sizes, the buckets are distributed among them to reduce contention, which lowers latency; |
---|
1113 | % however, picking N is workload specific. |
---|
1114 | % |
---|
1115 | % \begin{figure} |
---|
1116 | % \centering |
---|
1117 | % \input{AllocDS1} |
---|
1118 | % \caption{T:1 with Shared Buckets} |
---|
1119 | % \label{f:T1SharedBuckets} |
---|
1120 | % \end{figure} |
---|
1121 | % |
---|
1122 | % Problems: |
---|
1123 | % \begin{itemize} |
---|
1124 | % \item |
---|
1125 | % Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator. |
---|
1126 | % \item |
---|
1127 | % When no thread is assigned a bucket number, its free storage is unavailable. |
---|
1128 | % \item |
---|
1129 | % All KTs contend for the global-pool lock for initial allocations, before free-lists get populated. |
---|
1130 | % \end{itemize} |
---|
1131 | % Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. |
---|
1132 | |
---|
1133 | % \paragraph{T:H model} |
---|
1134 | % Figure~\ref{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded (distributed) across the KTs. |
---|
1135 | % A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket. |
---|
1136 | % When KT $\le$ N, the heaps might be uncontented; |
---|
1137 | % when KTs $>$ N, the heaps are contented. |
---|
1138 | % In all cases, a KT must acquire/release a lock, contented or uncontented along the fast allocation path because a heap is shared. |
---|
1139 | % By increasing N, this approach reduces contention but increases storage (time versus space); |
---|
1140 | % however, picking N is workload specific. |
---|
1141 | % |
---|
1142 | % \begin{figure} |
---|
1143 | % \centering |
---|
1144 | % \input{AllocDS2} |
---|
1145 | % \caption{T:H with Shared Heaps} |
---|
1146 | % \label{f:THSharedHeaps} |
---|
1147 | % \end{figure} |
---|
1148 | % |
---|
1149 | % Problems: |
---|
1150 | % \begin{itemize} |
---|
1151 | % \item |
---|
1152 | % Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator. |
---|
1153 | % \item |
---|
1154 | % When no thread is assigned to a heap, its free storage is unavailable. |
---|
1155 | % \item |
---|
1156 | % Ownership issues arise (see Section~\ref{s:Ownership}). |
---|
1157 | % \item |
---|
1158 | % All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated. |
---|
1159 | % \end{itemize} |
---|
1160 | % Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. |
---|
1161 | |
---|
1162 | % \paragraph{T:H model, H = number of CPUs} |
---|
1163 | % This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@. |
---|
1164 | % (See Figure~\ref{f:THSharedHeaps} but with a heap bucket per CPU.) |
---|
1165 | % Hence, each CPU logically has its own private heap and local pool. |
---|
1166 | % A memory operation is serviced from the heap associated with the CPU executing the operation. |
---|
1167 | % This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership). |
---|
1168 | % This approach is essentially an M:N approach where M is the number if KTs and N is the number of CPUs. |
---|
1169 | % |
---|
1170 | % Problems: |
---|
1171 | % \begin{itemize} |
---|
1172 | % \item |
---|
1173 | % Need to know when a CPU is added/removed from the @taskset@. |
---|
1174 | % \item |
---|
1175 | % Need a fast way to determine the CPU a KT is executing on to access the appropriate heap. |
---|
1176 | % \item |
---|
1177 | % Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}. |
---|
1178 | % \begin{quote} |
---|
1179 | % A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable} |
---|
1180 | % \end{quote} |
---|
1181 | % If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness. |
---|
1182 | % Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable. |
---|
1183 | % Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler. |
---|
1184 | % |
---|
1185 | % Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted. |
---|
1186 | % \end{itemize} |
---|
1187 | % Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-subsection along the allocation fast-path produced a significant increase in allocation costs. |
---|
1188 | % Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs. |
---|
1189 | % For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs. |
---|
1190 | % The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation. |
---|
1191 | % Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical subsection. |
---|
1192 | % Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap. |
---|
1193 | % If @librseq@ had an @rseq_abort@ which: |
---|
1194 | % \begin{enumerate} |
---|
1195 | % \item |
---|
1196 | % Marked the current restartable critical-subsection as cancelled so it restarts when attempting to commit. |
---|
1197 | % \item |
---|
1198 | % Do nothing if there is no current restartable critical subsection in progress. |
---|
1199 | % \end{enumerate} |
---|
1200 | % Then @rseq_abort@ could be called on the backside of a user-level context-switching. |
---|
1201 | % A feature similar to this idea might exist for hardware transactional-memory. |
---|
1202 | % A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection. |
---|
1203 | |
---|
1204 | % \subsubsection{Allocation Fastpath} |
---|
1205 | % \label{s:AllocationFastpath} |
---|
1206 | |
---|
1207 | llheap's design was reviewed and changed multiple times during its development. Only the final design choices are |
---|
1208 | discussed in this paper. |
---|
1209 | (See~\cite{Zulfiqar22} for a discussion of alternate choices and reasons for rejecting them.) |
---|
1210 | All designs were analyzed for the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced. |
---|
The heap model chosen is 1:1, which is the T:H model with T = H, where there is one thread-local heap for each KT.
(This is the T:H shared-heap design but with a heap per KT and no bucket or local-pool lock.)
---|
1213 | Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted. |
---|
Heaps are uncontended for a KT's memory operations as every KT has its own thread-local heap, modulo operations on the global pool and ownership.
---|
1215 | |
---|
1216 | Problems: |
---|
1217 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1218 | \item |
---|
1219 | Need to know when a KT starts/terminates to create/delete its heap. |
---|
1220 | |
---|
1221 | \noindent |
---|
1222 | It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates. |
---|
1223 | \item |
---|
1224 | There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap. |
---|
1225 | |
---|
1226 | \noindent |
---|
1227 | The classic solution only deletes a heap after all referents are returned, which is complex. |
---|
1228 | The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees. |
---|
1229 | If old referents return storage to a terminated heap, it is handled in the same way as an active heap. |
---|
1230 | To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation). |
---|
1231 | In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT. |
---|
1232 | \item |
---|
1233 | There can be significant external fragmentation as the number of KTs increases. |
---|
1234 | |
---|
1235 | \noindent |
---|
1236 | In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs. |
---|
1237 | Since the number of CPUs is relatively small, and a heap is also relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs. |
---|
1238 | \item |
---|
1239 | Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}. |
---|
1240 | \begin{quote} |
---|
1241 | A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable} |
---|
1242 | \end{quote} |
---|
1243 | If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness. |
---|
1244 | Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable. |
---|
1245 | Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler. |
---|
1246 | |
---|
1247 | Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted. |
---|
1248 | |
---|
1249 | %There is the same serially-reusable problem with UTs migrating across KTs. |
---|
1250 | \end{itemize} |
---|
1251 | Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach. |
---|
1252 | |
---|
1253 | |
---|
1254 | \vspace{5pt} |
---|
1255 | \noindent |
---|
The conclusion from this design exercise is that any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces a significant slowdown.
---|
1257 | For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps might be shared by multiple threads, even when KTs $\le$ N. |
---|
1258 | For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath. |
---|
1259 | However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs. |
---|
1260 | More OS support is required to make this model viable, but there is still the serially-reusable problem with user-level threading. |
---|
Hence, the 1:1 model has no atomic actions along the fastpath and no special operating-system support requirements.
---|
1262 | The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in Section~\ref{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns. |
---|
1263 | |
---|
1264 | |
---|
1265 | % \begin{itemize} |
---|
1266 | % \item |
---|
1267 | % A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated. |
---|
1268 | % \item |
---|
1269 | % Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario. |
---|
1270 | % \item |
---|
1271 | % Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it. |
---|
1272 | % that because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower. |
---|
1273 | % \end{itemize} |
---|
1274 | % Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons: |
---|
1275 | |
---|
1276 | % \subsubsection{Advantages of distributed design} |
---|
1277 | % |
---|
1278 | % The distributed design of llheap is concurrent to work in multi-threaded applications. |
---|
1279 | % Some key benefits of the distributed design of llheap are as follows: |
---|
1280 | % \begin{itemize} |
---|
1281 | % \item |
---|
1282 | % The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned. |
---|
1283 | % \item |
---|
1284 | % Low or almost no contention on heap resources. |
---|
1285 | % \item |
---|
1286 | % It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty. |
---|
1287 | % \item |
---|
1288 | % Distributed design avoids unnecessary locks on resources shared across all KTs. |
---|
1289 | % \end{itemize} |
---|
1290 | |
---|
1291 | \subsubsection{Allocation Latency} |
---|
1292 | |
---|
1293 | A primary goal of llheap is low latency, hence the name low-latency heap (llheap). |
---|
1294 | Two forms of latency are internal and external. |
---|
Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the OS.
---|
1296 | Ideally latency is $O(1)$ with a small constant. |
---|
1297 | |
---|
Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing; the lack of coalescing leads to external fragmentation.
---|
1299 | The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger). |
---|
1300 | |
---|
Obtaining $O(1)$ external latency means acquiring one large storage area from the OS and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
---|
1302 | Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable. |
---|
1303 | The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \pageref{p:malloc_expansion}). |
---|
1304 | Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent. |
---|
1305 | |
---|
1306 | |
---|
1307 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
1308 | |
---|
1309 | \subsection{llheap Structure} |
---|
1310 | |
---|
1311 | Figure~\ref{f:llheapStructure} shows the design of llheap, which uses the following features: |
---|
1312 | 1:1 multiple-heap model to minimize the fastpath, |
---|
1313 | can be built with or without heap ownership, |
---|
1314 | headers per allocation versus containers, |
---|
1315 | no coalescing to minimize latency, |
---|
1316 | global heap memory (pool) obtained from the OS using @mmap@ to create and reuse heaps needed by threads, |
---|
1317 | local reserved memory (pool) per heap obtained from global pool, |
---|
1318 | global reserved memory (pool) obtained from the OS using @sbrk@ call, |
---|
1319 | optional fast-lookup table for converting allocation requests into bucket sizes, |
---|
1320 | optional statistic-counters table for accumulating counts of allocation operations. |
---|
1321 | |
---|
1322 | \begin{figure} |
---|
1323 | \centering |
---|
1324 | % \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps} |
---|
1325 | \input{llheap} |
---|
1326 | \caption{llheap Structure} |
---|
1327 | \label{f:llheapStructure} |
---|
1328 | \end{figure} |
---|
1329 | |
---|
llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores; this array persists for the program duration.
---|
1331 | There is a global bump-pointer to the next free heap in the array. |
---|
1332 | When this array is exhausted, another array of heaps is allocated. |
---|
There is a global top pointer for an intrusive linked-list to chain free heaps from terminated threads.
When statistics are turned on, there is a global top pointer for an intrusive linked-list to chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps using @malloc_stats@.
---|
1335 | |
---|
1336 | When a KT starts, a heap is allocated from the current array for exclusive use by the KT. |
---|
When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth in the number of heaps.
---|
The free heaps are stored on a stack so hot storage is reused first.
---|
1339 | Preserving all heaps, created during the program lifetime, solves the storage lifetime problem when ownership is used. |
---|
1340 | This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially. |
---|
1341 | llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap. |
---|
1342 | |
---|
Each heap uses segregated free-buckets, with free objects distributed across 91 different sizes from 16 bytes to 4M bytes.
---|
1344 | All objects in a bucket are of the same size. |
---|
1345 | The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the OS. |
---|
1346 | Each free bucket of a specific size has two lists. |
---|
1347 | 1) A free stack used solely by the KT heap-owner, so push/pop operations do not require locking. |
---|
The free objects are kept on a stack so hot storage is reused first.
---|
2) For ownership, a shared away-stack, onto which other KTs return storage allocated by this heap, so push/pop operations require locking.
---|
When the free stack is empty, the entire away stack is removed and becomes the corresponding free stack.
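To make the two-list design concrete, the following is a minimal C sketch of a per-size bucket and its pop operation; the type and routine names are hypothetical illustrations, not llheap's actual declarations.
\begin{lstlisting}
#include <stddef.h>
#include <pthread.h>

typedef struct FreeBlock {
	struct FreeBlock * next;			// intrusive link through the free storage itself
} FreeBlock;

typedef struct Bucket {
	size_t blockSize;				// fixed object size for this bucket
	FreeBlock * freeStack;				// owner-only free stack => no locking
	FreeBlock * awayStack;				// remote frees => locked
	pthread_mutex_t awayLock;
} Bucket;

static FreeBlock * bucketPop( Bucket * b ) {
	if ( b->freeStack == NULL ) {			// local stack empty => take entire away stack
		pthread_mutex_lock( &b->awayLock );
		b->freeStack = b->awayStack;
		b->awayStack = NULL;
		pthread_mutex_unlock( &b->awayLock );
	}
	FreeBlock * block = b->freeStack;
	if ( block != NULL ) b->freeStack = block->next;	// pop without locking by the owner
	return block;					// NULL => refill from reserved/global pool
}
\end{lstlisting}
Taking the entire away stack in one locked operation amortizes the locking cost across many subsequent unlocked pops.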
---|
1351 | |
---|
1352 | Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$. |
---|
1353 | First, the allocation is divided into small (@sbrk@) or large (@mmap@). |
---|
1354 | For large allocations, the storage is mapped directly from the OS. |
---|
1355 | For small allocations, $S$ is quantized into a bucket size. |
---|
1356 | Quantizing is performed using a binary search over the ordered bucket array. |
---|
An optional optimization is an $O(1)$ fast lookup for sizes $<$ 64K, using a 64K array of type @char@, where each element holds the index of the corresponding bucket.
---|
1358 | The @char@ type restricts the number of bucket sizes to 256. |
---|
For $S \geq$ 64K, a binary search is used.
---|
1360 | Then, the allocation storage is obtained from the following locations (in order), with increasing latency: |
---|
1361 | bucket's free stack, |
---|
1362 | bucket's away stack, |
---|
1363 | heap's local pool, |
---|
1364 | global pool, |
---|
1365 | OS (@sbrk@). |
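The following is a minimal C sketch of the size-quantization step described above, assuming hypothetical names for the bucket array and lookup table.
\begin{lstlisting}
#include <stddef.h>

enum { NBUCKETS = 91, FASTSIZE = 64 * 1024 };
static const size_t bucketSizes[NBUCKETS] = { 16, 32, 48, 64 /* ... up to 4M */ };
static unsigned char bucketIndex[FASTSIZE];	// filled at startup; char limits buckets to 256

// return the index of the smallest bucket whose size is >= size
static unsigned int sizeToBucket( size_t size ) {
	if ( size < FASTSIZE ) return bucketIndex[size];	// O(1) table lookup
	unsigned int lo = 0, hi = NBUCKETS - 1;		// otherwise binary search
	while ( lo < hi ) {
		unsigned int mid = ( lo + hi ) / 2;
		if ( bucketSizes[mid] < size ) lo = mid + 1; else hi = mid;
	}
	return lo;
}
\end{lstlisting}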
---|
1366 | |
---|
1367 | \begin{algorithm} |
---|
1368 | \caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc} |
---|
1369 | \begin{algorithmic}[1] |
---|
1370 | \State $\textit{O} \gets \text{NULL}$ |
---|
\If {$S \geq \textit{mmap-threshold}$}
---|
1372 | \State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$ |
---|
1373 | \Else |
---|
1374 | \State $\textit{B} \gets \text{smallest free-bucket} \geq S$ |
---|
1375 | \If {$\textit{B's free-list is empty}$} |
---|
1376 | \If {$\textit{B's away-list is empty}$} |
---|
1377 | \If {$\textit{heap's allocation buffer} < S$} |
---|
1378 | \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$ |
---|
1379 | \EndIf |
---|
1380 | \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$ |
---|
1381 | \Else |
---|
1382 | \State $\textit{merge B's away-list into free-list}$ |
---|
1383 | \State $\textit{O} \gets \text{pop an object from B's free-list}$ |
---|
1384 | \EndIf |
---|
1385 | \Else |
---|
1386 | \State $\textit{O} \gets \text{pop an object from B's free-list}$ |
---|
1387 | \EndIf |
---|
1388 | \State $\textit{O's owner} \gets \text{B}$ |
---|
1389 | \EndIf |
---|
\State \Return $\textit{O}$
---|
1391 | \end{algorithmic} |
---|
1392 | \end{algorithm} |
---|
1393 | |
---|
1394 | \begin{algorithm} |
---|
1395 | \caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn} |
---|
1396 | \begin{algorithmic}[1] |
---|
1397 | \If {$\textit{A mapped allocation}$} |
---|
1398 | \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$ |
---|
1399 | \Else |
---|
1400 | \State $\text{B} \gets \textit{O's owner}$ |
---|
1401 | \If {$\textit{B is thread-local heap's bucket}$} |
---|
1402 | \State $\text{push A to B's free-list}$ |
---|
1403 | \Else |
---|
1404 | \State $\text{push A to B's away-list}$ |
---|
1405 | \EndIf |
---|
1406 | \EndIf |
---|
1407 | \end{algorithmic} |
---|
1408 | \end{algorithm} |
---|
1409 | |
---|
1410 | \begin{algorithm} |
---|
1411 | \caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn} |
---|
1412 | \begin{algorithmic}[1] |
---|
1413 | \If {$\textit{A mapped allocation}$} |
---|
1414 | \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$ |
---|
1415 | \Else |
---|
1416 | \State $\text{B} \gets \textit{O's owner}$ |
---|
1417 | \If {$\textit{B is thread-local heap's bucket}$} |
---|
1418 | \State $\text{push A to B's free-list}$ |
---|
1419 | \Else |
---|
1420 | \State $\text{C} \gets \textit{thread local heap's bucket with same size as B}$ |
---|
1421 | \State $\text{push A to C's free-list}$ |
---|
1422 | \EndIf |
---|
1423 | \EndIf |
---|
1424 | \end{algorithmic} |
---|
1425 | \end{algorithm} |
---|
1426 | |
---|
1427 | |
---|
1428 | Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership. |
---|
1429 | First, the address is divided into small (@sbrk@) or large (@mmap@). |
---|
1430 | For large allocations, the storage is unmapped back to the OS. |
---|
1431 | For small allocations, the bucket associated with the request size is retrieved. |
---|
1432 | If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket. |
---|
1433 | If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack. |
---|
1434 | |
---|
1435 | Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the de-allocation (free) outline for an object at address $A$ without ownership. |
---|
The algorithm is the same as for ownership except when the bucket is not local to the thread.
In that case, the deallocating thread's bucket with the same size as the owner bucket is selected, and the allocation is pushed onto that bucket's free stack.
---|
1438 | |
---|
1439 | Finally, the llheap design funnels \label{p:FunnelRoutine} all allocation/deallocation operations through the @malloc@ and @free@ routines, which are the only routines to directly access and manage the internal data structures of the heap. |
---|
1440 | Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated. |
---|
1441 | This design simplifies heap-management code during development and maintenance. |
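For example, the following is a minimal sketch of how a funnelled @calloc@ might be composed, where @my_calloc@ and @setZeroFillBit@ are hypothetical names rather than llheap's internal routines.
\begin{lstlisting}
#include <stdlib.h>
#include <string.h>

extern void setZeroFillBit( void * addr );	// hypothetical: mark sticky zero-fill in header

void * my_calloc( size_t nmemb, size_t size ) {
	size_t total = nmemb * size;		// overflow check elided for brevity
	void * addr = malloc( total );		// only malloc touches the heap data structures
	if ( addr == NULL ) return NULL;
	memset( addr, 0, total );		// post-allocation work outside the funnel
	setZeroFillBit( addr );			// remember the sticky property
	return addr;
}
\end{lstlisting}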
---|
1442 | |
---|
1443 | |
---|
1444 | \subsubsection{Alignment} |
---|
1445 | |
---|
1446 | Most dynamic memory allocations have a minimum storage alignment for the contained object(s). |
---|
Often the minimum memory alignment, M, is the bus width (32 or 64-bit), the largest register (double, long double), the largest atomic-instruction operand (DCAS), or vector data (MMX).
---|
In general, the minimum storage alignment is an 8/16-byte boundary on 32/64-bit computers.
---|
1449 | For consistency, the object header is normally aligned at this same boundary. |
---|
1450 | Larger alignments must be a power of 2, such as page alignment (4/8K). |
---|
1451 | Any alignment request, N, $\le$ the minimum alignment is handled as a normal allocation with minimal alignment. |
---|
1452 | |
---|
1453 | For alignments greater than the minimum, the obvious approach for aligning to address @A@ is: compute the next address that is a multiple of @N@ after the current end of the heap, @E@, plus room for the header before @A@ and the size of the allocation after @A@, moving the end of the heap to @E'@. |
---|
1454 | \begin{center} |
---|
1455 | \input{Alignment1} |
---|
1456 | \end{center} |
---|
1457 | The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations. |
---|
1458 | The same approach is used for sufficiently large free blocks, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage. |
---|
1459 | In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation. |
---|
1460 | However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object. |
---|
1461 | As well, it does not work for large allocations, where many memory allocators switch from program @sbrk@ to operating-system @mmap@. |
---|
1462 | The reason is that @mmap@ only starts on a page boundary, and it is difficult to reuse the storage before the alignment boundary for other requests. |
---|
1463 | Finally, this approach is incompatible with allocator designs that funnel allocation requests through @malloc@ as it directly manipulates management information within the allocator to optimize the space/time of a request. |
---|
1464 | |
---|
1465 | Instead, llheap alignment is accomplished by making a \emph{pessimistic} allocation request for sufficient storage to ensure that \emph{both} the alignment and size request are satisfied, \eg: |
---|
1466 | \begin{center} |
---|
1467 | \input{Alignment2} |
---|
1468 | \end{center} |
---|
1469 | The amount of storage necessary is @alignment - M + size@, which ensures there is an address, @A@, after the storage returned from @malloc@, @P@, that is a multiple of @alignment@ followed by sufficient storage for the data object. |
---|
1470 | The approach is pessimistic because if @P@ already has the correct alignment @N@, the initial allocation has already requested sufficient space to move to the next multiple of @N@. |
---|
1471 | For this special case, there is @alignment - M@ bytes of unused storage after the data object, which subsequently can be used by @realloc@. |
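The following is a minimal C sketch of this pessimistic computation, assuming the minimum alignment @M@ is 16 bytes and the requested alignment is a power of 2 greater than @M@; @my_memalign@ is an illustrative name and the fake-header construction is elided.
\begin{lstlisting}
#include <stdint.h>
#include <stdlib.h>

enum { M = 16 };				// assumed minimum allocation alignment

void * my_memalign( size_t alignment, size_t size ) {
	char * P = malloc( alignment - M + size );	// pessimistic request
	if ( P == NULL ) return NULL;
	// first multiple of alignment at or after P
	uintptr_t A = ( (uintptr_t)P + alignment - 1 ) & ~(uintptr_t)( alignment - 1 );
	// when P != A, a fake header is constructed in the M or more bytes before A
	return (void *)A;
}
\end{lstlisting}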
---|
1472 | |
---|
1473 | Note, the address returned is @A@, which is subsequently returned to @free@. |
---|
1474 | However, to correctly free the allocated object, the value @P@ must be computable, since that is the value generated by @malloc@ and returned within @memalign@. |
---|
1475 | Hence, there must be a mechanism to detect when @P@ $\neq$ @A@ and how to compute @P@ from @A@. |
---|
1476 | |
---|
1477 | The llheap approach uses two headers: |
---|
1478 | the \emph{original} header associated with a memory allocation from @malloc@, and a \emph{fake} header within this storage before the alignment boundary @A@, which is returned from @memalign@, e.g.: |
---|
1479 | \begin{center} |
---|
1480 | \input{Alignment2Impl} |
---|
1481 | \end{center} |
---|
1482 | Since @malloc@ has a minimum alignment of @M@, @P@ $\neq$ @A@ only holds for alignments greater than @M@. |
---|
1483 | When @P@ $\neq$ @A@, the minimum distance between @P@ and @A@ is @M@ bytes, due to the pessimistic storage-allocation. |
---|
1484 | Therefore, there is always room for an @M@-byte fake header before @A@. |
---|
1485 | |
---|
1486 | The fake header must supply an indicator to distinguish it from a normal header and the location of address @P@ generated by @malloc@. |
---|
This information is encoded as an offset from @A@ to @P@ and the original alignment (discussed in Section~\ref{s:ReallocStickyProperties}).
---|
1488 | To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data. |
---|
1489 | \begin{center} |
---|
1490 | \input{FakeHeader} |
---|
1491 | \end{center} |
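The following is a hypothetical sketch of the free-path decoding, with illustrative field names and sizes rather than llheap's actual header layout.
\begin{lstlisting}
#include <stdint.h>

typedef struct FakeHeader {
	uint32_t alignment;			// low bit set => this is a fake header
	uint32_t offset;			// distance in bytes from A back to P
} FakeHeader;

// given an address passed to free, recover the storage address from malloc
static void * realStorage( void * A ) {
	FakeHeader * h = (FakeHeader *)A - 1;	// header words immediately before A
	if ( h->alignment & 1 )			// fake-header indicator on?
		return (char *)A - h->offset;	// P != A, so step back to P
	return A;				// normal header => P == A
}
\end{lstlisting}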
---|
1492 | |
---|
1493 | |
---|
1494 | \subsubsection{\lstinline{realloc} and Sticky Properties} |
---|
1495 | \label{s:ReallocStickyProperties} |
---|
1496 | |
---|
1497 | The allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data, rather than performing the following steps manually. |
---|
1498 | \begin{flushleft} |
---|
1499 | \begin{tabular}{ll} |
---|
1500 | \multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\ |
---|
1501 | \begin{lstlisting} |
---|
1502 | T * naddr = realloc( oaddr, newSize ); |
---|
1503 | |
---|
1504 | |
---|
1505 | |
---|
1506 | \end{lstlisting} |
---|
1507 | & |
---|
1508 | \begin{lstlisting} |
---|
1509 | T * naddr = (T *)malloc( newSize ); $\C[2.4in]{// new storage}$ |
---|
1510 | memcpy( naddr, addr, oldSize ); $\C{// copy old bytes}$ |
---|
1511 | free( addr ); $\C{// free old storage}$ |
---|
1512 | addr = naddr; $\C{// change pointer}\CRT$ |
---|
1513 | \end{lstlisting} |
---|
1514 | \end{tabular} |
---|
1515 | \end{flushleft} |
---|
1516 | The realloc pattern leverages available storage at the end of an allocation due to bucket sizes, possibly eliminating a new allocation and copying. |
---|
Unfortunately, this pattern is underused, so opportunities to reduce storage-management costs are missed.
---|
1518 | In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc@, so even the initial @malloc@ can be a @realloc@ for consistency in the allocation pattern. |
---|
1519 | |
---|
1520 | The hidden problem for this pattern is the effect of zero fill and alignment with respect to reallocation. |
---|
1521 | Are these properties transient or persistent (``sticky'')? |
---|
1522 | For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, respectively, what happens when those allocations are given to @realloc@ to change size? |
---|
1523 | That is, if @realloc@ logically extends storage into unused bucket space or allocates new storage to satisfy a size change, are initial allocation properties preserved? |
---|
1524 | Currently, allocation properties are not preserved, so subsequent use of @realloc@ storage may cause inefficient execution or errors due to lack of zero fill or alignment. |
---|
1525 | This silent problem is unintuitive to programmers and difficult to locate because it is transient. |
---|
1526 | To prevent these problems, llheap preserves initial allocation properties for the lifetime of an allocation and the semantics of @realloc@ are augmented to preserve these properties, with additional query routines. |
---|
1527 | This change makes the realloc pattern efficient and safe. |
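The following small example illustrates the pitfall with a conventional @realloc@.
\begin{lstlisting}
#include <stdlib.h>

int main( void ) {
	double * a = calloc( 100, sizeof( double ) );	// sticky zero-fill property
	a = realloc( a, 200 * sizeof( double ) );	// grow the array
	// With most allocators, a[100..199] are NOT zero filled, so reading them
	// is a silent, transient bug; llheap preserves the zero-fill property.
	free( a );
}
\end{lstlisting}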
---|
1528 | |
---|
1529 | |
---|
1530 | \subsubsection{Header} |
---|
1531 | |
---|
Preserving allocation properties requires storing additional information with an allocation.
The best available option is the header, where Figure~\ref{f:llheapNormalHeader} shows the llheap storage layout.
The header has two data fields sized appropriately for 32/64-bit alignment requirements.
---|
1535 | The first field is a union of three values: |
---|
1536 | \begin{description} |
---|
1537 | \item[bucket pointer] |
---|
is for allocated storage and points back to the bucket associated with this storage request (see Figure~\ref{f:llheapStructure} for the fields accessible in a bucket).
---|
1539 | \item[mapped size] |
---|
1540 | is for mapped storage and is the storage size for use in unmapping. |
---|
1541 | \item[next free block] |
---|
1542 | is for free storage and is an intrusive pointer chaining same-size free blocks onto a bucket's free stack. |
---|
1543 | \end{description} |
---|
The second field remembers the request size versus the allocation (bucket) size, \eg a request for 42 bytes is rounded up to 64 bytes.
---|
1545 | Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors and also helps in memory management. |
---|
1546 | |
---|
1547 | \begin{figure} |
---|
1548 | \centering |
---|
1549 | \input{Header} |
---|
1550 | \caption{llheap Normal Header} |
---|
1551 | \label{f:llheapNormalHeader} |
---|
1552 | \end{figure} |
---|
1553 | |
---|
The low-order 3 bits of the first field are \emph{unused} by any stored values, as these values are 16-byte aligned by default, whereas the second field may use all of its bits.
---|
1555 | The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively. |
---|
1556 | Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header. |
---|
This implementation allows a fast test of whether any of the lower 3 bits are on (@&@ and compare).
---|
1558 | If no bits are on, it implies a basic allocation, which is handled quickly; |
---|
1559 | otherwise, the bits are analysed and appropriate actions are taken for the complex cases. |
---|
Since most allocations are basic, they take significantly less time because the memory operations stay on the allocation and free fastpath.
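The following is an illustrative sketch of this bit encoding and fastpath test; the names are hypothetical.
\begin{lstlisting}
#include <stdint.h>

enum { MAPPED_BIT = 1, ZEROFILL_BIT = 2, ALIGN_BIT = 4, FLAG_MASK = 7 };

static inline int isBasic( uintptr_t field1 ) {
	return ( field1 & FLAG_MASK ) == 0;	// no bits on => basic allocation
}

static inline void * bucketPointer( uintptr_t field1 ) {
	return (void *)( field1 & ~(uintptr_t)FLAG_MASK );	// clear flag bits
}
\end{lstlisting}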
---|
1561 | |
---|
1562 | |
---|
1563 | \subsection{Statistics and Debugging} |
---|
1564 | |
---|
1565 | llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour. |
---|
Incrementing statistics counters must occur on the allocation fastpath.
---|
1567 | As noted, any atomic operation along the fastpath produces a significant increase in allocation costs. |
---|
1568 | To make statistics performant enough for use on running systems, each heap has its own set of statistic counters, so heap operations do not require atomic operations. |
---|
1569 | |
---|
1570 | To locate all statistic counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps. |
---|
1571 | Note, the list is locked to prevent errors traversing an active list; |
---|
1572 | the statistics counters are not locked and can flicker during accumulation. |
---|
Figure~\ref{f:StatisticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
---|
1574 | No other memory allocator studied provides as comprehensive statistical information. |
---|
1575 | Finally, these statistics were invaluable during the development of this work for debugging and verifying correctness and should be equally valuable to application developers. |
---|
1576 | |
---|
1577 | \begin{figure} |
---|
1578 | \begin{lstlisting} |
---|
1579 | Heap statistics: (storage request / allocation) |
---|
1580 | malloc >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes |
---|
1581 | aalloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1582 | calloc >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes |
---|
1583 | memalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1584 | amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1585 | cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1586 | resize >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1587 | realloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes |
---|
1588 | free !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes |
---|
1589 | away pulls 0; pushes 0; storage 0 / 0 bytes |
---|
1590 | sbrk calls 1; storage 10,485,760 bytes |
---|
1591 | mmap calls 10,000; storage 10,000 / 10,035 bytes |
---|
1592 | munmap calls 10,000; storage 10,000 / 10,035 bytes |
---|
1593 | threads started 4; exited 3 |
---|
1594 | heaps new 4; reused 0 |
---|
1595 | \end{lstlisting} |
---|
1596 | \caption{Statistics Output} |
---|
\label{f:StatisticsOutput}
---|
1598 | \end{figure} |
---|
1599 | |
---|
1600 | llheap can also be built with debug checking, which inserts many asserts along all allocation paths. |
---|
1601 | These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruptions because internal values (like header fields) are overwritten. |
---|
1602 | These checks are best effort as opposed to complete allocation checking as in @valgrind@. |
---|
1603 | Nevertheless, the checks detect many allocation problems. |
---|
There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations live for the program's lifetime and hence never free their storage.
---|
1605 | For example, @printf@ allocates a 1024-byte buffer on the first call and never deletes this buffer. |
---|
1606 | To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \pageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference. |
---|
1607 | Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related. |
---|
1608 | |
---|
Tests indicate only a 30\% performance decrease when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistics is mitigated by limited calls, often only one at the end of the program.
---|
1610 | |
---|
1611 | |
---|
1612 | \subsection{User-level Threading Support} |
---|
1613 | \label{s:UserlevelThreadingSupport} |
---|
1614 | |
---|
The serially-reusable problem (see \pageref{p:SeriallyReusable}) occurs for kernel threads in the ``T:H, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
---|
The solution is to prevent interrupts that can result in a CPU or KT change during operations that are logically critical subsections, such as starting a memory operation on one KT and completing it on another.
---|
1617 | Locking these critical subsections negates any attempt for a quick fastpath and results in high contention. |
---|
1618 | For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread. |
---|
1619 | Without time slicing, a user thread performing a long computation can prevent the execution of (starve) other threads. |
---|
1620 | To prevent starvation for a memory-allocation-intensive thread, \ie the time slice always triggers in an allocation critical-subsection for one thread so the thread never gets time sliced, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice. |
---|
The rollforward flag is tested at the end of each allocation funnel routine (see \pageref{p:FunnelRoutine}), and if set, it is reset and a voluntary yield (context switch) is performed to allow other threads to execute.
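The following is an illustrative sketch of this rollforward test, assuming hypothetical names for the flag and the yield routine.
\begin{lstlisting}
extern __thread volatile int rollforward;	// set by the signal handler on an aborted time slice
extern void uThreadYield( void );		// hypothetical voluntary context switch

static inline void checkRollforward( void ) {	// called at the end of each funnel routine
	if ( rollforward ) {
		rollforward = 0;
		uThreadYield();				// perform the deferred time slice
	}
}
\end{lstlisting}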
---|
1622 | |
---|
llheap uses two techniques to detect when execution is in an allocation operation or a routine called from an allocation operation, to abort any time slice during this period.
---|
1624 | On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting kernel-thread-local flags so the signal handler aborts immediately. |
---|
On the fastpath, disabling/enabling interrupts is too costly, as accessing kernel-thread-local storage can be expensive and is not user-thread-safe.
---|
1626 | For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing. |
---|
1627 | Hence, there is a window between loading the kernel-thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread. |
---|
1628 | |
---|
The fast technique (with lower run-time cost) is to define a special code subsection and place all non-interruptible routines in this subsection.
---|
1630 | The linker places all code in this subsection into a contiguous block of memory, but the order of routines within the block is unspecified. |
---|
Then, the signal handler compares the program counter at the point of interrupt with the start and end address of the non-interruptible subsection; if executing within this subsection, it aborts the time slice and sets the rollforward flag.
---|
1632 | This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible subsection (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath. |
---|
1633 | Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible subsection. |
---|
1634 | This issue is mitigated by the llheap funnel design so only funnel routines and a few statistics routines are placed in the non-interruptible subsection and their assembler code examined. |
---|
1635 | These techniques are used in both the \uC and \CFA versions of llheap as both of these systems have user-level threading. |
---|
1636 | |
---|
1637 | |
---|
1638 | \subsection{Bootstrapping} |
---|
1639 | |
---|
1640 | There are problems bootstrapping a memory allocator. |
---|
1641 | \begin{enumerate} |
---|
1642 | \item |
---|
1643 | Programs can be statically or dynamically linked. |
---|
1644 | \item |
---|
Controlling the order in which the linker schedules startup code is poorly supported, so it cannot be managed entirely.
---|
1646 | \item |
---|
1647 | Knowing a KT's start and end independently from the KT code is difficult. |
---|
1648 | \end{enumerate} |
---|
1649 | |
---|
1650 | For static linking, the allocator is loaded with the program. |
---|
1651 | Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program. |
---|
1652 | This approach allows allocator substitution by placing an allocation library before any other in the linked/load path. |
---|
1653 | |
---|
1654 | Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded. |
---|
1655 | As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including from the dynamic loader. |
---|
1656 | Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct. |
---|
1657 | Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations. |
---|
1658 | Testing showed up to a 5\% performance decrease with dynamic linking as compared to static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding. |
---|
1659 | |
---|
1660 | All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap. |
---|
1661 | The problem is getting initialization done before the first allocator call. |
---|
However, there does not seem to be a mechanism to tell either the static or dynamic loader to perform initialization code before any calls to a loaded library.
Compounding the situation, initialization code of other libraries and the run-time environment may itself call memory-allocation routines such as \lstinline{malloc}, and there is no way to ensure the memory allocator is initialized before any other initialization that makes a dynamic memory-allocation call.
---|
1665 | As a result, calls to allocation routines occur without initialization. |
---|
1666 | To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern). |
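The following is a minimal sketch of this singleton check, where @heapManagerCtor@ and @heapAlloc@ are hypothetical placeholders for the allocator's initialization and fastpath routines.
\begin{lstlisting}
#include <stddef.h>

extern void heapManagerCtor( void );		// hypothetical per-thread heap initialization
extern void * heapAlloc( size_t size );		// hypothetical allocation fastpath

static __thread int heapInitialized;		// zero before any initialization

void * malloc( size_t size ) {
	if ( ! heapInitialized ) {		// conditional check on every call
		heapInitialized = 1;		// set first to stop recursive initialization
		heapManagerCtor();
	}
	return heapAlloc( size );
}
\end{lstlisting}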
---|
1667 | |
---|
Two other important execution points are program startup and termination, which include prologue and epilogue code to bootstrap a program, of which programmers are unaware.
---|
1669 | For example, dynamic-memory allocations before/after the application starts should not be considered in statistics because the application does not make these calls. |
---|
1670 | llheap establishes these two points using routines: |
---|
1671 | \begin{lstlisting} |
---|
1672 | __attribute__(( constructor( 100 ) )) static void startup( void ) { |
---|
1673 | // clear statistic counters |
---|
1674 | // reset allocUnfreed counter |
---|
1675 | } |
---|
1676 | __attribute__(( destructor( 100 ) )) static void shutdown( void ) { |
---|
1677 | // sum allocUnfreed for all heaps |
---|
1678 | // subtract global unfreed storage |
---|
1679 | // if allocUnfreed > 0 then print warning message |
---|
1680 | } |
---|
1681 | \end{lstlisting} |
---|
1682 | which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority. |
---|
1683 | Application programs may only use global constructor/destructor priorities greater than 100. |
---|
Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the application terminates but before the program epilogue.
---|
1685 | By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue. |
---|
1686 | |
---|
1687 | While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines. |
---|
1688 | However, it is essential for the allocator to know when each KT is started/terminated. |
---|
One approach is to create a thread-local object with a constructor/destructor, which are triggered after a new KT starts and before it terminates, respectively.
---|
1690 | \begin{lstlisting} |
---|
1691 | struct ThreadManager { |
---|
1692 | volatile bool pgm_thread; |
---|
1693 | ThreadManager() {} // unusable |
---|
~ThreadManager() { if ( ! pgm_thread ) heapManagerDtor(); } // do not close program KT's heap
---|
1695 | }; |
---|
1696 | static thread_local ThreadManager threadManager; |
---|
1697 | \end{lstlisting} |
---|
1698 | Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor. |
---|
1699 | Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT. |
---|
Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to force @threadManager@'s construction; otherwise, its destructor is never called.
@pgm_thread@ is set to true only for the program KT; when any other KT terminates, @~ThreadManager@ is called to chain its heap onto the global-heap free-stack.
---|
1702 | The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage. |
---|
1703 | |
---|
1704 | Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage. |
---|
1705 | This recursion is handled with another thread-local flag to prevent double initialization. |
---|
1706 | A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@. |
---|
In the meantime, the terminated heap has been put on the global-heap free-stack and may be in use by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away stack using locking.
---|
1708 | |
---|
For user threading systems, the KTs are controlled by the runtime, and hence, start/end points are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
---|
1710 | The following API was created to provide interaction between the language runtime and the allocator. |
---|
1711 | \begin{lstlisting} |
---|
1712 | void startThread(); $\C{// KT starts}$ |
---|
1713 | void finishThread(); $\C{// KT ends}$ |
---|
1714 | void startup(); $\C{// when application code starts}$ |
---|
1715 | void shutdown(); $\C{// when application code ends}$ |
---|
1716 | bool traceHeap(); $\C{// enable allocation/free printing for debugging}$ |
---|
1717 | bool traceHeapOn(); $\C{// start printing allocation/free calls}$ |
---|
1718 | bool traceHeapOff(); $\C{// stop printing allocation/free calls}$ |
---|
1719 | \end{lstlisting} |
---|
1720 | This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way. |
---|
1721 | |
---|
1722 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
1723 | |
---|
1724 | \subsection{Added Features and Methods} |
---|
1725 | |
---|
1726 | The C dynamic-allocation API (see Figure~\ref{f:CDynamicAllocationAPI}) is neither orthogonal nor complete. |
---|
1727 | For example, |
---|
1728 | \begin{itemize} |
---|
1729 | \item |
---|
1730 | It is possible to zero fill or align an allocation but not both. |
---|
1731 | \item |
---|
1732 | It is \emph{only} possible to zero fill an array allocation. |
---|
1733 | \item |
---|
1734 | It is not possible to resize a memory allocation without data copying. |
---|
1735 | \item |
---|
1736 | @realloc@ does not preserve initial allocation properties. |
---|
1737 | \end{itemize} |
---|
As a result, programmers must implement these options themselves, which is error prone, and often blame the entire programming language for a poor dynamic-allocation API.
---|
1739 | Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation. |
---|
1740 | |
---|
1741 | \begin{figure} |
---|
1742 | \begin{lstlisting} |
---|
1743 | void * malloc( size_t size ); |
---|
1744 | void * calloc( size_t nmemb, size_t size ); |
---|
1745 | void * realloc( void * ptr, size_t size ); |
---|
1746 | void * reallocarray( void * ptr, size_t nmemb, size_t size ); |
---|
1747 | void free( void * ptr ); |
---|
1748 | void * memalign( size_t alignment, size_t size ); |
---|
1749 | void * aligned_alloc( size_t alignment, size_t size ); |
---|
1750 | int posix_memalign( void ** memptr, size_t alignment, size_t size ); |
---|
1751 | void * valloc( size_t size ); |
---|
1752 | void * pvalloc( size_t size ); |
---|
1753 | |
---|
1754 | struct mallinfo mallinfo( void ); |
---|
1755 | int mallopt( int param, int val ); |
---|
1756 | int malloc_trim( size_t pad ); |
---|
1757 | size_t malloc_usable_size( void * ptr ); |
---|
1758 | void malloc_stats( void ); |
---|
1759 | int malloc_info( int options, FILE * fp ); |
---|
1760 | \end{lstlisting} |
---|
1761 | \caption{C Dynamic-Allocation API} |
---|
1762 | \label{f:CDynamicAllocationAPI} |
---|
1763 | \end{figure} |
---|
1764 | |
---|
1765 | The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap. |
---|
1766 | |
---|
1767 | |
---|
1768 | \subsubsection{Out of Memory} |
---|
1769 | |
---|
1770 | Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory; |
---|
1771 | hence the need to return an alternate value for a zero-sized allocation. |
---|
A different approach allowed by the C API is to abort a program when out of memory and return @nullptr@ only for a zero-sized allocation.
---|
1773 | In theory, notifying the programmer of memory failure allows recovery; |
---|
1774 | in practice, it is almost impossible to gracefully recover when out of memory. |
---|
Hence, the cheaper approach of aborting when out of memory and returning @nullptr@ only for a zero-sized allocation is chosen, because no pseudo-allocation is necessary.
---|
1776 | |
---|
1777 | |
---|
1778 | \subsubsection{C Interface} |
---|
1779 | |
---|
1780 | For C, it is possible to increase functionality and orthogonality of the dynamic-memory API to make allocation better for programmers. |
---|
1781 | |
---|
1782 | For existing C allocation routines: |
---|
1783 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1784 | \item |
---|
1785 | @calloc@ sets the sticky zero-fill property. |
---|
1786 | \item |
---|
1787 | @memalign@, @aligned_alloc@, @posix_memalign@, @valloc@ and @pvalloc@ set the sticky alignment property. |
---|
1788 | \item |
---|
1789 | @realloc@ and @reallocarray@ preserve sticky properties. |
---|
1790 | \end{itemize} |
---|
1791 | |
---|
1792 | The C dynamic-memory API is extended with the following routines: |
---|
1793 | |
---|
1794 | \paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}} |
---|
extends @calloc@ for allocating a dynamic array of objects without calculating the total size of the array explicitly but \emph{without} zero-filling the memory.
---|
1796 | @aalloc@ is significantly faster than @calloc@, which is the only alternative given by the standard memory-allocation routines. |
---|
1797 | |
---|
1798 | \noindent\textbf{Usage} |
---|
1799 | @aalloc@ takes two parameters. |
---|
1800 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1801 | \item |
---|
1802 | @dim@: number of array objects |
---|
1803 | \item |
---|
1804 | @elemSize@: size of array object |
---|
1805 | \end{itemize} |
---|
1806 | It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero. |
---|
1807 | |
---|
1808 | \paragraph{\lstinline{void * resize( void * oaddr, size_t size )}} |
---|
1809 | extends @realloc@ for resizing an existing allocation \emph{without} copying previous data into the new allocation or preserving sticky properties. |
---|
1810 | @resize@ is significantly faster than @realloc@, which is the only alternative. |
---|
1811 | |
---|
1812 | \noindent\textbf{Usage} |
---|
1813 | @resize@ takes two parameters. |
---|
1814 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1815 | \item |
---|
1816 | @oaddr@: address to be resized |
---|
1817 | \item |
---|
1818 | @size@: new allocation size (smaller or larger than previous) |
---|
1819 | \end{itemize} |
---|
1820 | It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero. |
---|
1821 | |
---|
1822 | \paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}} |
---|
1823 | extends @aalloc@ and @memalign@ for allocating an aligned dynamic array of objects. |
---|
Sets the sticky alignment property.
---|
1825 | |
---|
1826 | \noindent\textbf{Usage} |
---|
1827 | @amemalign@ takes three parameters. |
---|
1828 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1829 | \item |
---|
1830 | @alignment@: alignment requirement |
---|
1831 | \item |
---|
1832 | @dim@: number of array objects |
---|
1833 | \item |
---|
1834 | @elemSize@: size of array object |
---|
1835 | \end{itemize} |
---|
1836 | It returns the address of the aligned dynamic-array or @NULL@ if either @dim@ or @elemSize@ are zero. |
---|
1837 | |
---|
1838 | \paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}} |
---|
1839 | extends @amemalign@ with zero fill and has the same usage as @amemalign@. |
---|
Sets the sticky zero-fill and alignment properties.
---|
1841 | It returns the address of the aligned, zero-filled dynamic-array or @NULL@ if either @dim@ or @elemSize@ are zero. |
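The following example illustrates these extensions together, using the signatures documented above; since the header declaring them is not shown, prototypes are declared explicitly.
\begin{lstlisting}
#include <stdlib.h>
extern void * aalloc( size_t dim, size_t elemSize );
extern void * resize( void * oaddr, size_t size );
extern void * amemalign( size_t alignment, size_t dim, size_t elemSize );
extern void * cmemalign( size_t alignment, size_t dim, size_t elemSize );

int main( void ) {
	int * ia = aalloc( 100, sizeof( int ) );		// array of 100 ints, not zero filled
	ia = resize( ia, 200 * sizeof( int ) );			// grow; old data not preserved
	double * da = amemalign( 64, 50, sizeof( double ) );	// 64-byte aligned array
	double * za = cmemalign( 64, 50, sizeof( double ) );	// aligned and zero filled
	free( ia ); free( da ); free( za );
}
\end{lstlisting}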
---|
1842 | |
---|
1843 | \paragraph{\lstinline{size_t malloc_alignment( void * addr )}} |
---|
1844 | returns the alignment of the dynamic object for use in aligning similar allocations. |
---|
1845 | |
---|
1846 | \noindent\textbf{Usage} |
---|
1847 | @malloc_alignment@ takes one parameter. |
---|
1848 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1849 | \item |
---|
1850 | @addr@: address of an allocated object. |
---|
1851 | \end{itemize} |
---|
1852 | It returns the alignment of the given object, where objects not allocated with alignment return the minimal allocation alignment. |
---|
1853 | |
---|
1854 | \paragraph{\lstinline{bool malloc_zero_fill( void * addr )}} |
---|
1855 | returns true if the object has the zero-fill sticky property for use in zero filling similar allocations. |
---|
1856 | |
---|
1857 | \noindent\textbf{Usage} |
---|
@malloc_zero_fill@ takes one parameter.
---|
1859 | |
---|
1860 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1861 | \item |
---|
1862 | @addr@: address of an allocated object. |
---|
1863 | \end{itemize} |
---|
1864 | It returns true if the zero-fill sticky property is set and false otherwise. |
---|
1865 | |
---|
1866 | \paragraph{\lstinline{size_t malloc_size( void * addr )}} |
---|
1867 | returns the request size of the dynamic object (updated when an object is resized) for use in similar allocations. |
---|
1868 | See also @malloc_usable_size@. |
---|
1869 | |
---|
1870 | \noindent\textbf{Usage} |
---|
@malloc_size@ takes one parameter.
---|
1872 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1873 | \item |
---|
1874 | @addr@: address of an allocated object. |
---|
1875 | \end{itemize} |
---|
1876 | It returns the request size or zero if @addr@ is @NULL@. |
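For example, the query routines can be combined to allocate a new object with the same sticky properties as an existing one; @similar@ is a hypothetical helper written against the documented signatures.
\begin{lstlisting}
#include <stdbool.h>
#include <stddef.h>
extern size_t malloc_alignment( void * addr );
extern bool malloc_zero_fill( void * addr );
extern size_t malloc_size( void * addr );
extern void * memalign( size_t alignment, size_t size );
extern void * cmemalign( size_t alignment, size_t dim, size_t elemSize );

// allocate a new object with the same size, alignment, and zero-fill
// property as an existing allocation
void * similar( void * old ) {
	size_t align = malloc_alignment( old );
	size_t size = malloc_size( old );
	return malloc_zero_fill( old )
		? cmemalign( align, 1, size )	// aligned and zero filled
		: memalign( align, size );	// aligned only
}
\end{lstlisting}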
---|
1877 | |
---|
1878 | \paragraph{\lstinline{int malloc_stats_fd( int fd )}} |
---|
1879 | changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@). |
---|
1880 | |
---|
1881 | \noindent\textbf{Usage} |
---|
@malloc_stats_fd@ takes one parameter.
---|
1883 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1884 | \item |
---|
1885 | @fd@: file descriptor. |
---|
1886 | \end{itemize} |
---|
1887 | It returns the previous file descriptor. |
---|
1888 | |
---|
1889 | \paragraph{\lstinline{size_t malloc_expansion()}} |
---|
1890 | \label{p:malloc_expansion} |
---|
sets the amount (in bytes) to extend the heap when there is insufficient free storage to service an allocation request.
It returns the heap extension size used throughout a program when requesting more memory from the system using the @sbrk@ system-call, \ie it is called once at heap initialization.
---|
1893 | |
---|
1894 | \paragraph{\lstinline{size_t malloc_mmap_start()}} |
---|
sets the crossover between allocations occurring in the @sbrk@ area or separately mapped.
---|
1896 | It returns the crossover point used throughout a program, \ie called once at heap initialization. |
---|
1897 | |
---|
1898 | \paragraph{\lstinline{size_t malloc_unfreed()}} |
---|
1899 | \label{p:malloc_unfreed} |
---|
sets the amount subtracted to adjust for unfreed program storage (debug only).
It returns the new subtraction amount and is called by @malloc_stats@.
---|
1902 | |
---|
1903 | |
---|
1904 | \subsubsection{\CC Interface} |
---|
1905 | |
---|
1906 | The following extensions take advantage of overload polymorphism in the \CC type-system. |
---|
1907 | |
---|
1908 | \paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}} |
---|
1909 | extends @resize@ with an alignment re\-quirement. |
---|
1910 | |
---|
1911 | \noindent\textbf{Usage} |
---|
1912 | takes three parameters. |
---|
1913 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1914 | \item |
---|
1915 | @oaddr@: address to be resized |
---|
1916 | \item |
---|
1917 | @nalign@: alignment requirement |
---|
1918 | \item |
---|
1919 | @size@: new allocation size (smaller or larger than previous) |
---|
1920 | \end{itemize} |
---|
1921 | It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero. |
---|
1922 | |
---|
1923 | \paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}} |
---|
1924 | extends @realloc@ with an alignment re\-quirement and has the same usage as aligned @resize@. |
---|
1925 | |
---|
1926 | |
---|
1927 | \subsubsection{\CFA Interface} |
---|
1928 | |
---|
1929 | The following extensions take advantage of overload polymorphism in the \CFA type-system. |
---|
1930 | The key safety advantage of the \CFA type system is using the return type to select overloads; |
---|
1931 | hence, a polymorphic routine knows the returned type and its size. |
---|
1932 | This capability is used to remove the object size parameter and correctly cast the return storage to match the result type. |
---|
1933 | For example, the following is the \CFA wrapper for C @malloc@: |
---|
1934 | \begin{cfa} |
---|
1935 | forall( T & | sized(T) ) { |
---|
1936 | T * malloc( void ) { |
---|
1937 | if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation |
---|
1938 | else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation |
---|
	} // malloc
} // distribution
---|
1940 | \end{cfa} |
---|
1941 | and is used as follows: |
---|
1942 | \begin{lstlisting} |
---|
1943 | int * i = malloc(); |
---|
1944 | double * d = malloc(); |
---|
1945 | struct Spinlock { ... } __attribute__(( aligned(128) )); |
---|
1946 | Spinlock * sl = malloc(); |
---|
1947 | \end{lstlisting} |
---|
1948 | where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type. |
---|
1949 | This interface removes many of the common allocation errors in C programs. |
---|
Figure~\ref{f:CFADynamicAllocationAPI} shows the \CFA wrappers for the equivalent C/\CC allocation routines with the same semantic behaviour.
---|
1951 | |
---|
1952 | \begin{figure} |
---|
1953 | \begin{lstlisting} |
---|
1954 | T * malloc( void ); |
---|
1955 | T * aalloc( size_t dim ); |
---|
1956 | T * calloc( size_t dim ); |
---|
1957 | T * resize( T * ptr, size_t size ); |
---|
1958 | T * realloc( T * ptr, size_t size ); |
---|
1959 | T * memalign( size_t align ); |
---|
1960 | T * amemalign( size_t align, size_t dim ); |
---|
1961 | T * cmemalign( size_t align, size_t dim ); |
---|
1962 | T * aligned_alloc( size_t align ); |
---|
1963 | int posix_memalign( T ** ptr, size_t align ); |
---|
1964 | T * valloc( void ); |
---|
1965 | T * pvalloc( void ); |
---|
1966 | \end{lstlisting} |
---|
1967 | \caption{\CFA C-Style Dynamic-Allocation API} |
---|
1968 | \label{f:CFADynamicAllocationAPI} |
---|
1969 | \end{figure} |
---|
1970 | |
---|
1971 | In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase orthogonality and usability of dynamic-memory allocation. |
---|
1972 | This interface helps programmers in three ways. |
---|
1973 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
1974 | \item |
---|
1975 | naming: \CFA regular and @ttype@ polymorphism (@ttype@ polymorphism in \CFA is similar to \CC variadic templates) is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations. |
---|
1976 | \item |
---|
named arguments: individual allocation properties are specified using postfix function call, so programmers do not have to remember parameter positions in allocation calls.
---|
1978 | \item |
---|
1979 | object size: like the \CFA's C-interface, programmers do not have to specify object size or cast allocation results. |
---|
1980 | \end{itemize} |
---|
1981 | Note, postfix function call is an alternative call syntax, using backtick @`@, where the argument appears before the function name, \eg |
---|
1982 | \begin{cfa} |
---|
duration ?@`@h( int h ); // ? denotes the position of the function operand
---|
1984 | duration ?@`@m( int m ); |
---|
1985 | duration ?@`@s( int s ); |
---|
1986 | duration dur = 3@`@h + 42@`@m + 17@`@s; |
---|
1987 | \end{cfa} |
---|
1988 | |
---|
1989 | \paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}} |
---|
1990 | is overloaded with a variable number of specific allocation operations, or an integer dimension parameter followed by a variable number of specific allocation operations. |
---|
1991 | These allocation operations can be passed as named arguments when calling the \lstinline{alloc} routine. |
---|
1992 | A call without parameters returns a dynamically allocated object of type @T@ (@malloc@). |
---|
1993 | A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@). |
---|
1994 | The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations. |
---|
1995 | The only restriction is for properties @realloc@ and @resize@, which cannot be combined. |
---|
1996 | |
---|
1997 | The allocation property functions are: |
---|
1998 | \subparagraph{\lstinline{T_align ?`align( size_t alignment )}} |
---|
1999 | to align the allocation. |
---|
2000 | The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two, \eg: |
---|
2001 | \begin{cfa} |
---|
2002 | int * i0 = alloc( @4096`align@ ); sout | i0 | nl; |
---|
2003 | int * i1 = alloc( 3, @4096`align@ ); sout | i1; for (i; 3 ) sout | &i1[i]; sout | nl; |
---|
2004 | |
---|
2005 | 0x555555572000 |
---|
2006 | 0x555555574000 0x555555574000 0x555555574004 0x555555574008 |
---|
2007 | \end{cfa} |
---|
2008 | returns a dynamic object and object array aligned on a 4096-byte boundary. |
---|
2009 | |
---|
2010 | \subparagraph{\lstinline{S_fill(T) ?`fill ( /* various types */ )}} |
---|
2011 | to initialize storage. |
---|
2012 | There are three ways to fill storage: |
---|
2013 | \begin{enumerate} |
---|
2014 | \item |
---|
2015 | A char fills each byte of each object. |
---|
2016 | \item |
---|
2017 | An object of the returned type fills each object. |
---|
2018 | \item |
---|
2019 | An object array pointer fills some or all of the corresponding object array. |
---|
2020 | \end{enumerate} |
---|
2021 | For example: |
---|
2022 | \begin{cfa}[numbers=left] |
---|
2023 | int * i0 = alloc( @0n`fill@ ); sout | *i0 | nl; // disambiguate 0 |
---|
2024 | int * i1 = alloc( @5`fill@ ); sout | *i1 | nl; |
---|
2025 | int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl; |
---|
2026 | int * i3 = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | i3[i]; sout | nl; |
---|
2027 | int * i4 = alloc( 5, @0xdeadbeefN`fill@ ); for ( i; 5 ) sout | hex( i4[i] ); sout | nl; |
---|
2028 | int * i5 = alloc( 5, @i3`fill@ ); for ( i; 5 ) sout | i5[i]; sout | nl; |
---|
2029 | int * i6 = alloc( 5, @[i3, 3]`fill@ ); for ( i; 5 ) sout | i6[i]; sout | nl; |
---|
2030 | \end{cfa} |
---|
2031 | \begin{lstlisting}[numbers=left] |
---|
2032 | 0 |
---|
2033 | 5 |
---|
2034 | 0xfefefefe |
---|
2035 | 5 5 5 5 5 |
---|
2036 | 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef |
---|
2037 | 5 5 5 5 5 |
---|
2038 | 5 5 5 -555819298 -555819298 // two undefined values |
---|
2039 | \end{lstlisting} |
---|
2040 | Examples 1 to 3 fill an object with a value or characters. |
---|
2041 | Examples 4 to 7 fill an array of objects with values, another array, or part of an array. |
---|
2042 | |
---|
2043 | \subparagraph{\lstinline{S_resize(T) ?`resize( void * oaddr )}} |
---|
is used to resize, realign, and fill, where the old object data is not copied to the new object.
---|
2045 | The old object type may be different from the new object type, since the values are not used. |
---|
2046 | For example: |
---|
2047 | \begin{cfa}[numbers=left] |
---|
2048 | int * i = alloc( @5`fill@ ); sout | i | *i; |
---|
2049 | i = alloc( @i`resize@, @256`align@, @7`fill@ ); sout | i | *i; |
---|
2050 | double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ ); sout | d | *d; |
---|
2051 | \end{cfa} |
---|
2052 | \begin{lstlisting}[numbers=left] |
---|
2053 | 0x55555556d5c0 5 |
---|
2054 | 0x555555570000 7 |
---|
2055 | 0x555555571000 13.5 |
---|
2056 | \end{lstlisting} |
---|
2057 | Examples 2 to 3 change the alignment, fill, and size for the initial storage of @i@. |
---|
2058 | |
---|
2059 | \begin{cfa}[numbers=left] |
---|
2060 | int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl; |
---|
2061 | ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl; |
---|
sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;
---|
2063 | ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl; |
---|
2064 | \end{cfa} |
---|
2065 | \begin{lstlisting}[numbers=left] |
---|
2066 | 5 5 5 5 5 |
---|
2067 | 7 7 7 7 7 7 7 7 7 7 |
---|
2068 | 0x55555556d560 0x555555571a00 13 13 13 13 13 |
---|
2069 | 0x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2 |
---|
2070 | \end{lstlisting} |
---|
2071 | Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@. |
---|
2072 | |
---|
2073 | \subparagraph{\lstinline{S_realloc(T) ?`realloc( T * a ))}} |
---|
is used to resize, realign, and fill, where the old object data is copied to the new object.
---|
2075 | The old object type must be the same as the new object type, since the value is used. |
---|
2076 | Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter. |
---|
2077 | For example: |
---|
2078 | \begin{cfa}[numbers=left] |
---|
2079 | int * i = alloc( @5`fill@ ); sout | i | *i; |
---|
2080 | i = alloc( @i`realloc@, @256`align@ ); sout | i | *i; |
---|
2081 | i = alloc( @i`realloc@, @4096`align@, @13`fill@ ); sout | i | *i; |
---|
2082 | \end{cfa} |
---|
2083 | \begin{lstlisting}[numbers=left] |
---|
2084 | 0x55555556d5c0 5 |
---|
2085 | 0x555555570000 5 |
---|
2086 | 0x555555571000 5 |
---|
2087 | \end{lstlisting} |
---|
2088 | Examples 2 to 3 change the alignment for the initial storage of @i@. |
---|
2089 | The @13`fill@ in example 3 does nothing because no extra space is added. |
---|
2090 | |
---|
2091 | \begin{cfa}[numbers=left] |
---|
2092 | int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl; |
---|
2093 | ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl; |
---|
sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;
---|
2095 | ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl; |
---|
2096 | \end{cfa} |
---|
2097 | \begin{lstlisting}[numbers=left] |
---|
2098 | 5 5 5 5 5 |
---|
2099 | 5 5 5 5 5 7 7 7 7 7 |
---|
2100 | 0x55555556c560 0x555555570a00 5 |
---|
2101 | 0x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2 |
---|
2102 | \end{lstlisting} |
---|
2103 | Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@. |
---|
2104 | The @13`fill@ in example 3 does nothing because no extra space is added. |
---|
2105 | |
---|
2106 | These \CFA allocation features are used extensively in the development of the \CFA runtime. |
---|
2107 | |
---|
2108 | |
---|
2109 | \section{Benchmarks} |
---|
2110 | \label{s:Benchmarks} |
---|
2111 | |
---|
2112 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2113 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2114 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite |
---|
2115 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2116 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2117 | |
---|
2118 | There are two basic approaches for evaluating computer software: benchmarks and micro-benchmarks. |
---|
2119 | \begin{description} |
---|
2120 | \item[Benchmarks] |
---|
2121 | are a suite of application programs (SPEC CPU/WEB) that are exercised in a common way (inputs) to find differences among underlying software implementations associated with an application (compiler, memory allocator, web server, \etc). |
---|
2122 | The applications are supposed to represent common execution patterns that need to perform well with respect to an underlying software implementation. |
---|
2123 | Benchmarks are often criticized for having overlapping patterns, insufficient patterns, or extraneous code that masks patterns. |
---|
2124 | \item[Micro-Benchmarks] |
---|
2125 | attempt to extract the common execution patterns associated with an application and run the pattern independently. |
---|
This approach removes any masking from extraneous application code, allows the execution pattern to be very precise, and provides an opportunity for the execution pattern to have multiple independent tuning adjustments (knobs).
---|
2127 | Micro-benchmarks are often criticized for inadequately representing real-world applications. |
---|
2128 | \end{description} |
---|
2129 | |
---|
2130 | While some crucial software components have standard benchmarks, no standard benchmark exists for testing and comparing memory allocators. |
---|
In the past, an assortment of applications has been used for benchmarking allocators~\cite{Detlefs93,Berger00,Berger01,berger02reconsidering}: P2C, GS, Espresso/Espresso-2, CFRAC/CFRAC-2, GMake, GCC, Perl/Perl-2, Gawk/Gawk-2, XPDF/XPDF-2, ROBOOP, Lindsay.
As well, an assortment of micro-benchmarks has been used for benchmarking allocators~\cite{larson99memory,Berger00,streamflow}: threadtest, shbench, Larson, consume, false sharing.
---|
2133 | Many of these benchmark applications and micro-benchmarks are old and may not reflect current application allocation patterns. |
---|
2134 | |
---|
2135 | This work designs and examines a new set of micro-benchmarks for memory allocators that test a variety of allocation patterns, each with multiple tuning parameters. |
---|
2136 | The aim of the micro-benchmark suite is to create a set of programs that can evaluate a memory allocator based on the key performance metrics such as speed, memory overhead, and cache performance. |
---|
2137 | % These programs can be taken as a standard to benchmark an allocator's basic goals. |
---|
2138 | These programs give details of an allocator's memory overhead and speed under certain allocation patterns. |
---|
The allocation patterns are configurable (adjustment knobs) to observe an allocator's performance across a spectrum of allocation patterns, which is seldom possible with benchmark programs.
---|
2140 | Each micro-benchmark program has multiple control knobs specified by command-line arguments. |
---|
2141 | |
---|
2142 | The new micro-benchmark suite measures performance by allocating dynamic objects and measuring specific metrics. |
---|
2143 | An allocator's speed is benchmarked in different ways, as are issues like false sharing. |
---|
2144 | |
---|
2145 | |
---|
2146 | \subsection{Prior Multi-Threaded Micro-Benchmarks} |
---|
2147 | |
---|
2148 | Modern memory allocators, such as llheap, must handle multi-threaded programs at the KT and UT level. |
---|
2149 | The following multi-threaded micro-benchmarks are presented to give a sense of prior work~\cite{Berger00} at the KT level. |
---|
2150 | None of the prior work addresses multi-threading at the UT level. |
---|
2151 | |
---|
2152 | |
---|
2153 | \subsubsection{threadtest} |
---|
2154 | |
---|
2155 | This benchmark stresses the ability of the allocator to handle different threads allocating and deallocating independently. |
---|
2156 | There is no interaction among threads, \ie no object sharing. |
---|
Each thread repeatedly allocates 100,000 \emph{8-byte} objects and then deallocates them in the order they were allocated.
The benchmark's execution time is used to evaluate the allocator's efficiency.
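The per-thread work can be sketched in C as follows; this is a minimal sketch assuming the standard C allocation interface, where the repetition count @ROUNDS@ is illustrative rather than taken from the original benchmark.
\begin{cfa}
#include <stdlib.h>
enum { NOBJS = 100000, ROUNDS = 100 };	// ROUNDS is an illustrative constant
void * worker( void * arg ) {	// threads run independently, no object sharing
	char ** objs = malloc( NOBJS * sizeof( char * ) );
	for ( int r = 0; r < ROUNDS; r += 1 ) {
		for ( int i = 0; i < NOBJS; i += 1 ) objs[i] = malloc( 8 );	// 8-byte objects
		for ( int i = 0; i < NOBJS; i += 1 ) free( objs[i] );	// free in allocation order
	}
	free( objs );
	return NULL;
}
\end{cfa}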
---|
2159 | |
---|
2160 | |
---|
2161 | \subsubsection{shbench} |
---|
2162 | |
---|
This benchmark is similar to threadtest, but each thread randomly allocates and frees a number of \emph{random-sized} objects.
It is a stress test that also uses execution time to determine the efficiency of the allocator.
---|
2165 | |
---|
2166 | |
---|
2167 | \subsubsection{Larson} |
---|
2168 | |
---|
2169 | This benchmark simulates a server environment. |
---|
2170 | Multiple threads are created where each thread allocates and frees a number of random-sized objects within a size range. |
---|
2171 | Before the thread terminates, it passes its array of 10,000 objects to a new child thread to continue the process. |
---|
2172 | The number of thread generations varies depending on the thread speed. |
---|
2173 | It calculates memory operations per second as an indicator of the memory allocator's performance. |
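One Larson thread generation can be sketched in C as follows; a hedged sketch, where the size range and the @running@ flag (cleared when the timed run ends) stand in for the benchmark's actual parameters and termination logic.
\begin{cfa}
#include <stdlib.h>
#include <pthread.h>
enum { OBJS = 10000 };
extern volatile int running;	// cleared when the timed run ends (assumed)
void * worker( void * objv ) {
	char ** objs = objv;	// array inherited from the parent thread
	for ( int i = 0; i < OBJS && running; i += 1 ) {
		free( objs[i] );	// free and replace with a random-sized object
		objs[i] = malloc( 50 + rand() % 451 );	// sizes in [50, 500]
	}
	pthread_t child;	// pass the array to the next thread generation
	if ( running ) pthread_create( &child, NULL, worker, objs );
	return NULL;
}
\end{cfa}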
---|
2174 | |
---|
2175 | |
---|
2176 | \subsection{New Multi-Threaded Micro-Benchmarks} |
---|
2177 | |
---|
2178 | The following new benchmarks were created to assess multi-threaded programs at the KT and UT level. |
---|
2179 | For generating random values, two generators are supported: uniform~\cite{uniformPRNG} and fisher~\cite{fisherPRNG}. |
---|
2180 | |
---|
2181 | |
---|
2182 | \subsubsection{Churn Benchmark} |
---|
2183 | \label{s:ChurnBenchmark} |
---|
2184 | |
---|
2185 | The churn benchmark measures the runtime speed of an allocator in a multi-threaded scenario, where each thread extensively allocates and frees dynamic memory. |
---|
2186 | Only @malloc@ and @free@ are used to eliminate any extra cost, such as @memcpy@ in @calloc@ or @realloc@. |
---|
2187 | Churn simulates a memory intensive program and can be tuned to create different scenarios. |
---|
2188 | |
---|
2189 | Figure~\ref{fig:ChurnBenchFig} shows the pseudo code for the churn micro-benchmark. |
---|
2190 | This benchmark creates a buffer with M spots and an allocation in each spot, and then starts K threads. |
---|
2191 | Each thread picks a random spot in M, frees the object currently at that spot, and allocates a new object for that spot. |
---|
2192 | Each thread repeats this cycle N times. |
---|
2193 | The main thread measures the total time taken for the whole benchmark and that time is used to evaluate the memory allocator's performance. |
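The worker loop can be sketched in C as follows; a minimal sketch, where @spots@, @M@, @N@, and @objsize()@ stand in for the adjustment knobs listed below.
\begin{cfa}
#include <stdlib.h>
extern char ** spots;	// shared buffer of M spots
extern int M, N;
extern size_t objsize( void );	// size drawn from the configured distribution
void * churn_worker( void * arg ) {
	for ( int i = 0; i < N; i += 1 ) {
		int r = rand() % M;	// pick a random spot
		free( spots[r] );	// free the object currently at that spot
		spots[r] = malloc( objsize() );	// replace it with a new object
	}
	return NULL;
}
// NOTE: concurrent access to a spot must be synchronized in practice,
// e.g., by atomically exchanging the spot's pointer; elided for brevity.
\end{cfa}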
---|
2194 | |
---|
2195 | \begin{figure} |
---|
2196 | \centering |
---|
2197 | \begin{lstlisting} |
---|
2198 | Main Thread |
---|
2199 | create worker threads |
---|
2200 | note time T1 |
---|
2201 | ... |
---|
2202 | note time T2 |
---|
2203 | churn_speed = (T2 - T1) |
---|
2204 | Worker Thread |
---|
2205 | initialize variables |
---|
2206 | ... |
---|
2207 | for ( N ) |
---|
2208 | R = random spot in array |
---|
2209 | free R |
---|
2210 | allocate new object at R |
---|
2211 | \end{lstlisting} |
---|
2212 | %\includegraphics[width=1\textwidth]{figures/bench-churn.eps} |
---|
2213 | \caption{Churn Benchmark} |
---|
2214 | \label{fig:ChurnBenchFig} |
---|
2215 | \end{figure} |
---|
2216 | |
---|
2217 | The adjustment knobs for churn are: |
---|
2218 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2219 | \item[thread:] |
---|
2220 | number of threads (K). |
---|
2221 | \item[spots:] |
---|
2222 | number of spots for churn (M). |
---|
2223 | \item[obj:] |
---|
2224 | number of objects per thread (N). |
---|
2225 | \item[max:] |
---|
2226 | maximum object size. |
---|
2227 | \item[min:] |
---|
2228 | minimum object size. |
---|
2229 | \item[step:] |
---|
2230 | object size increment. |
---|
2231 | \item[distro:] |
---|
object size distribution.
---|
2233 | \end{description} |
---|
2234 | |
---|
2235 | |
---|
2236 | \subsubsection{Cache Thrash} |
---|
2237 | \label{sec:benchThrashSec} |
---|
2238 | |
---|
2239 | The cache-thrash micro-benchmark measures allocator-induced active false-sharing as illustrated in Section~\ref{s:AllocatorInducedActiveFalseSharing}. |
---|
2240 | If memory is allocated for multiple threads on the same cache line, this can significantly slow down program performance. |
---|
When threads share a cache line, frequent reads/writes to their objects on that line cause cache misses, which cause escalating delays as cache distance increases.
---|
2242 | |
---|
Cache thrash creates a scenario that leads to false sharing if the underlying memory allocator allocates dynamic memory for multiple threads on the same cache lines.
---|
2244 | Ideally, a memory allocator should distance the dynamic memory region of one thread from another. |
---|
Having multiple threads allocating small objects simultaneously can cause a memory allocator to allocate objects on the same cache line, if it is not distancing the memory of different threads.
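For instance, assuming 64-byte cache lines, whether two small allocations land on the same line can be checked from their addresses; when the allocations are made by different threads and the predicate holds, the allocator has induced active false-sharing.
\begin{cfa}
#include <stdlib.h>
#include <stdint.h>
enum { CACHE_LINE = 64 };	// assumed cache-line size
char * p = malloc( 8 ), * q = malloc( 8 );
_Bool same_line = (uintptr_t)p / CACHE_LINE == (uintptr_t)q / CACHE_LINE;
\end{cfa}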
---|
2246 | |
---|
2247 | Figure~\ref{fig:benchThrashFig} shows the pseudo code for the cache-thrash micro-benchmark. |
---|
2248 | First, it creates K worker threads. |
---|
Each worker thread allocates an object and intensively reads/writes it M times to possibly invalidate cache lines that may interfere with other threads sharing the same cache line.
Each thread repeats this N times.
---|
2251 | The main thread measures the total time taken for all worker threads to complete. |
---|
2252 | Worker threads sharing cache lines with each other are expected to take longer. |
---|
2253 | |
---|
2254 | \begin{figure} |
---|
2255 | \centering |
---|
2256 | \input{AllocInducedActiveFalseSharing} |
---|
2257 | \medskip |
---|
2258 | \begin{lstlisting} |
---|
2259 | Main Thread |
---|
2260 | create worker threads |
---|
2261 | ... |
---|
2262 | signal workers to allocate |
---|
2263 | ... |
---|
2264 | signal workers to free |
---|
2265 | ... |
---|
2266 | Worker Thread$\(_1\)$ |
---|
2267 | warm up memory in chunks of 16 bytes |
---|
2268 | ... |
---|
2269 | For N |
---|
2270 | malloc an object |
---|
2271 | read/write the object M times |
---|
2272 | free the object |
---|
2273 | ... |
---|
2274 | Worker Thread$\(_2\)$ |
---|
2275 | // same as Worker Thread$\(_1\)$ |
---|
2276 | \end{lstlisting} |
---|
2277 | %\input{MemoryOverhead} |
---|
2278 | %\includegraphics[width=1\textwidth]{figures/bench-cache-thrash.eps} |
---|
2279 | \caption{Allocator-Induced Active False-Sharing Benchmark} |
---|
2280 | \label{fig:benchThrashFig} |
---|
2281 | \end{figure} |
---|
2282 | |
---|
2283 | The adjustment knobs for cache access scenarios are: |
---|
2284 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2285 | \item[thread:] |
---|
2286 | number of threads (K). |
---|
2287 | \item[iterations:] |
---|
2288 | iterations of cache benchmark (N). |
---|
2289 | \item[cacheRW:] |
---|
2290 | repetitions of reads/writes to object (M). |
---|
2291 | \item[size:] |
---|
2292 | object size. |
---|
2293 | \end{description} |
---|
2294 | |
---|
2295 | |
---|
2296 | \subsubsection{Cache Scratch} |
---|
2297 | \label{s:CacheScratch} |
---|
2298 | |
---|
2299 | The cache-scratch micro-benchmark measures allocator-induced passive false-sharing as illustrated in Section~\ref{s:AllocatorInducedPassiveFalseSharing}. |
---|
2300 | As with cache thrash, if memory is allocated for multiple threads on the same cache line, this can significantly slow down program performance. |
---|
In this scenario, the false sharing is caused by the memory allocator, although it is initiated by the program sharing an object among threads.
---|
2302 | |
---|
2303 | % An allocator can unintentionally induce false sharing depending upon its management of the freed objects. |
---|
2304 | % If thread Thread$_1$ allocates multiple objects together, they may be allocated on the same cache line by the memory allocator. |
---|
2305 | % If Thread$_1$ passes these object to thread Thread$_2$, then both threads may share the same cache line but this scenario is not induced by the allocator; |
---|
2306 | % instead, the program induced this situation. |
---|
2307 | % Now if Thread$_2$ frees this object and then allocate an object of the same size, the allocator may return the same object, which is on a cache line shared with thread Thread$_1$. |
---|
2308 | |
---|
Cache scratch creates a scenario that leads to program-induced false sharing, which the memory allocator preserves if it does not return a freed object to its owner thread but instead reuses it immediately.
An allocator using object ownership, as described in Section~\ref{s:Ownership}, is less susceptible to allocator-induced passive false-sharing.
---|
2311 | If the object is returned to the thread that owns it, then the new object that the thread gets is less likely to be on the same cache line. |
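The failure mode reduces to three steps, sketched in C below; @recv_from_T1()@ is a hypothetical helper standing in for the main thread handing its allocation to this worker.
\begin{cfa}
#include <stdlib.h>
extern void * recv_from_T1( void );	// hypothetical handoff from thread T1
void * o = recv_from_T1();	// object originally allocated by thread T1
free( o );	// without ownership, o lands on T2's free list
char * mine = malloc( 8 );	// may reuse o's storage, which sits on a cache
							// line adjacent to T1's other objects
\end{cfa}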
---|
2312 | |
---|
2313 | Figure~\ref{fig:benchScratchFig} shows the pseudo code for the cache-scratch micro-benchmark. |
---|
First, it allocates K dynamic objects together, one for each of the K worker threads, possibly causing the memory allocator to allocate these objects on the same cache line.
Then it creates K worker threads and passes one of the K allocated objects to each thread.
Each worker thread frees the object passed by the main thread.
Then, it allocates a new object and reads/writes it M times, possibly causing frequent cache invalidations.
---|
2318 | Each worker repeats this N times. |
---|
2319 | |
---|
2320 | \begin{figure} |
---|
2321 | \centering |
---|
2322 | \input{AllocInducedPassiveFalseSharing} |
---|
2323 | \medskip |
---|
2324 | \begin{lstlisting} |
---|
2325 | Main Thread |
---|
2326 | malloc N objects $for$ each worker $thread$ |
---|
2327 | create worker threads and pass N objects to each worker |
---|
2328 | ... |
---|
2329 | signal workers to allocate |
---|
2330 | ... |
---|
2331 | signal workers to free |
---|
2332 | ... |
---|
2333 | Worker Thread$\(_1\)$ |
---|
2334 | warmup memory in chunks of 16 bytes |
---|
2335 | ... |
---|
2336 | free the object passed by the Main Thread |
---|
2337 | For N |
---|
2338 | malloc new object |
---|
2339 | read/write the object M times |
---|
2340 | free the object |
---|
2341 | ... |
---|
2342 | Worker Thread$\(_2\)$ |
---|
2343 | // same as Worker Thread$\(_1\)$ |
---|
2344 | \end{lstlisting} |
---|
2345 | %\includegraphics[width=1\textwidth]{figures/bench-cache-scratch.eps} |
---|
2346 | \caption{Program-Induced Passive False-Sharing Benchmark} |
---|
2347 | \label{fig:benchScratchFig} |
---|
2348 | \end{figure} |
---|
2349 | |
---|
If the allocator does not return the initial object to its owner (the main thread), then a thread allocating an object after freeing the original object passed by the main thread receives the same storage that was initially allocated by the main thread.
Then, intensive read/write on the shared cache line by multiple threads should slow down the worker threads due to high cache invalidations and misses.
The main thread measures the total time taken for all the workers to complete.
---|
2353 | |
---|
As with the cache-thrash benchmark in Section~\ref{sec:benchThrashSec}, different cache-access scenarios can be created using the following command-line arguments.
---|
2355 | \begin{description}[topsep=0pt,itemsep=0pt,parsep=0pt] |
---|
2356 | \item[threads:] |
---|
2357 | number of threads (K). |
---|
2358 | \item[iterations:] |
---|
2359 | iterations of cache benchmark (N). |
---|
2360 | \item[cacheRW:] |
---|
2361 | repetitions of reads/writes to object (M). |
---|
2362 | \item[size:] |
---|
2363 | object size. |
---|
2364 | \end{description} |
---|
2365 | |
---|
2366 | |
---|
2367 | \subsubsection{Speed Micro-Benchmark} |
---|
2368 | \label{s:SpeedMicroBenchmark} |
---|
2369 | \vspace*{-4pt} |
---|
2370 | |
---|
The speed benchmark measures the runtime speed of individual memory-allocation routines and of sequences of them:
---|
2372 | \begin{enumerate}[topsep=-5pt,itemsep=0pt,parsep=0pt] |
---|
2373 | \item malloc |
---|
2374 | \item realloc |
---|
2375 | \item free |
---|
2376 | \item calloc |
---|
2377 | \item malloc-free |
---|
2378 | \item realloc-free |
---|
2379 | \item calloc-free |
---|
2380 | \item malloc-realloc |
---|
2381 | \item calloc-realloc |
---|
2382 | \item malloc-realloc-free |
---|
2383 | \item calloc-realloc-free |
---|
2384 | \item malloc-realloc-free-calloc |
---|
2385 | \end{enumerate} |
---|
2386 | |
---|
2387 | Figure~\ref{fig:SpeedBenchFig} shows the pseudo code for the speed micro-benchmark. |
---|
2388 | Each routine in the chain is called for N objects and then those allocated objects are used when calling the next routine in the allocation chain. |
---|
This tests the latency of the memory allocator when multiple routines are chained together, \eg the call sequence malloc-realloc-free-calloc gives a complete picture of the major allocation routines combined.
---|
2390 | For each chain, the time is recorded to visualize performance of a memory allocator against each chain. |
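As an illustration, timing one chain can be sketched in C with @clock_gettime@; @N@, @size@, and @objs@ mirror the benchmark knobs and are assumed to be set up beforehand.
\begin{cfa}
#include <stdlib.h>
#include <time.h>
extern int N; extern size_t size; extern char ** objs;
struct timespec t1, t2;
clock_gettime( CLOCK_MONOTONIC, &t1 );
for ( int i = 0; i < N; i += 1 ) objs[i] = malloc( size );	// chain step 1
for ( int i = 0; i < N; i += 1 ) objs[i] = realloc( objs[i], 2 * size );	// step 2
for ( int i = 0; i < N; i += 1 ) free( objs[i] );	// step 3
clock_gettime( CLOCK_MONOTONIC, &t2 );
double chain_time = (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) * 1e-9;
\end{cfa}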
---|
2391 | |
---|
2392 | \begin{figure} |
---|
2393 | \centering |
---|
2394 | \begin{lstlisting}[morekeywords={foreach}] |
---|
2395 | Main Thread |
---|
2396 | create worker threads |
---|
2397 | foreach ( allocation chain ) |
---|
2398 | note time T1 |
---|
2399 | ... |
---|
2400 | note time T2 |
---|
chain_speed = (T2 - T1) / ( number-of-worker-threads * N )
---|
2402 | Worker Thread |
---|
2403 | initialize variables |
---|
2404 | ... |
---|
2405 | foreach ( routine in allocation chain ) |
---|
2406 | call routine N times |
---|
2407 | \end{lstlisting} |
---|
2408 | %\includegraphics[width=1\textwidth]{figures/bench-speed.eps} |
---|
2409 | \caption{Speed Benchmark} |
---|
2410 | \label{fig:SpeedBenchFig} |
---|
2411 | \end{figure} |
---|
2412 | |
---|
2413 | The adjustment knobs for memory usage are: |
---|
2414 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2415 | \item[max:] |
---|
2416 | maximum object size. |
---|
2417 | \item[min:] |
---|
2418 | minimum object size. |
---|
2419 | \item[step:] |
---|
2420 | object size increment. |
---|
2421 | \item[distro:] |
---|
2422 | object size distribution. |
---|
2423 | \item[objects:] |
---|
2424 | number of objects per thread. |
---|
2425 | \item[workers:] |
---|
2426 | number of worker threads. |
---|
2427 | \end{description} |
---|
2428 | |
---|
2429 | |
---|
2430 | \subsubsection{Memory Micro-Benchmark} |
---|
2431 | \label{s:MemoryMicroBenchmark} |
---|
2432 | |
---|
2433 | The memory micro-benchmark measures the memory overhead of an allocator. |
---|
It allocates a number of dynamic objects and reads @/proc/self/maps@ to get the total memory requested by the allocator from the OS.
---|
2435 | It calculates the memory overhead by computing the difference between the memory the allocator requests from the OS and the memory that the program allocates. |
---|
2436 | This micro-benchmark is like Larson and stresses the ability of an allocator to deal with object sharing. |
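The measurement can be sketched in C as follows; this sketch sums the sizes of all mapped regions, whereas the real benchmark additionally classifies each region (heap, @mmap@, shared library) before summing.
\begin{cfa}
#include <stdio.h>
#include <stddef.h>
size_t mapped_bytes( void ) {	// total bytes mapped into the process
	FILE * maps = fopen( "/proc/self/maps", "r" );
	if ( maps == NULL ) return 0;
	size_t total = 0, start, end;
	char line[256];
	while ( fgets( line, sizeof(line), maps ) != NULL )
		if ( sscanf( line, "%zx-%zx", &start, &end ) == 2 )	// "start-end ..." in hex
			total += end - start;
	fclose( maps );
	return total;
}
\end{cfa}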
---|
2437 | |
---|
2438 | Figure~\ref{fig:MemoryBenchFig} shows the pseudo code for the memory micro-benchmark. |
---|
It creates a producer-consumer scenario with K producer threads, where each producer has M consumer threads.
---|
2440 | A producer has a separate buffer for each consumer and allocates N objects of random sizes following a configurable distribution for each consumer. |
---|
2441 | A consumer frees these objects. |
---|
Program memory usage is recorded after every memory operation throughout the run.
These data are used to visualize the program's memory usage and consumption.
---|
2444 | |
---|
2445 | \begin{figure} |
---|
2446 | \centering |
---|
2447 | \begin{lstlisting} |
---|
2448 | Main Thread |
---|
2449 | print memory snapshot |
---|
2450 | create producer threads |
---|
2451 | Producer Thread (K) |
---|
2452 | set free start |
---|
2453 | create consumer threads |
---|
2454 | for ( N ) |
---|
2455 | allocate memory |
---|
2456 | print memory snapshot |
---|
2457 | Consumer Thread (M) |
---|
2458 | wait while ( allocations < free start ) |
---|
2459 | for ( N ) |
---|
2460 | free memory |
---|
2461 | print memory snapshot |
---|
2462 | \end{lstlisting} |
---|
2463 | %\includegraphics[width=1\textwidth]{figures/bench-memory.eps} |
---|
2464 | \caption{Memory Footprint Micro-Benchmark} |
---|
2465 | \label{fig:MemoryBenchFig} |
---|
2466 | \end{figure} |
---|
2467 | |
---|
2468 | The global adjustment knobs for this micro-benchmark are: |
---|
2469 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2470 | \item[producer (K):] |
---|
2471 | sets the number of producer threads. |
---|
2472 | \item[consumer (M):] |
---|
sets the number of consumer threads for each producer.
---|
2474 | \item[round:] |
---|
2475 | sets production and consumption round size. |
---|
2476 | \end{description} |
---|
2477 | |
---|
2478 | The adjustment knobs for object allocation are: |
---|
2479 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2480 | \item[max:] |
---|
2481 | maximum object size. |
---|
2482 | \item[min:] |
---|
2483 | minimum object size. |
---|
2484 | \item[step:] |
---|
2485 | object size increment. |
---|
2486 | \item[distro:] |
---|
2487 | object size distribution. |
---|
2488 | \item[objects (N):] |
---|
2489 | number of objects per thread. |
---|
2490 | \end{description} |
---|
2491 | |
---|
2492 | |
---|
2493 | \section{Performance} |
---|
2494 | \label{c:Performance} |
---|
2495 | |
---|
2496 | This section uses the micro-benchmarks from Section~\ref{s:Benchmarks} to test a number of current memory allocators, including llheap. |
---|
2497 | The goal is to see if llheap is competitive with the currently popular memory allocators. |
---|
2498 | |
---|
2499 | |
---|
2500 | \subsection{Machine Specification} |
---|
2501 | |
---|
2502 | The performance experiments were run on two different multi-core architectures (x64 and ARM) to determine if there is consistency across platforms: |
---|
2503 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
2504 | \item |
---|
2505 | \textbf{Algol} Huawei ARM TaiShan 2280 V2 Kunpeng 920, 24-core socket $\times$ 4, 2.6 GHz, GCC version 9.4.0 |
---|
2506 | \item |
---|
2507 | \textbf{Nasus} AMD EPYC 7662, 64-core socket $\times$ 2, 2.0 GHz, GCC version 9.3.0 |
---|
2508 | \end{itemize} |
---|
2509 | |
---|
2510 | |
---|
2511 | \subsection{Existing Memory Allocators} |
---|
2512 | \label{sec:curAllocatorSec} |
---|
2513 | |
---|
2514 | With dynamic allocation being an important feature of C, there are many stand-alone memory allocators that have been designed for different purposes. |
---|
2515 | For this work, 7 of the most popular and widely used memory allocators were selected for comparison, along with llheap. |
---|
2516 | |
---|
2517 | \paragraph{llheap (\textsf{llh})} |
---|
is the thread-safe allocator from Section~\ref{c:Allocator}.
---|
2519 | \\ |
---|
\textbf{Version:} 1.0\\
---|
2521 | \textbf{Configuration:} Compiled with dynamic linking, but without statistics or debugging.\\ |
---|
2522 | \textbf{Compilation command:} @make@ |
---|
2523 | |
---|
2524 | \paragraph{glibc (\textsf{glc})} |
---|
2525 | \cite{glibc} is the default glibc thread-safe allocator. |
---|
2526 | \\ |
---|
2527 | \textbf{Version:} Ubuntu GLIBC 2.31-0ubuntu9.7 2.31\\ |
---|
2528 | \textbf{Configuration:} Compiled by Ubuntu 20.04.\\ |
---|
2529 | \textbf{Compilation command:} N/A |
---|
2530 | |
---|
2531 | \paragraph{dlmalloc (\textsf{dl})} |
---|
\cite{dlmalloc} is a thread-safe allocator that uses a single heap shared by all threads.
---|
2533 | It maintains free-lists of different sizes to store freed dynamic memory. |
---|
2534 | \\ |
---|
2535 | \textbf{Version:} 2.8.6\\ |
---|
2536 | \textbf{Configuration:} Compiled with preprocessor @USE_LOCKS@.\\ |
---|
2537 | \textbf{Compilation command:} @gcc -g3 -O3 -Wall -Wextra -fno-builtin-malloc -fno-builtin-calloc@ @-fno-builtin-realloc -fno-builtin-free -fPIC -shared -DUSE_LOCKS -o libdlmalloc.so malloc-2.8.6.c@ |
---|
2538 | |
---|
2539 | \paragraph{hoard (\textsf{hrd})} |
---|
2540 | \cite{hoard} is a thread-safe allocator that is multi-threaded and uses a heap layer framework. It has per-thread heaps that have thread-local free-lists, and a global shared heap. |
---|
2541 | \\ |
---|
2542 | \textbf{Version:} 3.13\\ |
---|
2543 | \textbf{Configuration:} Compiled with hoard's default configurations and @Makefile@.\\ |
---|
2544 | \textbf{Compilation command:} @make all@ |
---|
2545 | |
---|
2546 | \paragraph{jemalloc (\textsf{je})} |
---|
2547 | \cite{jemalloc} is a thread-safe allocator that uses multiple arenas. Each thread is assigned an arena. |
---|
Each arena has chunks that contain contiguous memory regions of the same size. An arena has multiple chunks that contain regions of multiple sizes.
---|
2549 | \\ |
---|
2550 | \textbf{Version:} 5.2.1\\ |
---|
2551 | \textbf{Configuration:} Compiled with jemalloc's default configurations and @Makefile@.\\ |
---|
2552 | \textbf{Compilation command:} @autogen.sh; configure; make; make install@ |
---|
2553 | |
---|
2554 | \paragraph{ptmalloc3 (\textsf{pt3})} |
---|
2555 | \cite{ptmalloc3} is a modification of dlmalloc. |
---|
2556 | It is a thread-safe multi-threaded memory allocator that uses multiple heaps. |
---|
A ptmalloc3 heap has a design similar to dlmalloc's heap.
---|
2558 | \\ |
---|
2559 | \textbf{Version:} 1.8\\ |
---|
2560 | \textbf{Configuration:} Compiled with ptmalloc3's @Makefile@ using option ``linux-shared''.\\ |
---|
2561 | \textbf{Compilation command:} @make linux-shared@ |
---|
2562 | |
---|
2563 | \paragraph{rpmalloc (\textsf{rp})} |
---|
\cite{rpmalloc} is a thread-safe allocator that is multi-threaded and uses per-thread heaps.
---|
2565 | Each heap has multiple size-classes and each size-class contains memory regions of the relevant size. |
---|
2566 | \\ |
---|
2567 | \textbf{Version:} 1.4.1\\ |
---|
2568 | \textbf{Configuration:} Compiled with rpmalloc's default configurations and ninja build system.\\ |
---|
2569 | \textbf{Compilation command:} @python3 configure.py; ninja@ |
---|
2570 | |
---|
2571 | \paragraph{tbb malloc (\textsf{tbb})} |
---|
2572 | \cite{tbbmalloc} is a thread-safe allocator that is multi-threaded and uses a private heap for each thread. |
---|
2573 | Each private-heap has multiple bins of different sizes. Each bin contains free regions of the same size. |
---|
2574 | \\ |
---|
\textbf{Version:} Intel TBB 2020 update 2, tbb\_interface\_version == 11102\\
---|
2576 | \textbf{Configuration:} Compiled with tbbmalloc's default configurations and @Makefile@.\\ |
---|
2577 | \textbf{Compilation command:} @make@ |
---|
2578 | |
---|
2579 | % \subsection{Experiment Environment} |
---|
2580 | % We used our micro benchmark suite (FIX ME: cite mbench) to evaluate these memory allocators Section~\ref{sec:curAllocatorSec} and our own memory allocator uHeap Section~\ref{sec:allocatorSec}. |
---|
2581 | |
---|
2582 | \subsection{Experiments} |
---|
2583 | |
---|
Each micro-benchmark is configured and run with each of the allocators.
The less time an allocator takes to complete a benchmark the better, so lower is better in the graphs, except for the Memory micro-benchmark graphs.
---|
2586 | All graphs use log scale on the Y-axis, except for the Memory micro-benchmark (see Section~\ref{s:MemoryMicroBenchmark}). |
---|
2587 | |
---|
2588 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2589 | %% CHURN |
---|
2590 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2591 | |
---|
2592 | \subsubsection{Churn Micro-Benchmark} |
---|
2593 | |
---|
2594 | Churn tests allocators for speed under intensive dynamic memory usage (see Section~\ref{s:ChurnBenchmark}). |
---|
This experiment was run with the following configurations:
---|
2596 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2597 | \item[thread:] |
---|
2598 | 1, 2, 4, 8, 16, 32, 48 |
---|
2599 | \item[spots:] |
---|
2600 | 16 |
---|
2601 | \item[obj:] |
---|
2602 | 100,000 |
---|
2603 | \item[max:] |
---|
2604 | 500 |
---|
2605 | \item[min:] |
---|
2606 | 50 |
---|
2607 | \item[step:] |
---|
2608 | 50 |
---|
2609 | \item[distro:] |
---|
2610 | fisher |
---|
2611 | \end{description} |
---|
2612 | |
---|
2613 | % -maxS : 500 |
---|
2614 | % -minS : 50 |
---|
2615 | % -stepS : 50 |
---|
2616 | % -distroS : fisher |
---|
2617 | % -objN : 100000 |
---|
2618 | % -cSpots : 16 |
---|
2619 | % -threadN : 1, 2, 4, 8, 16 |
---|
2620 | |
---|
Figure~\ref{fig:churn} shows the results for Algol and Nasus.
The X-axis shows the number of threads;
the Y-axis shows the total experiment time.
Each allocator's performance is shown in a different color.
---|
2625 | |
---|
2626 | \begin{figure} |
---|
2627 | \centering |
---|
2628 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/churn} } \\ |
---|
2629 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/churn} } |
---|
2630 | \caption{Churn} |
---|
2631 | \label{fig:churn} |
---|
2632 | \end{figure} |
---|
2633 | |
---|
2634 | \paragraph{Assessment} |
---|
2635 | All allocators did well in this micro-benchmark, except for \textsf{dl} on the ARM. |
---|
\textsf{dl} is the slowest, indicating a small bottleneck with respect to the other allocators.
---|
2637 | \textsf{je} is the fastest, with only a small benefit over the other allocators. |
---|
2638 | % llheap is slightly slower because it uses ownership, where many of the allocations have remote frees, which requires locking. |
---|
2639 | % When llheap is compiled without ownership, its performance is the same as the other allocators (not shown). |
---|
2640 | |
---|
2641 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2642 | %% THRASH |
---|
2643 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2644 | |
---|
2645 | \subsubsection{Cache Thrash} |
---|
2646 | \label{sec:cache-thrash-perf} |
---|
2647 | |
---|
2648 | Thrash tests memory allocators for active false sharing (see Section~\ref{sec:benchThrashSec}). |
---|
This experiment was run with the following configurations:
---|
2650 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2651 | \item[threads:] |
---|
2652 | 1, 2, 4, 8, 16, 32, 48 |
---|
2653 | \item[iterations:] |
---|
2654 | 1,000 |
---|
2655 | \item[cacheRW:] |
---|
2656 | 1,000,000 |
---|
2657 | \item[size:] |
---|
2658 | 1 |
---|
2659 | \end{description} |
---|
2660 | |
---|
2661 | % * Each allocator was tested for its performance across different number of threads. |
---|
2662 | % Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN. |
---|
2663 | |
---|
Figure~\ref{fig:cacheThrash} shows the results for Algol and Nasus.
The X-axis shows the number of threads;
the Y-axis shows the total experiment time.
Each allocator's performance is shown in a different color.
---|
2668 | |
---|
2669 | \begin{figure} |
---|
2670 | \centering |
---|
2671 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache_thrash_0-thrash} } \\ |
---|
2672 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache_thrash_0-thrash} } |
---|
2673 | \caption{Cache Thrash} |
---|
2674 | \label{fig:cacheThrash} |
---|
2675 | \end{figure} |
---|
2676 | |
---|
2677 | \paragraph{Assessment} |
---|
2678 | All allocators did well in this micro-benchmark, except for \textsf{dl} and \textsf{pt3}. |
---|
2679 | \textsf{dl} uses a single heap for all threads so it is understandable that it generates so much active false-sharing. |
---|
2680 | Requests from different threads are dealt with sequentially by the single heap (using a single lock), which can allocate objects to different threads on the same cache line. |
---|
\textsf{pt3} uses the T:H model, so multiple threads can use one heap, but its active false-sharing is less than \textsf{dl}'s.
---|
2682 | The rest of the memory allocators generate little or no active false-sharing. |
---|
2683 | |
---|
2684 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2685 | %% SCRATCH |
---|
2686 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2687 | |
---|
2688 | \subsubsection{Cache Scratch} |
---|
2689 | |
---|
2690 | Scratch tests memory allocators for program-induced allocator-preserved passive false-sharing (see Section~\ref{s:CacheScratch}). |
---|
This experiment was run with the following configurations:
---|
2692 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2693 | \item[threads:] |
---|
2694 | 1, 2, 4, 8, 16, 32, 48 |
---|
2695 | \item[iterations:] |
---|
2696 | 1,000 |
---|
2697 | \item[cacheRW:] |
---|
2698 | 1,000,000 |
---|
2699 | \item[size:] |
---|
2700 | 1 |
---|
2701 | \end{description} |
---|
2702 | |
---|
2703 | % * Each allocator was tested for its performance across different number of threads. |
---|
2704 | % Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN. |
---|
2705 | |
---|
Figure~\ref{fig:cacheScratch} shows the results for Algol and Nasus.
The X-axis shows the number of threads;
the Y-axis shows the total experiment time.
Each allocator's performance is shown in a different color.
---|
2710 | |
---|
2711 | \begin{figure} |
---|
2712 | \centering |
---|
2713 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache_scratch_0-scratch} } \\ |
---|
2714 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache_scratch_0-scratch} } |
---|
2715 | \caption{Cache Scratch} |
---|
2716 | \label{fig:cacheScratch} |
---|
2717 | \end{figure} |
---|
2718 | |
---|
2719 | \paragraph{Assessment} |
---|
2720 | This micro-benchmark divides the allocators into two groups. |
---|
2721 | First is the high-performer group: \textsf{llh}, \textsf{je}, and \textsf{rp}. |
---|
2722 | These memory allocators generate little or no passive false-sharing and their performance difference is negligible. |
---|
2723 | Second is the low-performer group, which includes the rest of the memory allocators. |
---|
These memory allocators have significant program-induced passive false-sharing, where \textsf{hrd} is the worst-performing allocator.
---|
2725 | All of the allocators in this group are sharing heaps among threads at some level. |
---|
2726 | |
---|
Interestingly, allocators such as \textsf{hrd} and \textsf{glc} performed well in the cache-thrash micro-benchmark (see Section~\ref{sec:cache-thrash-perf}), but are among the low performers in cache scratch.
This suggests these allocators do not actively produce false-sharing, but do preserve program-induced passive false-sharing.
---|
2729 | |
---|
2730 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2731 | %% SPEED |
---|
2732 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2733 | |
---|
2734 | \subsubsection{Speed Micro-Benchmark} |
---|
2735 | |
---|
2736 | Speed tests memory allocators for runtime latency (see Section~\ref{s:SpeedMicroBenchmark}). |
---|
This experiment was run with the following configurations:
\begin{description}[itemsep=0pt,parsep=0pt]
---|
2739 | \item[max:] |
---|
2740 | 500 |
---|
2741 | \item[min:] |
---|
2742 | 50 |
---|
2743 | \item[step:] |
---|
2744 | 50 |
---|
2745 | \item[distro:] |
---|
2746 | fisher |
---|
2747 | \item[objects:] |
---|
2748 | 100,000 |
---|
2749 | \item[workers:] |
---|
2750 | 1, 2, 4, 8, 16, 32, 48 |
---|
2751 | \end{description} |
---|
2752 | |
---|
2753 | % -maxS : 500 |
---|
2754 | % -minS : 50 |
---|
2755 | % -stepS : 50 |
---|
2756 | % -distroS : fisher |
---|
2757 | % -objN : 1000000 |
---|
2758 | % -threadN : \{ 1, 2, 4, 8, 16 \} * |
---|
2759 | |
---|
2760 | %* Each allocator was tested for its performance across different number of threads. |
---|
2761 | %Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN. |
---|
2762 | |
---|
Figures~\ref{fig:speed-3-malloc} to~\ref{fig:speed-14-malloc-calloc-realloc-free} show the results, one figure for each of the 12 allocation chains of the speed benchmark.
The X-axis shows the number of threads;
the Y-axis shows the total experiment time.
Each allocator's performance is shown in a different color.
---|
2767 | |
---|
2768 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
2769 | \item Figure~\ref{fig:speed-3-malloc} shows results for chain: malloc |
---|
2770 | \item Figure~\ref{fig:speed-4-realloc} shows results for chain: realloc |
---|
2771 | \item Figure~\ref{fig:speed-5-free} shows results for chain: free |
---|
2772 | \item Figure~\ref{fig:speed-6-calloc} shows results for chain: calloc |
---|
2773 | \item Figure~\ref{fig:speed-7-malloc-free} shows results for chain: malloc-free |
---|
2774 | \item Figure~\ref{fig:speed-8-realloc-free} shows results for chain: realloc-free |
---|
2775 | \item Figure~\ref{fig:speed-9-calloc-free} shows results for chain: calloc-free |
---|
2776 | \item Figure~\ref{fig:speed-10-malloc-realloc} shows results for chain: malloc-realloc |
---|
2777 | \item Figure~\ref{fig:speed-11-calloc-realloc} shows results for chain: calloc-realloc |
---|
2778 | \item Figure~\ref{fig:speed-12-malloc-realloc-free} shows results for chain: malloc-realloc-free |
---|
2779 | \item Figure~\ref{fig:speed-13-calloc-realloc-free} shows results for chain: calloc-realloc-free |
---|
2780 | \item Figure~\ref{fig:speed-14-malloc-calloc-realloc-free} shows results for chain: malloc-realloc-free-calloc |
---|
2781 | \end{itemize} |
---|
2782 | |
---|
2783 | \paragraph{Assessment} |
---|
2784 | This micro-benchmark divides the allocators into two groups: with and without @calloc@. |
---|
@calloc@ uses @memset@ to set the allocated memory to zero, which dominates the cost of the allocation chain (a large runtime increase) and levels performance across the allocators.
---|
2786 | But the difference among the allocators in a @calloc@ chain still gives an idea of their relative performance. |
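The zeroing cost is visible in a simplified C sketch of what @calloc@ must do; this sketch ignores overflow checking and the case where fresh @mmap@ pages arrive pre-zeroed from the OS.
\begin{cfa}
#include <stdlib.h>
#include <string.h>
void * calloc_sketch( size_t n, size_t size ) {
	void * p = malloc( n * size );	// no overflow check in this sketch
	if ( p != NULL ) memset( p, 0, n * size );	// dominates for large n * size
	return p;
}
\end{cfa}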
---|
2787 | |
---|
2788 | All allocators did well in this micro-benchmark across all allocation chains, except for \textsf{dl}, \textsf{pt3}, and \textsf{hrd}. |
---|
Again, the low-performing allocators are sharing heaps among threads, so contention causes the runtime to increase with increasing numbers of threads.
---|
2790 | Furthermore, chains with @free@ can trigger coalescing, which slows the fast path. |
---|
The high-performing allocators all illustrate low latency across the allocation chains, \ie there are no performance spikes as the chain lengthens that might be caused by contention and/or coalescing.
---|
2792 | Low latency is important for applications that are sensitive to unknown execution delays. |
---|
2793 | |
---|
2794 | %speed-3-malloc.eps |
---|
2795 | \begin{figure} |
---|
2796 | \centering |
---|
2797 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-3-malloc} } \\ |
---|
2798 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-3-malloc} } |
---|
2799 | \caption{Speed benchmark chain: malloc} |
---|
2800 | \label{fig:speed-3-malloc} |
---|
2801 | \end{figure} |
---|
2802 | |
---|
2803 | %speed-4-realloc.eps |
---|
2804 | \begin{figure} |
---|
2805 | \centering |
---|
2806 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-4-realloc} } \\ |
---|
2807 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-4-realloc} } |
---|
2808 | \caption{Speed benchmark chain: realloc} |
---|
2809 | \label{fig:speed-4-realloc} |
---|
2810 | \end{figure} |
---|
2811 | |
---|
2812 | %speed-5-free.eps |
---|
2813 | \begin{figure} |
---|
2814 | \centering |
---|
2815 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-5-free} } \\ |
---|
2816 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-5-free} } |
---|
2817 | \caption{Speed benchmark chain: free} |
---|
2818 | \label{fig:speed-5-free} |
---|
2819 | \end{figure} |
---|
2820 | |
---|
2821 | %speed-6-calloc.eps |
---|
2822 | \begin{figure} |
---|
2823 | \centering |
---|
2824 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-6-calloc} } \\ |
---|
2825 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-6-calloc} } |
---|
2826 | \caption{Speed benchmark chain: calloc} |
---|
2827 | \label{fig:speed-6-calloc} |
---|
2828 | \end{figure} |
---|
2829 | |
---|
2830 | %speed-7-malloc-free.eps |
---|
2831 | \begin{figure} |
---|
2832 | \centering |
---|
2833 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-7-malloc-free} } \\ |
---|
2834 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-7-malloc-free} } |
---|
2835 | \caption{Speed benchmark chain: malloc-free} |
---|
2836 | \label{fig:speed-7-malloc-free} |
---|
2837 | \end{figure} |
---|
2838 | |
---|
2839 | %speed-8-realloc-free.eps |
---|
2840 | \begin{figure} |
---|
2841 | \centering |
---|
2842 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-8-realloc-free} } \\ |
---|
2843 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-8-realloc-free} } |
---|
2844 | \caption{Speed benchmark chain: realloc-free} |
---|
2845 | \label{fig:speed-8-realloc-free} |
---|
2846 | \end{figure} |
---|
2847 | |
---|
2848 | %speed-9-calloc-free.eps |
---|
2849 | \begin{figure} |
---|
2850 | \centering |
---|
2851 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-9-calloc-free} } \\ |
---|
2852 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-9-calloc-free} } |
---|
2853 | \caption{Speed benchmark chain: calloc-free} |
---|
2854 | \label{fig:speed-9-calloc-free} |
---|
2855 | \end{figure} |
---|
2856 | |
---|
2857 | %speed-10-malloc-realloc.eps |
---|
2858 | \begin{figure} |
---|
2859 | \centering |
---|
2860 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-10-malloc-realloc} } \\ |
---|
2861 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-10-malloc-realloc} } |
---|
2862 | \caption{Speed benchmark chain: malloc-realloc} |
---|
2863 | \label{fig:speed-10-malloc-realloc} |
---|
2864 | \end{figure} |
---|
2865 | |
---|
2866 | %speed-11-calloc-realloc.eps |
---|
2867 | \begin{figure} |
---|
2868 | \centering |
---|
2869 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-11-calloc-realloc} } \\ |
---|
2870 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-11-calloc-realloc} } |
---|
2871 | \caption{Speed benchmark chain: calloc-realloc} |
---|
2872 | \label{fig:speed-11-calloc-realloc} |
---|
2873 | \end{figure} |
---|
2874 | |
---|
2875 | %speed-12-malloc-realloc-free.eps |
---|
2876 | \begin{figure} |
---|
2877 | \centering |
---|
2878 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-12-malloc-realloc-free} } \\ |
---|
2879 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-12-malloc-realloc-free} } |
---|
2880 | \caption{Speed benchmark chain: malloc-realloc-free} |
---|
2881 | \label{fig:speed-12-malloc-realloc-free} |
---|
2882 | \end{figure} |
---|
2883 | |
---|
2884 | %speed-13-calloc-realloc-free.eps |
---|
2885 | \begin{figure} |
---|
2886 | \centering |
---|
2887 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-13-calloc-realloc-free} } \\ |
---|
2888 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-13-calloc-realloc-free} } |
---|
2889 | \caption{Speed benchmark chain: calloc-realloc-free} |
---|
2890 | \label{fig:speed-13-calloc-realloc-free} |
---|
2891 | \end{figure} |
---|
2892 | |
---|
2893 | %speed-14-{m,c,re}alloc-free.eps |
---|
2894 | \begin{figure} |
---|
2895 | \centering |
---|
2896 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/speed-14-m-c-re-alloc-free} } \\ |
---|
2897 | %\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/speed-14-m-c-re-alloc-free} } |
---|
2898 | \caption{Speed benchmark chain: malloc-calloc-realloc-free} |
---|
2899 | \label{fig:speed-14-malloc-calloc-realloc-free} |
---|
2900 | \end{figure} |
---|
2901 | |
---|
2902 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2903 | %% MEMORY |
---|
2904 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
---|
2905 | |
---|
2906 | \newpage |
---|
2907 | \subsubsection{Memory Micro-Benchmark} |
---|
\label{s:MemoryMicroBenchmarkResults}
---|
2909 | |
---|
2910 | This experiment is run with the following two configurations for each allocator. |
---|
2911 | The difference between the two configurations is the number of producers and consumers. |
---|
Configuration 1 has one producer and one consumer, and Configuration 2 has 4 producers, where each producer has 4 consumers.
---|
2913 | |
---|
2914 | \noindent |
---|
2915 | Configuration 1: |
---|
2916 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2917 | \item[producer (K):] |
---|
2918 | 1 |
---|
2919 | \item[consumer (M):] |
---|
2920 | 1 |
---|
2921 | \item[round:] |
---|
2922 | 100,000 |
---|
2923 | \item[max:] |
---|
2924 | 500 |
---|
2925 | \item[min:] |
---|
2926 | 50 |
---|
2927 | \item[step:] |
---|
2928 | 50 |
---|
2929 | \item[distro:] |
---|
2930 | fisher |
---|
2931 | \item[objects (N):] |
---|
2932 | 100,000 |
---|
2933 | \end{description} |
---|
2934 | |
---|
2935 | % -threadA : 1 |
---|
2936 | % -threadF : 1 |
---|
2937 | % -maxS : 500 |
---|
2938 | % -minS : 50 |
---|
2939 | % -stepS : 50 |
---|
2940 | % -distroS : fisher |
---|
2941 | % -objN : 100000 |
---|
2942 | % -consumeS: 100000 |
---|
2943 | |
---|
2944 | \noindent |
---|
2945 | Configuration 2: |
---|
2946 | \begin{description}[itemsep=0pt,parsep=0pt] |
---|
2947 | \item[producer (K):] |
---|
2948 | 4 |
---|
2949 | \item[consumer (M):] |
---|
2950 | 4 |
---|
2951 | \item[round:] |
---|
2952 | 100,000 |
---|
2953 | \item[max:] |
---|
2954 | 500 |
---|
2955 | \item[min:] |
---|
2956 | 50 |
---|
2957 | \item[step:] |
---|
2958 | 50 |
---|
2959 | \item[distro:] |
---|
2960 | fisher |
---|
2961 | \item[objects (N):] |
---|
2962 | 100,000 |
---|
2963 | \end{description} |
---|
2964 | |
---|
2965 | % -threadA : 4 |
---|
2966 | % -threadF : 4 |
---|
2967 | % -maxS : 500 |
---|
2968 | % -minS : 50 |
---|
2969 | % -stepS : 50 |
---|
2970 | % -distroS : fisher |
---|
2971 | % -objN : 100000 |
---|
2972 | % -consumeS: 100000 |
---|
2973 | |
---|
2974 | % \begin{table}[b] |
---|
2975 | % \centering |
---|
2976 | % \begin{tabular}{ |c|c|c| } |
---|
2977 | % \hline |
---|
2978 | % Memory Allocator & Configuration 1 Result & Configuration 2 Result\\ |
---|
2979 | % \hline |
---|
2980 | % llh & Figure~\ref{fig:mem-1-prod-1-cons-100-llh} & Figure~\ref{fig:mem-4-prod-4-cons-100-llh}\\ |
---|
2981 | % \hline |
---|
2982 | % dl & Figure~\ref{fig:mem-1-prod-1-cons-100-dl} & Figure~\ref{fig:mem-4-prod-4-cons-100-dl}\\ |
---|
2983 | % \hline |
---|
2984 | % glibc & Figure~\ref{fig:mem-1-prod-1-cons-100-glc} & Figure~\ref{fig:mem-4-prod-4-cons-100-glc}\\ |
---|
2985 | % \hline |
---|
2986 | % hoard & Figure~\ref{fig:mem-1-prod-1-cons-100-hrd} & Figure~\ref{fig:mem-4-prod-4-cons-100-hrd}\\ |
---|
2987 | % \hline |
---|
2988 | % je & Figure~\ref{fig:mem-1-prod-1-cons-100-je} & Figure~\ref{fig:mem-4-prod-4-cons-100-je}\\ |
---|
2989 | % \hline |
---|
2990 | % pt3 & Figure~\ref{fig:mem-1-prod-1-cons-100-pt3} & Figure~\ref{fig:mem-4-prod-4-cons-100-pt3}\\ |
---|
2991 | % \hline |
---|
2992 | % rp & Figure~\ref{fig:mem-1-prod-1-cons-100-rp} & Figure~\ref{fig:mem-4-prod-4-cons-100-rp}\\ |
---|
2993 | % \hline |
---|
2994 | % tbb & Figure~\ref{fig:mem-1-prod-1-cons-100-tbb} & Figure~\ref{fig:mem-4-prod-4-cons-100-tbb}\\ |
---|
2995 | % \hline |
---|
2996 | % \end{tabular} |
---|
2997 | % \caption{Memory benchmark results} |
---|
2998 | % \label{table:mem-benchmark-figs} |
---|
2999 | % \end{table} |
---|
3000 | % Table Section~\ref{table:mem-benchmark-figs} shows the list of figures that contain memory benchmark results. |
---|
3001 | |
---|
Figures~\ref{fig:mem-1-prod-1-cons-100-llh} to~\ref{fig:mem-4-prod-4-cons-100-tbb} show 16 figures, two for each of the 8 allocators, one per configuration.
---|
3003 | Each figure has 2 graphs, one for each experiment environment. |
---|
Each graph has the following five subgraphs showing memory usage and statistics throughout the micro-benchmark's lifetime.
---|
3005 | \begin{itemize}[topsep=3pt,itemsep=2pt,parsep=0pt] |
---|
\item \textit{\textbf{current\_req\_mem(B)}} shows the amount of dynamic memory requested and currently in use by the benchmark.
---|
3007 | \item \textit{\textbf{heap}}* shows the memory requested by the program (allocator) from the system that lies in the heap (@sbrk@) area. |
---|
3008 | \item \textit{\textbf{mmap\_so}}* shows the memory requested by the program (allocator) from the system that lies in the @mmap@ area. |
---|
3009 | \item \textit{\textbf{mmap}}* shows the memory requested by the program (allocator or shared libraries) from the system that lies in the @mmap@ area. |
---|
3010 | \item \textit{\textbf{total\_dynamic}} shows the total usage of dynamic memory by the benchmark program, which is a sum of \textit{heap}, \textit{mmap}, and \textit{mmap\_so}. |
---|
3011 | \end{itemize} |
---|
3012 | * These statistics are gathered by monitoring a process's @/proc/self/maps@ file. |
---|
3013 | |
---|
3014 | The X-axis shows the time when the memory information is polled. |
---|
3015 | The Y-axis shows the memory usage in bytes. |
---|
3016 | |
---|
For this experiment, the difference between the memory requested by the benchmark (\textit{current\_req\_mem(B)}) and the memory that the process has received from the system (\textit{heap}, \textit{mmap}) should be minimal.
---|
3018 | This difference is the memory overhead caused by the allocator and shows the level of fragmentation in the allocator. |
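Expressed over the subgraph quantities above, the overhead at each polling time $t$ is
\[
\textit{overhead}(t) = \bigl( \textit{heap}(t) + \textit{mmap}(t) \bigr) - \textit{current\_req\_mem}(t),
\]
which is zero only for an ideal allocator with no fragmentation or metadata.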
---|
3019 | |
---|
3020 | \paragraph{Assessment} |
---|
First, the differences in the shape of the curves between architectures (top ARM, bottom x64) are small; the differences are mainly in the amount of memory used.
---|
3022 | Hence, it is possible to focus on either the top or bottom graph. |
---|
3023 | |
---|
3024 | Second, the heap curve is 0 for four memory allocators: \textsf{hrd}, \textsf{je}, \textsf{pt3}, and \textsf{rp}, indicating these memory allocators only use @mmap@ to get memory from the system and ignore the @sbrk@ area. |
---|
3025 | |
---|
3026 | The total dynamic memory is higher for \textsf{hrd} and \textsf{tbb} than the other allocators. |
---|
3027 | The main reason is the use of superblocks (see Section~\ref{s:ObjectContainers}) containing objects of the same size. |
---|
3028 | These superblocks are maintained throughout the life of the program. |
---|
3029 | |
---|
3030 | \textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program. |
---|
This makes \textsf{pt3} the only memory allocator that returns memory to the OS as the program frees it.
---|
3032 | |
---|
3033 | % FOR 1 THREAD |
---|
3034 | |
---|
3035 | %mem-1-prod-1-cons-100-llh.eps |
---|
3036 | \begin{figure} |
---|
3037 | \centering |
---|
3038 | %\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-llh} } \\ |
---|
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-llh} }
\caption{Memory benchmark results with Configuration-1 for llh memory allocator}
\label{fig:mem-1-prod-1-cons-100-llh}
\end{figure}

%mem-1-prod-1-cons-100-dl.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-dl} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-dl} }
\caption{Memory benchmark results with Configuration-1 for dl memory allocator}
\label{fig:mem-1-prod-1-cons-100-dl}
\end{figure}

%mem-1-prod-1-cons-100-glc.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-glc} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-glc} }
\caption{Memory benchmark results with Configuration-1 for glibc memory allocator}
\label{fig:mem-1-prod-1-cons-100-glc}
\end{figure}

%mem-1-prod-1-cons-100-hrd.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-hrd} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-hrd} }
\caption{Memory benchmark results with Configuration-1 for hoard memory allocator}
\label{fig:mem-1-prod-1-cons-100-hrd}
\end{figure}

%mem-1-prod-1-cons-100-je.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-je} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-je} }
\caption{Memory benchmark results with Configuration-1 for je memory allocator}
\label{fig:mem-1-prod-1-cons-100-je}
\end{figure}

%mem-1-prod-1-cons-100-pt3.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-pt3} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-pt3} }
\caption{Memory benchmark results with Configuration-1 for pt3 memory allocator}
\label{fig:mem-1-prod-1-cons-100-pt3}
\end{figure}

%mem-1-prod-1-cons-100-rp.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-rp} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-rp} }
\caption{Memory benchmark results with Configuration-1 for rp memory allocator}
\label{fig:mem-1-prod-1-cons-100-rp}
\end{figure}

%mem-1-prod-1-cons-100-tbb.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-tbb} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-tbb} }
\caption{Memory benchmark results with Configuration-1 for tbb memory allocator}
\label{fig:mem-1-prod-1-cons-100-tbb}
\end{figure}

% FOR 4 THREADS

%mem-4-prod-4-cons-100-llh.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-llh} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-llh} }
\caption{Memory benchmark results with Configuration-2 for llh memory allocator}
\label{fig:mem-4-prod-4-cons-100-llh}
\end{figure}

%mem-4-prod-4-cons-100-dl.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-dl} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-dl} }
\caption{Memory benchmark results with Configuration-2 for dl memory allocator}
\label{fig:mem-4-prod-4-cons-100-dl}
\end{figure}

%mem-4-prod-4-cons-100-glc.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-glc} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-glc} }
\caption{Memory benchmark results with Configuration-2 for glibc memory allocator}
\label{fig:mem-4-prod-4-cons-100-glc}
\end{figure}

%mem-4-prod-4-cons-100-hrd.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-hrd} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-hrd} }
\caption{Memory benchmark results with Configuration-2 for hoard memory allocator}
\label{fig:mem-4-prod-4-cons-100-hrd}
\end{figure}

%mem-4-prod-4-cons-100-je.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-je} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-je} }
\caption{Memory benchmark results with Configuration-2 for je memory allocator}
\label{fig:mem-4-prod-4-cons-100-je}
\end{figure}

%mem-4-prod-4-cons-100-pt3.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-pt3} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-pt3} }
\caption{Memory benchmark results with Configuration-2 for pt3 memory allocator}
\label{fig:mem-4-prod-4-cons-100-pt3}
\end{figure}

%mem-4-prod-4-cons-100-rp.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-rp} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-rp} }
\caption{Memory benchmark results with Configuration-2 for rp memory allocator}
\label{fig:mem-4-prod-4-cons-100-rp}
\end{figure}

%mem-4-prod-4-cons-100-tbb.eps
\begin{figure}
\centering
%\subfloat[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-tbb} } \\
%\subfloat[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-tbb} }
\caption{Memory benchmark results with Configuration-2 for tbb memory allocator}
\label{fig:mem-4-prod-4-cons-100-tbb}
\end{figure}
---|


\section{Conclusion}

---|
---|
The goal of this work was to build a low-latency (or high-bandwidth) memory allocator for both KT and UT multi-threading systems that is competitive with the best current memory allocators, while extending the feature set of existing allocator routines and adding new ones.
The new llheap memory allocator achieves all of these goals, while maintaining and managing sticky allocation information without a performance loss.
Hence, it becomes possible to use @realloc@ frequently as a safe operation, rather than just occasionally.
Furthermore, the ability to query sticky properties and information allows programmers to write safer programs, as it is possible to dynamically match the allocation style of storage returned from unknown library routines.
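
For example, a minimal sketch of matching the allocation style of storage returned from an unknown library routine, using the sticky-property query routines described earlier (the routine names and signatures shown are illustrative):
\begin{lstlisting}
// Hypothetical library routine returning storage with an unknown allocation style.
void * buf = library_routine();
size_t align = malloc_alignment( buf );        // query sticky alignment property
int zeroed = malloc_zero_fill( buf );          // query sticky zero-fill property
buf = realloc( buf, 2 * malloc_size( buf ) );  // resize, sticky properties are preserved
\end{lstlisting}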
---|

Extending the C allocation API with @resize@, advanced @realloc@, @aalloc@, @amemalign@, and @cmemalign@ means programmers do not have to implement these useful allocation operations themselves.
The ability to use \CFA's advanced type-system (and possibly \CC's too) to provide one allocation routine with completely orthogonal sticky properties shows how far the allocation API can be pushed, which increases safety and greatly simplifies programmers' use of dynamic allocation.
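
As a brief sketch, the extended C routines compose naturally (the signatures follow the descriptions given earlier and should be treated as illustrative):
\begin{lstlisting}
int * ia = aalloc( 100, sizeof( int ) );              // array allocation without zero fill
double * da = amemalign( 64, 50, sizeof( double ) );  // array allocation aligned to 64 bytes
long * la = cmemalign( 64, 50, sizeof( long ) );      // aligned array allocation with zero fill
ia = resize( ia, 200 * sizeof( int ) );               // change size, contents not preserved
\end{lstlisting}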
---|

Providing comprehensive statistics for all allocation operations is invaluable in understanding and debugging a program's dynamic behaviour.
No other memory allocator provides such comprehensive statistics gathering.
This capability was used extensively during the development of llheap to verify its behaviour.
As well, providing a debugging mode where allocations are checked, along with internal pre/post conditions and invariants, is extremely useful, especially for students.
While not as powerful as the @valgrind@ interpreter, this mode detects a large number of allocation mistakes.
Finally, contention-free statistics gathering and debugging have a low enough cost to be used in production code.
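
As a sketch, the statistics can be dumped at any interesting program point via the standard @malloc_stats@ routine (the counters reported and the output format are allocator specific):
\begin{lstlisting}
#include <stdlib.h>     // malloc, free
#include <malloc.h>     // malloc_stats
int main() {
	int * p = malloc( 1000 * sizeof( int ) );
	free( p );
	malloc_stats();     // print allocation counters to standard error
}
\end{lstlisting}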
---|

The ability to compile llheap with static/dynamic linking and optional statistics/debugging provides programmers with multiple mechanisms to balance performance and safety.
These allocator versions are easy to use because they can be linked to an application without recompilation.
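
For example, assuming the shared-library version is built as @libllheap.so@ (name illustrative), it can be interposed on Linux with @LD_PRELOAD=./libllheap.so ./app@, substituting llheap for the default allocator without rebuilding the application.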
---|

Starting a micro-benchmark test-suite for comparing allocators, rather than relying on a suite of arbitrary programs, has been an interesting challenge.
The current micro-benchmarks allow some understanding of allocator implementation properties without actually looking at the implementation.
For example, the memory micro-benchmark quickly identified how several of the allocators work at the global level.
It was not possible to show how the micro-benchmarks' adjustment knobs were used to tune to an interesting test point.
Many graphs were created and discarded until a few were selected for this work.


---|
\subsection{Future Work}

A careful walk-through of the allocator fastpath should yield additional optimizations for a slight performance gain.
In particular, analysing the implementation of rpmalloc, which is often the fastest allocator, is a promising starting point.

---|
The micro-benchmark project requires more testing and analysis.
Additional allocation patterns are needed to extract meaningful information about allocators, and, within those patterns, to determine which tuning knobs are most useful.
Also, identifying ways to visualize the results of the micro-benchmarks is a work in progress.

---|
After llheap is made available on GitHub, interacting with its users to locate problems and improvements will make llheap a more robust memory allocator.
As well, feedback from the \uC and \CFA projects, which have adopted llheap for their memory allocator, will provide additional information.
---|


\section{Acknowledgements}

This research is funded by the NSERC/Waterloo-Huawei (\url{http://www.huawei.com}) Joint Innovation Lab. %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.

{%
\fontsize{9bp}{11.5bp}\selectfont%
\bibliography{pl,local}
}%

\end{document}

% Local Variables: %
% tab-width: 4 %
% fill-column: 120 %
% compile-command: "make" %
% End: %
---|