Changeset 1eec0b0
- Timestamp: Feb 22, 2022, 2:42:45 PM
- Branches: ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
- Children: 5cefa43
- Parents: 5c216b4
- git-author: Peter A. Buhr <pabuhr@…> (02/20/22 20:37:23)
- git-committer: Peter A. Buhr <pabuhr@…> (02/22/22 14:42:45)
- Location: doc/theses/mubeen_zulfiqar_MMath
- Files: 39 added, 6 edited, 2 moved
doc/theses/mubeen_zulfiqar_MMath/Makefile
r5c216b4 → r1eec0b0

# directory for latex clutter files
Build = build
Figures = figures
Pictures = pictures
TeXSRC = ${wildcard *.tex}
FigSRC = ${notdir ${wildcard ${Figures}/*.fig}}
PicSRC = ${notdir ${wildcard ${Pictures}/*.fig}}
BIBSRC = ${wildcard *.bib}
TeXLIB = .:../../LaTeXmacros:${Build}:	# common latex macros
BibLIB = .:../../bibliography		# common citation repository

MAKEFLAGS = --no-print-directory	# --silent
VPATH = ${Build} ${Figures} ${Pictures}	# extra search path for file names used in document

### Special Rules:

… …

### Commands:

LaTeX = TEXINPUTS=${TeXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${Build}
BibTeX = BIBINPUTS=${BibLIB} bibtex
#Glossary = INDEXSTYLE=${Build} makeglossaries-lite

### Rules and Recipes:

DOC = uw-ethesis.pdf
BASE = ${DOC:%.pdf=%}			# remove suffix

all: ${DOC}

clean:
	@rm -frv ${DOC} ${Build}

# File Dependencies #

${Build}/%.dvi : ${TeXSRC} ${FigSRC:%.fig=%.tex} ${PicSRC:%.fig=%.pstex} ${BIBSRC} Makefile | ${Build}
	${LaTeX} ${BASE}
	${BibTeX} ${Build}/${BASE}
	${LaTeX} ${BASE}
	# if needed, run latex again to get citations
	if fgrep -s "LaTeX Warning: Citation" ${basename $@}.log ; then ${LaTeX} ${BASE} ; fi
#	${Glossary} ${Build}/${BASE}
#	${LaTeX} ${BASE}

${Build}:
	mkdir $@

%.pdf : ${Build}/%.ps | ${Build}
	ps2pdf $<

%.ps : %.dvi | ${Build}
	dvips $< -o $@

%.tex : %.fig | ${Build}
	fig2dev -L eepic $< > ${Build}/$@

%.ps : %.fig | ${Build}
	fig2dev -L ps $< > ${Build}/$@

%.pstex : %.fig | ${Build}
	fig2dev -L pstex $< > ${Build}/$@
	fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
doc/theses/mubeen_zulfiqar_MMath/allocator.tex
r5c216b4 → r1eec0b0

\end{itemize}

The new features added to uHeapLmmm (incl. @malloc_size@ routine)
\CFA alloc interface with examples.

… …

\begin{itemize}
\item
The bump allocation is concurrent, as memory taken from sbrk is sharded across all heaps as the bump-allocation reserve. The lock on bump allocation (on memory taken from sbrk) is only contended if KTs $>$ N. Contention on the sbrk area is less likely, as it only happens when the heaps assigned to two KTs run short of bump-allocation reserve simultaneously.
\item
N heaps are created at the start of the program and destroyed at the end of the program. When a KT is created, we only assign it to one of the heaps. When a KT is destroyed, we only dissociate it from the assigned heap but do not destroy that heap. That heap goes back to our pool of heaps, ready to be used by some new KT. And if that heap was shared among multiple KTs (as in the case of KTs $>$ N), then, on deletion of one KT, that heap is still in use by the other KTs. This prevents creation and deletion of heaps at run-time, as heaps are reusable, which helps keep the memory footprint low.
\item
It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.

… …

\section{Added Features and Methods}
To improve the uHeapLmmm allocator (FIX ME: cite uHeapLmmm) interface and make it more user friendly, we added a few more routines to the C allocator. We also built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.

\subsection{C Interface}
We added a few more features and routines to the allocator's C interface that make the allocator more usable to programmers. These features give the programmer more control over dynamic memory allocation.

\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
@aalloc@ is an extension of @malloc@. It allows the programmer to allocate a dynamic array of objects without explicitly calculating the total size of the array. The only alternative to this routine in other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without zero filling it.
\paragraph{Usage}
@aalloc@ takes two parameters.

\begin{itemize}
\item
@dim@: number of objects in the array
\item
@elemSize@: size of each object in the array.
\end{itemize}
It returns the address of a dynamic object allocated on the heap that can contain @dim@ objects of size @elemSize@. On failure, it returns a @NULL@ pointer.
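For example, the following sketch (illustrative; it assumes the prototype above is made visible by a declaration) contrasts @aalloc@ with @calloc@ when zero filling is unnecessary:
\begin{lstlisting}
#include <stdlib.h>	// calloc, free
extern void * aalloc( size_t dim, size_t elemSize );	// allocator routine described above

int main( void ) {
	int * a = aalloc( 100, sizeof( int ) );	// 100 ints, uninitialized (fast path)
	int * b = calloc( 100, sizeof( int ) );	// 100 ints, zero filled (extra cost)
	if ( a == NULL || b == NULL ) return 1;	// allocation failure
	free( a );
	free( b );
}
\end{lstlisting}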
\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
@resize@ is an extension of @realloc@. It allows the programmer to reuse a currently allocated dynamic object with a new size requirement. Its alternative in other allocators is @realloc@, but @realloc@ also copies the data in the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object at a new size but does not need to preserve the old object's data.
\paragraph{Usage}
@resize@ takes two parameters.

\begin{itemize}
\item
@oaddr@: the address of the old object that needs to be resized.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object of the given size, but it does not preserve the data in the old object. On failure, it returns a @NULL@ pointer.
\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize). In addition to resizing an old object, it can also realign the old object to a new alignment requirement.
\paragraph{Usage}
This @resize@ takes three parameters. It takes an additional parameter, @nalign@, compared to the above @resize@ (FIX ME: cite above resize).

\begin{itemize}
\item
@oaddr@: the address of the old object that needs to be resized.
\item
@nalign@: the new alignment to which the old object needs to be realigned.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters. On failure, it returns a @NULL@ pointer.
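A sketch of both @resize@ forms (illustrative; written as \CFA, where the two overloads can share one name); note the old object's data is not preserved:
\begin{lstlisting}
double * buf = malloc( 10 * sizeof( double ) );
buf = resize( buf, 100 * sizeof( double ) );		// larger storage; old contents not preserved
buf = resize( buf, 64, 100 * sizeof( double ) );	// same storage size, realigned to 64 bytes
free( buf );
\end{lstlisting}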
\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
@amemalign@ is a hybrid of @memalign@ and @aalloc@. It allows the programmer to allocate an aligned dynamic array of objects without explicitly calculating the total size of the array.
\paragraph{Usage}
@amemalign@ takes three parameters.

\begin{itemize}
\item
@alignment@: the alignment to which the dynamic array needs to be aligned.
\item
@dim@: number of objects in the array
\item
@elemSize@: size of each object in the array.
\end{itemize}
It returns a dynamic array with the capacity to contain @dim@ objects of size @elemSize@. The returned dynamic array is aligned to the given alignment. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
@cmemalign@ is a hybrid of @amemalign@ and @calloc@. It allows the programmer to allocate an aligned dynamic array of objects that is zero filled. The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then fill it with 0 explicitly. This routine provides both features, aligning and zero filling, implicitly.
\paragraph{Usage}
@cmemalign@ takes three parameters.

\begin{itemize}
\item
@alignment@: the alignment to which the dynamic array needs to be aligned.
\item
@dim@: number of objects in the array
\item
@elemSize@: size of each object in the array.
\end{itemize}
It returns a dynamic array with the capacity to contain @dim@ objects of size @elemSize@. The returned dynamic array is aligned to the given alignment and is zero filled. On failure, it returns a @NULL@ pointer.
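For example (an illustrative sketch, assuming the prototypes above), allocating a cache-line-aligned array of nodes, optionally zero filled:
\begin{lstlisting}
struct Node { int key; struct Node * next; };
struct Node * t1 = amemalign( 64, 128, sizeof( struct Node ) );	// 64-byte aligned, uninitialized
struct Node * t2 = cmemalign( 64, 128, sizeof( struct Node ) );	// 64-byte aligned and zero filled
free( t1 );
free( t2 );
\end{lstlisting}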
\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
@malloc_alignment@ returns the alignment of a currently allocated dynamic object. It helps the programmer with memory management and personal bookkeeping, in particular with verifying the alignment of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was allocated with the required alignment.
\paragraph{Usage}
@malloc_alignment@ takes one parameter.

\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_alignment@ returns the alignment of the given dynamic object. On failure, it returns the default alignment of the uHeapLmmm allocator.

\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation. It helps the programmer with memory management and personal bookkeeping, in particular with verifying the zero-filled property of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was zero filled at the time of allocation.
\paragraph{Usage}
@malloc_zero_fill@ takes one parameter.

\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and false otherwise. On failure, it returns false.
\subsection{\lstinline{size_t malloc_size( void * addr )}}
@malloc_size@ returns the allocation size of a currently allocated dynamic object. It helps the programmer with memory management and personal bookkeeping, in particular with verifying the size of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was allocated with the required size. Its current alternative in other allocators is @malloc_usable_size@, but @malloc_size@ differs from @malloc_usable_size@: @malloc_usable_size@ returns the total data capacity of the dynamic object, including the extra space at the end of the dynamic object, whereas @malloc_size@ returns the size that was given to the allocator when the dynamic object was allocated. This size is updated when an object is realloced, resized, or passed through a similar allocator routine.
\paragraph{Usage}
@malloc_size@ takes one parameter.

\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_size@ returns the allocation size of the given dynamic object. On failure, it returns zero.
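A sketch of the three query routines used together for consumer-side checking (illustrative, assuming the routines above and @assert@ from \lstinline{<assert.h>}):
\begin{lstlisting}
void * obj = cmemalign( 64, 16, sizeof( int ) );	// aligned and zero filled
assert( malloc_alignment( obj ) == 64 );		// alignment recorded at allocation
assert( malloc_zero_fill( obj ) );			// zero-fill property is remembered
assert( malloc_size( obj ) == 16 * sizeof( int ) );	// requested size, not usable capacity
free( obj );
\end{lstlisting}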
\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@). In addition to reallocating an old object and preserving its data, it can also realign the old object to a new alignment requirement.
\paragraph{Usage}
This @realloc@ takes three parameters. It takes an additional parameter, @nalign@, compared to the default @realloc@.

\begin{itemize}
\item
@oaddr@: the address of the old object that needs to be reallocated.
\item
@nalign@: the new alignment to which the old object needs to be realigned.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters that preserves the data in the old object. On failure, it returns a @NULL@ pointer.
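For example (an illustrative sketch), growing an array while both preserving its data and raising its alignment:
\begin{lstlisting}
int * a = malloc( 10 * sizeof( int ) );
for ( int i = 0; i < 10; i += 1 ) a[i] = i;
a = realloc( a, 4096, 20 * sizeof( int ) );	// page aligned, grown to 20 ints
// a[0..9] still hold 0..9; only the new tail is uninitialized
free( a );
\end{lstlisting}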
\subsection{\CFA Malloc Interface}
We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not with our standalone uHeapLmmm allocator, as they use features that are only provided by \CFA and not by C. They make the allocator even more usable for programmers.
\CFA gives the allocator the liberty to know the return type of a call to the allocator. So, in these added routines, we removed the object-size parameter, as the allocator can calculate the size of the object from the return type.

\subsection{\lstinline{T * malloc( void )}}
This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc). It takes no parameters, compared to the default @malloc@ that takes one parameter.
\paragraph{Usage}
This @malloc@ takes no parameters.
It returns a dynamic object of the size of type @T@. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aalloc( size_t dim )}}
This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc). It takes one parameter, compared to the above @aalloc@ that takes two parameters.
\paragraph{Usage}
This @aalloc@ takes one parameter.

\begin{itemize}
\item
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * calloc( size_t dim )}}
This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc). It takes one parameter, compared to the default @calloc@ that takes two parameters.
\paragraph{Usage}
This @calloc@ takes one parameter.

\begin{itemize}
\item
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer.
\subsection{\lstinline{T * resize( T * ptr, size_t size )}}
This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment). It takes two parameters, compared to the above @resize@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
\paragraph{Usage}
This @resize@ takes two parameters.

\begin{itemize}
\item
@ptr@: address of the old object.
\item
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the size given in the parameters. The returned object is aligned to the alignment of type @T@. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
This @realloc@ is a simplified polymorphic form of the default @realloc@ (FIX ME: cite @realloc@ with align). It takes two parameters, compared to the above @realloc@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
\paragraph{Usage}
This @realloc@ takes two parameters.

\begin{itemize}
\item
@ptr@: address of the old object.
\item
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the size given in the parameters that preserves the data in the given object. The returned object is aligned to the alignment of type @T@. On failure, it returns a @NULL@ pointer.
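A sketch of the \CFA routines above (illustrative): the element type @T@, and hence the object size and alignment, are inferred from the call's return context, so no size argument is passed:
\begin{lstlisting}
int * p = malloc();				// size inferred from int
double * v = aalloc( 100 );			// array of 100 doubles
v = realloc( v, 200 * sizeof( double ) );	// grow; data preserved, alignment of double kept
free( p );
free( v );
\end{lstlisting}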
\subsection{\lstinline{T * memalign( size_t align )}}
This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign). It takes one parameter, compared to the default @memalign@ that takes two parameters.
\paragraph{Usage}
This @memalign@ takes one parameter.

\begin{itemize}
\item
@align@: the required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given parameter @align@. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign). It takes two parameters, compared to the above @amemalign@ that takes three parameters.
\paragraph{Usage}
This @amemalign@ takes two parameters.

\begin{itemize}
\item
@align@: required alignment of the dynamic array.
\item
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@. The returned object is aligned to the given parameter @align@. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * cmemalign( size_t align, size_t dim )}}
This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign). It takes two parameters, compared to the above @cmemalign@ that takes three parameters.
\paragraph{Usage}
This @cmemalign@ takes two parameters.

\begin{itemize}
\item
@align@: required alignment of the dynamic array.
\item
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@. The returned object is aligned to the given parameter @align@ and is zero filled. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aligned_alloc( size_t align )}}
This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@). It takes one parameter, compared to the default @aligned_alloc@ that takes two parameters.
\paragraph{Usage}
This @aligned_alloc@ takes one parameter.

\begin{itemize}
\item
@align@: required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. On failure, it returns a @NULL@ pointer.
\subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
This @posix_memalign@ is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@). It takes two parameters, compared to the default @posix_memalign@ that takes three parameters.
\paragraph{Usage}
This @posix_memalign@ takes two parameters.

\begin{itemize}
\item
@ptr@: variable address to store the address of the allocated object.
\item
@align@: required alignment of the dynamic object.
\end{itemize}

It stores the address of a dynamic object of the size of type @T@ in the given parameter @ptr@. This object is aligned to the given parameter. On failure, it returns a nonzero error code.

\subsection{\lstinline{T * valloc( void )}}
This @valloc@ is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@). It takes no parameters, compared to the default @valloc@ that takes one parameter.
\paragraph{Usage}
@valloc@ takes no parameters.
It returns a dynamic object of the size of type @T@ that is aligned to the page size. On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * pvalloc( void )}}
This @pvalloc@ is a simplified polymorphic form of the default @pvalloc@ (FIX ME: cite @pvalloc@). It takes no parameters, compared to the default @pvalloc@ that takes one parameter.
\paragraph{Usage}
@pvalloc@ takes no parameters.
It returns a dynamic object whose size is calculated by rounding the size of type @T@ up to a multiple of the page size. The returned object is also aligned to the page size. On failure, it returns a @NULL@ pointer.
\subsection{Alloc Interface}
In addition to improving the allocator interface both for \CFA and for our standalone uHeapLmmm allocator in C, we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation.
This interface helps programmers in three major ways.

… …

\item
Parameter Positions: the alloc interface frees programmers from remembering parameter positions in calls to routines.
\item
Object Size: the alloc interface does not require the programmer to mention the object size, as \CFA allows the allocator to determine the object size from the return type of the alloc call.
\end{itemize}

The alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic-memory-allocation interface to programmers. The new interface has just one routine, named alloc, that can be used to perform a wide range of dynamic allocations. The parameters use backtick functions to provide a named-parameters-like feature for our alloc interface, so that programmers do not have to remember parameter positions in alloc calls, except the position of the dimension (dim) parameter.

\subsection{Routine: \lstinline{T * alloc( ... )}}
A call to alloc without any parameters returns one dynamically allocated object of the size of type @T@.
Only the dimension (dim) parameter for array allocation has a fixed position in the alloc routine: if the programmer wants to allocate an array of objects, the required number of members in the array has to be given as the first parameter to the alloc routine.
The alloc routine accepts six kinds of arguments. Using different combinations of the parameters, different kinds of allocations can be performed. Any combination of parameters can be used together except @`realloc@ and @`resize@, which should not be used simultaneously in one call to the routine, as that creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @`resize@ and @`realloc@ are used in a call to alloc, then the latter one takes effect or unexpected results may be produced.

\paragraph{Dim}
This is the only parameter in the alloc routine that has a fixed position, and it is also the only parameter that does not use a backtick function. It has to be passed at the first position of an alloc call in case of an array allocation of objects of type @T@.
It represents the required number of members in the array allocation, as in \CFA's aalloc (FIX ME: cite aalloc).
This parameter should be of type @size_t@.

Example: @int * a = alloc( 5 )@
This call returns a dynamic array of five integers.

\paragraph{Align}
This parameter is position-free and uses the backtick routine align (@`align@). The parameter passed with @`align@ should be of type @size_t@. If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be found using the routine libAlign in \CFA), then the passed alignment parameter is rejected and the default alignment is used.

Example: @int * b = alloc( 5 , 64`align )@
This call returns a dynamic array of five integers, aligned to 64.
\paragraph{Fill}
This parameter is position-free and uses the backtick routine fill (@`fill@). In case of @realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
Three types of parameters can be passed using @`fill@.

\begin{itemize}
\item
@char@: a char can be passed with @`fill@ to fill the whole dynamic allocation with the given char, repeated until the end of the required allocation.
\item
Object of returned type: an object of the returned type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, repeated until the end of the required allocation.
\item
Dynamic object of returned type: a dynamic object of the returned type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object. In this case, the allocated memory is not filled repeatedly until the end of the allocation; the filling stops at the end of the object passed to @`fill@ or the end of the requested allocation, whichever comes first.
\end{itemize}

Example: @int * b = alloc( 5 , 'a'`fill )@
This call returns a dynamic array of five integers, filled with the character 'a' repeatedly until the end of the requested allocation size.

Example: @int * b = alloc( 5 , 4`fill )@
This call returns a dynamic array of five integers, filled with the integer 4 repeatedly until the end of the requested allocation size.

Example: @int * b = alloc( 5 , a`fill )@, where @a@ is a pointer of int type
This call returns a dynamic array of five integers. It copies the data in @a@ to the returned object, non-repeatedly, until the end of @a@ or the end of the newly allocated object is reached.

\paragraph{Resize}
This parameter is position-free and uses the backtick routine resize (@`resize@). It represents the old dynamic object (oaddr) that the programmer wants to
\begin{itemize}
\item

… …

fill with something.
\end{itemize}
The data in the old dynamic object is not preserved in the new object. The type of the object passed to @`resize@ and the return type of the alloc call can be different.

Example: @int * b = alloc( 5 , a`resize )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.

Example: @int * b = alloc( 5 , a`resize , 32`align )@
This call resizes object @a@ to a dynamic array that can contain 5 integers. The returned object is also aligned to 32.

Example: @int * b = alloc( 5 , a`resize , 32`align , 2`fill )@
This call resizes object @a@ to a dynamic array that can contain 5 integers. The returned object is also aligned to 32 and is filled with 2.

\paragraph{Realloc}
This parameter is position-free and uses the backtick routine @realloc@ (@`realloc@). It represents the old dynamic object (oaddr) that the programmer wants to
\begin{itemize}
\item

… …

fill with something.
\end{itemize}
The data in the old dynamic object is preserved in the new object. The type of the object passed to @`realloc@ and the return type of the alloc call cannot be different.

Example: @int * b = alloc( 5 , a`realloc )@
This call reallocs object @a@ to a dynamic array that can contain 5 integers.

Example: @int * b = alloc( 5 , a`realloc , 32`align )@
This call reallocs object @a@ to a dynamic array that can contain 5 integers. The returned object is also aligned to 32.

Example: @int * b = alloc( 5 , a`realloc , 32`align , 2`fill )@
This call reallocs object @a@ to a dynamic array that can contain 5 integers. The returned object is also aligned to 32, and the extra space after copying the data of @a@ to the returned object is filled with 2.
doc/theses/mubeen_zulfiqar_MMath/background.tex
r5c216b4 → r1eec0b0

\chapter{Background}


\section{Memory-Allocator Background}
\label{s:MemoryAllocatorBackground}

A program dynamically allocates and deallocates the storage for a variable, referred to as an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
Space for each allocated object comes from the dynamic-allocation zone.
A \newterm{memory allocator} is a complex data-structure and code that manages the layout of objects in the dynamic-allocation zone.
The management goals are to make allocation/deallocation operations as fast as possible while densely packing objects to make efficient use of memory.
Objects cannot be moved to aid the packing process.
The allocator grows or shrinks the dynamic-allocation zone to obtain storage for objects and reduce memory usage via operating-system calls, such as @mmap@ or @sbrk@ in UNIX.


\subsection{Allocator Components}
\label{s:AllocatorComponents}

There are two important parts to a memory allocator, management and storage data (see \VRef[Figure]{f:AllocatorComponents}), collectively called the \newterm{heap}.
The \newterm{management data} is a data structure located at a known memory address and contains all information necessary to manage the storage data.
The management data starts with fixed-sized information in the static-data memory that flows into the dynamic-allocation memory.
The \newterm{storage data} is composed of allocated and freed objects, and reserved memory.
Allocated objects (white) are variable sized and allocated to and maintained by the program.
Freed objects (light grey) are memory deallocated by the program that is linked to form a list facilitating easy location of storage for new allocations.
Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
if there are multiple reserved blocks, they are also chained together, usually internally.

\begin{figure}
\centering
\input{AllocatorComponents}
\caption{Allocator Components (Heap)}
\label{f:AllocatorComponents}
\end{figure}

Allocated and freed objects typically have additional management data embedded within them.
\VRef[Figure]{f:AllocatedObject} shows an allocated object with a header, trailer, and padding/spacing around the object.
The header contains information about the object, \eg size, type, etc.
The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object.
An object may be preceded by padding to ensure proper alignment.
Some algorithms quantize allocation requests into distinct sizes, resulting in additional spacing after objects less than the quantized value.
When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
A free object also contains management data, \eg size, chaining, etc.
The amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, any allocation request less than 16 bytes must be rounded up, otherwise the free list cannot use internal chaining.
The information in an allocated or freed object is overwritten by new management information, and possibly data, when it transitions between allocated and freed.
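As an illustrative sketch (not the allocator's actual layout), internal chaining overlays the link fields on the freed storage itself, which is what sets the minimum allocation size:
\begin{lstlisting}
// free-list node overlaid on the freed storage (illustrative)
struct FreeNode {
	size_t size;			// management data: block size
	struct FreeNode * next;		// link placed inside the unused block
};
// => any request smaller than sizeof(struct FreeNode), 16 bytes on a
//    64-bit machine, must be rounded up or the block cannot be chained
\end{lstlisting}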
\begin{figure}
\centering
\input{AllocatedObject}
\caption{Allocated Object}
\label{f:AllocatedObject}
\end{figure}


\subsection{Single-Threaded Memory-Allocator}
\label{s:SingleThreadedMemoryAllocator}

A single-threaded memory-allocator does not run any threads itself, but is used by a single-threaded program.
Because the memory allocator is only executed by a single thread, concurrency issues do not exist.
The primary issues in designing a single-threaded memory-allocator are fragmentation and locality.


\subsubsection{Fragmentation}
\label{s:Fragmentation}

Fragmentation is memory requested from the operating system but not used by the program;
hence, allocated objects are not fragmentation.
Fragmentation is often divided into internal or external (see~\VRef[Figure]{f:InternalExternalFragmentation}).

\begin{figure}
\centering
\input{IntExtFragmentation}
\caption{Internal and External Fragmentation}
\label{f:InternalExternalFragmentation}
\end{figure}

\newterm{Internal fragmentation} is memory space that is allocated to the program, but is not intended to be accessed by the program, such as headers, trailers, padding, and spacing around an allocated object.
This memory is typically used by the allocator for management purposes or required by the architecture for correctness (\eg alignment).
Internal fragmentation is problematic when management space is a significant proportion of an allocated object.
For example, if internal fragmentation is as large as the object being managed, then the memory usage for that object is doubled.
An allocator should strive to keep internal management information to a minimum.

\newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes freed objects, all external management data, and reserved memory.
This memory is problematic in two ways: heap blowup and highly fragmented memory.
\newterm{Heap blowup} occurs when memory freed by the program is not reused for future allocations, leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
Heap blowup can occur due to allocator policies that are too restrictive in reusing freed memory.
Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects.
\VRef[Figure]{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
Blocks of free memory become smaller and non-contiguous, making them less useful in serving allocation requests.
Memory is highly fragmented when the sizes of most free blocks are unusable.
For example, \VRef[Figure]{f:Contiguous} and \VRef[Figure]{f:HighlyFragmented} have the same quantity of external fragmentation, but \VRef[Figure]{f:HighlyFragmented} is highly fragmented.
If there is a request to allocate a large object, \VRef[Figure]{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while \VRef[Figure]{f:HighlyFragmented} likely has to request more memory from the operating system.
For a single-threaded memory allocator, three basic approaches for controlling fragmentation have been identified~\cite{Johnstone99}.
The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
Any storage larger than the request can become spacing after the object or be split into a smaller free object.
The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.

\begin{figure}
\centering
\input{MemoryFragmentation}
\caption{Memory Fragmentation}
\label{f:MemoryFragmentation}
\vspace{10pt}
\subfigure[Contiguous]{
	\input{ContigFragmentation}
	\label{f:Contiguous}
} % subfigure
\subfigure[Highly Fragmented]{
	\input{NonContigFragmentation}
	\label{f:HighlyFragmented}
} % subfigure
\caption{Fragmentation Quality}
\label{f:FragmentationQuality}
\end{figure}

The second approach is a \newterm{segregated} or \newterm{binning algorithm} with a set of lists for different sized freed objects.
When an object is allocated, the requested size is rounded up to the nearest bin-size, possibly with spacing after the object.
A binning algorithm is fast at finding free memory of the appropriate size and allocating it, since the first free object on the free list is used.
The fewer bin-sizes, the fewer lists need to be searched and maintained;
however, the bin sizes are less likely to closely fit the requested object size, leading to more internal fragmentation.
The more bin-sizes, the longer the search and the less likely free objects are to be reused, leading to more external fragmentation and potentially heap blowup.
A variation of the binning algorithm allows objects to be allocated to the requested size, but when an object is freed, it is placed on the free list of the next smallest or equal bin-size.
For example, with bin sizes of 8 and 16 bytes, a request for 12 bytes allocates only 12 bytes, but when the object is freed, it is placed on the 8-byte bin-list.
For subsequent requests, the bin free-lists contain objects of different sizes, ranging from one bin-size to the next (8-16 in this example), and a sequential-fit algorithm may be used to find an object large enough for the requested size on the associated bin list.
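For example, the rounding step of a binning algorithm with power-of-two bin sizes can be sketched as follows (illustrative):
\begin{lstlisting}
// round a request up to the nearest power-of-two bin size (illustrative)
size_t bin_size( size_t request ) {
	size_t size = 16;			// smallest bin
	while ( size < request ) size <<= 1;	// 17-32 => 32, 33-64 => 64, ...
	return size;				// difference becomes spacing (internal fragmentation)
}
\end{lstlisting}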
133 While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
134 Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
135
136
137 \subsubsection{Locality}
138 \label{s:Locality}
139
140 The principle of locality recognizes that programs tend to reference a small set of data, called a working set, for a certain period of time, where a working set is composed of temporal and spatial accesses~\cite{Denning05}.
141 Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
142 Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
143
144 Hardware takes advantage of temporal and spatial locality through multiple levels of caching (\ie memory hierarchy).
145 When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
146 For example, entire cache lines are transferred between memory and cache and entire virtual-memory pages are transferred between disk and memory.
147 A program exhibiting good locality has better performance due to fewer cache misses and page faults.
148
149 Temporal locality is largely controlled by how a program accesses its variables~\cite{Feng05}.
150 Nevertheless, a memory allocator can have some indirect influence on temporal locality and largely dictates spatial locality.
151 For temporal locality, an allocator can return storage for new allocations that was just freed as these memory locations are still \emph{warm} in the memory hierarchy.
152 For spatial locality, an allocator can place objects used together close together in memory, so the working set of the program fits into the fewest possible cache lines and pages.
153 However, usage patterns are different for every program as is the underlying hardware architecture (\ie memory hierarchy);
154 hence, no general-purpose memory-allocator can provide ideal locality for every program on every computer.
155
156 There are a number of ways a memory allocator can degrade locality by increasing the working set.
157 For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request (\eg sequential-fit algorithm).
158 If there are a (large) number of objects accessed in very different areas of memory, the allocator may perturb the program's memory hierarchy causing multiple cache or page misses~\cite{Grunwald93}.
159 Another way locality can be degraded is by spatially separating related data.
160 For example, in a binning allocator, objects of different sizes are allocated from different bins that may be located in different pages of memory.
161
162
163 \subsection{Multi-Threaded Memory-Allocator}
164 \label{s:MultiThreadedMemoryAllocator}
165
166 A multi-threaded memory-allocator does not run any threads itself, but is used by a multi-threaded program.
167 In addition to single-threaded design issues of locality and fragmentation, a multi-threaded allocator may be simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup. 168 169 170 \subsubsection{Mutual Exclusion} 171 \label{s:MutualExclusion} 172 173 Mutual exclusion provides sequential access to the management data of the heap. 174 There are two performance issues for mutual exclusion. 175 First is the overhead necessary to perform (at least) a hardware atomic operation every time a shared resource is accessed. 176 Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released. 177 Contention can be reduced in a number of ways: 178 using multiple fine-grained locks versus a single lock, spreading the contention across a number of locks; 179 using trylock and generating new storage if the lock is busy, yielding a space vs contention trade-off; 180 using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}. 181 However, all of these approaches have degenerate cases where contention occurs. 182 183 184 \subsubsection{False Sharing} 185 \label{s:FalseSharing} 186 187 False sharing is a dynamic phenomenon leading to cache thrashing. 188 When two or more threads on separate CPUs simultaneously change different objects sharing a cache line, the change invalidates the other thread's associated cache, even though these threads may be uninterested in the modified object. 189 False sharing can occur in three different ways: program induced, allocator-induced active, and allocator-induced passive; 190 a memory allocator can only affect the latter two. 191 192 \newterm{Program-induced false-sharing} occurs when one thread passes an object sharing a cache line to another thread, and both threads modify the respective objects. 193 For example, in \VRef[Figure]{f:ProgramInducedFalseSharing}, when Task$_1$ passes Object$_2$ to Task$_2$, a false-sharing situation forms when Task$_1$ modifies Object$_1$ and Task$_2$ modifies Object$_2$. 194 Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line. 195 196 \begin{figure} 197 \centering 198 \subfigure[Program-Induced False-Sharing]{ 199 \input{ProgramFalseSharing} 200 \label{f:ProgramInducedFalseSharing} 201 } \\ 202 \vspace{5pt} 203 \subfigure[Allocator-Induced Active False-Sharing]{ 204 \input{AllocInducedActiveFalseSharing} 205 \label{f:AllocatorInducedActiveFalseSharing} 206 } \\ 207 \vspace{5pt} 208 \subfigure[Allocator-Induced Passive False-Sharing]{ 209 \input{AllocInducedPassiveFalseSharing} 210 \label{f:AllocatorInducedPassiveFalseSharing} 211 } % subfigure 212 \caption{False Sharing} 213 \label{f:FalseSharing} 214 \end{figure} 215 216 \newterm{Allocator-induced active false-sharing} occurs when objects are allocated within the same cache line but to different threads. 217 For example, in \VRef[Figure]{f:AllocatorInducedActiveFalseSharing}, each task allocates an object and loads a cache-line of memory into its associated cache. 218 Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line. 219 220 \newterm{Allocator-induced passive false-sharing} is another form of allocator-induced false-sharing caused by program-induced false-sharing. 
221 When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing. 222 For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Task$_1$ passes Object$_2$ to Task$_2$, and Task$_2$ subsequently deallocates Object$_2$. 223 Allocator-induced passive false-sharing occurs when Object$_2$ is reallocated to Task$_2$ while Task$_1$ is still using Object$_1$. 224 225 226 \subsubsection{Heap Blowup} 227 \label{s:HeapBlowup} 228 229 In a multi-threaded program, heap blowup can occur when memory freed by one thread is inaccessible to other threads due to the allocation strategy. 230 Specific examples are presented in later sections. 231 232 233 \section{Multi-Threaded Memory-Allocator Features} 234 \label{s:MultiThreadedMemoryAllocatorFeatures} 235 236 By analyzing a suite of existing allocators (see \VRef{s:ExistingAllocators}), the following salient features were identified: 237 \begin{list}{\arabic{enumi}.}{\usecounter{enumi}\topsep=0.5ex\parsep=0pt\itemsep=0pt} 238 \item multiple heaps 239 \begin{list}{\alph{enumii})}{\usecounter{enumii}\topsep=0.5ex\parsep=0pt\itemsep=0pt} 240 \item with or without a global heap 241 \item with or without ownership 242 \end{list} 243 \item object containers 244 \begin{list}{\alph{enumii})}{\usecounter{enumii}\topsep=0.5ex\parsep=0pt\itemsep=0pt} 245 \item with or without ownership 246 \item fixed or variable sized 247 \item global or local free-lists 248 \end{list} 249 \item hybrid private/public heap 250 \item allocation buffer 251 \item lock-free operations 252 \end{list} 253 The first feature, multiple heaps, pertains to different kinds of heaps. 254 The second feature, object containers, pertains to the organization of objects within the storage area. 255 The remaining features apply to different parts of the allocator design or implementation. 256 257 258 \subsection{Multiple Heaps} 259 \label{s:MultipleHeaps} 260 261 A single-threaded allocator has at most one thread and heap, while a multi-threaded allocator has potentially multiple threads and heaps. 262 The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity. 263 The spectrum ranges from multiple threads using a single heap, denoted as T:1 (see \VRef[Figure]{f:SingleHeap}), to multiple threads sharing multiple heaps, denoted as T:H (see \VRef[Figure]{f:SharedHeaps}), to one thread per heap, denoted as 1:1 (see \VRef[Figure]{f:PerThreadHeap}), which is almost back to a single-threaded allocator. 264 265 In the T:1 model, all threads allocate and deallocate objects from one heap. 266 Memory is obtained from the freed objects or reserved memory in the heap, or from the operating system (OS); 267 the heap may also return freed memory to the operating system. 268 The arrows indicate the direction memory conceptually moves for each kind of operation: allocation moves memory along the path from the heap/operating-system to the user application, while deallocation moves memory along the path from the application back to the heap/operating-system. 269 To safely handle concurrency, a single heap uses locking to provide mutual exclusion. 270 Whether using a single lock for all heap operations or fine-grained locking for different operations, a single heap may be a significant source of contention for programs with a large amount of memory allocation. 
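The serialization point in the T:1 model is visible in the following minimal sketch, where every allocation and deallocation funnels through a single heap lock; the fixed-size free-list, static reserve, and all names are illustrative simplifications rather than a real heap implementation.
\begin{lstlisting}
#include <pthread.h>
#include <stddef.h>

enum { BLOCK_SIZE = 64, NBLOCKS = 1024 };
typedef union Block { union Block * next; char pad[BLOCK_SIZE]; } Block;

static Block reserve[NBLOCKS];              // reserved memory (in lieu of sbrk)
static Block * free_list = NULL;            // free objects, shared by all threads
static size_t bump = 0;                     // next unallocated block in the reserve
static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

void * t1_malloc( void ) {
	pthread_mutex_lock( &heap_lock );       // single contention point for all threads
	Block * b = free_list;
	if ( b ) free_list = b->next;                       // reuse freed object
	else if ( bump < NBLOCKS ) b = &reserve[bump++];    // otherwise bump allocate
	pthread_mutex_unlock( &heap_lock );
	return b;                               // NULL when the reserve is exhausted
}
void t1_free( void * addr ) {
	if ( ! addr ) return;
	Block * b = addr;
	pthread_mutex_lock( &heap_lock );
	b->next = free_list;  free_list = b;    // push on the shared free list
	pthread_mutex_unlock( &heap_lock );
}
\end{lstlisting}
Every memory operation acquires @heap_lock@, which is exactly the contention the T:H and 1:1 models below attempt to reduce or eliminate.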
271
272 \begin{figure}
273 \centering
274 \subfigure[T:1]{
275 % \input{SingleHeap.pstex_t}
276 \input{SingleHeap}
277 \label{f:SingleHeap}
278 } % subfigure
279 \vrule
280 \subfigure[T:H]{
281 % \input{MultipleHeaps.pstex_t}
282 \input{SharedHeaps}
283 \label{f:SharedHeaps}
284 } % subfigure
285 \vrule
286 \subfigure[1:1]{
287 % \input{MultipleHeapsGlobal.pstex_t}
288 \input{PerThreadHeap}
289 \label{f:PerThreadHeap}
290 } % subfigure
291 \caption{Multiple Heaps, Thread:Heap Relationship}
292 \end{figure}
293
294 In the T:H model, each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
295 The decision on when to create a new heap and which heap a thread allocates from depends on the allocator design.
296 The performance goal is to reduce the ratio of heaps to threads.
297 In general, locking is required, since more than one thread may concurrently access a heap during its lifetime, but contention is reduced because fewer threads access a specific heap.
298 Two examples of this approach are:
299 \begin{description}
300 \item[heap pool:]
301 Multiple heaps are managed in a pool, starting with a single or a fixed number of heaps that increase\-/decrease depending on contention\-/space issues.
302 At creation, a thread is associated with a heap from the pool.
303 When the thread attempts an allocation and its associated heap is locked (contention), it scans for an unlocked heap in the pool.
304 If an unlocked heap is found, the thread changes its association and uses that heap.
305 If all heaps are locked, the thread may create a new heap, use it, and then place the new heap into the pool;
306 or the thread can block waiting for a heap to become available.
307 While the heap-pool approach often minimizes the number of extant heaps, the worst case can result in more heaps than threads;
308 \eg if the number of threads is large at startup with many allocations creating a large number of heaps and then the number of threads reduces.
309 \item[kernel threads:]
310 Each kernel thread (CPU) executing an application has its own heap.
311 A thread allocates/deallocates from/to the heap of the kernel thread on which it is executing.
312 Special precautions must be taken to handle or prevent the case where a thread is preempted during allocation/deallocation and restarts execution on a different kernel thread~\cite{Dice02}.
313 \end{description}
314
315 In the 1:1 model (thread heaps), each thread has its own heap, which eliminates contention and locking because no thread accesses another thread's heap.
316 An additional benefit of thread heaps is improved locality due to better memory layout.
317 As each thread only allocates from its heap, all objects for a thread are more consolidated in the storage area for that heap, better utilizing each CPU's cache and accessing fewer pages.
318 In contrast, the T:H model spreads each thread's objects over a larger area in different heaps.
319 Thread heaps can also eliminate allocator-induced active false-sharing, if memory is acquired so it does not overlap at crucial boundaries with memory for another thread's heap.
320 For example, assume page boundaries coincide with cache line boundaries; then, if a thread heap always acquires pages of memory, no two threads share a page or cache line unless pointers are passed among them.
321 Hence, allocator-induced active false-sharing in \VRef[Figure]{f:AllocatorInducedActiveFalseSharing} cannot occur because the memory for thread heaps never overlaps.
322
323 Threads using multiple heaps need to determine the specific heap to access for an allocation/deallocation, \ie association of thread to heap.
324 A number of techniques are used to establish this association.
325 The simplest approach is for each thread to have a pointer to its associated heap (or to administrative information that points to the heap), and this pointer changes if the association changes.
326 For threading systems with thread-local/specific storage, the heap pointer/data is created using this mechanism;
327 otherwise, the heap routines must use approaches like hashing the thread's stack-pointer or thread-id to find its associated heap.
328
329 The storage management for multiple heaps is more complex than for a single heap (see \VRef[Figure]{f:AllocatorComponents}).
330 \VRef[Figure]{f:MultipleHeapStorage} illustrates the general storage layout for multiple heaps.
331 Allocated and free objects are labelled by the thread or heap they are associated with.
332 (Links between free objects are removed for simplicity.)
333 The management information in the static zone must be able to locate all heaps in the dynamic zone.
334 The management information for the heaps must reside in the dynamic-allocation zone if the number of heaps is variable.
335 Each heap in the dynamic zone is composed of a list of free objects and a pointer to its reserved memory.
336 An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
337 Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
338 Another storage-management option is to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area.
339
340 \begin{figure}
341 \centering
342 \input{MultipleHeapsStorage}
343 \caption{Multiple-Heap Storage}
344 \label{f:MultipleHeapStorage}
345 \end{figure}
346
347 Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup.
348 The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory.
349 Additionally, objects freed to one heap cannot be reused by other threads, except indirectly by returning free memory to the operating system, which can be expensive.
350 (Depending on how the operating system provides dynamic storage to an application, returning storage may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.)
351 In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
352
353 Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space.
354 Now, each heap obtains and returns storage to/from the global heap rather than the operating system.
355 Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold.
356 Similarly, the global heap buffers this memory, obtaining and returning storage to/from the operating system as necessary. 357 The global heap does not have its own thread and makes no internal allocation requests; 358 instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations. 359 Hence, the worst-case cost of a memory operation includes all these steps. 360 With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the operating system to achieve the same goal and is independent of the mechanism used by the operating system to present dynamic memory to an address space. 361 362 However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking. 363 A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention. 364 In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap. 365 366 For thread heaps, when a kernel/user-thread terminates, there are two options for handling its heap. 367 First is to free all objects in the heap to the global heap and destroy the thread heap. 368 Second is to place the thread heap on a list of available heaps and reuse it for a new kernel/user thread in the future. 369 Destroying the thread heap immediately may reduce external fragmentation sooner, since all free objects are freed to the global heap and may be reused by other threads. 370 Alternatively, reusing thread heaps may improve performance if the inheriting thread makes similar allocation requests as the thread that previously held the thread heap. 371 372 As multiple heaps are a key feature for a multi-threaded allocator, all further discussion assumes multiple heaps with a global heap to eliminate direct interaction with the operating system. 373 374 375 \subsubsection{Ownership} 376 \label{s:Ownership} 377 378 \newterm{Ownership} defines which heap an object is returned-to on deallocation. 379 If a thread returns an object to the heap it was originally allocated from, the heap has ownership of its objects. 380 Alternatively, a thread can return an object to the heap it is currently allocating from, which can be any heap accessible during a thread's lifetime. 381 \VRef[Figure]{f:HeapsOwnership} shows an example of multiple heaps (minus the global heap) with and without ownership. 382 Again, the arrows indicate the direction memory conceptually moves for each kind of operation. 383 For the 1:1 thread:heap relationship, a thread only allocates from its own heap, and without ownership, a thread only frees objects to its own heap, which means the heap is private to its owner thread and does not require any locking, called a \newterm{private heap}. 384 For the T:1/T:H models with or without ownership or the 1:1 model with ownership, a thread may free objects to different heaps, which makes each heap publicly accessible to all threads, called a \newterm{public heap}. 
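The difference between the two deallocation policies reduces to which heap's free-list receives the object, as in this minimal sketch; the @Heap@/@Object@ layout, the per-object owner field, and the thread-local heap pointer are illustrative assumptions, not the design of any specific allocator.
\begin{lstlisting}
#include <pthread.h>

typedef struct Heap Heap;
typedef struct Object { Heap * owner; struct Object * next; } Object;
struct Heap {
	pthread_mutex_t lock;                   // needed once other threads can free here
	Object * free_list;
};

static _Thread_local Heap * my_heap;        // thread-to-heap association (see above)

static void push( Heap * h, Object * o ) {
	pthread_mutex_lock( &h->lock );
	o->next = h->free_list;  h->free_list = o;
	pthread_mutex_unlock( &h->lock );
}
// with ownership: return the object to the heap it was allocated from
void free_owned( Object * o ) { push( o->owner, o ); }
// without ownership: return the object to the current thread's heap
void free_unowned( Object * o ) { push( my_heap, o ); }
\end{lstlisting}
Note that in the 1:1 model without ownership the lock in @push@ is unnecessary, because only the owner thread ever touches its private heap.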
385
386 \begin{figure}
387 \centering
388 \subfigure[Ownership]{
389 \input{MultipleHeapsOwnership}
390 } % subfigure
391 \hspace{0.25in}
392 \subfigure[No Ownership]{
393 \input{MultipleHeapsNoOwnership}
394 } % subfigure
395 \caption{Heap Ownership}
396 \label{f:HeapsOwnership}
397 \end{figure}
398
399 \VRef[Figure]{f:MultipleHeapStorageOwnership} shows the effect of ownership on storage layout.
400 (For simplicity assume the heaps all use the same size of reserved storage.)
401 In contrast to \VRef[Figure]{f:MultipleHeapStorage}, each reserved area used by a heap only contains free storage for that particular heap because threads must return free objects back to the owner heap.
402 Again, because multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur.
403 The exception is for the 1:1 model if reserved memory does not overlap a cache-line because all allocated storage within a used area is associated with a single thread.
404 In this case, there is no allocator-induced active false-sharing (see \VRef[Figure]{f:AllocatorInducedActiveFalseSharing}) because two adjacent allocated objects used by different threads cannot share a cache-line.
405 As well, there is no allocator-induced passive false-sharing (see \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}) because free objects are returned to the owner heap, so two adjacent allocated objects are never reallocated to different threads.
406 % Passive false-sharing may still occur, if delayed ownership is used (see below).
407
408 \begin{figure}
409 \centering
410 \input{MultipleHeapsOwnershipStorage.pstex_t}
411 \caption{Multiple-Heap Storage with Ownership}
412 \label{f:MultipleHeapStorageOwnership}
413 \end{figure}
414
415 The main advantage of ownership is preventing heap blowup by returning storage for reuse by the owner heap.
416 Ownership prevents the classical problem where one thread performs allocations from one heap, passes the object to another thread, and the receiving thread deallocates the object to another heap, hence draining the initial heap of storage.
417 As well, allocator-induced passive false-sharing is eliminated because returning an object to its owner heap means it can never be allocated to another thread.
418 For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, the deallocation by Task$_2$ returns Object$_2$ back to Task$_1$'s heap;
419 hence a subsequent allocation by Task$_2$ cannot return this storage.
420 The disadvantage of ownership is that deallocating to another task's heap means heaps are no longer private, and hence require locks to provide safe concurrent access.
421
422 Object ownership can be immediate or delayed, meaning objects may be returned to the owner heap immediately at deallocation or after some delay.
423 A thread may delay the return by storing objects it does not own on a separate free list.
424 Delaying can improve performance by batching objects for return to their owner heap and possibly reallocating these objects if storage runs out on the current heap.
425 However, reallocation can result in passive false-sharing.
426 For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Task$_2$'s heap initially.
427 If Task$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
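Delayed ownership can be sketched by extending the previous example with a per-thread stash of foreign objects that is flushed to the owner heaps in batches; the batch size and names are again illustrative.
\begin{lstlisting}
#include <stddef.h>

enum { BATCH = 32 };                        // illustrative flush threshold
static _Thread_local Object * stash;        // objects owned by other heaps
static _Thread_local size_t stashed;

void free_delayed( Object * o ) {
	o->next = stash;  stash = o;            // thread-local push: no lock
	// the stash could also serve allocations, risking passive false-sharing
	if ( ++stashed < BATCH ) return;
	for ( Object * cur = stash, * next; cur != NULL; cur = next ) {
		next = cur->next;
		push( cur->owner, cur );            // one lock acquire per owner heap
	}
	stash = NULL;  stashed = 0;
}
\end{lstlisting}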
428 429 430 \subsection{Object Containers} 431 \label{s:ObjectContainers} 432 433 One approach for managing objects places headers/trailers around individual objects, meaning memory adjacent to the object is reserved for object-management information, as shown in \VRef[Figure]{f:ObjectHeaders}. 434 However, this approach leads to poor cache usage, since only a portion of the cache line is holding useful information from the program's perspective. 435 Spatial locality is also negatively affected. 436 While the header and object are together in memory, they are generally not accessed together; 437 \eg the object is accessed by the program when it is allocated, while the header is accessed by the allocator when the object is free. 438 This difference in usage patterns can lead to poor cache locality~\cite{Feng05}. 439 Additionally, placing headers on individual objects can lead to redundant management information. 440 For example, if a header stores only the object size, then all objects with the same size have identical headers. 441 442 \begin{figure} 443 \centering 444 \subfigure[Object Headers]{ 445 \input{ObjectHeaders} 446 \label{f:ObjectHeaders} 447 } % subfigure 448 \\ 449 \subfigure[Object Container]{ 450 \input{Container} 451 \label{f:ObjectContainer} 452 } % subfigure 453 \caption{Header Placement} 454 \label{f:HeaderPlacement} 455 \end{figure} 456 457 An alternative approach for managing objects factors common header/trailer information to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks} in~\cite{Berger00}), as in \VRef[Figure]{f:ObjectContainer}. 458 The header for the container holds information necessary for all objects in the container; 459 a trailer may also be used at the end of the container. 460 Similar to the approach described for thread heaps in \VRef{s:MultipleHeaps}, if container boundaries do not overlap with memory of another container at crucial boundaries and all objects in a container are allocated to the same thread, allocator-induced active false-sharing is avoided. 461 462 The difficulty with object containers lies in finding the object header/trailer given only the object address, since that is normally the only information passed to the deallocation operation. 463 One way to do this is to start containers on aligned addresses in memory, then truncate the lower bits of the object address to obtain the header address (or round up and subtract the trailer size to obtain the trailer address). 464 For example, if an object at address 0xFC28\,EF08 is freed and containers are aligned on 64\,KB (0x0001\,0000) addresses, then the container header is at 0xFC28\,0000. 465 466 Normally, a container has homogeneous objects of fixed size, with fixed information in the header that applies to all container objects (\eg object size and ownership). 467 This approach greatly reduces internal fragmentation since far fewer headers are required, and potentially increases spatial locality as a cache line or page holds more objects since the objects are closer together due to the lack of headers. 468 However, although similar objects are close spatially within the same container, different sized objects are further apart in separate containers. 469 Depending on the program, this may or may not improve locality. 470 If the program uses several objects from a small number of containers in its working set, then locality is improved since fewer cache lines and pages are required. 
471 If the program uses many containers, there is poor locality, as both caching and paging increase.
472 Another drawback is that external fragmentation may be increased since containers reserve space for objects that may never be allocated by the program, \ie there are often multiple containers for each size that are only partially full.
473 However, external fragmentation can be reduced by using small containers.
474
475 Containers with heterogeneous objects imply different headers describing them, which complicates the problem of locating a specific header solely by an address.
476 A couple of solutions can be used to implement containers with heterogeneous objects.
477 However, the problem with allowing objects of different sizes is that the number of objects, and therefore headers, in a single container is unpredictable.
478 One solution allocates headers at one end of the container, while allocating objects from the other end of the container;
479 when the headers meet the objects, the container is full.
480 Freed objects cannot be split or coalesced since this causes the number of headers to change.
481 The difficulty in this strategy remains in finding the header for a specific object;
482 in general, a search is necessary to find the object's header among the container headers.
483 A second solution combines the use of container headers and individual object headers.
484 Each object header stores the object's heterogeneous information, such as its size, while the container header stores the homogeneous information, such as the owner when using ownership.
485 This approach allows containers to hold different types of objects, but does not completely separate headers from objects.
486 The benefit of the container in this case is reducing redundant information by factoring it into the container header.
487
488 In summary, object containers trade off internal fragmentation for external fragmentation by isolating common administration information to remove/reduce internal fragmentation, but at the cost of external fragmentation as some portion of a container may not be used and this portion is unusable for other kinds of allocations.
489 A consequence of this tradeoff is its effect on spatial locality, which can produce positive or negative results depending on program access-patterns.
490
491
492 \subsubsection{Container Ownership}
493 \label{s:ContainerOwnership}
494
495 Without ownership, objects in a container are deallocated to the heap currently associated with the thread that frees the object.
496 Thus, different objects in a container may be on different heap free-lists (see \VRef[Figure]{f:ContainerNoOwnershipFreelist}).
497 With ownership, all objects in a container belong to the same heap (see \VRef[Figure]{f:ContainerOwnershipFreelist}), so ownership of an object is determined by the container owner.
498 If multiple threads can allocate/free/reallocate adjacent storage in the same heap, all forms of false sharing may occur.
499 Only with the 1:1 model and ownership are active and passive false-sharing avoided (see \VRef{s:Ownership}).
500 Passive false-sharing may still occur, if delayed ownership is used.
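With containers, the owner needed for such deallocation decisions is found from the object address alone, by exploiting the aligned-container placement described earlier; the following minimal sketch reuses the types from the earlier heap sketches, and the 64\,KB container size, header fields, and @container_of@ name are illustrative assumptions.
\begin{lstlisting}
#include <stdint.h>

enum { CONTAINER_SIZE = 64 * 1024 };        // containers aligned on 64 KB

typedef struct Container {
	Heap * owner;                           // homogeneous information for all objects
	size_t obj_size;
	size_t in_use;                          // allocated objects (used in the next sketch)
	Object * free_list;                     // local free-list (with ownership)
} Container;

// truncate the low-order address bits to find the container header,
// e.g., an object at 0xFC28EF08 maps to the header at 0xFC280000
static inline Container * container_of( void * addr ) {
	return (Container *)( (uintptr_t)addr & ~(uintptr_t)(CONTAINER_SIZE - 1) );
}
\end{lstlisting}
Because the container header now stores the owner, the per-object owner field from the earlier sketch becomes redundant, which is exactly the factoring of homogeneous information discussed above.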
501
502 \begin{figure}
503 \centering
504 \subfigure[No Ownership]{
505 \input{ContainerNoOwnershipFreelist}
506 \label{f:ContainerNoOwnershipFreelist}
507 } % subfigure
508 \vrule
509 \subfigure[Ownership]{
510 \input{ContainerOwnershipFreelist}
511 \label{f:ContainerOwnershipFreelist}
512 } % subfigure
513 \caption{Free-list Structure with Container Ownership}
514 \end{figure}
515
516 A fragmented heap has multiple containers that may be partially or completely free.
517 A completely free container can become reserved storage and be reset to allocate objects of a new size.
518 When a heap reaches a threshold of free objects, it moves some free storage to the global heap for reuse to prevent heap blowup.
519 Without ownership, when a heap frees objects to the global heap, individual objects must be passed, and placed on the global-heap's free-list.
520 Containers cannot be freed to the global heap unless completely free, because objects allocated from a partially full container may still be in use by the program.
521
522 When a container changes ownership, the ownership of all objects within it changes as well.
523 Moving a container involves moving all objects on the heap's free-list in that container to the new owner.
524 This approach can reduce contention for the global heap, since each request for objects from the global heap returns a container rather than individual objects.
525
526 Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
527 For example, in \VRef[Figure]{f:ContainerFalseSharing1}, a container being used by Task$_1$ changes ownership, through the global heap.
528 In \VRef[Figure]{f:ContainerFalseSharing2}, when Task$_2$ allocates an object from the newly acquired container, it is actively false-sharing even though no objects are passed among threads.
529 Note, once the object is freed by Task$_1$, no more false sharing can occur until the container changes ownership again.
530 To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
531 One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area.
532
533 \begin{figure}
534 \centering
535 \subfigure[]{
536 \input{ContainerFalseSharing1}
537 \label{f:ContainerFalseSharing1}
538 } % subfigure
539 \subfigure[]{
540 \input{ContainerFalseSharing2}
541 \label{f:ContainerFalseSharing2}
542 } % subfigure
543 \caption{Active False-Sharing using Containers}
544 \label{f:ActiveFalseSharingContainers}
545 \end{figure}
546
547 Using containers with ownership increases external fragmentation since a new container for a requested object size must be allocated separately for each thread requesting it.
548 In \VRef[Figure]{f:ExternalFragmentationContainerOwnership}, using object ownership allocates 80\% more space than without ownership.
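The restriction that containers only move when completely free reduces to a per-container count of allocated objects, as in the following sketch built on the previous one; @release_to_global_heap@ is a hypothetical routine that transfers or unmaps the container, and the code assumes the owner thread performs the free (private heap, so no lock).
\begin{lstlisting}
extern void release_to_global_heap( Container * c );   // hypothetical transfer/unmap

void container_free( void * addr ) {
	Container * c = container_of( addr );
	Object * o = addr;
	o->next = c->free_list;  c->free_list = o;   // push on the local free-list
	if ( --c->in_use == 0 )                      // completely free?
		release_to_global_heap( c );             // safe: no object can be actively shared
}
\end{lstlisting}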
549 550 \begin{figure} 551 \centering 552 \subfigure[No Ownership]{ 553 \input{ContainerNoOwnership} 554 } % subfigure 555 \\ 556 \subfigure[Ownership]{ 557 \input{ContainerOwnership} 558 } % subfigure 559 \caption{External Fragmentation with Container Ownership} 560 \label{f:ExternalFragmentationContainerOwnership} 561 \end{figure} 562 563 564 \subsubsection{Container Size} 565 \label{s:ContainerSize} 566 567 One way to control the external fragmentation caused by allocating a large container for a small number of requested objects is to vary the size of the container. 568 As described earlier, container boundaries need to be aligned on addresses that are a power of two to allow easy location of the header (by truncating lower bits). 569 Aligning containers in this manner also determines the size of the container. 570 However, the size of the container has different implications for the allocator. 571 572 The larger the container, the fewer containers are needed, and hence, the fewer headers need to be maintained in memory, improving both internal fragmentation and potentially performance. 573 However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation. 574 With smaller containers, not only are there more containers, but a second new problem arises where objects are larger than the container. 575 In general, large objects, \eg greater than 64\,KB, are allocated directly from the operating system and are returned immediately to the operating system to reduce long-term external fragmentation. 576 If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate. 577 Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header. 578 579 In order to find the container header when using different sized containers, a super container is used (see~\VRef[Figure]{f:SuperContainers}). 580 The super container spans several containers, contains a header with information for finding each container header, and starts on an aligned address. 581 Super-container headers are found using the same method used to find container headers by dropping the lower bits of an object address. 582 The containers within a super container may be different sizes or all the same size. 583 If the containers in the super container are different sizes, then the super-container header must be searched to determine the specific container for an object given its address. 584 If all containers in the super container are the same size, \eg 16KB, then a specific container header can be found by a simple calculation. 585 The free space at the end of a super container is used to allocate new containers. 586 587 \begin{figure} 588 \centering 589 \input{SuperContainers} 590 % \includegraphics{diagrams/supercontainer.eps} 591 \caption{Super Containers} 592 \label{f:SuperContainers} 593 \end{figure} 594 595 Minimal internal and external fragmentation is achieved by having as few containers as possible, each being as full as possible. 596 It is also possible to achieve additional benefit by using larger containers for popular small sizes, as it reduces the number of containers with associated headers. 597 However, this approach assumes it is possible for an allocator to determine in advance which sizes are popular. 
598 Keeping statistics on requested sizes allows the allocator to make a dynamic decision about which sizes are popular.
599 For example, after receiving a number of allocation requests for a particular size, that size is considered a popular request size and larger containers are allocated for that size.
600 If the decision is incorrect, larger containers than necessary are allocated that remain mostly unused.
601 A programmer may be able to inform the allocator about popular object sizes, using a mechanism like @mallopt@, in order to select an appropriate container size for each object size.
602
603
604 \subsubsection{Container Free-Lists}
605 \label{s:containersfreelists}
606
607 The container header allows an alternate approach for managing the heap's free-list.
608 Rather than maintain a global free-list throughout the heap (see~\VRef[Figure]{f:GlobalFreeListAmongContainers}), the containers are linked through their headers and only the local free objects within a container are linked together (see~\VRef[Figure]{f:LocalFreeListWithinContainers}).
609 Note, maintaining free lists within a container assumes all free objects in the container are associated with the same heap;
610 thus, this approach only applies to containers with ownership.
611
612 This alternate free-list approach can greatly reduce the complexity of moving all freed objects belonging to a container to another heap.
613 To move a container using a global free-list, as in \VRef[Figure]{f:GlobalFreeListAmongContainers}, the free list is first searched to find all objects within the container.
614 Each object is then removed from the free list and linked together to form a local free-list for the move to the new heap.
615 With local free-lists in containers, as in \VRef[Figure]{f:LocalFreeListWithinContainers}, the container is simply removed from one heap's free list and placed on the new heap's free list.
616 Thus, when using local free-lists, the operation of moving containers is reduced from $O(N)$ to $O(1)$.
617 The cost is adding information to a header, which increases the header size, and therefore internal fragmentation.
618
619 \begin{figure}
620 \centering
621 \subfigure[Global Free-List Among Containers]{
622 \input{FreeListAmongContainers}
623 \label{f:GlobalFreeListAmongContainers}
624 } % subfigure
625 \hspace{0.25in}
626 \subfigure[Local Free-List Within Containers]{
627 \input{FreeListWithinContainers}
628 \label{f:LocalFreeListWithinContainers}
629 } % subfigure
630 \caption{Container Free-List Structure}
631 \label{f:ContainerFreeListStructure}
632 \end{figure}
633
634 When all objects in the container are the same size, a single free-list is sufficient.
635 However, when objects in the container are of different sizes, the header needs a free list for each size class if a binning allocation algorithm is used, which can significantly increase the container-header size.
636 The alternative is to use a different allocation algorithm with a single free-list, such as a sequential-fit allocation-algorithm.
637
638
639 \subsection{Hybrid Private/Public Heap}
640 \label{s:HybridPrivatePublicHeap}
641
642 Section~\VRef{s:Ownership} discusses advantages and disadvantages of public heaps (T:H model and with ownership) and private heaps (thread heaps with ownership).
643 For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps (see~\VRef[Figure]{f:HybridPrivatePublicHeap}).
644 The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
645 In the hybrid approach, a task first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
646 Similarly, a task first deallocates an object to its private heap, and second to the public heap.
647 Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
648 Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantage unless the public heap provides the only interface to the global heap.
649 Finally, when a task frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing task's private heap for delayed ownership, which allows the freeing task to temporarily reuse an object before returning it to its owner or to batch objects for an owner heap into a single return.
650
651 \begin{figure}
652 \centering
653 \input{PrivatePublicHeaps.pstex_t}
654 \caption{Hybrid Private/Public Heap for Per-thread Heaps}
655 \label{f:HybridPrivatePublicHeap}
656 % \vspace{10pt}
657 % \input{RemoteFreeList.pstex_t}
658 % \caption{Remote Free-List}
659 % \label{f:RemoteFreeList}
660 \end{figure}
661
662 As mentioned, an implementation may have only one heap deal with the global heap, so the other heap can be simplified.
663 For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
664 To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
665 Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
666 Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
667
668 If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
669 In this scenario, the private heap must deallocate storage to the public heap after reaching a certain threshold (and then eventually from the public heap to the global heap) or heap blowup can occur.
670 If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
671
672 The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
673 Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
674 In many cases, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
675 For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
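A minimal sketch of the remote free-list just described follows: any thread may push under the lock, while the owner detaches the whole list when its private storage runs low; keeping a tail pointer makes the final splice $O(1)$, as noted above. The layout and names are illustrative, building on the earlier @Object@ sketch.
\begin{lstlisting}
typedef struct PrivateHeap {
	Object * free_list;                     // private side: owner thread only, no lock
	pthread_mutex_t remote_lock;            // public side reduced to one list
	Object * remote_head, * remote_tail;
} PrivateHeap;

void remote_free( PrivateHeap * h, Object * o ) {  // called by other threads
	pthread_mutex_lock( &h->remote_lock );
	o->next = h->remote_head;
	h->remote_head = o;
	if ( h->remote_tail == NULL ) h->remote_tail = o;   // first element since last clear
	pthread_mutex_unlock( &h->remote_lock );
}
void clear_remote( PrivateHeap * h ) {      // called by the owner thread
	pthread_mutex_lock( &h->remote_lock );
	Object * head = h->remote_head, * tail = h->remote_tail;
	h->remote_head = h->remote_tail = NULL;
	pthread_mutex_unlock( &h->remote_lock );
	if ( head ) {                           // O(1) splice onto the private free-list
		tail->next = h->free_list;
		h->free_list = head;
	}
}
\end{lstlisting}
The lock around the remote list guards exactly the kind of simple push/pop operation that the lock-free techniques in the next section can replace.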
676
677
678 \subsection{Allocation Buffer}
679 \label{s:AllocationBuffer}
680
681 An allocation buffer is reserved memory (see~\VRef{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
682 That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
683 Both the thread heaps and the global heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
684 The allocation buffer reduces contention and the number of global/operating-system calls.
685 For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
686
687 Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts.
688 Furthermore, to prevent heap blowup, objects should be reused before allocating a new allocation buffer.
689 Thus, allocation buffers are often allocated more frequently at program/thread start, and then their use often diminishes.
690
691 Using an allocation buffer with a thread heap avoids active false-sharing, since all objects in the allocation buffer are allocated to the same thread.
692 For example, if all objects sharing a cache line come from the same allocation buffer, then these objects are allocated to the same thread, avoiding active false-sharing.
693 Active false-sharing may still occur if objects are freed to the global heap and reused by another heap.
694
695 Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated.
696 A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or operating system.
697 The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer.
698
699 The unused part of a container, neither allocated nor freed, is an allocation buffer.
700 For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made.
701 This lazy method of constructing objects is beneficial in terms of paging and caching.
702 For example, although an entire container, possibly spanning several pages, is allocated from the operating system, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
703
704
705 \subsection{Lock-Free Operations}
706 \label{s:LockFreeOperations}
707
708 A lock-free algorithm guarantees safe concurrent-access to a data structure, so that at least one thread can make progress in the system, but an individual task has no bound on its execution, and hence, may starve~\cite[pp.~745--746]{Herlihy93}.
709 % A wait-free algorithm puts a finite bound on the number of steps any thread takes to complete an operation, so an individual task cannot starve
710 Lock-free operations can be used in an allocator to reduce or eliminate the use of locks.
711 Locks are a problem for high contention or if the thread holding the lock is preempted and other threads attempt to use that lock.
712 With respect to the heap, these situations are unlikely unless all threads make extremely high use of dynamic-memory allocation, which can be an indication of poor design.
713 Nevertheless, lock-free algorithms can reduce the number of context switches, since a thread does not yield/block while waiting for a lock;
714 on the other hand, a thread may busy-wait for an unbounded period.
715 Finally, lock-free implementations have greater complexity and hardware dependency.
716 Lock-free algorithms can be applied most easily to simple free-lists, \eg remote free-list, to allow lock-free insertion and removal from the head of a stack.
717 Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly harder.
718 Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator.
719
2 720
3 721 \noindent
-
doc/theses/mubeen_zulfiqar_MMath/intro.tex
r5c216b4 r1eec0b0 1 1 \chapter{Introduction} 2 2 3 4 \section{Introduction} 5 6 % Shared-memory multi-processor computers are ubiquitous and important for improving application performance. 7 % However, writing programs that take advantage of multiple processors is not an easy task~\cite{Alexandrescu01b}, \eg shared resources can become a bottleneck when increasing (scaling) threads. 8 % One crucial shared resource is program memory, since it is used by all threads in a shared-memory concurrent-program~\cite{Berger00}. 9 % Therefore, providing high-performance, scalable memory-management is important for virtually all shared-memory multi-threaded programs. 10 11 Memory management takes a sequence of program generated allocation/deallocation requests and attempts to satisfy them within a fixed-sized block of memory while minimizing the total amount of memory used. 12 A general-purpose dynamic-allocation algorithm cannot anticipate future allocation requests so its output is rarely optimal. 13 However, memory allocators do take advantage of regularities in allocation patterns for typical programs to produce excellent results, both in time and space (similar to LRU paging). 14 In general, allocators use a number of similar techniques, each optimizing specific allocation patterns. 15 Nevertheless, memory allocators are a series of compromises, occasionally with some static or dynamic tuning parameters to optimize specific program-request patterns. 16 17 18 \subsection{Memory Structure} 19 \label{s:MemoryStructure} 20 21 \VRef[Figure]{f:ProgramAddressSpace} shows the typical layout of a program's address space divided into the following zones (right to left): static code/data, dynamic allocation, dynamic code/data, and stack, with free memory surrounding the dynamic code/data~\cite{memlayout}. 22 Static code and data are placed into memory at load time from the executable and are fixed-sized at runtime. 23 Dynamic-allocation memory starts empty and grows/shrinks as the program dynamically creates/deletes variables with independent lifetime. 24 The programming-language's runtime manages this area, where management complexity is a function of the mechanism for deleting variables. 25 Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex especially in a multi-threaded program~\cite{Huang06}. 26 However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control. 27 Stack memory is managed by the program call-mechanism using simple LIFO management, which works well for sequential programs. 28 For multi-threaded programs (and coroutines), a new stack is created for each thread; 29 these thread stacks are commonly created in dynamic-allocation memory. 30 This thesis focuses on management of the dynamic-allocation memory. 31 32 \begin{figure} 33 \centering 34 \input{AddressSpace} 35 \vspace{-5pt} 36 \caption{Program Address Space Divided into Zones} 37 \label{f:ProgramAddressSpace} 38 \end{figure} 39 40 41 \subsection{Dynamic Memory-Management} 42 \label{s:DynamicMemoryManagement} 43 44 Modern programming languages manage dynamic-allocation memory in different ways. 45 Some languages, such as Lisp~\cite{CommonLisp}, Java~\cite{Java}, Go~\cite{Go}, Haskell~\cite{Haskell}, provide explicit allocation but \emph{implicit} deallocation of data through garbage collection~\cite{Wilson92}. 
46 In general, garbage collection supports memory compaction, where dynamic (live) data is moved during runtime to better utilize space.
47 However, moving data requires finding pointers to it and updating them to reflect new data locations.
48 Programming languages such as C~\cite{C}, \CC~\cite{C++}, and Rust~\cite{Rust} provide the programmer with explicit allocation \emph{and} deallocation of data.
49 These languages cannot find and subsequently move live data because pointers can be created to any storage zone, including internal components of allocated objects, and may contain temporary invalid values generated by pointer arithmetic.
50 Attempts have been made to perform quasi garbage collection in C/\CC~\cite{Boehm88}, but it is a compromise.
51 This thesis only examines dynamic memory-management with \emph{explicit} deallocation.
52 While garbage collection and compaction are not part of this work, many of the results are applicable to the allocation phase in any memory-management approach.
53
54 Most programs use a general-purpose allocator, often the one provided implicitly by the programming-language's runtime.
55 When this allocator proves inadequate, programmers often write specialized allocators for specific needs.
56 C and \CC allow easy replacement of the default memory allocator with an alternative specialized or general-purpose memory-allocator.
57 (Jikes RVM MMTk~\cite{MMTk} provides a similar generalization for the Java virtual machine.)
58 However, high-performance memory-allocators for kernel and user multi-threaded programs are still being designed and improved.
59 For this reason, several alternative general-purpose allocators have been written for C/\CC with the goal of scaling in a multi-threaded program~\cite{Berger00,mtmalloc,streamflow,tcmalloc}.
60 This work examines the design of high-performance allocators for use by kernel and user multi-threaded applications written in C/\CC.
61
62
63 \subsection{Contributions}
64 \label{s:Contributions}
65
66 This work provides the following contributions in the area of concurrent dynamic allocation:
67 \begin{enumerate}
68 \item
69 Implementation of a new stand-alone concurrent memory allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
70
71 \item
72 Adopt the return of @nullptr@ for a zero-sized allocation, rather than an actual memory address, both of which can be passed to @free@.
73 Most allocators use @nullptr@ to indicate an allocation failure, such as full memory;
74 hence the need to return an alternate value for a zero-sized allocation.
75 The alternative is to abort the program on allocation failure.
76 In theory, notifying the programmer of a failure allows recovery;
77 in practice, it is almost impossible to gracefully recover from allocation failure, especially full memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen.
78
79 \item
80 Extend the standard C heap functionality by preserving with each allocation its original request size (versus the amount allocated due to bucketing), whether an allocation is zero filled, and the allocation alignment.
81
82 \item
83 Use the zero fill and alignment as \emph{sticky} properties for @realloc@, to realign existing storage, or preserve existing zero-fill and alignment when storage is copied.
84 Without this extension, it is unsafe to @realloc@ storage initially allocated with zero-fill/alignment as these properties are not preserved when copying.
85 This silent generation of a problem is unintuitive to programmers and difficult to locate because it is transient.
86
87 \item
88 Provide additional heap operations to complete programmer expectations with respect to accessing different allocation properties.
89 \begin{itemize}
90 \item
91 @resize( oaddr, size )@ re-purposes an old allocation for a new type \emph{without} preserving fill or alignment.
92 \item
93 @resize( oaddr, alignment, size )@ re-purposes an old allocation with new alignment but \emph{without} preserving fill.
94 \item
95 @realloc( oaddr, alignment, size )@ same as the previous @realloc@ but adding or changing alignment.
96 \item
97 @aalloc( dim, elemSize )@ same as @calloc@ except memory is \emph{not} zero filled.
98 \item
99 @amemalign( alignment, dim, elemSize )@ same as @aalloc@ with memory alignment.
100 \item
101 @cmemalign( alignment, dim, elemSize )@ same as @calloc@ with memory alignment.
102 \end{itemize}
103
104 \item
105 Provide additional query operations to access information about an allocation:
106 \begin{itemize}
107 \item
108 @malloc_alignment( addr )@ returns the alignment of the allocation pointed-to by @addr@.
109 If the allocation is not aligned or @addr@ is the @nulladdr@, the minimal alignment is returned.
110 \item
111 @malloc_zero_fill( addr )@ returns a boolean result indicating if the memory pointed-to by @addr@ is allocated with zero fill, \eg by @calloc@/@cmemalign@.
112 \item
113 @malloc_size( addr )@ returns the size of the memory allocation pointed-to by @addr@.
114 \item
115 @malloc_usable_size( addr )@ returns the usable size of the memory pointed-to by @addr@, \ie the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
116 \end{itemize}
117
118 \item
119 Provide complete and fast allocation statistics to help understand program behaviour:
120 \begin{itemize}
121 \item
122 @malloc_stats()@ prints memory-allocation statistics on the file descriptor set by @malloc_stats_fd@.
123 \item
124 @malloc_info( options, stream )@ prints memory-allocation statistics as an XML string on the specified @stream@.
125 \item
126 @malloc_stats_fd( fd )@ sets the file-descriptor number for printing memory-allocation statistics (default @STDERR_FILENO@).
127 This file descriptor is used implicitly by @malloc_stats@ and @malloc_info@.
128 \end{itemize}
129
130 \item
131 Provide mostly contention-free allocation and free operations via a heap-per-kernel-thread implementation.
132
133 \item
134 Provide extensive contention-free runtime checks to validate allocation operations and identify the amount of unfreed storage at program termination.
135
136 \item
137 Build 4 different versions of the allocator:
138 \begin{itemize}
139 \item
140 static or dynamic linking
141 \item
142 statistics/debugging (testing) or no statistics/debugging (performance)
143 \end{itemize}
144 A program may link to any of these 4 versions of the allocator often without recompilation.
145 (It is possible to separate statistics and debugging, giving 8 different versions.)
146
147 \item
148 A micro-benchmark test-suite for comparing allocators rather than relying on a suite of arbitrary programs.
149 These micro-benchmarks have adjustment knobs to simulate allocation patterns hard-coded into arbitrary test programs.
150 \end{enumerate}
151
152 \begin{comment}
3 153 \noindent
4 154 ====================
…
26 176
27 177 \section{Introduction}
28 Dynamic memory allocation and management is one of the core features of C. It gives programmer the freedom to allocate, free, use, and manage dynamic memory himself. The programmer is not given the complete control of the dynamic memory management instead an interface of memory allocator is given to the progrmmer that can be used to allocate/free dynamic memory for the application's use.
29
30 Memory allocator is a layer between thr programmer and the system. Allocator gets dynamic memory from the system in heap/mmap area of application storage and manages it for programmer's use.
31
32 GNU C Library (FIX ME: cite this) provides an interchangeable memory allocator that can be replaced with a custom memory allocator that supports required features and fulfills application's custom needs. It also allows others to innovate in memory allocation and design their own memory allocator. GNU C Library has set guidelines that should be followed when designing a stand alone memory allocator. GNU C Library requires new memory allocators to have atlease following set of functions in their allocator's interface:
178 Dynamic memory allocation and management is one of the core features of C. It gives the programmer the freedom to allocate, free, use, and manage dynamic memory. The programmer is not given complete control of dynamic memory management; instead, a memory-allocator interface is given to the programmer that can be used to allocate/free dynamic memory for the application's use.
179
180 A memory allocator is a layer between the programmer and the system. The allocator gets dynamic memory from the system in the heap/mmap area of application storage and manages it for the programmer's use.
181
182 GNU C Library (FIX ME: cite this) provides an interchangeable memory allocator that can be replaced with a custom memory allocator that supports required features and fulfills an application's custom needs. It also allows others to innovate in memory allocation and design their own memory allocator. GNU C Library has set guidelines that should be followed when designing a stand-alone memory allocator. GNU C Library requires new memory allocators to have at least the following set of functions in their allocator's interface:
33 183
34 184 \begin{itemize}
…
43 193 \end{itemize}
44 194
45 In addition to the above functions, GNU C Library also provides some more functions to increase the usability of the dynamic memory allocator. Most stand alone allocators also provide all or some of the above additional functions.
195 In addition to the above functions, GNU C Library also provides some more functions to increase the usability of the dynamic memory allocator. Most stand-alone allocators also provide all or some of the above additional functions.
46 196
47 197 \begin{itemize}
…
60 210 \end{itemize}
61 211
62 With the rise of concurrent applications, memory allocators should be able to fulfill dynamic memory requests from multiple threads in parallel without causing contention on shared resources. There needs to be a set of a standard benchmarks that can be used to evaluate an allocator's performance in different scenerios.
212 With the rise of concurrent applications, memory allocators should be able to fulfill dynamic memory requests from multiple threads in parallel without causing contention on shared resources. There needs to be a set of standard benchmarks that can be used to evaluate an allocator's performance in different scenarios.
63 213
64 214 \section{Research Objectives}
…
69 219 Design a lightweight concurrent memory allocator with added features and usability that are currently not present in other memory allocators.
70 220 \item
71 Design a suite of benchmarks to evalute multiple aspects of a memory allocator.
221 Design a suite of benchmarks to evaluate multiple aspects of a memory allocator.
72 222 \end{itemize}
73 223
74 224 \section{An outline of the thesis}
75 225 LAST FIX ME: add outline at the end
226 \end{comment}
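The extended allocation interface enumerated in the contributions above can be exercised as in the following minimal C sketch. The prototypes for @aalloc@, @cmemalign@, @resize@, @malloc_zero_fill@, @malloc_size@, and @malloc_stats@ are reconstructed from the signatures listed in the contribution items purely for illustration; the exact parameter and return types are assumptions, not declarations taken from the allocator's actual header.

\begin{lstlisting}
#include <stdbool.h>                              // bool
#include <stdio.h>                                // printf
#include <stdlib.h>                               // malloc, realloc, free

// Assumed prototypes, reconstructed from the contribution list (illustration only).
extern void * aalloc( size_t dim, size_t elemSize );        // calloc, but NOT zero filled
extern void * cmemalign( size_t alignment, size_t dim, size_t elemSize ); // calloc with alignment
extern void * resize( void * oaddr, size_t size );          // re-purpose storage, no sticky properties
extern size_t malloc_alignment( void * addr );
extern bool malloc_zero_fill( void * addr );
extern size_t malloc_size( void * addr );
extern size_t malloc_usable_size( void * addr );
extern void malloc_stats( void );                 // print statistics on fd set by malloc_stats_fd

int main( void ) {
	void * z = malloc( 0 );                       // zero-sized allocation: null pointer, not a real address
	free( z );                                    // freeing the null pointer is always safe

	int * a = aalloc( 10, sizeof( int ) );        // like calloc( 10, sizeof(int) ), but NOT zero filled
	int * c = cmemalign( 64, 10, sizeof( int ) ); // zero filled AND 64-byte aligned

	c = realloc( c, 20 * sizeof( int ) );         // sticky properties: zero fill and alignment preserved
	printf( "alignment %zu, zero filled %d\n", malloc_alignment( c ), malloc_zero_fill( c ) );
	printf( "request %zu <= usable %zu\n", malloc_size( c ), malloc_usable_size( c ) );

	a = resize( a, 50 * sizeof( int ) );          // re-purposed storage: fill/alignment NOT preserved
	free( a );
	free( c );
	malloc_stats();                               // dump allocation statistics (default STDERR_FILENO)
}
\end{lstlisting}

Under glibc, the same sequence behaves differently: @malloc( 0 )@ may return a unique non-null address, and @realloc@ preserves neither zero fill nor alignment of the original allocation, which is precisely the gap the sticky-property and query extensions above close.
-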
doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib
r5c216b4 r1eec0b0
34 34 year = "2008"
35 35 }
36
37 @article{Sleator85,
38 author = {Sleator, Daniel Dominic and Tarjan, Robert Endre},
39 title = {Self-Adjusting Binary Search Trees},
40 journal = jacm,
41 volume = 32,
42 number = 3,
43 year = 1985,
44 issn = {0004-5411},
45 pages = {652-686},
46 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/3828.3835},
47 address = {New York, NY, USA},
48 }
49
50 @article{Berger00,
51 author = {Emery D. Berger and Kathryn S. McKinley and Robert D. Blumofe and Paul R. Wilson},
52 title = {Hoard: A Scalable Memory Allocator for Multithreaded Applications},
53 booktitle = {International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX)},
54 journal = sigplan,
55 volume = 35,
56 number = 11,
57 month = nov,
58 year = 2000,
59 pages = {117-128},
60 note = {International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX)},
61 }
62
63 @inproceedings{berger02reconsidering,
64 author = {Emery D. Berger and Benjamin G. Zorn and Kathryn S. McKinley},
65 title = {Reconsidering Custom Memory Allocation},
66 booktitle = {Proceedings of the 17th ACM SIGPLAN Conference on Object-Oriented Programming: Systems, Languages, and Applications (OOPSLA) 2002},
67 month = nov,
68 year = 2002,
69 location = {Seattle, Washington, USA},
70 publisher = {ACM},
71 address = {New York, NY, USA},
72 }
73
74 @article{larson99memory,
75 author = {Per-{\AA}ke Larson and Murali Krishnan},
76 title = {Memory Allocation for Long-Running Server Applications},
77 journal = sigplan,
78 volume = 34,
79 number = 3,
80 pages = {176-185},
81 year = 1999,
82 url = {http://citeseer.ist.psu.edu/article/larson98memory.html}
83 }
84
85 @techreport{gidpt04,
86 author = {Anders Gidenstam and Marina Papatriantafilou and Philippas Tsigas},
87 title = {Allocating Memory in a Lock-Free Manner},
88 number = {2004-04},
89 institution = {Computing Science},
90 address = {Chalmers University of Technology},
91 year = 2004,
92 url = {http://citeseer.ist.psu.edu/gidenstam04allocating.html}
93 }
94
95 @phdthesis{berger02thesis,
96 author = {Emery Berger},
97 title = {Memory Management for High-Performance Applications},
98 school = {The University of Texas at Austin},
99 year = 2002,
100 month = aug,
101 url = {http://citeseer.ist.psu.edu/article/berger02memory.html}
102 }
103
104 @misc{sgimisc,
105 author = {SGI},
106 title = {The Standard Template Library for {C++}},
107 note = {\textsf{www.sgi.com/\-tech/\-stl/\-Allocators.html}},
108 }
109
110 @misc{dlmalloc,
111 author = {Doug Lea},
112 title = {dlmalloc version 2.8.4},
113 month = may,
114 year = 2009,
115 note = {\textsf{ftp://g.oswego.edu/\-pub/\-misc/\-malloc.c}},
116 }
117
118 @misc{ptmalloc2,
119 author = {Wolfram Gloger},
120 title = {ptmalloc version 2},
121 month = jun,
122 year = 2006,
123 note = {\textsf{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}},
124 }
125
126 @misc{nedmalloc,
127 author = {Niall Douglas},
128 title = {nedmalloc version 1.06 Beta},
129 month = jan,
130 year = 2010,
131 note = {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}},
132 }
133
134 @misc{hoard,
135 author = {Emery D. Berger},
136 title = {hoard version 3.8},
137 month = nov,
138 year = 2009,
139 note = {\textsf{http://www.cs.umass.edu/\-$\sim$emery/\-hoard/\-hoard-3.8/\-source/hoard-38.tar.gz}},
140 }
141
142 @comment{mtmalloc,
143 author = {Greg Nakhimovsky},
144 title = {Improving Scalability of Multithreaded Dynamic Memory Allocation},
145 journal = {Dr. Dobb's},
146 month = jul,
147 year = 2001,
148 url = {http://www.ddj.com/mobile/184404685?pgno=1}
149 }
150
151 @misc{mtmalloc,
152 key = {mtmalloc},
153 title = {mtmalloc.c},
154 year = 2009,
155 note = {\textsf{http://src.opensolaris.org/\-source/\-xref/\-onnv/\-onnv-gate/\-usr/\-src/\-lib/\-libmtmalloc/\-common/\-mtmalloc.c}},
156 }
157
158 @misc{tcmalloc,
159 author = {Sanjay Ghemawat and Paul Menage},
160 title = {tcmalloc version 1.5},
161 month = jan,
162 year = 2010,
163 note = {\textsf{http://google-perftools.\-googlecode.\-com/\-files/\-google-perftools-1.5.tar.gz}},
164 }
165
166 @inproceedings{streamflow,
167 author = {Scott Schneider and Christos D. Antonopoulos and Dimitrios S. Nikolopoulos},
168 title = {Scalable Locality-Conscious Multithreaded Memory Allocation},
169 booktitle = {International Symposium on Memory Management (ISMM'06)},
170 month = jun,
171 year = 2006,
172 pages = {84-94},
173 location = {Ottawa, Ontario, Canada},
174 publisher = {ACM},
175 address = {New York, NY, USA},
176 }
177
178 @misc{streamflowweb,
179 author = {Scott Schneider and Christos Antonopoulos and Dimitrios Nikolopoulos},
180 title = {Streamflow},
181 note = {\textsf{http://people.cs.vt.edu/\-\char`\~scschnei/\-streamflow}},
182 }
183
184 @inproceedings{Blumofe94,
185 author = {R. Blumofe and C. Leiserson},
186 title = {Scheduling Multithreaded Computations by Work Stealing},
187 booktitle = {Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, New Mexico.},
188 pages = {356-368},
189 year = 1994,
190 month = nov,
191 url = {http://citeseer.ist.psu.edu/article/blumofe94scheduling.html}
192 }
193
194 @article{Johnstone99,
195 author = {Mark S. Johnstone and Paul R. Wilson},
196 title = {The Memory Fragmentation Problem: Solved?},
197 journal = sigplan,
198 volume = 34,
199 number = 3,
200 pages = {26-36},
201 year = 1999,
202 }
203
204 @inproceedings{Grunwald93,
205 author = {Dirk Grunwald and Benjamin G. Zorn and Robert Henderson},
206 title = {Improving the Cache Locality of Memory Allocation},
207 booktitle = {{SIGPLAN} Conference on Programming Language Design and Implementation},
208 pages = {177-186},
209 year = 1993,
210 url = {http://citeseer.ist.psu.edu/grunwald93improving.html}
211 }
212
213 @inproceedings{Wilson95,
214 author = {Wilson, Paul R. and Johnstone, Mark S. and Neely, Michael and Boles, David},
215 title = {Dynamic Storage Allocation: A Survey and Critical Review},
216 booktitle = {Proc. Int. Workshop on Memory Management},
217 address = {Kinross Scotland, UK},
218 year = 1995,
219 url = {http://citeseer.ist.psu.edu/wilson95dynamic.html}
220 }
221
222 @inproceedings{Siebert00,
223 author = {Fridtjof Siebert},
224 title = {Eliminating External Fragmentation in a Non-moving Garbage Collector for Java},
225 booktitle = {CASES '00: Proceedings of the 2000 international conference on Compilers, architecture, and synthesis for embedded systems},
226 year = 2000,
227 isbn = {1-58113-338-3},
228 pages = {9-17},
229 location = {San Jose, California, United States},
230 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/354880.354883},
231 publisher = {ACM Press},
232 address = {New York, NY, USA}
233 }
234
235 @inproceedings{Lim98,
236 author = {Tian F. Lim and Przemyslaw Pardyak and Brian N. Bershad},
237 title = {A Memory-Efficient Real-Time Non-copying Garbage Collector},
238 booktitle = {ISMM '98: Proceedings of the 1st international symposium on Memory management},
239 year = 1998,
240 isbn = {1-58113-114-3},
241 pages = {118-129},
242 location = {Vancouver, British Columbia, Canada},
243 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/286860.286873},
244 publisher = {ACM Press},
245 address = {New York, NY, USA}
246 }
247
248 @article{Chang01,
249 author = {J. Morris Chang and Woo Hyong Lee and Witawas Srisa-an},
250 title = {A Study of the Allocation Behavior of {C++} Programs},
251 journal = {J. Syst. Softw.},
252 volume = 57,
253 number = 2,
254 year = 2001,
255 issn = {0164-1212},
256 pages = {107-118},
257 doi = {http://dx.doi.org/10.1016/S0164-1212(00)00122-9},
258 publisher = {Elsevier Science Inc.},
259 address = {New York, NY, USA}
260 }
261
262 @article{Herlihy93,
263 author = {Maurice Herlihy},
264 title = {A Methodology for Implementing Highly Concurrent Data Objects},
265 journal = toplas,
266 volume = 15,
267 number = 5,
268 year = 1993,
269 issn = {0164-0925},
270 pages = {745-770},
271 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/161468.161469},
272 publisher = {ACM Press},
273 address = {New York, NY, USA}
274 }
275
276 @article{Denning05,
277 author = {Peter J. Denning},
278 title = {The Locality Principle},
279 journal = cacm,
280 volume = 48,
281 number = 7,
282 year = 2005,
283 issn = {0001-0782},
284 pages = {19-24},
285 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/1070838.1070856},
286 publisher = {ACM Press},
287 address = {New York, NY, USA}
288 }
289
290 @misc{wilson-locality,
291 author = {Paul R. Wilson},
292 title = {Locality of Reference, Patterns in Program Behavior, Memory Management, and Memory Hierarchies},
293 url = {http://citeseer.ist.psu.edu/337869.html}
294 }
295
296 @inproceedings{Feng05,
297 author = {Yi Feng and Emery D. Berger},
298 title = {A Locality-Improving Dynamic Memory Allocator},
299 booktitle = {Proceedings of the 2005 Workshop on Memory System Performance},
300 location = {Chicago, Illinois},
301 publisher = {ACM},
302 address = {New York, NY, USA},
303 month = jun,
304 year = 2005,
305 pages = {68-77},
306 }
307
308 @inproceedings{grunwald-locality,
309 author = {Dirk Grunwald and Benjamin Zorn and Robert Henderson},
310 title = {Improving the Cache Locality of Memory Allocation},
311 booktitle = {PLDI '93: Proceedings of the ACM SIGPLAN 1993 conference on Programming language design and implementation},
312 year = 1993,
313 isbn = {0-89791-598-4},
314 pages = {177-186},
315 location = {Albuquerque, New Mexico, United States},
316 doi = {http://doi.acm.org.proxy.lib.uwaterloo.ca/10.1145/155090.155107},
317 publisher = {ACM Press},
318 address = {New York, NY, USA}
319 }
320
321 @article{Alexandrescu01b,
322 author = {Andrei Alexandrescu},
323 title = {{volatile} -- Multithreaded Programmer's Best Friend},
324 journal = {Dr. Dobb's},
325 month = feb,
326 year = 2001,
327 url = {http://www.ddj.com/cpp/184403766}
328 }
329
330 @article{Attardi03,
331 author = {Joseph Attardi and Neelakanth Nadgir},
332 title = {A Comparison of Memory Allocators in Multiprocessors},
333 journal = {Sun Developer Network},
334 month = jun,
335 year = 2003,
336 note = {\textsf{http://developers.sun.com/\-solaris/\-articles/\-multiproc/\-multiproc.html}},
337 }
338
339 @unpublished{memlayout,
340 author = {Peter Jay Salzman},
341 title = {Memory Layout and the Stack},
342 journal = {Using GNU's GDB Debugger},
343 note = {\textsf{http://dirac.org/\-linux/\-gdb/\-02a-Memory\_Layout\_And\_The\_Stack.php}},
344 }
345
346 @unpublished{Ferguson07,
347 author = {Justin N. Ferguson},
348 title = {Understanding the Heap by Breaking It},
349 note = {\textsf{https://www.blackhat.com/\-presentations/\-bh-usa-07/Ferguson/\-Whitepaper/\-bh-usa-07-ferguson-WP.pdf}},
350 }
351
352 @inproceedings{Huang06,
353 author = {Xianglong Huang and Brian T Lewis and Kathryn S McKinley},
354 title = {Dynamic Code Management: Improving Whole Program Code Locality in Managed Runtimes},
355 booktitle = {VEE '06: Proceedings of the 2nd international conference on Virtual execution environments},
356 year = 2006,
357 isbn = {1-59593-332-6},
358 pages = {133-143},
359 location = {Ottawa, Ontario, Canada},
360 doi = {http://doi.acm.org/10.1145/1134760.1134779},
361 publisher = {ACM Press},
362 address = {New York, NY, USA}
363 }
364
365 @inproceedings{Herlihy03,
366 author = {M. Herlihy and V. Luchangco and M. Moir},
367 title = {Obstruction-free Synchronization: Double-ended Queues as an Example},
368 booktitle = {Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems},
369 year = 2003,
370 month = may,
371 url = {http://www.cs.brown.edu/~mph/publications.html}
372 }
373
374 @techreport{Detlefs93,
375 author = {David L. Detlefs and Al Dosser and Benjamin Zorn},
376 title = {Memory Allocation Costs in Large {C} and {C++} Programs},
377 number = {CU-CS-665-93},
378 institution = {University of Colorado},
379 address = {130 Lytton Avenue, Palo Alto, CA 94301 and Campus Box 430, Boulder, CO 80309},
380 year = 1993,
381 url = {http://citeseer.ist.psu.edu/detlefs93memory.html}
382 }
383
384 @inproceedings{Oyama99,
385 author = {Y. Oyama and K. Taura and A. Yonezawa},
386 title = {Executing Parallel Programs With Synchronization Bottlenecks Efficiently},
387 booktitle = {Proceedings of International Workshop on Parallel and Distributed Computing for Symbolic and Irregular Applications (PDSIA '99)},
388 year = {1999},
389 pages = {182--204},
390 publisher = {World Scientific},
391 address = {Sendai, Japan},
392 }
393
394 @inproceedings{Dice02,
395 author = {Dave Dice and Alex Garthwaite},
396 title = {Mostly Lock-Free Malloc},
397 booktitle = {Proceedings of the 3rd international symposium on Memory management (ISMM'02)},
398 month = jun,
399 year = 2002,
400 pages = {163-174},
401 location = {Berlin, Germany},
402 publisher = {ACM},
403 address = {New York, NY, USA},
404 }
-
doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex
r5c216b4 r1eec0b0
85 85 \usepackage{comment} % Removes large sections of the document.
86 86 \usepackage{tabularx}
87 \usepackage{subfigure}
87 88
88 89 % Hyperlinks make it very easy to navigate an electronic document.
…
168 169 %\usepackageinput{common}
169 170 \CFAStyle % CFA code-style for all languages
170 \lstset{basicstyle=\linespread{0.9}\tt} % CFA typewriter font
171 \lstset{basicstyle=\linespread{0.9}\sf} % CFA sans-serif font
172 \newcommand{\uC}{$\mu$\CC}
171 173 \newcommand{\PAB}[1]{{\color{red}PAB: #1}}
172 174
…
224 226 \addcontentsline{toc}{chapter}{\textbf{References}}
225 227
226 \bibliography{ uw-ethesis,pl}
228 \bibliography{pl,uw-ethesis}
227 229 % Tip: You can create multiple .bib files to organize your references.
228 230 % Just list them all in the \bibliography command, separated by commas (no spaces).