Changes in / [bf8b77e:b0d0285]


Location: doc/theses/mubeen_zulfiqar_MMath
Files: 1 deleted, 2 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    rbf8b77e rb0d0285  
    11\chapter{Allocator}
    22
    3 \section{uHeap}
    4 uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills the GNU C Library requirements (FIX ME: cite requirements).
    5 
    6 The objective of uHeap's new design was to fulfill the following requirements:
    7 \begin{itemize}
    8 \item It should be concurrent and thread-safe for multi-threaded programs.
     3\noindent
     4====================
     5
     6Writing Points:
     7\begin{itemize}
     8\item
     9Objective of uHeapLmmm.
     10\item
     11Design philosophy.
     12\item
     13Background and previous design of uHeapLmmm.
     14\item
     15Distributed design of uHeapLmmm.
     16
     17----- SHOULD WE GIVE IMPLEMENTATION DETAILS HERE? -----
     18
     19\PAB{Maybe. There might be an Implementation chapter.}
     20\item
     21figure.
     22\item
     23Advantages of distributed design.
     24\end{itemize}
     25
     26The new features added to uHeapLmmm (incl. @malloc_size@ routine)
     27\CFA alloc interface with examples.
     28
     29\begin{itemize}
     30\item
     31Why did we need it?
     32\item
     33The added benefits.
     34\end{itemize}
     35
     36
     37%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     38%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     39%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% uHeapLmmm Design
     40%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     41%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     42
     43\section{Objective of uHeapLmmm}
     44uHeapLmmm is a lightweight memory allocator. The objective behind uHeapLmmm is to design a minimal concurrent memory allocator that has new features and also fulfills the GNU C Library requirements (FIX ME: cite requirements).
     45
     46\subsection{Design philosophy}
     47The objective of uHeapLmmm's new design was to fulfill the following requirements:
     48\begin{itemize}
     49\item It should be concurrent to be used in multi-threaded programs.
    950\item It should avoid global locks on resources shared across all threads as much as possible.
    1051\item Its performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
     
    1455%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1556
    16 \section{Design choices for uHeap}
    17 uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, the following two designs of uHeap were proposed:
    18 
    19 \paragraph{Design 1: Centralized}
     57\section{Background and previous design of uHeapLmmm}
     58uHeapLmmm was originally designed by X in X (FIX ME: add original author after confirming with Peter).
     59(FIX ME: make and add figure of previous design with description)
     60
     61%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     62
     63\section{Distributed design of uHeapLmmm}
     64uHeapLmmm's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, the following two designs of uHeapLmmm were proposed:
     65
     66\paragraph{Design 1: Decentralized}
     67Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
     68Kernel threads (KT) are assigned to the N heaps.
     69When KTs $\le$ N, the heaps are uncontended.
     70When KTs $>$ N, the heaps are contended.
     71By adjusting N, this approach reduces storage at the cost of speed due to contention.
     72In all cases, a thread acquires/releases a lock, contended or uncontended.
     73\begin{cquote}
     74\centering
     75\input{AllocDS1}
     76\end{cquote}
     77Problems: the allocator needs to know when a KT is created and destroyed in order to assign/unassign a heap to it.
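To make the structure concrete, the following C sketch (hypothetical names and constants, not the actual uHeapLmmm code) shows N lock-protected heaps, a simple KT-to-heap mapping, and a bump allocation that always takes its heap's lock, contended or not.
\begin{lstlisting}
// Hypothetical sketch of Design 1: N shared heaps, each protected by its own lock.
#include <pthread.h>
#include <stddef.h>

#define NHEAPS 8                           // N, chosen to trade storage for contention

typedef struct {
    pthread_mutex_t lock;                  // taken on every allocation, contended or not
    char * bump_start;                     // current bump pointer into this heap's reserve
    char * bump_end;                       // end of this heap's share of the sbrk area
} Heap;

static Heap heaps[NHEAPS];                 // fixed number of heaps, initialized at startup (not shown)

// Assign a kernel thread to one of the N heaps, e.g., by hashing its id.
static Heap * my_heap( unsigned long kt_id ) {
    return &heaps[ kt_id % NHEAPS ];       // KTs > N  =>  several KTs share a heap
}

static void * heap_alloc( Heap * h, size_t size ) {
    void * p = NULL;
    pthread_mutex_lock( &h->lock );
    if ( h->bump_start + size <= h->bump_end ) {
        p = h->bump_start;                 // bump allocation from the heap-private reserve
        h->bump_start += size;
    }
    pthread_mutex_unlock( &h->lock );
    return p;                              // NULL => refill the reserve from sbrk (not shown)
}
\end{lstlisting}
Adjusting @NHEAPS@ in such a scheme trades memory for contention, exactly as described above.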
     78
     79\paragraph{Design 2: Centralized}
    2080One heap, but lower bucket sizes are N-shared across KTs.
    2181This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
     
    3090When no thread is assigned a bucket number, its free storage is unavailable. All KTs will contend for one lock on the sbrk area for their initial allocations (before the free lists get populated).
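One possible shape of this design, sketched below in C with hypothetical names and constants: the small, hot bucket sizes are replicated N ways so KTs map onto different copies, while the remaining buckets and the sbrk area stay single and lock-protected.
\begin{lstlisting}
// Hypothetical sketch of Design 2: one heap, small buckets N-shared across KTs.
#include <pthread.h>
#include <stddef.h>

#define NSHARDS       4       // sharding factor for the hot, small buckets
#define SMALL_BUCKETS 8       // bucket sizes below 512 bytes (most requests land here)
#define LARGE_BUCKETS 24      // remaining, rarely used bucket sizes

typedef struct {
    pthread_mutex_t lock;
    void * free_list;                          // free objects of this bucket's size
} Bucket;

typedef struct {
    Bucket small[SMALL_BUCKETS][NSHARDS];      // replicated to reduce contention
    Bucket large[LARGE_BUCKETS];               // single copy, shared by all KTs
    pthread_mutex_t sbrk_lock;                 // one lock for the shared sbrk bump area
} CentralHeap;

static CentralHeap heap;

// Pick the copy of a small bucket for the calling KT.
static Bucket * small_bucket( int bucket_no, unsigned long kt_id ) {
    return &heap.small[bucket_no][ kt_id % NSHARDS ];
}
\end{lstlisting}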
    3191
    32 \paragraph{Design 2: Decentralized N Heaps}
    33 Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
    34 Kernel threads (KT) are assigned to the N heaps.
    35 When KTs $\le$ N, the heaps are uncontended.
    36 When KTs $>$ N, the heaps are contended.
    37 By adjusting N, this approach reduces storage at the cost of speed due to contention.
    38 In all cases, a thread acquires/releases a lock, contended or uncontended.
    39 \begin{cquote}
    40 \centering
    41 \input{AllocDS1}
    42 \end{cquote}
    43 Problems: the allocator needs to know when a KT is created and destroyed in order to assign/unassign a heap to it.
    44 
    45 \paragraph{Design 3: Decentralized Per-thread Heaps}
    46 Design 3 is similar to Design 2, but instead of an M:N model, it uses a 1:1 model. So, instead of having N heaps shared among M KTs, Design 3 has one heap for each KT.
    47 Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area.
    48 Each KT will have its own exclusive thread-local heap. The heap will be uncontended between KTs regardless of how many KTs have been created.
    49 Operations on the @sbrk@ area will still be protected by locks.
    50 %\begin{cquote}
    51 %\centering
    52 %\input{AllocDS3} FIXME add figs
    53 %\end{cquote}
    54 Problems: we cannot destroy a heap when its KT exits because dynamic objects have ownership and are returned to the heap that created them when the program frees them; every dynamic object points back to its owner heap. If a thread A creates an object O, passes it to another thread B, and then A exits, O must still return to A's heap when B frees it. Hence, A's heap must be preserved for the lifetime of the whole program, as there might be objects still in use by other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT.
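The following C sketch (hypothetical names) illustrates the 1:1 model and why heaps must outlive their KTs: each KT has an exclusive thread-local heap, and every object's header points back to its owner heap so a free from any thread can route the object home.
\begin{lstlisting}
// Hypothetical sketch of Design 3: one thread-local heap per kernel thread.
#include <stddef.h>

typedef struct Heap Heap;

typedef struct {
    Heap * owner;                  // every dynamic object points back to its owner heap
    size_t size;
} ObjectHeader;

struct Heap {
    char * bump_start, * bump_end; // exclusive bump area: no locking needed by the owner
    void * free_list;              // objects freed by the owner KT itself
};

static __thread Heap * this_heap;  // 1:1 mapping, created when the KT starts

static void my_free( void * addr ) {
    ObjectHeader * hdr = (ObjectHeader *)addr - 1;
    Heap * owner = hdr->owner;     // may belong to a KT that has already exited, so
                                   // owner heaps must be preserved for the program's lifetime
    (void)owner;                   // ... push onto the owner's (lock-protected) remote list ...
}
\end{lstlisting}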
    55 
    56 \paragraph{Design 4: Decentralized Per-CPU Heaps}
    57 Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU.
    58 Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area.
    59 Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be serviced by the heap of the CPU on which the thread is currently running.
    60 Each CPU will have its own exclusive heap. Just like Design 3 (FIXME cite), the heap will be uncontended between KTs regardless of how many KTs have been created.
    61 Operations on the @sbrk@ area will still be protected by locks.
    62 To deal with preemption during a dynamic memory operation, librseq (FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences make it possible to re-run a critical section and undo its writes if a preemption happens during the critical section's execution.
    63 %\begin{cquote}
    64 %\centering
    65 %\input{AllocDS4} FIXME add figs
    66 %\end{cquote}
    67 
    68 Problems: this approach was slower than the per-thread model. Also, librseq does not provide restartable sequences that can detect preemption in a user-level threading system, which is important to us as \CFA (FIXME cite) has its own threading system that we want to support.
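As a rough illustration only (hypothetical names; the CPU lookup uses @sched_getcpu@ and the sketch omits librseq's restartable sequences, which the design actually relies on), a per-CPU allocation path might look as follows.
\begin{lstlisting}
// Hypothetical sketch of Design 4: one heap per CPU, selected at allocation time.
#define _GNU_SOURCE
#include <sched.h>                        // sched_getcpu()
#include <stddef.h>

#define MAX_CPUS 64

typedef struct {
    char * bump_start, * bump_end;        // per-CPU bump area taken from sbrk
} CpuHeap;

static CpuHeap cpu_heaps[MAX_CPUS];

static void * cpu_alloc( size_t size ) {
    int cpu = sched_getcpu();             // CPU the thread is currently running on
    if ( cpu < 0 ) cpu = 0;               // fall back on error
    CpuHeap * h = &cpu_heaps[cpu];
    // NOTE: without restartable sequences, the thread could migrate to another CPU
    // between the lookup above and the bump below, so this naive version is unsafe;
    // the design relies on librseq to restart (or undo) the critical section.
    if ( h->bump_start + size > h->bump_end ) return NULL;  // refill from sbrk (not shown)
    void * p = h->bump_start;
    h->bump_start += size;
    return p;
}
\end{lstlisting}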
    69 
    70 Out of the four designs, Design 3 was chosen for the following reasons.
    71 \begin{itemize}
    72 \item
    73 In general, the decentralized designs are better than the centralized design because their concurrency is better across all bucket sizes: Design 1 shards only a few buckets of selected sizes, while the decentralized designs shard all the buckets, i.e., the whole heap, with the addition of sharding the sbrk area. So Design 1 was eliminated.
    74 \item
    75 Design 2 was eliminated because it has a possibility of contention when KT $>$ N, while Designs 3 and 4 have no contention in any scenario.
    76 \item
    77 Design 4 was eliminated because it was slower than Design 3 and provided no way to achieve user-threading safety using librseq. We had to use \CFA interruption handling to achieve user-threading safety, which has some cost. Design 4 was already slower than Design 3; adding the cost of interruption handling on top of that would have made it even slower.
    78 \end{itemize}
    79 
     92Out of the two designs, Design 1 was chosen because its concurrency is better across all bucket sizes: Design 2 shards only a few buckets of selected sizes, while Design 1 shards all the buckets, i.e., the whole heap, with the addition of sharding the sbrk area.
    8093
    8194\subsection{Advantages of distributed design}
    82 
    83 The distributed design of uHeap is concurrent, allowing it to work in multi-threaded applications.
    84 
    85 Some key benefits of the distributed design of uHeap are as follows:
    86 
    87 \begin{itemize}
    88 \item
    89 The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as a bump-allocation reserve. The call to sbrk will be protected using locks, but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned.
    90 \item
    91 Low or almost no contention on heap resources.
     95The distributed design of uHeapLmmm is concurrent, allowing it to work in multi-threaded applications.
     96
     97Some key benefits of the distributed design of uHeapLmmm are as follows:
     98
     99\begin{itemize}
     100\item
     101The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as a bump-allocation reserve. The lock on bump allocation (on memory taken from sbrk) will only be contended if KTs $>$ N. Contention on the sbrk area is less likely, as it only happens when the heaps assigned to two KTs run short of bump-allocation reserve simultaneously (see the bump-allocation sketch after this list).
     102\item
     103N heaps are created at the start of the program and destroyed at the end of the program. When a KT is created, we only assign it to one of the heaps. When a KT is destroyed, we only dissociate it from the assigned heap but do not destroy that heap; the heap goes back to our pool of heaps, ready to be used by a new KT. And if that heap was shared among multiple KTs (as in the case of KTs $>$ N), then, on deletion of one KT, that heap is still in use by the other KTs. This prevents creation and deletion of heaps at run-time, as heaps are reusable, which helps keep the memory footprint low.
    92104\item
    93105It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
     
    96108\end{itemize}
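To make the first benefit concrete, the following C sketch (hypothetical names and constants) shows per-heap bump allocation in which only the call that grows the sbrk area takes the shared lock.
\begin{lstlisting}
// Hypothetical sketch: heap-private bump allocation, with sbrk itself behind one lock.
#include <pthread.h>
#include <unistd.h>                              // sbrk()
#include <stddef.h>

#define RESERVE (1 << 20)                        // how much a heap grabs from sbrk at a time

typedef struct {
    char * bump_start, * bump_end;               // this heap's private bump-allocation reserve
} Heap;

static pthread_mutex_t sbrk_lock = PTHREAD_MUTEX_INITIALIZER;

static void * bump_alloc( Heap * h, size_t size ) {
    if ( h->bump_start + size > h->bump_end ) {  // reserve exhausted: refill from sbrk
        pthread_mutex_lock( &sbrk_lock );        // the only shared lock on this path
        h->bump_start = sbrk( RESERVE );
        pthread_mutex_unlock( &sbrk_lock );
        if ( h->bump_start == (void *)-1 ) return NULL;
        h->bump_end = h->bump_start + RESERVE;
    }
    void * p = h->bump_start;                    // uncontended: the reserve is heap-private
    h->bump_start += size;
    return p;
}
\end{lstlisting}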
    97109
    98 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    99 
    100 \section{uHeap Structure}
    101 
    102 As described in (FIXME cite 2.4), uHeap uses the following features of multi-threaded memory allocators.
    103 \begin{itemize}
    104 \item
    105 uHeap has multiple heaps without a global heap and uses a 1:1 model. (FIXME cite 2.5 1:1 model)
    106 \item
    107 uHeap uses object ownership. (FIXME cite 2.5.2)
    108 \item
    109 uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead, each dynamic object allocated by uHeap has a header that contains bookkeeping information (see the header sketch after this list).
    110 \item
    111 Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using the sbrk() call. (FIXME cite 2.7)
    112 \item
    113 Unless a heap is freeing an object that is owned by another thread's heap, or the heap is using the sbrk() system call, uHeap is mostly lock-free, which eliminates most of the contention on shared resources. (FIXME cite 2.8)
    114 \end{itemize}
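As an illustration of the header-per-object approach used instead of object containers, the following C sketch shows the kind of bookkeeping such a header might carry; the exact fields and layout are hypothetical, not uHeap's actual header.
\begin{lstlisting}
// Hypothetical per-object header: bookkeeping stored with each allocation.
#include <stddef.h>
#include <stdbool.h>

typedef struct FreeBucket FreeBucket;   // the owner heap's bucket for this object's size

typedef struct {
    union {
        FreeBucket * owner_bucket;      // live object: where to return it on free()
        void * next_free;               // free object: link in the bucket's free list
    };
    size_t size;                        // allocated size
    bool   mmapped;                     // true if the object came from mmap, not sbrk
} ObjectHeader;

// The user pointer sits immediately after the header.
static inline ObjectHeader * header_of( void * addr ) {
    return (ObjectHeader *)addr - 1;
}
\end{lstlisting}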
    115 
    116 As uHeap uses a heap-per-thread model to reduce contention on heap resources, we manage a list of heaps (the heap-list) that can be used by threads. The list is empty at the start of the program. When a kernel thread (KT) is created, we check whether the heap-list is empty. If it is not empty, a heap is removed from the heap-list and given to the new KT to use exclusively; if it is empty, a new heap object is created in dynamic memory and given to the new KT to use exclusively. When a KT exits, its heap is not destroyed; instead, the heap is put on the heap-list, ready to be reused by new KTs.
    117 
    118 This reduces the memory footprint, as the objects on the free-lists of a KT that has exited can be reused by a new KT. Also, we preserve all the heaps that were created during the lifetime of the program until the end of the program. uHeap uses object ownership, where an object is freed to the free-buckets of the heap that allocated it. Even after a KT A has exited, its heap has to be preserved, as there might be objects still in use by other threads that were initially allocated by A and then passed to other threads.
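A minimal C sketch of this heap-list mechanism (hypothetical names): on KT startup, a heap is popped from the heap-list or newly created; on KT exit, the heap is pushed back rather than destroyed.
\begin{lstlisting}
// Hypothetical sketch: reusing the heaps of exited KTs through a global heap-list.
#include <pthread.h>
#include <stdlib.h>

typedef struct Heap {
    struct Heap * next;                             // link in the heap-list
    // ... free-buckets, bump area, away lists ...
} Heap;

static Heap * heap_list = NULL;                     // empty at program start
static pthread_mutex_t heap_list_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread Heap * this_heap;                   // each KT's exclusive heap

static void kt_startup( void ) {
    pthread_mutex_lock( &heap_list_lock );
    if ( heap_list != NULL ) {                      // reuse a heap from an exited KT
        this_heap = heap_list;
        heap_list = heap_list->next;
    } else {                                        // heap-list empty: create a new heap
        this_heap = calloc( 1, sizeof(Heap) );
    }
    pthread_mutex_unlock( &heap_list_lock );
}

static void kt_shutdown( void ) {
    pthread_mutex_lock( &heap_list_lock );
    this_heap->next = heap_list;                    // the heap is preserved, not destroyed:
    heap_list = this_heap;                          // its objects may still be in use elsewhere
    pthread_mutex_unlock( &heap_list_lock );
}
\end{lstlisting}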
    119 
    120 \begin{figure}
    121 \centering
    122 \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
    123 \caption{Heap Structure}
    124 \label{fig:heapStructureFig}
    125 \end{figure}
    126 
    127 Each heap uses segregated free-buckets, each holding free objects of a specific size. Each free-bucket has the following two lists in it:
    128 \begin{itemize}
    129 \item
    130 The free list is used when a thread frees an object that is owned by its own heap; the free list does not use any locks or atomic operations, as it is only used by the owner KT.
    131 \item
    132 The away list is used when a thread A frees an object that is owned by another KT B's heap. The object should be returned to the owner heap (B's heap), so A places the object on B's away list. The away list is lock-protected, as it is shared by all other threads (see the sketch after this list).
    133 \end{itemize}
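The two lists might be organized as in the following C sketch (hypothetical names): the free list is touched only by the owner KT and needs no synchronization, while the away list is lock-protected because any thread may push onto it.
\begin{lstlisting}
// Hypothetical sketch of a size-segregated free-bucket with a private free list
// and a lock-protected away list for frees coming from other KTs.
#include <pthread.h>
#include <stddef.h>

typedef struct FreeObject {
    struct FreeObject * next;
} FreeObject;

typedef struct {
    size_t object_size;                 // all objects in this bucket have this size
    FreeObject * free_list;             // owner KT only: no locks or atomic operations
    FreeObject * away_list;             // other KTs push here; lock-protected
    pthread_mutex_t away_lock;
} FreeBucket;

// Free an object: push it onto the private free list if the caller owns the bucket,
// otherwise onto the owner's away list.
static void bucket_free( FreeBucket * owner, int i_am_owner, void * addr ) {
    FreeObject * obj = addr;
    if ( i_am_owner ) {
        obj->next = owner->free_list;
        owner->free_list = obj;
    } else {
        pthread_mutex_lock( &owner->away_lock );
        obj->next = owner->away_list;
        owner->away_list = obj;
        pthread_mutex_unlock( &owner->away_lock );
    }
}
\end{lstlisting}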
    134 
    135 When a dynamic object of size S is requested, the thread-local heap checks whether S is greater than or equal to the mmap threshold. Any request at or above the mmap threshold is fulfilled by allocating an mmap area of that size; such requests are not allocated from the sbrk area. The value of this threshold can be changed using the mallopt routine, but the new value should not be larger than our biggest free-bucket size.
    136 
    137 Algorithm~\ref{alg:heapObjectAlloc} briefly shows how an allocation request is fulfilled.
    138 
    139 \begin{algorithm}
    140 \caption{Dynamic object allocation of size S}\label{alg:heapObjectAlloc}
    141 \begin{algorithmic}[1]
    142 \State $\textit{O} \gets \text{NULL}$
    143 \If {$S < \textit{mmap-threshold}$}
    144         \State $\textit{B} \gets (\text{smallest free-bucket} \geq S)$
    145         \If {$\textit{B's free-list is empty}$}
    146                 \If {$\textit{B's away-list is empty}$}
    147                         \If {$\textit{heap's allocation buffer} < S$}
    148                                 \State $\text{get allocation buffer using system call sbrk()}$
    149                         \EndIf
    150                         \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
    151                 \Else
    152                         \State $\textit{merge B's away-list into free-list}$
    153                         \State $\textit{O} \gets \text{pop an object from B's free-list}$
    154                 \EndIf
    155         \Else
    156                 \State $\textit{O} \gets \text{pop an object from B's free-list}$
    157         \EndIf
    158         \State $\textit{O's owner} \gets \text{B}$
    159 \Else
    160         \State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
    161 \EndIf
    162 \State \Return $\textit{O}$
    163 \end{algorithmic}
    164 \end{algorithm}
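Algorithm~\ref{alg:heapObjectAlloc} can also be read as the following C-style sketch; the helper routines are hypothetical stand-ins for the corresponding steps, not uHeap's actual functions.
\begin{lstlisting}
// Hypothetical C rendering of Algorithm 1: allocate a dynamic object of size S.
#include <stddef.h>

typedef struct FreeBucket FreeBucket;

// Steps of the algorithm, assumed to exist elsewhere in this sketch:
extern size_t       mmap_threshold;
extern FreeBucket * smallest_bucket_at_least( size_t S );       // smallest free-bucket >= S
extern int          free_list_empty( FreeBucket * B );
extern int          away_list_empty( FreeBucket * B );
extern size_t       allocation_buffer_remaining( void );
extern void         refill_allocation_buffer_with_sbrk( void ); // lock-protected sbrk call
extern void *       bump_allocate( size_t S );                  // carve the object from the buffer
extern void         merge_away_list_into_free_list( FreeBucket * B );
extern void *       pop_free_list( FreeBucket * B );
extern void         set_owner( void * O, FreeBucket * B );      // header records the owner bucket
extern void *       mmap_allocate( size_t S );                  // large request: separate mmap area

void * heap_malloc( size_t S ) {
    void * O = NULL;
    if ( S < mmap_threshold ) {
        FreeBucket * B = smallest_bucket_at_least( S );
        if ( free_list_empty( B ) ) {
            if ( away_list_empty( B ) ) {
                if ( allocation_buffer_remaining() < S )
                    refill_allocation_buffer_with_sbrk();
                O = bump_allocate( S );
            } else {
                merge_away_list_into_free_list( B );
                O = pop_free_list( B );
            }
        } else {
            O = pop_free_list( B );
        }
        set_owner( O, B );
    } else {
        O = mmap_allocate( S );
    }
    return O;
}
\end{lstlisting}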
    165 
     110FIX ME: Cite performance comparison of the two heap designs if required
    166111
    167112%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    168113
    169114\section{Added Features and Methods}
    170 To improve the uHeap allocator (FIX ME: cite uHeap) interface and make it more user-friendly, we added a few more routines to the C allocator. Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.
     115To improve the uHeapLmmm allocator (FIX ME: cite uHeapLmmm) interface and make it more user-friendly, we added a few more routines to the C allocator. Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.
    171116
    172117\subsection{C Interface}
     
    262207@addr@: the address of the currently allocated dynamic object.
    263208\end{itemize}
    264 @malloc_alignment@ returns the alignment of the given dynamic object. On failure, it returns the value of the default alignment of the uHeap allocator.
     209@malloc_alignment@ returns the alignment of the given dynamic object. On failure, it returns the value of the default alignment of the uHeapLmmm allocator.
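A small usage sketch follows; the exact prototype of @malloc_alignment@ is not shown in this excerpt, so the @size_t@-returning declaration below is an assumption.
\begin{lstlisting}
// Usage sketch for malloc_alignment (the prototype below is assumed, not quoted).
#include <stdio.h>
#include <stdlib.h>

extern size_t malloc_alignment( void * addr );   // assumed signature

int main( void ) {
    int * p = malloc( sizeof(int) );
    // Query the alignment the allocator gave this object; on failure the routine
    // is described as returning the allocator's default alignment.
    printf( "alignment of p: %zu\n", malloc_alignment( p ) );
    free( p );
    return 0;
}
\end{lstlisting}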
    265210
    266211\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
     
    302247
    303248\subsection{\CFA Malloc Interface}
    304 We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not in our standalone uHeap allocator, as they use some features that are only provided by \CFA and not by C. This makes the allocator even more usable for programmers.
     249We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not in our standalone uHeapLmmm allocator, as they use some features that are only provided by \CFA and not by C. This makes the allocator even more usable for programmers.
    305250\CFA makes it possible to know the return type of a call to the allocator. So, in these added routines, we removed the object-size parameter, as the allocator can calculate the size of the object from the return type.
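For example, an illustrative snippet based on this description (not taken verbatim from the allocator's documentation): in \CFA the object size is inferred from the type expected at the call site, so the size argument disappears.
\begin{lstlisting}
// C interface: the programmer states the size explicitly.
int * ip = malloc( sizeof(int) );
// CFA interface: no size parameter; the allocator derives the object size
// from the return type expected at the call site.
int * ip2   = malloc();      // space for one int
double * dp = malloc();      // space for one double
\end{lstlisting}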
    306251
     
    433378
    434379\subsection{Alloc Interface}
    435 In addition to improving the allocator interface both for \CFA and for our standalone allocator uHeap in C, we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation.
     380In addition to improving the allocator interface both for \CFA and for our standalone allocator uHeapLmmm in C, we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation.
    436381This interface helps programmers in three major ways.
    437382
  • doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex

    rbf8b77e rb0d0285  
    8686\usepackage{tabularx}
    8787\usepackage{subfigure}
    88 
    89 \usepackage{algorithm}
    90 \usepackage{algpseudocode}
    9188
    9289% Hyperlinks make it very easy to navigate an electronic document.