Timestamp:
Jul 8, 2021, 4:09:34 PM (3 years ago)
Author:
m3zulfiq <m3zulfiq@…>
Branches:
ADT, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
6ff08d8
Parents:
b1a2c4a
Message:

Added allocator design objectives

File:
1 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    rb1a2c4a ra953c2e3  
\subsection{Design philosophy}
The objective of uHeapLmmm's new design was to fulfill the following requirements:
\begin{itemize}
\item It should be concurrent, so it can be used in multi-threaded programs.
\item As much as possible, it should avoid global locks on resources shared across all threads.
\item Its performance (FIX ME: cite performance benchmarks) should be comparable to commonly used allocators (FIX ME: cite common allocators).
\item It should be a lightweight memory allocator.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Background and previous design of uHeapLmmm}
uHeapLmmm was originally designed by X in X (FIX ME: add original author after confirming with Peter).
(FIX ME: make and add figure of previous design with description)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Distributed design of uHeapLmmm}
uHeapLmmm's design was reviewed and changed to fulfill the new requirements (FIX ME: cite allocator philosophy). For this purpose, the following two designs of uHeapLmmm were proposed:

\paragraph{Design 1: Decentralized}
Fixed number of heaps: shard the heap into N heaps, each with a bump-area allocated from the @sbrk@ area.
Kernel threads (KT) are assigned to the N heaps.
When KTs $\le$ N, the heaps are uncontended.
When KTs $>$ N, the heaps are contended.
By adjusting N, this approach reduces storage at the cost of speed due to contention.
In all cases, a thread acquires/releases a lock, contended or uncontended.
\begin{cquote}
\centering
\input{AllocDS1}
\end{cquote}
Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT.
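To make the sharding concrete, the following is a minimal C sketch of this decentralized layout; it is illustrative only, not the actual uHeapLmmm code, and the names (@NHEAPS@, @kt_malloc@), the thread-id hash used for the KT-to-heap mapping, and the refill size are assumptions. Free-list reuse, alignment, and error handling are elided.
\begin{verbatim}
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>                      // sbrk

#define NHEAPS 16                        // N: more heaps => less contention, more storage
#define RESERVE (1 << 20)                // bump-area reserve taken from sbrk per refill

typedef struct Heap {
    pthread_mutex_t lock;                // per-heap lock; uncontended while KTs <= N
    char * bump, * end;                  // bump pointer and end of this heap's reserve
} Heap;

static Heap heaps[NHEAPS];               // fixed shards, created at program start

static void heaps_init( void ) {         // call once before any allocation
    for ( int i = 0; i < NHEAPS; i += 1 ) pthread_mutex_init( &heaps[i].lock, NULL );
}

static void * kt_malloc( size_t size ) {
    Heap * h = &heaps[(uintptr_t)pthread_self() % NHEAPS];  // KT -> heap mapping
    pthread_mutex_lock( &h->lock );      // always acquired; contended only when KTs > N
    if ( h->bump == NULL || (size_t)(h->end - h->bump) < size ) {
        h->bump = sbrk( RESERVE );       // refill this heap's reserve (error handling elided)
        h->end = h->bump + RESERVE;
    }
    void * p = h->bump;                  // bump allocation (alignment elided)
    h->bump += size;
    pthread_mutex_unlock( &h->lock );
    return p;
}
\end{verbatim}
In this sketch each heap refills its own bump area from @sbrk@, so the only serialization shared by all heaps is inside @sbrk@ itself.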
\paragraph{Design 2: Centralized}
One heap, but lower bucket sizes are N-shared across KTs.
This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
When KTs $\le$ N, the important bucket sizes are uncontended.
When KTs $>$ N, the free buckets are contended.
Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
\begin{cquote}
\centering
\input{AllocDS2}
\end{cquote}
Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
When no thread is assigned a bucket number, its free storage is unavailable.
All KTs will contend for the single lock on the sbrk area for their initial allocations (before the free lists are populated).
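A corresponding C sketch of the centralized design is given below; again it is illustrative only, the names and the shard-selection rule are assumptions, a single @large@ bucket stands in for the remaining size classes, and the @sbrk@ fall-back path is elided.
\begin{verbatim}
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define NSHARDS 8                        // N copies of each small bucket
#define NSMALL 6                         // size classes 16, 32, 64, 128, 256, 512 bytes

typedef struct Bucket {
    pthread_mutex_t lock;
    void * free;                         // singly-linked free list of blocks
} Bucket;

static Bucket small[NSMALL][NSHARDS];    // sharded buckets for requests <= 512 bytes
static Bucket large;                     // single shared path for the remaining ~5%

static void buckets_init( void ) {       // call once before any allocation
    for ( int c = 0; c < NSMALL; c += 1 )
        for ( int s = 0; s < NSHARDS; s += 1 )
            pthread_mutex_init( &small[c][s].lock, NULL );
    pthread_mutex_init( &large.lock, NULL );
}

static int size_class( size_t size ) {   // 16 -> 0, 32 -> 1, ..., 512 -> 5
    int c = 0;
    for ( size_t s = 16; s < size; s <<= 1 ) c += 1;
    return c;
}

static void * pop( void ** list ) {      // next pointer is stored in the free block itself
    void * p = *list;
    if ( p != NULL ) *list = *(void **)p;
    return p;
}

static void * bucket_malloc( size_t size ) {
    Bucket * b = size <= 512
        ? &small[size_class( size )][(uintptr_t)pthread_self() % NSHARDS]
        : &large;                        // rare path: one shared lock
    pthread_mutex_lock( &b->lock );      // contended only when KTs > N for small sizes
    void * p = pop( &b->free );          // NULL => caller falls back to the sbrk path
    pthread_mutex_unlock( &b->lock );
    return p;
}
\end{verbatim}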
Of the two designs, Design 1 was chosen because its concurrency is better across all bucket sizes: Design 2 shards only a few buckets of selected sizes, whereas Design 1 shards all the buckets. Design 1 shards the whole heap, which contains all the buckets, and additionally shards the sbrk area.
\subsection{Advantages of distributed design}
The distributed design of uHeapLmmm is concurrent, allowing it to work in multi-threaded applications.

Some key benefits of the distributed design of uHeapLmmm are as follows:
\begin{itemize}
\item
Bump allocation is concurrent because the memory taken from sbrk is sharded across all heaps as bump-allocation reserves. The lock on a heap's bump-allocation reserve is only contended when KTs $>$ N. Contention on the sbrk area itself is unlikely because it only occurs when the heaps assigned to two KTs exhaust their bump-allocation reserves simultaneously.
\item
N heaps are created at the start of the program and destroyed at its end. When a KT is created, it is simply assigned to one of the heaps; when a KT is destroyed, it is only dissociated from its assigned heap, and the heap itself is not destroyed. The heap returns to the pool of heaps, ready to be used by a new KT. If the heap was shared among multiple KTs (as when KTs $>$ N), it remains in use by the other KTs after one of them is deleted. Because heaps are reusable, no heaps are created or destroyed at run-time, which keeps the memory footprint low (see the sketch following this list).
\item
It is possible to use sharing and stealing techniques to share/find unused storage when a free list is unused or empty.
\item
The distributed design avoids unnecessary locks on resources shared across all KTs.
\end{itemize}
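The following C sketch illustrates the pool-of-heaps reuse described in the list above; it is not the uHeapLmmm implementation, and the least-loaded assignment policy, the @users@ counter, and the names @kt_attach@/@kt_detach@ are illustrative assumptions.
\begin{verbatim}
#include <pthread.h>

#define NHEAPS 16

typedef struct Heap {
    int users;                           // number of KTs currently mapped to this heap
    // ... per-heap free lists and bump-allocation reserve
} Heap;

static Heap pool[NHEAPS];                // created once at program start, never destroyed
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

static Heap * kt_attach( void ) {        // on KT creation: pick the least-loaded heap
    pthread_mutex_lock( &pool_lock );
    Heap * h = &pool[0];
    for ( int i = 1; i < NHEAPS; i += 1 )
        if ( pool[i].users < h->users ) h = &pool[i];
    h->users += 1;                       // a heap may be shared when KTs > N
    pthread_mutex_unlock( &pool_lock );
    return h;
}

static void kt_detach( Heap * h ) {      // on KT destruction: dissociate only
    pthread_mutex_lock( &pool_lock );
    h->users -= 1;                       // the heap and any storage it holds stay in the pool
    pthread_mutex_unlock( &pool_lock );
}
\end{verbatim}
Because attaching and detaching a KT only adjusts a counter, KT creation and destruction never create or destroy heap structures, which is what keeps the memory footprint low.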
FIX ME: Cite performance comparison of the two heap designs if required
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
Why did we need it?
The added benefits.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Following is added by Peter

\noindent
====================

\newpage
\paragraph{Design 1: Decentralized}
Fixed number of heaps: shard the heap into N heaps, each with a bump-area allocated from the @sbrk@ area.
Kernel threads (KT) are assigned to the N heaps.
When KTs $\le$ N, the heaps are uncontended.
When KTs $>$ N, the heaps are contended.
By adjusting N, this approach reduces storage at the cost of speed due to contention.
In all cases, a thread acquires/releases a lock, contended or uncontended.
\begin{cquote}
\centering
\input{AllocDS1}
\end{cquote}
Problems: need to know when a KT is created and destroyed to know when to create/delete the KT's heap.
On KT deletion, its heap's freed storage needs to be distributed somewhere.

\paragraph{Design 2: Centralized}

One heap, but lower bucket sizes are N-shared across KTs.
This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
When KTs $\le$ N, the important bucket sizes are uncontended.
When KTs $>$ N, the free buckets are contended.
Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
\begin{cquote}
\centering
\input{AllocDS2}
\end{cquote}
Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
When no thread is assigned a bucket number, its free storage is unavailable.
It is possible to use sharing and stealing techniques to share/find unused storage when a free list is unused or empty.