Timestamp: Apr 18, 2022, 9:48:33 AM (2 years ago)
Author: Peter A. Buhr <pabuhr@…>
Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children: 75cd27b
Parents: ac4476d
Message: small wording changes to intro and background chapters, continue proofreading allocator chapter
Location: doc/theses/mubeen_zulfiqar_MMath
Files: 3 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

rac4476d → r23f1065
\chapter{Allocator}

-This chapter presents a new stand-lone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
+This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.

…
hence, llheap's design is willing to use more storage to lower latency.
This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
-Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}).
+Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{c:Performance}).

% The objective of llheap's new design was to fulfill following requirements:
…

\subsection{Allocation Fastpath}
+\label{s:AllocationFastpath}

These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
…
Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.

-\noindent
Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
\end{itemize}
…
Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
+If @librseq@ had an @rseq_abort@ which:
+\begin{enumerate}
+\item
+Marks the current restartable critical-section as cancelled so it restarts when attempting to commit.
+\item
+Does nothing if there is no current restartable critical-section in progress.
+\end{enumerate}
+Then @rseq_abort@ could be called on the backside of a user-level context-switch.
+A feature similar to this idea might exist for hardware transactional-memory.
A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
-

\paragraph{1:1 model}
…
\vspace{5pt}
\noindent
-The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
+The conclusion from this design exercise is: any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
…
More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
This leaves the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
-The 1:1 model still has the serially-reusable problem with user-level threading, which is address in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
+The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.


…
no coalescing to minimize latency,
\item
-local reserved memory (pool) obtained from the operating system using @sbrk@ call,
-\item
-global reserved memory (pool) obtained from the operating system using @mmap@ call to create and reuse heaps needed by threads.
+global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
+\item
+local reserved memory (pool) per heap obtained from the global pool,
+\item
+global reserved memory (pool) obtained from the operating system using the @sbrk@ call,
+\item
+optional fast-lookup table for converting allocation requests into bucket sizes,
+\item
+optional statistic-counters table for accumulating counts of allocation operations.
\end{itemize}

…
\end{figure}

-llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores.
+llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores, which persists for the program duration.
There is a global bump-pointer to the next free heap in the array.
When this array is exhausted, another array is allocated.
-There is a global top pointer to a heap intrusive link that chain free heaps from terminated threads, where these heaps are reused by new threads.
-When statistics are turned on, there is a global top pointer to a heap intrusive link that chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps (see @malloc_stats@ \VRef{}).
-
-When a KT starts, a heap is allocated from the current array for exclusive used by the KT.
+There is a global top pointer for a heap intrusive link to chain free heaps from terminated threads.
+When statistics are turned on, there is a global top pointer for a heap intrusive link to chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps using @malloc_stats@.
+
+When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
The free heaps form a stack so hot storage is reused first.
-Preserving all heaps created during the program lifetime, solves the storage lifetime problem.
+Preserving all heaps created during the program lifetime solves the storage lifetime problem when ownership is used.
This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.

Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M bytes.
-The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system.
-Each free bucket of a specific size has following two lists:
+The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
+Each free bucket of a specific size has the following two lists:
\begin{itemize}
\item
A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
-The free objects is a stack so hot storage is reused first.
-\item
-For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operation require locking.
-The entire ownership stack be removed and become the head of the corresponding free stack, when the free stack is empty.
+The free objects form a stack so hot storage is reused first.
+\item
+For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
+When the free stack is empty, the entire ownership stack is removed and becomes the head of the corresponding free stack.
\end{itemize}

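For illustration, the following C sketch gives one plausible shape for a heap and its buckets; all names (@FreeBlock@, @Bucket@, @Heap@) and the mutex-based away-stack locking are assumptions for this sketch, not llheap's actual declarations.
\begin{lstlisting}
#include <pthread.h>
#include <stddef.h>

// Hypothetical shape of a per-KT heap and one free bucket (names/details assumed).
struct FreeBlock {
	struct FreeBlock * next;        // intrusive link chaining same-size free blocks
};
struct Bucket {
	size_t blockSize;               // fixed allocation size served by this bucket
	struct FreeBlock * freeStack;   // used only by the owner KT: no locking needed
	struct FreeBlock * awayStack;   // storage returned by other KTs (ownership mode)
	pthread_mutex_t awayLock;       // away-stack push/pop requires locking
};
struct Heap {
	struct Bucket buckets[91];      // segregated sizes, 16 bytes to 4M
	struct Heap * next;             // intrusive link for the global free/statistics lists
};
\end{lstlisting}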
Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
First, the allocation is divided into small (@sbrk@) or large (@mmap@).
+For large allocations, the storage is mapped directly from the operating system.
For small allocations, $S$ is quantized into a bucket size.
-Quantizing is performed using a binary search, using the ordered bucket array.
+Quantizing is performed using a binary search over the ordered bucket array.
An optional optimization is fast lookup $O(1)$ for sizes < 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
(Type @char@ restricts the number of bucket sizes to 256.)
-For $S$ > 64K, the binary search is used.
+For $S$ > 64K, a binary search is used.
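To make the two lookup paths concrete, here is a minimal C sketch of the quantization step; the bucket values and the reduced table size are illustrative, not llheap's actual code.
\begin{lstlisting}
#include <stddef.h>

// Illustrative 8-entry table; llheap uses 91 ordered sizes from 16 bytes to 4M,
// and a 64K lookup table for sizes < 64K.
static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128, 192, 256 };
enum { NBUCKETS = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };
static unsigned char fastLookup[257];            // fastLookup[s] = bucket index for s <= 256

static void buildFastLookup( void ) {            // fill lookup table once at startup
	unsigned int b = 0;
	for ( size_t s = 0; s <= 256; s += 1 ) {
		if ( s > bucketSizes[b] ) b += 1;        // advance to smallest bucket >= s
		fastLookup[s] = (unsigned char)b;
	}
}

static unsigned int sizeToBucket( size_t S ) {   // assumes S <= largest bucket size
	if ( S < sizeof(fastLookup) ) return fastLookup[S];   // O(1) table lookup
	unsigned int lo = 0, hi = NBUCKETS - 1;      // binary search over ordered sizes
	while ( lo < hi ) {
		unsigned int mid = (lo + hi) / 2;
		if ( bucketSizes[mid] < S ) lo = mid + 1; else hi = mid;
	}
	return lo;                                   // smallest bucket with size >= S
}
\end{lstlisting}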
Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
…
\end{enumerate}

-\begin{algorithm}
+\begin{figure}
+\vspace*{-10pt}
+\begin{algorithm}[H]
+\small
\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
\begin{algorithmic}[1]
\State $\textit{O} \gets \text{NULL}$
-\If {$S < \textit{mmap-threshhold}$}
+\If {$S \geq \textit{mmap-threshold}$}
+	\State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
+\Else
	\State $\textit{B} \gets \text{smallest free-bucket} \geq S$
	\If {$\textit{B's free-list is empty}$}
…
	\EndIf
	\State $\textit{O's owner} \gets \text{B}$
-\Else
-	\State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
\EndIf
\State $\Return \textit{ O}$
…
\end{algorithm}

-Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$.
-
-\begin{algorithm}[h]
-\caption{Dynamic object free at address $A$}\label{alg:heapObjectFree}
-%\begin{algorithmic}[1]
-%\State write this algorithm
-%\end{algorithmic}
+\vspace*{-15pt}
+\begin{algorithm}[H]
+\small
+\caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn}
+\begin{algorithmic}[1]
+\If {$\textit{A mapped allocation}$}
+	\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
+\Else
+	\State $\text{B} \gets \textit{A's owner}$
+	\If {$\textit{B is thread-local heap's bucket}$}
+		\State $\text{push A to B's free-list}$
+	\Else
+		\State $\text{push A to B's away-list}$
+	\EndIf
+\EndIf
+\end{algorithmic}
\end{algorithm}

+\vspace*{-15pt}
+\begin{algorithm}[H]
+\small
+\caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn}
+\begin{algorithmic}[1]
+\If {$\textit{A mapped allocation}$}
+	\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
+\Else
+	\State $\text{B} \gets \textit{A's owner}$
+	\If {$\textit{B is thread-local heap's bucket}$}
+		\State $\text{push A to B's free-list}$
+	\Else
+		\State $\text{C} \gets \textit{thread-local heap's bucket with same size as B}$
+		\State $\text{push A to C's free-list}$
+	\EndIf
+\EndIf
+\end{algorithmic}
+\end{algorithm}
+\end{figure}
+
+Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership.
+First, the address is divided into small (@sbrk@) or large (@mmap@).
+For large allocations, the storage is unmapped back to the operating system.
+For small allocations, the bucket associated with the request size is retrieved.
+If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
+If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack.
+
+Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the de-allocation (free) outline for an object at address $A$ without ownership.
+The algorithm is the same as for ownership except if the bucket is not local to the thread.
+Then the corresponding bucket of the owner thread is computed for the deallocating thread, and the allocation is pushed onto the deallocating thread's bucket.
+
+Finally, the llheap design funnels \label{p:FunnelRoutine} all allocation/deallocation operations through the routines @malloc@/@free@, which are the only routines to directly access and manage the internal data structures of the heap.
+Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated.
+This design simplifies heap-management code during development and maintenance.
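As a sketch of this funnel composition, a @calloc@-style operation might be layered over @malloc@ as follows; the routine name and the omitted overflow check mark this as illustrative, not llheap's literal source.
\begin{lstlisting}
#include <stdlib.h>
#include <string.h>

// Sketch of the funnel composition for calloc (simplified; no overflow check).
void * calloc_sketch( size_t nmemb, size_t size ) {
	size_t total = nmemb * size;      // production code must detect nmemb * size overflow
	void * addr = malloc( total );    // the single funnel point into the heap
	if ( addr != NULL ) {
		memset( addr, 0, total );     // zero fill
		// llheap would also record the zero-filled property in the object header here
	}
	return addr;
}
\end{lstlisting}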
+
+
+\subsection{Alignment}
+
+All dynamic memory allocations must have a minimum storage alignment for the contained object(s).
+Often the minimum memory alignment, M, is the bus width (32 or 64-bit), the largest register (double, long double), the largest atomic instruction (DCAS), or vector data (MMX).
+In general, the minimum storage alignment is an 8/16-byte boundary on 32/64-bit computers.
+For consistency, the object header is normally aligned at this same boundary.
+Larger alignments must be a power of 2, such as page alignment (4/8K).
+Any alignment request, N, $\le$ the minimum alignment is handled as a normal allocation with minimal alignment.
+
+For alignments greater than the minimum, the obvious approach for aligning to address @A@ is: compute the next address that is a multiple of @N@ after the current end of the heap, @E@, plus room for the header before @A@ and the size of the allocation after @A@, moving the end of the heap to @E'@.
+\begin{center}
+\input{Alignment1}
+\end{center}
+The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations.
+This approach is also valid within any sufficiently large free block, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
+In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation.
+However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object.
+As well, it does not work for large allocations, where many memory allocators switch from program @sbrk@ to operating-system @mmap@.
+The reason is that @mmap@ only starts on a page boundary, and it is difficult to reuse the storage before the alignment boundary for other requests.
+Finally, this approach is incompatible with allocator designs that funnel allocation requests through @malloc@ as it directly manipulates management information within the allocator to optimize the space/time of a request.
+
+Instead, llheap alignment is accomplished by making a \emph{pessimistic} allocation request for sufficient storage to ensure that \emph{both} the alignment and size request are satisfied, \eg:
+\begin{center}
+\input{Alignment2}
+\end{center}
+The amount of storage necessary is @alignment - M + size@, which ensures there is an address, @A@, after the storage returned from @malloc@, @P@, that is a multiple of @alignment@ followed by sufficient storage for the data object.
+The approach is pessimistic because if @P@ already has the correct alignment @N@, the initial allocation has already requested sufficient space to move to the next multiple of @N@.
+For this special case, there are @alignment - M@ bytes of unused storage after the data object, which subsequently can be used by @realloc@.
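The pessimistic computation can be written as a small C sketch, assuming a minimum alignment @M@ of 16 bytes; note a real @free( A )@ also needs the fake-header mechanism described below to recover @P@.
\begin{lstlisting}
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

// Sketch of the pessimistic alignment computation (illustrative only).
void * memalign_sketch( size_t alignment, size_t size ) {
	enum { M = 16 };                                   // assumed minimum alignment
	char * P = malloc( alignment - M + size );         // pessimistic request
	if ( P == NULL ) return NULL;
	uintptr_t A = ((uintptr_t)P + alignment - 1) & ~(uintptr_t)(alignment - 1);
	return (void *)A;                                  // first multiple of alignment >= P
}
\end{lstlisting}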
+
+Note, the address returned is @A@, which is subsequently returned to @free@.
+However, to correctly free the allocated object, the value @P@ must be computable, since that is the value generated by @malloc@ and returned within @memalign@.
+Hence, there must be a mechanism to detect when @P@ $\neq$ @A@ and how to compute @P@ from @A@.
+
+The llheap approach uses two headers:
+the \emph{original} header associated with a memory allocation from @malloc@, and a \emph{fake} header within this storage before the alignment boundary @A@, which is returned from @memalign@, \eg:
+\begin{center}
+\input{Alignment2Impl}
+\end{center}
+Since @malloc@ has a minimum alignment of @M@, @P@ $\neq$ @A@ only holds for alignments greater than @M@.
+When @P@ $\neq$ @A@, the minimum distance between @P@ and @A@ is @M@ bytes, due to the pessimistic storage-allocation.
+Therefore, there is always room for an @M@-byte fake header before @A@.
+
+The fake header must supply an indicator to distinguish it from a normal header and the location of address @P@ generated by @malloc@.
+This information is encoded as an offset from A to P and the initial alignment (discussed in \VRef{s:ReallocStickyProperties}).
+To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data.
+\begin{center}
+\input{FakeHeader}
+\end{center}
+
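A schematic C encoding of the fake header and the recovery of @P@ might look as follows, where the field names and exact bit layout are assumptions for illustration.
\begin{lstlisting}
#include <stdint.h>

// Hypothetical fake-header encoding (field names and bit layout assumed).
struct FakeHeader {
	uint32_t alignment;                                   // low bit set marks a fake header
	uint32_t offset;                                      // bytes from A back to P
};

static void * startOfAllocation( void * A ) {             // recover P when freeing A
	struct FakeHeader * fh = (struct FakeHeader *)A - 1;  // fake header sits just before A
	if ( fh->alignment & 1 )                              // fake header => P = A - offset
		return (char *)A - fh->offset;
	return A;                                             // normal header => P == A
}
\end{lstlisting}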
+
+\subsection{\lstinline{realloc} and Sticky Properties}
+\label{s:ReallocStickyProperties}
+
+Allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data, rather than performing the following steps manually.
+\begin{flushleft}
+\begin{tabular}{ll}
+\multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\
+\begin{lstlisting}
+T * naddr = realloc( oaddr, newSize );
+
+
+
+\end{lstlisting}
+&
+\begin{lstlisting}
+T * naddr = (T *)malloc( newSize ); $\C[2.4in]{// new storage}$
+memcpy( naddr, addr, oldSize );  $\C{// copy old bytes}$
+free( addr );			$\C{// free old storage}$
+addr = naddr;			$\C{// change pointer}\CRT$
+\end{lstlisting}
+\end{tabular}
+\end{flushleft}
+The realloc pattern leverages available storage at the end of an allocation due to bucket sizes, possibly eliminating a new allocation and copying.
+Unfortunately, this pattern is not used often enough to reduce storage-management costs.
+In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc@, so even the initial @malloc@ can be a @realloc@ for consistency in the pattern.
+
+The hidden problem for this pattern is the effect of zero fill and alignment with respect to reallocation.
+Are these properties transient or persistent (``sticky'')?
+For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, respectively, what happens when those allocations are given to @realloc@ to change size?
+That is, if @realloc@ logically extends storage into unused bucket space or allocates new storage to satisfy a size change, are initial allocation properties preserved?
+Currently, allocation properties are not preserved, so subsequent use of @realloc@ storage may cause inefficient execution or errors due to lack of zero fill or alignment.
+This silent problem is unintuitive to programmers and difficult to locate because it is transient.
+To prevent these problems, llheap preserves initial allocation properties for the lifetime of an allocation and the semantics of @realloc@ are augmented to preserve these properties, with additional query routines.
+This change makes the realloc pattern efficient and safe.
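The following example illustrates the hazard the sticky semantics remove, assuming a conventional allocator where @realloc@ does not preserve alignment.
\begin{lstlisting}
#include <malloc.h>   // memalign (glibc)
#include <stdlib.h>

// Illustration of the sticky-property problem under a conventional allocator.
int main( void ) {
	double * p = memalign( 64, 10 * sizeof(double) );  // 64-byte aligned allocation
	p = realloc( p, 1000 * sizeof(double) );           // grow: storage may move
	// Most allocators now only guarantee minimal alignment for p, silently breaking
	// code that relies on 64-byte alignment; llheap makes alignment sticky, so the
	// reallocated storage remains 64-byte aligned.
	free( p );
	return 0;
}
\end{lstlisting}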
+
+
+\subsection{Header}
+
+Preserving allocation properties requires storing additional information with an allocation.
+The only available location is the header, where \VRef[Figure]{f:llheapNormalHeader} shows the llheap storage layout.
+The header has two data fields sized appropriately for 32/64-bit alignment requirements.
+The first field is a union of three values:
+\begin{description}
+\item[bucket pointer]
+is for allocated storage and points back to the bucket associated with this storage request (see \VRef[Figure]{f:llheapStructure} for the fields accessible in a bucket).
+\item[mapped size]
+is for mapped storage and is the storage size for use in unmapping.
+\item[next free block]
+is for free storage and is an intrusive pointer chaining same-size free blocks onto a bucket's free stack.
+\end{description}
+The second field remembers the request size versus the allocation (bucket) size, \eg request 42 bytes which is rounded up to 64 bytes.
+Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors.
+
+\begin{figure}
+\centering
+\input{Header}
+\caption{llheap Normal Header}
+\label{f:llheapNormalHeader}
+\end{figure}
+
+The low-order 3 bits of the first field are \emph{unused} for any stored values, whereas the second field may use all of its bits.
+The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
+Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header.
+This implementation allows a fast test of whether any of the lower 3 bits are on (@&@ and compare).
+If no bits are on, it implies a basic allocation, which is handled quickly;
+otherwise, the bits are analysed and appropriate actions are taken for the complex cases.
+Since most allocations are basic, this implementation results in a significant performance gain along the allocation and free fastpath.
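A sketch of the first field and the fastpath bit test follows, with assumed names and flag values.
\begin{lstlisting}
#include <stddef.h>
#include <stdint.h>

// Hypothetical view of the header's first field (names and layout assumed).
struct Bucket;                         // forward declaration
union FirstField {
	struct Bucket * bucket;            // allocated storage: owning bucket
	size_t mappedSize;                 // mmap storage: size for munmap
	void * nextFree;                   // free storage: intrusive free-list link
	uintptr_t bits;                    // raw view exposing the low-order flag bits
};
enum { MAPPED = 1, ZEROED = 2, ALIGNED = 4, FLAG_MASK = MAPPED | ZEROED | ALIGNED };

static int isBasicAllocation( union FirstField f ) {
	return (f.bits & FLAG_MASK) == 0;  // one AND and compare on the fastpath
}
\end{lstlisting}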
+
+
+\section{Statistics and Debugging Modes}
+
+llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour.
+Incrementing statistic counters must appear on the allocation fastpath.
+As noted, any atomic operation along the fastpath produces a significant increase in allocation costs.
+To make statistics performant enough for use on running systems, each heap has its own set of statistic counters, so heap operations do not require atomic operations.
+
+To locate all statistic counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
+Note, the list is locked to prevent errors traversing an active list;
+the statistics counters are not locked and can flicker during accumulation, which is not an issue with atomic read/write.
+\VRef[Figure]{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
+No other memory allocator studied provides such comprehensive statistical information.
+Finally, these statistics were invaluable during the development of this thesis for debugging and verifying correctness, and hence, should be equally valuable to application developers.
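A reduced sketch of the per-heap counters and their locked accumulation, with assumed names and a mutex standing in for the actual list lock:
\begin{lstlisting}
#include <pthread.h>

// Reduced sketch of per-heap statistics (names assumed for illustration).
struct HeapStats { unsigned long long mallocCalls, mallocBytes; };
struct StatHeap { struct HeapStats stats; struct StatHeap * next; };
static pthread_mutex_t heapListLock = PTHREAD_MUTEX_INITIALIZER;

// Fastpath increments are plain (non-atomic) stores by the owning KT, e.g.:
//   heap->stats.mallocCalls += 1; heap->stats.mallocBytes += size;

static void sumStats( struct StatHeap * heaps, struct HeapStats * total ) {
	pthread_mutex_lock( &heapListLock );                 // lock the list, not the counters
	for ( struct StatHeap * h = heaps; h != NULL; h = h->next ) {
		total->mallocCalls += h->stats.mallocCalls;      // counters may flicker; acceptable
		total->mallocBytes += h->stats.mallocBytes;
	}
	pthread_mutex_unlock( &heapListLock );
}
\end{lstlisting}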
+
+\begin{figure}
+\begin{lstlisting}
+Heap statistics: (storage request / allocation)
+  malloc >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes
+  aalloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  calloc >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes
+  memalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  resize >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  realloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  free !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes
+  away pulls 0; pushes 0; storage 0 / 0 bytes
+  sbrk calls 1; storage 10,485,760 bytes
+  mmap calls 10,000; storage 10,000 / 10,035 bytes
+  munmap calls 10,000; storage 10,000 / 10,035 bytes
+  threads started 4; exited 3
+  heaps new 4; reused 0
+\end{lstlisting}
+\caption{Statistics Output}
+\label{f:StatiticsOutput}
+\end{figure}
+
+llheap can also be built with debug checking, which inserts many asserts along all allocation paths.
+These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruption where internal values (like header fields) are overwritten.
+These checks are best effort as opposed to complete allocation checking as in @valgrind@.
+Nevertheless, the checks detect many allocation problems.
+There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations live for the program duration, and hence, do not free their storage.
+For example, @printf@ allocates a 1024-byte buffer on the first call and never deletes this buffer.
+To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see \VRef{}), and it is subtracted from the total allocate/free difference.
+Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.
+
+Tests indicate only a 30\% performance cost when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistics is mitigated by limited calls, often only one at the end of the program.
+
+
+\section{User-level Threading Support}
+
+The serially-reusable problem (see \VRef{s:AllocationFastpath}) occurs for kernel threads in the ``T:H, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
+The solution is to prevent interrupts that can result in a CPU or KT change during operations that are logically critical sections.
+Locking these critical sections negates any attempt for a quick fastpath and results in high contention.
+For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
+Without time slicing, a user thread performing a long computation can prevent the execution of (starve) other threads.
+To prevent starvation for an allocation-active thread, \ie the time slice always triggers in an allocation critical-section for one thread, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
+The rollforward flag is tested at the end of each allocation funnel routine (see \VPageref{p:FunnelRoutine}), and if set, it is reset and a voluntary yield (context switch) is performed to allow other threads to execute.
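A sketch of the rollforward test at the funnel exit, where the flag name and the runtime's yield routine are assumed:
\begin{lstlisting}
#include <stdbool.h>

// Sketch of the rollforward check (names assumed for illustration).
static __thread volatile bool rollforward = false;  // set by signal handler on aborted slice

extern void uThreadYield( void );                   // hypothetical runtime yield routine

static void funnelEpilogue( void ) {                // executed at end of malloc/free funnel
	if ( rollforward ) {                            // time slice deferred during allocation?
		rollforward = false;
		uThreadYield();                             // voluntary context switch
	}
}
\end{lstlisting}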
+
+llheap uses two techniques to detect when execution is in an allocation operation or a routine called from an allocation operation, to abort any time slice during this period.
+On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting thread-local flags so the signal handler aborts immediately.
+On the fastpath, disabling/enabling interrupts is too expensive as accessing thread-local storage can be expensive and not thread-safe.
+For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
+Hence, there is a window between loading the thread-local pointer from the coprocessor register into a normal register and adding the displacement, during which a time slice can move a thread.
+
+The fast technique defines a special code section and places all non-interruptible routines in this section.
+The linker places all code in this section into a contiguous block of memory, but the order of routines within the block is unspecified.
+Then, the signal handler compares the program counter at the point of interrupt with the start and end address of the non-interruptible section, and if executing within this section, aborts and sets the rollforward flag.
+This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible section (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath.
+Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible section.
+This issue is mitigated by the llheap funnel design so only funnel routines and a few statistics routines are placed in the non-interruptible section and their assembler code examined.
+These techniques are used in both the \uC and \CFA versions of llheap, where both of these systems have user-level threading.
+
+
+\section{Bootstrapping}
+
+There are problems bootstrapping a memory allocator.
+\begin{enumerate}
+\item
+Programs can be statically or dynamically linked.
+\item
+The order in which the linker schedules startup code is poorly supported.
+\item
+Knowing a KT's start and end independently from the KT code is difficult.
+\end{enumerate}
+
+For static linking, the allocator is loaded with the program.
+Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program.
+This approach allows allocator substitution by placing an allocation library before any other in the linked/load path.
+
+Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded.
+As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including those from the dynamic loader.
+Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct.
+Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations.
+Testing showed up to a 5\% performance cost for dynamic linking over static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding.
+
+All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap.
+The problem is getting initialization done before the first allocator call.
+However, there does not seem to be a mechanism to tell either the static or dynamic loader to first perform initialization code before any calls to a loaded library.
+As a result, calls to allocation routines occur without initialization.
+To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
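A sketch of this singleton check, with assumed names for the flag and helper routines:
\begin{lstlisting}
#include <stdbool.h>
#include <stddef.h>

// Sketch of the fastpath singleton check (names assumed for illustration).
static __thread bool heapInitialized = false;

extern void heapManagerCtor( void );        // assumed: initializes this KT's heap
extern void * fastpathAlloc( size_t size ); // assumed: the normal allocation path

void * malloc( size_t size ) {
	if ( ! heapInitialized ) {              // conditional check on every call
		heapInitialized = true;             // set before the ctor to limit re-entry
		heapManagerCtor();
	}
	return fastpathAlloc( size );
}
\end{lstlisting}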
+
+Two other important execution points are program startup and termination, which include prologue and epilogue code to bootstrap a program, of which programmers are unaware.
+For example, dynamic-memory allocations before or after the application should not be considered in statistics because the application does not make these calls.
+llheap establishes these two points using routines:
+\begin{lstlisting}
+__attribute__(( constructor( 100 ) )) static void startup( void ) {
+	// clear statistic counters
+	// reset allocUnfreed counter
+}
+__attribute__(( destructor( 100 ) )) static void shutdown( void ) {
+	// sum allocUnfreed for all heaps
+	// subtract global unfreed storage
+	// if allocUnfreed > 0 then print warning message
+}
+\end{lstlisting}
+which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority.
+Application programs may only use global constructor/destructor priorities greater than 100.
+Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the program terminates but before the program epilogue.
+By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue.
+
+While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines.
+However, it is essential for the allocator to know when each KT is started/terminated.
+One approach is to create a thread-local object with a constructor/destructor, which is triggered after a new KT starts and before it terminates, respectively.
+\begin{lstlisting}
+struct ThreadManager {
+	volatile bool pgm_thread;
+	ThreadManager() {} // unusable
+	~ThreadManager() { if ( pgm_thread ) heapManagerDtor(); }
+};
+static thread_local ThreadManager threadManager;
+\end{lstlisting}
+Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor.
+Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT.
+Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to call @threadManager@'s constructor, otherwise its destructor is not called.
+Now when a KT terminates, @~ThreadManager@ is called to chain it onto the global-heap free-stack, where @pgm_thread@ is set to true only for the program KT.
+The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage.
+
+Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage.
+This recursion is handled with another thread-local flag to prevent double initialization.
+A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@.
+In the meantime, the terminated heap has been put on the global-heap free-stack, and may be in use by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away list using locking.
+
+For user threading systems, the KTs are controlled by the runtime, and hence, start/end points are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
+The following API was created to provide interaction between the language runtime and the allocator.
+\begin{lstlisting}
+void startTask();			$\C{// KT starts}$
+void finishTask();			$\C{// KT ends}$
+void startup();			$\C{// when application code starts}$
+void shutdown();			$\C{// when application code ends}$
+bool traceHeap();			$\C{// enable allocation/free printing for debugging}$
+bool traceHeapOn();			$\C{// start printing allocation/free calls}$
+bool traceHeapOff();			$\C{// stop printing allocation/free calls}$
+\end{lstlisting}
+This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Added Features and Methods}
-To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user friendly, we added a few more routines to the C allocator.
-Also, we built a \CFA (FIX ME: cite cforall) interface on top of C interface to increase the usability of the allocator.
-
-\subsection{C Interface}
-We added a few more features and routines to the allocator's C interface that can make the allocator more usable to the programmers.
-These features will programmer more control on the dynamic memory allocation.
+
+The C dynamic-allocation API (see \VRef[Figure]{f:CDynamicAllocationAPI}) is neither orthogonal nor complete.
+For example,
+\begin{itemize}
+\item
+It is possible to zero fill or align an allocation but not both.
+\item
+It is \emph{only} possible to zero fill an array allocation.
+\item
+It is not possible to resize a memory allocation without data copying.
+\item
+@realloc@ does not preserve initial allocation properties.
+\end{itemize}
+As a result, programmers must provide these options themselves, which is error prone, resulting in blame on the entire programming language for a poor dynamic-allocation API.
+Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation.
+
+\begin{figure}
+\begin{lstlisting}
+void * malloc( size_t size );
+void * calloc( size_t nmemb, size_t size );
+void * realloc( void * ptr, size_t size );
+void * reallocarray( void * ptr, size_t nmemb, size_t size );
+void free( void * ptr );
+void * memalign( size_t alignment, size_t size );
+void * valloc( size_t size );
+void * pvalloc( size_t size );
+struct mallinfo mallinfo( void );
+int mallopt( int param, int val );
+int malloc_trim( size_t pad );
+size_t malloc_usable_size( void * ptr );
+void malloc_stats( void );
+int malloc_info( int options, FILE * fp );
+\end{lstlisting}
+\caption{C Dynamic-Allocation API}
+\label{f:CDynamicAllocationAPI}
+\end{figure}
+
+The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap.
+

\subsection{Out of Memory}
…
The alternative is to abort a program when out of memory.
In theory, notifying the programmer allows recovery;
-in practice, it is almost impossible to gracefully when out of memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen.
-
-
-\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
+in practice, it is almost impossible to gracefully recover when out of memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen for llheap.
+
+
+\subsection{C Interface}
+
+Within the C type-system, it is still possible to increase orthogonality and functionality of the dynamic-memory API to make the allocator more usable for programmers.
+
+\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
@aalloc@ is an extension of @malloc@.
It allows the programmer to allocate a dynamic array of objects without calculating the total size of the array explicitly.
…
On failure, it returns a @NULL@ pointer.
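As a hedged sketch (not llheap's source), @aalloc@'s semantics are essentially @calloc@'s array interface without the zero fill:
\begin{lstlisting}
#include <stdlib.h>

// Illustrative semantics of aalloc: calloc's array interface without zero fill.
// (Production code must also guard against dim * elemSize overflow.)
void * aalloc_sketch( size_t dim, size_t elemSize ) {
	return malloc( dim * elemSize );   // no zero fill, unlike calloc
}
\end{lstlisting}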

-\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
+\paragraph{\lstinline{void * resize( void * oaddr, size_t size )}}
@resize@ is an extension of @realloc@.
It allows the programmer to reuse a currently allocated dynamic object with a new size requirement.
…
On failure, it returns a @NULL@ pointer.

-\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
+\paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
In addition to resizing the size of an old object, it can also realign the old object to a new alignment requirement.
…
On failure, it returns a @NULL@ pointer.

-\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
+\paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
@amemalign@ is a hybrid of @memalign@ and @aalloc@.
It allows the programmer to allocate an aligned dynamic array of objects without calculating the total size of the array explicitly.
…
On failure, it returns a @NULL@ pointer.

-\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
+\paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
@cmemalign@ is a hybrid of @amemalign@ and @calloc@.
It allows the programmer to allocate an aligned dynamic array of objects that is zero filled.
…
On failure, it returns a @NULL@ pointer.

-\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
+\paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
@malloc_alignment@ returns the alignment of a currently allocated dynamic object.
It aids the programmer in memory management and personal bookkeeping.
…
On failure, it returns the default alignment of the llheap allocator.

-\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
+\paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation.
It aids the programmer in memory management and personal bookkeeping.
…
On failure, it returns false.

-\subsection{\lstinline{size_t malloc_size( void * addr )}}
-@malloc_size@ returns the allocation size of a currently allocated dynamic object.
+\paragraph{\lstinline{size_t malloc_size( void * addr )}}
+@malloc_size@ returns the request size of a currently allocated dynamic object.
It aids the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the size of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required size.
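A hedged usage sketch of this producer-consumer size check, declaring llheap's @malloc_size@ for illustration:
\begin{lstlisting}
#include <assert.h>
#include <stddef.h>

extern size_t malloc_size( void * addr );          // llheap query routine described above

// Consumer-side size verification in a producer-consumer hand-off.
static void consume( void * obj, size_t requiredSize ) {
	assert( malloc_size( obj ) >= requiredSize );  // object requested large enough
	/* ... use obj ... */
}
\end{lstlisting}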
     
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
-@malloc_size@ returns the allocation size of the given dynamic object.
+@malloc_size@ returns the request size of the given dynamic object.
On failure, it returns zero.

-\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
+
+\subsection{\CC Interface}
+
+\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
In addition to reallocating an old object and preserving the data in the old object, it can also realign the old object to a new alignment requirement.
…
On failure, it returns a @NULL@ pointer.

-\subsection{\CFA Malloc Interface}
+
+\subsection{\CFA Interface}
We added some routines to the @malloc@ interface of \CFA.
These routines can only be used in \CFA and not in our stand-alone llheap allocator because they use features only provided by \CFA and not by C.
  • doc/theses/mubeen_zulfiqar_MMath/background.tex

rac4476d → r23f1065
The management data starts with fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
-Allocated objects (white) are variable sized, and allocated and maintained by the program;
+Allocated objects (light grey) are variable sized, and allocated and maintained by the program;
\ie only the program knows the location of allocated storage, not the memory allocator.
\begin{figure}[h]
…
\label{f:AllocatorComponents}
\end{figure}
-Freed objects (light grey) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
+Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
  • doc/theses/mubeen_zulfiqar_MMath/intro.tex

rac4476d → r23f1065

\item
-Provide complete, fast, and contention-free allocation statistics to help understand program behaviour:
+Provide complete, fast, and contention-free allocation statistics to help understand allocation behaviour:
\begin{itemize}
\item