Timestamp:
Apr 19, 2022, 3:00:04 PM (3 years ago)
Author:
m3zulfiq <m3zulfiq@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
5b84a321
Parents:
ba897d21 (diff), bb7c77d (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

added benchmark and evaluations chapter to thesis

Location:
doc/theses/mubeen_zulfiqar_MMath
Files:
7 added
9 edited

  • doc/theses/mubeen_zulfiqar_MMath/Makefile

    rba897d21 r2e9b59b  
    1 # directory for latex clutter files
     1# Configuration variables
     2
    23Build = build
    34Figures = figures
    45Pictures = pictures
     6
     7LaTMac = ../../LaTeXmacros
     8BibRep = ../../bibliography
     9
    510TeXSRC = ${wildcard *.tex}
    611FigSRC = ${notdir ${wildcard ${Figures}/*.fig}}
    712PicSRC = ${notdir ${wildcard ${Pictures}/*.fig}}
    8 BIBSRC = ${wildcard *.bib}
    9 TeXLIB = .:../../LaTeXmacros:${Build}: # common latex macros
    10 BibLIB = .:../../bibliography # common citation repository
     13BibSRC = ${wildcard *.bib}
     14
     15TeXLIB = .:${LaTMac}:${Build}:
     16BibLIB = .:${BibRep}:
    1117
    1218MAKEFLAGS = --no-print-directory # --silent
    1319VPATH = ${Build} ${Figures} ${Pictures} # extra search path for file names used in document
    1420
    15 ### Special Rules:
     21DOCUMENT = uw-ethesis.pdf
     22BASE = ${basename ${DOCUMENT}}                  # remove suffix
    1623
    17 .PHONY: all clean
    18 .PRECIOUS: %.dvi %.ps # do not delete intermediate files
    19 
    20 ### Commands:
     24# Commands
    2125
    2226LaTeX = TEXINPUTS=${TeXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${Build}
    23 BibTeX = BIBINPUTS=${BibLIB} bibtex
     27BibTeX = BIBINPUTS=${BibLIB} && export BIBINPUTS && bibtex
    2428#Glossary = INDEXSTYLE=${Build} makeglossaries-lite
    2529
    26 ### Rules and Recipes:
     30# Rules and Recipes
    2731
    28 DOC = uw-ethesis.pdf
    29 BASE = ${DOC:%.pdf=%} # remove suffix
     32.PHONY : all clean                              # not file names
     33.PRECIOUS: %.dvi %.ps # do not delete intermediate files
     34.ONESHELL :
    3035
    31 all: ${DOC}
     36all : ${DOCUMENT}
    3237
    33 clean:
    34         @rm -frv ${DOC} ${Build}
     38clean :
     39        @rm -frv ${DOCUMENT} ${Build}
    3540
    36 # File Dependencies #
     41# File Dependencies
    3742
    38 ${Build}/%.dvi : ${TeXSRC} ${FigSRC:%.fig=%.tex} ${PicSRC:%.fig=%.pstex} ${BIBSRC} Makefile | ${Build}
     43%.dvi : ${TeXSRC} ${FigSRC:%.fig=%.tex} ${PicSRC:%.fig=%.pstex} ${BibSRC} ${BibRep}/pl.bib ${LaTMac}/common.tex Makefile | ${Build}
    3944        ${LaTeX} ${BASE}
    4045        ${BibTeX} ${Build}/${BASE}
    4146        ${LaTeX} ${BASE}
    42         # if nedded, run latex again to get citations
     47        # if needed, run latex again to get citations
    4348        if fgrep -s "LaTeX Warning: Citation" ${basename $@}.log ; then ${LaTeX} ${BASE} ; fi
    4449#       ${Glossary} ${Build}/${BASE}
     
    4651
    4752${Build}:
    48         mkdir $@
     53        mkdir -p $@
    4954
    5055%.pdf : ${Build}/%.ps | ${Build}
  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    rba897d21 r2e9b59b  
    11\chapter{Allocator}
    22
    3 \section{uHeap}
    4 uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements).
    5 
    6 The objective of uHeap's new design was to fulfill following requirements:
    7 \begin{itemize}
    8 \item It should be concurrent and thread-safe for multi-threaded programs.
    9 \item It should avoid global locks, on resources shared across all threads, as much as possible.
    10 \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
    11 \item It should be a lightweight memory allocator.
    12 \end{itemize}
     3This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
     4The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.
     5
     6
     7\section{llheap}
     8
     9The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
     10(Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
     11A direct consequence of this objective is very simple or no storage coalescing;
     12hence, llheap's design is willing to use more storage to lower latency.
     13This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM memory.
     14Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{c:Performance}).
     15
     16% The objective of llheap's new design was to fulfill following requirements:
     17% \begin{itemize}
     18% \item It should be concurrent and thread-safe for multi-threaded programs.
     19% \item It should avoid global locks, on resources shared across all threads, as much as possible.
     20% \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
     21% \item It should be a lightweight memory allocator.
     22% \end{itemize}
    1323
    1424%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1525
     26<<<<<<< HEAD
    1627\section{Design choices for uHeap}\label{sec:allocatorSec}
    1728uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, following two designs of uHeapLmm were proposed:
    18 
    19 \paragraph{Design 1: Centralized}
    20 One heap, but lower bucket sizes are N-shared across KTs.
    21 This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
    22 When KTs $\le$ N, the important bucket sizes are uncontented.
    23 When KTs $>$ N, the free buckets are contented.
    24 Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
    25 \begin{cquote}
     29=======
     30\section{Design Choices}
     31>>>>>>> bb7c77dc425e289ed60aa638529b3e5c7c3e4961
     32
     33llheap's design was reviewed and changed multiple times throughout the thesis.
     34Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
     35Note, a few simple tests of each design choice were compared with the current best allocators to determine the design's viability.
     36
     37
     38\subsection{Allocation Fastpath}
     39\label{s:AllocationFastpath}
     40
     41These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
     42\paragraph{T:1 model}
     43\VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs.
     44This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes.
     45When KTs $\le$ N, the common bucket sizes are uncontended;
     46when KTs $>$ N, the free buckets are contended and latency increases significantly.
     47In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
     48Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
     49however, picking N is workload specific.
     50
     51\begin{figure}
     52\centering
     53\input{AllocDS1}
     54\caption{T:1 with Shared Buckets}
     55\label{f:T1SharedBuckets}
     56\end{figure}
     57
     58Problems:
     59\begin{itemize}
     60\item
     61Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
     62\item
     63When no thread is assigned a bucket number, its free storage is unavailable.
     64\item
     65All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
     66\end{itemize}
     67Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs, and any contention among KTs produced a significant spike in latency.
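The lock cost on the T:1 fastpath can be illustrated with a minimal sketch in C. This is not llheap's code: @bucket@, @alloc_fast@, @free_fast@, and the size-to-bucket mapping are hypothetical, chosen only to show that a shared bucket forces a lock acquire/release on every allocation, even when uncontended.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

// Hypothetical T:1 fastpath: buckets are shared by all KTs, so every
// allocation pays for a lock acquire/release, contended or not.
typedef struct free_node { struct free_node * next; } free_node;

typedef struct bucket {
    pthread_mutex_t lock;               // shared bucket => lock on the fastpath
    free_node * free_list;              // free objects of this bucket's size
} bucket;

#define NBUCKETS 8
static bucket buckets[NBUCKETS];

static void buckets_init( void ) {
    for ( int i = 0; i < NBUCKETS; i += 1 )
        pthread_mutex_init( &buckets[i].lock, NULL );
}

static void * alloc_fast( size_t size ) {
    bucket * b = &buckets[size % NBUCKETS];   // toy size-to-bucket mapping
    pthread_mutex_lock( &b->lock );           // cost paid on EVERY allocation
    free_node * n = b->free_list;
    if ( n ) b->free_list = n->next;          // pop a free object
    pthread_mutex_unlock( &b->lock );
    return n ? (void *)n : malloc( size );    // slowpath: bucket list empty
}

static void free_fast( void * p, size_t size ) {
    bucket * b = &buckets[size % NBUCKETS];
    pthread_mutex_lock( &b->lock );           // same lock on the free fastpath
    free_node * n = p;
    n->next = b->free_list;  b->free_list = n;
    pthread_mutex_unlock( &b->lock );
}
```

Even with @pthread_mutex@ replaced by a spin lock or atomic exchange, the fence semantics of the acquire/release remain on the critical path, which is the slowdown the tests measured.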
     68
     69\paragraph{T:H model}
     70\VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs.
     71A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
     72When KTs $\le$ N, the heaps are uncontended;
     73when KTs $>$ N, the heaps are contended.
     74In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
     75By adjusting N upwards, this approach reduces contention but increases storage (time versus space);
     76however, picking N is workload specific.
     77
     78\begin{figure}
    2679\centering
    2780\input{AllocDS2}
    28 \end{cquote}
    29 Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
    30 When no thread is assigned a bucket number, its free storage is unavailable. All KTs will be contended for one lock on sbrk for their initial allocations (before free-lists gets populated).
    31 
    32 \paragraph{Design 2: Decentralized N Heaps}
    33 Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
    34 Kernel threads (KT) are assigned to the N heaps.
    35 When KTs $\le$ N, the heaps are uncontented.
    36 When KTs $>$ N, the heaps are contented.
    37 By adjusting N, this approach reduces storage at the cost of speed due to contention.
    38 In all cases, a thread acquires/releases a lock, contented or uncontented.
    39 \begin{cquote}
    40 \centering
    41 \input{AllocDS1}
    42 \end{cquote}
    43 Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT.
    44 
    45 \paragraph{Design 3: Decentralized Per-thread Heaps}
    46 Design 3 is similar to design 2 but instead of having an M:N model, it uses a 1:1 model. So, instead of having N heaos and sharing them among M KTs, Design 3 has one heap for each KT.
    47 Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area.
    48 Each KT will have its own exclusive thread-local heap. Heap will be uncontended between KTs regardless how many KTs have been created.
    49 Operations on @sbrk@ area will still be protected by locks.
    50 %\begin{cquote}
    51 %\centering
    52 %\input{AllocDS3} FIXME add figs
    53 %\end{cquote}
    54 Problems: We cannot destroy the heap when a KT exits because our dynamic objects have ownership and they are returned to the heap that created them when the program frees a dynamic object. All dynamic objects point back to their owner heap. If a thread A creates an object O, passes it to another thread B, and A itself exits. When B will free object O, O should return to A's heap so A's heap should be preserved for the lifetime of the whole program as their might be objects in-use of other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT.
    55 
    56 \paragraph{Design 4: Decentralized Per-CPU Heaps}
    57 Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU.
    58 Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area.
    59 Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be entertained by the heap of the CPU where the process is currently running on.
    60 Each CPU will have its own exclusive heap. Just like Design 3(FIXME cite), heap will be uncontended between KTs regardless how many KTs have been created.
    61 Operations on @sbrk@ area will still be protected by locks.
    62 To deal with preemtion during a dynamic memory operation, librseq(FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences can make it possible to re-run a critical section and undo the current writes if a preemption happened during the critical section's execution.
    63 %\begin{cquote}
    64 %\centering
    65 %\input{AllocDS4} FIXME add figs
    66 %\end{cquote}
    67 
    68 Problems: This approach was slower than the per-thread model. Also, librseq does not provide such restartable sequences to detect preemtions in user-level threading system which is important to us as CFA(FIXME cite) has its own threading system that we want to support.
    69 
    70 Out of the four designs, Design 3 was chosen because of the following reasons.
    71 \begin{itemize}
    72 \item
    73 Decentralized designes are better in general as compared to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designes shard the whole heap which has all the buckets with the addition of sharding sbrk area. So Design 1 was eliminated.
    74 \item
    75 Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenerio.
    76 \item
    77 Design 4 was eliminated because it was slower than Design 3 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achive user-threading safety which has some cost to it. Desing 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
    78 \end{itemize}
    79 
    80 
    81 \subsection{Advantages of distributed design}
    82 
    83 The distributed design of uHeap is concurrent to work in multi-threaded applications.
    84 
    85 Some key benefits of the distributed design of uHeap are as follows:
    86 
    87 \begin{itemize}
    88 \item
    89 The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as bump allocation reserve. The call to sbrk will be protected using locks but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned.
    90 \item
    91 Low or almost no contention on heap resources.
    92 \item
    93 It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
    94 \item
    95 Distributed design avoids unnecassry locks on resources shared across all KTs.
    96 \end{itemize}
     81\caption{T:H with Shared Heaps}
     82\label{f:THSharedHeaps}
     83\end{figure}
     84
     85Problems:
     86\begin{itemize}
     87\item
     88Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
     89\item
     90When no thread is assigned to a heap, its free storage is unavailable.
     91\item
     92Ownership issues arise (see \VRef{s:Ownership}).
     93\item
     94All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
     95\end{itemize}
     96Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs, and any contention among KTs produced a significant spike in latency.
     97
     98\paragraph{T:H model, H = number of CPUs}
     99This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
     100(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
     101Hence, each CPU logically has its own private heap and local pool.
     102A memory operation is serviced from the heap associated with the CPU executing the operation.
     103This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
     104This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs.
     105
     106Problems:
     107\begin{itemize}
     108\item
     109Need to know when a CPU is added/removed from the @taskset@.
     110\item
     111Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
     112\item
     113Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
     114\begin{quote}
     115A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
     116\end{quote}
     117If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
     118Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
     119Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.
     120
     121Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
     122\end{itemize}
     123Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs.
     124Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
     125For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
     126The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
     127Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
     128Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
     129If @librseq@ had an @rseq_abort@ that:
     130\begin{enumerate}
     131\item
     132marks the current restartable critical-section as cancelled so it restarts when attempting to commit, and
     133\item
     134does nothing if there is no restartable critical-section in progress,
     135\end{enumerate}
     136then @rseq_abort@ could be called on the backside of a user-level context-switch.
     137A feature similar to this idea might exist for hardware transactional-memory.
     138A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
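The T:H=CPU fastpath can be sketched in C without @librseq@, using Linux's @sched_getcpu@ to select the per-CPU heap. This is a simplification of what llheap tried: the names (@alloc_percpu@, @cpu_heaps@, @MAXCPUS@) are hypothetical, and, crucially, the sketch is deliberately unsafe in the commented window, which is exactly the serially-reusable problem that @librseq@'s restartable sequences exist to close.

```c
#define _GNU_SOURCE
#include <sched.h>                      // sched_getcpu (Linux-specific)
#include <stddef.h>
#include <stdlib.h>

// Hypothetical T:H=CPU fastpath: one heap per CPU, selected by CPU id.
typedef struct free_node { struct free_node * next; } free_node;
typedef struct heap { free_node * free_list; } heap;

#define MAXCPUS 256
static heap cpu_heaps[MAXCPUS];

static void * alloc_percpu( size_t size ) {
    int cpu = sched_getcpu();           // which CPU is this KT running on?
    if ( cpu < 0 ) return malloc( size );
    heap * h = &cpu_heaps[cpu % MAXCPUS];
    // UNSAFE window (serially-reusable problem): if this KT is preempted
    // here, another KT scheduled on the same CPU can enter this critical
    // section and pop the same node, corrupting the per-CPU heap.  A
    // restartable sequence would restart this section after preemption.
    free_node * n = h->free_list;
    if ( n ) { h->free_list = n->next; return n; }
    return malloc( size );              // slowpath: per-CPU list is empty
}
```

The sketch also shows the UT-migration failure mode: @sched_getcpu@ (or @librseq@'s CPU id) identifies the KT's CPU, but a user-level scheduler can move the UT to a different KT mid-operation, leaving it holding references into another CPU's heap.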
     139
     140\paragraph{1:1 model}
     141This design is the T:H model with T = H, where there is one thread-local heap for each KT.
     142(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
     143Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
     144Heaps are uncontended for a KT's memory operations to its heap (modulo operations on the global pool and ownership).
     145
     146Problems:
     147\begin{itemize}
     148\item
     149Need to know when a KT starts/terminates to create/delete its heap.
     150
     151\noindent
     152It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
     153\item
     154There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
     155
     156\noindent
     157The classic solution only deletes a heap after all referents are returned, which is complex.
     158The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
     159If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
     160To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
     161In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
     162\item
     163There can be significant external fragmentation as the number of KTs increases.
     164
     165\noindent
     166In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
     167Since the number of CPUs is relatively small, $<$~1024, and a heap is relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
     168\item
     169There is the same serially-reusable problem with UTs migrating across KTs.
     170\end{itemize}
     171Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
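The 1:1 lifecycle hooks mentioned above (detecting KT start/termination, and persisting heaps for reuse) can be sketched with POSIX thread-specific data, whose destructor runs at thread exit. This is a sketch under assumptions, not llheap's implementation: @my_heap@, @heap_retire@, and the mutex-protected free list are hypothetical names standing in for whatever constructor/destructor mechanism the allocator actually leverages.

```c
#include <pthread.h>
#include <stdlib.h>

// Hypothetical 1:1 model lifecycle: one heap per KT, created lazily on the
// KT's first allocation and chained onto a global free list at KT
// termination, so heaps persist for reuse rather than being destroyed.
typedef struct heap { struct heap * next; /* buckets ... */ } heap;

static heap * free_heaps = NULL;                 // heaps from terminated KTs
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_key_t heap_key;
static pthread_once_t once = PTHREAD_ONCE_INIT;

static void heap_retire( void * arg ) {          // destructor: runs at KT exit
    heap * h = arg;
    pthread_mutex_lock( &free_lock );
    h->next = free_heaps;  free_heaps = h;       // persist heap for a new KT
    pthread_mutex_unlock( &free_lock );
}
static void make_key( void ) { pthread_key_create( &heap_key, heap_retire ); }

static heap * my_heap( void ) {                  // uncontended after first call
    pthread_once( &once, make_key );
    heap * h = pthread_getspecific( heap_key );
    if ( ! h ) {                                 // KT start: reuse or create
        pthread_mutex_lock( &free_lock );
        h = free_heaps;
        if ( h ) free_heaps = h->next;
        pthread_mutex_unlock( &free_lock );
        if ( ! h ) h = calloc( 1, sizeof(heap) );
        pthread_setspecific( heap_key, h );
    }
    return h;
}
```

A reused heap may arrive pre-populated with the prior KT's free storage, which is the external-fragmentation trade-off discussed above.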
     172
     173
     174\vspace{5pt}
     175\noindent
     176The conclusion from this design exercise is: any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
     177For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
     178For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
     179However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
     180More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
     181This leaves the 1:1 model, with no atomic actions along the fastpath and no special operating-system support required.
     182The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
     183
     184
     185% \begin{itemize}
     186% \item
     187% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
     188% \item
     189% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
     190% \item
     191% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
     192% that  because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
     193% \end{itemize}
     194% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
     195
     196% \subsection{Advantages of distributed design}
     197%
     198% The distributed design of llheap is concurrent to work in multi-threaded applications.
     199% Some key benefits of the distributed design of llheap are as follows:
     200% \begin{itemize}
     201% \item
     202% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
     203% \item
     204% Low or almost no contention on heap resources.
     205% \item
     206% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
     207% \item
     208% Distributed design avoids unnecessary locks on resources shared across all KTs.
     209% \end{itemize}
     210
     211\subsection{Allocation Latency}
     212
     213A primary goal of llheap is low latency.
     214Two forms of latency are internal and external.
     215Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
     216Ideally latency is $O(1)$ with a small constant.
     217
     218To obtain $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and leads to external fragmentation.
     219The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
     220
     221To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potentially incurs large external fragmentation.
     222Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
     223The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VPageref{p:malloc_expansion}).
     224Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
     225
    97226
    98227%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    99228
    100 \section{uHeap Structure}
    101 
    102 As described in (FIXME cite 2.4) uHeap uses following features of multi-threaded memory allocators.
    103 \begin{itemize}
    104 \item
    105 uHeap has multiple heaps without a global heap and uses 1:1 model. (FIXME cite 2.5 1:1 model)
    106 \item
    107 uHeap uses object ownership. (FIXME cite 2.5.2)
    108 \item
    109 uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead each dynamic object allocated by uHeap has a header than contains bookkeeping information.
    110 \item
    111 Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using sbrk() call. (FIXME cite 2.7)
    112 \item
    113 Unless a heap is freeing an object that is owned by another thread's heap or heap is using sbrk() system call, uHeap is mostly lock-free which eliminates most of the contention on shared resources. (FIXME cite 2.8)
    114 \end{itemize}
    115 
    116 As uHeap uses a heap per-thread model to reduce contention on heap resources, we manage a list of heaps (heap-list) that can be used by threads. The list is empty at the start of the program. When a kernel thread (KT) is created, we check if heap-list is empty. If no then a heap is removed from the heap-list and is given to this new KT to use exclusively. If yes then a new heap object is created in dynamic memory and is given to this new KT to use exclusively. When a KT exits, its heap is not destroyed but instead its heap is put on the heap-list and is ready to be reused by new KTs.
    117 
    118 This reduces the memory footprint as the objects on free-lists of a KT that has exited can be reused by a new KT. Also, we preserve all the heaps that were created during the lifetime of the program till the end of the program. uHeap uses object ownership where an object is freed to the free-buckets of the heap that allocated it. Even after a KT A has exited, its heap has to be preserved as there might be objects in-use of other threads that were initially allocated by A and the passed to other threads.
     229\section{llheap Structure}
     230
     231\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
     232\begin{itemize}
     233\item
     2341:1 multiple-heap model to minimize the fastpath,
     235\item
     236can be built with or without heap ownership,
     237\item
     238headers per allocation versus containers,
     239\item
     240no coalescing to minimize latency,
     241\item
     242global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
     243\item
     244local reserved memory (pool) per heap obtained from global pool,
     245\item
     246global reserved memory (pool) obtained from the operating system using @sbrk@ call,
     247\item
     248optional fast-lookup table for converting allocation requests into bucket sizes,
     249\item
     250optional statistic-counters table for accumulating counts of allocation operations.
     251\end{itemize}
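Of these features, the optional fast-lookup table is simple enough to sketch. The bucket sizes, table bound, and names below are illustrative only (llheap's actual bucket sizes differ); the point is that a one-byte-per-size table converts an allocation request into a bucket index with a single load, keeping the fastpath search-free.

```c
#include <stddef.h>

// Hypothetical fast-lookup table: maps small request sizes directly to a
// bucket index, so the allocation fastpath does no searching.
#define LOOKUP_MAX 1024
static unsigned char bucket_of[LOOKUP_MAX + 1];       // size -> bucket index
static const size_t bucket_sizes[] =
    { 16, 32, 48, 64, 96, 128, 192, 256, 384, 512, 768, 1024 };

static void lookup_init( void ) {                     // one-time table fill
    size_t b = 0;
    for ( size_t s = 0; s <= LOOKUP_MAX; s += 1 ) {
        if ( s > bucket_sizes[b] ) b += 1;            // next bucket size
        bucket_of[s] = (unsigned char)b;
    }
}

static size_t size2bucket( size_t size ) {            // O(1) on the fastpath
    return bucket_of[size];                           // caller: size <= LOOKUP_MAX
}
```

Requests above the table bound fall through to a slower mapping (or @mmap@), which is off the low-latency fastpath anyway.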
    119252
    120253\begin{figure}
    121254\centering
     255<<<<<<< HEAD
    122256\includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
    123257\caption{uHeap Structure}
    124258\label{fig:heapStructureFig}
     259=======
     260% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
     261\input{llheap}
     262\caption{llheap Structure}
     263\label{f:llheapStructure}
     264>>>>>>> bb7c77dc425e289ed60aa638529b3e5c7c3e4961
    125265\end{figure}
    126266
     267llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores, that persists for program duration.
     268There is a global bump-pointer to the next free heap in the array.
     269When this array is exhausted, another array is allocated.
     270There is a global top pointer for a heap intrusive link to chain free heaps from terminated threads.
     271When statistics are turned on, there is a global top pointer for a heap intrusive link to chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps using @malloc_stats@.
     272
     273When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
     274When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
The free heaps are stored as a stack so hot storage is reused first.
Preserving all heaps created during the program lifetime solves the storage-lifetime problem when ownership is used.
     277This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
     278llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
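The heap-reuse protocol can be sketched as follows; this is an illustrative fragment, not llheap's code, where the names (@heap_acquire@, @heap_release@) are assumptions and @calloc@ stands in for carving a heap out of the @mmap@-ed heap array:

```c
#include <assert.h>
#include <stdlib.h>
#include <pthread.h>

// Illustrative sketch of llheap's heap reuse: a terminating KT pushes its
// heap onto a global free stack and a starting KT pops from it, so heaps are
// never destroyed and hot heaps are reused first (LIFO).  Names are
// assumptions; calloc stands in for carving a heap from the mmap-ed array.
typedef struct Heap {
    struct Heap * next;                      // intrusive link for the free stack
    // ... buckets, local reserved pool, statistics counters ...
} Heap;

static Heap * freeHeaps = NULL;              // top of the free-heap stack
static pthread_mutex_t heapLock = PTHREAD_MUTEX_INITIALIZER;

Heap * heap_acquire( void ) {                // called when a KT starts
    pthread_mutex_lock( &heapLock );
    Heap * h = freeHeaps;
    if ( h != NULL ) freeHeaps = h->next;    // reuse heap of a terminated KT
    pthread_mutex_unlock( &heapLock );
    if ( h == NULL ) h = calloc( 1, sizeof(Heap) );  // else create a new heap
    return h;
}

void heap_release( Heap * h ) {              // called when a KT terminates
    pthread_mutex_lock( &heapLock );         // heap is preserved, never freed
    h->next = freeHeaps;
    freeHeaps = h;
    pthread_mutex_unlock( &heapLock );
}
```

The LIFO discipline is the design point: the most recently released heap is the most likely to still be warm in cache.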
     279
     280Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
     281The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
     282Each free bucket of a specific size has the following two lists:
     283\begin{itemize}
     284\item
     285A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
     286The free objects are a stack so hot storage is reused first.
     287\item
     288For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
When the free stack is empty, the entire away stack is removed in one locked operation and becomes the corresponding free stack.
     290\end{itemize}
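The two per-bucket lists can be sketched as follows; the names and the mutex protecting the away stack are illustrative assumptions, not llheap's actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <pthread.h>

// Illustrative sketch of one free bucket: an unlocked free stack used only by
// the owner KT, and a lock-protected away stack for returns from other KTs.
// When the free stack is empty, the whole away stack is detached in one
// locked operation and becomes the new free stack.
typedef struct Free { struct Free * next; } Free;  // intrusive link inside a free object

typedef struct Bucket {
    size_t blockSize;
    Free * freeStack;                        // owner KT only: no locking
    Free * awayStack;                        // shared: protected by awayLock
    pthread_mutex_t awayLock;
} Bucket;

void bucket_free_local( Bucket * b, Free * obj ) {  // owner KT frees
    obj->next = b->freeStack;                // LIFO: hot storage reused first
    b->freeStack = obj;
}

void bucket_free_away( Bucket * b, Free * obj ) {   // another KT frees
    pthread_mutex_lock( &b->awayLock );
    obj->next = b->awayStack;
    b->awayStack = obj;
    pthread_mutex_unlock( &b->awayLock );
}

Free * bucket_alloc( Bucket * b ) {          // owner KT allocates
    if ( b->freeStack == NULL ) {            // drain away stack in one operation
        pthread_mutex_lock( &b->awayLock );
        b->freeStack = b->awayStack;
        b->awayStack = NULL;
        pthread_mutex_unlock( &b->awayLock );
    }
    Free * obj = b->freeStack;
    if ( obj != NULL ) b->freeStack = obj->next;
    return obj;                              // NULL => fall back to local/global pool
}
```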
     291
     292Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
     293First, the allocation is divided into small (@sbrk@) or large (@mmap@).
     294For large allocations, the storage is mapped directly from the operating system.
     295For small allocations, $S$ is quantized into a bucket size.
     296Quantizing is performed using a binary search over the ordered bucket array.
     297An optional optimization is fast lookup $O(1)$ for sizes < 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
     298(Type @char@ restricts the number of bucket sizes to 256.)
     299For $S$ > 64K, a binary search is used.
     300Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
     301\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
     302\item
     303bucket's free stack,
     304\item
     305bucket's away stack,
     306\item
heap's local pool,
\item
global pool,
\item
operating system (@sbrk@).
     312\end{enumerate}
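The quantizing step can be sketched as follows; the bucket sizes and names are illustrative assumptions, not llheap's actual 91 sizes:

```c
#include <assert.h>
#include <stddef.h>

// Illustrative sketch of size-to-bucket quantizing: an O(1) char-indexed
// lookup table for requests < 64K, with a binary search over the ordered
// bucket-size array for larger requests (up to the mmap crossover).
static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128, 65536, 1u << 22 };
enum { NBUCKETS = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };

static unsigned char lookup[65536];          // request size -> bucket index (char caps buckets at 256)

static unsigned bsearchBucket( size_t size ) {  // index of smallest bucket >= size
    unsigned lo = 0, hi = NBUCKETS - 1;
    while ( lo < hi ) {
        unsigned mid = (lo + hi) / 2;
        if ( bucketSizes[mid] < size ) lo = mid + 1; else hi = mid;
    }
    return lo;                               // caller ensures size <= largest bucket
}

void buildLookup( void ) {                   // one-time initialization
    for ( size_t s = 0; s < 65536; s += 1 ) lookup[s] = (unsigned char)bsearchBucket( s );
}

unsigned sizeToBucket( size_t size ) {
    if ( size < 65536 ) return lookup[size]; // O(1) fastpath
    return bsearchBucket( size );            // binary search for large sizes
}
```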
     313
     314\begin{figure}
     315\vspace*{-10pt}
     316\begin{algorithm}[H]
     317\small
     318\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
    141319\begin{algorithmic}[1]
    142320\State $\textit{O} \gets \text{NULL}$
\If {$S \geq \textit{mmap-threshold}$}
     322        \State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
     323\Else
     324        \State $\textit{B} \gets \text{smallest free-bucket} \geq S$
    145325        \If {$\textit{B's free-list is empty}$}
    146326                \If {$\textit{B's away-list is empty}$}
    147327                        \If {$\textit{heap's allocation buffer} < S$}
     328                                \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
    149329                        \EndIf
    150330                        \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
     
    157337        \EndIf
    158338        \State $\textit{O's owner} \gets \text{B}$
    161339\EndIf
    162340\State $\Return \textit{ O}$
     
    164342\end{algorithm}
    165343
     353\vspace*{-15pt}
     354\begin{algorithm}[H]
     355\small
     356\caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn}
     357\begin{algorithmic}[1]
     358\If {$\textit{A mapped allocation}$}
     359        \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
    173361\Else
    174362        \State $\text{B} \gets \textit{O's owner}$
     
    181369\end{algorithmic}
    182370\end{algorithm}
     392
     393\vspace*{-15pt}
     394\begin{algorithm}[H]
     395\small
     396\caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn}
     397\begin{algorithmic}[1]
     398\If {$\textit{A mapped allocation}$}
     399        \State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
     400\Else
     401        \State $\text{B} \gets \textit{O's owner}$
     402        \If {$\textit{B is thread-local heap's bucket}$}
     403                \State $\text{push A to B's free-list}$
     404        \Else
     405                \State $\text{C} \gets \textit{thread local heap's bucket with same size as B}$
     406                \State $\text{push A to C's free-list}$
     407        \EndIf
     408\EndIf
     409\end{algorithmic}
     410\end{algorithm}
     411\end{figure}
     412
     413Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership.
     414First, the address is divided into small (@sbrk@) or large (@mmap@).
     415For large allocations, the storage is unmapped back to the operating system.
     416For small allocations, the bucket associated with the request size is retrieved.
     417If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
     418If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack.
     419
     420Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the de-allocation (free) outline for an object at address $A$ without ownership.
The algorithm is the same as for ownership, except when the bucket is not local to the thread.
In that case, the deallocating thread's bucket with the same size as the owner's bucket is computed, and the allocation is pushed onto that bucket's free stack.
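The free-path decision can be sketched as follows; the types and names are illustrative assumptions, with the push operations elided:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

// Illustrative sketch of the free dispatch: each small allocation's header
// points at its owning bucket; comparing that bucket's heap with the
// executing KT's heap selects the unlocked local push (owner KT), the locked
// push onto the owner's away stack (with ownership), or an unlocked push onto
// the same-size local bucket (without ownership).
typedef struct Heap Heap;
typedef struct Bucket { Heap * heap; size_t blockSize; } Bucket;
struct Heap { int unused; };

typedef enum {
    LOCAL_FREE,          // unlocked push onto this KT's free stack
    AWAY_FREE,           // locked push onto the owner KT's away stack
    FOREIGN_LOCAL_FREE   // unlocked push onto this KT's same-size bucket
} FreePath;

FreePath free_path( const Heap * myHeap, const Bucket * owner, bool ownership ) {
    if ( owner->heap == myHeap ) return LOCAL_FREE;
    return ownership ? AWAY_FREE : FOREIGN_LOCAL_FREE;
}
```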
     423
     424Finally, the llheap design funnels \label{p:FunnelRoutine} all allocation/deallocation operations through routines @malloc@/@free@, which are the only routines to directly access and manage the internal data structures of the heap.
     425Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated.
     426This design simplifies heap-management code during development and maintenance.
     427
     428
     429\subsection{Alignment}
     430
     431All dynamic memory allocations must have a minimum storage alignment for the contained object(s).
Often the minimum memory alignment, M, is the bus width (32 or 64-bit), the largest register (double, long double), the largest atomic instruction (DCAS), or vector data (MMX).
     433In general, the minimum storage alignment is 8/16-byte boundary on 32/64-bit computers.
     434For consistency, the object header is normally aligned at this same boundary.
Larger alignments must be a power of 2, such as page alignment (4/8K).
     436Any alignment request, N, $\le$ the minimum alignment is handled as a normal allocation with minimal alignment.
     437
     438For alignments greater than the minimum, the obvious approach for aligning to address @A@ is: compute the next address that is a multiple of @N@ after the current end of the heap, @E@, plus room for the header before @A@ and the size of the allocation after @A@, moving the end of the heap to @E'@.
     439\begin{center}
     440\input{Alignment1}
     441\end{center}
     442The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations.
     443This approach is also valid within any sufficiently large free block, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
     444In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation.
     445However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object.
     446As well, it does not work for large allocations, where many memory allocators switch from program @sbrk@ to operating-system @mmap@.
     447The reason is that @mmap@ only starts on a page boundary, and it is difficult to reuse the storage before the alignment boundary for other requests.
     448Finally, this approach is incompatible with allocator designs that funnel allocation requests through @malloc@ as it directly manipulates management information within the allocator to optimize the space/time of a request.
     449
     450Instead, llheap alignment is accomplished by making a \emph{pessimistically} allocation request for sufficient storage to ensure that \emph{both} the alignment and size request are satisfied, \eg:
     451\begin{center}
     452\input{Alignment2}
     453\end{center}
     454The amount of storage necessary is @alignment - M + size@, which ensures there is an address, @A@, after the storage returned from @malloc@, @P@, that is a multiple of @alignment@ followed by sufficient storage for the data object.
     455The approach is pessimistic because if @P@ already has the correct alignment @N@, the initial allocation has already requested sufficient space to move to the next multiple of @N@.
     456For this special case, there is @alignment - M@ bytes of unused storage after the data object, which subsequently can be used by @realloc@.
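The pessimistic computation can be sketched as follows, assuming a minimum alignment @M@ of 16 bytes and a power-of-2 alignment; @memalign_sketch@ is an illustrative name, and the fake-header write is elided:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

enum { M = 16 };                             // assumed minimum malloc alignment

// Illustrative sketch of pessimistic alignment: over-allocate
// alignment - M + size bytes so some address A >= P is both a multiple of the
// (power-of-2) alignment and followed by at least size bytes.  Since P and
// alignment are multiples of M, A - P is at most alignment - M, which is
// exactly the extra storage requested.
void * memalign_sketch( size_t alignment, size_t size ) {
    if ( alignment <= M ) return malloc( size );         // normal allocation suffices
    char * P = malloc( alignment - M + size );           // pessimistic request
    if ( P == NULL ) return NULL;
    uintptr_t A = ((uintptr_t)P + alignment - 1) & ~(uintptr_t)(alignment - 1);
    // (a fake header would be written in the >= M bytes before A when A != P)
    return (void *)A;
}
```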
     457
     458Note, the address returned is @A@, which is subsequently returned to @free@.
     459However, to correctly free the allocated object, the value @P@ must be computable, since that is the value generated by @malloc@ and returned within @memalign@.
     460Hence, there must be a mechanism to detect when @P@ $\neq$ @A@ and how to compute @P@ from @A@.
     461
     462The llheap approach uses two headers:
the \emph{original} header associated with a memory allocation from @malloc@, and a \emph{fake} header within this storage before the alignment boundary @A@, which is returned from @memalign@, \eg:
     464\begin{center}
     465\input{Alignment2Impl}
     466\end{center}
     467Since @malloc@ has a minimum alignment of @M@, @P@ $\neq$ @A@ only holds for alignments of @M@ or greater.
     468When @P@ $\neq$ @A@, the minimum distance between @P@ and @A@ is @M@ bytes, due to the pessimistic storage-allocation.
     469Therefore, there is always room for an @M@-byte fake header before @A@.
     470
     471The fake header must supply an indicator to distinguish it from a normal header and the location of address @P@ generated by @malloc@.
This information is encoded as an offset from @A@ to @P@ and the initial alignment (discussed in \VRef{s:ReallocStickyProperties}).
     473To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data.
     474\begin{center}
     475\input{FakeHeader}
     476\end{center}
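The fake-header encoding can be sketched as follows; the field widths and names are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

// Illustrative sketch of the fake header: stored in the >= M bytes before the
// aligned address A, it records the offset back to the real malloc address P
// and the alignment, whose least-significant bit is set to distinguish a fake
// header from a normal one (alignments are powers of 2 >= the minimum
// alignment, so their low bit is otherwise always zero).
typedef struct FakeHeader {
    uint32_t alignment;                      // low bit set => this is a fake header
    uint32_t offset;                         // bytes from the fake header back to P
} FakeHeader;

void fake_encode( FakeHeader * fh, uint32_t alignment, uint32_t offset ) {
    fh->alignment = alignment | 1u;          // mark as fake
    fh->offset = offset;
}

int is_fake( const FakeHeader * fh ) { return fh->alignment & 1u; }

uint32_t fake_alignment( const FakeHeader * fh ) { return fh->alignment & ~1u; }
```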
     477
     478
     479\subsection{\lstinline{realloc} and Sticky Properties}
     480\label{s:ReallocStickyProperties}
     481
     482Allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data, rather than performing the following steps manually.
     483\begin{flushleft}
     484\begin{tabular}{ll}
     485\multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\
     486\begin{lstlisting}
     487T * naddr = realloc( oaddr, newSize );
     488
     489
     490
     491\end{lstlisting}
     492&
     493\begin{lstlisting}
     494T * naddr = (T *)malloc( newSize ); $\C[2.4in]{// new storage}$
     495memcpy( naddr, addr, oldSize );  $\C{// copy old bytes}$
     496free( addr );                           $\C{// free old storage}$
     497addr = naddr;                           $\C{// change pointer}\CRT$
     498\end{lstlisting}
     499\end{tabular}
     500\end{flushleft}
     501The realloc pattern leverages available storage at the end of an allocation due to bucket sizes, possibly eliminating a new allocation and copying.
However, this pattern is not used often enough to reduce storage-management costs.
     503In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc@, so even the initial @malloc@ can be a @realloc@ for consistency in the pattern.
     504
     505The hidden problem for this pattern is the effect of zero fill and alignment with respect to reallocation.
     506Are these properties transient or persistent (``sticky'')?
     507For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, respectively, what happens when those allocations are given to @realloc@ to change size.
That is, if @realloc@ logically extends storage into unused bucket space or allocates new storage to satisfy a size change, are the initial allocation properties preserved?
     509Currently, allocation properties are not preserved, so subsequent use of @realloc@ storage may cause inefficient execution or errors due to lack of zero fill or alignment.
     510This silent problem is unintuitive to programmers and difficult to locate because it is transient.
     511To prevent these problems, llheap preserves initial allocation properties for the lifetime of an allocation and the semantics of @realloc@ are augmented to preserve these properties, with additional query routines.
     512This change makes the realloc pattern efficient and safe.
     513
     514
     515\subsection{Header}
     516
To preserve allocation properties requires storing additional information with an allocation.
The only available location is the header, where \VRef[Figure]{f:llheapNormalHeader} shows the llheap storage layout.
The header has two data fields sized appropriately for 32/64-bit alignment requirements.
     520The first field is a union of three values:
     521\begin{description}
     522\item[bucket pointer]
is for allocated storage and points back to the bucket associated with this storage request (see \VRef[Figure]{f:llheapStructure} for the fields accessible in a bucket).
     524\item[mapped size]
     525is for mapped storage and is the storage size for use in unmapping.
     526\item[next free block]
     527is for free storage and is an intrusive pointer chaining same-size free blocks onto a bucket's free stack.
     528\end{description}
The second field remembers the request size versus the allocation (bucket) size, \eg a request for 42 bytes is rounded up to 64 bytes.
     530Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors.
     531
     532\begin{figure}
     533\centering
     534\input{Header}
     535\caption{llheap Normal Header}
     536\label{f:llheapNormalHeader}
     537\end{figure}
     538
     539The low-order 3-bits of the first field are \emph{unused} for any stored values, whereas the second field may use all of its bits.
     540The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
     541Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header.
     542This implementation allows a fast test if any of the lower 3-bits are on (@&@ and compare).
     543If no bits are on, it implies a basic allocation, which is handled quickly;
     544otherwise, the bits are analysed and appropriate actions are taken for the complex cases.
     545Since most allocations are basic, this implementation results in a significant performance gain along the allocation and free fastpath.
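The flag encoding can be sketched as follows; the names and flag assignments are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

// Illustrative sketch of the header-bit encoding: pointers stored in the
// first header field are at least 8-byte aligned, freeing the low-order
// 3 bits for the mapped / zero-filled / alignment flags.  A single mask test
// detects the fast case where no flag is set (a basic allocation).
enum { MAPPED = 1u, ZEROED = 2u, ALIGNED = 4u, FLAGS = 7u };

static double exampleBucket;                 // stand-in bucket object (8-byte aligned)

static inline uintptr_t withFlags( void * bucket, unsigned flags ) {
    return (uintptr_t)bucket | flags;        // fold flags into unused low bits
}
static inline void * bucketOf( uintptr_t field ) {
    return (void *)(field & ~(uintptr_t)FLAGS);  // strip flags to recover pointer
}
static inline int isBasic( uintptr_t field ) {
    return (field & FLAGS) == 0;             // fastpath test: no flag bits set
}
```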
     546
     547
     548\section{Statistics and Debugging}
     549
     550llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour.
     551Incrementing statistic counters must appear on the allocation fastpath.
     552As noted, any atomic operation along the fastpath produces a significant increase in allocation costs.
     553To make statistics performant enough for use on running systems, each heap has its own set of statistic counters, so heap operations do not require atomic operations.
     554
     555To locate all statistic counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
     556Note, the list is locked to prevent errors traversing an active list;
     557the statistics counters are not locked and can flicker during accumulation, which is not an issue with atomic read/write.
     558\VRef[Figure]{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
     559No other memory allocator studied provides as comprehensive statistical information.
     560Finally, these statistics were invaluable during the development of this thesis for debugging and verifying correctness, and hence, should be equally valuable to application developers.
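The statistics design can be sketched as follows; the names and the single counter are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <pthread.h>

// Illustrative sketch of contention-free statistics: each heap increments its
// own counters without atomic operations; the accumulation routine locks only
// the heap list and sums the (possibly flickering) per-heap counters.
typedef struct StatHeap {
    struct StatHeap * nextStats;             // intrusive link chaining all heaps
    unsigned long mallocCalls;               // one of many per-heap counters
} StatHeap;

static StatHeap * allHeaps = NULL;
static pthread_mutex_t listLock = PTHREAD_MUTEX_INITIALIZER;

void stat_register( StatHeap * h ) {         // at heap creation
    pthread_mutex_lock( &listLock );
    h->nextStats = allHeaps;
    allHeaps = h;
    pthread_mutex_unlock( &listLock );
}

void stat_malloc( StatHeap * h ) { h->mallocCalls += 1; }  // fastpath: no atomics

unsigned long stat_total( void ) {           // slowpath: lock list, sum counters
    unsigned long sum = 0;
    pthread_mutex_lock( &listLock );
    for ( StatHeap * h = allHeaps; h != NULL; h = h->nextStats ) sum += h->mallocCalls;
    pthread_mutex_unlock( &listLock );
    return sum;
}
```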
     561
     562\begin{figure}
     563\begin{lstlisting}
     564Heap statistics: (storage request / allocation)
     565  malloc >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes
     566  aalloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     567  calloc >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes
     568  memalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     569  amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     570  cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     571  resize >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     572  realloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
     573  free !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes
     574  away pulls 0; pushes 0; storage 0 / 0 bytes
     575  sbrk calls 1; storage 10,485,760 bytes
     576  mmap calls 10,000; storage 10,000 / 10,035 bytes
     577  munmap calls 10,000; storage 10,000 / 10,035 bytes
     578  threads started 4; exited 3
     579  heaps new 4; reused 0
     580\end{lstlisting}
     581\caption{Statistics Output}
     582\label{f:StatiticsOutput}
     583\end{figure}
     584
     585llheap can also be built with debug checking, which inserts many asserts along all allocation paths.
     586These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruptions because internal values (like header fields) are overwritten.
     587These checks are best effort as opposed to complete allocation checking as in @valgrind@.
     588Nevertheless, the checks detect many allocation problems.
There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations have program-lifetime duration, and hence, do not free their storage.
For example, @printf@ allocates a 1024-byte buffer on the first call and never deletes this buffer.
     591To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \VPageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference.
     592Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.
     593
Tests indicate only a 30\% performance degradation when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistics is mitigated by limited calls, often only one at the end of the program.
     595
     596
     597\section{User-level Threading Support}
     598\label{s:UserlevelThreadingSupport}
     599
The serially-reusable problem (see \VRef{s:AllocationFastpath}) occurs for kernel threads in the ``T:H, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
     601The solution is to prevent interrupts that can result in CPU or KT change during operations that are logically critical sections.
     602Locking these critical sections negates any attempt for a quick fastpath and results in high contention.
     603For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
Without time slicing, a user thread performing a long computation can prevent execution of (starve) other threads.
     605To prevent starvation for an allocation-active thread, \ie the time slice always triggers in an allocation critical-section for one thread, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
The rollforward flag is tested at the end of each allocation funnel routine (see \VPageref{p:FunnelRoutine}), and if set, it is reset and a voluntary yield (context switch) is performed to allow other threads to execute.
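The rollforward protocol can be simulated as follows; the signal handler is modelled as an ordinary function call, and all names are illustrative assumptions:

```c
#include <assert.h>
#include <stdbool.h>

// Illustrative simulation of the rollforward protocol: the time-slice
// handler, finding the KT inside an allocation critical-section, aborts the
// time slice and sets a thread-local flag; the funnel routine tests the flag
// on exit and performs the deferred yield.
static _Thread_local volatile bool inAllocation = false;
static _Thread_local volatile bool rollforward = false;
static int yields = 0;                       // stand-in for context switches

void timeslice_handler( void ) {             // normally a signal handler
    if ( inAllocation ) rollforward = true;  // abort time slice, defer the yield
    else yields += 1;                        // safe point: yield immediately
}

void malloc_funnel( void ) {                 // allocation funnel routine
    inAllocation = true;
    timeslice_handler();                     // simulate a time slice mid-allocation
    inAllocation = false;
    if ( rollforward ) {                     // deferred yield at funnel exit
        rollforward = false;
        yields += 1;
    }
}
```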
     607
llheap uses two techniques to detect when execution is in an allocation operation, or a routine called from an allocation operation, to abort any time slice during this period.
     609On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting thread-local flags so the signal handler aborts immediately.
     610On the fastpath, disabling/enabling interrupts is too expensive as accessing thread-local storage can be expensive and not thread-safe.
     611For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
     612Hence, there is a window between loading the thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.
     613
     614The fast technique defines a special code section and places all non-interruptible routines in this section.
     615The linker places all code in this section into a contiguous block of memory, but the order of routines within the block is unspecified.
Then, the signal handler compares the program counter at the point of interrupt with the start and end address of the non-interruptible section, and if executing within this section, aborts the time slice and sets the rollforward flag.
     617This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible section (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath.
     618Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible section.
     619This issue is mitigated by the llheap funnel design so only funnel routines and a few statistics routines are placed in the non-interruptible section and their assembler code examined.
     620These techniques are used in both the \uC and \CFA versions of llheap, where both of these systems have user-level threading.
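The section-based check can be sketched as follows for GCC on ELF platforms; the section name @ni_text@ and the linker-generated @__start_@/@__stop_@ symbols describe a typical toolchain, not llheap's actual implementation:

```c
#include <assert.h>
#include <stdint.h>

// Illustrative, GCC/ELF-specific sketch of the non-interruptible code
// section: routines are placed in a named section, and the signal handler
// compares the interrupted program counter with the section's
// linker-generated start/stop symbols.
extern const char __start_ni_text[], __stop_ni_text[];  // provided by the linker

__attribute__(( section( "ni_text" ) ))
void funnel_routine( void ) {}               // all funnel routines live here

int in_noninterruptible( uintptr_t pc ) {    // called from the signal handler
    return pc >= (uintptr_t)__start_ni_text && pc < (uintptr_t)__stop_ni_text;
}
```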
     621
     622
     623\section{Bootstrapping}
     624
     625There are problems bootstrapping a memory allocator.
     626\begin{enumerate}
     627\item
     628Programs can be statically or dynamically linked.
     629\item
     630The order the linker schedules startup code is poorly supported.
     631\item
     632Knowing a KT's start and end independently from the KT code is difficult.
     633\end{enumerate}
     634
     635For static linking, the allocator is loaded with the program.
     636Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program.
     637This approach allows allocator substitution by placing an allocation library before any other in the linked/load path.
     638
     639Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded.
     640As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including from the dynamic loader.
     641Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct.
     642Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations.
Testing showed up to a 5\% performance cost for dynamic linking over static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding.
     644
     645All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap.
The problem is getting initialization done before the first allocator call.
However, there does not seem to be a mechanism to tell either the static or dynamic loader to first perform initialization code before any calls to a loaded library.
     648As a result, calls to allocation routines occur without initialization.
     649To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
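The singleton check can be sketched as follows; the names are illustrative assumptions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

// Illustrative sketch of the bootstrap check: because neither loader runs
// allocator initialization before the first allocation call, every fastpath
// entry tests a once-flag (singleton pattern) and initializes on demand.
static bool heapInitialized = false;
static int initCount = 0;                    // demonstrates init happens exactly once

static void heapManagerCtor( void ) {        // build heap array, buckets, ...
    initCount += 1;
    heapInitialized = true;
}

void * malloc_entry( size_t size ) {         // stand-in for the malloc fastpath
    if ( ! heapInitialized ) heapManagerCtor();  // conditional bootstrap
    (void)size;
    return NULL;                             // ... normal allocation would follow
}
```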
     650
Two other important execution points are program startup and termination, which include prologue and epilogue code to bootstrap a program, of which programmers are unaware.
     652For example, dynamic-memory allocations before/after the application starts should not be considered in statistics because the application does not make these calls.
     653llheap establishes these two points using routines:
     654\begin{lstlisting}
     655__attribute__(( constructor( 100 ) )) static void startup( void ) {
     656        // clear statistic counters
     657        // reset allocUnfreed counter
     658}
     659__attribute__(( destructor( 100 ) )) static void shutdown( void ) {
     660        // sum allocUnfreed for all heaps
     661        // subtract global unfreed storage
     662        // if allocUnfreed > 0 then print warning message
     663}
     664\end{lstlisting}
     665which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority.
     666Application programs may only use global constructor/destructor priorities greater than 100.
     667Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the program terminates but before the program epilogue.
     668By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue.
     669
     670While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines.
     671However, it is essential for the allocator to know when each KT is started/terminated.
One approach is to create a thread-local object with a constructor/destructor, which is triggered after a new KT starts and before it terminates, respectively.
     673\begin{lstlisting}
     674struct ThreadManager {
     675        volatile bool pgm_thread;
     676        ThreadManager() {} // unusable
     677        ~ThreadManager() { if ( pgm_thread ) heapManagerDtor(); }
     678};
     679static thread_local ThreadManager threadManager;
     680\end{lstlisting}
     681Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor.
     682Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT.
     683Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to call @threadManager@'s constructor, otherwise its destructor is not called.
Now when a KT terminates, @~ThreadManager@ is called to chain its heap onto the global-heap free-stack, where @pgm_thread@ is set to true only for the program KT.
     685The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage.
     686
     687Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage.
     688This recursion is handled with another thread-local flag to prevent double initialization.
     689A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@.
     690In the meantime, the terminated heap has been put on the global-heap free-stack, and may be active by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away list using locking.
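The recursion guard can be sketched in plain C; the names and structure here are illustrative stand-ins, not llheap's actual code.

```c
#include <stdlib.h>
#include <stdbool.h>

// Illustrative sketch, not llheap's implementation: a thread-local flag
// stops the one-time initialization from recursing when the
// initialization itself calls the allocator.
static _Thread_local bool initializing = false;
static _Thread_local bool initialized = false;

static void * heap_alloc( size_t size ) {
	if ( ! initialized && ! initializing ) {
		initializing = true;			// block recursive entry
		void * setup = malloc( 1 );		// setup work that itself allocates
		free( setup );
		initialized = true;			// one-time initialization done
	}
	return malloc( size );				// normal allocation path
}
```

Without the @initializing@ flag, the allocation inside the setup path would re-enter the uninitialized allocator and recurse indefinitely.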

For user threading systems, the KTs are controlled by the runtime, and hence, start/end points are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
The following API was created to provide interaction between the language runtime and the allocator.
\begin{lstlisting}
void startTask();			$\C{// KT starts}$
void finishTask();			$\C{// KT ends}$
void startup();				$\C{// when application code starts}$
void shutdown();			$\C{// when application code ends}$
bool traceHeap();			$\C{// enable allocation/free printing for debugging}$
bool traceHeapOn();			$\C{// start printing allocation/free calls}$
bool traceHeapOff();			$\C{// stop printing allocation/free calls}$
\end{lstlisting}
This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Added Features and Methods}
The C dynamic-allocation API (see \VRef[Figure]{f:CDynamicAllocationAPI}) is neither orthogonal nor complete.
For example,
\begin{itemize}
\item
It is possible to zero fill or align an allocation but not both.
\item
It is \emph{only} possible to zero fill an array allocation.
\item
It is not possible to resize a memory allocation without data copying.
\item
@realloc@ does not preserve initial allocation properties.
\end{itemize}
As a result, programmers must implement these options themselves, which is error prone and often leads to blaming the entire programming language for a poor dynamic-allocation API.
Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation.

\begin{figure}
\begin{lstlisting}
void * malloc( size_t size );
void * calloc( size_t nmemb, size_t size );
void * realloc( void * ptr, size_t size );
void * reallocarray( void * ptr, size_t nmemb, size_t size );
void free( void * ptr );
void * memalign( size_t alignment, size_t size );
void * aligned_alloc( size_t alignment, size_t size );
int posix_memalign( void ** memptr, size_t alignment, size_t size );
void * valloc( size_t size );
void * pvalloc( size_t size );

struct mallinfo mallinfo( void );
int mallopt( int param, int val );
int malloc_trim( size_t pad );
size_t malloc_usable_size( void * ptr );
void malloc_stats( void );
int malloc_info( int options, FILE * fp );
\end{lstlisting}
\caption{C Dynamic-Allocation API}
\label{f:CDynamicAllocationAPI}
\end{figure}

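For example, obtaining storage that is both aligned and zero filled requires manually composing two standard calls; this sketch, using only standard POSIX/C routines, illustrates the burden placed on the programmer (the name @aligned_zeroed@ is a hypothetical helper, not part of any API).

```c
#include <stdlib.h>
#include <string.h>

// The standard API cannot align and zero fill in one call, so the
// programmer must compose the two steps manually.
void * aligned_zeroed( size_t alignment, size_t size ) {
	void * p = NULL;
	if ( posix_memalign( &p, alignment, size ) != 0 ) return NULL;
	memset( p, 0, size );				// zero fill done by hand
	return p;
}
```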
The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap.


\subsection{Out of Memory}

Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory;
hence the need to return an alternate value for a zero-sized allocation.
A different approach allowed by the C API is to abort a program when out of memory and return @nullptr@ for a zero-sized allocation.
In theory, notifying the programmer of memory failure allows recovery;
in practice, it is almost impossible to recover gracefully when out of memory.
Hence, the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen because no pseudo allocation is necessary.


\subsection{C Interface}

For C, it is possible to increase the functionality and orthogonality of the dynamic-memory API to make allocation better for programmers.

For existing C allocation routines:
\begin{itemize}
\item
@calloc@ sets the sticky zero-fill property.
\item
@memalign@, @aligned_alloc@, @posix_memalign@, @valloc@ and @pvalloc@ set the sticky alignment property.
\item
@realloc@ and @reallocarray@ preserve sticky properties.
\end{itemize}

The C dynamic-memory API is extended with the following routines:

\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
extends @calloc@ for allocating a dynamic array of objects without explicitly calculating the total array size, but \emph{without} zero-filling the memory.
@aalloc@ is significantly faster than @calloc@, which is the only alternative.

\noindent\textbf{Usage}
@aalloc@ takes two parameters.
\begin{itemize}
\item
@dim@: number of array objects
\item
@elemSize@: size of array object
\end{itemize}
It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.

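Semantically, @aalloc@ behaves like the following standard-C sketch; the name @aalloc_sketch@ is a hypothetical stand-in, not llheap's implementation.

```c
#include <stdlib.h>
#include <stdint.h>

// Semantic sketch of aalloc: calloc's two-parameter form but without
// the zero fill (hypothetical stand-in, not llheap's code).
void * aalloc_sketch( size_t dim, size_t elemSize ) {
	if ( dim == 0 || elemSize == 0 ) return NULL;	// zero-sized allocation
	if ( dim > SIZE_MAX / elemSize ) return NULL;	// overflow check
	return malloc( dim * elemSize );		// no zero fill, unlike calloc
}
```

Skipping the zero fill is what makes @aalloc@ faster than @calloc@ for arrays that are immediately overwritten.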
\paragraph{\lstinline{void * resize( void * oaddr, size_t size )}}
extends @realloc@ for resizing an existing allocation \emph{without} copying previous data into the new allocation or preserving sticky properties.
@resize@ is significantly faster than @realloc@, which is the only alternative.

\noindent\textbf{Usage}
@resize@ takes two parameters.
\begin{itemize}
\item
@oaddr@: address to be resized
\item
@size@: new allocation size (smaller or larger than previous)
\end{itemize}
It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero.

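The semantics of @resize@ can be approximated in standard C as follows; this is a hypothetical sketch, not llheap's implementation, which reuses storage where possible rather than freeing and reallocating.

```c
#include <stdlib.h>

// Semantic sketch of resize (hypothetical stand-in): the old data need
// not be preserved, so no copy is required, unlike realloc.
void * resize_sketch( void * oaddr, size_t size ) {
	if ( size == 0 ) { free( oaddr ); return NULL; }	// zero size frees
	free( oaddr );					// old contents discarded
	return malloc( size );				// fresh storage of the new size
}
```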
\paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
extends @aalloc@ and @memalign@ for allocating an aligned dynamic array of objects.
Sets the sticky alignment property.

\noindent\textbf{Usage}
@amemalign@ takes three parameters.
\begin{itemize}
\item
@alignment@: alignment requirement
\item
@dim@: number of array objects
\item
@elemSize@: size of array object
\end{itemize}
It returns the address of the aligned dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.

\paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
extends @amemalign@ with zero fill and has the same usage as @amemalign@.
Sets the sticky zero-fill and alignment properties.
It returns the address of the aligned, zero-filled dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.

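What @cmemalign@ provides implicitly can be composed from standard calls, as in this hypothetical sketch (the name @cmemalign_sketch@ is illustrative, not llheap's code).

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

// Aligned + zero-filled array allocation composed from standard calls
// (hypothetical stand-in for what cmemalign does in one step).
void * cmemalign_sketch( size_t alignment, size_t dim, size_t elemSize ) {
	if ( dim == 0 || elemSize == 0 ) return NULL;
	if ( dim > SIZE_MAX / elemSize ) return NULL;	// overflow check
	size_t size = dim * elemSize;
	void * p = NULL;
	if ( posix_memalign( &p, alignment, size ) != 0 ) return NULL;
	memset( p, 0, size );				// explicit zero fill
	return p;
}
```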
\paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
returns the alignment of the dynamic object for use in aligning similar allocations.

\noindent\textbf{Usage}
@malloc_alignment@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns the alignment of the given object, where objects not allocated with alignment return the minimal allocation alignment.

\paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
returns true if the object has the zero-fill sticky property for use in zero filling similar allocations.

\noindent\textbf{Usage}
@malloc_zero_fill@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns true if the zero-fill sticky property is set and false otherwise.

\paragraph{\lstinline{size_t malloc_size( void * addr )}}
returns the request size of the dynamic object (updated when an object is resized) for use in similar allocations.
See also @malloc_usable_size@.

\noindent\textbf{Usage}
@malloc_size@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns the request size of the given dynamic object and zero on failure.

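The distinction from @malloc_usable_size@ can be seen with glibc, where the reported usable capacity may exceed the requested size that @malloc_size@ reports.

```c
#include <stdlib.h>
#include <malloc.h>

// glibc's malloc_usable_size reports the block's usable capacity, which
// may exceed the 10-byte request that llheap's malloc_size would report.
size_t usable_of_ten( void ) {
	void * p = malloc( 10 );		// request 10 bytes
	size_t usable = malloc_usable_size( p );
	free( p );
	return usable;				// at least 10
}
```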
\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
extends the default @realloc@ (FIX ME: cite default @realloc@) with an alignment parameter, so an old object can be reallocated to a new size and realigned to a new alignment while preserving its data.

\noindent\textbf{Usage}
This @realloc@ takes three parameters, adding @nalign@ to the default @realloc@.
\begin{itemize}
\item
@oaddr@: address to be reallocated
\item
@nalign@: new alignment requirement
\item
@size@: new allocation size
\end{itemize}
It returns the address of the old or new storage with the specified size and alignment, preserving the data of the old object, or @NULL@ on failure.

\subsection{\CFA Malloc Interface}
We added some routines to the malloc interface of \CFA.
These routines can only be used in \CFA, not in the standalone llheap allocator, because they rely on features provided by \CFA but not by C, making the allocator even more usable for programmers.
\CFA can determine the return type of a call to the allocator, so these routines drop the object-size parameter because the allocator calculates the object size from the return type.
\paragraph{\lstinline{T * malloc( void )}}
is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc), taking no parameters because the allocation size is inferred from the return type.

\noindent\textbf{Usage}
This @malloc@ takes no parameters.
It returns a dynamic object of the size of type @T@ or @NULL@ on failure.

\paragraph{\lstinline{T * aalloc( size_t dim )}}
is a simplified polymorphic form of the above @aalloc@, taking one parameter instead of two.

\noindent\textbf{Usage}
@aalloc@ takes one parameter.
\begin{itemize}
\item
@dim@: number of array objects
\end{itemize}
It returns a dynamic array with the capacity for @dim@ objects of the size of type @T@ or @NULL@ on failure.

\paragraph{\lstinline{T * calloc( size_t dim )}}
is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc), taking one parameter instead of two.

\noindent\textbf{Usage}
This @calloc@ takes one parameter.
\begin{itemize}
\item
@dim@: number of array objects
\end{itemize}
It returns a zero-filled dynamic array with the capacity for @dim@ objects of the size of type @T@ or @NULL@ on failure.

\paragraph{\lstinline{T * resize( T * ptr, size_t size )}}
is a simplified polymorphic form of the above @resize@ with alignment, taking two parameters instead of three because the alignment is inferred from the return type.

\noindent\textbf{Usage}
This @resize@ takes two parameters.
\begin{itemize}
\item
@ptr@: address to be resized
\item
@size@: new allocation size
\end{itemize}
It returns the address of the old or new storage with the specified new size, aligned to the alignment of type @T@, or @NULL@ on failure.

\paragraph{\lstinline{T * realloc( T * ptr, size_t size )}}
is a simplified polymorphic form of the above @realloc@ with alignment, taking two parameters instead of three because the alignment is inferred from the return type.

\noindent\textbf{Usage}
This @realloc@ takes two parameters.
\begin{itemize}
\item
@ptr@: address to be reallocated
\item
@size@: new allocation size
\end{itemize}
It returns the address of the old or new storage with the specified new size, preserving the data of the old object and aligned to the alignment of type @T@, or @NULL@ on failure.

\paragraph{\lstinline{T * memalign( size_t align )}}
is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign), taking one parameter instead of two.

\noindent\textbf{Usage}
@memalign@ takes one parameter.
\begin{itemize}
\item
@align@: alignment requirement
\end{itemize}
It returns a dynamic object of the size of type @T@, aligned to @align@, or @NULL@ on failure.

\paragraph{\lstinline{T * amemalign( size_t align, size_t dim )}}
is a simplified polymorphic form of the above @amemalign@, taking two parameters instead of three.

\noindent\textbf{Usage}
@amemalign@ takes two parameters.
\begin{itemize}
\item
@align@: alignment requirement
\item
@dim@: number of array objects
\end{itemize}
It returns a dynamic array with the capacity for @dim@ objects of the size of type @T@, aligned to @align@, or @NULL@ on failure.

\paragraph{\lstinline{T * cmemalign( size_t align, size_t dim )}}
is a simplified polymorphic form of the above @cmemalign@, taking two parameters instead of three.

\noindent\textbf{Usage}
@cmemalign@ takes two parameters.
\begin{itemize}
\item
@align@: alignment requirement
\item
@dim@: number of array objects
\end{itemize}
It returns a zero-filled dynamic array with the capacity for @dim@ objects of the size of type @T@, aligned to @align@, or @NULL@ on failure.

\paragraph{\lstinline{T * aligned_alloc( size_t align )}}
is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@), taking one parameter instead of two.

\noindent\textbf{Usage}
This @aligned_alloc@ takes one parameter.
\begin{itemize}
\item
@align@: alignment requirement
\end{itemize}
It returns a dynamic object of the size of type @T@, aligned to @align@, or @NULL@ on failure.

\paragraph{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@), taking two parameters instead of three.

\noindent\textbf{Usage}
This @posix_memalign@ takes two parameters.
\begin{itemize}
\item
@ptr@: variable address to store the address of the allocated object
\item
@align@: alignment requirement
\end{itemize}
It stores the address of a dynamic object of the size of type @T@, aligned to @align@, into @ptr@, and returns a nonzero error code on failure.

\paragraph{\lstinline{T * valloc( void )}}
is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@), taking no parameters instead of one.

\noindent\textbf{Usage}
@valloc@ takes no parameters.
It returns a dynamic object of the size of type @T@, aligned to the page size, or @NULL@ on failure.

\paragraph{\lstinline{T * pvalloc( void )}}
is a simplified polymorphic form of the default @pvalloc@, taking no parameters instead of one.

\noindent\textbf{Usage}
@pvalloc@ takes no parameters.
It returns a dynamic object whose size is the size of type @T@ rounded up to a multiple of the page size, aligned to the page size, or @NULL@ on failure.

\subsection{Alloc Interface}
In addition to improving the allocator interface for C, we added a new @alloc@ interface in \CFA that increases the usability of dynamic memory allocation.
This interface helps programmers in three major ways.
\begin{itemize}
\item
Routine name: the @alloc@ interface frees programmers from remembering different routine names for different kinds of dynamic allocation.
\item
Parameter positions: the @alloc@ interface frees programmers from remembering parameter positions in calls to allocation routines.
\item
Object size: the @alloc@ interface does not require the programmer to specify the object size, as \CFA allows the allocator to determine the object size from the return type of the @alloc@ call.
\end{itemize}

The @alloc@ interface uses polymorphism, backtick routines (FIX ME: cite backtick), and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic-memory allocation interface.
The new interface has just one routine name, @alloc@, that can perform a wide range of dynamic allocations.
The parameters use backtick functions to provide a named-parameter-like feature, so programmers do not have to remember parameter positions in an @alloc@ call, except for the position of the dimension (@dim@) parameter.

\subsection{Routine: \lstinline{T * alloc( ... )}}
A call to @alloc@ without any parameters returns one dynamically allocated object of the size of type @T@.
Only the dimension (@dim@) parameter for array allocation has a fixed position in the @alloc@ routine: to allocate an array of objects, the required number of array members must be given as the first parameter.
The @alloc@ routine accepts six kinds of arguments, and different combinations of these parameters perform different kinds of allocation.
Any combination of parameters can be used together except @`realloc@ and @`resize@, which must not be used simultaneously in one call because that creates ambiguity about whether to reallocate or resize the currently allocated object; if both are passed, the latter takes effect or unexpected results may be produced.

\paragraph{Dim}
This is the only parameter in the @alloc@ routine that has a fixed position, and the only parameter that does not use a backtick function.
For an array allocation of objects of type @T@, it must be passed as the first argument to the @alloc@ call.
It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc).
This parameter must be of type @size_t@.

Example: @int * a = alloc( 5 )@
This call returns a dynamic array of five integers.

\paragraph{Align}
This parameter is position-free and uses the backtick routine @`align@.
The parameter passed with @`align@ must be of type @size_t@.
If the alignment parameter is not a power of two or is less than the default alignment of the allocator (found using the routine @libAlign@ in \CFA), the passed alignment is rejected and the default alignment is used.

Example: @int * b = alloc( 5, 64`align )@
This call returns a dynamic array of five integers, aligned to 64 bytes.

\paragraph{Fill}
This parameter is position-free and uses the backtick routine @`fill@.
In the case of @realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
Three types of arguments can be passed with @`fill@.
\begin{itemize}
\item
@char@: fills the whole dynamic allocation with the given character, repeated to the end of the requested allocation.
\item
Object of the returned type: fills the whole dynamic allocation with the given object, repeated to the end of the requested allocation.
\item
Dynamic object of the returned type: fills the dynamic allocation by copying from the given dynamic object.
In this case, the allocated memory is not filled repeatedly; copying stops at the end of the object passed to @`fill@ or the end of the requested allocation, whichever comes first.
\end{itemize}

Example: @int * b = alloc( 5, 'a'`fill )@
This call returns a dynamic array of five integers, filled with the character @'a'@ repeated to the end of the requested allocation.

Example: @int * b = alloc( 5, 4`fill )@
This call returns a dynamic array of five integers, filled with the integer 4 repeated to the end of the requested allocation.

Example: @int * b = alloc( 5, a`fill )@, where @a@ is a pointer to int
This call returns a dynamic array of five integers, with the data from @a@ copied non-repeatedly until the end of @a@ or the end of the new allocation is reached.

    525 \paragraph{Resize}
    526 This parameter is position-free and uses the backtick routine @`resize@. It denotes the old dynamic object (@oaddr@) that the programmer wants to
    527 \begin{itemize}
    528 \item
    529 resize to a new size.
    530 \item
    531 realign to a new alignment.
    532 \item
    533 fill with something.
    534 \end{itemize}
    535 The data in the old dynamic object is not preserved in the new object. The type of the object passed to @`resize@ and the return type of the @alloc@ call may differ.
    536 
    537 Example: @int * b = alloc( 5, a`resize )@
    538 This call resizes object @a@ to a dynamic array that can contain 5 integers.
    539 
    540 Example: @int * b = alloc( 5, a`resize, 32`align )@
    541 This call resizes object @a@ to a dynamic array that can contain 5 integers, aligned on a 32-byte boundary.
    542 
    543 Example: @int * b = alloc( 5, a`resize, 32`align, 2`fill )@
    544 This call resizes object @a@ to a dynamic array that can contain 5 integers, aligned on a 32-byte boundary and filled with the value 2.
    545 
    546 \paragraph{Realloc}
    547 This parameter is position-free and uses the backtick routine @`realloc@. It denotes the old dynamic object (@oaddr@) that the programmer wants to
    548 \begin{itemize}
    549 \item
    550 realloc to a new size.
    551 \item
    552 realign to a new alignment
    553 \item
    554 fill with something.
    555 \end{itemize}
    556 The data in the old dynamic object is preserved in the new object. The type of the object passed to @`realloc@ and the return type of the @alloc@ call must be the same.
    557 
    558 Example: @int * b = alloc( 5, a`realloc )@
    559 This call reallocates object @a@ to a dynamic array that can contain 5 integers.
    560 
    561 Example: @int * b = alloc( 5, a`realloc, 32`align )@
    562 This call reallocates object @a@ to a dynamic array that can contain 5 integers, aligned on a 32-byte boundary.
    563 
    564 Example: @int * b = alloc( 5, a`realloc, 32`align, 2`fill )@
    565 This call reallocates object @a@ to a dynamic array that can contain 5 integers, aligned on a 32-byte boundary. The extra space after copying the data of @a@ to the returned object is filled with the value 2.
     855\begin{itemize}
     856\item
     857@addr@: address of an allocated object.
     858\end{itemize}
     859It returns the request size or zero if @addr@ is @NULL@.
     860
     861\paragraph{\lstinline{int malloc_stats_fd( int fd )}}
     862changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@).
     863
     864\noindent\textbf{Usage}
      865@malloc_stats_fd@ takes one parameter.
     866\begin{itemize}
     867\item
      868@fd@: file descriptor.
     869\end{itemize}
     870It returns the previous file descriptor.
     871
     872\paragraph{\lstinline{size_t malloc_expansion()}}
     873\label{p:malloc_expansion}
      874sets the amount (bytes) to extend the heap when there is insufficient free storage to service an allocation request.
     875It returns the heap extension size used throughout a program, \ie called once at heap initialization.
     876
     877\paragraph{\lstinline{size_t malloc_mmap_start()}}
      878sets the crossover between allocations occurring in the @sbrk@ area or separately mapped.
     879It returns the crossover point used throughout a program, \ie called once at heap initialization.
     880
     881\paragraph{\lstinline{size_t malloc_unfreed()}}
     882\label{p:malloc_unfreed}
      883sets the amount subtracted to adjust for unfreed program storage (debug only).
      884It returns the new subtraction amount and is called by @malloc_stats@.
     885
     886
     887\subsection{\CC Interface}
     888
     889The following extensions take advantage of overload polymorphism in the \CC type-system.
     890
     891\paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
     892extends @resize@ with an alignment re\-quirement.
     893
     894\noindent\textbf{Usage}
     895takes three parameters.
     896\begin{itemize}
     897\item
     898@oaddr@: address to be resized
     899\item
     900@nalign@: alignment requirement
     901\item
     902@size@: new allocation size (smaller or larger than previous)
     903\end{itemize}
     904It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.
     905
     906\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
     907extends @realloc@ with an alignment re\-quirement and has the same usage as aligned @resize@.
     908
     909
     910\subsection{\CFA Interface}
     911
     912The following extensions take advantage of overload polymorphism in the \CFA type-system.
     913The key safety advantage of the \CFA type system is using the return type to select overloads;
     914hence, a polymorphic routine knows the returned type and its size.
     915This capability is used to remove the object size parameter and correctly cast the return storage to match the result type.
     916For example, the following is the \CFA wrapper for C @malloc@:
     917\begin{cfa}
     918forall( T & | sized(T) ) {
     919        T * malloc( void ) {
     920                if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation
     921                else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation
     922        } // malloc
      } // distribution
      923\end{cfa}
     924and is used as follows:
     925\begin{lstlisting}
     926int * i = malloc();
     927double * d = malloc();
     928struct Spinlock { ... } __attribute__(( aligned(128) ));
     929Spinlock * sl = malloc();
     930\end{lstlisting}
     931where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type.
     932This interface removes many of the common allocation errors in C programs.
      933\VRef[Figure]{f:CFADynamicAllocationAPI} shows the \CFA wrappers for the equivalent C/\CC allocation routines with the same semantic behaviour.
     934
     935\begin{figure}
     936\begin{lstlisting}
     937T * malloc( void );
     938T * aalloc( size_t dim );
     939T * calloc( size_t dim );
     940T * resize( T * ptr, size_t size );
     941T * realloc( T * ptr, size_t size );
     942T * memalign( size_t align );
     943T * amemalign( size_t align, size_t dim );
     944T * cmemalign( size_t align, size_t dim  );
     945T * aligned_alloc( size_t align );
     946int posix_memalign( T ** ptr, size_t align );
     947T * valloc( void );
     948T * pvalloc( void );
     949\end{lstlisting}
     950\caption{\CFA C-Style Dynamic-Allocation API}
     951\label{f:CFADynamicAllocationAPI}
     952\end{figure}
     953
     954In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase orthogonality and usability of dynamic-memory allocation.
     955This interface helps programmers in three ways.
     956\begin{itemize}
     957\item
     958naming: \CFA regular and @ttype@ polymorphism is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations.
     959\item
      960named arguments: individual allocation properties are specified using postfix function call, so programmers do not have to remember parameter positions in allocation calls.
     961\item
     962object size: like the \CFA C-style interface, programmers do not have to specify object size or cast allocation results.
     963\end{itemize}
     964Note, postfix function call is an alternative call syntax, using backtick @`@, where the argument appears before the function name, \eg
     965\begin{cfa}
     966duration ?@`@h( int h );                // ? denote the position of the function operand
     967duration ?@`@m( int m );
     968duration ?@`@s( int s );
     969duration dur = 3@`@h + 42@`@m + 17@`@s;
     970\end{cfa}
     971@ttype@ polymorphism is similar to \CC variadic templates.
     972
     973\paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}}
      974is overloaded with a variable number of specific allocation routines, or an integer dimension parameter followed by a variable number of specific allocation routines.
     975A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
     976A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
     977The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations.
     978The only restriction is for properties @realloc@ and @resize@, which cannot be combined.
     979
     980The allocation property functions are:
     981\subparagraph{\lstinline{T_align ?`align( size_t alignment )}}
     982to align the allocation.
     983The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two, \eg:
     984\begin{cfa}
     985int * i0 = alloc( @4096`align@ );  sout | i0 | nl;
     986int * i1 = alloc( 3, @4096`align@ );  sout | i1; for (i; 3 ) sout | &i1[i]; sout | nl;
     987
     9880x555555572000
     9890x555555574000 0x555555574000 0x555555574004 0x555555574008
     990\end{cfa}
     991returns a dynamic object and object array aligned on a 4096-byte boundary.
     992
     993\subparagraph{\lstinline{S_fill(T) ?`fill ( /* various types */ )}}
     994to initialize storage.
     995There are three ways to fill storage:
     996\begin{enumerate}
     997\item
     998A char fills each byte of each object.
     999\item
     1000An object of the returned type fills each object.
     1001\item
     1002An object array pointer fills some or all of the corresponding object array.
     1003\end{enumerate}
     1004For example:
     1005\begin{cfa}[numbers=left]
     1006int * i0 = alloc( @0n`fill@ );  sout | *i0 | nl;  // disambiguate 0
     1007int * i1 = alloc( @5`fill@ );  sout | *i1 | nl;
     1008int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl;
     1009int * i3 = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | i3[i]; sout | nl;
     1010int * i4 = alloc( 5, @0xdeadbeefN`fill@ );  for ( i; 5 ) sout | hex( i4[i] ); sout | nl;
     1011int * i5 = alloc( 5, @i3`fill@ );  for ( i; 5 ) sout | i5[i]; sout | nl;
     1012int * i6 = alloc( 5, @[i3, 3]`fill@ );  for ( i; 5 ) sout | i6[i]; sout | nl;
     1013\end{cfa}
     1014\begin{lstlisting}[numbers=left]
     10150
     10165
     10170xfefefefe
     10185 5 5 5 5
     10190xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef
     10205 5 5 5 5
     10215 5 5 -555819298 -555819298  // two undefined values
     1022\end{lstlisting}
      1023Examples 1 to 3 fill an object with a value or characters.
      1024Examples 4 to 7 fill an array of objects with values, another array, or part of an array.
     1025
     1026\subparagraph{\lstinline{S_resize(T) ?`resize( void * oaddr )}}
     1027used to resize, realign, and fill, where the old object data is not copied to the new object.
     1028The old object type may be different from the new object type, since the values are not used.
     1029For example:
     1030\begin{cfa}[numbers=left]
     1031int * i = alloc( @5`fill@ );  sout | i | *i;
     1032i = alloc( @i`resize@, @256`align@, @7`fill@ );  sout | i | *i;
     1033double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ );  sout | d | *d;
     1034\end{cfa}
     1035\begin{lstlisting}[numbers=left]
     10360x55555556d5c0 5
     10370x555555570000 7
     10380x555555571000 13.5
     1039\end{lstlisting}
     1040Examples 2 to 3 change the alignment, fill, and size for the initial storage of @i@.
     1041
     1042\begin{cfa}[numbers=left]
     1043int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
     1044ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
      1045sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;
     1046ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
     1047\end{cfa}
     1048\begin{lstlisting}[numbers=left]
     10495 5 5 5 5
     10507 7 7 7 7 7 7 7 7 7
     10510x55555556d560 0x555555571a00 13 13 13 13 13
     10520x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2
     1053\end{lstlisting}
     1054Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
     1055
     1056\subparagraph{\lstinline{S_realloc(T) ?`realloc( T * a ))}}
     1057used to resize, realign, and fill, where the old object data is copied to the new object.
      1058The old object type must be the same as the new object type, since the values are used.
     1059Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter.
     1060For example:
     1061\begin{cfa}[numbers=left]
     1062int * i = alloc( @5`fill@ );  sout | i | *i;
     1063i = alloc( @i`realloc@, @256`align@ );  sout | i | *i;
     1064i = alloc( @i`realloc@, @4096`align@, @13`fill@ );  sout | i | *i;
     1065\end{cfa}
     1066\begin{lstlisting}[numbers=left]
     10670x55555556d5c0 5
     10680x555555570000 5
     10690x555555571000 5
     1070\end{lstlisting}
     1071Examples 2 to 3 change the alignment for the initial storage of @i@.
     1072The @13`fill@ for example 3 does nothing because no extra space is added.
     1073
     1074\begin{cfa}[numbers=left]
     1075int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
     1076ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
      1077sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;
     1078ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
     1079\end{cfa}
     1080\begin{lstlisting}[numbers=left]
     10815 5 5 5 5
     10825 5 5 5 5 7 7 7 7 7
     10830x55555556c560 0x555555570a00 5
     10840x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2
     1085\end{lstlisting}
     1086Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
     1087The @13`fill@ for example 3 does nothing because no extra space is added.
     1088
     1089These \CFA allocation features are used extensively in the development of the \CFA runtime.
  • doc/theses/mubeen_zulfiqar_MMath/background.tex

    rba897d21 r2e9b59b  
    3434\VRef[Figure]{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
    3535The \newterm{management data} is a data structure located at a known memory address and contains all information necessary to manage the storage data.
    36 The management data starts with fixed-sized information in the static-data memory that flows into the dynamic-allocation memory.
     36The management data starts with fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
    3737The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
    38 Allocated objects (white) are variable sized, and allocated and maintained by the program;
     38Allocated objects (light grey) are variable sized, and allocated and maintained by the program;
    3939\ie only the program knows the location of allocated storage, not the memory allocator.
    4040\begin{figure}[h]
     
    4444\label{f:AllocatorComponents}
    4545\end{figure}
    46 Freed objects (light grey) are memory deallocated by the program, which are linked into one or more lists facilitating easy location for new allocations.
     46Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
    4747Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
    4848Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
     
    5454The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object.
    5555An object may be preceded by padding to ensure proper alignment.
    56 Some algorithms quantize allocation requests into distinct sizes resulting in additional spacing after objects less than the quantized value.
     56Some algorithms quantize allocation requests into distinct sizes, called \newterm{buckets}, resulting in additional spacing after objects less than the quantized value.
      57(Note, the buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management-area, where each bucket is a top pointer to the freed objects of that size.)
    5758When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
    5859A free object also contains management data, \eg size, chaining, etc.
     
    8182Fragmentation is memory requested from the operating system but not used by the program;
    8283hence, allocated objects are not fragmentation.
    83 \VRef[Figure]{f:InternalExternalFragmentation}) shows fragmentation is divided into two forms: internal or external.
     84\VRef[Figure]{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.
    8485
    8586\begin{figure}
     
    9697An allocator should strive to keep internal management information to a minimum.
    9798
    98 \newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes freed objects, all external management data, and reserved memory.
     99\newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
    99100This memory is problematic in two ways: heap blowup and highly fragmented memory.
    100101\newterm{Heap blowup} occurs when memory freed by the program is not reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
     
    125126\end{figure}
    126127
    127 For a single-threaded memory allocator, three basic approaches for controlling fragmentation have been identified~\cite{Johnstone99}.
     128For a single-threaded memory allocator, three basic approaches for controlling fragmentation are identified~\cite{Johnstone99}.
    128129The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
    129130Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
     
    132133
    133134The second approach is a \newterm{segregated} or \newterm{binning algorithm} with a set of lists for different sized freed objects.
    134 When an object is allocated, the requested size is rounded up to the nearest bin-size, possibly with spacing after the object.
     135When an object is allocated, the requested size is rounded up to the nearest bin-size, often leading to spacing after the object.
    135136A binning algorithm is fast at finding free memory of the appropriate size and allocating it, since the first free object on the free list is used.
    136137The fewer bin-sizes, the fewer lists need to be searched and maintained;
     
    158159Temporal locality commonly occurs during an iterative computation with a fix set of disjoint variables, while spatial locality commonly occurs when traversing an array.
    159160
    160 Hardware takes advantage of temporal and spatial locality through multiple levels of caching (\ie memory hierarchy).
     161Hardware takes advantage of temporal and spatial locality through multiple levels of caching, \ie memory hierarchy.
    161162When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
    162163For example, entire cache lines are transferred between memory and cache and entire virtual-memory pages are transferred between disk and memory.
     
    171172
    172173There are a number of ways a memory allocator can degrade locality by increasing the working set.
    173 For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request (\eg sequential-fit algorithm).
     174For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request, \eg sequential-fit algorithm.
    174175If there are a (large) number of objects accessed in very different areas of memory, the allocator may perturb the program's memory hierarchy causing multiple cache or page misses~\cite{Grunwald93}.
    175176Another way locality can be degraded is by spatially separating related data.
     
    181182
    182183A multi-threaded memory-allocator does not run any threads itself, but is used by a multi-threaded program.
    183 In addition to single-threaded design issues of locality and fragmentation, a multi-threaded allocator may be simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup.
     184In addition to single-threaded design issues of fragmentation and locality, a multi-threaded allocator is simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup.
    184185
    185186
     
    192193Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
    193194Contention can be reduced in a number of ways:
     195\begin{itemize}[itemsep=0pt]
     196\item
    194197using multiple fine-grained locks versus a single lock, spreading the contention across a number of locks;
     198\item
    195199using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff;
     200\item
    196201using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
    197 However, all of these approaches have degenerate cases where contention occurs.
     202\end{itemize}
     203However, all of these approaches have degenerate cases where program contention is high, which occurs outside of the allocator.
    198204
    199205
     
    275281\label{s:MultipleHeaps}
    276282
    277 A single-threaded allocator has at most one thread and heap, while a multi-threaded allocator has potentially multiple threads and heaps.
     283A multi-threaded allocator has potentially multiple threads and heaps.
    278284The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
    279285The spectrum ranges from multiple threads using a single heap, denoted as T:1 (see \VRef[Figure]{f:SingleHeap}), to multiple threads sharing multiple heaps, denoted as T:H (see \VRef[Figure]{f:SharedHeaps}), to one thread per heap, denoted as 1:1 (see \VRef[Figure]{f:PerThreadHeap}), which is almost back to a single-threaded allocator.
     
    339345An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
    340346Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
    341 Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area.
     347Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the operating system.
    342348
    343349\begin{figure}
     
    368374
    369375
    370 \paragraph{1:1 model (thread heaps)} where each thread has its own heap, which eliminates most contention and locking because threads seldom accesses another thread's heap (see ownership in \VRef{s:Ownership}).
     376\paragraph{1:1 model (thread heaps)} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see ownership in \VRef{s:Ownership}).
    371377An additional benefit of thread heaps is improved locality due to better memory layout.
    372378As each thread only allocates from its heap, all objects for a thread are consolidated in the storage area for that heap, better utilizing each CPUs cache and accessing fewer pages.
     
    380386Second is to place the thread heap on a list of available heaps and reuse it for a new thread in the future.
    381387Destroying the thread heap immediately may reduce external fragmentation sooner, since all free objects are freed to the global heap and may be reused by other threads.
    382 Alternatively, reusing thread heaps may improve performance if the inheriting thread makes similar allocation requests as the thread that previously held the thread heap.
      388Alternatively, reusing thread heaps may improve performance if the inheriting thread makes similar allocation requests as the thread that previously held the thread heap because any unfreed storage is immediately accessible.
    383389
    384390
     
    388394However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the operating system, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
    389395It is difficult to retain this goal, if the user-threading model is directly involved with the heap model.
    390 \VRef[Figure]{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model provided by the language runtime.
     396\VRef[Figure]{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.
    391397Hence, a user thread allocates/deallocates from/to the heap of the kernel thread on which it is currently executing.
    392398
     
    400406Adopting this model results in a subtle problem with shared heaps.
    401407With kernel threading, an operation that is started by a kernel thread is always completed by that thread.
    402 For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted.
    403 Any correctness locking associated with the shared heap is preserved across preemption.
     408For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
    404409
    405410However, this correctness property is not preserved for user-level threading.
     
    409414However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is rare (10--100 milliseconds).
    410415Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
    411 Occasionally ignoring a preemption should be benign.
     416Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in both short and long term starvation.
    412417
    413418
     
    430435
    431436\newterm{Ownership} defines which heap an object is returned-to on deallocation.
    432 If a thread returns an object to the heap it was originally allocated from, the heap has ownership of its objects.
    433 Alternatively, a thread can return an object to the heap it is currently allocating from, which can be any heap accessible during a thread's lifetime.
     437If a thread returns an object to the heap it was originally allocated from, a heap has ownership of its objects.
     438Alternatively, a thread can return an object to the heap it is currently associated with, which can be any heap accessible during a thread's lifetime.
    434439\VRef[Figure]{f:HeapsOwnership} shows an example of multiple heaps (minus the global heap) with and without ownership.
    435440Again, the arrows indicate the direction memory conceptually moves for each kind of operation.
     
    539544Only with the 1:1 model and ownership is active and passive false-sharing avoided (see \VRef{s:Ownership}).
    540545Passive false-sharing may still occur, if delayed ownership is used.
     546Finally, a completely free container can become reserved storage and be reset to allocate objects of a new size or freed to the global heap.
    541547
    542548\begin{figure}
     
    553559\caption{Free-list Structure with Container Ownership}
    554560\end{figure}
    555 
    556 A fragmented heap has multiple containers that may be partially or completely free.
    557 A completely free container can become reserved storage and be reset to allocate objects of a new size.
    558 When a heap reaches a threshold of free objects, it moves some free storage to the global heap for reuse to prevent heap blowup.
    559 Without ownership, when a heap frees objects to the global heap, individual objects must be passed, and placed on the global-heap's free-list.
    560 Containers cannot be freed to the global heap unless completely free because
    561561
    562562When a container changes ownership, the ownership of all objects within it change as well.
     
    569569Note, once the object is freed by Task$_1$, no more false sharing can occur until the container changes ownership again.
    570570To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
    571 One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area.
     571One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the operating system.
    572572
    573573\begin{figure}
     
    700700\end{figure}
    701701
    702 As mentioned, an implementation may have only one heap deal with the global heap, so the other heap can be simplified.
     702As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
    703703For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
    704704To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
     
    721721An allocation buffer is reserved memory (see~\VRef{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
    722722That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
    723 Both any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
     723Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
    724724The allocation buffer reduces contention and the number of global/operating-system calls.
    725725For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
    726726
    727 Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts.
     727Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts (simple bump allocation).
    728728Furthermore, to prevent heap blowup, objects should be reused before allocating a new allocation buffer.
    729 Thus, allocation buffers are often allocated more frequently at program/thread start, and then their use often diminishes.
     729Thus, allocation buffers are often allocated more frequently at program/thread start, and then allocations often diminish.
    730730
    731731Using an allocation buffer with a thread heap avoids active false-sharing, since all objects in the allocation buffer are allocated to the same thread.
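The bump-allocation pattern behind an allocation buffer can be sketched in a few lines of C. The names @AllocBuffer@ and @buffer_alloc@ are illustrative only (not part of any existing allocator API), @align@ must be a power of two, and a real heap must also replenish the buffer when it is exhausted:

```c
// Sketch of bump allocation from a reserved allocation buffer.
// AllocBuffer/buffer_alloc are hypothetical names; align must be a power of 2.
#include <stddef.h>
#include <stdint.h>

typedef struct AllocBuffer {
	char * next;                       // first unallocated byte
	char * end;                        // one past the end of the buffer
} AllocBuffer;

static void * buffer_alloc( AllocBuffer * b, size_t size, size_t align ) {
	// round the bump pointer up to the requested alignment
	uintptr_t p = ((uintptr_t)b->next + align - 1) & ~(uintptr_t)(align - 1);
	if ( p + size > (uintptr_t)b->end ) return NULL;  // exhausted => refill buffer
	b->next = (char *)(p + size);      // bump past the new object
	return (void *)p;
}
```

Allocation is a pointer increment plus an alignment round-up, which is why buffers make the burst of allocations at thread start-up cheap.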
     
    746746\label{s:LockFreeOperations}
    747747
    748 A lock-free algorithm guarantees safe concurrent-access to a data structure, so that at least one thread can make progress in the system, but an individual task has no bound to execution, and hence, may starve~\cite[pp.~745--746]{Herlihy93}.
    749 % A wait-free algorithm puts a finite bound on the number of steps any thread takes to complete an operation, so an individual task cannot starve
     748A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual task has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}.
     749(A \newterm{wait-free algorithm} puts a bound on the number of steps any thread takes to complete an operation to prevent starvation.)
    750750Lock-free operations can be used in an allocator to reduce or eliminate the use of locks.
    751 Locks are a problem for high contention or if the thread holding the lock is preempted and other threads attempt to use that lock.
    752 With respect to the heap, these situations are unlikely unless all threads makes extremely high use of dynamic-memory allocation, which can be an indication of poor design.
     751While locks and lock-free data-structures often have equal performance, lock-free has the advantage of not holding a lock across preemption so other threads can continue to make progress.
     752With respect to the heap, these situations are unlikely unless all threads make extremely high use of dynamic-memory allocation, which can be an indication of poor design.
    753753Nevertheless, lock-free algorithms can reduce the number of context switches, since a thread does not yield/block while waiting for a lock;
    754 on the other hand, a thread may busy-wait for an unbounded period.
     754on the other hand, a thread may busy-wait for an unbounded period holding a processor.
    755755Finally, lock-free implementations have greater complexity and hardware dependency.
    756756Lock-free algorithms can be applied most easily to simple free-lists, \eg remote free-list, to allow lock-free insertion and removal from the head of a stack.
    757 Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is more complex.
     757Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly more complex.
    758758Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator.
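The simple free-list case above (lock-free insertion by remote threads, removal by the owner) can be sketched with C11 atomics. The names below are hypothetical; because only the owning heap removes elements, and it detaches the entire list with a single exchange, the ABA problem of general lock-free stacks does not arise:

```c
// Sketch of a lock-free remote free-list (hypothetical names), C11 atomics.
// Remote threads push freed objects with a CAS loop; the owning heap detaches
// the whole list at once with an exchange, which sidesteps ABA entirely.
#include <stdatomic.h>
#include <stddef.h>

typedef struct FreeNode {
	struct FreeNode * next;
} FreeNode;

typedef struct RemoteFreeList {
	_Atomic(FreeNode *) head;
} RemoteFreeList;

// called by any thread returning storage it does not own
static void remote_push( RemoteFreeList * rfl, FreeNode * n ) {
	FreeNode * old = atomic_load_explicit( &rfl->head, memory_order_relaxed );
	do {
		n->next = old;                 // old is refreshed by a failed CAS
	} while ( ! atomic_compare_exchange_weak_explicit(
				&rfl->head, &old, n,
				memory_order_release, memory_order_relaxed ) );
}

// called by the owning heap: detach the entire list in one step
static FreeNode * remote_drain( RemoteFreeList * rfl ) {
	return atomic_exchange_explicit( &rfl->head, NULL, memory_order_acquire );
}
```

The push succeeds without blocking even if a competing pusher is preempted mid-operation, which is the progress property the text describes.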
    759759
  • doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig

    rba897d21 r2e9b59b  
    [xfig geometry diff elided: the figure's points, polylines, and boxes are repositioned throughout; only the text labels change meaningfully.
     Old labels: lock, size, free, free list, N kernel-thread buckets, heap$_1$, heap$_2$, local pool (2x), global pool (sbrk), heap, free pool.
     New labels: lock, size, free, free list, kernel threads, heap, N$\\times$S$_1$, N$\\times$S$_2$, N$\\times$S$_t$, global pool (sbrk).]
  • doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig

    rba897d21 r2e9b59b  
    [xfig geometry diff elided: the figure's points, polylines, and boxes are repositioned throughout; only the text labels change meaningfully.
     Old labels: lock, size, free, free list, kernel threads, heap, N$\\times$S$_1$, N$\\times$S$_2$, N$\\times$S$_t$, global pool (sbrk).
     New labels: lock, size, free, free list, H heap buckets, heap$_1$, heap$_2$, local pool (2x), local pools, heaps, global pool (sbrk).]
  • doc/theses/mubeen_zulfiqar_MMath/intro.tex

    rba897d21 r2e9b59b  
    4848Attempts have been made to perform quasi garbage collection in C/\CC~\cite{Boehm88}, but it is a compromise.
    4949This thesis only examines dynamic memory-management with \emph{explicit} deallocation.
    50 While garbage collection and compaction are not part this work, many of the results are applicable to the allocation phase in any memory-management approach.
     50While garbage collection and compaction are not part of this work, many of its results are applicable to the allocation phase in any memory-management approach.
    5151
    5252Most programs use a general-purpose allocator, often the one provided implicitly by the programming-language's runtime.
     
    6565\begin{enumerate}[leftmargin=*]
    6666\item
    67 Implementation of a new stand-lone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
    68 
    69 \item
    70 Adopt returning of @nullptr@ for a zero-sized allocation, rather than an actual memory address, both of which can be passed to @free@.
    71 
    72 \item
    73 Extended the standard C heap functionality by preserving with each allocation its original request size versus the amount allocated, if an allocation is zero fill, and the allocation alignment.
    74 
    75 \item
    76 Use the zero fill and alignment as \emph{sticky} properties for @realloc@, to realign existing storage, or preserve existing zero-fill and alignment when storage is copied.
     67Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
     68
     69\item
     70Adopt @nullptr@ return for a zero-sized allocation, rather than an actual memory address; either result can be passed to @free@.
     71
     72\item
     73Extend the standard C heap functionality by preserving with each allocation:
     74\begin{itemize}[itemsep=0pt]
     75\item
     76its request size plus the amount allocated,
     77\item
     78whether an allocation is zero fill,
     79\item
     80and allocation alignment.
     81\end{itemize}
     82
     83\item
     84Use the preserved zero fill and alignment as \emph{sticky} properties for @realloc@ to zero-fill and align when storage is extended or copied.
    7785Without this extension, it is unsafe to @realloc@ storage initially allocated with zero-fill/alignment as these properties are not preserved when copying.
    7886This silent generation of a problem is unintuitive to programmers and difficult to locate because it is transient.
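The hazard is easy to reproduce with the standard C API, where the bytes added by @realloc@ are indeterminate even when the original block came from @calloc@, so the caller must re-zero manually. @grow_zeroed@ below is a hypothetical helper illustrating the workaround, not part of the proposed interface:

```c
// Why zero fill is not "sticky" with standard C realloc: bytes added when a
// block grows are uninitialized, even if the block originally came from calloc.
// grow_zeroed is a hypothetical helper that restores the zero fill manually.
#include <stdlib.h>
#include <string.h>

char * grow_zeroed( char * p, size_t oldsize, size_t newsize ) {
	char * q = realloc( p, newsize );              // extension bytes NOT zeroed
	if ( q != NULL && newsize > oldsize )
		memset( q + oldsize, 0, newsize - oldsize );  // re-establish zero fill
	return q;
}
```

With a sticky zero-fill property, the allocator performs this re-zeroing itself, so the caller cannot forget it.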
     
    8694@resize( oaddr, alignment, size )@ re-purpose an old allocation with new alignment but \emph{without} preserving fill.
    8795\item
    88 @realloc( oaddr, alignment, size )@ same as previous @realloc@ but adding or changing alignment.
     96@realloc( oaddr, alignment, size )@ same as @realloc@ but adding or changing alignment.
    8997\item
    9098@aalloc( dim, elemSize )@ same as @calloc@ except memory is \emph{not} zero filled.
     
    96104
    97105\item
    98 Provide additional heap wrapper functions in \CFA to provide a complete orthogonal set of allocation operations and properties.
     106Provide additional heap wrapper functions in \CFA creating an orthogonal set of allocation operations and properties.
    99107
    100108\item
     
    109117@malloc_size( addr )@ returns the size of the memory allocation pointed-to by @addr@.
    110118\item
    111 @malloc_usable_size( addr )@ returns the usable size of the memory pointed-to by @addr@, i.e., the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
     119@malloc_usable_size( addr )@ returns the usable (total) size of the memory pointed-to by @addr@, i.e., the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
    112120\end{itemize}
    113121
     
    116124
    117125\item
    118 Provide complete, fast, and contention-free allocation statistics to help understand program behaviour:
     126Provide complete, fast, and contention-free allocation statistics to help understand allocation behaviour:
    119127\begin{itemize}
    120128\item
  • doc/theses/mubeen_zulfiqar_MMath/performance.tex

    rba897d21 r2e9b59b  
    11\chapter{Performance}
     2\label{c:Performance}
    23
    34\section{Machine Specification}
  • doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib

    rba897d21 r2e9b59b  
    124124}
    125125
    126 @misc{nedmalloc,
    127     author      = {Niall Douglas},
    128     title       = {nedmalloc version 1.06 Beta},
    129     month       = jan,
    130     year        = 2010,
    131     note        = {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}},
     126@misc{ptmalloc2,
     127    author      = {Wolfram Gloger},
     128    title       = {ptmalloc version 2},
     129    month       = jun,
     130    year        = 2006,
     131    note        = {\href{http://www.malloc.de/malloc/ptmalloc2-current.tar.gz}{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}},
     132}
     133
     134@misc{GNUallocAPI,
     135    author      = {GNU},
     136    title       = {Summary of malloc-Related Functions},
     137    year        = 2020,
     138    note        = {\href{https://www.gnu.org/software/libc/manual/html\_node/Summary-of-Malloc.html}{https://www.gnu.org/\-software/\-libc/\-manual/\-html\_node/\-Summary-of-Malloc.html}},
     139}
     140
     141@misc{SeriallyReusable,
     142    author      = {IBM},
     143    title       = {Serially reusable programs},
     144    month       = mar,
     145    year        = 2021,
     146    note        = {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-serially-reusable-programs}},
     147}
     148
     149@misc{librseq,
     150    author      = {Mathieu Desnoyers},
     151    title       = {Library for Restartable Sequences},
     152    month       = mar,
     153    year        = 2022,
     154    note        = {\href{https://github.com/compudj/librseq}{https://github.com/compudj/librseq}},
    132155}
    133156
  • doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex

    rba897d21 r2e9b59b  
    6060% For hyperlinked PDF, suitable for viewing on a computer, use this:
    6161\documentclass[letterpaper,12pt,titlepage,oneside,final]{book}
      62\usepackage[T1]{fontenc}        % T1 (Latin-1) => 8-bit, 256-character encoding, => | not dash, <> not Spanish question marks
    6263
    6364% For PDF, suitable for double-sided printing, change the PrintVersion variable below to "true" and use this \documentclass line instead of the one above:
     
    9495% Use the "hyperref" package
    9596% N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE
    96 \usepackage[pagebackref=true]{hyperref} % with basic options
     97\usepackage{url}
     98\usepackage[dvips,pagebackref=true]{hyperref} % with basic options
    9799%\usepackage[pdftex,pagebackref=true]{hyperref}
    98100% N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing.
     
    113115    citecolor=blue,        % color of links to bibliography
    114116    filecolor=magenta,      % color of file links
    115     urlcolor=blue           % color of external links
     117    urlcolor=blue,           % color of external links
     118    breaklinks=true
    116119}
    117120\ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
     
    122125    urlcolor=black
    123126}}{} % end of ifthenelse (no else)
     127%\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,pagebackref=true,breaklinks=true,colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}
     128\usepackage{breakurl}
     129\urlstyle{sf}
    124130
    125131%\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package
     
    171177\input{common}
    172178%\usepackageinput{common}
    173 \CFAStyle                                               % CFA code-style for all languages
     179\CFAStyle                                               % CFA code-style
     180\lstset{language=CFA}                                   % default language
    174181\lstset{basicstyle=\linespread{0.9}\sf}                 % CFA typewriter font
    175182\newcommand{\uC}{$\mu$\CC}