Changes in / [e88c2fb:05e33f5]


Location:
doc/theses/mubeen_zulfiqar_MMath
Files:
1 deleted
7 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    re88c2fb r05e33f5  
    11\chapter{Allocator}
    22
This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.
    5 
    6 
    7 \section{llheap}
    8 
    9 The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
    10 (Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
    11 A direct consequence of this objective is very simple or no storage coalescing;
    12 hence, llheap's design is willing to use more storage to lower latency.
This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
    14 Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}).
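As a quick illustration of that interface, the following sketch exercises the standard entry points a drop-in allocator replacement must honour. It uses only standard C prototypes, not llheap internals; the function name @check_alloc_api@ is illustrative.

```c
// Exercise the standard C allocation interface that a drop-in allocator
// such as llheap must provide: malloc, realloc, calloc, aligned_alloc, free.
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

// Returns 0 when the basic API contracts hold.
int check_alloc_api( void ) {
    char * p = malloc( 100 );                    // uninitialized storage
    if ( p == NULL ) return 1;
    memset( p, 'x', 100 );
    p = realloc( p, 200 );                       // grow, possibly moving
    if ( p == NULL || p[99] != 'x' ) return 2;   // old contents preserved

    double * v = calloc( 10, sizeof( double ) ); // zero-filled storage
    if ( v == NULL || v[5] != 0.0 ) return 3;

    void * a = aligned_alloc( 64, 128 );         // C11: size multiple of align
    if ( a == NULL || (uintptr_t)a % 64 != 0 ) return 4;

    free( a );  free( v );  free( p );
    return 0;
}
```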
    15 
    16 % The objective of llheap's new design was to fulfill following requirements:
    17 % \begin{itemize}
    18 % \item It should be concurrent and thread-safe for multi-threaded programs.
    19 % \item It should avoid global locks, on resources shared across all threads, as much as possible.
    20 % \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
    21 % \item It should be a lightweight memory allocator.
    22 % \end{itemize}
     3\section{uHeap}
     4uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements).
     5
The objective of uHeap's new design was to fulfill the following requirements:
     7\begin{itemize}
     8\item It should be concurrent and thread-safe for multi-threaded programs.
     9\item It should avoid global locks, on resources shared across all threads, as much as possible.
\item Its performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
     11\item It should be a lightweight memory allocator.
     12\end{itemize}
    2313
    2414%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    2515
    26 \section{Design Choices}
    27 
    28 llheap's design was reviewed and changed multiple times throughout the thesis.
    29 Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
Note, a few simple tests for each design choice were compared with the current best allocators to determine the viability of the design.
    31 
    32 
    33 \subsection{Allocation Fastpath}
    34 
    35 These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
    36 \paragraph{T:1 model}
    37 \VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs.
    38 This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes.
When KTs $\le$ N, the common bucket sizes are uncontended;
when KTs $>$ N, the free buckets are contended and latency increases significantly.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
Therefore, while threads contend for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
however, picking N is workload specific.
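A minimal sketch of this T:1 fastpath, with hypothetical names (@Bucket@, @NSHARDS@, @bucket_index@, @shard_pop@) rather than llheap's identifiers, shows why the lock is unavoidable even on the uncontended path:

```c
// T:1 sketch: each small size-class is replicated NSHARDS times and a
// kernel thread hashes to one shard. Assumes glibc, where a zero-initialized
// static pthread_mutex_t behaves like PTHREAD_MUTEX_INITIALIZER.
#include <pthread.h>
#include <stddef.h>

#define NSHARDS 4                          // N: shards per common size class
#define NBUCKETS 8                         // size classes 16, 32, 64, ...

typedef struct Free { struct Free * next; } Free;
typedef struct { pthread_mutex_t lock; Free * list; } Bucket;

static Bucket buckets[NBUCKETS][NSHARDS];  // static zero-init (see above)

// Quantize a request size to its power-of-two size-class index.
size_t bucket_index( size_t size ) {
    size_t i = 0, cls = 16;
    while ( cls < size ) { cls <<= 1; i += 1; }
    return i;
}

// Fastpath pop: even uncontended, the lock must be taken, because when
// KTs > NSHARDS several threads can map to the same shard.
void * shard_pop( size_t size, unsigned kt_id ) {
    Bucket * b = &buckets[ bucket_index( size ) ][ kt_id % NSHARDS ];
    pthread_mutex_lock( &b->lock );
    Free * f = b->list;
    if ( f != NULL ) b->list = f->next;
    pthread_mutex_unlock( &b->lock );
    return f;                              // NULL => take the slow path
}
```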
     16\section{Design choices for uHeap}
uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, the following designs of uHeapLmm were proposed:
     18
     19\paragraph{Design 1: Centralized}
     20One heap, but lower bucket sizes are N-shared across KTs.
     21This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
When KTs $\le$ N, the important bucket sizes are uncontended.
When KTs $>$ N, the free buckets are contended.
     24Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
     25\begin{cquote}
     26\centering
     27\input{AllocDS2}
     28\end{cquote}
     29Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
When no thread is assigned a bucket number, its free storage is unavailable. All KTs contend for one lock on @sbrk@ for their initial allocations (before free lists are populated).
     31
     32\paragraph{Design 2: Decentralized N Heaps}
     33Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
     34Kernel threads (KT) are assigned to the N heaps.
When KTs $\le$ N, the heaps are uncontended.
When KTs $>$ N, the heaps are contended.
By adjusting N, this approach reduces storage at the cost of speed due to contention.
In all cases, a thread acquires/releases a lock, contended or uncontended.
     39\begin{cquote}
     40\centering
     41\input{AllocDS1}
     42\end{cquote}
     43Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT.
     44
     45\paragraph{Design 3: Decentralized Per-thread Heaps}
Design 3 is similar to Design 2, but instead of an M:N model, it uses a 1:1 model: instead of N heaps shared among M KTs, Design 3 has one heap for each KT.
     47Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area.
Each KT will have its own exclusive thread-local heap. The heap is uncontended between KTs regardless of how many KTs have been created.
     49Operations on @sbrk@ area will still be protected by locks.
     50%\begin{cquote}
     51%\centering
     52%\input{AllocDS3} FIXME add figs
     53%\end{cquote}
Problems: the heap cannot be destroyed when a KT exits because dynamic objects have ownership, \ie a freed object is returned to the heap that created it, so all dynamic objects point back to their owner heap. If a thread A creates an object O, passes it to another thread B, and then A exits, when B frees O, O must return to A's heap; hence, A's heap must be preserved for the lifetime of the program, as objects allocated by A may still be in use by other threads. Also, we need to know when a KT is created and destroyed to know when to create/destroy its heap.
     55
     56\paragraph{Design 4: Decentralized Per-CPU Heaps}
     57Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU.
     58Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area.
Each CPU will have its own exclusive CPU-local heap. When the program performs a dynamic memory operation, it is serviced by the heap of the CPU on which the thread is currently running.
Just like Design 3 (FIXME cite), the heap is uncontended between KTs regardless of how many KTs have been created.
     61Operations on @sbrk@ area will still be protected by locks.
To deal with preemption during a dynamic memory operation, librseq (FIXME cite) will be used to make sure the whole dynamic memory operation completes on one CPU. librseq's restartable sequences make it possible to re-run a critical section and undo its writes if a preemption occurs during the critical section's execution.
     63%\begin{cquote}
     64%\centering
     65%\input{AllocDS4} FIXME add figs
     66%\end{cquote}
     67
Problems: this approach was slower than the per-thread model. Also, librseq does not provide restartable sequences that detect preemptions in a user-level threading system, which is important because CFA (FIXME cite) has its own threading system that we want to support.
     69
Out of the four designs, Design 3 was chosen for the following reasons.
     71\begin{itemize}
     72\item
Decentralized designs are better than the centralized design because their concurrency is better across all bucket sizes: Design 1 shards only a few buckets of selected sizes, while the other designs shard all the buckets. Decentralized designs shard the whole heap, which has all the buckets, in addition to sharding the sbrk area. So Design 1 was eliminated.
     74\item
Design 2 was eliminated because it has a possibility of contention when KTs > N, while Designs 3 and 4 have no contention in any scenario.
     76\item
Design 4 was eliminated because it was slower than Design 3 and provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety, which has some cost. Design 4 was already slower than Design 3; adding the cost of interruption handling on top would have made it even slower.
     78\end{itemize}
     79
     80
     81\subsection{Advantages of distributed design}
     82
The distributed design of uHeap supports concurrent operation in multi-threaded applications.
     84
     85Some key benefits of the distributed design of uHeap are as follows:
     86
     87\begin{itemize}
     88\item
The bump allocation is concurrent, as memory taken from sbrk is sharded across all heaps as a bump-allocation reserve. The call to sbrk is protected by locks, but bump allocation (on memory taken from sbrk) is not contended once the sbrk call has returned.
     90\item
     91Low or almost no contention on heap resources.
     92\item
     93It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
     94\item
Distributed design avoids unnecessary locks on resources shared across all KTs.
     96\end{itemize}
     97
     98%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     99
     100\section{uHeap Structure}
     101
As described in (FIXME cite 2.4), uHeap uses the following features of multi-threaded memory allocators.
     103\begin{itemize}
     104\item
     105uHeap has multiple heaps without a global heap and uses 1:1 model. (FIXME cite 2.5 1:1 model)
     106\item
     107uHeap uses object ownership. (FIXME cite 2.5.2)
     108\item
uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead, each dynamic object allocated by uHeap has a header that contains bookkeeping information.
     110\item
     111Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using sbrk() call. (FIXME cite 2.7)
     112\item
Unless a heap is freeing an object owned by another thread's heap, or is using the sbrk() system call, uHeap is mostly lock-free, which eliminates most of the contention on shared resources. (FIXME cite 2.8)
     114\end{itemize}
     115
As uHeap uses a per-thread heap model to reduce contention on heap resources, we manage a list of heaps (heap list) that can be used by threads. The list is empty at the start of the program. When a kernel thread (KT) is created, the heap list is checked: if non-empty, a heap is removed from the list and given to the new KT to use exclusively; if empty, a new heap object is created in dynamic memory and given to the new KT to use exclusively. When a KT exits, its heap is not destroyed but is instead placed on the heap list, ready to be reused by new KTs.
     117
This reduces the memory footprint, as the objects on the free lists of an exited KT can be reused by a new KT. Also, all heaps created during the lifetime of the program are preserved until the end of the program. uHeap uses object ownership, where an object is freed to the free buckets of the heap that allocated it. Even after a KT A has exited, its heap must be preserved, as there might be objects in use by other threads that were initially allocated by A and then passed to other threads.
    44119
    45120\begin{figure}
    46121\centering
    47 \input{AllocDS1}
    48 \caption{T:1 with Shared Buckets}
    49 \label{f:T1SharedBuckets}
     122\includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
     123\caption{HeapStructure}
     124\label{fig:heapStructureFig}
    50125\end{figure}
    51126
    52 Problems:
    53 \begin{itemize}
    54 \item
    55 Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
    56 \item
    57 When no thread is assigned a bucket number, its free storage is unavailable.
    58 \item
    59 All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
    60 \end{itemize}
    61 Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
    62 
    63 \paragraph{T:H model}
    64 \VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs.
    65 A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
When KT $\le$ N, the heaps are uncontended;
when KTs $>$ N, the heaps are contended.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
    69 By adjusting N upwards, this approach reduces contention but increases storage (time versus space);
    70 however, picking N is workload specific.
    71 
    72 \begin{figure}
    73 \centering
    74 \input{AllocDS2}
    75 \caption{T:H with Shared Heaps}
    76 \label{f:THSharedHeaps}
    77 \end{figure}
    78 
    79 Problems:
    80 \begin{itemize}
    81 \item
    82 Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
    83 \item
    84 When no thread is assigned to a heap, its free storage is unavailable.
    85 \item
    86 Ownership issues arise (see \VRef{s:Ownership}).
    87 \item
    88 All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
    89 \end{itemize}
    90 Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
    91 
    92 \paragraph{T:H model, H = number of CPUs}
    93 This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
    94 (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
    95 Hence, each CPU logically has its own private heap and local pool.
    96 A memory operation is serviced from the heap associated with the CPU executing the operation.
    97 This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs.
    99 
    100 Problems:
    101 \begin{itemize}
    102 \item
    103 Need to know when a CPU is added/removed from the @taskset@.
    104 \item
    105 Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
    106 \item
    107 Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
    108 \begin{quote}
    109 A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
    110 \end{quote}
    111 If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
    112 Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
    113 Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.
    114 
    115 \noindent
    116 Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
    117 \end{itemize}
    118 Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs.
    119 Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
    120 For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
    121 The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
    122 Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
    123 Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
    124 A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
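The CPU-lookup half of this rejected design can be sketched with glibc's @sched_getcpu@; the @Heap@ type and @cpu_heaps@ array are illustrative, and the comment marks exactly the preemption window that restartable sequences must close:

```c
// Per-CPU heap selection sketch. This alone is UNSAFE: between reading the
// CPU id and finishing the heap operation, the thread can be preempted and
// another thread scheduled on the same CPU -- the serially-reusable problem
// -- which is why librseq's restartable sequences were needed.
#define _GNU_SOURCE
#include <sched.h>
#include <stddef.h>

#define MAXCPUS 1024
typedef struct Heap { int placeholder; } Heap;   // illustrative heap type
static Heap cpu_heaps[MAXCPUS];                  // one logical heap per CPU

Heap * current_cpu_heap( void ) {
    int cpu = sched_getcpu();                    // fast, but immediately stale
    if ( cpu < 0 || cpu >= MAXCPUS ) cpu = 0;    // fallback on error
    // UNSAFE WINDOW: a preemption here can let another thread begin an
    // operation on the same per-CPU heap concurrently.
    return &cpu_heaps[cpu];
}
```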
    125 
    126 
    127 \paragraph{1:1 model}
    128 This design is the T:H model with T = H, where there is one thread-local heap for each KT.
    129 (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
    130 Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
Heaps are uncontended for a KT's memory operations on its own heap (modulo operations on the global pool and ownership).
    132 
    133 Problems:
    134 \begin{itemize}
    135 \item
Need to know when a KT starts/terminates to create/delete its heap.
    137 
    138 \noindent
    139 It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
    140 \item
    141 There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
    142 
    143 \noindent
    144 The classic solution only deletes a heap after all referents are returned, which is complex.
    145 The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
    146 If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
    147 To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
    148 In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
    149 \item
    150 There can be significant external fragmentation as the number of KTs increases.
    151 
    152 \noindent
    153 In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
Since the number of CPUs is relatively small, $<$~1024, and a heap is relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
    155 \item
    156 There is the same serially-reusable problem with UTs migrating across KTs.
    157 \end{itemize}
    158 Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
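One hedged way to obtain the start/terminate hooks mentioned above is a pthread key destructor; the @Heap@ type and free-list names here are illustrative, not llheap's code:

```c
// 1:1 lifecycle sketch: each KT lazily acquires a heap on first use, and a
// pthread key destructor returns the heap to a free stack at KT termination
// (heaps persist for the program's lifetime, so late remote frees are safe).
#include <pthread.h>
#include <stdlib.h>

typedef struct Heap { struct Heap * next; /* buckets, reserve, ... */ } Heap;

static Heap * free_heaps = NULL;             // stack: hot heaps reused first
static pthread_mutex_t heaps_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_key_t heap_key;
static pthread_once_t heap_once = PTHREAD_ONCE_INIT;

static void heap_release( void * h ) {       // runs at KT termination
    pthread_mutex_lock( &heaps_lock );
    ((Heap *)h)->next = free_heaps;          // recycle, never destroy
    free_heaps = (Heap *)h;
    pthread_mutex_unlock( &heaps_lock );
}
static void make_key( void ) { pthread_key_create( &heap_key, heap_release ); }

Heap * my_heap( void ) {                     // fastpath: thread-local lookup
    pthread_once( &heap_once, make_key );
    Heap * h = pthread_getspecific( heap_key );
    if ( h == NULL ) {                       // first allocation by this KT
        pthread_mutex_lock( &heaps_lock );
        if ( free_heaps != NULL ) { h = free_heaps; free_heaps = h->next; }
        pthread_mutex_unlock( &heaps_lock );
        if ( h == NULL ) h = calloc( 1, sizeof( Heap ) );
        pthread_setspecific( heap_key, h );
    }
    return h;
}
```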
    159 
    160 
    161 \vspace{5pt}
    162 \noindent
    163 The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
    165 For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
    166 However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
    167 More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
This leaves the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
    170 
    171 
    172 % \begin{itemize}
    173 % \item
    174 % A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
    175 % \item
    176 % Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
    177 % \item
    178 % Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
    179 % that  because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
    180 % \end{itemize}
    181 % Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
    182 
    183 % \subsection{Advantages of distributed design}
    184 %
    185 % The distributed design of llheap is concurrent to work in multi-threaded applications.
    186 % Some key benefits of the distributed design of llheap are as follows:
    187 % \begin{itemize}
    188 % \item
    189 % The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
    190 % \item
    191 % Low or almost no contention on heap resources.
    192 % \item
    193 % It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
    194 % \item
    195 % Distributed design avoids unnecessary locks on resources shared across all KTs.
    196 % \end{itemize}
    197 
    198 \subsection{Allocation Latency}
    199 
    200 A primary goal of llheap is low latency.
    201 Two forms of latency are internal and external.
    202 Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
    203 Ideally latency is $O(1)$ with a small constant.
    204 
Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and hence leads to external fragmentation.
    206 The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
    207 
Obtaining $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
    209 Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
    210 The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}).
    211 Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
    212 
    213 
    214 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    215 
    216 \section{llheap Structure}
    217 
    218 \VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
    219 \begin{itemize}
    220 \item
    221 1:1 multiple-heap model to minimize the fastpath,
    222 \item
    223 can be built with or without heap ownership,
    224 \item
    225 headers per allocation versus containers,
    226 \item
    227 no coalescing to minimize latency,
    228 \item
    229 local reserved memory (pool) obtained from the operating system using @sbrk@ call,
    230 \item
    231 global reserved memory (pool) obtained from the operating system using @mmap@ call to create and reuse heaps needed by threads.
    232 \end{itemize}
    233 
    234 \begin{figure}
    235 \centering
    236 % \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
    237 \input{llheap}
    238 \caption{llheap Structure}
    239 \label{f:llheapStructure}
    240 \end{figure}
    241 
    242 llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores.
    243 There is a global bump-pointer to the next free heap in the array.
    244 When this array is exhausted, another array is allocated.
There is a global top pointer to a heap intrusive link that chains the free heaps from terminated threads, where these heaps are reused by new threads.
When statistics are turned on, there is a global top pointer to a heap intrusive link that chains \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps (see @malloc_stats@ \VRef{}).
    247 
When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
    249 When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
The free heaps form a stack, so hot storage is reused first.
Preserving all heaps created during the program's lifetime solves the storage-lifetime problem.
    252 This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
    253 llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
    254 
    255 Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
    256 The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system.
Each free bucket of a specific size has the following two lists:
    258 \begin{itemize}
    259 \item
    260 A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
The free objects form a stack, so hot storage is reused first.
    262 \item
For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
When the free stack is empty, the entire away-stack is removed and becomes the head of the corresponding free stack.
    265 \end{itemize}
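A sketch of one bucket's pair of lists, with illustrative names: the owner's stack needs no atomics or locks, while the away-stack takes a lock and is drained wholesale when the free stack empties.

```c
// One free bucket: an unlocked stack used only by the owner KT, and a
// locked away-stack for frees arriving from other KTs.
#include <pthread.h>
#include <stddef.h>

typedef struct Free { struct Free * next; } Free;
typedef struct {
    Free * free;                          // owner-only: no atomics, no locks
    Free * away;                          // remote frees: lock protected
    pthread_mutex_t away_lock;
} Bucket;

void free_local( Bucket * b, Free * f ) { // owner KT frees its own object
    f->next = b->free;  b->free = f;
}
void free_remote( Bucket * b, Free * f ) { // another KT returns an object
    pthread_mutex_lock( &b->away_lock );
    f->next = b->away;  b->away = f;
    pthread_mutex_unlock( &b->away_lock );
}
void * bucket_alloc( Bucket * b ) {
    if ( b->free == NULL ) {              // take the whole away-stack at once
        pthread_mutex_lock( &b->away_lock );
        b->free = b->away;  b->away = NULL;
        pthread_mutex_unlock( &b->away_lock );
    }
    Free * f = b->free;
    if ( f != NULL ) b->free = f->next;
    return f;                             // NULL => refill from local pool
}
```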
    266 
    267 Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
    268 First, the allocation is divided into small (@sbrk@) or large (@mmap@).
    269 For small allocations, $S$ is quantized into a bucket size.
Quantizing is performed using a binary search over the ordered bucket array.
    271 An optional optimization is fast lookup $O(1)$ for sizes < 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
    272 (Type @char@ restricts the number of bucket sizes to 256.)
    273 For $S$ > 64K, the binary search is used.
    274 Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
    275 \begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
    276 \item
    277 bucket's free stack,
    278 \item
    279 bucket's away stack,
    280 \item
    281 heap's local pool
    282 \item
    283 global pool
    284 \item
    285 operating system (@sbrk@)
    286 \end{enumerate}
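The two-level size lookup can be sketched with a small stand-in bucket array (8 sizes instead of llheap's 91, and a 256-entry table instead of 64K); the names are illustrative:

```c
// Size-to-bucket quantization: O(1) char-indexed table for small sizes,
// binary search over the ordered bucket array otherwise. A char-typed
// table restricts the number of size classes to 256, as noted in the text.
#include <stddef.h>

static const size_t bucket_sizes[] = { 16, 32, 48, 64, 96, 128, 192, 256 };
#define NBUCK ( sizeof( bucket_sizes ) / sizeof( bucket_sizes[0] ) )

static unsigned char lookup[257];            // indexed by size; 64K in llheap

void init_lookup( void ) {
    size_t b = 0;
    for ( size_t s = 0; s <= 256; s += 1 ) { // smallest bucket >= s
        while ( bucket_sizes[b] < s ) b += 1;
        lookup[s] = (unsigned char)b;
    }
}
size_t bucket_search( size_t size ) {        // smallest bucket >= size
    size_t lo = 0, hi = NBUCK - 1;
    while ( lo < hi ) {
        size_t mid = (lo + hi) / 2;
        if ( bucket_sizes[mid] < size ) lo = mid + 1; else hi = mid;
    }
    return lo;
}
size_t bucket_for( size_t size ) {           // two-level lookup
    return size <= 256 ? lookup[size] : bucket_search( size );
}
```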
Each heap uses segregated free-buckets that have free objects of a specific size. Each free-bucket of a specific size has the following two lists:
     128\begin{itemize}
     129\item
The free list is used when a thread frees an object owned by its own heap, so the free list does not use any locks/atomic operations, as it is only used by the owner KT.
     131\item
The away list is used when a thread A frees an object owned by another KT B's heap. The object should be freed to the owner heap (B's heap), so A places the object on B's away list. The away list is lock protected, as it is shared by all other threads.
     133\end{itemize}
     134
When a dynamic object of size S is requested, the thread-local heap checks if S is greater than or equal to the mmap threshold. Any request larger than the mmap threshold is fulfilled by allocating an mmap area of that size; such requests are not allocated in the sbrk area. The value of this threshold can be changed using the mallopt routine, but the new value should not be larger than the biggest free-bucket size.

Algorithm~\ref{alg:heapObjectAlloc} briefly shows how an allocation request is fulfilled.

\begin{algorithm}
\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
\begin{algorithmic}[1]
\State $\textit{O} \gets \text{NULL}$
\If {$S < \textit{mmap-threshold}$}
	\State $\textit{B} \gets \text{smallest free-bucket} \geq S$
	\If {$\textit{B's free-list is empty}$}
		\If {$\textit{B's away-list is empty}$}
			\If {$\textit{heap's allocation buffer} < S$}
				\State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
			\EndIf
			\State $\textit{O} \gets \text{bump allocate an object of size $S$ from allocation buffer}$
     
\end{algorithm}

Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$.

\begin{algorithm}[h]
\caption{Dynamic object free at address $A$}\label{alg:heapObjectFree}
%\begin{algorithmic}[1]
%\State write this algorithm
%\end{algorithmic}
\end{algorithm}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Added Features and Methods}
To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user friendly, we added a few more routines to the C allocator.
Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.

\subsection{C Interface}
We added a few more features and routines to the allocator's C interface to make the allocator more usable for programmers.
These features give the programmer more control over dynamic memory allocation.

\subsection{Out of Memory}
     

\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
@aalloc@ is an extension of @malloc@.
It allows the programmer to allocate a dynamic array of objects without explicitly calculating the total size of the array.
The only alternative to this routine in other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without zero filling it.
\paragraph{Usage}
@aalloc@ takes two parameters.
     
@elemSize@: size of the object in the array.
\end{itemize}
It returns the address of a dynamic array allocated on the heap that can contain @dim@ objects of size @elemSize@.
On failure, it returns a @NULL@ pointer.
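The contract of @aalloc@ can be modelled in portable C on top of @malloc@; this is only a semantic sketch, not llheap's implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

// Model of aalloc's contract: calloc's two-parameter interface,
// but without the zero fill (and hence without its cost).
static void * aalloc_model(size_t dim, size_t elemSize) {
    // reject dim * elemSize overflow, as calloc does
    if (elemSize != 0 && dim > (size_t)-1 / elemSize) return NULL;
    return malloc(dim * elemSize);
}
```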

\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
@resize@ is an extension of @realloc@.
It allows the programmer to reuse a currently allocated dynamic object with a new size requirement.
Its alternative in other allocators is @realloc@, but @realloc@ also copies the data from the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not need to preserve the old object's data.
\paragraph{Usage}
@resize@ takes two parameters.
     
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object of the given size but does not preserve the data in the old object.
On failure, it returns a @NULL@ pointer.
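@resize@'s contract, a new size with no data preservation, can be modelled in portable C; llheap instead reuses the old object's storage when possible, which this sketch does not capture:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

// Model of resize's contract only: the returned object has the new size,
// and the old contents are NOT preserved, so no copy is performed.
// (llheap reuses the old storage when it can; here we simply free + malloc.)
static void * resize_model(void * oaddr, size_t size) {
    free(oaddr);
    return malloc(size);
}
```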

\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
In addition to resizing an old object, it can also realign the old object to a new alignment requirement.
\paragraph{Usage}
This @resize@ takes three parameters.
It takes an additional parameter, @nalign@, as compared to the above @resize@ (FIX ME: cite above resize).

\begin{itemize}
     
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
@amemalign@ is a hybrid of @memalign@ and @aalloc@.
It allows the programmer to allocate an aligned dynamic array of objects without explicitly calculating the total size of the array.
\paragraph{Usage}
@amemalign@ takes three parameters.
     
@elemSize@: size of the object in the array.
\end{itemize}
It returns a dynamic array of objects with the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
@cmemalign@ is a hybrid of @amemalign@ and @calloc@.
It allows the programmer to allocate an aligned dynamic array of objects that is zero filled.
The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then explicitly fill it with 0.
This routine provides both alignment and zero filling implicitly.
\paragraph{Usage}
@cmemalign@ takes three parameters.
     
@elemSize@: size of the object in the array.
\end{itemize}
It returns a dynamic array of objects with the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment and is zero filled.
On failure, it returns a @NULL@ pointer.
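@cmemalign@'s contract can be modelled with exactly the two-step pattern it replaces, C11 @aligned_alloc@ plus an explicit @memset@ (a semantic sketch, not llheap's implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Model of cmemalign's contract: an aligned, zero-filled dynamic array.
static void * cmemalign_model(size_t alignment, size_t dim, size_t elemSize) {
    size_t bytes = dim * elemSize;
    // C11 aligned_alloc requires the size to be a multiple of the alignment
    size_t rounded = (bytes + alignment - 1) / alignment * alignment;
    void * p = aligned_alloc(alignment, rounded);
    if (p) memset(p, 0, rounded);   // the explicit zero fill cmemalign hides
    return p;
}
```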

\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
@malloc_alignment@ returns the alignment of a currently allocated dynamic object.
It assists the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the alignment of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was allocated with the required alignment.
\paragraph{Usage}
@malloc_alignment@ takes one parameter.
     
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_alignment@ returns the alignment of the given dynamic object.
On failure, it returns the default alignment of the llheap allocator.

\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
@malloc_zero_fill@ returns whether a currently allocated dynamic object was zero filled at the time of allocation.
It assists the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the zero-fill property of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was zero filled at the time of allocation.
\paragraph{Usage}
@malloc_zero_fill@ takes one parameter.
     
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and false otherwise.
On failure, it returns false.

\subsection{\lstinline{size_t malloc_size( void * addr )}}
@malloc_size@ returns the allocation size of a currently allocated dynamic object.
It assists the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the size of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was allocated with the required size.
Its current alternative in other allocators is @malloc_usable_size@.
But @malloc_size@ differs from @malloc_usable_size@, as @malloc_usable_size@ returns the total data capacity of the dynamic object, including the extra space at the end of the dynamic object.
On the other hand, @malloc_size@ returns the size that was given to the allocator when the dynamic object was allocated.
This size is updated when an object is realloced, resized, or passed through a similar allocator routine.
\paragraph{Usage}
@malloc_size@ takes one parameter.
     
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_size@ returns the allocation size of the given dynamic object.
On failure, it returns zero.
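The distinction between the requested size and the usable capacity can be modelled with a hypothetical prefix header (llheap's actual header layout differs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

// Hypothetical prefix header: the requested size is stored separately
// from the (possibly larger) rounded bucket capacity.
typedef struct { size_t requested; size_t capacity; } hdr;

static void * model_malloc(size_t size) {
    size_t capacity = (size + 15) / 16 * 16;   // illustrative bucket rounding
    hdr * h = malloc(sizeof(hdr) + capacity);
    if (!h) return NULL;
    h->requested = size;                        // what malloc_size reports
    h->capacity = capacity;                     // what malloc_usable_size reports
    return h + 1;                               // user storage follows the header
}

static size_t model_malloc_size(void * addr)  { return ((hdr *)addr - 1)->requested; }
static size_t model_usable_size(void * addr)  { return ((hdr *)addr - 1)->capacity; }
static void   model_free(void * addr)         { free((hdr *)addr - 1); }
```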

\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
In addition to reallocating an old object and preserving its data, it can also realign the old object to a new alignment requirement.
\paragraph{Usage}
This @realloc@ takes three parameters.
It takes an additional parameter, @nalign@, as compared to the default @realloc@.

\begin{itemize}
     
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters that preserves the data in the old object.
On failure, it returns a @NULL@ pointer.
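The aligned-realloc contract can be modelled in portable C; the caller passes the old size here, whereas llheap knows it from its own bookkeeping (a sketch, not the implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Model of the aligned realloc contract: obtain storage at the new
// alignment and copy the old data over, then release the old object.
static void * realloc_align_model(void * oaddr, size_t osize,
                                  size_t nalign, size_t size) {
    size_t rounded = (size + nalign - 1) / nalign * nalign; // aligned_alloc rule
    void * n = aligned_alloc(nalign, rounded);
    if (!n) return NULL;
    memcpy(n, oaddr, osize < size ? osize : size);  // preserve the old data
    free(oaddr);
    return n;
}
```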

\subsection{\CFA Malloc Interface}
We added some routines to the @malloc@ interface of \CFA.
These routines can only be used in \CFA and not in our stand-alone llheap allocator, as they use features that are only provided by \CFA and not by C.
This makes the allocator even more usable for programmers.
\CFA provides the liberty to know the returned type of a call to the allocator.
So, in these added routines, we mainly removed the object-size parameter, as the allocator can calculate the size of the object from the returned type.

\subsection{\lstinline{T * malloc( void )}}
This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc).
It takes no parameters, as compared to the default @malloc@ that takes one parameter.
\paragraph{Usage}
This @malloc@ takes no parameters.
It returns a dynamic object of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aalloc( size_t dim )}}
This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc).
It takes one parameter, as compared to the above @aalloc@ that takes two parameters.
\paragraph{Usage}
@aalloc@ takes one parameter.
     
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * calloc( size_t dim )}}
This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc).
It takes one parameter, as compared to the default @calloc@ that takes two parameters.
\paragraph{Usage}
This @calloc@ takes one parameter.

\begin{itemize}
     
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * resize( T * ptr, size_t size )}}
This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment).
It takes two parameters, as compared to the above @resize@ that takes three parameters.
It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the returned type.
\paragraph{Usage}
This @resize@ takes two parameters.
     
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the size given in the parameters.
The returned object is aligned to the alignment of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
This @realloc@ is a simplified polymorphic form of the above @realloc@ (FIX ME: cite @realloc@ with align).
It takes two parameters, as compared to the above @realloc@ that takes three parameters.
It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the returned type.
\paragraph{Usage}
This @realloc@ takes two parameters.
     
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the size given in the parameters and preserves the data in the given object.
The returned object is aligned to the alignment of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * memalign( size_t align )}}
This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign).
It takes one parameter, as compared to the default @memalign@ that takes two parameters.
\paragraph{Usage}
@memalign@ takes one parameter.
     
@align@: the required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given parameter @align@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign).
It takes two parameters, as compared to the above @amemalign@ that takes three parameters.
\paragraph{Usage}
@amemalign@ takes two parameters.
     
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
The returned object is aligned to the given parameter @align@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * cmemalign( size_t align, size_t dim )}}
This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign).
It takes two parameters, as compared to the above @cmemalign@ that takes three parameters.
\paragraph{Usage}
@cmemalign@ takes two parameters.
     
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
The returned object is aligned to the given parameter @align@ and is zero filled.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aligned_alloc( size_t align )}}
This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@).
It takes one parameter, as compared to the default @aligned_alloc@ that takes two parameters.
\paragraph{Usage}
This @aligned_alloc@ takes one parameter.
     
@align@: required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given parameter.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
This @posix_memalign@ is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@).
It takes two parameters, as compared to the default @posix_memalign@ that takes three parameters.
\paragraph{Usage}
This @posix_memalign@ takes two parameters.
     
\end{itemize}

It stores the address of a dynamic object of the size of type @T@ in the given parameter @ptr@.
This object is aligned to the given parameter.
On failure, it returns a nonzero error code.

\subsection{\lstinline{T * valloc( void )}}
This @valloc@ is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@).
It takes no parameters, as compared to the default @valloc@ that takes one parameter.
\paragraph{Usage}
@valloc@ takes no parameters.
It returns a dynamic object of the size of type @T@ that is aligned to the page size.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * pvalloc( void )}}
\paragraph{Usage}
@pvalloc@ takes no parameters.
It returns a dynamic object whose size is the size of type @T@ rounded up to a multiple of the page size.
The returned object is also aligned to the page size.
On failure, it returns a @NULL@ pointer.

\subsection{Alloc Interface}
In addition to improving the allocator interface both for \CFA and for our stand-alone llheap allocator in C, we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation.
This interface helps programmers in three major ways.

\begin{itemize}
\item
Routine Name: the alloc interface frees programmers from remembering different routine names for different kinds of dynamic allocation.
\item
Parameter Positions: the alloc interface frees programmers from remembering parameter positions in calls to routines.
\item
Object Size: the alloc interface does not require the programmer to mention the object size, as \CFA allows the allocator to determine the object size from the returned type of the alloc call.
\end{itemize}

The alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to programmers.
The new interface has just one routine name, alloc, that can be used to perform a wide range of dynamic allocations.
The parameters use backtick functions to provide a named-parameter-like feature for our alloc interface so that programmers do not have to remember parameter positions in an alloc call, except the position of the dimension (dim) parameter.

\subsection{Routine: \lstinline{T * alloc( ... )}}
A call to alloc without any parameters returns one dynamically allocated object of the size of type @T@.
Only the dimension (dim) parameter for array allocation has a fixed position in the alloc routine.
If the programmer wants to allocate an array of objects, the required number of members in the array has to be given as the first parameter to the alloc routine.
The alloc routine accepts six kinds of arguments.
Using different combinations of these parameters, different kinds of allocation can be performed.
Any combination of parameters can be used together, except @`realloc@ and @`resize@, which should not be used simultaneously in one call to the routine, as that creates ambiguity about whether to reallocate or resize a currently allocated dynamic object.
If both @`resize@ and @`realloc@ are used in a call to alloc, the latter one takes effect, or unexpected results may be produced.
     452alocc routine accepts six kinds of arguments. Using different combinations of tha parameters, different kind of allocations can be performed. Any combincation of parameters can be used together except @`realloc@ and @`resize@ that should not be used simultanously in one call to routine as it creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @`resize@ and @`realloc@ are used in a call to alloc then the latter one will take effect or unexpected resulted might be produced.
    689453
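For illustration, the following sketch combines several of these allocation forms (the variable names are hypothetical):
\begin{lstlisting}
int * ip = alloc();                // one int
int * ia = alloc( 5 );             // array of 5 ints, dim in the fixed first position
int * ab = alloc( 5 , 64`align );  // array of 5 ints, aligned to 64
int * fb = alloc( 5 , 0`fill );    // array of 5 ints, filled with 0
\end{lstlisting}
Each of these parameter kinds is described in the following paragraphs.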
\paragraph{Dim}
This is the only parameter in the alloc routine that has a fixed position, and it is also the only parameter that does not use a backtick function.
It has to be passed in the first position to the alloc call in case of an array allocation of objects of type @T@.
It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc).
This parameter should be of type @size_t@.

\paragraph{Align}
This parameter is position-free and uses the backtick routine align (@`align@).
The parameter passed with @`align@ should be of type @size_t@.
If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be found using the routine @libAlign@ in \CFA), the passed alignment parameter is rejected and the default alignment is used.

Example: @int * b = alloc( 5 , 64`align )@
This call will return a dynamic array of five integers.
It will align the allocated object to 64.

\paragraph{Fill}
This parameter is position-free and uses the backtick routine fill (@`fill@).
In case of @realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
Three types of parameters can be passed using @`fill@.

     
Object of returned type: an object of the return type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, repeated until the end of the required allocation.
\item
Dynamic object of returned type: a dynamic object of the return type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object.
In this case, the allocated memory is not filled repeatedly until the end of the allocation.
The filling happens until the end of the object passed to @`fill@ or the end of the requested allocation is reached.
\end{itemize}

Example: @int * b = alloc( 5 , 'a'`fill )@
This call will return a dynamic array of five integers.
It will fill the allocated object with the character 'a', repeated until the end of the requested allocation size.

Example: @int * b = alloc( 5 , 4`fill )@
This call will return a dynamic array of five integers.
It will fill the allocated object with the integer 4, repeated until the end of the requested allocation size.

Example: @int * b = alloc( 5 , a`fill )@ where @a@ is a pointer of int type
This call will return a dynamic array of five integers.
It will copy the data in @a@ to the returned object, non-repeatedly, until the end of @a@ or the end of the newly allocated object is reached.

\paragraph{Resize}
This parameter is position-free and uses the backtick routine resize (@`resize@).
It represents the old dynamic object (oaddr) that the programmer wants to
\begin{itemize}
\item
     
fill with something.
\end{itemize}
The data in the old dynamic object will not be preserved in the new object.
The type of the object passed to @`resize@ and the return type of the alloc call can be different.

Example: @int * b = alloc( 5 , a`resize )@
     

Example: @int * b = alloc( 5 , a`resize , 32`align )@
This call will resize object @a@ to a dynamic array that can contain 5 integers.
The returned object will also be aligned to 32.

Example: @int * b = alloc( 5 , a`resize , 32`align , 2`fill )@
This call will resize object @a@ to a dynamic array that can contain 5 integers.
The returned object will also be aligned to 32 and will be filled with 2.

\paragraph{Realloc}
This parameter is position-free and uses the backtick routine @realloc@ (@`realloc@).
It represents the old dynamic object (oaddr) that the programmer wants to
\begin{itemize}
\item
     
fill with something.
\end{itemize}
The data in the old dynamic object will be preserved in the new object.
The type of the object passed to @`realloc@ must be the same as the return type of the alloc call.

Example: @int * b = alloc( 5 , a`realloc )@
     

Example: @int * b = alloc( 5 , a`realloc , 32`align )@
This call will realloc object @a@ to a dynamic array that can contain 5 integers.
The returned object will also be aligned to 32.

Example: @int * b = alloc( 5 , a`realloc , 32`align , 2`fill )@
This call will realloc object @a@ to a dynamic array that can contain 5 integers.
The returned object will also be aligned to 32.
The extra space after copying the data of @a@ to the returned object will be filled with 2.
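To summarize the distinction between the two, a short sketch (hypothetical variable names; @`realloc@ preserves data and fixes the type, @`resize@ does neither):
\begin{lstlisting}
int * a = alloc( 5 , 3`fill );           // 5 ints, each 3
a = alloc( 10 , a`realloc , 0`fill );    // first 5 values preserved, extra space filled with 0
double * d = alloc( 4 , a`resize );      // storage reused; data not preserved; type may change
\end{lstlisting}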
  • doc/theses/mubeen_zulfiqar_MMath/background.tex

    re88c2fb r05e33f5  
The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object.
An object may be preceded by padding to ensure proper alignment.
Some algorithms quantize allocation requests into distinct sizes, called \newterm{buckets}, resulting in additional spacing after objects less than the quantized value.
(Note, the buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management-area, where each bucket is a top pointer to the freed objects of that size.)
When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
A free object also contains management data, \eg size, chaining, etc.
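The bucket search described above can be sketched in C (the bucket sizes here are hypothetical):
\begin{lstlisting}
static const size_t buckets[] = { 16, 32, 64, 128, 256, 512, 1024 }; // ascending sizes
// binary search for the smallest bucket size >= request
size_t bucket_for( size_t request ) {
	size_t lo = 0, hi = sizeof(buckets) / sizeof(buckets[0]);
	while ( lo < hi ) {
		size_t mid = lo + (hi - lo) / 2;
		if ( buckets[mid] < request ) lo = mid + 1;
		else hi = mid;
	}
	return buckets[lo]; // e.g., a request of 100 quantizes to 128; assumes request <= largest bucket
}
\end{lstlisting}
The difference between the request and its bucket size is the spacing after the object, which cannot satisfy another request while the object remains allocated.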
  • doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig

    re88c2fb r05e33f5  
[xfig source diff elided: AllocDS1.fig is redrawn; its labels include ``heap$_1$'', ``heap$_2$'', ``N kernel-thread buckets'', ``size'', ``free'', ``lock'', ``free list'', ``local pool'', ``free pool'', ``heap'', and ``global pool (sbrk)''.]
  • doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig

    re88c2fb r05e33f5  
[xfig source diff elided: AllocDS2.fig is redrawn; its labels include ``heap$_1$'', ``heap$_2$'', ``H heap buckets'', ``size'', ``free'', ``lock'', ``heaps'', ``local pools'', and ``global pool (sbrk)''.]
    145 4 1 0 50 -1 0 12 0.0000 2 180 825 2550 3450 local pool\001
    146 4 0 0 50 -1 0 12 0.0000 2 135 360 3525 3700 lock\001
    147 4 2 0 50 -1 0 12 0.0000 2 135 600 1875 3000 free list\001
    148 4 1 0 50 -1 0 12 0.0000 2 180 825 4050 3450 local pool\001
    115          3600 3150 5100 3150 5100 3750 3600 3750 3600 3150
    116 4 2 0 50 -1 0 11 0.0000 2 120 300 2625 1950 lock\001
    117 4 1 0 50 -1 0 10 0.0000 2 150 1155 3000 1725 N$\\times$S$_1$\001
    118 4 1 0 50 -1 0 10 0.0000 2 150 1155 3600 1725 N$\\times$S$_2$\001
    119 4 1 0 50 -1 0 12 0.0000 2 180 390 4425 1500 heap\001
    120 4 2 0 50 -1 0 12 0.0000 2 135 1140 2550 1425 kernel threads\001
    121 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2100 size\001
    122 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2250 free\001
    123 4 2 0 50 -1 0 12 0.0000 2 135 600 2625 2700 free list\001
    124 4 0 0 50 -1 0 12 0.0000 2 135 360 3675 3325 lock\001
    125 4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 3075 global pool (sbrk)\001
    126 4 1 0 50 -1 0 10 0.0000 2 150 1110 4800 1725 N$\\times$S$_t$\001
  • doc/theses/mubeen_zulfiqar_MMath/performance.tex

    re88c2fb r05e33f5  
    1 1 \chapter{Performance}
    2   \label{c:Performance}
    3 2 
    4 3 \noindent
  • doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib

    re88c2fb r05e33f5  
    124 124 }
    125 125 
    126 @misc{ptmalloc2,
    127     author      = {Wolfram Gloger},
    128     title       = {ptmalloc version 2},
    129     month       = jun,
    130     year        = 2006,
    131     note        = {\href{http://www.malloc.de/malloc/ptmalloc2-current.tar.gz}{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}},
    132 }
    133 
    134 @misc{GNUallocAPI,
    135     author      = {GNU},
    136     title       = {Summary of malloc-Related Functions},
    137     year        = 2020,
    138     note        = {\href{https://www.gnu.org/software/libc/manual/html\_node/Summary-of-Malloc.html}{https://www.gnu.org/\-software/\-libc/\-manual/\-html\_node/\-Summary-of-Malloc.html}},
    139 }
    140 
    141 @misc{SeriallyReusable,
    142     author      = {IBM},
    143     title       = {Serially reusable programs},
    144     month       = mar,
    145     year        = 2021,
    146     note        = {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-serially-reusable-programs}},
    147 }
    148 
    149 @misc{librseq,
    150     author      = {Mathieu Desnoyers},
    151     title       = {Library for Restartable Sequences},
    152     month       = mar,
    153     year        = 2022,
    154     note        = {\href{https://github.com/compudj/librseq}{https://github.com/compudj/librseq}},
        126 @misc{nedmalloc,
        127     author      = {Niall Douglas},
        128     title       = {nedmalloc version 1.06 Beta},
        129     month       = jan,
        130     year        = 2010,
        131     note        = {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}},
    155 132 }
    156 133 
  • doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex

    re88c2fb r05e33f5  
    95 95 % Use the "hyperref" package
    96 96 % N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE
    97 \usepackage{url}
    98 \usepackage[dvips,pagebackref=true]{hyperref} % with basic options
        97 \usepackage[pagebackref=true]{hyperref} % with basic options
    99 98 %\usepackage[pdftex,pagebackref=true]{hyperref}
    100 99 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing.
     
    115 114     citecolor=blue,        % color of links to bibliography
    116 115     filecolor=magenta,      % color of file links
    117     urlcolor=blue,           % color of external links
    118     breaklinks=true
        116     urlcolor=blue           % color of external links
    119 117 }
    120 118 \ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
     
    125 123     urlcolor=black
    126 124 }}{} % end of ifthenelse (no else)
    127 %\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,pagebackref=true,breaklinks=true,colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}
    128 \usepackage{breakurl}
    129 \urlstyle{sf}
    130125
    131 126 %\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package