Timestamp:
Apr 12, 2022, 11:24:18 AM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
Children:
6978468
Parents:
a9cf339
Message:

proofread allocator chapter

File:
1 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    ra9cf339 rdb4a8cf  
    11\chapter{Allocator}
    22
    3 \section{uHeap}
    4 uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements).
    5 
    6 The objective of uHeap's new design was to fulfill following requirements:
    7 \begin{itemize}
    8 \item It should be concurrent and thread-safe for multi-threaded programs.
    9 \item It should avoid global locks, on resources shared across all threads, as much as possible.
    10 \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
    11 \item It should be a lightweight memory allocator.
    12 \end{itemize}
     3This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
     4The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.
     5
     6
     7\section{llheap}
     8
     9The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
     10(Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
     11A direct consequence of this objective is very simple or no storage coalescing;
     12hence, llheap's design is willing to use more storage to lower latency.
     13This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
     14Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}).
     15
     16% The objective of llheap's new design was to fulfill following requirements:
     17% \begin{itemize}
     18% \item It should be concurrent and thread-safe for multi-threaded programs.
     19% \item It should avoid global locks, on resources shared across all threads, as much as possible.
     20% \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
     21% \item It should be a lightweight memory allocator.
     22% \end{itemize}
    1323
    1424%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1525
    16 \section{Design choices for uHeap}
    17 uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, following two designs of uHeapLmm were proposed:
    18 
    19 \paragraph{Design 1: Centralized}
    20 One heap, but lower bucket sizes are N-shared across KTs.
    21 This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes.
    22 When KTs $\le$ N, the important bucket sizes are uncontented.
    23 When KTs $>$ N, the free buckets are contented.
    24 Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
    25 \begin{cquote}
     26\section{Design Choices}
     27
     28llheap's design was reviewed and changed multiple times throughout the thesis.
     29Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
     30Note, a few simple tests for each design choice were compared with the current best allocators to determine the viability of a design.
     31
     32
     33\subsection{Allocation Fastpath}
     34
     35These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
     36\paragraph{T:1 model}
     37\VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs.
     38This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes.
     39When KTs $\le$ N, the common bucket sizes are uncontended;
     40when KTs $>$ N, the free buckets are contended and latency increases significantly.
     41In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
     42Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
     43however, picking N is workload specific.
     44
     45\begin{figure}
     46\centering
     47\input{AllocDS1}
     48\caption{T:1 with Shared Buckets}
     49\label{f:T1SharedBuckets}
     50\end{figure}
     51
     52Problems:
     53\begin{itemize}
     54\item
     55Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
     56\item
     57When no thread is assigned a bucket number, its free storage is unavailable.
     58\item
     59All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
     60\end{itemize}
     61Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
     62
     63\paragraph{T:H model}
     64\VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs.
     65A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
     66When KTs $\le$ N, the heaps are uncontended;
     67when KTs $>$ N, the heaps are contended.
     68In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
     69By adjusting N upwards, this approach reduces contention but increases storage (time versus space);
     70however, picking N is workload specific.
     71
     72\begin{figure}
    2673\centering
    2774\input{AllocDS2}
    28 \end{cquote}
    29 Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
    30 When no thread is assigned a bucket number, its free storage is unavailable. All KTs will be contended for one lock on sbrk for their initial allocations (before free-lists gets populated).
    31 
    32 \paragraph{Design 2: Decentralized N Heaps}
    33 Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
    34 Kernel threads (KT) are assigned to the N heaps.
    35 When KTs $\le$ N, the heaps are uncontented.
    36 When KTs $>$ N, the heaps are contented.
    37 By adjusting N, this approach reduces storage at the cost of speed due to contention.
    38 In all cases, a thread acquires/releases a lock, contented or uncontented.
    39 \begin{cquote}
    40 \centering
    41 \input{AllocDS1}
    42 \end{cquote}
    43 Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT.
    44 
    45 \paragraph{Design 3: Decentralized Per-thread Heaps}
    46 Design 3 is similar to design 2 but instead of having an M:N model, it uses a 1:1 model. So, instead of having N heaos and sharing them among M KTs, Design 3 has one heap for each KT.
    47 Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area.
    48 Each KT will have its own exclusive thread-local heap. Heap will be uncontended between KTs regardless how many KTs have been created.
    49 Operations on @sbrk@ area will still be protected by locks.
    50 %\begin{cquote}
    51 %\centering
    52 %\input{AllocDS3} FIXME add figs
    53 %\end{cquote}
    54 Problems: We cannot destroy the heap when a KT exits because our dynamic objects have ownership and they are returned to the heap that created them when the program frees a dynamic object. All dynamic objects point back to their owner heap. If a thread A creates an object O, passes it to another thread B, and A itself exits. When B will free object O, O should return to A's heap so A's heap should be preserved for the lifetime of the whole program as their might be objects in-use of other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT.
    55 
    56 \paragraph{Design 4: Decentralized Per-CPU Heaps}
    57 Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU.
    58 Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area.
    59 Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be entertained by the heap of the CPU where the process is currently running on.
    60 Each CPU will have its own exclusive heap. Just like Design 3(FIXME cite), heap will be uncontended between KTs regardless how many KTs have been created.
    61 Operations on @sbrk@ area will still be protected by locks.
    62 To deal with preemtion during a dynamic memory operation, librseq(FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences can make it possible to re-run a critical section and undo the current writes if a preemption happened during the critical section's execution.
    63 %\begin{cquote}
    64 %\centering
    65 %\input{AllocDS4} FIXME add figs
    66 %\end{cquote}
    67 
    68 Problems: This approach was slower than the per-thread model. Also, librseq does not provide such restartable sequences to detect preemtions in user-level threading system which is important to us as CFA(FIXME cite) has its own threading system that we want to support.
    69 
    70 Out of the four designs, Design 3 was chosen because of the following reasons.
    71 \begin{itemize}
    72 \item
    73 Decentralized designes are better in general as compared to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designes shard the whole heap which has all the buckets with the addition of sharding sbrk area. So Design 1 was eliminated.
    74 \item
    75 Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenerio.
    76 \item
    77 Design 4 was eliminated because it was slower than Design 3 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achive user-threading safety which has some cost to it. Desing 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
    78 \end{itemize}
    79 
    80 
    81 \subsection{Advantages of distributed design}
    82 
    83 The distributed design of uHeap is concurrent to work in multi-threaded applications.
    84 
    85 Some key benefits of the distributed design of uHeap are as follows:
    86 
    87 \begin{itemize}
    88 \item
    89 The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as bump allocation reserve. The call to sbrk will be protected using locks but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned.
    90 \item
    91 Low or almost no contention on heap resources.
    92 \item
    93 It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
    94 \item
    95 Distributed design avoids unnecassry locks on resources shared across all KTs.
    96 \end{itemize}
     75\caption{T:H with Shared Heaps}
     76\label{f:THSharedHeaps}
     77\end{figure}
     78
     79Problems:
     80\begin{itemize}
     81\item
     82Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
     83\item
     84When no thread is assigned to a heap, its free storage is unavailable.
     85\item
     86Ownership issues arise (see \VRef{s:Ownership}).
     87\item
     88All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
     89\end{itemize}
     90Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
     91
     92\paragraph{T:H model, H = number of CPUs}
     93This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
     94(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
     95Hence, each CPU logically has its own private heap and local pool.
     96A memory operation is serviced from the heap associated with the CPU executing the operation.
     97This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
     98This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs.
     99
     100Problems:
     101\begin{itemize}
     102\item
     103Need to know when a CPU is added/removed from the @taskset@.
     104\item
     105Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
     106\item
     107Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
     108\begin{quote}
     109A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
     110\end{quote}
     111If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
     112Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
     113Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.
     114
     115\noindent
     116Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
     117\end{itemize}
     118Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs.
     119Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
     120For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
     121The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
     122Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
     123Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
     124A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
     125
     126
     127\paragraph{1:1 model}
     128This design is the T:H model with T = H, where there is one thread-local heap for each KT.
     129(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
     130Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
     131Heaps are uncontended for a KT's memory operations to its heap (modulo operations on the global pool and ownership).
     132
     133Problems:
     134\begin{itemize}
     135\item
     136Need to know when a KT starts/terminates to create/delete its heap.
     137
     138\noindent
     139It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
     140\item
     141There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
     142
     143\noindent
     144The classic solution only deletes a heap after all referents are returned, which is complex.
     145The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
     146If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
     147To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
     148In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
     149\item
     150There can be significant external fragmentation as the number of KTs increases.
     151
     152\noindent
     153In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
     154Since the number of CPUs is relatively small, $<$~1024, and a heap relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
     155\item
     156There is the same serially-reusable problem with UTs migrating across KTs.
     157\end{itemize}
     158Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
     159
     160
     161\vspace{5pt}
     162\noindent
     163The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
     164For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
     165For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
     166However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
     167More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
     168This leaves the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
     169The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
     170
     171
     172% \begin{itemize}
     173% \item
     174% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
     175% \item
     176% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
     177% \item
     178% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
     179% that  because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
     180% \end{itemize}
     181% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
     182
     183% \subsection{Advantages of distributed design}
     184%
     185% The distributed design of llheap is concurrent to work in multi-threaded applications.
     186% Some key benefits of the distributed design of llheap are as follows:
     187% \begin{itemize}
     188% \item
     189% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
     190% \item
     191% Low or almost no contention on heap resources.
     192% \item
     193% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
     194% \item
     195% Distributed design avoids unnecessary locks on resources shared across all KTs.
     196% \end{itemize}
     197
     198\subsection{Allocation Latency}
     199
     200A primary goal of llheap is low latency.
     201Two forms of latency are internal and external.
     202Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
     203Ideally latency is $O(1)$ with a small constant.
     204
     205Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and leads to external fragmentation.
     206The mitigating factor is that most programs have well-behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
     207
     208Obtaining $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
     209Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
     210The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}).
     211Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
     212
    97213
    98214%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    99215
    100 \section{uHeap Structure}
    101 
    102 As described in (FIXME cite 2.4) uHeap uses following features of multi-threaded memory allocators.
    103 \begin{itemize}
    104 \item
    105 uHeap has multiple heaps without a global heap and uses 1:1 model. (FIXME cite 2.5 1:1 model)
    106 \item
    107 uHeap uses object ownership. (FIXME cite 2.5.2)
    108 \item
    109 uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead each dynamic object allocated by uHeap has a header than contains bookkeeping information.
    110 \item
    111 Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using sbrk() call. (FIXME cite 2.7)
    112 \item
    113 Unless a heap is freeing an object that is owned by another thread's heap or heap is using sbrk() system call, uHeap is mostly lock-free which eliminates most of the contention on shared resources. (FIXME cite 2.8)
    114 \end{itemize}
    115 
    116 As uHeap uses a heap per-thread model to reduce contention on heap resources, we manage a list of heaps (heap-list) that can be used by threads. The list is empty at the start of the program. When a kernel thread (KT) is created, we check if heap-list is empty. If no then a heap is removed from the heap-list and is given to this new KT to use exclusively. If yes then a new heap object is created in dynamic memory and is given to this new KT to use exclusively. When a KT exits, its heap is not destroyed but instead its heap is put on the heap-list and is ready to be reused by new KTs.
    117 
    118 This reduces the memory footprint as the objects on free-lists of a KT that has exited can be reused by a new KT. Also, we preserve all the heaps that were created during the lifetime of the program till the end of the program. uHeap uses object ownership where an object is freed to the free-buckets of the heap that allocated it. Even after a KT A has exited, its heap has to be preserved as there might be objects in-use of other threads that were initially allocated by A and the passed to other threads.
     216\section{llheap Structure}
     217
     218\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
     219\begin{itemize}
     220\item
     2211:1 multiple-heap model to minimize the fastpath,
     222\item
     223can be built with or without heap ownership,
     224\item
     225headers per allocation versus containers,
     226\item
     227no coalescing to minimize latency,
     228\item
     229local reserved memory (pool) obtained from the operating system using @sbrk@ call,
     230\item
     231global reserved memory (pool) obtained from the operating system using @mmap@ call to create and reuse heaps needed by threads.
     232\end{itemize}
    119233
    120234\begin{figure}
    121235\centering
    122 \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
    123 \caption{HeapStructure}
    124 \label{fig:heapStructureFig}
     236% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
     237\input{llheap}
     238\caption{llheap Structure}
     239\label{f:llheapStructure}
    125240\end{figure}
    126241
    127 Each heap uses seggregated free-buckets that have free objects of a specific size. Each free-bucket of a specific size has following 2 lists in it:
    128 \begin{itemize}
    129 \item
    130 Free list is used when a thread is freeing an object that is owned by its own heap so free list does not use any locks/atomic-operations as it is only used by the owner KT.
    131 \item
    132 Away list is used when a thread A is freeing an object that is owned by another KT B's heap. This object should be freed to the owner heap (B's heap) so A will place the object on the away list of B. Away list is lock protected as it is shared by all other threads.
    133 \end{itemize}
    134 
    135 When a dynamic object of a size S is requested. The thread-local heap will check if S is greater than or equal to the mmap threshhold. Any request larger than the mmap threshhold is fulfilled by allocating an mmap area of that size and such requests are not allocated on sbrk area. The value of this threshhold can be changed using mallopt routine but the new value should not be larger than our biggest free-bucket size.
    136 
    137 Algorithm~\ref{alg:heapObjectAlloc} briefly shows how an allocation request is fulfilled.
     242llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores.
     243There is a global bump-pointer to the next free heap in the array.
     244When this array is exhausted, another array is allocated.
     245There is a global top pointer to a heap intrusive link that chains free heaps from terminated threads, where these heaps are reused by new threads.
     246When statistics are turned on, there is a global top pointer to a heap intrusive link that chains \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps (see @malloc_stats@ \VRef{}).
     247
     248When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
     249When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
     250The free heaps form a stack so hot storage is reused first.
     251Preserving all heaps created during the program lifetime solves the storage lifetime problem.
     252This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
     253llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
     254
     255Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
     256The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system.
     257Each free bucket of a specific size has the following two lists:
     258\begin{itemize}
     259\item
     260A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
     261The free objects form a stack so hot storage is reused first.
     262\item
     263For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operation require locking.
     264When the free stack is empty, the entire away stack is removed and becomes the head of the corresponding free stack.
     265\end{itemize}
     266
     267Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
     268First, the allocation is divided into small (@sbrk@) or large (@mmap@).
     269For small allocations, $S$ is quantized into a bucket size.
     270Quantizing is performed using a binary search over the ordered bucket array.
     271An optional optimization is an $O(1)$ fast lookup for sizes $<$ 64K using a 64K array of type @char@, where each element holds the index of the corresponding bucket.
     272(Type @char@ restricts the number of bucket sizes to 256.)
     273For $S >$ 64K, the binary search is used.
     274Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
     275\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
     276\item
     277bucket's free stack,
     278\item
     279bucket's away stack,
     280\item
      281heap's local pool,
      282\item
      283global pool,
      284\item
      285operating system (@sbrk@).
     286\end{enumerate}
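The quantization step above can be sketched as follows; the bucket-size table here is a small illustrative subset of the 91 sizes, and all names are hypothetical.

```c
#include <stddef.h>

// Illustrative subset of the ordered bucket-size array (llheap uses 91
// sizes from 16 bytes to 4M; a few powers of two suffice for the sketch).
static const size_t bucketSizes[] = { 16, 32, 64, 128, 256, 512, 1024 };
enum { NBUCKETS = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };

// Binary search over the ordered array for the smallest bucket >= S.
static int bucketIndex( size_t S ) {
    int lo = 0, hi = NBUCKETS - 1, result = -1;   // -1 => S exceeds all buckets
    while ( lo <= hi ) {
        int mid = (lo + hi) / 2;
        if ( bucketSizes[mid] >= S ) { result = mid; hi = mid - 1; }
        else lo = mid + 1;
    }
    return result;
}

// Optional O(1) fast path: a byte-wide table mapping each size below the
// table limit directly to its bucket index (llheap uses a 64K table; type
// char limits the number of buckets to 256). 1K table for the sketch.
static unsigned char lookup[1025];
static void buildLookup( void ) {
    for ( size_t s = 0; s < sizeof(lookup); s += 1 )
        lookup[s] = (unsigned char)bucketIndex( s );
}
```

The byte-wide table trades a small amount of static storage for a constant-time lookup on the common small-allocation path, falling back to the binary search for larger sizes.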
    138287
    139288\begin{algorithm}
    140 \caption{Dynamic object allocation of size S}\label{alg:heapObjectAlloc}
     289\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
    141290\begin{algorithmic}[1]
    142291\State $\textit{O} \gets \text{NULL}$
    143292\If {$S < \textit{mmap-threshhold}$}
    144         \State $\textit{B} \gets (\text{smallest free-bucket} \geq S)$
     293        \State $\textit{B} \gets \text{smallest free-bucket} \geq S$
    145294        \If {$\textit{B's free-list is empty}$}
    146295                \If {$\textit{B's away-list is empty}$}
    147296                        \If {$\textit{heap's allocation buffer} < S$}
    148                                 \State $\text{get allocation buffer using system call sbrk()}$
     297                                \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
    149298                        \EndIf
    150299                        \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
     
    164313\end{algorithm}
    165314
     315Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$.
     316
     317\begin{algorithm}[h]
     318\caption{Dynamic object free at address $A$}\label{alg:heapObjectFree}
     319%\begin{algorithmic}[1]
     320%\State write this algorithm
     321%\end{algorithmic}
     322\end{algorithm}
     323
    166324
    167325%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    168326
    169327\section{Added Features and Methods}
    170 To improve the uHeap allocator (FIX ME: cite uHeap) interface and make it more user friendly, we added a few more routines to the C allocator. Also, we built a \CFA (FIX ME: cite cforall) interface on top of C interface to increase the usability of the allocator.
      328To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user-friendly, we added a few more routines to the C allocator.
      329Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.
    171330
    172331\subsection{C Interface}
    173 We added a few more features and routines to the allocator's C interface that can make the allocator more usable to the programmers. THese features will programmer more control on the dynamic memory allocation.
      332We added a few more features and routines to the allocator's C interface that make the allocator more usable for programmers.
      333These features give the programmer more control over dynamic memory allocation.
    174334
    175335\subsection{Out of Memory}
     
    183343
    184344\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
    185 @aalloc@ is an extension of malloc. It allows programmer to allocate a dynamic array of objects without calculating the total size of array explicitly. The only alternate of this routine in the other allocators is calloc but calloc also fills the dynamic memory with 0 which makes it slower for a programmer who only wants to dynamically allocate an array of objects without filling it with 0.
      345@aalloc@ is an extension of @malloc@.
      346It allows the programmer to allocate a dynamic array of objects without explicitly calculating the total array size.
      347The only alternative to this routine in other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without zero filling.
    186348\paragraph{Usage}
    187349@aalloc@ takes two parameters.
     
    193355@elemSize@: size of the object in the array.
    194356\end{itemize}
    195 It returns address of dynamic object allocatoed on heap that can contain dim number of objects of the size elemSize. On failure, it returns a @NULL@ pointer.
      357It returns the address of a dynamic array allocated on the heap that can contain @dim@ objects of size @elemSize@.
     358On failure, it returns a @NULL@ pointer.
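As a sketch, the behaviour of @aalloc@ corresponds to the following stand-in written with standard C; the name @aalloc_sketch@ is hypothetical, and the overflow guard is our own defensive addition, not taken from llheap.

```c
#include <stdlib.h>
#include <stdint.h>

// Hypothetical stand-in for aalloc: allocate a dim-element array of
// elemSize-byte objects without zero filling (unlike calloc).
void * aalloc_sketch( size_t dim, size_t elemSize ) {
    if ( elemSize != 0 && dim > SIZE_MAX / elemSize ) return NULL; // defensive overflow guard
    return malloc( dim * elemSize );   // no zero fill => cheaper than calloc
}
```

For example, @int * a = aalloc_sketch( 10, sizeof(int) );@ allocates room for 10 integers whose contents are indeterminate, saving the zero-fill cost of @calloc@.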
    196359
    197360\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
    198 @resize@ is an extension of relloc. It allows programmer to reuse a cuurently allocated dynamic object with a new size requirement. Its alternate in the other allocators is @realloc@ but relloc also copy the data in old object to the new object which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data in the old object to the new object.
      361@resize@ is an extension of @realloc@.
      362It allows the programmer to reuse a currently allocated dynamic object with a new size requirement.
      363Its alternative in other allocators is @realloc@, but @realloc@ also copies the data from the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object at a new size but does not want to preserve the old data.
    199364\paragraph{Usage}
    200365@resize@ takes two parameters.
     
     206371@size@: the new size to which the old object needs to be resized.
    207372\end{itemize}
    208 It returns an object that is of the size given but it does not preserve the data in the old object. On failure, it returns a @NULL@ pointer.
      373It returns an object of the given size, but does not preserve the data from the old object.
     374On failure, it returns a @NULL@ pointer.
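Because @resize@ need not preserve data, its interface semantics can be sketched with standard C as free-then-malloc; the name @resize_sketch@ is hypothetical, and a real allocator would instead try to reuse the existing storage in place rather than always releasing it.

```c
#include <stdlib.h>

// Hypothetical stand-in for resize: obtain storage of the new size without
// preserving the old contents. This models only the interface semantics;
// an allocator can often satisfy the request by reusing oaddr's storage.
void * resize_sketch( void * oaddr, size_t size ) {
    free( oaddr );            // old data need not be copied or preserved
    return malloc( size );    // fresh storage of the requested size
}
```

Contrast with @realloc@, which must copy the old contents when the object moves; omitting the copy is exactly the saving @resize@ offers.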
    209375
    210376\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
    211 This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize). In addition to resizing the size of of an old object, it can also realign the old object to a new alignment requirement.
    212 \paragraph{Usage}
    213 This resize takes three parameters. It takes an additional parameter of nalign as compared to the above resize (FIX ME: cite above resize).
     377This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
      378In addition to resizing an old object, it can also realign the old object to a new alignment requirement.
     379\paragraph{Usage}
      380This @resize@ takes three parameters.
      381It takes an additional parameter, @nalign@, as compared to the above @resize@ (FIX ME: cite above resize).
    214382
    215383\begin{itemize}
     
     221389@size@: the new size to which the old object needs to be resized.
    222390\end{itemize}
    223 It returns an object with the size and alignment given in the parameters. On failure, it returns a @NULL@ pointer.
     391It returns an object with the size and alignment given in the parameters.
     392On failure, it returns a @NULL@ pointer.
    224393
    225394\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
    226 amemalign is a hybrid of memalign and aalloc. It allows programmer to allocate an aligned dynamic array of objects without calculating the total size of the array explicitly. It frees the programmer from calculating the total size of the array.
      395@amemalign@ is a hybrid of @memalign@ and @aalloc@.
      396It allows the programmer to allocate an aligned dynamic array of objects.
      397It frees the programmer from explicitly calculating the total size of the array.
    227398\paragraph{Usage}
     228399@amemalign@ takes three parameters.
     
    236407@elemSize@: size of the object in the array.
    237408\end{itemize}
    238 It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize. The returned dynamic array is aligned to the given alignment. On failure, it returns a @NULL@ pointer.
      409It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
     410The returned dynamic array is aligned to the given alignment.
     411On failure, it returns a @NULL@ pointer.
    239412
    240413\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
    241 cmemalign is a hybrid of amemalign and calloc. It allows programmer to allocate an aligned dynamic array of objects that is 0 filled. The current way to do this in other allocators is to allocate an aligned object with memalign and then fill it with 0 explicitly. This routine provides both features of aligning and 0 filling, implicitly.
      414@cmemalign@ is a hybrid of @amemalign@ and @calloc@.
      415It allows the programmer to allocate an aligned dynamic array of objects that is zero filled.
      416The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then explicitly fill it with 0.
      417This routine provides both aligning and zero filling, implicitly.
    242418\paragraph{Usage}
     243419@cmemalign@ takes three parameters.
     
    251427@elemSize@: size of the object in the array.
    252428\end{itemize}
    253 It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize. The returned dynamic array is aligned to the given alignment and is 0 filled. On failure, it returns a @NULL@ pointer.
      429It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
     430The returned dynamic array is aligned to the given alignment and is 0 filled.
     431On failure, it returns a @NULL@ pointer.
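A sketch of the combined effect of @cmemalign@, using only standard C facilities; the name is hypothetical, and a real allocator can often avoid the explicit @memset@ when storage arrives zero filled from the operating system.

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

// Hypothetical stand-in for cmemalign: aligned array allocation + zero fill.
// posix_memalign requires alignment to be a power of 2 and a multiple of
// sizeof(void *).
void * cmemalign_sketch( size_t alignment, size_t dim, size_t elemSize ) {
    if ( elemSize != 0 && dim > SIZE_MAX / elemSize ) return NULL; // defensive overflow guard
    void * p = NULL;
    if ( posix_memalign( &p, alignment, dim * elemSize ) != 0 ) return NULL;
    memset( p, 0, dim * elemSize );    // explicit zero fill
    return p;
}
```

Doing both steps inside one allocator call is what lets an implementation skip the @memset@ for fresh pages, which are already zeroed by the operating system.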
    254432
    255433\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
    256 @malloc_alignment@ returns the alignment of a currently allocated dynamic object. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verofying the alignment of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required alignment.
      434@malloc_alignment@ returns the alignment of a currently allocated dynamic object.
      435It aids the programmer in memory management and personal bookkeeping.
      436It helps the programmer verify the alignment of a dynamic object, especially in a producer-consumer scenario, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was allocated with the required alignment.
    257437\paragraph{Usage}
     258438@malloc_alignment@ takes one parameter.
     
    262442@addr@: the address of the currently allocated dynamic object.
    263443\end{itemize}
    264 @malloc_alignment@ returns the alignment of the given dynamic object. On failure, it return the value of default alignment of the uHeap allocator.
     444@malloc_alignment@ returns the alignment of the given dynamic object.
      445On failure, it returns the default alignment of the llheap allocator.
    265446
    266447\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
    267 @malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verifying the zero filled property of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was zero filled at the time of allocation.
      448@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation.
      449It aids the programmer in memory management and personal bookkeeping.
      450It helps the programmer verify the zero-filled property of a dynamic object, especially in a producer-consumer scenario, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was zero filled at the time of allocation.
    268451\paragraph{Usage}
     269452@malloc_zero_fill@ takes one parameter.
     
    273456@addr@: the address of the currently allocated dynamic object.
    274457\end{itemize}
    275 @malloc_zero_fill@ returns true if the dynamic object was initially zero filled and return false otherwise. On failure, it returns false.
      458@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and false otherwise.
     459On failure, it returns false.
    276460
    277461\subsection{\lstinline{size_t malloc_size( void * addr )}}
    278 @malloc_size@ returns the allocation size of a currently allocated dynamic object. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verofying the alignment of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required size. Its current alternate in the other allocators is @malloc_usable_size@. But, @malloc_size@ is different from @malloc_usable_size@ as @malloc_usabe_size@ returns the total data capacity of dynamic object including the extra space at the end of the dynamic object. On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object. This size is updated when an object is realloced, resized, or passed through a similar allocator routine.
      462@malloc_size@ returns the allocation size of a currently allocated dynamic object.
      463It aids the programmer in memory management and personal bookkeeping.
      464It helps the programmer verify the size of a dynamic object, especially in a producer-consumer scenario, where a producer allocates a dynamic object and the consumer needs to ensure the dynamic object was allocated with the required size.
      465Its current alternative in other allocators is @malloc_usable_size@.
      466But @malloc_size@ is different from @malloc_usable_size@, as @malloc_usable_size@ returns the total data capacity of a dynamic object, including the extra space at the end of the dynamic object.
      467On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object.
      468This size is updated when an object is reallocated, resized, or passed through a similar allocator routine.
    279469\paragraph{Usage}
     280470@malloc_size@ takes one parameter.
     
    284474@addr@: the address of the currently allocated dynamic object.
    285475\end{itemize}
    286 @malloc_size@ returns the allocation size of the given dynamic object. On failure, it return zero.
     476@malloc_size@ returns the allocation size of the given dynamic object.
      477On failure, it returns zero.
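The distinction from @malloc_usable_size@ can be observed with glibc, whose @malloc_usable_size@ reports the (possibly larger) rounded-up capacity rather than the requested size; this is a Linux/glibc-specific sketch, and llheap's @malloc_size@ is not called here.

```c
#include <stdlib.h>
#include <malloc.h>   // glibc-specific header declaring malloc_usable_size

// Returns the allocator's usable capacity for an allocation of
// `requested` bytes; malloc_size (llheap) would instead report
// the originally requested size.
size_t usable_for( size_t requested ) {
    void * p = malloc( requested );
    size_t usable = p ? malloc_usable_size( p ) : 0;  // capacity >= requested
    free( p );
    return usable;
}
```

Because allocators round requests up to a bucket size, the usable capacity is greater than or equal to the requested size, which is exactly the extra space at the end of the object mentioned above.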
    287478
    288479\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
    289 This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@). In addition to reallocating an old object and preserving the data in old object, it can also realign the old object to a new alignment requirement.
    290 \paragraph{Usage}
    291 This @realloc@ takes three parameters. It takes an additional parameter of nalign as compared to the default @realloc@.
      480This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
      481In addition to reallocating an old object and preserving the data in the old object, it can also realign the old object to a new alignment requirement.
      482\paragraph{Usage}
      483This @realloc@ takes three parameters.
      484It takes an additional parameter, @nalign@, as compared to the default @realloc@.
    292485
    293486\begin{itemize}
     
     299492@size@: the new size to which the old object needs to be resized.
    300493\end{itemize}
    301 It returns an object with the size and alignment given in the parameters that preserves the data in the old object. On failure, it returns a @NULL@ pointer.
     494It returns an object with the size and alignment given in the parameters that preserves the data in the old object.
     495On failure, it returns a @NULL@ pointer.
    302496
    303497\subsection{\CFA Malloc Interface}
    304 We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not in our standalone uHeap allocator as these routines use some features that are only provided by \CFA and not by C. It makes the allocator even more usable to the programmers.
    305 \CFA provides the liberty to know the returned type of a call to the allocator. So, mainly in these added routines, we removed the object size parameter from the routine as allocator can calculate the size of the object from the returned type.
      498We added some routines to the @malloc@ interface of \CFA.
      499These routines can only be used in \CFA and not in our stand-alone llheap allocator, as they use features that are provided by \CFA and not by C.
      500It makes the allocator even more usable for programmers.
      501\CFA provides the liberty of knowing the return type of a call to the allocator.
      502So, in these added routines, we mainly removed the object-size parameter, as the allocator can calculate the object size from the return type.
    306503
    307504\subsection{\lstinline{T * malloc( void )}}
    308 This malloc is a simplified polymorphic form of defualt malloc (FIX ME: cite malloc). It does not take any parameter as compared to default malloc that takes one parameter.
    309 \paragraph{Usage}
    310 This malloc takes no parameters.
    311 It returns a dynamic object of the size of type @T@. On failure, it returns a @NULL@ pointer.
      505This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc).
      506It takes no parameters, whereas the default @malloc@ takes one parameter.
     507\paragraph{Usage}
     508This @malloc@ takes no parameters.
     509It returns a dynamic object of the size of type @T@.
     510On failure, it returns a @NULL@ pointer.
    312511
    313512\subsection{\lstinline{T * aalloc( size_t dim )}}
    314 This aalloc is a simplified polymorphic form of above aalloc (FIX ME: cite aalloc). It takes one parameter as compared to the above aalloc that takes two parameters.
      513This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc).
      514It takes one parameter as compared to the above @aalloc@ that takes two parameters.
    315515\paragraph{Usage}
     316516@aalloc@ takes one parameter.
     
    320520@dim@: required number of objects in the array.
    321521\end{itemize}
      522It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@.
     522It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
     523On failure, it returns a @NULL@ pointer.
    323524
    324525\subsection{\lstinline{T * calloc( size_t dim )}}
    325 This calloc is a simplified polymorphic form of defualt calloc (FIX ME: cite calloc). It takes one parameter as compared to the default calloc that takes two parameters.
    326 \paragraph{Usage}
    327 This calloc takes one parameter.
      526This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc).
      527It takes one parameter as compared to the default @calloc@ that takes two parameters.
     528\paragraph{Usage}
     529This @calloc@ takes one parameter.
    328530
    329531\begin{itemize}
     
    331533@dim@: required number of objects in the array.
    332534\end{itemize}
      535It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@.
     535It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
     536On failure, it returns a @NULL@ pointer.
    334537
    335538\subsection{\lstinline{T * resize( T * ptr, size_t size )}}
    336 This resize is a simplified polymorphic form of above resize (FIX ME: cite resize with alignment). It takes two parameters as compared to the above resize that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
      539This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment).
      540It takes two parameters as compared to the above @resize@ that takes three parameters.
      541It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
    337542\paragraph{Usage}
     338543This @resize@ takes two parameters.
     
    344549@size@: the required size of the new object.
    345550\end{itemize}
    346 It returns a dynamic object of the size given in paramters. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer.
     551It returns a dynamic object of the size given in parameters.
     552The returned object is aligned to the alignment of type @T@.
     553On failure, it returns a @NULL@ pointer.
    347554
    348555\subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
    349 This @realloc@ is a simplified polymorphic form of defualt @realloc@ (FIX ME: cite @realloc@ with align). It takes two parameters as compared to the above @realloc@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
      556This @realloc@ is a simplified polymorphic form of the above @realloc@ (FIX ME: cite @realloc@ with align).
      557It takes two parameters as compared to the above @realloc@ that takes three parameters.
      558It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
    350559\paragraph{Usage}
    351560This @realloc@ takes two parameters.
     
    357566@size@: the required size of the new object.
    358567\end{itemize}
    359 It returns a dynamic object of the size given in paramters that preserves the data in the given object. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer.
     568It returns a dynamic object of the size given in parameters that preserves the data in the given object.
     569The returned object is aligned to the alignment of type @T@.
     570On failure, it returns a @NULL@ pointer.
    360571
    361572\subsection{\lstinline{T * memalign( size_t align )}}
    362 This memalign is a simplified polymorphic form of defualt memalign (FIX ME: cite memalign). It takes one parameters as compared to the default memalign that takes two parameters.
      573This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign).
      574It takes one parameter as compared to the default @memalign@ that takes two parameters.
    363575\paragraph{Usage}
     364576@memalign@ takes one parameter.
     
    368580@align@: the required alignment of the dynamic object.
    369581\end{itemize}
      582It returns a dynamic object of the size of type @T@ that is aligned to the given parameter @align@.
     582It returns a dynamic object of the size of type @T@ that is aligned to given parameter align.
     583On failure, it returns a @NULL@ pointer.
    371584
    372585\subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
    373 This amemalign is a simplified polymorphic form of above amemalign (FIX ME: cite amemalign). It takes two parameter as compared to the above amemalign that takes three parameters.
      586This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign).
      587It takes two parameters as compared to the above @amemalign@ that takes three parameters.
    374588\paragraph{Usage}
     375589@amemalign@ takes two parameters.
     
    381595@dim@: required number of objects in the array.
    382596\end{itemize}
      597It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@.
      598The returned object is aligned to the given parameter @align@.
     598The returned object is aligned to the given parameter align.
     599On failure, it returns a @NULL@ pointer.
    384600
    385601\subsection{\lstinline{T * cmemalign( size_t align, size_t dim  )}}
    386 This cmemalign is a simplified polymorphic form of above cmemalign (FIX ME: cite cmemalign). It takes two parameter as compared to the above cmemalign that takes three parameters.
      602This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign).
      603It takes two parameters as compared to the above @cmemalign@ that takes three parameters.
    387604\paragraph{Usage}
     388605@cmemalign@ takes two parameters.
     
    394611@dim@: required number of objects in the array.
    395612\end{itemize}
    396 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. The returned object is aligned to the given parameter align and is zero filled. On failure, it returns a @NULL@ pointer.
      613It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@.
      614The returned object is aligned to the given parameter @align@ and is zero filled.
     615On failure, it returns a @NULL@ pointer.
    397616
    398617\subsection{\lstinline{T * aligned_alloc( size_t align )}}
    399 This @aligned_alloc@ is a simplified polymorphic form of defualt @aligned_alloc@ (FIX ME: cite @aligned_alloc@). It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters.
      618This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@).
      619It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters.
    400620\paragraph{Usage}
    401621This @aligned_alloc@ takes one parameter.
     
    405625@align@: required alignment of the dynamic object.
    406626\end{itemize}
    407 It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. On failure, it returns a @NULL@ pointer.
     627It returns a dynamic object of the size of type @T@ that is aligned to the given parameter.
     628On failure, it returns a @NULL@ pointer.
    408629
    409630\subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
    410 This @posix_memalign@ is a simplified polymorphic form of defualt @posix_memalign@ (FIX ME: cite @posix_memalign@). It takes two parameters as compared to the default @posix_memalign@ that takes three parameters.
     631This @posix_memalign@ is a simplified polymorphic form of default @posix_memalign@ (FIX ME: cite @posix_memalign@).
     632It takes two parameters as compared to the default @posix_memalign@ that takes three parameters.
    411633\paragraph{Usage}
    412634This @posix_memalign@ takes two parameter.
     
    419641\end{itemize}
    420642
    421 It stores address of the dynamic object of the size of type @T@ in given parameter ptr. This object is aligned to the given parameter. On failure, it returns a @NULL@ pointer.
      643It stores the address of the dynamic object of the size of type @T@ in the given parameter @ptr@.
      644This object is aligned to the given parameter.
      645On failure, it returns a nonzero error value.
    422646
    423647\subsection{\lstinline{T * valloc( void )}}
    424 This @valloc@ is a simplified polymorphic form of defualt @valloc@ (FIX ME: cite @valloc@). It takes no parameters as compared to the default @valloc@ that takes one parameter.
     648This @valloc@ is a simplified polymorphic form of default @valloc@ (FIX ME: cite @valloc@).
     649It takes no parameters as compared to the default @valloc@ that takes one parameter.
    425650\paragraph{Usage}
    426651@valloc@ takes no parameters.
    427 It returns a dynamic object of the size of type @T@ that is aligned to the page size. On failure, it returns a @NULL@ pointer.
     652It returns a dynamic object of the size of type @T@ that is aligned to the page size.
     653On failure, it returns a @NULL@ pointer.
    428654
    429655\subsection{\lstinline{T * pvalloc( void )}}
    430656\paragraph{Usage}
    431657@pvalloc@ takes no parameters.
    432 It returns a dynamic object of the size that is calcutaed by rouding the size of type @T@. The returned object is also aligned to the page size. On failure, it returns a @NULL@ pointer.
      658It returns a dynamic object of the size that is calculated by rounding the size of type @T@ up to a multiple of the page size.
      659The returned object is also aligned to the page size.
      660On failure, it returns a @NULL@ pointer.
    433661
    434662\subsection{Alloc Interface}
    435 In addition to improve allocator interface both for \CFA and our standalone allocator uHeap in C. We also added a new alloc interface in \CFA that increases usability of dynamic memory allocation.
      663In addition to improving the allocator interface both for \CFA and for our stand-alone allocator llheap in C,
      664we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation.
    436665This interface helps programmers in three major ways.
    437666
    438667\begin{itemize}
    439668\item
    440 Routine Name: alloc interfce frees programmers from remmebring different routine names for different kind of dynamic allocations.
    441 \item
    442 Parametre Positions: alloc interface frees programmers from remembering parameter postions in call to routines.
    443 \item
    444 Object Size: alloc interface does not require programmer to mention the object size as \CFA allows allocator to determince the object size from returned type of alloc call.
    445 \end{itemize}
    446 
    447 Alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to the programmers. The new interfece has just one routine name alloc that can be used to perform a wide range of dynamic allocations. The parameters use backtick functions to provide a similar-to named parameters feature for our alloc interface so that programmers do not have to remember parameter positions in alloc call except the position of dimension (dim) parameter.
    448 
    449 \subsection{Routine: \lstinline{T * alloc( ... )}}
    450 Call to alloc wihout any parameter returns one object of size of type @T@ allocated dynamically.
    451 Only the dimension (dim) parameter for array allocation has the fixed position in the alloc routine. If programmer wants to allocate an array of objects that the required number of members in the array has to be given as the first parameter to the alloc routine.
    452 alocc routine accepts six kinds of arguments. Using different combinations of tha parameters, different kind of allocations can be performed. Any combincation of parameters can be used together except @`realloc@ and @`resize@ that should not be used simultanously in one call to routine as it creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @`resize@ and @`realloc@ are used in a call to alloc then the latter one will take effect or unexpected resulted might be produced.
     669Routine Name: alloc interface frees programmers from remembering different routine names for different kind of dynamic allocations.
     670\item
     671Parameter Positions: alloc interface frees programmers from remembering parameter positions in call to routines.
     672\item
     673Object Size: alloc interface does not require programmer to mention the object size as \CFA allows allocator to determine the object size from returned type of alloc call.
     674\end{itemize}
     675
     676Alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to the programmers.
     677The new interface has just one routine name alloc that can be used to perform a wide range of dynamic allocations.
     678The parameters use backtick functions to provide a similar-to named parameters feature for our alloc interface so that programmers do not have to remember parameter positions in alloc call except the position of dimension (dim) parameter.
     679
     680\subsection{Routine: \lstinline{T * alloc( ...
     681)}}
     682Call to alloc without any parameter returns one object of size of type @T@ allocated dynamically.
     683Only the dimension (dim) parameter for array allocation has the fixed position in the alloc routine.
     684If programmer wants to allocate an array of objects that the required number of members in the array has to be given as the first parameter to the alloc routine.
     685alloc routine accepts six kinds of arguments.
     686Using different combinations of than parameters, different kind of allocations can be performed.
     687Any combination of parameters can be used together except @`realloc@ and @`resize@ that should not be used simultaneously in one call to routine as it creates ambiguity about whether to reallocate or resize a currently allocated dynamic object.
     688If both @`resize@ and @`realloc@ are used in a call to alloc then the latter one will take effect or unexpected resulted might be produced.
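
The following sketch illustrates how these parameters compose in single @alloc@ calls; the variable names are illustrative, and the calls assume the parameter semantics described below.
\begin{cfa}
int * a = alloc();                          // one object of type int
int * b = alloc( 10 );                      // dynamic array of 10 ints
int * c = alloc( 10, 64`align );            // array aligned to 64 bytes
int * d = alloc( 10, 0`fill );              // array filled with 0
int * e = alloc( 10, d`realloc, 1`fill );   // reallocate d; extra space filled with 1
\end{cfa}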

\paragraph{Dim}
This is the only parameter in the @alloc@ routine that has a fixed position, and it is also the only parameter that does not use a backtick function.
It has to be passed in the first position to an @alloc@ call in the case of an array allocation of objects of type @T@.
It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc).
This parameter should be of type @size_t@.
     
\paragraph{Align}
This parameter is position-free and uses the backtick routine align (@`align@).
The parameter passed with @`align@ should be of type @size_t@.
If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be obtained using the routine @libAlign@ in \CFA), the passed alignment parameter is rejected and the default alignment is used.

Example: @int * b = alloc( 5, 64`align )@
This call returns a dynamic array of five integers.
The allocated object is aligned to 64 bytes.

\paragraph{Fill}
This parameter is position-free and uses the backtick routine fill (@`fill@).
In the case of @realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
Three types of parameters can be passed using @`fill@:
     
Object of returned type: an object of the return type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, repeated until the end of the required allocation.
\item
Dynamic object of returned type: a dynamic object of the return type can be passed with @`fill@ to fill the dynamic allocation with the data of the given dynamic object.
In this case, the allocated memory is not filled repeatedly to the end of the allocation.
The filling happens until the end of the object passed to @`fill@ or the end of the requested allocation is reached.
\end{itemize}

Example: @int * b = alloc( 5, 'a'`fill )@
This call returns a dynamic array of five integers.
It fills the allocated object with the character @'a'@, repeated to the end of the requested allocation size.

Example: @int * b = alloc( 5, 4`fill )@
This call returns a dynamic array of five integers.
It fills the allocated object with the integer 4, repeated to the end of the requested allocation size.

Example: @int * b = alloc( 5, a`fill )@, where @a@ is a pointer of type @int *@
This call returns a dynamic array of five integers.
It copies the data from @a@ into the returned object, without repetition, until the end of @a@ or the end of the newly allocated object is reached.
\paragraph{Resize}
This parameter is position-free and uses the backtick routine resize (@`resize@).
It represents the old dynamic object (@oaddr@) that the programmer wants to
\begin{itemize}
\item
     
fill with something.
\end{itemize}
The data in the old dynamic object is not preserved in the new object.
The type of the object passed to @`resize@ and the return type of the @alloc@ call can be different.

Example: @int * b = alloc( 5, a`resize )@
     
Example: @int * b = alloc( 5, a`resize, 32`align )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned to 32 bytes.

Example: @int * b = alloc( 5, a`resize, 32`align, 2`fill )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned to 32 bytes and is filled with 2.

\paragraph{Realloc}
This parameter is position-free and uses the backtick routine realloc (@`realloc@).
It represents the old dynamic object (@oaddr@) that the programmer wants to
\begin{itemize}
\item
     
fill with something.
\end{itemize}
The data in the old dynamic object is preserved in the new object.
The type of the object passed to @`realloc@ and the return type of the @alloc@ call cannot be different.

Example: @int * b = alloc( 5, a`realloc )@
     
Example: @int * b = alloc( 5, a`realloc, 32`align )@
This call reallocates object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned to 32 bytes.

Example: @int * b = alloc( 5, a`realloc, 32`align, 2`fill )@
This call reallocates object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned to 32 bytes.
The extra space after copying the data of @a@ to the returned object is filled with 2.
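
The difference between the two parameters can be summarized in a short sketch (variable names are illustrative, assuming the semantics described above): @`realloc@ preserves the data of the old object, while @`resize@ discards it.
\begin{cfa}
int * a = alloc( 5, 2`fill );       // five ints, each initialized to 2
a = alloc( 10, a`realloc );         // a[0..4] still contain 2; extra space unfilled
a = alloc( 20, a`resize );          // previous values discarded
\end{cfa}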