source: doc/theses/mubeen_zulfiqar_MMath/allocator.tex @ db4a8cf

Last change on this file since db4a8cf was db4a8cf, checked in by Peter A. Buhr <pabuhr@…>, 23 months ago

proofread allocator chapter

1\chapter{Allocator}
2
This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
4The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.
5
6
7\section{llheap}
8
9The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
10(Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
11A direct consequence of this objective is very simple or no storage coalescing;
12hence, llheap's design is willing to use more storage to lower latency.
13This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM memory.
14Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}).
15
16% The objective of llheap's new design was to fulfill following requirements:
17% \begin{itemize}
18% \item It should be concurrent and thread-safe for multi-threaded programs.
19% \item It should avoid global locks, on resources shared across all threads, as much as possible.
20% \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
21% \item It should be a lightweight memory allocator.
22% \end{itemize}
23
24%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
25
26\section{Design Choices}
27
28llheap's design was reviewed and changed multiple times throughout the thesis.
29Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
Note, a few simple tests for each design choice were compared with the current best allocators to determine the viability of a design.
31
32
33\subsection{Allocation Fastpath}
34
35These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
36\paragraph{T:1 model}
37\VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs.
38This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes.
When KTs $\le$ N, the common bucket sizes are uncontended;
when KTs $>$ N, the free buckets are contended and latency increases significantly.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
however, picking N is workload specific.
44
45\begin{figure}
46\centering
47\input{AllocDS1}
48\caption{T:1 with Shared Buckets}
49\label{f:T1SharedBuckets}
50\end{figure}
51
52Problems:
53\begin{itemize}
54\item
55Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
56\item
57When no thread is assigned a bucket number, its free storage is unavailable.
58\item
59All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
60\end{itemize}
61Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
62
63\paragraph{T:H model}
64\VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs.
65A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
When KTs $\le$ N, the heaps are uncontended;
when KTs $>$ N, the heaps are contended.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
69By adjusting N upwards, this approach reduces contention but increases storage (time versus space);
70however, picking N is workload specific.
71
72\begin{figure}
73\centering
74\input{AllocDS2}
75\caption{T:H with Shared Heaps}
76\label{f:THSharedHeaps}
77\end{figure}
78
79Problems:
80\begin{itemize}
81\item
82Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
83\item
84When no thread is assigned to a heap, its free storage is unavailable.
85\item
86Ownership issues arise (see \VRef{s:Ownership}).
87\item
88All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
89\end{itemize}
90Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
91
92\paragraph{T:H model, H = number of CPUs}
93This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
94(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
95Hence, each CPU logically has its own private heap and local pool.
96A memory operation is serviced from the heap associated with the CPU executing the operation.
97This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs.
99
100Problems:
101\begin{itemize}
102\item
103Need to know when a CPU is added/removed from the @taskset@.
104\item
105Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
106\item
107Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
108\begin{quote}
109A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
110\end{quote}
111If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
112Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
113Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.
114
115\noindent
116Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
117\end{itemize}
118Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs.
119Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
120For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
121The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
122Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
123Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
124A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
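To illustrate the CPU-based heap selection (not llheap's final approach, and with hypothetical names), a minimal sketch using Linux's @sched_getcpu@ might look like the following; note that without @librseq@'s restartable-sequence protection, a preemption between the CPU query and the heap operation re-creates exactly the serially-reusable problem described above.

```c
#define _GNU_SOURCE
#include <sched.h>

// Hypothetical per-CPU heap handle; llheap's real heap structure differs.
typedef struct Heap { int id; } Heap;

// T:H=CPU sketch: service a memory operation from the heap of the executing CPU.
Heap * cpu_heap( Heap heaps[], int ncpus ) {
	int cpu = sched_getcpu();			// fast CPU-id query (vDSO on Linux)
	if ( cpu < 0 ) cpu = 0;				// fall back on failure
	return &heaps[cpu % ncpus];
}
```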
125
126
127\paragraph{1:1 model}
128This design is the T:H model with T = H, where there is one thread-local heap for each KT.
129(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
130Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
Heaps are uncontended for a KT's memory operations to its heap (modulo operations on the global pool and ownership).
132
133Problems:
134\begin{itemize}
135\item
Need to know when a KT starts/terminates to create/delete its heap.
137
138\noindent
139It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
140\item
141There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
142
143\noindent
144The classic solution only deletes a heap after all referents are returned, which is complex.
145The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
146If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
147To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
148In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
149\item
150There can be significant external fragmentation as the number of KTs increases.
151
152\noindent
153In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
Since the number of CPUs is relatively small, $<$~1024, and a heap is relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
155\item
156There is the same serially-reusable problem with UTs migrating across KTs.
157\end{itemize}
158Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
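The start/terminate hook mentioned above can be sketched with a thread-local pointer plus a @pthread@ key destructor; all names here are hypothetical and llheap's real mechanism differs, but the sketch shows why the fastpath needs only one unlocked thread-local load.

```c
#include <pthread.h>
#include <stdlib.h>

// Hypothetical per-KT heap; llheap's real heap structure differs.
typedef struct Heap { int id; } Heap;

static _Thread_local Heap * myHeap = NULL;		// 1:1 - one heap per kernel thread
static pthread_key_t heapKey;					// destructor fires at KT termination
static pthread_once_t once = PTHREAD_ONCE_INIT;

static void heapRetire( void * h ) {			// KT terminates: recycle its heap
	free( h );									// llheap would chain it for reuse
}
static void keyInit( void ) { pthread_key_create( &heapKey, heapRetire ); }

static Heap * getHeap( void ) {					// fastpath: one TLS load, no locks
	if ( myHeap == NULL ) {						// slowpath: first allocation by this KT
		pthread_once( &once, keyInit );
		myHeap = malloc( sizeof(Heap) );
		pthread_setspecific( heapKey, myHeap );	// register termination hook
	}
	return myHeap;
}
```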
159
160
161\vspace{5pt}
162\noindent
163The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
165For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
166However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
167More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
This leaves the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
170
171
172% \begin{itemize}
173% \item
174% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
175% \item
176% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
177% \item
178% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
179% that  because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
180% \end{itemize}
181% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
182
183% \subsection{Advantages of distributed design}
184%
185% The distributed design of llheap is concurrent to work in multi-threaded applications.
186% Some key benefits of the distributed design of llheap are as follows:
187% \begin{itemize}
188% \item
189% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
190% \item
191% Low or almost no contention on heap resources.
192% \item
193% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
194% \item
195% Distributed design avoids unnecessary locks on resources shared across all KTs.
196% \end{itemize}
197
198\subsection{Allocation Latency}
199
200A primary goal of llheap is low latency.
201Two forms of latency are internal and external.
202Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
203Ideally latency is $O(1)$ with a small constant.
204
Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and leads to external fragmentation.
206The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
207
Obtaining $O(1)$ external latency means acquiring one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and can cause large external fragmentation.
209Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
210The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}).
211Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
212
213
214%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
215
216\section{llheap Structure}
217
218\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
219\begin{itemize}
220\item
2211:1 multiple-heap model to minimize the fastpath,
222\item
223can be built with or without heap ownership,
224\item
225headers per allocation versus containers,
226\item
227no coalescing to minimize latency,
228\item
229local reserved memory (pool) obtained from the operating system using @sbrk@ call,
230\item
231global reserved memory (pool) obtained from the operating system using @mmap@ call to create and reuse heaps needed by threads.
232\end{itemize}
233
234\begin{figure}
235\centering
236% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
237\input{llheap}
238\caption{llheap Structure}
239\label{f:llheapStructure}
240\end{figure}
241
242llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores.
243There is a global bump-pointer to the next free heap in the array.
244When this array is exhausted, another array is allocated.
There is a global top pointer to an intrusive linked-list that chains free heaps from terminated threads, where these heaps are reused by new threads.
When statistics are turned on, there is a global top pointer to an intrusive linked-list that chains \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps (see @malloc_stats@ \VRef{}).
247
When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
The free heaps form a stack so hot storage is reused first.
Preserving all heaps created during the program lifetime solves the storage lifetime problem.
This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
253llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
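The heap array, bump pointer, and free-heap stack described above can be sketched as follows; names are hypothetical, locking is simplified, and the array-refill path when the bump pointer is exhausted is elided.

```c
#include <pthread.h>
#include <stddef.h>

// Hypothetical heap with an intrusive link; llheap's real structure differs.
typedef struct Heap { struct Heap * next; /* buckets elided */ } Heap;

#define NHEAPS 8						// stand-in for the number of cores
static Heap heapArray[NHEAPS];			// llheap obtains this storage via mmap
static size_t nextHeap = 0;				// global bump pointer into the array
static Heap * freeHeaps = NULL;			// stack of heaps from terminated KTs
static pthread_mutex_t heapLock = PTHREAD_MUTEX_INITIALIZER;

Heap * acquireHeap( void ) {			// KT start: reuse a free heap, else bump
	pthread_mutex_lock( &heapLock );
	Heap * h = freeHeaps;
	if ( h != NULL ) freeHeaps = h->next;	// stack => hot storage reused first
	else h = &heapArray[nextHeap++];	// another array is allocated when exhausted (elided)
	pthread_mutex_unlock( &heapLock );
	return h;
}
void releaseHeap( Heap * h ) {			// KT termination: heap persists for reuse
	pthread_mutex_lock( &heapLock );
	h->next = freeHeaps;
	freeHeaps = h;
	pthread_mutex_unlock( &heapLock );
}
```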
254
255Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
256The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system.
Each free bucket of a specific size has the following two lists:
258\begin{itemize}
259\item
A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
The free objects form a stack so hot storage is reused first.
\item
For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
When the free stack is empty, the entire away stack is removed and becomes the head of the corresponding free stack.
265\end{itemize}
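A minimal sketch of the two per-bucket lists follows; the names are hypothetical and the real llheap code differs, but it shows the owner's unlocked pop and the bulk transfer of the away stack when the free stack is empty.

```c
#include <pthread.h>
#include <stddef.h>

// Hypothetical free-object node; llheap stores links inside freed storage.
typedef struct Free { struct Free * next; } Free;

typedef struct Bucket {
	Free * freeStack;				// owner-only: push/pop without locking
	Free * awayStack;				// shared: other KTs return storage here
	pthread_mutex_t awayLock;		// protects awayStack only
} Bucket;

// Another KT returns an object it does not own: short locked push.
void awayPush( Bucket * b, Free * obj ) {
	pthread_mutex_lock( &b->awayLock );
	obj->next = b->awayStack;
	b->awayStack = obj;
	pthread_mutex_unlock( &b->awayLock );
}

// Owner KT allocates: unlocked pop; on empty, take the entire away stack at once.
Free * bucketPop( Bucket * b ) {
	if ( b->freeStack == NULL ) {
		pthread_mutex_lock( &b->awayLock );
		b->freeStack = b->awayStack;	// whole away stack becomes the free stack
		b->awayStack = NULL;
		pthread_mutex_unlock( &b->awayLock );
	}
	Free * obj = b->freeStack;
	if ( obj != NULL ) b->freeStack = obj->next;
	return obj;
}
```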
266
267Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
268First, the allocation is divided into small (@sbrk@) or large (@mmap@).
269For small allocations, $S$ is quantized into a bucket size.
Quantizing is performed using a binary search over the ordered bucket array.
An optional optimization is fast $O(1)$ lookup for sizes $<$ 64K using a 64K array of type @char@, where each element is an index to the corresponding bucket.
(Type @char@ restricts the number of bucket sizes to 256.)
For $S$ $>$ 64K, the binary search is used.
274Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
275\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
276\item
277bucket's free stack,
278\item
279bucket's away stack,
280\item
heap's local pool,
\item
global pool,
\item
operating system (@sbrk@).
286\end{enumerate}
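The size-to-bucket quantization above can be sketched as follows, using a toy bucket array (llheap's real array has 91 sizes from 16 bytes to 4M; names are hypothetical):

```c
#include <stddef.h>

// Toy subset of the ordered bucket-size array.
static const size_t buckets[] = { 16, 32, 48, 64, 96, 128, 192, 256, 512, 1024 };
enum { NBUCKETS = sizeof(buckets) / sizeof(buckets[0]) };

// Binary search for the smallest bucket >= size.
int bucketIndex( size_t size ) {
	int lo = 0, hi = NBUCKETS - 1;
	while ( lo < hi ) {
		int mid = (lo + hi) / 2;
		if ( buckets[mid] < size ) lo = mid + 1; else hi = mid;
	}
	return lo;
}

// Optional O(1) optimization: one char bucket-index per size up to the largest
// bucket (type char restricts the number of bucket sizes to 256).
static unsigned char lookup[1025];
void buildLookup( void ) {
	int b = 0;
	for ( size_t s = 0; s <= buckets[NBUCKETS - 1]; s += 1 ) {
		if ( s > buckets[b] ) b += 1;
		lookup[s] = (unsigned char)b;
	}
}
```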
287
288\begin{algorithm}
289\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
290\begin{algorithmic}[1]
291\State $\textit{O} \gets \text{NULL}$
\If {$S < \textit{mmap-threshold}$}
293        \State $\textit{B} \gets \text{smallest free-bucket} \geq S$
294        \If {$\textit{B's free-list is empty}$}
295                \If {$\textit{B's away-list is empty}$}
296                        \If {$\textit{heap's allocation buffer} < S$}
297                                \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
298                        \EndIf
299                        \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
300                \Else
301                        \State $\textit{merge B's away-list into free-list}$
302                        \State $\textit{O} \gets \text{pop an object from B's free-list}$
303                \EndIf
304        \Else
305                \State $\textit{O} \gets \text{pop an object from B's free-list}$
306        \EndIf
307        \State $\textit{O's owner} \gets \text{B}$
308\Else
309        \State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
310\EndIf
311\State $\Return \textit{ O}$
312\end{algorithmic}
313\end{algorithm}
314
315Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$.
316
317\begin{algorithm}[h]
318\caption{Dynamic object free at address $A$}\label{alg:heapObjectFree}
319%\begin{algorithmic}[1]
320%\State write this algorithm
321%\end{algorithmic}
322\end{algorithm}
323
324
325%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
326
327\section{Added Features and Methods}
To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user friendly, we added a few more routines to the C allocator.
Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.
330
331\subsection{C Interface}
We added a few more features and routines to the allocator's C interface to make the allocator more usable for programmers.
These features give the programmer more control over dynamic memory allocation.
334
335\subsection{Out of Memory}
336
Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory;
hence the need to return an alternate value for a zero-sized allocation.
The alternative is to abort a program when out of memory.
In theory, notifying the programmer allows recovery;
in practice, it is almost impossible to recover gracefully when out of memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen.
342
343
344\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
@aalloc@ is an extension of @malloc@.
It allows the programmer to allocate a dynamic array of objects without explicitly calculating the total size of the array.
The only alternative for this routine in other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without zero filling.
348\paragraph{Usage}
349@aalloc@ takes two parameters.
350
351\begin{itemize}
352\item
353@dim@: number of objects in the array
354\item
355@elemSize@: size of the object in the array.
356\end{itemize}
It returns the address of a dynamic array allocated on the heap that can contain @dim@ objects of size @elemSize@.
On failure, it returns a @NULL@ pointer.
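Semantically, @aalloc@ behaves like @calloc@ without the zero fill; a portable sketch (hypothetical name, with an overflow check) is:

```c
#include <stdlib.h>
#include <stdint.h>

// Sketch of aalloc semantics: malloc of dim * elemSize without calloc's zero fill.
void * aalloc_sketch( size_t dim, size_t elemSize ) {
	if ( elemSize != 0 && dim > SIZE_MAX / elemSize ) return NULL;	// overflow
	return malloc( dim * elemSize );
}
```

For example, @aalloc( 10, sizeof(struct S) )@ replaces the error-prone @malloc( 10 * sizeof(struct S) )@.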
359
360\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
@resize@ is an extension of @realloc@.
It allows the programmer to reuse a currently allocated dynamic object with a new size requirement.
Its alternative in other allocators is @realloc@, but @realloc@ also copies the data from the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement and does not need the old data preserved.
364\paragraph{Usage}
365@resize@ takes two parameters.
366
367\begin{itemize}
368\item
369@oaddr@: the address of the old object that needs to be resized.
370\item
@size@: the new size to which the old object needs to be resized.
372\end{itemize}
It returns an object of the given size but does not preserve the data from the old object.
374On failure, it returns a @NULL@ pointer.
375
376\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
377This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
In addition to resizing an old object, it can also realign the old object to a new alignment requirement.
379\paragraph{Usage}
This @resize@ takes three parameters.
It takes an additional @nalign@ parameter compared to the above @resize@ (FIX ME: cite above resize).
382
383\begin{itemize}
384\item
385@oaddr@: the address of the old object that needs to be resized.
386\item
387@nalign@: the new alignment to which the old object needs to be realigned.
388\item
@size@: the new size to which the old object needs to be resized.
390\end{itemize}
391It returns an object with the size and alignment given in the parameters.
392On failure, it returns a @NULL@ pointer.
393
394\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
@amemalign@ is a hybrid of @memalign@ and @aalloc@.
It allows the programmer to allocate an aligned dynamic array of objects without explicitly calculating the total size of the array.
\paragraph{Usage}
@amemalign@ takes three parameters.
400
401\begin{itemize}
402\item
403@alignment@: the alignment to which the dynamic array needs to be aligned.
404\item
405@dim@: number of objects in the array
406\item
407@elemSize@: size of the object in the array.
408\end{itemize}
It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment.
411On failure, it returns a @NULL@ pointer.
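A portable approximation of these semantics (hypothetical name) can be written with C11's @aligned_alloc@, which requires the requested size be a multiple of the alignment:

```c
#include <stdlib.h>

// Sketch of amemalign semantics: aligned allocation of dim * elemSize bytes.
void * amemalign_sketch( size_t alignment, size_t dim, size_t elemSize ) {
	size_t size = dim * elemSize;							// overflow check elided
	size_t rounded = (size + alignment - 1) / alignment * alignment;
	return aligned_alloc( alignment, rounded );				// C11; alignment must be a power of 2
}
```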
412
413\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
@cmemalign@ is a hybrid of @amemalign@ and @calloc@.
It allows the programmer to allocate an aligned dynamic array of objects that is 0 filled.
The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then explicitly fill it with 0.
This routine provides both aligning and 0 filling, implicitly.
\paragraph{Usage}
@cmemalign@ takes three parameters.
420
421\begin{itemize}
422\item
423@alignment@: the alignment to which the dynamic array needs to be aligned.
424\item
425@dim@: number of objects in the array
426\item
427@elemSize@: size of the object in the array.
428\end{itemize}
It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment and is 0 filled.
431On failure, it returns a @NULL@ pointer.
432
433\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
434@malloc_alignment@ returns the alignment of a currently allocated dynamic object.
It aids the programmer in memory management and personal bookkeeping.
It helps the programmer verify the alignment of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was allocated with the required alignment.
\paragraph{Usage}
@malloc_alignment@ takes one parameter.
439
440\begin{itemize}
441\item
442@addr@: the address of the currently allocated dynamic object.
443\end{itemize}
444@malloc_alignment@ returns the alignment of the given dynamic object.
On failure, it returns the default alignment of the llheap allocator.
446
447\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
448@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation.
It aids the programmer in memory management and personal bookkeeping.
It helps the programmer verify the zero-fill property of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was zero filled at the time of allocation.
\paragraph{Usage}
@malloc_zero_fill@ takes one parameter.
453
454\begin{itemize}
455\item
456@addr@: the address of the currently allocated dynamic object.
457\end{itemize}
@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and returns false otherwise.
459On failure, it returns false.
460
461\subsection{\lstinline{size_t malloc_size( void * addr )}}
462@malloc_size@ returns the allocation size of a currently allocated dynamic object.
It aids the programmer in memory management and personal bookkeeping.
It helps the programmer verify the size of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to assure the dynamic object was allocated with the required size.
Its current alternative in other allocators is @malloc_usable_size@.
However, @malloc_size@ differs from @malloc_usable_size@: @malloc_usable_size@ returns the total data capacity of the dynamic object, including the extra space at the end of the dynamic object, whereas @malloc_size@ returns the size that was given to the allocator when the dynamic object was allocated.
This size is updated when an object is reallocated, resized, or passed through a similar allocator routine.
\paragraph{Usage}
@malloc_size@ takes one parameter.
471
472\begin{itemize}
473\item
474@addr@: the address of the currently allocated dynamic object.
475\end{itemize}
476@malloc_size@ returns the allocation size of the given dynamic object.
On failure, it returns zero.
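The distinction between requested size and usable capacity can be sketched with a toy wrapper that records the requested size in a per-allocation header; all names are hypothetical and llheap's real header layout differs.

```c
#include <stdlib.h>
#include <stddef.h>

// Toy header recording the requested size; llheap's real header differs.
typedef struct { size_t requested; } Header;

void * my_alloc( size_t size ) {
	Header * h = malloc( sizeof(Header) + size );	// usable size may be larger
	if ( h == NULL ) return NULL;
	h->requested = size;							// size given to the allocator
	return h + 1;
}
// Like llheap's malloc_size: the requested size, not the usable capacity.
size_t my_malloc_size( void * addr ) {
	return addr ? ((Header *)addr - 1)->requested : 0;
}
void my_free( void * addr ) {
	if ( addr ) free( (Header *)addr - 1 );
}
```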
478
479\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
480This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
In addition to reallocating an old object and preserving its data, it can also realign the old object to a new alignment requirement.
482\paragraph{Usage}
This @realloc@ takes three parameters.
It takes an additional @nalign@ parameter compared to the default @realloc@.
485
486\begin{itemize}
487\item
488@oaddr@: the address of the old object that needs to be reallocated.
489\item
490@nalign@: the new alignment to which the old object needs to be realigned.
491\item
@size@: the new size to which the old object needs to be resized.
493\end{itemize}
494It returns an object with the size and alignment given in the parameters that preserves the data in the old object.
495On failure, it returns a @NULL@ pointer.
496
497\subsection{\CFA Malloc Interface}
We added some routines to the @malloc@ interface of \CFA.
These routines can only be used in \CFA and not in the stand-alone llheap allocator, as they use features that are only provided by \CFA and not by C.
This makes the allocator even more usable for programmers.
\CFA provides the ability to know the return type of a call to the allocator.
So, in these added routines, we mainly removed the object-size parameter, as the allocator can calculate the size of the object from the return type.
503
\subsection{\lstinline{T * malloc( void )}}
This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc).
It takes no parameters, whereas the default @malloc@ takes one.
\paragraph{Usage}
This @malloc@ takes no parameters.
It returns a dynamic object of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aalloc( size_t dim )}}
This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc).
It takes one parameter, whereas the above @aalloc@ takes two.
\paragraph{Usage}
@aalloc@ takes one parameter.

\begin{itemize}
\item
@dim@: the required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * calloc( size_t dim )}}
This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc).
It takes one parameter, whereas the default @calloc@ takes two.
\paragraph{Usage}
This @calloc@ takes one parameter.

\begin{itemize}
\item
@dim@: the required number of objects in the array.
\end{itemize}
It returns a zero-filled dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * resize( T * ptr, size_t size )}}
This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment).
It takes two parameters, whereas the above @resize@ takes three.
It frees the programmer from explicitly specifying the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
\paragraph{Usage}
This @resize@ takes two parameters.

\begin{itemize}
\item
@ptr@: the address of the old object.
\item
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the given size.
The returned object is aligned to the alignment of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
This @realloc@ is a simplified polymorphic form of the above @realloc@ (FIX ME: cite @realloc@ with align).
It takes two parameters, whereas the above @realloc@ takes three.
It frees the programmer from explicitly specifying the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type.
\paragraph{Usage}
This @realloc@ takes two parameters.

\begin{itemize}
\item
@ptr@: the address of the old object.
\item
@size@: the required size of the new object.
\end{itemize}
It returns a dynamic object of the given size that preserves the data of the given object.
The returned object is aligned to the alignment of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * memalign( size_t align )}}
This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign).
It takes one parameter, whereas the default @memalign@ takes two.
\paragraph{Usage}
@memalign@ takes one parameter.

\begin{itemize}
\item
@align@: the required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given alignment.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign).
It takes two parameters, whereas the above @amemalign@ takes three.
\paragraph{Usage}
@amemalign@ takes two parameters.

\begin{itemize}
\item
@align@: the required alignment of the dynamic array.
\item
@dim@: the required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
The returned object is aligned to the given alignment.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * cmemalign( size_t align, size_t dim )}}
This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign).
It takes two parameters, whereas the above @cmemalign@ takes three.
\paragraph{Usage}
@cmemalign@ takes two parameters.

\begin{itemize}
\item
@align@: the required alignment of the dynamic array.
\item
@dim@: the required number of objects in the array.
\end{itemize}
It returns a dynamic object with the capacity to contain @dim@ objects, each of the size of type @T@.
The returned object is aligned to the given alignment and is zero filled.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aligned_alloc( size_t align )}}
This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@).
It takes one parameter, whereas the default @aligned_alloc@ takes two.
\paragraph{Usage}
This @aligned_alloc@ takes one parameter.

\begin{itemize}
\item
@align@: the required alignment of the dynamic object.
\end{itemize}
It returns a dynamic object of the size of type @T@ that is aligned to the given alignment.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
This @posix_memalign@ is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@).
It takes two parameters, whereas the default @posix_memalign@ takes three.
\paragraph{Usage}
This @posix_memalign@ takes two parameters.

\begin{itemize}
\item
@ptr@: the address of a variable in which to store the address of the allocated object.
\item
@align@: the required alignment of the dynamic object.
\end{itemize}

It stores the address of a dynamic object of the size of type @T@ in the given parameter @ptr@.
This object is aligned to the given alignment.
On failure, it returns a nonzero error code.

\subsection{\lstinline{T * valloc( void )}}
This @valloc@ is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@).
It takes no parameters, whereas the default @valloc@ takes one.
\paragraph{Usage}
@valloc@ takes no parameters.
It returns a dynamic object of the size of type @T@ that is aligned to the page size.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * pvalloc( void )}}
This @pvalloc@ is a simplified polymorphic form of the default @pvalloc@.
\paragraph{Usage}
@pvalloc@ takes no parameters.
It returns a dynamic object whose size is the size of type @T@ rounded up to a multiple of the page size.
The returned object is also aligned to the page size.
On failure, it returns a @NULL@ pointer.

\subsection{Alloc Interface}
In addition to improving the allocator interface both for \CFA and for our stand-alone llheap allocator in C, we added a new @alloc@ interface in \CFA that increases the usability of dynamic memory allocation.
This interface helps programmers in three major ways.

\begin{itemize}
\item
Routine name: the @alloc@ interface frees programmers from remembering different routine names for different kinds of dynamic allocation.
\item
Parameter positions: the @alloc@ interface frees programmers from remembering parameter positions in calls to allocation routines.
\item
Object size: the @alloc@ interface does not require the programmer to specify the object size, as \CFA allows the allocator to determine the size from the return type of the @alloc@ call.
\end{itemize}

The @alloc@ interface uses polymorphism, backtick routines (FIX ME: cite backtick), and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic-memory-allocation interface to programmers.
The new interface has just one routine name, @alloc@, that can be used to perform a wide range of dynamic allocations.
The parameters use backtick functions to provide a named-parameter-like feature for the @alloc@ interface, so that programmers do not have to remember parameter positions in an @alloc@ call, except for the position of the dimension (@dim@) parameter.

\subsection{Routine: \lstinline{T * alloc( ... )}}
A call to @alloc@ without any parameters returns one dynamically allocated object of the size of type @T@.
Only the dimension (@dim@) parameter for array allocation has a fixed position in the @alloc@ routine.
If the programmer wants to allocate an array of objects, the required number of members in the array has to be given as the first parameter to the @alloc@ routine.
The @alloc@ routine accepts six kinds of arguments.
Using different combinations of these parameters, different kinds of allocation can be performed.
Any combination of parameters can be used together except @`realloc@ and @`resize@, which should not be used simultaneously in one call, as that creates ambiguity about whether to reallocate or resize a currently allocated dynamic object.
If both @`resize@ and @`realloc@ are used in a call to @alloc@, the latter one takes effect or unexpected results may be produced.

\paragraph{Dim}
This is the only parameter in the @alloc@ routine that has a fixed position, and it is also the only parameter that does not use a backtick function.
It has to be passed in the first position of an @alloc@ call in case of an array allocation of objects of type @T@.
It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc).
This parameter should be of type @size_t@.

Example: @int * a = alloc( 5 )@
This call returns a dynamic array of five integers.

\paragraph{Align}
This parameter is position-free and uses the backtick routine align (@`align@).
The parameter passed with @`align@ should be of type @size_t@.
If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be found using the routine @libAlign@ in \CFA), the passed alignment parameter is rejected and the default alignment is used.

Example: @int * b = alloc( 5, 64`align )@
This call returns a dynamic array of five integers.
It aligns the allocated object on a 64-byte boundary.

\paragraph{Fill}
This parameter is position-free and uses the backtick routine fill (@`fill@).
In case of @`realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
Three types of parameters can be passed using @`fill@.

\begin{itemize}
\item
@char@: a @char@ can be passed with @`fill@ to fill the whole dynamic allocation with the given character, repeated to the end of the required allocation.
\item
Object of return type: an object of the return type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, repeated to the end of the required allocation.
\item
Dynamic object of return type: a dynamic object of the return type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object.
In this case, the allocated memory is not filled repeatedly to the end of the allocation.
The filling happens until the end of the object passed to @`fill@ or the end of the requested allocation is reached, whichever comes first.
\end{itemize}

Example: @int * b = alloc( 5, 'a'`fill )@
This call returns a dynamic array of five integers.
It fills the allocated object with the character @'a'@, repeated to the end of the requested allocation size.

Example: @int * b = alloc( 5, 4`fill )@
This call returns a dynamic array of five integers.
It fills the allocated object with the integer 4, repeated to the end of the requested allocation size.

Example: @int * b = alloc( 5, a`fill )@, where @a@ is a pointer of type @int *@
This call returns a dynamic array of five integers.
It copies the data in @a@ to the returned object, without repetition, until the end of @a@ or the end of the newly allocated object is reached.

\paragraph{Resize}
This parameter is position-free and uses the backtick routine resize (@`resize@).
It represents the old dynamic object (@oaddr@) that the programmer wants to
\begin{itemize}
\item
resize to a new size,
\item
realign to a new alignment, or
\item
fill with something.
\end{itemize}
The data in the old dynamic object is not preserved in the new object.
The type of the object passed to @`resize@ and the return type of the @alloc@ call can be different.

Example: @int * b = alloc( 5, a`resize )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.

Example: @int * b = alloc( 5, a`resize, 32`align )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned on a 32-byte boundary.

Example: @int * b = alloc( 5, a`resize, 32`align, 2`fill )@
This call resizes object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned on a 32-byte boundary and is filled with 2.

\paragraph{Realloc}
This parameter is position-free and uses the backtick routine @realloc@ (@`realloc@).
It represents the old dynamic object (@oaddr@) that the programmer wants to
\begin{itemize}
\item
reallocate to a new size,
\item
realign to a new alignment, or
\item
fill with something.
\end{itemize}
The data in the old dynamic object is preserved in the new object.
The type of the object passed to @`realloc@ and the return type of the @alloc@ call cannot be different.

Example: @int * b = alloc( 5, a`realloc )@
This call reallocates object @a@ to a dynamic array that can contain 5 integers.

Example: @int * b = alloc( 5, a`realloc, 32`align )@
This call reallocates object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned on a 32-byte boundary.

Example: @int * b = alloc( 5, a`realloc, 32`align, 2`fill )@
This call reallocates object @a@ to a dynamic array that can contain 5 integers.
The returned object is also aligned on a 32-byte boundary.
The extra space after copying the data of @a@ to the returned object is filled with 2.