\chapter{Allocator}
\label{c:Allocator}

This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.


\section{llheap}

The primary design objective for llheap is low latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call.
(Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
A direct consequence of this objective is very simple or no storage coalescing;
hence, llheap's design is willing to use more storage to lower latency.
This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
Finally, llheap's performance should be comparable with the current best allocators (see the performance comparison in \VRef[Chapter]{c:Performance}).

% The objective of llheap's new design was to fulfill following requirements:
% \begin{itemize}
% \item It should be concurrent and thread-safe for multi-threaded programs.
% \item It should avoid global locks, on resources shared across all threads, as much as possible.
% \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
% \item It should be a lightweight memory allocator.
% \end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Design Choices}

llheap's design was reviewed and changed multiple times during the course of this thesis.
Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
Note, for each design choice, a few simple tests were compared with the current best allocators to determine its viability.


\subsection{Allocation Fastpath}
\label{s:AllocationFastpath}

These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.

\paragraph{T:1 model}
\VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are shared among N KTs.
This design leverages the fact that allocation requests are usually less than 1024 bytes and there are only a few different request sizes.
When KTs $\le$ N, the common bucket sizes are uncontended;
when KTs $>$ N, the free buckets are contended and latency increases significantly.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
however, picking N is workload specific.

\begin{figure}
\centering
\input{AllocDS1}
\caption{T:1 with Shared Buckets}
\label{f:T1SharedBuckets}
\end{figure}

Problems:
\begin{itemize}
\item
Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
\item
When no thread is assigned a bucket number, its free storage is unavailable.
\item
All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
\end{itemize}
Tests showed that having locks along the allocation fastpath produced a significant increase in allocation costs, and any contention among KTs produced a significant spike in latency.

\paragraph{T:H model}
\VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded (distributed) across the KTs.
A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
When KTs $\le$ N, the heaps might be uncontended;
when KTs $>$ N, the heaps are contended.
In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
By increasing N, this approach reduces contention but increases storage (time versus space);
however, picking N is workload specific.

\begin{figure}
\centering
\input{AllocDS2}
\caption{T:H with Shared Heaps}
\label{f:THSharedHeaps}
\end{figure}

Problems:
\begin{itemize}
\item
Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
\item
When no thread is assigned to a heap, its free storage is unavailable.
\item
Ownership issues arise (see \VRef{s:Ownership}).
\item
All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
\end{itemize}
Tests showed that having locks along the allocation fastpath produced a significant increase in allocation costs, and any contention among KTs produced a significant spike in latency.

\paragraph{T:H model, H = number of CPUs}
This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
Hence, each CPU logically has its own private heap and local pool.
A memory operation is serviced from the heap associated with the CPU executing the operation.
This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
This approach is essentially an M:N approach, where M is the number of KTs and N is the number of CPUs.

Problems:
\begin{itemize}
\item
Need to know when a CPU is added/removed from the @taskset@.
\item
Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
\item
Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
\begin{quote}
A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
\end{quote}
If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.

The @librseq@ library~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes if the critical section is preempted.
\end{itemize}
Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fastpath produced a significant increase in allocation costs.
Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
If @librseq@ had an @rseq_abort@ that:
\begin{enumerate}
\item
marks the current restartable critical-section as cancelled, so it restarts when attempting to commit, and
\item
does nothing if there is no restartable critical section in progress,
\end{enumerate}
then @rseq_abort@ could be called on the backside of a user-level context switch.
A feature similar to this idea might exist for hardware transactional-memory.
A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.

\paragraph{1:1 model}
This design is the T:H model with T = H, where there is one thread-local heap for each KT.
(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
Heaps are uncontended for a KT's memory operations as every KT has its own thread-local heap, modulo operations on the global pool and ownership.

Problems:
\begin{itemize}
\item
Need to know when a KT starts/terminates to create/delete its heap.

\noindent
It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
\item
There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.

\noindent
The classic solution only deletes a heap after all referents are returned, which is complex.
The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
\item
There can be significant external fragmentation as the number of KTs increases.

\noindent
In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
Since the number of CPUs is relatively small, and a heap is also relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
\item
There is the same serially-reusable problem with UTs migrating across KTs.
\end{itemize}
Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.


\vspace{5pt}
\noindent
The conclusion from this design exercise is: any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps might be shared by multiple threads, even when KTs $\le$ N.
For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
More operating-system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
Hence, only the 1:1 model has no atomic actions along the fastpath and no special operating-system support requirements.
The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.


% \begin{itemize}
% \item
% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
% \item
% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
% \item
% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
% that because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
% \end{itemize}
% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:

% \subsection{Advantages of distributed design}
%
% The distributed design of llheap is concurrent to work in multi-threaded applications.
% Some key benefits of the distributed design of llheap are as follows:
% \begin{itemize}
% \item
% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
% \item
% Low or almost no contention on heap resources.
% \item
% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
% \item
% Distributed design avoids unnecessary locks on resources shared across all KTs.
% \end{itemize}

\subsection{Allocation Latency}

A primary goal of llheap is low latency.
Two forms of latency are internal and external.
Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the operating system.
Ideally, latency is $O(1)$ with a small constant.

To obtain $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and in turn leads to external fragmentation.
The mitigating factor is that most programs have well-behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).

To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and entails potentially large external fragmentation.
Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VPageref{p:malloc_expansion}).
Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{llheap Structure}

\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
\begin{itemize}
\item
1:1 multiple-heap model to minimize the fastpath,
\item
can be built with or without heap ownership,
\item
headers per allocation versus containers,
\item
no coalescing to minimize latency,
\item
global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
\item
local reserved memory (pool) per heap obtained from the global pool,
\item
global reserved memory (pool) obtained from the operating system using the @sbrk@ call,
\item
optional fast-lookup table for converting allocation requests into bucket sizes,
\item
optional statistics-counters table for accumulating counts of allocation operations.
\end{itemize}

\begin{figure}
\centering
% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
\input{llheap}
\caption{llheap Structure}
\label{f:llheapStructure}
\end{figure}

llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores, which persists for the program duration.
There is a global bump-pointer to the next free heap in the array.
When this array is exhausted, another array of heaps is allocated.
There is a global top pointer for an intrusive linked-list to chain free heaps from terminated threads.
When statistics are turned on, there is a global top pointer for an intrusive linked-list to chain \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps using @malloc_stats@.

When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth in the number of heaps.
The free heaps are stored on a stack so hot storage is reused first.
Preserving all heaps created during the program lifetime solves the storage lifetime problem when ownership is used.
This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.

Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
All objects in a bucket are of the same size.
The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
Each free bucket of a specific size has the following two lists:
\begin{itemize}
\item
A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
The free objects are a stack so hot storage is reused first.
\item
For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
When the free stack is empty, the entire away stack is removed and becomes the head of the corresponding free stack.
\end{itemize}
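
As a sketch of this two-list design, allocation pops from the unlocked free stack and falls back to stealing the entire away stack under lock; the type, field, and routine names below are hypothetical, not llheap's actual identifiers.
\begin{lstlisting}
#include <stddef.h>
#include <pthread.h>

typedef struct FreeBlock {
	struct FreeBlock * next;			// intrusive link through the free storage itself
} FreeBlock;
typedef struct {
	size_t blockSize;					// fixed object size for this bucket
	FreeBlock * freeStack;				// owner-only stack: push/pop without locking
	FreeBlock * awayStack;				// blocks returned by other KTs (ownership)
	pthread_mutex_t awayLock;			// protects pushes/pops on the away stack
} Bucket;

static void * bucketAlloc( Bucket * b ) {
	if ( b->freeStack == NULL ) {		// owner stack empty?
		pthread_mutex_lock( &b->awayLock );
		b->freeStack = b->awayStack;	// steal entire away stack at once
		b->awayStack = NULL;
		pthread_mutex_unlock( &b->awayLock );
	}
	FreeBlock * block = b->freeStack;
	if ( block != NULL ) b->freeStack = block->next;	// unlocked pop
	return block;
}
\end{lstlisting}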

Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
First, the allocation is divided into small (@sbrk@) or large (@mmap@).
For large allocations, the storage is mapped directly from the operating system.
For small allocations, $S$ is quantized into a bucket size.
Quantizing is performed using a binary search over the ordered bucket array.
An optional optimization is $O(1)$ fast lookup for sizes $<$ 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
The @char@ type restricts the number of bucket sizes to 256.
For $S >$ 64K, a binary search is used.
A sketch of this quantization appears after the following list.
Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
\item
bucket's free stack,
\item
bucket's away stack,
\item
heap's local pool,
\item
global pool,
\item
operating system (@sbrk@).
\end{enumerate}
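
The following sketch illustrates the size quantization; the bucket sizes, table dimensions, and routine names are illustrative only (llheap uses 91 bucket sizes and a 64K lookup table), and sizes above the largest bucket are assumed to be diverted to @mmap@ by the caller.
\begin{lstlisting}
#include <stddef.h>

// Illustrative bucket sizes; llheap has 91 sizes from 16 bytes to 4M.
static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128, 192, 256 };
enum { NBUCKETS = sizeof( bucketSizes ) / sizeof( bucketSizes[0] ), LOOKUP = 257 };
static unsigned char lookup[LOOKUP];			// size => bucket index, max 256 buckets

static void buildLookup( void ) {				// fill table once at startup
	for ( size_t size = 0, b = 0; size < LOOKUP; size += 1 ) {
		if ( size > bucketSizes[b] ) b += 1;	// advance to next bucket size
		lookup[size] = (unsigned char)b;
	}
}
static size_t sizeToBucket( size_t size ) {
	if ( size < LOOKUP ) return lookup[size];	// O(1) fast lookup
	size_t lo = 0, hi = NBUCKETS - 1;			// otherwise binary search
	while ( lo < hi ) {
		size_t mid = (lo + hi) / 2;
		if ( bucketSizes[mid] < size ) lo = mid + 1; else hi = mid;
	}
	return lo;
}
\end{lstlisting}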

\begin{figure}
\vspace*{-10pt}
\begin{algorithm}[H]
\small
\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
\begin{algorithmic}[1]
\State $\textit{O} \gets \text{NULL}$
\If {$S \geq \textit{mmap-threshold}$}
\State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
\Else
\State $\textit{B} \gets \text{smallest free-bucket} \geq S$
\If {$\textit{B's free-list is empty}$}
\If {$\textit{B's away-list is empty}$}
\If {$\textit{heap's allocation buffer} < S$}
\State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
\EndIf
\State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
\Else
\State $\textit{merge B's away-list into free-list}$
\State $\textit{O} \gets \text{pop an object from B's free-list}$
\EndIf
\Else
\State $\textit{O} \gets \text{pop an object from B's free-list}$
\EndIf
\State $\textit{O's owner} \gets \text{B}$
\EndIf
\State $\Return \textit{ O}$
\end{algorithmic}
\end{algorithm}

\vspace*{-15pt}
\begin{algorithm}[H]
\small
\caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn}
\begin{algorithmic}[1]
\If {$\textit{A mapped allocation}$}
\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
\Else
\State $\text{B} \gets \textit{A's owner}$
\If {$\textit{B is thread-local heap's bucket}$}
\State $\text{push A to B's free-list}$
\Else
\State $\text{push A to B's away-list}$
\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}

\vspace*{-15pt}
\begin{algorithm}[H]
\small
\caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn}
\begin{algorithmic}[1]
\If {$\textit{A mapped allocation}$}
\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
\Else
\State $\text{B} \gets \textit{A's owner}$
\If {$\textit{B is thread-local heap's bucket}$}
\State $\text{push A to B's free-list}$
\Else
\State $\text{C} \gets \textit{thread-local heap's bucket with same size as B}$
\State $\text{push A to C's free-list}$
\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}
\end{figure}

Algorithm~\ref{alg:heapObjectFreeOwn} shows the deallocation (free) outline for an object at address $A$ with ownership.
First, the address is divided into small (@sbrk@) or large (@mmap@).
For large allocations, the storage is unmapped back to the operating system.
For small allocations, the bucket associated with the request size is retrieved.
If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack.

Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the deallocation (free) outline for an object at address $A$ without ownership.
The algorithm is the same as for ownership except when the bucket is not local to the thread;
then the bucket of the same size is located in the deallocating thread's heap, and the allocation is pushed onto that bucket's free stack.

Finally, the llheap design funnels\label{p:FunnelRoutine} all allocation/deallocation operations through the @malloc@ and @free@ routines, which are the only routines to directly access and manage the internal data structures of the heap.
Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated.
This design simplifies heap-management code during development and maintenance.



\subsection{Alignment}

Most dynamic memory allocations have a minimum storage alignment for the contained object(s).
Often the minimum memory alignment, M, is the bus width (32 or 64-bit), the largest register (double, long double), the largest atomic instruction (DCAS), or vector data (MMX).
In general, the minimum storage alignment is an 8/16-byte boundary on 32/64-bit computers.
For consistency, the object header is normally aligned at this same boundary.
Larger alignments must be a power of 2, such as page alignment (4/8K).
Any alignment request, N, $\le$ the minimum alignment is handled as a normal allocation with minimal alignment.

For alignments greater than the minimum, the obvious approach for aligning to address @A@ is: compute the next address that is a multiple of @N@ after the current end of the heap, @E@, plus room for the header before @A@ and the size of the allocation after @A@, moving the end of the heap to @E'@.
\begin{center}
\input{Alignment1}
\end{center}
The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations.
The same approach is used for sufficiently large free blocks, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation.
However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object.
As well, it does not work for large allocations, where many memory allocators switch from program @sbrk@ to operating-system @mmap@.
The reason is that @mmap@ only starts on a page boundary, and it is difficult to reuse the storage before the alignment boundary for other requests.
Finally, this approach is incompatible with allocator designs that funnel allocation requests through @malloc@, as it directly manipulates management information within the allocator to optimize the space/time of a request.

Instead, llheap alignment is accomplished by making a \emph{pessimistic} allocation request for sufficient storage to ensure that \emph{both} the alignment and size request are satisfied, \eg:
\begin{center}
\input{Alignment2}
\end{center}
The amount of storage necessary is @alignment - M + size@, which ensures there is an address, @A@, after the storage returned from @malloc@, @P@, that is a multiple of @alignment@ followed by sufficient storage for the data object.
The approach is pessimistic because if @P@ already has the correct alignment @N@, the initial allocation has already requested sufficient space to move to the next multiple of @N@.
For this special case, there is @alignment - M@ bytes of unused storage after the data object, which subsequently can be used by @realloc@.
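
A minimal sketch of this pessimistic computation follows, assuming a hypothetical minimum alignment of 16 bytes and ignoring the fake header needed by @free@ (discussed below):
\begin{lstlisting}
#include <stdlib.h>
#include <stdint.h>

enum { M = 16 };					// assumed minimum malloc alignment
// alignment is a power of 2, greater than M (hypothetical helper, not llheap's API)
void * alignedAlloc( size_t alignment, size_t size ) {
	char * P = malloc( alignment - M + size );	// pessimistic request
	if ( P == NULL ) return NULL;
	// round P up to next multiple of alignment; at most alignment - M bytes of padding
	uintptr_t A = ( (uintptr_t)P + alignment - 1 ) & ~( (uintptr_t)alignment - 1 );
	return (void *)A;				// size bytes fit after A by construction
}
\end{lstlisting}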

Note, the address returned is @A@, which is subsequently returned to @free@.
However, to correctly free the allocated object, the value @P@ must be computable, since that is the value generated by @malloc@ and returned within @memalign@.
Hence, there must be a mechanism to detect when @P@ $\neq$ @A@ and how to compute @P@ from @A@.

The llheap approach uses two headers:
the \emph{original} header associated with a memory allocation from @malloc@, and a \emph{fake} header within this storage before the alignment boundary @A@, which is returned from @memalign@, \eg:
\begin{center}
\input{Alignment2Impl}
\end{center}
Since @malloc@ has a minimum alignment of @M@, @P@ $\neq$ @A@ only holds for alignments greater than @M@.
When @P@ $\neq$ @A@, the minimum distance between @P@ and @A@ is @M@ bytes, due to the pessimistic storage-allocation.
Therefore, there is always room for an @M@-byte fake header before @A@.

The fake header must supply an indicator to distinguish it from a normal header and the location of address @P@ generated by @malloc@.
This information is encoded as an offset from @A@ to @P@ and the initial alignment (discussed in \VRef{s:ReallocStickyProperties}).
To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data.
\begin{center}
\input{FakeHeader}
\end{center}
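
The following sketch shows how @free@ could recover @P@ from @A@; the structure layout and field names are hypothetical, and llheap's actual header folds the discriminating bit into the unused low-order header bits described in the next subsection.
\begin{lstlisting}
#include <stddef.h>

typedef struct {
	size_t offset;				// distance from A back to P
	size_t alignment;			// original alignment; low bit marks a fake header
} FakeHeader;

static void * unalign( void * addr ) {			// map A back to P
	FakeHeader * fh = (FakeHeader *)addr - 1;	// fake header sits immediately before A
	if ( fh->alignment & 1 ) {					// low bit set => fake header
		return (char *)addr - fh->offset;		// recover original malloc address P
	}
	return addr;								// normal header => P == A
}
\end{lstlisting}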


\subsection{\lstinline{realloc} and Sticky Properties}
\label{s:ReallocStickyProperties}

The allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data, rather than performing the following steps manually.
\begin{flushleft}
\begin{tabular}{ll}
\multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\
\begin{lstlisting}
T * naddr = realloc( oaddr, newSize );



\end{lstlisting}
&
\begin{lstlisting}
T * naddr = (T *)malloc( newSize ); $\C[2.4in]{// new storage}$
memcpy( naddr, oaddr, oldSize ); $\C{// copy old bytes}$
free( oaddr ); $\C{// free old storage}$
oaddr = naddr; $\C{// change pointer}\CRT$
\end{lstlisting}
\end{tabular}
\end{flushleft}
The realloc pattern leverages available storage at the end of an allocation due to bucket sizes, possibly eliminating a new allocation and copying.
This pattern is underused, even though it can reduce storage-management costs.
In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc@, so even the initial @malloc@ can be a @realloc@ for consistency in the allocation pattern.
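
For example, a growing buffer can be managed entirely through @realloc@ (a sketch; error handling elided):
\begin{lstlisting}
#include <stdlib.h>
#include <stdio.h>

int main( void ) {
	char * buf = NULL;					// no initial malloc needed
	size_t size = 0;
	for ( int ch; (ch = getchar()) != EOF; ) {
		buf = realloc( buf, size + 1 );	// first call acts as malloc; later calls
		buf[size] = (char)ch;			// often extend in place due to bucket sizes
		size += 1;
	}
	free( buf );
}
\end{lstlisting}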

The hidden problem for this pattern is the effect of zero fill and alignment with respect to reallocation.
Are these properties transient or persistent (``sticky'')?
For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, respectively, what happens when those allocations are given to @realloc@ to change size?
That is, if @realloc@ logically extends storage into unused bucket space or allocates new storage to satisfy a size change, are initial allocation properties preserved?
Currently, allocation properties are not preserved, so subsequent use of @realloc@ storage may cause inefficient execution or errors due to lack of zero fill or alignment.
This silent problem is unintuitive to programmers and difficult to locate because it is transient.
To prevent these problems, llheap preserves initial allocation properties for the lifetime of an allocation and the semantics of @realloc@ are augmented to preserve these properties, with additional query routines.
This change makes the realloc pattern efficient and safe.


\subsection{Header}

Preserving allocation properties requires storing additional information with an allocation.
The best available option is the header, where \VRef[Figure]{f:llheapNormalHeader} shows the llheap storage layout.
The header has two data fields sized appropriately for 32/64-bit alignment requirements.
The first field is a union of three values:
\begin{description}
\item[bucket pointer]
is for allocated storage and points back to the bucket associated with this storage request (see \VRef[Figure]{f:llheapStructure} for the fields accessible in a bucket).
\item[mapped size]
is for mapped storage and is the storage size for use in unmapping.
\item[next free block]
is for free storage and is an intrusive pointer chaining same-size free blocks onto a bucket's free stack.
\end{description}
The second field remembers the request size versus the allocation (bucket) size, \eg a request for 42 bytes is rounded up to 64 bytes.
Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors and also helps in memory management.

\begin{figure}
\centering
\input{Header}
\caption{llheap Normal Header}
\label{f:llheapNormalHeader}
\end{figure}

The low-order 3 bits of the first field are \emph{unused} for any stored values as these values are 16-byte aligned by default, whereas the second field may use all of its bits.
The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header.
This implementation allows a fast test if any of the lower 3 bits are on (@&@ and compare).
If no bits are on, it implies a basic allocation, which is handled quickly;
otherwise, the bits are analysed and appropriate actions are taken for the complex cases.
Since most allocations are basic, they take significantly less time as the memory operations are done along the allocation and free fastpath.
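
A sketch of this fastpath test, with hypothetical mask names matching the bit order above:
\begin{lstlisting}
enum { MAPPED = 1, ZEROFILL = 2, ALIGNED = 4, ALLBITS = 7 };

// field1 is the first header field of an allocation
static int isComplex( unsigned long field1 ) {
	if ( ( field1 & ALLBITS ) == 0 ) return 0;	// basic allocation: fastpath
	// slowpath: decode each property bit
	if ( field1 & MAPPED ) { /* mmap allocation: munmap on free */ }
	if ( field1 & ZEROFILL ) { /* sticky zero-fill property */ }
	if ( field1 & ALIGNED ) { /* fake header / sticky alignment */ }
	return 1;
}
\end{lstlisting}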


\section{Statistics and Debugging}

llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour.
Incrementing statistics counters must appear on the allocation fastpath.
As noted, any atomic operation along the fastpath produces a significant increase in allocation costs.
To make statistics performant enough for use on running systems, each heap has its own set of statistics counters, so heap operations do not require atomic operations.
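
A sketch of the per-heap counters (names hypothetical); because each heap belongs to one KT, plain increments suffice:
\begin{lstlisting}
#include <stddef.h>

typedef struct Heap Heap;
struct Heap {
	unsigned long long mallocCalls, mallocBytes;	// KT-private: no atomics needed
	Heap * nextHeap;			// statistics-mode link joining all heaps
	// ... buckets, local pool, etc.
};
static __thread Heap * myHeap;	// 1:1 model: one heap per kernel thread

static void countMalloc( size_t size ) {	// called on the allocation fastpath
	myHeap->mallocCalls += 1;
	myHeap->mallocBytes += size;
}
\end{lstlisting}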

To locate all statistics counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
Note, the list is locked to prevent errors traversing an active list;
the statistics counters are not locked and can flicker during accumulation.
\VRef[Figure]{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
No other memory allocator studied provides as comprehensive statistical information.
Finally, these statistics were invaluable during the development of this thesis for debugging and verifying correctness, and should be equally valuable to application developers.

\begin{figure}
\begin{lstlisting}
Heap statistics: (storage request / allocation)
  malloc    >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes
  aalloc    >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  calloc    >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes
  memalign  >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  resize    >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  realloc   >0 calls 0; 0 calls 0; storage 0 / 0 bytes
  free      !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes
  away pulls 0; pushes 0; storage 0 / 0 bytes
  sbrk calls 1; storage 10,485,760 bytes
  mmap calls 10,000; storage 10,000 / 10,035 bytes
  munmap calls 10,000; storage 10,000 / 10,035 bytes
  threads started 4; exited 3
  heaps new 4; reused 0
\end{lstlisting}
\caption{Statistics Output}
\label{f:StatiticsOutput}
\end{figure}

llheap can also be built with debug checking, which inserts many asserts along all allocation paths.
These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruptions because internal values (like header fields) are overwritten.
These checks are best effort as opposed to complete allocation checking as in @valgrind@.
Nevertheless, the checks detect many allocation problems.
There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations have life-time duration, and hence, do not free their storage.
For example, @printf@ allocates a 1024-byte buffer on the first call and never deletes this buffer.
To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \VPageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference.
Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.

Tests indicate only a 30\% performance decrease when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistics is mitigated by limited calls, often only one at the end of the program.


\section{User-level Threading Support}
\label{s:UserlevelThreadingSupport}

The serially-reusable problem (see \VPageref{p:SeriallyReusable}) occurs for kernel threads in the ``T:H, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
The solution is to prevent interrupts that can result in a CPU or KT change during operations that are logically critical sections, such as starting a memory operation on one KT and completing it on another.
Locking these critical sections negates any attempt for a quick fastpath and results in high contention.
For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
Without time slicing, a user thread performing a long computation can prevent the execution of (starve) other threads.
To prevent starvation for a memory-allocation-intensive thread, \ie the time slice always triggers in an allocation critical-section for one thread so the thread never gets time sliced, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
The rollforward flag is tested at the end of each allocation funnel routine (see \VPageref{p:FunnelRoutine}), and if set, it is reset and a voluntary yield (context switch) is performed to allow other threads to execute.
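
A sketch of the rollforward check at the end of a funnel routine (all names hypothetical):
\begin{lstlisting}
#include <stddef.h>
extern void * doMalloc( size_t size );	// non-interruptible allocation work
extern void yield( void );				// user-level context switch

static __thread volatile int rollforward;	// set by signal handler aborting a time slice

void * heapMalloc( size_t size ) {		// funnel routine
	void * addr = doMalloc( size );
	if ( rollforward ) {				// time slice deferred during the allocation?
		rollforward = 0;
		yield();						// voluntary yield to other user threads
	}
	return addr;
}
\end{lstlisting}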

llheap uses two techniques to detect when execution is in an allocation operation or a routine called from an allocation operation, to abort any time slice during this period.
On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting kernel-thread-local flags so the signal handler aborts immediately.
On the fastpath, disabling/enabling interrupts is too expensive as accessing kernel-thread-local storage can be expensive and is not user-thread-safe.
For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
Hence, there is a window between loading the kernel-thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.

The fast technique (with lower run-time cost) is to define a special code section and place all non-interruptible routines in this section.
The linker places all code in this section into a contiguous block of memory, but the order of routines within the block is unspecified.
Then, the signal handler compares the program counter at the point of interrupt with the start and end address of the non-interruptible section, and if executing within this section, aborts the time slice and sets the rollforward flag.
This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible section (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath.
Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible section.
This issue is mitigated by the llheap funnel design, so only funnel routines and a few statistics routines are placed in the non-interruptible section and their assembler code examined.
These techniques are used in both the \uC and \CFA versions of llheap as both of these systems have user-level threading.
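
A minimal sketch of this technique using GCC section attributes, where the GNU linker synthesizes start/stop symbols for any section whose name is a valid C identifier (section and routine names are hypothetical):
\begin{lstlisting}
#include <stddef.h>

#define NOPREEMPT __attribute__(( section( "llheap_nopreempt" ) ))

NOPREEMPT void * heapMalloc( size_t size ) { /* fastpath */ return 0; }
NOPREEMPT void heapFree( void * addr ) { /* fastpath */ }

extern char __start_llheap_nopreempt[], __stop_llheap_nopreempt[];	// linker generated

static int inNopreempt( char * pc ) {	// pc: interrupted program counter from signal context
	return pc >= __start_llheap_nopreempt && pc < __stop_llheap_nopreempt;
}
// signal handler: if inNopreempt( pc ), set rollforward and return without context switching
\end{lstlisting}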


\section{Bootstrapping}

There are problems bootstrapping a memory allocator.
\begin{enumerate}
\item
Programs can be statically or dynamically linked.
\item
The order in which the linker schedules startup code is poorly supported, so it cannot be controlled entirely.
\item
Knowing a KT's start and end independently from the KT code is difficult.
\end{enumerate}

For static linking, the allocator is loaded with the program.
Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program.
This approach allows allocator substitution by placing an allocation library before any other in the link/load path.

Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded.
As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including those from the dynamic loader.
Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct.
Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations.
Testing showed up to a 5\% performance decrease with dynamic linking as compared to static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding.


All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap.
The problem is getting initialization done before the first allocator call.
However, there does not seem to be a mechanism to tell either the static or dynamic loader to first perform initialization code before any calls to a loaded library.
Also, initialization code of other libraries and the run-time environment may call memory allocation routines such as \lstinline{malloc}.
This compounds the situation as there is no mechanism to tell either the static or dynamic loader to first perform the initialization code of the memory allocator before any other initialization that may involve a dynamic memory allocation call.
As a result, calls to allocation routines occur without initialization.
To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
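
A sketch of this singleton check (names hypothetical):
\begin{lstlisting}
#include <stddef.h>
typedef struct Heap Heap;
extern Heap * heapManagerCtor( void );	// one-time KT and global initialization

static __thread Heap * myHeap;			// NULL until this KT's heap is assigned

void * heapMalloc( size_t size ) {
	if ( __builtin_expect( myHeap == NULL, 0 ) ) {	// first allocation by this KT?
		myHeap = heapManagerCtor();		// may also run global startup once
	}
	// ... normal allocation fastpath ...
	return NULL;						// elided
}
\end{lstlisting}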

Two other important execution points are program startup and termination, which include prologue and epilogue code to bootstrap a program, of which programmers are unaware.
For example, dynamic-memory allocations before/after the application starts should not be considered in statistics because the application does not make these calls.
llheap establishes these two points using the following routines:
\begin{lstlisting}
__attribute__(( constructor( 100 ) )) static void startup( void ) {
	// clear statistic counters
	// reset allocUnfreed counter
}
__attribute__(( destructor( 100 ) )) static void shutdown( void ) {
	// sum allocUnfreed for all heaps
	// subtract global unfreed storage
	// if allocUnfreed > 0 then print warning message
}
\end{lstlisting}
which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority.
Application programs may only use global constructor/destructor priorities greater than 100.
Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the program terminates but before the program epilogue.
By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue.

While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines.
However, it is essential for the allocator to know when each KT is started/terminated.
One approach is to create a thread-local object with a constructor/destructor, which is triggered after a new KT starts and before it terminates, respectively.
\begin{lstlisting}
struct ThreadManager {
	volatile bool pgm_thread;
	ThreadManager() {} $\C{// unusable}$
	~ThreadManager() { if ( pgm_thread ) heapManagerDtor(); }
};
static thread_local ThreadManager threadManager;
\end{lstlisting}
Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor.
Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT.
Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to call @threadManager@'s constructor; otherwise its destructor is not called.
Now when a KT terminates, @~ThreadManager@ is called to chain it onto the global-heap free-stack, where @pgm_thread@ is set to true only for the program KT.
The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage.

Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage.
This recursion is handled with another thread-local flag to prevent double initialization.
A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@.
In the meantime, the terminated heap has been put on the global-heap free-stack, and may be in use by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away list using locking.

For user threading systems, the KTs are controlled by the runtime, and hence, start/end points are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
The following API was created to provide interaction between the language runtime and the allocator.
\begin{lstlisting}
void startThread(); $\C{// KT starts}$
void finishThread(); $\C{// KT ends}$
void startup(); $\C{// when application code starts}$
void shutdown(); $\C{// when application code ends}$
bool traceHeap(); $\C{// enable allocation/free printing for debugging}$
bool traceHeapOn(); $\C{// start printing allocation/free calls}$
bool traceHeapOff(); $\C{// stop printing allocation/free calls}$
\end{lstlisting}
This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Added Features and Methods}

The C dynamic-allocation API (see \VRef[Figure]{f:CDynamicAllocationAPI}) is neither orthogonal nor complete.
For example,
\begin{itemize}
\item
It is possible to zero fill or align an allocation but not both.
\item
It is \emph{only} possible to zero fill an array allocation.
\item
It is not possible to resize a memory allocation without data copying.
\item
@realloc@ does not preserve initial allocation properties.
\end{itemize}
As a result, programmers must provide these options, which is error prone, resulting in blaming the entire programming language for a poor dynamic-allocation API.
Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation.

\begin{figure}
\begin{lstlisting}
void * malloc( size_t size );
void * calloc( size_t nmemb, size_t size );
void * realloc( void * ptr, size_t size );
void * reallocarray( void * ptr, size_t nmemb, size_t size );
void free( void * ptr );
void * memalign( size_t alignment, size_t size );
void * aligned_alloc( size_t alignment, size_t size );
int posix_memalign( void ** memptr, size_t alignment, size_t size );
void * valloc( size_t size );
void * pvalloc( size_t size );

struct mallinfo mallinfo( void );
int mallopt( int param, int val );
int malloc_trim( size_t pad );
size_t malloc_usable_size( void * ptr );
void malloc_stats( void );
int malloc_info( int options, FILE * fp );
\end{lstlisting}
\caption{C Dynamic-Allocation API}
\label{f:CDynamicAllocationAPI}
\end{figure}

The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap.


713 | \subsection{Out of Memory} |
---|
714 | |
---|
715 | Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory; |
---|
716 | hence the need to return an alternate value for a zero-sized allocation. |
---|
717 | A different approach allowed by @C API@ is to abort a program when out of memory and return @nullptr@ for a zero-sized allocation. |
---|
718 | In theory, notifying the programmer of memory failure allows recovery; |
---|
719 | in practice, it is almost impossible to gracefully recover when out of memory. |
---|
720 | Hence, the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen because no pseudo allocation is necessary. |


\subsection{C Interface}

For C, it is possible to increase the functionality and orthogonality of the dynamic-memory API to make allocation better for programmers.

For existing C allocation routines:
\begin{itemize}
\item
@calloc@ sets the sticky zero-fill property.
\item
@memalign@, @aligned_alloc@, @posix_memalign@, @valloc@ and @pvalloc@ set the sticky alignment property.
\item
@realloc@ and @reallocarray@ preserve sticky properties.
\end{itemize}

The C dynamic-memory API is extended with the following routines:

\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
extends @calloc@ for allocating a dynamic array of objects without explicitly calculating the total array size, but \emph{without} zero-filling the memory.
@aalloc@ is significantly faster than @calloc@, which is the only alternative given by the standard memory-allocation routines.

\noindent\textbf{Usage}
@aalloc@ takes two parameters.
\begin{itemize}
\item
@dim@: number of array objects
\item
@elemSize@: size of array object
\end{itemize}
It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.
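For example, the following sketch contrasts @aalloc@ with its only standard alternative, where @calloc@ must also pay for zero-filling (the dimension 100 is illustrative):
\begin{lstlisting}
int * ip = aalloc( 100, sizeof( int ) );	// 100 uninitialized ints, no manual size calculation
int * zp = calloc( 100, sizeof( int ) );	// standard alternative, but must zero-fill
free( zp );
free( ip );
\end{lstlisting}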

\paragraph{\lstinline{void * resize( void * oaddr, size_t size )}}
extends @realloc@ for resizing an existing allocation \emph{without} copying previous data into the new allocation or preserving sticky properties.
@resize@ is significantly faster than @realloc@, which is the only alternative.

\noindent\textbf{Usage}
@resize@ takes two parameters.
\begin{itemize}
\item
@oaddr@: address to be resized
\item
@size@: new allocation size (smaller or larger than previous)
\end{itemize}
It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero.
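@resize@ suits cases where the old contents are dead, \eg growing a scratch buffer that is about to be overwritten.
A minimal sketch (the buffer sizes are illustrative):
\begin{lstlisting}
char * buf = malloc( 512 );		// initial scratch buffer
// ... later, a larger buffer is needed and the old contents are no longer used
buf = resize( buf, 4096 );		// grow without copying; new contents are undefined
free( buf );
\end{lstlisting}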

\paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
extends @aalloc@ and @memalign@ for allocating an aligned dynamic array of objects.
It sets the sticky alignment property.

\noindent\textbf{Usage}
@amemalign@ takes three parameters.
\begin{itemize}
\item
@alignment@: alignment requirement
\item
@dim@: number of array objects
\item
@elemSize@: size of array object
\end{itemize}
It returns the address of the aligned dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.

\paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
extends @amemalign@ with zero fill and has the same usage as @amemalign@.
It sets the sticky zero-fill and alignment properties.
It returns the address of the aligned, zero-filled dynamic array or @NULL@ if either @dim@ or @elemSize@ are zero.
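For example, a sketch allocating cache-line-aligned arrays (the 64-byte alignment and dimension are illustrative):
\begin{lstlisting}
double * dp = amemalign( 64, 10, sizeof( double ) );	// 10 doubles on a 64-byte boundary, uninitialized
double * zp = cmemalign( 64, 10, sizeof( double ) );	// same shape, but zero-filled
free( zp );
free( dp );
\end{lstlisting}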

\paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
returns the alignment of the dynamic object for use in aligning similar allocations.

\noindent\textbf{Usage}
@malloc_alignment@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns the alignment of the given object, where objects not allocated with alignment return the minimal allocation alignment.

\paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
returns true if the object has the zero-fill sticky property for use in zero filling similar allocations.

\noindent\textbf{Usage}
@malloc_zero_fill@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns true if the zero-fill sticky property is set and false otherwise.

\paragraph{\lstinline{size_t malloc_size( void * addr )}}
returns the request size of the dynamic object (updated when an object is resized) for use in similar allocations.
See also @malloc_usable_size@.

\noindent\textbf{Usage}
@malloc_size@ takes one parameter.
\begin{itemize}
\item
@addr@: address of an allocated object.
\end{itemize}
It returns the request size or zero if @addr@ is @NULL@.
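Together, these three introspection routines allow constructing an allocation ``similar'' to an existing one, \ie with the same size, alignment, and fill property.
The following sketch shows one way to combine them; @similar@ is an illustrative helper, not part of the llheap API:
\begin{lstlisting}
#include <malloc.h>							// memalign
#include <string.h>							// memset

void * similar( void * addr ) {				// hypothetical helper
	size_t size = malloc_size( addr );			// request size of the original
	size_t align = malloc_alignment( addr );	// alignment of the original (minimal if unaligned)
	void * np = memalign( align, size );		// minimal alignment is also a valid power of two
	if ( np != NULL && malloc_zero_fill( addr ) ) {
		memset( np, 0, size );				// replicate the zero-fill property
	}
	return np;
}
\end{lstlisting}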

\paragraph{\lstinline{int malloc_stats_fd( int fd )}}
changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@).

\noindent\textbf{Usage}
@malloc_stats_fd@ takes one parameter.
\begin{itemize}
\item
@fd@: file descriptor.
\end{itemize}
It returns the previous file descriptor.

\paragraph{\lstinline{size_t malloc_expansion()}}
\label{p:malloc_expansion}
sets the amount (in bytes) to extend the heap when there is insufficient free storage to service an allocation request.
It returns the heap extension size used throughout a program when requesting more memory from the system using the @sbrk@ system-call, \ie it is called once at heap initialization.

\paragraph{\lstinline{size_t malloc_mmap_start()}}
sets the crossover size between allocations occurring in the @sbrk@ area and allocations that are separately mapped.
It returns the crossover point used throughout a program, \ie it is called once at heap initialization.

\paragraph{\lstinline{size_t malloc_unfreed()}}
\label{p:malloc_unfreed}
sets the amount subtracted to adjust for unfreed program storage (debug only).
It returns the new subtraction amount and is called by @malloc_stats@.
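Since @malloc_expansion@ and @malloc_mmap_start@ are consulted once at heap initialization, a program tunes them by supplying its own definitions in place of llheap's defaults.
The following sketch assumes the defaults are replaceable (\eg weak symbols); the values are illustrative:
\begin{lstlisting}
#include <stddef.h>

// application-supplied overrides, consulted once at heap initialization
size_t malloc_expansion() { return 16 * 1024 * 1024; }	// extend the heap 16 MiB at a time
size_t malloc_mmap_start() { return 512 * 1024; }		// separately map allocations of 512 KiB and larger
\end{lstlisting}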


\subsection{\CC Interface}

The following extensions take advantage of overload polymorphism in the \CC type-system.

\paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
extends @resize@ with an alignment re\-quirement.

\noindent\textbf{Usage}
The aligned @resize@ takes three parameters.
\begin{itemize}
\item
@oaddr@: address to be resized
\item
@nalign@: alignment requirement
\item
@size@: new allocation size (smaller or larger than previous)
\end{itemize}
It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.

\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
extends @realloc@ with an alignment re\-quirement and has the same usage as the aligned @resize@.
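A brief sketch of both aligned overloads (the sizes and alignments are illustrative):
\begin{lstlisting}
double * d = (double *)malloc( 10 * sizeof( double ) );
d = (double *)realloc( d, 64, 20 * sizeof( double ) );	// grow and realign to 64 bytes; old values copied
d = (double *)resize( d, 4096, 40 * sizeof( double ) );	// grow and realign again; old values discarded
free( d );
\end{lstlisting}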


\subsection{\CFA Interface}

The following extensions take advantage of overload polymorphism in the \CFA type-system.
The key safety advantage of the \CFA type system is using the return type to select overloads;
hence, a polymorphic routine knows the returned type and its size.
This capability is used to remove the object size parameter and correctly cast the return storage to match the result type.
For example, the following is the \CFA wrapper for C @malloc@:
\begin{cfa}
forall( T & | sized(T) ) {
	T * malloc( void ) {
		if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation
		else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation
	} // malloc
} // distribution
\end{cfa}
and is used as follows:
\begin{lstlisting}
int * i = malloc();
double * d = malloc();
struct Spinlock { ... } __attribute__(( aligned(128) ));
Spinlock * sl = malloc();
\end{lstlisting}
where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type.
This interface removes many of the common allocation errors in C programs.
\VRef[Figure]{f:CFADynamicAllocationAPI} shows the \CFA wrappers for the equivalent C/\CC allocation routines with the same semantic behaviour.

\begin{figure}
\begin{lstlisting}
T * malloc( void );
T * aalloc( size_t dim );
T * calloc( size_t dim );
T * resize( T * ptr, size_t size );
T * realloc( T * ptr, size_t size );
T * memalign( size_t align );
T * amemalign( size_t align, size_t dim );
T * cmemalign( size_t align, size_t dim );
T * aligned_alloc( size_t align );
int posix_memalign( T ** ptr, size_t align );
T * valloc( void );
T * pvalloc( void );
\end{lstlisting}
\caption{\CFA C-Style Dynamic-Allocation API}
\label{f:CFADynamicAllocationAPI}
\end{figure}

In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase the orthogonality and usability of dynamic-memory allocation.
This interface helps programmers in three ways.
\begin{itemize}
\item
naming: \CFA regular and @ttype@ polymorphism (@ttype@ polymorphism in \CFA is similar to \CC variadic templates) is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations.
\item
named arguments: individual allocation properties are specified using postfix function call, so programmers do not have to remember parameter positions in allocation calls.
\item
object size: like the \CFA C-interface, programmers do not have to specify object size or cast allocation results.
\end{itemize}
Note, postfix function call is an alternative call syntax, using backtick @`@, where the argument appears before the function name, \eg
\begin{cfa}
duration ?@`@h( int h );		// ? denotes the position of the function operand
duration ?@`@m( int m );
duration ?@`@s( int s );
duration dur = 3@`@h + 42@`@m + 17@`@s;
\end{cfa}

\paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}}
is overloaded with a variable number of specific allocation operations, or an integer dimension parameter followed by a variable number of specific allocation operations.
These allocation operations can be passed as named arguments when calling the \lstinline{alloc} routine.
A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations.
The only restriction is for properties @realloc@ and @resize@, which cannot be combined.

The allocation property functions are:
\subparagraph{\lstinline{T_align ?`align( size_t alignment )}}
to align the allocation.
The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two, \eg:
\begin{cfa}
int * i0 = alloc( @4096`align@ ); sout | i0 | nl;
int * i1 = alloc( 3, @4096`align@ ); sout | i1; for ( i; 3 ) sout | &i1[i]; sout | nl;
\end{cfa}
\begin{lstlisting}
0x555555572000
0x555555574000 0x555555574000 0x555555574004 0x555555574008
\end{lstlisting}
returns a dynamic object and object array aligned on a 4096-byte boundary.

\subparagraph{\lstinline{S_fill(T) ?`fill ( /* various types */ )}}
to initialize storage.
There are three ways to fill storage:
\begin{enumerate}
\item
A char fills each byte of each object.
\item
An object of the returned type fills each object.
\item
An object array pointer fills some or all of the corresponding object array.
\end{enumerate}
For example:
\begin{cfa}[numbers=left]
int * i0 = alloc( @0n`fill@ ); sout | *i0 | nl;  // disambiguate 0
int * i1 = alloc( @5`fill@ ); sout | *i1 | nl;
int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl;
int * i3 = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | i3[i]; sout | nl;
int * i4 = alloc( 5, @0xdeadbeefN`fill@ ); for ( i; 5 ) sout | hex( i4[i] ); sout | nl;
int * i5 = alloc( 5, @i3`fill@ ); for ( i; 5 ) sout | i5[i]; sout | nl;
int * i6 = alloc( 5, @[i3, 3]`fill@ ); for ( i; 5 ) sout | i6[i]; sout | nl;
\end{cfa}
\begin{lstlisting}[numbers=left]
0
5
0xfefefefe
5 5 5 5 5
0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef
5 5 5 5 5
5 5 5 -555819298 -555819298	 // two undefined values
\end{lstlisting}
Examples 1 to 3 fill an object with a value or characters.
Examples 4 to 7 fill an array of objects with values, another array, or part of an array.

\subparagraph{\lstinline{S_resize(T) ?`resize( void * oaddr )}}
used to resize, realign, and fill, where the old object data is not copied to the new object.
The old object type may be different from the new object type, since the values are not used.
For example:
\begin{cfa}[numbers=left]
int * i = alloc( @5`fill@ ); sout | i | *i;
i = alloc( @i`resize@, @256`align@, @7`fill@ ); sout | i | *i;
double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ ); sout | d | *d;
\end{cfa}
\begin{lstlisting}[numbers=left]
0x55555556d5c0 5
0x555555570000 7
0x555555571000 13.5
\end{lstlisting}
Examples 2 and 3 change the alignment, fill, and size for the initial storage of @i@.

\begin{cfa}[numbers=left]
int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl;
ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;
ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
\end{cfa}
\begin{lstlisting}[numbers=left]
5 5 5 5 5
7 7 7 7 7 7 7 7 7 7
0x55555556d560 0x555555571a00 13 13 13 13 13
0x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2
\end{lstlisting}
Examples 2 to 4 change the array size, alignment, and fill for the initial storage of @ia@.

\subparagraph{\lstinline{S_realloc(T) ?`realloc( T * a )}}
used to resize, realign, and fill, where the old object data is copied to the new object.
The old object type must be the same as the new object type, since the values are used.
Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter.
For example:
\begin{cfa}[numbers=left]
int * i = alloc( @5`fill@ ); sout | i | *i;
i = alloc( @i`realloc@, @256`align@ ); sout | i | *i;
i = alloc( @i`realloc@, @4096`align@, @13`fill@ ); sout | i | *i;
\end{cfa}
\begin{lstlisting}[numbers=left]
0x55555556d5c0 5
0x555555570000 5
0x555555571000 5
\end{lstlisting}
Examples 2 and 3 change the alignment for the initial storage of @i@.
The @13`fill@ in example 3 does nothing because no extra space is added.

\begin{cfa}[numbers=left]
int * ia = alloc( 5, @5`fill@ ); for ( i; 5 ) sout | ia[i]; sout | nl;
ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;
ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ ); sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
\end{cfa}
\begin{lstlisting}[numbers=left]
5 5 5 5 5
5 5 5 5 5 7 7 7 7 7
0x55555556c560 0x555555570a00 5
0x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2
\end{lstlisting}
Examples 2 to 4 change the array size, alignment, and fill for the initial storage of @ia@.
The @13`fill@ in example 3 does nothing because no extra space is added.

These \CFA allocation features are used extensively in the development of the \CFA runtime.