Index: doc/papers/llheap/Paper.tex
===================================================================
--- doc/papers/llheap/Paper.tex	(revision 9eb7f07cfca64b294b593355d1385e8bd1d0166c)
+++ doc/papers/llheap/Paper.tex	(revision c8f0199daff0565cb3f18b3bc6e07c382220342a)
@@ -252,6 +252,6 @@
 Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex especially in a multi-threaded program~\cite{Huang06}.
 However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
-Stack memory is managed by the program call/return-mechanism using a simple LIFO technique, which works well for sequential programs.
-For stackful coroutines and user threads, a new stack is commonly created in dynamic-allocation memory.
+Stack memory is managed by the program call/return-mechanism using a LIFO technique, which works well for sequential programs.
+For stackful coroutines and user threads, a new stack is commonly created in the dynamic-allocation memory.
 This work focuses solely on management of the dynamic-allocation memory.
 
@@ -293,8 +293,8 @@
 \begin{enumerate}[leftmargin=*,itemsep=0pt]
 \item
-Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running on multiple kernel threads (M:N threading).
-
-\item
-Extend the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, whether an allocation is zero fill, and allocation alignment.
+Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC~\cite{uC++} and \CFA~\cite{Moss18,Delisle21} using user-level threads running on multiple kernel threads (M:N threading).
+
+\item
+Extend the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, whether an allocation is zero filled, and the allocation alignment.
 
 \item
@@ -365,5 +365,5 @@
 
 The following discussion is a quick overview of the moving-pieces that affect the design of a memory allocator and its performance.
-It is assumed that dynamic allocates and deallocates acquire storage for a program variable, referred to as an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
+Dynamic allocation and deallocation acquire and release storage for a program variable, called an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
 Space for each allocated object comes from the dynamic-allocation zone.
 
@@ -378,6 +378,5 @@
 
 Figure~\ref{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
-The \newterm{management data} is a data structure located at a known memory address and contains all information necessary to manage the storage data.
-The management data starts with fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
+The \newterm{management data} is a data structure located at a known memory address and contains fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
 For multi-threaded programs, additional management data may exist in \newterm{thread-local storage} (TLS) for each kernel thread executing the program.
 The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
@@ -385,6 +384,6 @@
 \ie only the program knows the location of allocated storage not the memory allocator.
 Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
-Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
-if there are multiple reserved blocks, they are also chained together, usually internally.
+Reserved memory (dark grey) is one or more blocks of memory obtained from the \newterm{operating system} (OS) but not yet allocated to the program;
+if there are multiple reserved blocks, they are also chained together.
 
 \begin{figure}
@@ -395,5 +394,5 @@
 \end{figure}
 
-In most allocator designs, allocated objects have management data embedded within them.
+In many allocator designs, allocated objects and reserved blocks have management data embedded within them (see also Section~\ref{s:ObjectContainers}).
 Figure~\ref{f:AllocatedObject} shows an allocated object with a header, trailer, and optional spacing around the object.
 The header contains information about the object, \eg size, type, etc.
@@ -404,8 +403,8 @@
 When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
 
-A free object also contains management data, \eg size, pointers, etc.
+A free object often contains management data, \eg size, pointers, etc.
 Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
 For internal chaining, the amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, allocation requests less than 16 bytes are rounded up.
-The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new management information and/or program data.
+The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new program data and/or management information.
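The internal chaining and minimum-allocation-size rule described above can be sketched in C. This is a minimal illustration under stated assumptions, not llheap's implementation: the @FreeNode@ type and @push_free@/@round_request@ names are hypothetical, and the 16-byte node size assumes a 64-bit machine where @size_t@ and a pointer are 8 bytes each.

```c
#include <assert.h>
#include <stddef.h>

// Sketch (not llheap's code): a free-list node embedded in the freed block
// itself, so the list consumes no extra storage.  On a 64-bit machine the
// node is 16 bytes, which becomes the minimum allocation size.
typedef struct FreeNode {
    size_t size;            // management data overwriting the program's data
    struct FreeNode *next;  // link field placed inside the unused block
} FreeNode;

static size_t round_request(size_t request) {
    return request < sizeof(FreeNode) ? sizeof(FreeNode) : request;
}

// Freeing a block overwrites its storage with management information.
static FreeNode *push_free(FreeNode *head, void *block, size_t size) {
    FreeNode *node = (FreeNode *)block;  // reuse the block's own memory
    node->size = size;
    node->next = head;
    return node;
}
```

Pushing two blocks shows the link fields living inside the blocks themselves, and any request under 16 bytes rounding up to the node size.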
 
 \begin{figure}
@@ -428,5 +427,5 @@
 \label{s:Fragmentation}
 
-Fragmentation is memory requested from the operating system but not used by the program;
+Fragmentation is memory requested from the OS but not used by the program;
 hence, allocated objects are not fragmentation.
 Figure~\ref{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.
@@ -443,8 +442,8 @@
 An allocator should strive to keep internal management information to a minimum.
 
-\newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
+\newterm{External fragmentation} is all memory space reserved from the OS but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
 This memory is problematic in two ways: heap blowup and highly fragmented memory.
 \newterm{Heap blowup} occurs when freed memory cannot be reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
-Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become very small.
+Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become too small to service requests.
 % Figure~\ref{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
 Heap blowup can occur due to allocator policies that are too restrictive in reusing freed memory (the allocated size cannot use a larger free block) and/or no coalescing of free storage.
@@ -452,5 +451,5 @@
 % Memory is highly fragmented when most free blocks are unusable because of their sizes.
 % For example, Figure~\ref{f:Contiguous} and Figure~\ref{f:HighlyFragmented} have the same quantity of external fragmentation, but Figure~\ref{f:HighlyFragmented} is highly fragmented.
-% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the operating system.
+% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the OS.
 
 % \begin{figure}
@@ -475,5 +474,5 @@
 The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
 Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
 Any storage larger than the request can become spacing after the object or be split into a smaller free object.
 % The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.
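The two search policies mentioned above can be sketched over an array standing in for the free list. This is an illustrative sketch only; the function names are hypothetical and a real free list is a linked structure, not an array of sizes.

```c
#include <assert.h>
#include <stddef.h>

// First-fit policy: return the index of the first free block large enough
// for the request, or -1 if no block fits and memory must be reserved.
static int first_fit(const size_t *free_sizes, int n, size_t request) {
    for (int i = 0; i < n; i += 1)
        if (free_sizes[i] >= request) return i;
    return -1;
}

// Best-fit policy: instead select the block closest to the requested size,
// minimizing the leftover storage that becomes spacing or a split remnant.
static int best_fit(const size_t *free_sizes, int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i += 1)
        if (free_sizes[i] >= request &&
            (best == -1 || free_sizes[i] < free_sizes[best]))
            best = i;
    return best;
}
```

For the same request, the two policies can select different free objects, which in turn changes how much leftover storage must be split off or wasted.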
 
@@ -489,9 +488,9 @@
 
 The third approach is \newterm{splitting} and \newterm{coalescing algorithms}.
-When an object is allocated, if there are no free objects of the requested size, a larger free object may be split into two smaller objects to satisfy the allocation request without obtaining more memory from the operating system.
-For example, in the \newterm{buddy system}, a block of free memory is split into two equal chunks, one of those chunks is again split into two equal chunks, and so on until a block just large enough to fit the requested object is created.
-When an object is deallocated it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger object.
+When an object is allocated, if there are no free objects of the requested size, a larger free object is split into two smaller objects to satisfy the allocation request rather than obtaining more memory from the OS.
+For example, in the \newterm{buddy system}, a block of free memory is split into two equal chunks, one of those chunks is split again, and so on until a minimal block is created that fits the requested object.
+When an object is deallocated, it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger block.
 Coalescing can be done eagerly at each deallocation or lazily when an allocation cannot be fulfilled.
-In all cases, coalescing increases allocation latency, hence some allocations can cause unbounded delays during coalescing.
+In all cases, coalescing increases allocation latency, hence some allocations can cause unbounded delays.
 While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
 % Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
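The buddy-system splitting and coalescing arithmetic can be sketched as follows. This is a simplified illustration, not llheap's code: the function names are hypothetical and offsets stand in for addresses within the reserved area.

```c
#include <assert.h>
#include <stddef.h>

// Splitting: halve a free block until the next halving would be too small
// for the request, yielding the minimal buddy block that fits.
static size_t buddy_block_size(size_t block, size_t request) {
    while (block / 2 >= request) block /= 2;
    return block;
}

// Coalescing: two equal-sized free blocks are buddies, and can merge back
// into one larger block, when they are adjacent and the lower one's offset
// is aligned on twice the block size.
static int are_buddies(size_t off1, size_t off2, size_t size) {
    size_t lo = off1 < off2 ? off1 : off2;
    size_t hi = off1 < off2 ? off2 : off1;
    return hi - lo == size && lo % (2 * size) == 0;
}
```

Note the alignment test: the blocks at offsets 64 and 128 are adjacent and equal sized, yet are not buddies, because they were split from different parents and merging them would break the power-of-two structure.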
@@ -501,10 +500,10 @@
 \label{s:Locality}
 
-The principle of locality recognizes that programs tend to reference a small set of data, called a working set, for a certain period of time, where a working set is composed of temporal and spatial accesses~\cite{Denning05}.
+The principle of locality recognizes that programs tend to reference a small set of data, called a \newterm{working set}, for a certain period of time, composed of temporal and spatial accesses~\cite{Denning05}.
 % Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
 % Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
-Hardware takes advantage of temporal and spatial locality through multiple levels of caching, \ie memory hierarchy.
+Hardware takes advantage of the working set through multiple levels of caching, \ie memory hierarchy.
 % When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
-For example, entire cache lines are transferred between memory and cache and entire virtual-memory pages are transferred between disk and memory.
+For example, entire cache lines are transferred between cache and memory, and entire virtual-memory pages are transferred between memory and disk.
 % A program exhibiting good locality has better performance due to fewer cache misses and page faults\footnote{With the advent of large RAM memory, paging is becoming less of an issue in modern programming.}.
 
@@ -532,10 +531,10 @@
 \label{s:MutualExclusion}
 
 \newterm{Mutual exclusion} provides sequential access to the shared management data of the heap.
 There are two performance issues for mutual exclusion.
 First is the overhead necessary to perform (at least) a hardware atomic operation every time a shared resource is accessed.
 Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
 Contention can be reduced in a number of ways:
-1) Using multiple fine-grained locks versus a single lock, spreading the contention across a number of locks.
+1) Using multiple fine-grained locks versus a single lock to spread the contention across a number of locks.
 2) Using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff.
 3) Using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
@@ -551,6 +550,6 @@
 a memory allocator can only affect the latter two.
 
-Assume two objects, object$_1$ and object$_2$, share a cache line.
-\newterm{Program-induced false-sharing} occurs when thread$_1$ passes a reference to object$_2$ to thread$_2$, and then threads$_1$ modifies object$_1$ while thread$_2$ modifies object$_2$.
+Specifically, assume two objects, O$_1$ and O$_2$, share a cache line, and two threads, T$_1$ and T$_2$, access them.
+\newterm{Program-induced false-sharing} occurs when T$_1$ passes a reference to O$_2$ to T$_2$, and then T$_1$ modifies O$_1$ while T$_2$ modifies O$_2$.
 % Figure~\ref{f:ProgramInducedFalseSharing} shows when Thread$_1$ passes Object$_2$ to Thread$_2$, a false-sharing situation forms when Thread$_1$ modifies Object$_1$ and Thread$_2$ modifies Object$_2$.
 % Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
@@ -574,5 +573,5 @@
 % \label{f:FalseSharing}
 % \end{figure}
-\newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when object$_1$ and object$_2$ are heap allocated and their references are passed to thread$_1$ and thread$_2$, which modify the objects.
+\newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when O$_1$ and O$_2$ are heap allocated and their references are passed to T$_1$ and T$_2$, which modify the objects.
 % For example, in Figure~\ref{f:AllocatorInducedActiveFalseSharing}, each thread allocates an object and loads a cache-line of memory into its associated cache.
 % Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
@@ -580,5 +579,5 @@
 % is another form of allocator-induced false-sharing caused by program-induced false-sharing.
 % When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing.
-when thread$_1$ passes object$_2$ to thread$_2$, and thread$_2$ subsequently deallocates object$_2$, and then object$_2$ is reallocated to thread$_2$ while thread$_1$ is still using object$_1$.
+when T$_1$ passes O$_2$ to T$_2$, and T$_2$ subsequently deallocates O$_2$, and then O$_2$ is reallocated to T$_2$ while T$_1$ is still using O$_1$.
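A standard remedy an allocator can apply to active false-sharing is padding each object to a cache-line boundary so O$_1$ and O$_2$ never occupy the same line. The sketch below is illustrative only; the 64-byte line size and 8-byte @long@ are assumptions typical of x86-64, not portable constants.

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64  // assumed cache-line size (typical of x86-64)

// Each object is padded to a full cache line, so adjacent objects handed to
// different threads cannot share a line: writes by T1 to one object never
// invalidate the line holding the other.
typedef struct {
    long counter;                         // data modified by one thread
    char pad[CACHE_LINE - sizeof(long)];  // spacing to a line boundary
} PaddedObject;
```

The trade-off is internal fragmentation: the padding is pure spacing and cannot satisfy any other allocation while the object exists.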
 
 
@@ -593,12 +592,5 @@
 \label{s:MultiThreadedMemoryAllocatorFeatures}
 
-The following features are used in the construction of multi-threaded memory-allocators:
-\begin{enumerate}[itemsep=0pt]
-\item multiple heaps: with or without a global heap, or with or without heap ownership.
-\item object containers: with or without ownership, fixed or variable sized, global or local free-lists.
-\item hybrid private/public heap
-\item allocation buffer
-\item lock-free operations
-\end{enumerate}
+The following features are used in the construction of multi-threaded memory-allocators: multiple heaps, user-level threading, ownership, object containers, allocation buffer, and lock-free operations.
 The first feature, multiple heaps, pertains to different kinds of heaps.
 The second feature, object containers, pertains to the organization of objects within the storage area.
@@ -606,10 +598,10 @@
 
 
-\subsection{Multiple Heaps}
+\subsubsection{Multiple Heaps}
 \label{s:MultipleHeaps}
 
 A multi-threaded allocator has potentially multiple threads and heaps.
 The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
-The spectrum ranges from multiple threads using a single heap, denoted as T:1 (see Figure~\ref{f:SingleHeap}), to multiple threads sharing multiple heaps, denoted as T:H (see Figure~\ref{f:SharedHeaps}), to one thread per heap, denoted as 1:1 (see Figure~\ref{f:PerThreadHeap}), which is almost back to a single-threaded allocator.
+The spectrum ranges from multiple threads using a single heap, denoted as T:1, to multiple threads sharing multiple heaps, denoted as T:H, to one thread per heap, denoted as 1:1, which is almost back to a single-threaded allocator.
 
 \begin{figure}
@@ -635,12 +627,12 @@
 \end{figure}
 
-\paragraph{T:1 model} where all threads allocate and deallocate objects from one heap.
-Memory is obtained from the freed objects, or reserved memory in the heap, or from the operating system (OS);
-the heap may also return freed memory to the operating system.
+\paragraph{T:1 model (see Figure~\ref{f:SingleHeap})} where all threads allocate and deallocate objects from one heap.
+Memory is obtained from the freed objects, or reserved memory in the heap, or from the OS;
+the heap may also return freed memory to the OS.
 The arrows indicate the direction memory conceptually moves for each kind of operation: allocation moves memory along the path from the heap/operating-system to the user application, while deallocation moves memory along the path from the application back to the heap/operating-system.
 To safely handle concurrency, a single lock may be used for all heap operations or fine-grained locking for different operations.
 Regardless, a single heap may be a significant source of contention for programs with a large amount of memory allocation.
 
-\paragraph{T:H model} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
+\paragraph{T:H model (see Figure~\ref{f:SharedHeaps})} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
 The decision on when to create a new heap and which heap a thread allocates from depends on the allocator design.
 To determine which heap to access, each thread must point to its associated heap in some way.
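One common way for a thread to point to its associated heap in the T:H model is a thread-local pointer assigned on first use. The sketch below assumes C11 thread-local storage and a hypothetical hash of a thread id onto a fixed heap array; it is one possible scheme, not the allocator's actual design.

```c
#include <assert.h>
#include <stddef.h>

#define NHEAPS 4
typedef struct Heap { size_t allocations; } Heap;
static Heap heaps[NHEAPS];

// Each kernel thread caches a pointer to its assigned heap in thread-local
// storage, so only the first allocation pays the assignment cost.
static _Thread_local Heap *my_heap = NULL;

static Heap *get_heap(unsigned long tid) {
    if (my_heap == NULL) my_heap = &heaps[tid % NHEAPS];  // assign on first use
    return my_heap;
}
```

After the first call, later calls return the cached heap regardless of the id passed, which is the point: the thread-to-heap binding is established once.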
@@ -673,5 +665,5 @@
 An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
 Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
-Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the operating system.
+Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the OS.
 
 % \begin{figure}
@@ -684,23 +676,21 @@
 Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup.
 The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory.
-Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the operating system, which can be expensive.
-Depending on how the operating system provides dynamic storage to an application, returning storage may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
-In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
+Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the OS (see Section~\ref{s:Ownership}).
+Returning storage to the OS may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
+% In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
 
 Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space.
-Now, each heap obtains and returns storage to/from the global heap rather than the operating system.
+Now, each heap obtains and returns storage to/from the global heap rather than the OS.
 Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold.
-Similarly, the global heap buffers this memory, obtaining and returning storage to/from the operating system as necessary.
+Similarly, the global heap buffers this memory, obtaining and returning storage to/from the OS as necessary.
 The global heap does not have its own thread and makes no internal allocation requests;
 instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations.
 Hence, the worst-case cost of a memory operation includes all these steps.
-With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the operating system to achieve the same goal and is independent of the mechanism used by the operating system to present dynamic memory to an address space.
-
+With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the OS to achieve the same goal and is independent of the mechanism used by the OS to present dynamic memory to an address space.
 However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking.
 A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention.
 In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap.
 
-
-\paragraph{1:1 model (thread heaps)} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see ownership in Section~\ref{s:Ownership}).
+\paragraph{1:1 model (see Figure~\ref{f:PerThreadHeap})} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see Section~\ref{s:Ownership}).
 An additional benefit of thread heaps is improved locality due to better memory layout.
-As each thread only allocates from its heap, all objects are consolidated in the storage area for that heap, better utilizing each CPUs cache and accessing fewer pages.
+As each thread only allocates from its heap, all objects are consolidated in the storage area for that heap, better utilizing each CPU's cache and accessing fewer pages.
@@ -708,5 +698,5 @@
 Thread heaps can also eliminate allocator-induced active false-sharing, if memory is acquired so it does not overlap at crucial boundaries with memory for another thread's heap.
 For example, assume page boundaries coincide with cache line boundaries, if a thread heap always acquires pages of memory then no two threads share a page or cache line unless pointers are passed among them.
-Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps.
+% Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps.
 
 When a thread terminates, there are two options for handling its thread heap.
@@ -720,5 +710,5 @@
 
 It is possible to use any of the heap models with user-level (M:N) threading.
-However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the operating system, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
+However, an important goal of user-level threading is fast operations (creation/termination/context-switching) achieved by not interacting with the OS, which allows creating large numbers of high-performance interacting threads ($>$ 10,000).
 It is difficult to retain this goal, if the user-threading model is directly involved with the heap model.
 Figure~\ref{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.
@@ -732,15 +722,15 @@
 \end{figure}
 
-Adopting this model results in a subtle problem with shared heaps.
-With kernel threading, an operation that is started by a kernel thread is always completed by that thread.
-For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
+Adopting user threading results in a subtle problem with shared heaps.
+With kernel threading, an operation started by a kernel thread is always completed by that thread.
+For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap, even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
 However, this correctness property is not preserved for user-level threading.
 A user thread can start an allocation/deallocation on one kernel thread, be preempted (time slice), and continue running on a different kernel thread to complete the operation~\cite{Dice02}.
 When the user thread continues on the new kernel thread, it may have pointers into the previous kernel-thread's heap and hold locks associated with it.
 To get the same kernel-thread safety, time slicing must be disabled/\-enabled around these operations, so the user thread cannot jump to another kernel thread.
-However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption does not happen that frequently.
+However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is infrequent (milliseconds).
 Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
-Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in both short and long term starvation;
-techniques like rollforward can be used to force an eventual preemption.
+Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in starvation;
+techniques like rolling the preemption forward to the next context switch can be used.
 
 
@@ -800,4 +790,38 @@
 % For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially.
 % If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
+
+For thread heaps with ownership, it is possible to combine these schemes into a hybrid approach with both private and public heaps.% (see~Figure~\ref{f:HybridPrivatePublicHeap}).
+The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
+In the hybrid approach, a thread allocates first from its private heap and then from its public heap if no free memory exists in the private heap.
+Similarly, a thread deallocates an object first to its private heap and then to the public heap.
+Both private and public heaps can allocate from or deallocate to the global heap when free memory is exhausted or excess free memory accumulates, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
+% Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
+Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or to batch objects for an owner heap into a single return.
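The private-then-public-then-global allocation order can be sketched with illustrative counters. This is a toy sketch of the policy only; the type and function names are hypothetical, and a real allocator moves storage between heaps rather than decrementing counts.

```c
#include <assert.h>
#include <string.h>

// Free-storage counts standing in for the three heap levels.
typedef struct { int private_free, public_free, global_free; } Heaps;

// Try the lock-free private heap first, then the locked public heap, and
// fall back to the shared global heap; returns which level served the
// request for illustration.
static const char *hybrid_alloc(Heaps *h) {
    if (h->private_free > 0) { h->private_free -= 1; return "private"; }
    if (h->public_free > 0)  { h->public_free  -= 1; return "public"; }
    h->global_free -= 1;
    return "global";
}
```

Successive requests drain the cheap private level before touching the locked levels, which is why the common case pays no locking cost.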
+
+% \begin{figure}
+% \centering
+% \input{PrivatePublicHeaps.pstex_t}
+% \caption{Hybrid Private/Public Heap for Per-thread Heaps}
+% \label{f:HybridPrivatePublicHeap}
+% \vspace{10pt}
+% \input{RemoteFreeList.pstex_t}
+% \caption{Remote Free-List}
+% \label{f:RemoteFreeList}
+% \end{figure}
+
+% As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
+% For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
+% To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
+% Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
+% Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
+ 
+% If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
+% In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
+% If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
+ 
+% The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
+% Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
+% In many cases, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
+% For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
 
 
@@ -817,5 +841,5 @@
 
 
-\subsection{Object Containers}
+\subsubsection{Object Containers}
 \label{s:ObjectContainers}
 
@@ -827,5 +851,5 @@
 \eg an object is accessed by the program after it is allocated, while the header is accessed by the allocator after it is free.
 
-The alternative factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks} in~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
+An alternative approach factors common header data out to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks}~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
 The header for the container holds information necessary for all objects in the container;
 a trailer may also be used at the end of the container.
@@ -862,5 +886,5 @@
 
 
-\subsubsection{Container Ownership}
+\paragraph{Container Ownership}
 \label{s:ContainerOwnership}
 
@@ -894,8 +918,8 @@
 
 Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
-For example, if a container changes ownership through the global heap, then when a thread allocates an object from the newly acquired container it is actively false-sharing even though no objects are passed among threads.
+For example, if a container changes ownership through the global heap, then a thread allocating from the newly acquired container is actively false-sharing even though no objects are passed among threads.
 Note, once the thread frees the object, no more false sharing can occur until the container changes ownership again.
 To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
-One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the operating system.
+One implementation approach that increases the freedom to return a free container to the OS involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the OS.
 
 % \begin{figure}
@@ -930,5 +954,5 @@
 
 
-\subsubsection{Container Size}
+\paragraph{Container Size}
 \label{s:ContainerSize}
 
@@ -941,5 +965,5 @@
 However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation.
 With smaller containers, not only are there more containers, but a second new problem arises where objects are larger than the container.
-In general, large objects, \eg greater than 64\,KB, are allocated directly from the operating system and are returned immediately to the operating system to reduce long-term external fragmentation.
+In general, large objects, \eg greater than 64\,KB, are allocated directly from the OS and are returned immediately to the OS to reduce long-term external fragmentation.
 If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate.
 Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header.
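One common way to locate the container header is to allocate containers on a power-of-two boundary, so the header address can be computed from any interior object address by masking low-order bits. A minimal sketch, assuming 64\,KB-aligned containers (the alignment constant and header fields are illustrative, not llheap's actual layout):

```c
#include <stdint.h>
#include <stddef.h>

#define CONTAINER_ALIGN ((uintptr_t)65536) /* assumed power-of-two container alignment */

/* Hypothetical per-container header holding data common to all its objects. */
typedef struct {
	size_t object_size;  /* all objects in a container have the same size */
	size_t free_count;
} ContainerHeader;

/* Locate the enclosing container's header by clearing the low-order address
   bits; works because containers start on a CONTAINER_ALIGN boundary. */
static inline ContainerHeader *container_header(const void *object) {
	return (ContainerHeader *)((uintptr_t)object & ~(CONTAINER_ALIGN - 1));
}
```

This keeps the per-object cost at zero bits for header location, at the price of fixing a single container alignment across all container sizes.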
@@ -970,5 +994,5 @@
 
 
-\subsubsection{Container Free-Lists}
+\paragraph{Container Free-Lists}
 \label{s:containersfreelists}
 
@@ -1005,49 +1029,10 @@
 
 
-\subsubsection{Hybrid Private/Public Heap}
-\label{s:HybridPrivatePublicHeap}
-
-Section~\ref{s:Ownership} discusses advantages and disadvantages of public heaps (T:H model and with ownership) and private heaps (thread heaps with ownership).
-For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps (see~Figure~\ref{f:HybridPrivatePublicHeap}).
-The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
-In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
-Similarly, a thread first deallocates an object to its private heap, and second to the public heap.
-Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
-Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
-Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.
-
-\begin{figure}
-\centering
-\input{PrivatePublicHeaps.pstex_t}
-\caption{Hybrid Private/Public Heap for Per-thread Heaps}
-\label{f:HybridPrivatePublicHeap}
-% \vspace{10pt}
-% \input{RemoteFreeList.pstex_t}
-% \caption{Remote Free-List}
-% \label{f:RemoteFreeList}
-\end{figure}
-
-As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
-For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
-To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
-Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
-Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
-
-If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
-In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
-If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
-
-The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
-Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
-In many case, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
-For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
-
-
-\subsection{Allocation Buffer}
+\subsubsection{Allocation Buffer}
 \label{s:AllocationBuffer}
 
 An allocation buffer is reserved memory (see Section~\ref{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
 That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
-Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
+Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or OS, respectively.
 The allocation buffer reduces contention and the number of global/operating-system calls.
 For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
@@ -1062,5 +1047,5 @@
 
 Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated.
-A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or operating system.
+A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or OS.
 The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer.
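The buffer mechanism described above amounts to bump-pointer allocation. A minimal sketch, assuming 16-byte object alignment; the names and refill policy are illustrative, not llheap's implementation:

```c
#include <stddef.h>

/* An allocation buffer: objects are carved off the front until exhausted. */
typedef struct {
	char *next; /* the pointer noted in the text locating the next free object */
	char *end;
} AllocBuffer;

/* Carve one object from the buffer; NULL means the buffer is exhausted and
   the caller must refill from the global heap or OS. */
void *buffer_alloc(AllocBuffer *b, size_t size) {
	size = (size + 15) & ~(size_t)15;  /* keep objects 16-byte aligned */
	if ((size_t)(b->end - b->next) < size) return NULL;
	void *p = b->next;
	b->next += size;
	return p;
}

/* Demo buffer over static storage, for illustration only. */
static char demo_storage[64];
static AllocBuffer demo_buffer = { demo_storage, demo_storage + 64 };
```

Each allocation is a pointer bump, so the fastpath is a handful of instructions with no search over free lists.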
 
@@ -1068,8 +1053,8 @@
 For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made.
 This lazy method of constructing objects is beneficial in terms of paging and caching.
-For example, although an entire container, possibly spanning several pages, is allocated from the operating system, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
-
-
-\subsection{Lock-Free Operations}
+For example, although an entire container, possibly spanning several pages, is allocated from the OS, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
+
+
+\subsubsection{Lock-Free Operations}
 \label{s:LockFreeOperations}
 
@@ -1194,7 +1179,7 @@
 % A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
 % \end{quote}
-% If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
+% If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
 % Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
-% Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the operating system is providing the second thread via the signal handler.
+% Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
 % 
 % Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.
@@ -1256,7 +1241,7 @@
 A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
 \end{quote}
-If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
+If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
 Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
-Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the operating system is providing the second thread via the signal handler.
+Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
 
 Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.
@@ -1273,5 +1258,5 @@
 For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
 However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
-More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
+More OS support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
 So the 1:1 model had no atomic actions along the fastpath and no special operating-system support requirements.
 The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in Section~\ref{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
@@ -1308,5 +1293,5 @@
 A primary goal of llheap is low latency, hence the name low-latency heap (llheap).
 Two forms of latency are internal and external.
-Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
+Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the OS.
 Ideally latency is $O(1)$ with a small constant.
 
@@ -1314,5 +1299,5 @@
 The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
 
-To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
+To obtain $O(1)$ external latency means obtaining one large storage area from the OS and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
 Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
 The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \pageref{p:malloc_expansion}).
@@ -1329,7 +1314,7 @@
 headers per allocation versus containers,
 no coalescing to minimize latency,
-global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
+global heap memory (pool) obtained from the OS using @mmap@ to create and reuse heaps needed by threads,
 local reserved memory (pool) per heap obtained from global pool,
-global reserved memory (pool) obtained from the operating system using @sbrk@ call,
+global reserved memory (pool) obtained from the OS using the @sbrk@ call,
 optional fast-lookup table for converting allocation requests into bucket sizes,
 optional statistic-counters table for accumulating counts of allocation operations.
@@ -1358,5 +1343,5 @@
 Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
 All objects in a bucket are of the same size.
-The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
+The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the OS.
 Each free bucket of a specific size has two lists.
 1) A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
@@ -1367,5 +1352,5 @@
 Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
 First, the allocation is divided into small (@sbrk@) or large (@mmap@).
-For large allocations, the storage is mapped directly from the operating system.
+For large allocations, the storage is mapped directly from the OS.
 For small allocations, $S$ is quantized into a bucket size.
 Quantizing is performed using a binary search over the ordered bucket array.
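The quantization just described is a lower-bound binary search over the ordered bucket array. A sketch over a few illustrative sizes (llheap's actual table has 91 sizes from 16 to 4M):

```c
#include <stddef.h>

/* A few illustrative bucket sizes; llheap's table has 91 sizes from 16 to 4M. */
static const size_t bucket_sizes[] = { 16, 32, 48, 64, 96, 128 };

/* Lower-bound binary search: smallest bucket size >= the request.
   Assumes request <= largest bucket (larger requests are mmap-ed directly). */
size_t quantize(size_t request) {
	size_t lo = 0, hi = sizeof(bucket_sizes) / sizeof(bucket_sizes[0]) - 1;
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		if (bucket_sizes[mid] < request) lo = mid + 1;
		else hi = mid;
	}
	return bucket_sizes[lo];
}
```

With 91 buckets the search takes at most 7 comparisons, and the optional fast-lookup table mentioned earlier can replace it with a single indexed load for small requests.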
@@ -1378,5 +1363,5 @@
 heap's local pool,
 global pool,
-operating system (@sbrk@).
+OS (@sbrk@).
 
 \begin{algorithm}
@@ -1443,5 +1428,5 @@
 Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership.
 First, the address is divided into small (@sbrk@) or large (@mmap@).
-For large allocations, the storage is unmapped back to the operating system.
+For large allocations, the storage is unmapped back to the OS.
 For small allocations, the bucket associated with the request size is retrieved.
 If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
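The ownership check above can be sketched with a per-bucket owner-only stack plus a separate list for frees arriving from other threads. The lock-free Treiber-stack push shown for the remote list is one common choice, not necessarily llheap's (an implementation may protect the remote list with a lock instead); all names here are illustrative:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct FreeNode { struct FreeNode *next; } FreeNode;

typedef struct {
	FreeNode *local;          /* owner-only stack: push/pop without locking */
	_Atomic(FreeNode *) away; /* objects freed by non-owner threads */
} Bucket;

/* Free by the owning thread: plain (unsynchronized) stack push. */
void free_local(Bucket *b, FreeNode *n) {
	n->next = b->local;
	b->local = n;
}

/* Free by another thread: lock-free push onto the away list. */
void free_remote(Bucket *b, FreeNode *n) {
	FreeNode *head = atomic_load_explicit(&b->away, memory_order_relaxed);
	do {
		n->next = head;
	} while (!atomic_compare_exchange_weak_explicit(&b->away, &head, n,
	             memory_order_release, memory_order_relaxed));
}

/* Single-threaded demo: one local free and one remote free. */
int bucket_demo(void) {
	static Bucket b;
	static FreeNode n1, n2;
	free_local(&b, &n1);
	free_remote(&b, &n2);
	return b.local == &n1 && atomic_load(&b.away) == &n2;
}
```

Keeping the owner path free of atomic operations is what preserves the lock-free fastpath; only the rarer cross-thread free pays for synchronization.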
@@ -3044,5 +3029,5 @@
 
 \textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program.
-It makes pt3 the only memory allocator that gives memory back to the operating system as it is freed by the program.
+This makes pt3 the only memory allocator that gives memory back to the OS as it is freed by the program.
 
 % FOR 1 THREAD
Index: doc/papers/llheap/figures/AllocatorComponents.fig
===================================================================
--- doc/papers/llheap/figures/AllocatorComponents.fig	(revision 9eb7f07cfca64b294b593355d1385e8bd1d0166c)
+++ doc/papers/llheap/figures/AllocatorComponents.fig	(revision c8f0199daff0565cb3f18b3bc6e07c382220342a)
@@ -8,5 +8,4 @@
 -2
 1200 2
-6 1275 2025 2700 2625
 6 2400 2025 2700 2625
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
@@ -14,6 +13,4 @@
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
 	 2700 2025 2700 2325 2400 2325 2400 2025 2700 2025
--6
-4 2 0 50 -1 2 11 0.0000 2 165 1005 2325 2400 Management\001
 -6
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
@@ -61,7 +58,9 @@
 2 2 0 1 0 7 60 -1 13 0.000 0 0 -1 0 0 5
 	 3300 2700 6300 2700 6300 3000 3300 3000 3300 2700
-4 0 0 50 -1 2 11 0.0000 2 165 585 3300 1725 Storage\001
+4 0 0 50 -1 2 11 0.0000 2 165 1005 3300 1725 Storage Data\001
 4 2 0 50 -1 0 11 0.0000 2 165 810 3000 1875 free objects\001
 4 2 0 50 -1 0 11 0.0000 2 135 1140 3000 2850 reserve memory\001
 4 1 0 50 -1 0 11 0.0000 2 120 795 2325 1500 Static Zone\001
 4 1 0 50 -1 0 11 0.0000 2 165 1845 4800 1500 Dynamic-Allocation Zone\001
+4 2 0 50 -1 2 11 0.0000 2 165 1005 2325 2325 Management\001
+4 2 0 50 -1 2 11 0.0000 2 135 375 2325 2525 Data\001
Index: doc/papers/llheap/figures/AllocatorComponents.fig.bak
===================================================================
--- doc/papers/llheap/figures/AllocatorComponents.fig.bak	(revision 9eb7f07cfca64b294b593355d1385e8bd1d0166c)
+++ 	(revision )
@@ -1,67 +1,0 @@
-#FIG 3.2  Produced by xfig version 3.2.7b
-Landscape
-Center
-Inches
-Letter
-100.00
-Single
--2
-1200 2
-6 1275 2025 2700 2625
-6 2400 2025 2700 2625
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2700 2325 2700 2625 2400 2625 2400 2325 2700 2325
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2700 2025 2700 2325 2400 2325 2400 2025 2700 2025
--6
-4 2 0 50 -1 2 11 0.0000 2 165 1005 2325 2400 Management\001
--6
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 4200 1800 4800 1800 4800 2100 4200 2100 4200 1800
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 4200 2100 5100 2100 5100 2400 4200 2400 4200 2100
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 5100 2100 6300 2100 6300 2400 5100 2400 5100 2100
-2 2 0 1 0 7 50 -1 17 0.000 0 0 -1 0 0 5
-	 3300 1800 4200 1800 4200 2100 3300 2100 3300 1800
-2 2 0 1 0 7 50 -1 17 0.000 0 0 -1 0 0 5
-	 5400 1800 6300 1800 6300 2100 5400 2100 5400 1800
-2 2 0 1 0 7 50 -1 17 0.000 0 0 -1 0 0 5
-	 3300 2100 3600 2100 3600 2400 3300 2400 3300 2100
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3300 2400 3900 2400 3900 2700 3300 2700 3300 2400
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3900 2400 4800 2400 4800 2700 3900 2700 3900 2400
-2 2 0 1 0 7 50 -1 17 0.000 0 0 -1 0 0 5
-	 4800 2400 5400 2400 5400 2700 4800 2700 4800 2400
-2 2 0 1 0 7 50 -1 17 0.000 0 0 -1 0 0 5
-	 4800 1800 5400 1800 5400 2100 4800 2100 4800 1800
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 5400 2400 6300 2400 6300 2700 5400 2700 5400 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 3750 1950 4800 1800
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 5100 1950 3300 2100
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 3450 2250 4800 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 5100 2550 5400 2100
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2550 2175 3300 1800
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2550 2475 3300 2700
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3150 1275 3150 3000
-2 2 0 1 0 7 60 -1 13 0.000 0 0 -1 0 0 5
-	 3300 2700 6300 2700 6300 3000 3300 3000 3300 2700
-4 1 0 50 -1 0 11 0.0000 2 120 795 2325 1425 Static Zone\001
-4 1 0 50 -1 0 11 0.0000 2 165 1845 4800 1425 Dynamic-Allocation Zone\001
-4 0 0 50 -1 2 11 0.0000 2 165 585 3300 1725 Storage\001
-4 2 0 50 -1 0 11 0.0000 2 165 810 3000 1875 free objects\001
-4 2 0 50 -1 0 11 0.0000 2 135 1140 3000 2850 reserve memory\001
