Changeset 77afbb4 for doc/papers
- Timestamp:
- Jun 6, 2023, 9:24:54 PM
- Branches:
- ast-experimental, master
- Children:
- 55266c7
- Parents:
- 541dbc09
- Location:
- doc/papers/llheap
- Files:
- 1 deleted
- 2 edited
doc/papers/llheap/Paper.tex
r541dbc09 → r77afbb4

Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex especially in a multi-threaded program~\cite{Huang06}.
However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
Stack memory is managed by the program call/return-mechanism using a LIFO technique, which works well for sequential programs.
For stackful coroutines and user threads, a new stack is commonly created in the dynamic-allocation memory.
This work focuses solely on management of the dynamic-allocation memory.

…

\begin{enumerate}[leftmargin=*,itemsep=0pt]
\item
Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC~\cite{uC++} and \CFA~\cite{Moss18,Delisle21} using user-level threads running on multiple kernel threads (M:N threading).

\item
Extend the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, and whether an allocation is zero filled and/or aligned.

\item

…

The following discussion is a quick overview of the moving pieces that affect the design of a memory allocator and its performance.
Dynamic acquires and releases obtain storage for a program variable, called an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
Space for each allocated object comes from the dynamic-allocation zone.

…

Figure~\ref{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
The \newterm{management data} is a data structure located at a known memory address and contains fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
For multi-threaded programs, additional management data may exist in \newterm{thread-local storage} (TLS) for each kernel thread executing the program.
The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.

…

\ie only the program knows the location of allocated storage, not the memory allocator.
Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
Reserved memory (dark grey) is one or more blocks of memory obtained from the \newterm{operating system} (OS) but not yet allocated to the program;
if there are multiple reserved blocks, they are also chained together.

\begin{figure}
…
\end{figure}

In many allocator designs, allocated objects and reserved blocks have management data embedded within them (see also Section~\ref{s:ObjectContainers}).
Figure~\ref{f:AllocatedObject} shows an allocated object with a header, trailer, and optional spacing around the object.
The header contains information about the object, \eg size, type, etc.

…

When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.

A free object often contains management data, \eg size, pointers, etc.
Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
For internal chaining, the amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, allocation requests less than 16 bytes are rounded up.
The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new program data and/or management information.
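To make the embedded management data concrete, the following is a minimal sketch of a per-object header and internal free-list chaining; the layout and names are illustrative assumptions, not the design of any particular allocator.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Header {              // embedded header preceding each object
	size_t size;                     // object size, consulted on deallocation
} Header;

typedef struct Free {                // link stored in the unused object itself
	struct Free * next;
} Free;

static Free * freeList = NULL;       // management data at a known address

static void dealloc( void * addr ) { // illustrative free
	Free * f = (Free *)addr;         // link overwrites old program data
	f->next = freeList;              // push object on the free list
	freeList = f;
}
\end{lstlisting}
Because the link occupies the freed object's own storage, the link size bounds the minimum allocation size, matching the 16-byte example above.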
\begin{figure}
…
\label{s:Fragmentation}

Fragmentation is memory requested from the OS but not used by the program;
hence, allocated objects are not fragmentation.
Figure~\ref{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.

…

An allocator should strive to keep internal management information to a minimum.

\newterm{External fragmentation} is all memory space reserved from the OS but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
This memory is problematic in two ways: heap blowup and highly fragmented memory.
\newterm{Heap blowup} occurs when freed memory cannot be reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become too small to service requests.
% Figure~\ref{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
Heap blowup can occur due to allocator policies that are too restrictive in reusing freed memory (the allocated size cannot use a larger free block) and/or no coalescing of free storage.

…

% Memory is highly fragmented when most free blocks are unusable because of their sizes.
% For example, Figure~\ref{f:Contiguous} and Figure~\ref{f:HighlyFragmented} have the same quantity of external fragmentation, but Figure~\ref{f:HighlyFragmented} is highly fragmented.
% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the OS.

% \begin{figure}
…

The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
Any storage larger than the request can become spacing after the object or split into a smaller free object.
% The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.
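As an illustration of the sequential-fit approach, the following first-fit sketch searches a singly-linked free list and unlinks the first sufficiently large block; splitting of any residue is left to the caller, and all names are hypothetical.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Block { size_t size; struct Block * next; } Block;

// First fit: return the first free block with size >= request.
static Block * firstFit( Block ** head, size_t request ) {
	for ( Block ** prev = head; *prev != NULL; prev = &(*prev)->next ) {
		Block * b = *prev;
		if ( b->size >= request ) {
			*prev = b->next;         // unlink block from the free list
			return b;                // caller may split any residue
		}
	}
	return NULL;                     // no fit => obtain more memory
}
\end{lstlisting}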
…

The third approach is \newterm{splitting} and \newterm{coalescing algorithms}.
When an object is allocated, if there are no free objects of the requested size, a larger free object is split into two smaller objects to satisfy the allocation request rather than obtaining more memory from the OS.
For example, in the \newterm{buddy system}, a block of free memory is split into equal chunks, one of those chunks is again split, and so on until a minimal block is created that fits the requested object.
When an object is deallocated, it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger block.
Coalescing can be done eagerly at each deallocation or lazily when an allocation cannot be fulfilled.
In all cases, coalescing increases allocation latency, hence some allocations can cause unbounded delays.
While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
% Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
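The split step of the buddy system described above can be sketched as follows, under the usual power-of-two assumption; @freeLists[k]@ holds free blocks of size $2^k$, and the structure is illustrative rather than a real implementation.
\begin{lstlisting}[language=C]
#include <stddef.h>

enum { MIN_ORDER = 4, MAX_ORDER = 32 };     // smallest block is 16 bytes

typedef struct Buddy { struct Buddy * next; } Buddy;
static Buddy * freeLists[MAX_ORDER];        // freeLists[k]: blocks of size 1 << k

// Halve a free block of size 1 << k until it just fits the request;
// each split pushes the upper half (the buddy) one order lower.
static void * buddySplit( void * block, int k, size_t request ) {
	while ( k > MIN_ORDER && ((size_t)1 << (k - 1)) >= request ) {
		k -= 1;
		Buddy * buddy = (Buddy *)((char *)block + ((size_t)1 << k));
		buddy->next = freeLists[k];         // free the upper half
		freeLists[k] = buddy;
	}
	return block;                           // block now just fits the request
}
\end{lstlisting}
Coalescing reverses the split: the buddy of a block at offset $a$ and order $k$ sits at $a \oplus 2^k$, so two free buddies recombine into a block of order $k + 1$.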
…

\label{s:Locality}

The principle of locality recognizes that programs tend to reference a small set of data, called a \newterm{working set}, for a certain period of time, composed of temporal and spatial accesses~\cite{Denning05}.
% Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
% Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
Hardware takes advantage of the working set through multiple levels of caching, \ie memory hierarchy.
% When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
For example, entire cache lines are transferred between cache and memory, and entire virtual-memory pages are transferred between memory and disk.
% A program exhibiting good locality has better performance due to fewer cache misses and page faults\footnote{With the advent of large RAM memory, paging is becoming less of an issue in modern programming.}.

…

\label{s:MutualExclusion}

\newterm{Mutual exclusion} provides sequential access to the shared-management data of the heap.
There are two performance issues for mutual exclusion.
First is the overhead necessary to perform (at least) a hardware atomic operation every time a shared resource is accessed.
Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
Contention can be reduced in a number of ways, the second of which is sketched below:
1) Using multiple fine-grained locks versus a single lock to spread the contention across a number of locks.
2) Using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff.
3) Using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
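A sketch of point 2), assuming hypothetical helpers @popSharedList@ and @bumpAlloc@: if the shared free list is busy, the thread trades space for time by carving new storage instead of waiting.
\begin{lstlisting}[language=C]
#include <pthread.h>
#include <stddef.h>

extern void * popSharedList( size_t size );  // hypothetical shared free-list pop
extern void * bumpAlloc( size_t size );      // hypothetical new-storage carve

static pthread_mutex_t heapLock = PTHREAD_MUTEX_INITIALIZER;

static void * alloc( size_t size ) {
	void * obj = NULL;
	if ( pthread_mutex_trylock( &heapLock ) == 0 ) {  // uncontended?
		obj = popSharedList( size );
		pthread_mutex_unlock( &heapLock );
	}
	if ( obj == NULL ) obj = bumpAlloc( size );       // busy => new storage
	return obj;
}
\end{lstlisting}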
…

a memory allocator can only affect the latter two.

Specifically, assume two objects, O$_1$ and O$_2$, share a cache line, with threads, T$_1$ and T$_2$.
\newterm{Program-induced false-sharing} occurs when T$_1$ passes a reference to O$_2$ to T$_2$, and then T$_1$ modifies O$_1$ while T$_2$ modifies O$_2$.
% Figure~\ref{f:ProgramInducedFalseSharing} shows when Thread$_1$ passes Object$_2$ to Thread$_2$, a false-sharing situation forms when Thread$_1$ modifies Object$_1$ and Thread$_2$ modifies Object$_2$.
% Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.

…

% \label{f:FalseSharing}
% \end{figure}
\newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when O$_1$ and O$_2$ are heap allocated and their references are passed to T$_1$ and T$_2$, which modify the objects.
% For example, in Figure~\ref{f:AllocatorInducedActiveFalseSharing}, each thread allocates an object and loads a cache-line of memory into its associated cache.
% Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
% is another form of allocator-induced false-sharing caused by program-induced false-sharing.
% When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing.
\newterm{Allocator-induced passive false-sharing} occurs when T$_1$ passes O$_2$ to T$_2$, and T$_2$ subsequently deallocates O$_2$, and then O$_2$ is reallocated to T$_2$ while T$_1$ is still using O$_1$.

…

\label{s:MultiThreadedMemoryAllocatorFeatures}

The following features are used in the construction of multi-threaded memory-allocators: multiple heaps, user-level threading, ownership, object containers, an allocation buffer, and lock-free operations.
The first feature, multiple heaps, pertains to different kinds of heaps.
The second feature, object containers, pertains to the organization of objects within the storage area.

…

\subsubsection{Multiple Heaps}
\label{s:MultipleHeaps}

A multi-threaded allocator has potentially multiple threads and heaps.
The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
The spectrum ranges from multiple threads using a single heap, denoted as T:1, to multiple threads sharing multiple heaps, denoted as T:H, to one thread per heap, denoted as 1:1, which is almost back to a single-threaded allocator.

\begin{figure}
…
\end{figure}

\paragraph{T:1 model (see Figure~\ref{f:SingleHeap})} where all threads allocate and deallocate objects from one heap.
Memory is obtained from the freed objects, or reserved memory in the heap, or from the OS;
the heap may also return freed memory to the OS.
The arrows indicate the direction memory conceptually moves for each kind of operation: allocation moves memory along the path from the heap/operating-system to the user application, while deallocation moves memory along the path from the application back to the heap/operating-system.
To safely handle concurrency, a single lock may be used for all heap operations or fine-grained locking for different operations.
Regardless, a single heap may be a significant source of contention for programs with a large amount of memory allocation.
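A coarse-grained T:1 sketch, with one lock serializing every operation; @heapAlloc@ and @heapFree@ are hypothetical single-heap internals.
\begin{lstlisting}[language=C]
#include <pthread.h>
#include <stddef.h>

extern void * heapAlloc( size_t size );      // hypothetical internal allocate
extern void heapFree( void * obj );          // hypothetical internal free

static pthread_mutex_t heapLock = PTHREAD_MUTEX_INITIALIZER;  // single lock

void * mallocT1( size_t size ) {
	pthread_mutex_lock( &heapLock );         // all threads contend here
	void * obj = heapAlloc( size );
	pthread_mutex_unlock( &heapLock );
	return obj;
}

void freeT1( void * obj ) {
	pthread_mutex_lock( &heapLock );
	heapFree( obj );
	pthread_mutex_unlock( &heapLock );
}
\end{lstlisting}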
\paragraph{T:H model (see Figure~\ref{f:SharedHeaps})} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
The decision on when to create a new heap and which heap a thread allocates from depends on the allocator design.
To determine which heap to access, each thread must point to its associated heap in some way.

…

An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the OS.

% \begin{figure}
…

Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup.
The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory.
Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the OS (see Section~\ref{s:Ownership}).
Returning storage to the OS may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
% In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.

Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space.
Now, each heap obtains and returns storage to/from the global heap rather than the OS.
Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold.
Similarly, the global heap buffers this memory, obtaining and returning storage to/from the OS as necessary.
The global heap does not have its own thread and makes no internal allocation requests;
instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations.
Hence, the worst-case cost of a memory operation includes all these steps.
With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the OS to achieve the same goal and is independent of the mechanism used by the OS to present dynamic memory to an address space.
However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking.
A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention.
In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap.

\paragraph{1:1 model (see Figure~\ref{f:PerThreadHeap})} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see Section~\ref{s:Ownership}).
An additional benefit of thread heaps is improved locality due to better memory layout.
As each thread only allocates from its heap, all objects are consolidated in the storage area for that heap, better utilizing each CPU's cache and accessing fewer pages.

…

Thread heaps can also eliminate allocator-induced active false-sharing, if memory is acquired so it does not overlap at crucial boundaries with memory for another thread's heap.
For example, assume page boundaries coincide with cache line boundaries: if a thread heap always acquires pages of memory, then no two threads share a page or cache line unless pointers are passed among them.
% Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps.

When a thread terminates, there are two options for handling its thread heap.

…
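A 1:1 fast-path sketch: each kernel thread reaches its heap through thread-local storage, so no locking is needed on the common path; @newHeap@ and @heapAlloc@ are hypothetical.
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Heap Heap;                    // per-thread heap, opaque here

extern Heap * newHeap( void );               // hypothetical: create or reuse a heap
extern void * heapAlloc( Heap * h, size_t size );  // hypothetical: unlocked allocate

static _Thread_local Heap * myHeap = NULL;   // C11 thread-local heap pointer

void * malloc11( size_t size ) {
	if ( myHeap == NULL ) myHeap = newHeap();  // lazily bind heap on first use
	return heapAlloc( myHeap, size );          // fast path: no atomic operations
}
\end{lstlisting}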
It is possible to use any of the heap models with user-level (M:N) threading.
However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the OS, which allows creating large numbers of high-performance interacting threads ($>$ 10,000).
It is difficult to retain this goal if the user-threading model is directly involved with the heap model.
Figure~\ref{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.

…

\end{figure}

Adopting user threading results in a subtle problem with shared heaps.
With kernel threading, an operation started by a kernel thread is always completed by that thread.
For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap, even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
However, this correctness property is not preserved for user-level threading.
A user thread can start an allocation/deallocation on one kernel thread, be preempted (time slice), and continue running on a different kernel thread to complete the operation~\cite{Dice02}.
When the user thread continues on the new kernel thread, it may have pointers into the previous kernel-thread's heap and hold locks associated with it.
To get the same kernel-thread safety, time slicing must be disabled/\-enabled around these operations, so the user thread cannot jump to another kernel thread.
However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is infrequent (milliseconds).
Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in starvation;
techniques like rolling forward the preemption to the next context switch can be used.

…

% For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially.
% If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps. % (see~Figure~\ref{f:HybridPrivatePublicHeap})
The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
Similarly, a thread first deallocates an object to its private heap, and second to the public heap.
Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
% Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantage unless the public heap provides the only interface to the global heap.
Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.

% \begin{figure}
% \centering
% \input{PrivatePublicHeaps.pstex_t}
% \caption{Hybrid Private/Public Heap for Per-thread Heaps}
% \label{f:HybridPrivatePublicHeap}
% \vspace{10pt}
% \input{RemoteFreeList.pstex_t}
% \caption{Remote Free-List}
% \label{f:RemoteFreeList}
% \end{figure}

% As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
% For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
% To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
% Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
% Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.

% If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
% In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
% If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.

% The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
% Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
% In many cases, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
% For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.

…

\subsubsection{Object Containers}
\label{s:ObjectContainers}

…

\eg an object is accessed by the program after it is allocated, while the header is accessed by the allocator after it is free.

An alternative approach factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks}~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
The header for the container holds information necessary for all objects in the container;
a trailer may also be used at the end of the container.

…

\paragraph{Container Ownership}
\label{s:ContainerOwnership}

…

Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
For example, if a container changes ownership through the global heap, then a thread allocating from the newly acquired container is actively false-sharing even though no objects are passed among threads.
Note, once the thread frees the object, no more false sharing can occur until the container changes ownership again.
To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
One implementation approach that increases the freedom to return a free container to the OS involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the OS.
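The allocation/deallocation ordering just described can be sketched as follows; every helper (@privateAlloc@, @publicAlloc@, @globalAlloc@, @isOwnedByMe@, @ownerPublicFree@, @privateFree@) is hypothetical.
\begin{lstlisting}[language=C]
#include <stdbool.h>
#include <stddef.h>

extern void * privateAlloc( size_t size );   // hypothetical, no locking
extern void * publicAlloc( size_t size );    // hypothetical, locked
extern void * globalAlloc( size_t size );    // hypothetical, global heap / OS
extern bool isOwnedByMe( void * obj );       // hypothetical ownership test
extern void ownerPublicFree( void * obj );   // hypothetical remote return
extern void privateFree( void * obj );       // hypothetical, no locking

void * hybridAlloc( size_t size ) {
	void * obj = privateAlloc( size );           // 1) private heap first
	if ( obj == NULL ) obj = publicAlloc( size );    // 2) then public heap
	if ( obj == NULL ) obj = globalAlloc( size );    // 3) then global heap
	return obj;
}

void hybridFree( void * obj ) {
	if ( isOwnedByMe( obj ) ) privateFree( obj );    // fast local path
	else ownerPublicFree( obj );  // or queue locally for delayed/batched return
}
\end{lstlisting}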
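A sketch of @mmap@-based container acquisition, which allows any container to be returned to the OS independently; the container size is an assumption.
\begin{lstlisting}[language=C]
#include <stddef.h>
#include <sys/mman.h>

#define CONTAINER_SIZE (64 * 1024)       // hypothetical container size

static void * newContainer( void ) {     // storage at an arbitrary address
	void * c = mmap( NULL, CONTAINER_SIZE, PROT_READ | PROT_WRITE,
	                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0 );
	return c == MAP_FAILED ? NULL : c;
}

static void releaseContainer( void * c ) {
	munmap( c, CONTAINER_SIZE );         // return directly to the OS
}
\end{lstlisting}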
% \begin{figure}
…

\paragraph{Container Size}
\label{s:ContainerSize}

…

However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation.
With smaller containers, not only are there more containers, but a second new problem arises where objects are larger than the container.
In general, large objects, \eg greater than 64\,KB, are allocated directly from the OS and are returned immediately to the OS to reduce long-term external fragmentation.
If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate.
Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header.
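One common way to locate the container header, assuming containers are aligned to their power-of-two size; the layout and names are illustrative.
\begin{lstlisting}[language=C]
#include <stddef.h>
#include <stdint.h>

#define CONTAINER_ALIGN ((uintptr_t)(64 * 1024))  // hypothetical power of two

typedef struct ContainerHeader {
	size_t objSize;                      // size shared by all objects
	// ... free count, owner heap, links ...
} ContainerHeader;

// Mask the low bits of any object address to reach the shared header.
static ContainerHeader * headerOf( void * obj ) {
	return (ContainerHeader *)((uintptr_t)obj & ~(CONTAINER_ALIGN - 1));
}
\end{lstlisting}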
…

\paragraph{Container Free-Lists}
\label{s:containersfreelists}

…

\subsubsection{Allocation Buffer}
\label{s:AllocationBuffer}

An allocation buffer is reserved memory (see Section~\ref{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or OS, respectively.
The allocation buffer reduces contention and the number of global/operating-system calls.
For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.

…

Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated.
A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or OS.
The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer.

…

For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made.
This lazy method of constructing objects is beneficial in terms of paging and caching.
For example, although an entire container, possibly spanning several pages, is allocated from the OS, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
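A minimal bump-pointer sketch of an allocation buffer: objects are carved lazily, and the caller refills the buffer from the global heap or OS when it is exhausted (the structure is illustrative).
\begin{lstlisting}[language=C]
#include <stddef.h>

typedef struct Buffer { char * next; char * end; } Buffer;

static void * bufferAlloc( Buffer * b, size_t size ) {
	if ( (size_t)(b->end - b->next) < size ) return NULL;  // exhausted => refill
	void * obj = b->next;                // object formed only when requested
	b->next += size;                     // bump past the new object
	return obj;
}
\end{lstlisting}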
\subsubsection{Lock-Free Operations}
\label{s:LockFreeOperations}

…

% A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
% \end{quote}
% If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
% Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
% Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
%
% Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.

…

A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
\end{quote}
If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.

Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.

…

For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
More OS support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
So the 1:1 model had no atomic actions along the fastpath and no special operating-system support requirements.
The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in Section~\ref{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.

…

A primary goal of llheap is low latency, hence the name low-latency heap (llheap).
Two forms of latency are internal and external.
Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the OS.
Ideally latency is $O(1)$ with a small constant.

The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).

To obtain $O(1)$ external latency means obtaining one large storage area from the OS and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \pageref{p:malloc_expansion}).
…

headers per allocation versus containers,
no coalescing to minimize latency,
global heap memory (pool) obtained from the OS using @mmap@ to create and reuse heaps needed by threads,
local reserved memory (pool) per heap obtained from global pool,
global reserved memory (pool) obtained from the OS using @sbrk@ call,
optional fast-lookup table for converting allocation requests into bucket sizes,
optional statistic-counters table for accumulating counts of allocation operations.

…

Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
All objects in a bucket are of the same size.
The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the OS.
Each free bucket of a specific size has two lists.
1) A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.

…

Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
First, the allocation is divided into small (@sbrk@) or large (@mmap@).
For large allocations, the storage is mapped directly from the OS.
For small allocations, $S$ is quantized into a bucket size.
Quantizing is performed using a binary search over the ordered bucket array.

…

heap's local pool,
global pool,
OS (@sbrk@).
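The bucket quantization described above amounts to a lower-bound binary search over the ordered size array; the sizes below are illustrative, and requests above the largest bucket would instead go to @mmap@.
\begin{lstlisting}[language=C]
#include <stddef.h>

static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128, 192, 256 };
enum { NBUCKETS = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };

// Return the index of the smallest bucket with bucketSizes[i] >= S;
// the caller guarantees S <= bucketSizes[NBUCKETS - 1].
static int quantize( size_t S ) {
	int lo = 0, hi = NBUCKETS - 1;
	while ( lo < hi ) {
		int mid = (lo + hi) / 2;
		if ( bucketSizes[mid] < S ) lo = mid + 1;
		else hi = mid;
	}
	return lo;
}
\end{lstlisting}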
\begin{algorithm}
…

Algorithm~\ref{alg:heapObjectFreeOwn} shows the deallocation (free) outline for an object at address $A$ with ownership.
First, the address is divided into small (@sbrk@) or large (@mmap@).
For large allocations, the storage is unmapped back to the OS.
For small allocations, the bucket associated with the request size is retrieved.
If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.

…

\textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program.
This makes pt3 the only memory allocator that gives memory back to the OS as it is freed by the program.

% FOR 1 THREAD -

doc/papers/llheap/figures/AllocatorComponents.fig
r541dbc09 → r77afbb4
(Figure-source changes only: the labels in Figure~\ref{f:AllocatorComponents} are updated from ``Management'' and ``Storage'' to ``Management Data'' and ``Storage Data'', with minor repositioning of the label text.)