Timestamp:
May 20, 2022, 2:48:24 PM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
598dc68
Parents:
25fa20a
Message:

final proofread of Mubeen's MMath thesis

File:
1 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    r25fa20a rfb6691a  
    109109Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
    110110\begin{quote}
    111 A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
     111A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
    112112\end{quote}
    113113If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
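To make the hazard concrete, the following sketch (not llheap source; @cpu_heap@ and the free-list layout are hypothetical) shows how preemption between reading and updating a per-CPU free list lets two KTs pop the same object:
\begin{lstlisting}
// Sketch only: why preemption breaks a per-CPU heap (names hypothetical).
typedef struct Heap { void ** freeList; } Heap;
static Heap cpu_heap[64];                       // one heap per CPU

void * allocFromCpuHeap( int cpu ) {
	Heap * h = &cpu_heap[cpu];              // KT locates its CPU's heap
	void ** obj = h->freeList;              // <- preemption here lets another KT
	                                        //    start the same operation on this CPU,
	h->freeList = (void **)*obj;            //    so both KTs pop the same object
	return obj;
}
\end{lstlisting}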
     
    138138(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
    139139Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
    140 \PAB{Heaps are uncontended for a KTs memory operations as every KT has its own thread-local heap which is not shared with any other KT (modulo operations on the global pool and ownership).}
     140Heaps are uncontended for a KT's memory operations as every KT has its own thread-local heap, modulo operations on the global pool and ownership.
    141141
    142142Problems:
     
    269269
    270270Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
    271 \PAB{All objects in a bucket are of the same size.}
     271All objects in a bucket are of the same size.
    272272The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
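As an illustration, a size-to-bucket lookup can be sketched as follows (the table and search are illustrative only; llheap's actual 91 sizes and lookup strategy differ):
\begin{lstlisting}
#include <stddef.h>
// Sketch: map a request size to the smallest sufficient bucket.
// llheap uses 91 sizes from 16 bytes up to the sbrk/mmap crossover
// set by mallopt( M_MMAP_THRESHOLD ); this table is illustrative.
static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128 };
enum { NBUCKETS = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };

static int bucketIndex( size_t request ) {
	for ( int i = 0; i < NBUCKETS; i += 1 )
		if ( request <= bucketSizes[i] ) return i;
	return -1;                              // larger requests go to mmap
}
\end{lstlisting}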
    273273Each free bucket of a specific size has the following two lists:
     
    401401\end{center}
    402402The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations.
    403 \PAB{The same approach is used for sufficiently large free blocks}, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
     403The same approach is used for sufficiently large free blocks, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
    404404In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation.
    405405However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object.
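A minimal sketch of the address arithmetic (assuming a power-of-2 alignment and a fixed header size; names are hypothetical):
\begin{lstlisting}
#include <stdint.h>
// Sketch: carve an aligned object from block E, leaving room for header H.
void * carveAligned( void * E, uintptr_t align, uintptr_t headerSize ) {
	uintptr_t e = (uintptr_t)E;
	uintptr_t A = (e + headerSize + align - 1) & ~(align - 1); // first aligned address after a header
	uintptr_t H = A - headerSize;           // header sits immediately below A
	// storage in [E, H) is chained onto the appropriate free list (elided)
	return (void *)A;                       // P == A simplifies deallocation
}
\end{lstlisting}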
     
    488488\end{description}
    489489The second field remembers the request size versus the allocation (bucket) size, \eg request 42 bytes which is rounded up to 64 bytes.
    490 \PAB{Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors and also helps in memory management.}
     490Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors and also helps in memory management.
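For illustration, such a header might look like the following sketch (field names hypothetical; the real layout appears in the figure below):
\begin{lstlisting}
// Sketch (field names hypothetical): a header remembering both sizes.
typedef struct Header {
	void * link;                            // first field: free link, flags, etc.
	unsigned int requestSize;               // second field: e.g. 42 as requested,
	                                        // while the bucket size is 64
} Header;
\end{lstlisting}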
    491491
    492492\begin{figure}
     
    497497\end{figure}
    498498
    499 \PAB{The low-order 3-bits of the first field are \emph{unused} for any stored values as these values are 16-byte aligned by default, whereas the second field may use all of its bits.}
     499The low-order 3-bits of the first field are \emph{unused} for any stored values as these values are 16-byte aligned by default, whereas the second field may use all of its bits.
    500500The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
    501501Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header.
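The flag manipulation can be sketched as follows (bit assignments illustrative):
\begin{lstlisting}
#include <stdint.h>
// Sketch: 16-byte alignment guarantees the 3 low-order bits are zero,
// so they can carry flags.  Assignments here are illustrative.
enum { MAPPED = 0x1, ZEROED = 0x2, ALIGNED = 0x4, FLAGS = 0x7 };

static inline void * clearFlags( uintptr_t field ) {
	return (void *)(field & ~(uintptr_t)FLAGS);     // recover the stored pointer
}
static inline int isMapped( uintptr_t field ) { return (field & MAPPED) != 0; }
\end{lstlisting}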
     
    515515To locate all statistics counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
    516516Note, the list is locked to prevent errors traversing an active list;
    517 \PAB{the statistics counters are not locked and can flicker during accumulation.}
     517the statistics counters are not locked and can flicker during accumulation.
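A sketch of the accumulation (names hypothetical): the list lock protects the traversal, while the counters themselves are read unlocked and may flicker.
\begin{lstlisting}
#include <stddef.h>
// Sketch (names hypothetical): sum one counter across all heaps.
typedef struct Heap Heap;
struct Heap { Heap * next; unsigned long long mallocCalls; };
extern Heap * heapList;                         // linked only in statistics mode
extern void lockList( void ), unlockList( void );

unsigned long long totalMallocCalls( void ) {
	unsigned long long sum = 0;
	lockList();                             // safe traversal of an active list
	for ( Heap * h = heapList; h != NULL; h = h->next )
		sum += h->mallocCalls;          // unlocked read: may be mid-update
	unlockList();
	return sum;
}
\end{lstlisting}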
    518518\VRef[Figure]{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
    519519No other memory allocator studied provides as comprehensive statistical information.
     
    558558\label{s:UserlevelThreadingSupport}
    559559
    560 The serially-reusable problem (see \VRef{s:AllocationFastpath}) occurs for kernel threads in the ``T:H model, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
    561 \PAB{The solution is to prevent interrupts that can result in CPU or KT change during operations that are logically critical sections such as moving free storage from public heap to the private heap.}
     560The serially-reusable problem (see \VPageref{p:SeriallyReusable}) occurs for kernel threads in the ``T:H model, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
     561The solution is to prevent interrupts that can result in a CPU or KT change during operations that are logically critical sections such as starting a memory operation on one KT and completing it on another.
    562562Locking these critical sections negates any attempt for a quick fastpath and results in high contention.
    563563For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
    564564Without time slicing, a user thread performing a long computation can prevent the execution of (starve) other threads.
    565 \PAB{To prevent starvation for a memory-allocation-intensive thread, \ie the time slice always triggers in an allocation critical-section for one thread so the thread never gets time sliced, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.}
     565To prevent starvation for a memory-allocation-intensive thread, \ie the time slice always triggers in an allocation critical-section for one thread so the thread never gets time sliced, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
    566566The rollforward flag is tested at the end of each allocation funnel routine (see \VPageref{p:FunnelRoutine}), and if set, it is reset and a volunteer yield (context switch) is performed to allow other threads to execute.
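A sketch of the epilogue check (names hypothetical):
\begin{lstlisting}
// Sketch (names hypothetical): rollforward test ending a funnel routine.
extern __thread volatile int rollforward;       // set by the signal handler when
                                                // it aborts a time slice
extern void yield( void );                      // voluntary context switch

static void funnelEpilogue( void ) {
	if ( rollforward ) {                    // deferred time slice pending?
		rollforward = 0;
		yield();                        // volunteer the context switch
	}
}
\end{lstlisting}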
    567567
    568568llheap uses two techniques to detect when execution is in an allocation operation or a routine called from an allocation operation, to abort any time slice during this period.
    569 On the slowpath when executing expensive operations, like @sbrk@ or @mmap@,
    570 \PAB{interrupts are disabled/enabled by setting kernel-thread-local flags so the signal handler aborts immediately.}
    571 \PAB{On the fastpath, disabling/enabling interrupts is too expensive as accessing kernel-thread-local storage can be expensive and not user-thread-safe.}
     569On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting kernel-thread-local flags so the signal handler aborts immediately.
     570On the fastpath, disabling/enabling interrupts is too expensive as accessing kernel-thread-local storage can be expensive and not user-thread-safe.
    572571For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
    573 \PAB{Hence, there is a window between loading the kernel-thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.}
     572Hence, there is a window between loading the kernel-thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.
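The slowpath bracketing can be sketched as (flag name hypothetical):
\begin{lstlisting}
#include <stdint.h>
#include <unistd.h>
// Sketch (flag name hypothetical): bracket an expensive operation so the
// signal handler aborts a time slice immediately.
extern __thread volatile int noPreemption;

static void * slowpathSbrk( intptr_t size ) {
	noPreemption = 1;                       // handler now aborts immediately
	void * area = sbrk( size );             // expensive, non-reentrant call
	noPreemption = 0;
	return area;
}
\end{lstlisting}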
    574573
    575574The fast technique (with lower run-time cost) is to define a special code section and place all non-interruptible routines in this section.
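A sketch of this technique using a GCC-style named section (section name hypothetical; the @__start_@/@__stop_@ symbols are generated by the linker for identifier-named sections):
\begin{lstlisting}
// Sketch: the signal handler tests the interrupted program counter
// against the bounds of the non-interruptible code section.
extern char __start_alloc_text[], __stop_alloc_text[];  // linker generated

static int inAllocSection( void * pc ) {
	return (char *)pc >= __start_alloc_text && (char *)pc < __stop_alloc_text;
}

// routines are placed in the section with an attribute, e.g.
// __attribute__(( section( "alloc_text" ) )) void * myMalloc( size_t size );
\end{lstlisting}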
     
    589588Programs can be statically or dynamically linked.
    590589\item
    591 \PAB{The order in which the linker schedules startup code is poorly supported so cannot be controlled entirely.}
     590The order in which the linker schedules startup code is poorly supported so it cannot be controlled entirely.
    592591\item
    593592Knowing a KT's start and end independently from the KT code is difficult.
     
    607606The problem is getting initialization done before the first allocator call.
    608607However, there does not seem to be a mechanism to tell either the static or dynamic loader to first perform initialization code before any calls to a loaded library.
    609 \PAB{Also, initialization code of other libraries and run-time envoronment may call memory allocation routines such as \lstinline{malloc}.
    610 So, this creates an even more difficult situation as there is no mechanism to tell either the static or dynamic loader to first perform initialization code of memory allocator before any other initialization that may involve a dynamic memory allocation call.}
     608Also, initialization code of other libraries and the run-time environment may call memory allocation routines such as \lstinline{malloc}.
     609This compounds the situation as there is no mechanism to tell either the static or dynamic loader to first perform the initialization code of the memory allocator before any other initialization that may involve a dynamic memory allocation call.
    611610As a result, calls to allocation routines occur without initialization.
    612611To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
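A sketch of the fastpath check (names hypothetical; synchronization for the first call is elided):
\begin{lstlisting}
#include <stdbool.h>
#include <stddef.h>
// Sketch (names hypothetical): singleton-pattern check on the fastpath.
extern void heapInit( void );                   // one-time initialization
extern void * fastpathAlloc( size_t size );
static bool heapInitialized = false;

void * sketchMalloc( size_t size ) {
	if ( ! heapInitialized ) {              // conditional check on every call
		heapInitialized = true;
		heapInit();
	}
	return fastpathAlloc( size );
}
\end{lstlisting}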
     
    740739\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
    741740extends @calloc@ for allocating a dynamic array of objects without calculating the total size of the array explicitly but \emph{without} zero-filling the memory.
    742 @aalloc@ is significantly faster than @calloc@, \PAB{which is the only alternative given by the memory allocation routines}.
     741@aalloc@ is significantly faster than @calloc@, which is the only alternative given by the standard memory-allocation routines.
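The stated semantics can be sketched as follows (not the llheap implementation; overflow checking elided):
\begin{lstlisting}
#include <stdlib.h>
// Sketch of the semantics only: size like calloc, but skip the zero fill.
void * aallocSketch( size_t dim, size_t elemSize ) {
	return malloc( dim * elemSize );        // no memset, hence the speedup
}
\end{lstlisting}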
    743742
    744743\noindent\textbf{Usage}
     
    935934\paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}}
    936935is overloaded with a variable number of specific allocation operations, or an integer dimension parameter followed by a variable number of specific allocation operations.
    937 \PAB{These allocation operations can be passed as positional arguments when calling \lstinline{alloc} routine.}
     936These allocation operations can be passed as named arguments when calling the \lstinline{alloc} routine.
    938937A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
    939938A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
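For example (types shown for illustration):
\begin{lstlisting}
int * ip = alloc();                             // one int, like malloc
int * ia = alloc( 10 );                         // array of 10 ints, like aalloc
\end{lstlisting}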