Timestamp: Feb 23, 2022, 10:31:12 AM (4 years ago)
Author: Peter A. Buhr <pabuhr@…>
Branches: ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
Children: cc7bbe6
Parents: f53afafb
Message: more updates to added text

File: 1 edited
Legend: unchanged lines are shown indented, removed lines are prefixed with -, added lines with +, and … marks elided unchanged lines.
  • doc/theses/mubeen_zulfiqar_MMath/intro.tex

--- rf53afafb
+++ r3a038fa
  \chapter{Introduction}
-
-
- \section{Introduction}

  % Shared-memory multi-processor computers are ubiquitous and important for improving application performance.
…
  % Therefore, providing high-performance, scalable memory-management is important for virtually all shared-memory multi-threaded programs.

+ \vspace*{-23pt}
  Memory management takes a sequence of program-generated allocation/deallocation requests and attempts to satisfy them within a fixed-sized block of memory while minimizing the total amount of memory used.
  A general-purpose dynamic-allocation algorithm cannot anticipate future allocation requests, so its output is rarely optimal.
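To make the on-line nature of this problem concrete, here is a minimal C sketch (plain malloc/free; the sizes and the placement described in the comments are illustrative assumptions, not taken from the thesis): a placement decision that looks fine when made can later waste space because future requests are unknown.

\begin{lstlisting}
#include <stdlib.h>

int main( void ) {
	// Requests arrive one at a time; the allocator must place each one
	// without knowing what will be requested or freed later.
	void * a = malloc( 100 );
	void * b = malloc( 50 );
	void * c = malloc( 100 );
	free( b );						// may leave a 50-byte hole between a and c
	void * d = malloc( 80 );		// too large for that hole, so fresh space is used
									// and the hole sits idle (external fragmentation)
	free( a );  free( c );  free( d );
	return 0;
}
\end{lstlisting}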
     
…


- \subsection{Memory Structure}
+ \section{Memory Structure}
  \label{s:MemoryStructure}

     
…
  Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex, especially in a multi-threaded program~\cite{Huang06}.
  However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
- Stack memory is managed by the program call-mechanism using simple LIFO management, which works well for sequential programs.
+ Stack memory is managed by the program call-mechanism using a simple LIFO technique, which works well for sequential programs.
  For multi-threaded programs (and coroutines), a new stack is created for each thread;
  these thread stacks are commonly created in dynamic-allocation memory.
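As a concrete illustration of the last two sentences, the following sketch uses generic POSIX threads (not the thesis allocator); the 1 MB stack size and page alignment are assumptions. The thread's stack is carved out of dynamic-allocation memory and later freed like any other allocation.

\begin{lstlisting}
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void * worker( void * arg ) {
	printf( "running on a heap-allocated stack\n" );
	return NULL;
}

int main( void ) {
	size_t stacksize = 1024 * 1024;		// illustrative; must be >= PTHREAD_STACK_MIN
	void * stack;
	if ( posix_memalign( &stack, 4096, stacksize ) != 0 ) exit( EXIT_FAILURE );	// stack storage from the heap

	pthread_attr_t attr;
	pthread_attr_init( &attr );
	pthread_attr_setstack( &attr, stack, stacksize );	// hand the heap storage to the thread as its stack

	pthread_t tid;
	pthread_create( &tid, &attr, worker, NULL );
	pthread_join( tid, NULL );

	pthread_attr_destroy( &attr );
	free( stack );						// reclaimed like any other dynamic allocation
	return 0;
}
\end{lstlisting}

Whether a threading runtime uses this POSIX mechanism or its own equivalent, the stack storage ultimately comes from the same dynamic-allocation heap as the rest of the program's data, so allocator performance also matters for thread creation.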
     
…


- \subsection{Dynamic Memory-Management}
+ \section{Dynamic Memory-Management}
  \label{s:DynamicMemoryManagement}

  Modern programming languages manage dynamic-allocation memory in different ways.
- Some languages, such as Lisp~\cite{CommonLisp}, Java~\cite{Java}, Go~\cite{Go}, Haskell~\cite{Haskell}, provide explicit allocation but \emph{implicit} deallocation of data through garbage collection~\cite{Wilson92}.
+ Some languages, such as Lisp~\cite{CommonLisp}, Java~\cite{Java}, Haskell~\cite{Haskell}, Go~\cite{Go}, provide explicit allocation but \emph{implicit} deallocation of data through garbage collection~\cite{Wilson92}.
  In general, garbage collection supports memory compaction, where dynamic (live) data is moved during runtime to better utilize space.
  However, moving data requires finding pointers to it and updating them to reflect new data locations.
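A short C sketch of the pointer-update problem just described (names are illustrative, and realloc merely stands in for a compacting collector's move): once a block can move, every pointer into it, including interior pointers, must be found and updated, which a C/\CC allocator cannot do, so it never moves live data.

\begin{lstlisting}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main( void ) {
	char * buf = malloc( 16 );
	if ( buf == NULL ) exit( EXIT_FAILURE );
	strcpy( buf, "hello" );
	char * p = buf + 4;					// interior pointer into the allocation

	// realloc MAY move the block, just as a compacting collector would;
	// if it does, the old address in buf and the interior pointer p are
	// stale and must be updated -- exactly the "find and update" problem.
	char * moved = realloc( buf, 4096 );
	if ( moved == NULL ) { free( buf ); exit( EXIT_FAILURE ); }
	if ( moved != buf ) p = moved + 4;	// manually update the interior pointer
	printf( "%c\n", *p );				// prints 'o'
	free( moved );
	return 0;
}
\end{lstlisting}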
     
…
  However, high-performance memory-allocators for kernel and user multi-threaded programs are still being designed and improved.
  For this reason, several alternative general-purpose allocators have been written for C/\CC with the goal of scaling in a multi-threaded program~\cite{Berger00,mtmalloc,streamflow,tcmalloc}.
- This work examines the design of high-performance allocators for use by kernel and user multi-threaded applications written in C/\CC.
-
-
- \subsection{Contributions}
+ This thesis examines the design of high-performance allocators for use by kernel and user multi-threaded applications written in C/\CC.
+
+
+ \section{Contributions}
  \label{s:Contributions}

  This work provides the following contributions in the area of concurrent dynamic allocation:
- \begin{enumerate}
- \item
- Implementation of a new stand-alone concurrent memory allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
-
- \item
- Adopt the return of @nullptr@ for a zero-sized allocation, rather than an actual memory address, both of which can be passed to @free@.
- Most allocators use @nullptr@ to indicate an allocation failure, such as full memory;
- hence the need to return an alternate value for a zero-sized allocation.
- The alternative is to abort the program on allocation failure.
- In theory, notifying the programmer of a failure allows recovery;
- in practice, it is almost impossible to gracefully recover from allocation failure, especially full memory, so adopting the cheaper return of @nullptr@ for a zero-sized allocation is chosen.
-
- \item
- Extended the standard C heap functionality by preserving with each allocation its original request size versus the amount allocated due to bucketing, whether an allocation is zero filled, and the allocation alignment.
+ \begin{enumerate}[leftmargin=*]
+ \item
+ Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
+
+ \item
+ Adopt returning @nullptr@ for a zero-sized allocation, rather than an actual memory address, both of which can be passed to @free@ (sketched below).
+
+ \item
+ Extended the standard C heap functionality by preserving with each allocation its original request size versus the amount allocated, whether an allocation is zero filled, and the allocation alignment.

  \item
     
…

  \item
- Provide complete and fast allocation statistics to help understand program behaviour:
+ Provide mostly contention-free allocation and free operations via a heap-per-kernel-thread implementation (sketched below).
+
+ \item
+ Provide complete, fast, and contention-free allocation statistics to help understand program behaviour:
  \begin{itemize}
  \item
     
…

  \item
- Provide mostly contention-free allocation and free operations via a heap-per-kernel-thread implementation.
-
- \item
- Provide extensive contention-free runtime checks to validate allocation operations and identify the amount of unfreed storage at program termination.
+ Provide extensive runtime checks to validate allocation operations and identify the amount of unfreed storage at program termination.

  \item
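To illustrate the zero-sized-allocation contribution above, the following plain-C sketch (standard malloc/free, not the thesis allocator) shows the two cases the convention must keep apart: ISO C allows malloc(0) to return either a null pointer or a unique address, free accepts both, and allocation failure is also reported with a null pointer.

\begin{lstlisting}
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main( void ) {
	// ISO C: malloc(0) may return NULL or a unique, freeable address.
	void * z = malloc( 0 );
	printf( "malloc(0) returned %p\n", z );
	free( z );							// free(NULL) is a no-op, so either outcome is safe to free

	// Allocation failure is also reported with NULL, hence a convention is
	// needed so callers can tell "size was zero" from "out of memory".
	size_t huge = SIZE_MAX;				// unsatisfiable request size
	void * f = malloc( huge );
	if ( f == NULL ) printf( "allocation failure is also reported as NULL\n" );
	free( f );
	return 0;
}
\end{lstlisting}

Returning @nullptr@ for a zero-sized request, as adopted above, keeps both values freeable while avoiding handing out a real address for zero bytes.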
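For the heap-per-kernel-thread contribution, the following is only a schematic sketch, not the thesis allocator: a trivial bump allocator over a thread-private arena using C11 thread-local storage, with all names (ThreadHeap, my_heap, th_alloc) and sizes invented for illustration. It shows why the common allocation path is contention free: it touches only thread-private state, so no locks or atomic operations are required.

\begin{lstlisting}
#include <stddef.h>
#include <stdlib.h>

typedef struct {						// hypothetical per-thread heap
	char * arena;						// private storage owned by one kernel thread
	size_t size, used;
} ThreadHeap;

static _Thread_local ThreadHeap my_heap;	// one instance per kernel thread (C11 TLS)

static void * th_alloc( size_t bytes ) {
	if ( my_heap.arena == NULL ) {			// lazily create this thread's arena
		my_heap.size = 1 << 20;
		my_heap.arena = malloc( my_heap.size );
		if ( my_heap.arena == NULL ) return NULL;
		my_heap.used = 0;
	}
	bytes = ( bytes + 15 ) & ~(size_t)15;	// keep 16-byte alignment
	if ( my_heap.used + bytes > my_heap.size ) return NULL;		// no arena refill in this sketch
	void * p = my_heap.arena + my_heap.used;	// no locks: only this thread touches my_heap
	my_heap.used += bytes;
	return p;
}

int main( void ) {
	int * x = th_alloc( sizeof(int) );		// contention-free fast path on this thread
	return x == NULL;
}
\end{lstlisting}

A real heap-per-kernel-thread allocator additionally needs per-size free lists, a path for returning storage freed by a different thread to its owning heap, and the statistics and runtime checks listed above, but none of those appear on this fast path.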