# Changeset 365c8dcb

Ignore:
Timestamp:
Apr 14, 2022, 3:00:28 PM (3 months ago)
Branches:
enum, master
Children:
bfd5512
Parents:
30d91e4 (diff), 4ec9513 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into enum

Files:
1 deleted
29 edited
2 moved

• ## doc/theses/mike_brooks_MMath/.gitignore

 r30d91e4 !Makefile
 build
 uw-thesis.pdf
 uw-ethesis.pdf
• ## doc/theses/mike_brooks_MMath/array.tex

 r30d91e4 enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b ) \end{eqnarray*}

\section{Bound checks, added and removed}

\CFA array subscripting is protected with runtime bound checks.  Having dependent typing causes the optimizer to remove more of these bound checks than it would without them.  This section provides a demonstration of the effect.

The experiment compares the \CFA array system with the padded-room system [todo:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.  The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.  The experiment compares with the C++ version to keep access to generated assembly code simple.

As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.  When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.  But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.

TODO: paste source and assembly codes

Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized.  The case is naive matrix multiplication over a row-major encoding.

TODO: paste source codes

\section{Comparison with other arrays}

\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.  Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference.
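The control case can be sketched in plain C (a hypothetical @sum_checked@ function, not the thesis's actual benchmark).  When the guard repeats the loop's own bound, the optimizer can prove the check dead and delete it; widening the checked bound past what the loop guarantees makes the check live again.

```c
#include <stdlib.h>

// Hypothetical control case (not the thesis's benchmark): the guard
// `i >= n` is provably false inside `for ( i = 0; i < n; ... )`, so an
// optimizer deletes it and no check appears in the -O2 assembly.
float sum_checked( size_t n, const float *a ) {
    float s = 0.0f;
    for ( size_t i = 0; i < n; i += 1 ) {
        if ( i >= n ) abort();   // provably dead: removable by the optimizer
        s += a[i];
    }
    return s;
}
```

Replacing the guard with a bound the compiler cannot relate to the loop (say, a separately stored length) is exactly the "padded-room" situation where the check must survive into the assembly.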
These systems, therefore, ask the programmer to convince the typechecker that every pointer dereference is valid.  \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter. \CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.  Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.  The \CFA array, applied to accordion structures [TOD: cross-reference] \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program. \subsction{Safety in a padded room} Java's array [todo:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.  Consider the array parameter declarations in: \begin{tabular}{rl} C      &  @void f( size_t n, size_t m, float a[n][m] );@ \\ Java   &  @void f( float[][] a );@ \end{tabular} Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access.  If a value of @i@ outside this range is used, a runtime error is guaranteed.  In these respects, C offers no guarantess at all.  Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only.  Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@ as emphasizing the no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention.  
Moreover, even if @a[0][0]@ is valid for the purpose intended, C's basic infamous feature is the possibility of an @i@, such that @a[i][0]@ is not valid for the same purpose, and yet, its evaluation does not produce an error. Java's lack of expressiveness for more applied properties means these outcomes are possible: \begin{itemize} \item @a[0][17]@ and @a[2][17]@ are valid accesses, yet @a[1][17]@ is a runtime error, because @a[1]@ is a null pointer \item the same observation, now because @a[1]@ refers to an array of length 5 \item execution times vary, because the @float@ values within @a@ are sometimes stored nearly contiguously, and other times, not at all \end{itemize} C's array has none of these limitations, nor do any of the array language'' comparators discussed in this section. This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [todo:cite example], for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array.  The advice is that, while a vector is also more powerful (and quirky) than an arry, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors.  Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.  Allowing this scheme the same referential integrity assumption that \CFA enjoys [todo:xref], this scheme matches Java's safety and expressiveness exactly.  [TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.] 
\subsection{Levels of dependently typed arrays} The \CFA array and the field of array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as: \begin{itemize} \item a \emph{zip}-style operation that consumes two arrays of equal length \item a \emph{map}-style operation whose produced length matches the consumed length \item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions \end{itemize} Across this field, this expressiveness is not just an avaiable place to document such assumption, but these requirements are strongly guaranteed by default, with varying levels of statically/dynamically checked and ability to opt out.  Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C. Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.  In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.  The tradeoff of this expressiveness is complexity in the checker, even typically, a potential for its nontermination.  In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.  [TODO: clarify how even Idris type checking terminates] Idris is a current, general-purpose dependently typed programming language.  Length checking is a common benchmark for full dependent type stystems.  Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.  
[todo: finish explaining what Data.Vect is and then the essence of the comparison] POINTS: here is how our basic checks look (on a system that deosn't have to compromise); it can also do these other cool checks, but watch how I can mess with its conservativeness and termination Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer offer novel contributions concerning similar, restricted dependent types for tracking array length.  Unlike \CFA, both are garbage-collected functional languages.  Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis, that \CFA aims to avoid, is unnecessary.  So, like \CFA, the checking in question is a leightweight bounds-only analysis.  Like \CFA, their checks that are conservatively limited by forbidding arithmetic in the depended-upon expression. The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.  There is a particular emphasis on an existential type, enabling callee-determined return shapes. Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type. Futhark and full-strength dependently typed lanaguages treat array sizes are ordinary values.  Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not. CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.  Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.  Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark. 
\subsection{Static safety in C extensions} \section{Future Work} \subsection{Declaration syntax} \subsection{Range slicing} \subsection{With a module system} \subsection{With described enumerations} A project in \CFA's current portfolio will improve enumerations.  In the incumbent state, \CFA has C's enumerations, unmodified.  I will not discuss the core of this project, which has a tall mission already, to improve type safety, maintain appropriate C compatibility and offer more flexibility about storage use.  It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations: \begin{lstlisting} forall( T | is_enum(T) ) void show_in_context( T val ) { for( T i ) { string decorator = ""; if ( i == val-1 ) decorator = "< ready"; if ( i == val   ) decorator = "< go"   ; sout | i | decorator; } } enum weekday { mon, tue, wed = 500, thu, fri }; show_in_context( wed ); \end{lstlisting} with output \begin{lstlisting} mon tue < ready wed < go thu fri \end{lstlisting} The details in this presentation aren't meant to be taken too precisely as suggestions for how it should look in \CFA.  But the example shows these abilities: \begin{itemize} \item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type \item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@) \item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@) \item a provision for looping (the @for@ form used) over the values of the type. \end{itemize} If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level. 
[TODO: indroduce Ada in the comparators] In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.  The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing. This change of perspective also lets us remove ubiquitous dynamic bound checks.  [TODO: xref] discusses how automatically inserted bound checks can often be otimized away.  But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.  To remove the ubiquitious dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet. [TODO, fix confusion:  Idris has this arrangement of checks, but still the natural numbers as the domain.] The structural assumptions required for the domain of an array in Dex are given by the trait (there, interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.  Dex's @Ix@ is analogous the @is_enum@ proposed for \CFA above. \begin{lstlisting} interface Ix n get_size n : Unit -> Int ordinal : n -> Int unsafe_from_ordinal n : Int -> n \end{lstlisting} Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.  This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this funciton.  
Dex's example is a routine that calculates pointwise differences between two samples.  Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).  In both cases, but with respectively dimensoned interpretations of size,'' this function requries the argument sizes to match, and it produces a result of the that size. The polymorphism plays out with the pointwise-difference routine advertizing a single-dimensional interface whose domain type is generic.  In the audio instantiation, the duration-of-clip type argument is used for the domain.  In the photograph instantiation, it's the tuple-type of $\langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle$.  This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements \begin{lstlisting} instance {a b} [Ix a, Ix b] Ix (a & b) get_size = \(). size a * size b ordinal = \(i, j). (ordinal i * size b) + ordinal j unsafe_from_ordinal = \o. bs = size b (unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs)) \end{lstlisting} and by a user-provided adapter expression at the call site that shows how to indexing with a tuple is backed by indexing each dimension at a time \begin{lstlisting} img_trans :: (img_wd,img_ht)=>Real img_trans.(i,j) = img.i.j result = pairwise img_trans \end{lstlisting} [TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4] In the case of adapting this pattern to \CFA, my current work provides an adapter from successively subscripted'' to subscripted by tuple,'' so it is likely that generalizing my adapter beyond subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary. \subsection{Retire pointer arithmetic}
• ## doc/theses/mike_brooks_MMath/uw-ethesis.bib

 r30d91e4 % For use with BibTeX

% --------------------------------------------------
% Cforall
@misc{cfa:frontpage,
  url = {https://cforall.uwaterloo.ca/}
}
@article{cfa:typesystem,
  author    = {Aaron Moss and Robert Schluntz and Peter A. Buhr},
  title     = {{\CFA} : Adding modern programming language features to {C}},
  journal   = {Softw. Pract. Exp.},
  volume    = {48},
  number    = {12},
  pages     = {2111--2146},
  year      = {2018},
  url       = {https://doi.org/10.1002/spe.2624},
  doi       = {10.1002/spe.2624},
  timestamp = {Thu, 09 Apr 2020 17:14:14 +0200},
  biburl    = {https://dblp.org/rec/journals/spe/MossSB18.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

% --------------------------------------------------
% Array prior work
@inproceedings{arr:futhark:tytheory,
  author    = {Henriksen, Troels and Elsman, Martin},
  title     = {Towards Size-Dependent Types for Array Programming},
  year      = {2021},
  isbn      = {9781450384667},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3460944.3464310},
  doi       = {10.1145/3460944.3464310},
  abstract  = {We present a type system for expressing size constraints on array types in an ML-style type system. The goal is to detect shape mismatches at compile-time, while being simpler than full dependent types. The main restriction is that the only terms that can occur in types are array sizes, and syntactically they must be variables or constants. For those programs where this is not sufficient, we support a form of existential types, with the type system automatically managing the requisite book-keeping. We formalise a large subset of the type system in a small core language, which we prove sound. We also present an integration of the type system in the high-performance parallel functional language Futhark, and show on a collection of 44 representative programs that the restrictions in the type system are not too problematic in practice.},
  booktitle = {Proceedings of the 7th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming},
  pages     = {1--14},
  numpages  = {14},
  keywords  = {functional programming, parallel programming, type systems},
  location  = {Virtual, Canada},
  series    = {ARRAY 2021}
}
@article{arr:dex:long,
  author    = {Adam Paszke and Daniel D. Johnson and David Duvenaud and Dimitrios Vytiniotis and Alexey Radul and Matthew J. Johnson and Jonathan Ragan{-}Kelley and Dougal Maclaurin},
  title     = {Getting to the Point. Index Sets and Parallelism-Preserving Autodiff for Pointful Array Programming},
  journal   = {CoRR},
  volume    = {abs/2104.05372},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.05372},
  eprinttype = {arXiv},
  eprint    = {2104.05372},
  timestamp = {Mon, 25 Oct 2021 07:55:47 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2104-05372.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
• ## doc/theses/mubeen_zulfiqar_MMath/allocator.tex

 r30d91e4 \chapter{Allocator}

\section{uHeap}
uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements).

The objective of uHeap's new design was to fulfill the following requirements:
\begin{itemize}
\item It should be concurrent and thread-safe for multi-threaded programs.
\item It should avoid global locks, on resources shared across all threads, as much as possible.
\item Its performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
\item It should be a lightweight memory allocator.
\end{itemize}

This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading). The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.

\section{llheap}

The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call. (Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.) A direct consequence of this objective is very simple or no storage coalescing; hence, llheap's design is willing to use more storage to lower latency. This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM memory. Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}).
% The objective of llheap's new design was to fulfill following requirements: % \begin{itemize} % \item It should be concurrent and thread-safe for multi-threaded programs. % \item It should avoid global locks, on resources shared across all threads, as much as possible. % \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators). % \item It should be a lightweight memory allocator. % \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Design choices for uHeap} uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, following two designs of uHeapLmm were proposed: \paragraph{Design 1: Centralized} One heap, but lower bucket sizes are N-shared across KTs. This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes. When KTs $\le$ N, the important bucket sizes are uncontented. When KTs $>$ N, the free buckets are contented. Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention. \begin{cquote} \section{Design Choices} llheap's design was reviewed and changed multiple times throughout the thesis. Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}). Note, a few simples tests for a design choice were compared with the current best allocators to determine the viability of a design. \subsection{Allocation Fastpath} These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced. 
\paragraph{T:1 model} \VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs. This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes. When KTs $\le$ N, the common bucket sizes are uncontented; when KTs $>$ N, the free buckets are contented and latency increases significantly. In all cases, a KT must acquire/release a lock, contented or uncontented, along the fast allocation path because a bucket is shared. Therefore, while threads are contending for a small number of buckets sizes, the buckets are distributed among them to reduce contention, which lowers latency; however, picking N is workload specific. \begin{figure} \centering \input{AllocDS1} \caption{T:1 with Shared Buckets} \label{f:T1SharedBuckets} \end{figure} Problems: \begin{itemize} \item Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator. \item When no thread is assigned a bucket number, its free storage is unavailable. \item All KTs contend for the global-pool lock for initial allocations, before free-lists get populated. \end{itemize} Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. \paragraph{T:H model} \VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs. A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket. When KT $\le$ N, the heaps are uncontented; when KTs $>$ N, the heaps are contented. In all cases, a KT must acquire/release a lock, contented or uncontented along the fast allocation path because a heap is shared. 
By adjusting N upwards, this approach reduces contention but increases storage (time versus space); however, picking N is workload specific. \begin{figure} \centering \input{AllocDS2} \end{cquote} Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number. When no thread is assigned a bucket number, its free storage is unavailable. All KTs will be contended for one lock on sbrk for their initial allocations (before free-lists gets populated). \paragraph{Design 2: Decentralized N Heaps} Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area. Kernel threads (KT) are assigned to the N heaps. When KTs $\le$ N, the heaps are uncontented. When KTs $>$ N, the heaps are contented. By adjusting N, this approach reduces storage at the cost of speed due to contention. In all cases, a thread acquires/releases a lock, contented or uncontented. \begin{cquote} \centering \input{AllocDS1} \end{cquote} Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT. \paragraph{Design 3: Decentralized Per-thread Heaps} Design 3 is similar to design 2 but instead of having an M:N model, it uses a 1:1 model. So, instead of having N heaos and sharing them among M KTs, Design 3 has one heap for each KT. Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area. Each KT will have its own exclusive thread-local heap. Heap will be uncontended between KTs regardless how many KTs have been created. Operations on @sbrk@ area will still be protected by locks. %\begin{cquote} %\centering %\input{AllocDS3} FIXME add figs %\end{cquote} Problems: We cannot destroy the heap when a KT exits because our dynamic objects have ownership and they are returned to the heap that created them when the program frees a dynamic object. All dynamic objects point back to their owner heap. 
If a thread A creates an object O, passes it to another thread B, and A itself exits. When B will free object O, O should return to A's heap so A's heap should be preserved for the lifetime of the whole program as their might be objects in-use of other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT. \paragraph{Design 4: Decentralized Per-CPU Heaps} Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU. Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area. Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be entertained by the heap of the CPU where the process is currently running on. Each CPU will have its own exclusive heap. Just like Design 3(FIXME cite), heap will be uncontended between KTs regardless how many KTs have been created. Operations on @sbrk@ area will still be protected by locks. To deal with preemtion during a dynamic memory operation, librseq(FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences can make it possible to re-run a critical section and undo the current writes if a preemption happened during the critical section's execution. %\begin{cquote} %\centering %\input{AllocDS4} FIXME add figs %\end{cquote} Problems: This approach was slower than the per-thread model. Also, librseq does not provide such restartable sequences to detect preemtions in user-level threading system which is important to us as CFA(FIXME cite) has its own threading system that we want to support. Out of the four designs, Design 3 was chosen because of the following reasons. 
\begin{itemize} \item Decentralized designes are better in general as compared to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designes shard the whole heap which has all the buckets with the addition of sharding sbrk area. So Design 1 was eliminated. \item Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenerio. \item Design 4 was eliminated because it was slower than Design 3 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achive user-threading safety which has some cost to it. Desing 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower. \end{itemize} \subsection{Advantages of distributed design} The distributed design of uHeap is concurrent to work in multi-threaded applications. Some key benefits of the distributed design of uHeap are as follows: \begin{itemize} \item The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as bump allocation reserve. The call to sbrk will be protected using locks but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned. \item Low or almost no contention on heap resources. \item It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty. \item Distributed design avoids unnecassry locks on resources shared across all KTs. \end{itemize} \caption{T:H with Shared Heaps} \label{f:THSharedHeaps} \end{figure} Problems: \begin{itemize} \item Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator. \item When no thread is assigned to a heap, its free storage is unavailable. 
\item Ownership issues arise (see \VRef{s:Ownership}). \item All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated. \end{itemize} Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. \paragraph{T:H model, H = number of CPUs} This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@. (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.) Hence, each CPU logically has its own private heap and local pool. A memory operation is serviced from the heap associated with the CPU executing the operation. This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership). This approach is essentially an M:N approach where M is the number if KTs and N is the number of CPUs. Problems: \begin{itemize} \item Need to know when a CPU is added/removed from the @taskset@. \item Need a fast way to determine the CPU a KT is executing on to access the appropriate heap. \item Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}. \begin{quote} A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable} \end{quote} If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness. 
Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable. Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler. \noindent Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted. \end{itemize} Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs. Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs. For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs. The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation. Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section. Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap. A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection. \paragraph{1:1 model} This design is the T:H model with T = H, where there is one thread-local heap for each KT. (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.) 
Hence, immediately after a KT starts, its heap is created, and just before a KT terminates, its heap is (logically) deleted.
Heaps are uncontended for a KT's memory operations to its heap (modulo operations on the global pool and ownership).

Problems:
\begin{itemize}
\item
Need to know when a KT starts/terminates to create/delete its heap.

\noindent
It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
\item
There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.

\noindent
The classic solution only deletes a heap after all referents are returned, which is complex.
The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
\item
There can be significant external fragmentation as the number of KTs increases.

\noindent
In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
Since the number of CPUs is relatively small, typically $\le$~1024, and a heap is relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
\item
There is the same serially-reusable problem with UTs migrating across KTs.
\end{itemize}
Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.

\vspace{5pt}
\noindent
The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces a significant slowdown.
For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
More operating-system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
This leaves the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
\subsection{Allocation Latency}

A primary goal of llheap is low latency.
Two forms of latency are internal and external.
Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the operating system.
Ideally, latency is $O(1)$ with a small constant.

Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and in turn leads to external fragmentation.
The mitigating factor is that most programs have well-behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}).
Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{uHeap Structure}

As described in (FIXME cite 2.4), uHeap uses the following features of multi-threaded memory allocators.
\begin{itemize}
\item
uHeap has multiple heaps without a global heap and uses the 1:1 model. (FIXME cite 2.5 1:1 model)
\item
uHeap uses object ownership. (FIXME cite 2.5.2)
\item
uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique.
Instead, each dynamic object allocated by uHeap has a header that contains bookkeeping information.
\item
Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using the sbrk() call. (FIXME cite 2.7)
\item
Unless a heap is freeing an object that is owned by another thread's heap or is using the sbrk() system call, uHeap is mostly lock-free, which eliminates most of the contention on shared resources. (FIXME cite 2.8)
\end{itemize}

As uHeap uses a heap-per-thread model to reduce contention on heap resources, we manage a list of heaps (heap-list) that can be used by threads.
The list is empty at the start of the program.
When a kernel thread (KT) is created, we check if the heap-list is empty.
If not, a heap is removed from the heap-list and given to this new KT to use exclusively.
If so, a new heap object is created in dynamic memory and given to this new KT to use exclusively.
When a KT exits, its heap is not destroyed but instead is put on the heap-list, ready to be reused by new KTs.
This reduces the memory footprint, as the objects on the free-lists of a KT that has exited can be reused by a new KT.
Also, we preserve all the heaps that were created during the lifetime of the program until the end of the program.
uHeap uses object ownership, where an object is freed to the free-buckets of the heap that allocated it.
Even after a KT A has exited, its heap has to be preserved, as there might be objects in use by other threads that were initially allocated by A and then passed to other threads.

\section{llheap Structure}

\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
\begin{itemize}
\item
1:1 multiple-heap model to minimize the fastpath,
\item
can be built with or without heap ownership,
\item
headers per allocation versus containers,
\item
no coalescing to minimize latency,
\item
local reserved memory (pool) obtained from the operating system using the @sbrk@ call,
\item
global reserved memory (pool) obtained from the operating system using the @mmap@ call to create and reuse heaps needed by threads.
\end{itemize}

\begin{figure}
\centering
\input{llheap}
\caption{llheap Structure}
\label{f:llheapStructure}
\end{figure}
llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores.
There is a global bump-pointer to the next free heap in the array.
When this array is exhausted, another array is allocated.
There is a global top pointer to a heap intrusive link that chains the free heaps from terminated threads, where these heaps are reused by new threads.
When statistics are turned on, there is a global top pointer to a heap intrusive link that chains \emph{all} the heaps, which is traversed to accumulate the statistics counters across heaps (see @malloc_stats@ \VRef{}).

When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
The free heaps form a stack, so hot storage is reused first.
Preserving all heaps created during the program lifetime solves the storage lifetime problem.
This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.

Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
The number of buckets used is determined dynamically, depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system.
Each free bucket of a specific size has the following two lists:
\begin{itemize}
\item
A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
The free objects form a stack, so hot storage is reused first.
\item
For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking.
When the free stack is empty, the entire away-stack is removed and becomes the head of the corresponding free stack.
\end{itemize}

Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
First, the allocation is divided into small (@sbrk@) or large (@mmap@).
For small allocations, $S$ is quantized into a bucket size.
Quantizing is performed using a binary search over the ordered bucket array.
An optional optimization is fast lookup $O(1)$ for sizes < 64K from a 64K array of type @char@, where each element has an index to the corresponding bucket.
(Type @char@ restricts the number of bucket sizes to 256.)
For $S$ > 64K, the binary search is used.
Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
\item
bucket's free stack,
\item
bucket's away stack,
\item
heap's local pool,
\item
global pool,
\item
operating system (@sbrk@).
\end{enumerate}

\begin{algorithm}
\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
\begin{algorithmic}[1]
\State $\textit{O} \gets \text{NULL}$
\If {$S < \textit{mmap-threshold}$}
	\State $\textit{B} \gets \text{smallest free-bucket} \geq S$
	\If {$\textit{B's free-list is empty}$}
		\If {$\textit{B's away-list is empty}$}
			\If {$\textit{heap's allocation buffer} < S$}
				\State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
			\EndIf
			\State $\textit{O} \gets \text{bump allocate an object of size $S$ from allocation buffer}$
		\EndIf
	\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}

Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$.

\begin{algorithm}[h]
\caption{Dynamic object free at address $A$}\label{alg:heapObjectFree}
%\begin{algorithmic}[1]
%\State write this algorithm
%\end{algorithmic}
\end{algorithm}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Added Features and Methods}

To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user friendly, we added a few more routines to the C allocator.
Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator.
\subsection{C Interface}

We added a few more features and routines to the allocator's C interface that can make the allocator more usable to programmers.
These features give the programmer more control over dynamic memory allocation.

\subsection{Out of Memory}

\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
@aalloc@ is an extension of @malloc@.
It allows the programmer to allocate a dynamic array of objects without explicitly calculating the total size of the array.
The only alternative to this routine in other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without filling it with 0.

\paragraph{Usage}
@aalloc@ takes two parameters.
\begin{itemize}
\item
@dim@: number of objects in the array.
\item
@elemSize@: size of the object in the array.
\end{itemize}
It returns the address of a dynamic object allocated on the heap that can contain @dim@ objects of size @elemSize@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
@resize@ is an extension of @realloc@.
It allows the programmer to reuse a currently allocated dynamic object with a new size requirement.
Its alternative in other allocators is @realloc@, but @realloc@ also copies the data from the old object to the new object, which makes it slower for a programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data of the old object.

\paragraph{Usage}
@resize@ takes two parameters.
\begin{itemize}
\item
@oaddr@: address of the old object.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object of the given size, but it does not preserve the data in the old object.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
In addition to resizing the size of an old object, it can also realign the old object to a new alignment requirement.

\paragraph{Usage}
This resize takes three parameters.
It takes an additional parameter of @nalign@ as compared to the above resize (FIX ME: cite above resize).
\begin{itemize}
\item
@oaddr@: address of the old object.
\item
@nalign@: the new alignment requirement of the object.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
@amemalign@ is a hybrid of @memalign@ and @aalloc@.
It allows the programmer to allocate an aligned dynamic array of objects without explicitly calculating the total size of the array.
It frees the programmer from calculating the total size of the array.

\paragraph{Usage}
@amemalign@ takes three parameters.
\begin{itemize}
\item
@alignment@: the required alignment of the dynamic array.
\item
@dim@: number of objects in the array.
\item
@elemSize@: size of the object in the array.
\end{itemize}
It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
@cmemalign@ is a hybrid of @amemalign@ and @calloc@.
It allows the programmer to allocate an aligned dynamic array of objects that is 0 filled.
The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then fill it with 0 explicitly.
This routine provides both features of aligning and 0 filling, implicitly.
\paragraph{Usage}
@cmemalign@ takes three parameters.
\begin{itemize}
\item
@alignment@: the required alignment of the dynamic array.
\item
@dim@: number of objects in the array.
\item
@elemSize@: size of the object in the array.
\end{itemize}
It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@.
The returned dynamic array is aligned to the given alignment and is 0 filled.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
@malloc_alignment@ returns the alignment of a currently allocated dynamic object.
It helps the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the alignment of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was allocated with the required alignment.

\paragraph{Usage}
@malloc_alignment@ takes one parameter.
\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_alignment@ returns the alignment of the given dynamic object.
On failure, it returns the value of the default alignment of the llheap allocator.

\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation.
It helps the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the zero-filled property of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was zero filled at the time of allocation.

\paragraph{Usage}
@malloc_zero_fill@ takes one parameter.
\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and returns false otherwise.
On failure, it returns false.

\subsection{\lstinline{size_t malloc_size( void * addr )}}
@malloc_size@ returns the allocation size of a currently allocated dynamic object.
It helps the programmer in memory management and personal bookkeeping.
It helps the programmer in verifying the size of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was allocated with the required size.
Its current alternative in other allocators is @malloc_usable_size@.
But @malloc_size@ is different from @malloc_usable_size@, as @malloc_usable_size@ returns the total data capacity of the dynamic object, including the extra space at the end of the dynamic object.
On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object.
This size is updated when an object is realloced, resized, or passed through a similar allocator routine.

\paragraph{Usage}
@malloc_size@ takes one parameter.
\begin{itemize}
\item
@addr@: the address of the currently allocated dynamic object.
\end{itemize}
@malloc_size@ returns the allocation size of the given dynamic object.
On failure, it returns zero.
\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
In addition to reallocating an old object and preserving the data in the old object, it can also realign the old object to a new alignment requirement.

\paragraph{Usage}
This @realloc@ takes three parameters.
It takes an additional parameter of @nalign@ as compared to the default @realloc@.
\begin{itemize}
\item
@oaddr@: address of the old object.
\item
@nalign@: the new alignment requirement of the object.
\item
@size@: the new size to which the old object needs to be resized.
\end{itemize}
It returns an object with the size and alignment given in the parameters that preserves the data in the old object.
On failure, it returns a @NULL@ pointer.

\subsection{\CFA Malloc Interface}
We added some routines to the @malloc@ interface of \CFA.
These routines can only be used in \CFA and not in our stand-alone llheap allocator, as these routines use some features that are only provided by \CFA and not by C.
This makes the allocator even more usable to programmers.
\CFA provides the liberty to know the returned type of a call to the allocator.
So, mainly in these added routines, we removed the object size parameter from the routine, as the allocator can calculate the size of the object from the returned type.
\subsection{\lstinline{T * malloc( void )}}
This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc).
It does not take any parameters, as compared to the default @malloc@ that takes one parameter.

\paragraph{Usage}
This @malloc@ takes no parameters.
It returns a dynamic object of the size of type @T@.
On failure, it returns a @NULL@ pointer.

\subsection{\lstinline{T * aalloc( size_t dim )}}
This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc).
It takes one parameter, as compared to the above @aalloc@ that takes two parameters.

\paragraph{Usage}
@aalloc@ takes one parameter.
\begin{itemize}
\item
@dim@: required number of objects in the array.
\end{itemize}
It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@.
On failure, it returns a @NULL@ pointer.
\subsection{\lstinline{T * calloc( size_t dim )}} This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc). It takes one parameter as compared to the default @calloc@ that takes two parameters. \paragraph{Usage} This @calloc@ takes one parameter. \begin{itemize} \item @dim@: required number of objects in the array. \end{itemize} It returns a dynamic object that has the capacity to contain @dim@ number of objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * resize( T * ptr, size_t size )}} This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment). It takes two parameters as compared to the above @resize@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the returned type. \paragraph{Usage} This @resize@ takes two parameters. \begin{itemize} \item @size@: the required size of the new object. \end{itemize} It returns a dynamic object of the size given in the parameters.
The returned object is aligned to the alignment of type @T@. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * realloc( T * ptr, size_t size )}} This @realloc@ is a simplified polymorphic form of the default @realloc@ (FIX ME: cite @realloc@ with align). It takes two parameters as compared to the above @realloc@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the returned type. \paragraph{Usage} This @realloc@ takes two parameters. \begin{itemize} \item @size@: the required size of the new object. \end{itemize} It returns a dynamic object of the size given in the parameters that preserves the data in the given object. The returned object is aligned to the alignment of type @T@. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * memalign( size_t align )}} This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign). It takes one parameter as compared to the default @memalign@ that takes two parameters. \paragraph{Usage} @memalign@ takes one parameter. \begin{itemize} \item @align@: the required alignment of the dynamic object.
\end{itemize} It returns a dynamic object of the size of type @T@ that is aligned to the given parameter @align@. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * amemalign( size_t align, size_t dim )}} This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign). It takes two parameters as compared to the above @amemalign@ that takes three parameters. \paragraph{Usage} @amemalign@ takes two parameters. \begin{itemize} \item @dim@: required number of objects in the array. \end{itemize} It returns a dynamic object that has the capacity to contain @dim@ number of objects, each of the size of type @T@. The returned object is aligned to the given parameter @align@. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * cmemalign( size_t align, size_t dim )}} This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign). It takes two parameters as compared to the above @cmemalign@ that takes three parameters. \paragraph{Usage} @cmemalign@ takes two parameters. \begin{itemize} \item @dim@: required number of objects in the array. \end{itemize}
It returns a dynamic object that has the capacity to contain @dim@ number of objects, each of the size of type @T@. The returned object is aligned to the given parameter @align@ and is zero filled. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * aligned_alloc( size_t align )}} This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@). It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters. \paragraph{Usage} This @aligned_alloc@ takes one parameter. \begin{itemize} \item @align@: required alignment of the dynamic object. \end{itemize} It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}} This @posix_memalign@ is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@). It takes two parameters as compared to the default @posix_memalign@ that takes three parameters. \paragraph{Usage} This @posix_memalign@ takes two parameters. It stores the address of the dynamic object of the size of type @T@ in the given parameter @ptr@.
This object is aligned to the given parameter. On failure, it returns a nonzero error code. \subsection{\lstinline{T * valloc( void )}} This @valloc@ is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@). It takes no parameters as compared to the default @valloc@ that takes one parameter. \paragraph{Usage} @valloc@ takes no parameters. It returns a dynamic object of the size of type @T@ that is aligned to the page size. On failure, it returns a @NULL@ pointer. \subsection{\lstinline{T * pvalloc( void )}} \paragraph{Usage} @pvalloc@ takes no parameters. It returns a dynamic object of a size calculated by rounding the size of type @T@ up to a multiple of the page size. The returned object is also aligned to the page size. On failure, it returns a @NULL@ pointer. \subsection{Alloc Interface} In addition to improving the allocator interface both for \CFA and for our stand-alone allocator llheap in C, we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation. This interface helps programmers in three major ways. \begin{itemize}
\item Routine Name: the alloc interface frees programmers from remembering different routine names for different kinds of dynamic allocations.
\item Parameter Positions: the alloc interface frees programmers from remembering parameter positions in calls to routines. \item Object Size: the alloc interface does not require the programmer to mention the object size, as \CFA allows the allocator to determine the object size from the returned type of the alloc call. \end{itemize} The alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to programmers. The new interface has just one routine name, alloc, that can be used to perform a wide range of dynamic allocations. The parameters use backtick functions to provide a named-parameter-like feature for our alloc interface, so that programmers do not have to remember parameter positions in an alloc call, except the position of the dimension (@dim@) parameter. \subsection{Routine: \lstinline{T * alloc( ... )}} A call to alloc without any parameter returns one dynamically allocated object of the size of type @T@. Only the dimension (@dim@) parameter for array allocation has a fixed position in the alloc routine. If the programmer wants to allocate an array of objects, the required number of members in the array has to be given as the first parameter to the alloc routine. The alloc routine accepts six kinds of arguments. Using different combinations of these parameters, different kinds of allocations can be performed. Any combination of parameters can be used together except @realloc@ and @resize@, which should not be used simultaneously in one call to the routine as that creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @resize@ and @realloc@ are used in a call to alloc, then the latter one takes effect or unexpected results may be produced. \paragraph{Dim}
This is the only parameter in the alloc routine that has a fixed position, and it is also the only parameter that does not use a backtick function. It has to be passed at the first position to an alloc call in case of an array allocation of objects of type @T@. It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc). This parameter should be of type @size_t@. \paragraph{Align} This parameter is position-free and uses a backtick routine align (@`align@). The parameter passed with @`align@ should be of type @size_t@. If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be found out using the routine libAlign in \CFA), then the passed alignment parameter will be rejected and the default alignment will be used. Example: @int * b = alloc( 5 , 64`align )@ This call will return a dynamic array of five integers. It will align the allocated object to 64. \paragraph{Fill} This parameter is position-free and uses a backtick routine fill (@`fill@).
In case of @realloc@, only the extra space after copying the data in the old object will be filled with the given parameter. Three types of parameters can be passed using fill. \begin{itemize} \item Object of returned type: an object of the returned type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, recursively till the end of the required allocation. \item Dynamic object of returned type: a dynamic object of the returned type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object. In this case, the allocated memory is not filled recursively till the end of the allocation. The filling happens until the end of the object passed to @`fill@ or the end of the requested allocation is reached. \end{itemize} Example: @int * b = alloc( 5 , 'a'`fill )@ This call will return a dynamic array of five integers. It will fill the allocated object with the character 'a' recursively till the end of the requested allocation size. Example: @int * b = alloc( 5 , 4`fill )@ This call will return a dynamic array of five integers. It will fill the allocated object with the integer 4 recursively till the end of the requested allocation size. Example: @int * b = alloc( 5 , a`fill )@ where @a@ is a pointer of int type. This call will return a dynamic array of five integers.
It will copy the data in @a@ to the returned object non-recursively, until the end of @a@ or the end of the newly allocated object is reached. \paragraph{Resize} This parameter is position-free and uses a backtick routine resize (@`resize@). It represents the old dynamic object (oaddr) that the programmer wants to \begin{itemize} \item fill with something. \end{itemize} The data in the old dynamic object will not be preserved in the new object. The type of the object passed to @`resize@ and the returned type of the alloc call can be different. Example: @int * b = alloc( 5 , a`resize )@ Example: @int * b = alloc( 5 , a`resize , 32`align )@ This call will resize object @a@ to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. Example: @int * b = alloc( 5 , a`resize , 32`align , 2`fill )@ This call will resize object @a@ to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32 and will be filled with 2. \paragraph{Realloc} This parameter is position-free and uses a backtick routine realloc (@`realloc@).
It represents the old dynamic object (oaddr) that the programmer wants to \begin{itemize} \item fill with something. \end{itemize} The data in the old dynamic object will be preserved in the new object. The type of the object passed to @`realloc@ and the returned type of the alloc call cannot be different. Example: @int * b = alloc( 5 , a`realloc )@ Example: @int * b = alloc( 5 , a`realloc , 32`align )@ This call will realloc object @a@ to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. Example: @int * b = alloc( 5 , a`realloc , 32`align , 2`fill )@ This call will realloc object @a@ to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. The extra space after copying the data of @a@ to the returned object will be filled with 2.
• ## doc/theses/mubeen_zulfiqar_MMath/background.tex

 r30d91e4 The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object. An object may be preceded by padding to ensure proper alignment. Some algorithms quantize allocation requests into distinct sizes, called \newterm{buckets}, resulting in additional spacing after objects less than the quantized value. (Note, the buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management-area, where each bucket points to the freed objects of that size.) When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists. A free object also contains management data, \eg size, chaining, etc.
• ## doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig

 r30d91e4 [xfig drawing data elided; recoverable text labels in the figure: kernel threads, N kernel-thread buckets, heap$_1$, heap$_2$, N$\times$S$_1$, N$\times$S$_2$, N$\times$S$_t$, lock, size, free, free list, local pool, global pool (sbrk), heap, free pool.]
• ## doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig

 r30d91e4 (diff of xfig drawing source omitted: allocator-layout figures labelled with kernel threads, H heap buckets, per-heap size/free/lock fields, free lists, local pools, and a global pool (sbrk))
• ## doc/theses/mubeen_zulfiqar_MMath/performance.tex

 r30d91e4 \chapter{Performance} \label{c:Performance} \noindent
• ## doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib

 r30d91e4 }
@misc{nedmalloc,
    author      = {Niall Douglas},
    title       = {nedmalloc version 1.06 Beta},
    month       = jan,
    year        = 2010,
    note        = {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}},
}
@misc{ptmalloc2,
    author      = {Wolfram Gloger},
    title       = {ptmalloc version 2},
    month       = jun,
    year        = 2006,
    note        = {\href{http://www.malloc.de/malloc/ptmalloc2-current.tar.gz}{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}},
}
@misc{GNUallocAPI,
    author      = {GNU},
    title       = {Summary of malloc-Related Functions},
    year        = 2020,
    note        = {\href{https://www.gnu.org/software/libc/manual/html\_node/Summary-of-Malloc.html}{https://www.gnu.org/\-software/\-libc/\-manual/\-html\_node/\-Summary-of-Malloc.html}},
}
@misc{SeriallyReusable,
    author      = {IBM},
    title       = {Serially reusable programs},
    month       = mar,
    year        = 2021,
    note        = {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-serially-reusable-programs}},
}
@misc{librseq,
    author      = {Mathieu Desnoyers},
    title       = {Library for Restartable Sequences},
    month       = mar,
    year        = 2022,
    note        = {\href{https://github.com/compudj/librseq}{https://github.com/compudj/librseq}},
}

• ## doc/theses/thierry_delisle_PhD/thesis/Makefile

 r30d91e4 base \
	base_avg \
	cache-share \
	cache-noshare \
	empty \
	emptybit \
• ## doc/theses/thierry_delisle_PhD/thesis/local.bib

 r30d91e4 note = "[Online; accessed 9-February-2021]"
}
@misc{wiki:rcu,
    author = "{Wikipedia contributors}",
    title = "Read-copy-update --- {W}ikipedia{,} The Free Encyclopedia",
    year = "2022",
    url = "https://en.wikipedia.org/wiki/Read-copy-update",
    note = "[Online; accessed 12-April-2022]"
}
@misc{wiki:rwlock,
    author = "{Wikipedia contributors}",
    title = "Readers-writer lock --- {W}ikipedia{,} The Free Encyclopedia",
    year = "2021",
    url = "https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock",
    note = "[Online; accessed 12-April-2022]"
}
• ## doc/theses/thierry_delisle_PhD/thesis/text/core.tex

 r30d91e4 \item Faster than other schedulers that have equal or better fairness. \end{itemize}

\subsection{Fairness Goals}
For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.

\paragraph{True starvation freedom} is the easier to define: as long as at least one \proc continues to dequeue \ats, all ready \ats should eventually run. In any running system, \procs can stop dequeuing \ats if they start running a \at that simply never parks. Traditional work-stealing schedulers do not have starvation freedom in these cases. This requirement raises the question of preemption. Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.

\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow. For interactive applications that need to run at 60, 90, or 120 frames per second, \ats having to wait several milliseconds to run are effectively starved. Therefore, load balancing should be done at a faster pace, one that can detect starvation at the microsecond scale. With that said, this is a much fuzzier requirement, since it depends on the number of \procs, the number of \ats, and the general load of the system.

\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
Therefore, this unprotected read of the timestamp and average satisfies the limited correctness that is required.

\begin{figure}
	\centering
	\input{cache-share.pstex_t}
	\caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A very simple CPU with 4 \glspl{hthrd}. L1 and L2 are private to each \gls{hthrd}, but the L3 is shared across the entire CPU.}
	\label{fig:cache-share}
\end{figure}

\begin{figure}
	\centering
	\input{cache-noshare.pstex_t}
	\caption[CPU design with a narrower L3 sharing]{CPU design with a narrower L3 sharing \smallskip\newline A different CPU design, still with 4 \glspl{hthrd}. L1 and L2 are still private to each \gls{hthrd}, but the L3 is shared by only some of the \glspl{hthrd}, i.e., there are two distinct L3 instances.}
	\label{fig:cache-noshare}
\end{figure}

With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on some machines. The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}. For example, on machines where the motherboard holds multiple CPUs, cache misses can be satisfied from a cache that belongs to the CPU that missed, the \emph{local} CPU, or by a different CPU, a \emph{remote} one. Cache misses satisfied by a remote CPU have higher latency than those satisfied by the local CPU. However, this is not specific to systems with multiple CPUs: depending on the cache structure, cache misses can have different latencies on the same CPU. The AMD EPYC 7662 CPU described in Chapter~\ref{microbench} is an example. Figure~\ref{fig:cache-share} and Figure~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference. In Figure~\ref{fig:cache-share}, all cache instances are either private to a \gls{hthrd} or shared across the entire system, which means the latency due to cache misses is likely fairly consistent. By comparison, in Figure~\ref{fig:cache-noshare}, misses in the L2 cache can be satisfied by a hit in either instance of the L3; however, the memory access latency to the remote L3 instance is notably higher than to the local L3.

The impact of these different designs on this algorithm is that scheduling scales very well on architectures similar to Figure~\ref{fig:cache-share}, but has notably worse scaling with many narrower L3 instances. This is simply because, as the number of L3 instances grows, so too does the chance that random helping causes significant latency. The solution is to make the scheduler aware of the cache topology.

\subsection{Per CPU Sharding}
Building a scheduler that is aware of cache topology poses two main challenges: discovering the cache topology and matching \procs to cache instances. Sadly, there is no standard portable way to discover cache topology in C. Therefore, while this is a significant portability challenge, it is outside the scope of this thesis to design a cross-platform cache-discovery mechanism. The rest of this work assumes the cache topology is discovered from Linux's \texttt{/sys/devices/system/cpu} directory.

This leaves the challenge of matching \procs to cache instances, or more precisely, identifying which subqueues of the ready queue are local to which cache instance. Once this matching is available, the helping algorithm can be changed to add a bias so that \procs more often help subqueues local to the same cache instance\footnote{Note that, like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}.

The obvious approach to mapping cache instances to subqueues is to statically tie subqueues to CPUs. Instead of having each subqueue local to a specific \proc, the system is initialized with a subqueue for each \gls{hthrd} up front. Then \procs dequeue and enqueue by first asking which CPU id they are local to, in order to identify which subqueues are the local ones. \Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}. This approach solves the performance problems on systems with topologies similar to Figure~\ref{fig:cache-noshare}. However, it causes some subtle fairness problems on some systems, specifically systems with few \procs and many \glspl{hthrd}. In these cases, the large number of subqueues and the bias against subqueues tied to different cache instances make it very unlikely that any single subqueue is picked. To make things worse, the small number of \procs means that few helping attempts are made. This combination of few attempts and low chances means a \at stranded on a subqueue that is not actively dequeued from may wait a very long time before it is randomly helped. On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can actually take multiple seconds for a \at to get dequeued from a remote queue. Therefore, a more dynamic matching of subqueues to cache instances is needed.

\subsection{Topological Work Stealing}
The approach used in the \CFA scheduler is to have per-\proc subqueues, but with an explicit data structure that tracks which cache instance each subqueue is tied to. This requires some finesse, because reading this data structure must lead to fewer cache misses than not having the data structure in the first place. A key element, however, is that, like the timestamps for helping, reading the cache-instance mapping only needs to give the correct result \emph{often enough}. Therefore, the algorithm can be built as follows: before enqueuing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance. Since subqueues are tied to \procs, each \proc can then update the cache instance mapped to the local subqueue(s). To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes.
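The write-only-on-change mapping described above can be sketched in plain C. This is a minimal, hypothetical simulation, not the \CFA runtime's actual code: the names (@subq_cache@, @update_subqueue_cache@, @is_local_victim@) are invented for illustration, and the cache-instance id is passed in rather than derived from @sched_getcpu@ plus the topology map, so the sketch stays self-contained.

```c
#include <stdatomic.h>

#define NSUBQUEUES 4

// Hypothetical per-subqueue record: which cache instance the owning
// proc last observed itself running on.  Zero-initialized at startup.
static _Atomic int subq_cache[NSUBQUEUES];

// Called by a proc before enqueuing/dequeuing on its local subqueue.
// Only write when the mapping changed: an unconditional store would
// invalidate the cache line in every reader on every call.
static void update_subqueue_cache(int subq, int cpu_cache) {
    if (atomic_load_explicit(&subq_cache[subq], memory_order_relaxed) != cpu_cache)
        atomic_store_explicit(&subq_cache[subq], cpu_cache, memory_order_relaxed);
}

// A helping proc biases its random victim choice toward subqueues
// mapped to its own cache instance.  Reading a stale value is fine:
// like the helping timestamps, the answer only needs to be right
// "often enough".
static int is_local_victim(int subq, int my_cache) {
    return atomic_load_explicit(&subq_cache[subq], memory_order_relaxed) == my_cache;
}
```

The relaxed atomics reflect the limited-correctness argument in the text: no ordering is needed, only eventual visibility of the latest mapping.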
• ## doc/theses/thierry_delisle_PhD/thesis/text/io.tex

 r30d91e4 Finally, the last important part of the \io subsystem is its interface. There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. The new \io subsystem can replace the C runtime's API or extend it; in the latter case, the interface can range from very similar to vastly different. The following sections discuss some useful options, using @read@ as an example. The standard Linux interface for C is:

@ssize_t read(int fd, void *buf, size_t count);@

\subsection{Replacement}
Replacing the C \glsxtrshort{api} is the more intrusive and draconian approach. The goal is to convince the compiler and linker to replace any calls to @read@ to direct them to the \CFA implementation instead of glibc's. This has the advantage of potentially working transparently and supporting existing binaries without needing recompilation. It also offers a presumably well-known and familiar API that C programmers can simply continue to work with. However, this approach also entails a plethora of subtle technical challenges, which generally boil down to making a perfect replacement. If the \CFA interface replaces only \emph{some} of the calls to glibc, then this can easily lead to esoteric concurrency bugs. Since the gcc ecosystem does not offer a scheme for such a perfect replacement, this approach was rejected as being laudable but infeasible.

\subsection{Synchronous Extension}
Another interface option is to offer an interface that is different in name only. For example:

@ssize_t cfa_read(int fd, void *buf, size_t count);@

\noindent This is much more feasible but still familiar to C programmers. It comes with the caveat that any code attempting to use it must be recompiled, which can be a big problem considering the number of existing legacy C binaries. However, it has the advantage of implementation simplicity.

\subsection{Asynchronous Extension}
It is important to mention that there is a certain irony in using only synchronous, therefore blocking, interfaces for a feature often referred to as ``non-blocking'' \io. A fairly traditional way of addressing this is using futures\cit{wikipedia futures}. A simple way of doing so is as follows:

@future(ssize_t) read(int fd, void *buf, size_t count);@

\noindent Note that this approach is not necessarily the most idiomatic usage of futures. The definition of @read@ above ``returns'' the read content through an output parameter, which cannot be synchronized on. A more classical asynchronous API could look more like:

@future([ssize_t, void *]) read(int fd, size_t count);@

\noindent However, this interface immediately introduces memory-lifetime challenges, since the call must effectively allocate a buffer to be returned. Because of the performance implications of this, the first approach is considered preferable, as it is also more familiar to C programmers.

\subsection{Interface directly to \lstinline{io_uring}}
Finally, another relevant interface option is to expose the underlying \texttt{io\_uring} interface directly. For example:

@array(SQE, want) cfa_io_allocate(int want);@
@void cfa_io_submit( const array(SQE, have) & );@

\noindent This offers more flexibility to users wanting to make full use of all of the \texttt{io\_uring} features. However, it is not the most user-friendly option. It obviously imposes a strong dependency between user code and \texttt{io\_uring}, while at the same time restricting users to usages that are compatible with how \CFA internally uses \texttt{io\_uring}.
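The future-based @read@ extension discussed in this section can be illustrated in plain C. This is a hedged sketch, not the \CFA API: the types and names (@future_ssize@, @async_read@, @future_get@) are hypothetical, and a pthread stands in for the runtime's \io machinery. It mirrors the preferred design where the caller owns the buffer (the ``output parameter'' style) and synchronizes only on the @ssize_t@ result.

```c
#include <pthread.h>
#include <unistd.h>

// Hypothetical future holding a ssize_t result; not the CFA API.
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             done;
    ssize_t         value;
} future_ssize;

typedef struct { future_ssize * fut; int fd; void * buf; size_t count; } read_req;

// Worker: performs the blocking read off-thread, then completes the future.
static void * do_read(void * arg) {
    read_req * r = arg;
    ssize_t n = read(r->fd, r->buf, r->count);
    pthread_mutex_lock(&r->fut->lock);
    r->fut->value = n;
    r->fut->done = 1;
    pthread_cond_signal(&r->fut->ready);
    pthread_mutex_unlock(&r->fut->lock);
    return 0;
}

// Sketch of: future(ssize_t) read(int fd, void *buf, size_t count);
// The caller keeps ownership of buf, matching the output-parameter style.
static void async_read(future_ssize * fut, read_req * req, pthread_t * tid,
                       int fd, void * buf, size_t count) {
    pthread_mutex_init(&fut->lock, 0);
    pthread_cond_init(&fut->ready, 0);
    fut->done = 0;
    *req = (read_req){ fut, fd, buf, count };
    pthread_create(tid, 0, do_read, req);
}

// Synchronize on the future, retrieving the byte count.
static ssize_t future_get(future_ssize * fut) {
    pthread_mutex_lock(&fut->lock);
    while (!fut->done) pthread_cond_wait(&fut->ready, &fut->lock);
    ssize_t v = fut->value;
    pthread_mutex_unlock(&fut->lock);
    return v;
}
```

In the real runtime the completion would come from \texttt{io\_uring} rather than a dedicated thread; the shape of the caller-facing interface is the point here.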

• ## libcfa/src/containers/array.hfa

 r30d91e4 #include <assert.h>

static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, int i ) { assert( i < N ); return (Timmed &) a.strides[i]; }
static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, unsigned int i ) { assert( i < N ); return (Timmed &) a.strides[i]; }
static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, long int i ) { assert( i < N ); return (Timmed &) a.strides[i]; }
static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, unsigned long int i ) { assert( i < N ); return (Timmed &) a.strides[i]; }
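The @?[?]@ overloads above all assert @i < N@ before indexing, where the bound @N@ travels in the @arpk@ type itself. A simplified plain-C analogue (hypothetical, for illustration only) carries the bound as a runtime field instead, but checks each subscript the same way:

```c
#include <assert.h>
#include <stddef.h>

// Simplified plain-C analogue of CFA's arpk: the bound travels with the
// array so every subscript can be checked.  In CFA, the length N is part
// of the type; here it is a runtime field.
typedef struct {
    size_t length;
    int  * strides;
} checked_array;

// Mirrors the CFA subscript operators: assert( i < N ) then index.
static int * checked_subscript(checked_array * a, size_t i) {
    assert(i < a->length);
    return &a->strides[i];
}
```

As in the \CFA version, returning a reference (here, a pointer) to the element lets the checked subscript appear on either side of an assignment.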
• ## src/AST/Convert.cpp

 r30d91e4 }

const ast::Expr * visit( const ast::DimensionExpr * node ) override final {
	auto expr = visitBaseExpr( node, new DimensionExpr( node->name ) );
	this->node = expr;
	return nullptr;
}

const ast::Expr * visit( const ast::AsmExpr * node ) override final {
	auto expr = visitBaseExpr(

virtual void visit( const DimensionExpr * old ) override final {
	// DimensionExpr gets desugared away in Validate.
	// As long as new-AST passes don't use it, this cheap-cheerful error
	// detection helps ensure that these occurrences have been compiled
	// away, as expected.  To move the DimensionExpr boundary downstream
	// or move the new-AST translation boundary upstream, implement
	// DimensionExpr in the new AST and implement a conversion.
	(void) old;
	assert(false && "DimensionExpr should not be present at new-AST boundary");
	this->node = visitBaseExpr( old, new ast::DimensionExpr( old->location, old->name ) );
}
• ## src/AST/Expr.hpp

 r30d91e4 };

class DimensionExpr final : public Expr {
public:
	std::string name;

	DimensionExpr( const CodeLocation & loc, std::string name )
	: Expr( loc ), name( name ) {}

	const Expr * accept( Visitor & v ) const override { return v.visit( this ); }
private:
	DimensionExpr * clone() const override { return new DimensionExpr{ *this }; }
	MUTATE_FRIEND
};

/// A GCC "asm constraint operand" used in an asm statement, e.g. [output] "=f" (result).
/// https://gcc.gnu.org/onlinedocs/gcc-4.7.1/gcc/Machine-Constraints.html#Machine-Constraints
• ## src/AST/Fwd.hpp

 r30d91e4 class CommaExpr; class TypeExpr; class DimensionExpr; class AsmExpr; class ImplicitCopyCtorExpr;
• ## src/AST/Pass.hpp

 r30d91e4 const ast::Expr *             visit( const ast::CommaExpr            * ) override final; const ast::Expr *             visit( const ast::TypeExpr             * ) override final; const ast::Expr *             visit( const ast::DimensionExpr        * ) override final; const ast::Expr *             visit( const ast::AsmExpr              * ) override final; const ast::Expr *             visit( const ast::ImplicitCopyCtorExpr * ) override final;
• ## src/AST/Pass.impl.hpp

 r30d91e4 __pass::symtab::addId( core, 0, func );

if ( __visit_children() ) {
	// parameter declarations
	maybe_accept( node, &FunctionDecl::type_params );
	maybe_accept( node, &FunctionDecl::assertions );
	maybe_accept( node, &FunctionDecl::params );
	maybe_accept( node, &FunctionDecl::returns );
	// type params and assertions
	maybe_accept( node, &FunctionDecl::type_params );
	maybe_accept( node, &FunctionDecl::assertions );
	maybe_accept( node, &FunctionDecl::type );
	// First remember that we are now within a function.
	ValueGuard< bool > oldInFunction( inFunction );

//--------------------------------------------------------------------------
// DimensionExpr
template< typename core_t >
const ast::Expr * ast::Pass< core_t >::visit( const ast::DimensionExpr * node ) {
	VISIT_START( node );
	if ( __visit_children() ) {
		guard_symtab guard { *this };
		maybe_accept( node, &DimensionExpr::result );
	}
	VISIT_END( Expr, node );
}

//--------------------------------------------------------------------------
// AsmExpr
template< typename core_t >

if ( __visit_children() ) {
	// xxx - should PointerType visit/mutate dimension?
	maybe_accept( node, &PointerType::dimension );
	maybe_accept( node, &PointerType::base );
}
• ## src/AST/Pass.proto.hpp

 r30d91e4 struct PureVisitor;

template< typename node_t >
node_t * deepCopy( const node_t * localRoot );

namespace __pass {
	template< typename core_t >
	static inline auto addStructFwd( core_t & core, int, const ast::StructDecl * decl )
			-> decltype( core.symtab.addStruct( decl ), void() ) {
		ast::StructDecl * fwd = new ast::StructDecl( decl->location, decl->name );
		fwd->params = decl->params;
		for ( const auto & param : decl->params ) {
			fwd->params.push_back( deepCopy( param.get() ) );
		}
		core.symtab.addStruct( fwd );
	}

	template< typename core_t >
	static inline auto addUnionFwd( core_t & core, int, const ast::UnionDecl * decl )
			-> decltype( core.symtab.addUnion( decl ), void() ) {
		UnionDecl * fwd = new UnionDecl( decl->location, decl->name );
		fwd->params = decl->params;
		ast::UnionDecl * fwd = new ast::UnionDecl( decl->location, decl->name );
		for ( const auto & param : decl->params ) {
			fwd->params.push_back( deepCopy( param.get() ) );
		}
		core.symtab.addUnion( fwd );
	}
• ## src/AST/Print.cpp

 r30d91e4 }

virtual const ast::Expr * visit( const ast::DimensionExpr * node ) override final {
	os << "Type-Sys Value: " << node->name;
	postprint( node );
	return node;
}

virtual const ast::Expr * visit( const ast::AsmExpr * node ) override final {
	os << "Asm Expression:" << endl;
• ## src/AST/Visitor.hpp

 r30d91e4 virtual const ast::Expr *             visit( const ast::CommaExpr            * ) = 0; virtual const ast::Expr *             visit( const ast::TypeExpr             * ) = 0; virtual const ast::Expr *             visit( const ast::DimensionExpr        * ) = 0; virtual const ast::Expr *             visit( const ast::AsmExpr              * ) = 0; virtual const ast::Expr *             visit( const ast::ImplicitCopyCtorExpr * ) = 0;
• ## src/Common/CodeLocationTools.cpp

 r30d91e4 macro(CommaExpr, Expr) \ macro(TypeExpr, Expr) \ macro(DimensionExpr, Expr) \ macro(AsmExpr, Expr) \ macro(ImplicitCopyCtorExpr, Expr) \
• ## src/InitTweak/GenInit.cc

 r30d91e4 retVal->location, "?{}", retVal, stmt->expr ); assertf( ctorStmt, "ReturnFixer: genCtorDtor returned nllptr: %s / %s", "ReturnFixer: genCtorDtor returned nullptr: %s / %s", toString( retVal ).c_str(), toString( stmt->expr ).c_str() ); stmtsToAddBefore.push_back( ctorStmt ); stmtsToAddBefore.push_back( ctorStmt ); // Return the retVal object. void genInit( ast::TranslationUnit & transUnit ) { ast::Pass::run( transUnit ); ast::Pass::run( transUnit ); } void fixReturnStatements( ast::TranslationUnit & transUnit ) { ast::Pass::run( transUnit ); }
• ## src/InitTweak/GenInit.h

 r30d91e4 // Created On       : Mon May 18 07:44:20 2015 // Last Modified By : Andrew Beach // Last Modified On : Fri Oct 22 16:08:00 2021 // Update Count     : 6 // Last Modified On : Fri Mar 18 14:22:00 2022 // Update Count     : 7 // /// Converts return statements into copy constructor calls on the hidden return variable void fixReturnStatements( std::list< Declaration * > & translationUnit ); void fixReturnStatements( ast::TranslationUnit & translationUnit ); /// generates a single ctor/dtor statement using objDecl as the 'this' parameter and arg as the optional argument
• ## src/Validate/module.mk

 r30d91e4 Validate/ForallPointerDecay.cpp \ Validate/ForallPointerDecay.hpp \ Validate/GenericParameter.cpp \ Validate/GenericParameter.hpp \ Validate/HandleAttributes.cc \ Validate/HandleAttributes.h \ Validate/LabelAddressFixer.cpp \ Validate/LabelAddressFixer.hpp \ Validate/ReturnCheck.cpp \ Validate/ReturnCheck.hpp \ Validate/FindSpecialDeclsNew.cpp \ Validate/FindSpecialDecls.cc \
• ## src/main.cc

 r30d91e4 // Created On       : Fri May 15 23:12:02 2015 // Last Modified By : Andrew Beach // Last Modified On : Fri Mar 11 10:39:00 2022 // Update Count     : 671 // Last Modified On : Wed Apr 13 11:11:00 2022 // Update Count     : 672 // #include "Tuples/Tuples.h"                  // for expandMemberTuples, expan... #include "Validate/Autogen.hpp"             // for autogenerateRoutines #include "Validate/GenericParameter.hpp"    // for fillGenericParameters, tr... #include "Validate/FindSpecialDecls.h"      // for findGlobalDecls #include "Validate/ForallPointerDecay.hpp"  // for decayForallPointers #include "Validate/InitializerLength.hpp"   // for setLengthFromInitializer #include "Validate/LabelAddressFixer.hpp"   // for fixLabelAddresses #include "Validate/ReturnCheck.hpp"         // for checkReturnStatements #include "Virtual/ExpandCasts.h"            // for expandCasts PASS( "Validate-A", SymTab::validate_A( translationUnit ) ); PASS( "Validate-B", SymTab::validate_B( translationUnit ) ); PASS( "Validate-C", SymTab::validate_C( translationUnit ) ); CodeTools::fillLocations( translationUnit ); forceFillCodeLocations( transUnit ); // Check as early as possible. Can't happen before // LinkReferenceToType, observed failing when attempted // before eliminateTypedef PASS( "Validate Generic Parameters", Validate::fillGenericParameters( transUnit ) ); PASS( "Translate Dimensions", Validate::translateDimensionParameters( transUnit ) ); PASS( "Check Function Returns", Validate::checkReturnStatements( transUnit ) ); // Must happen before Autogen. PASS( "Fix Return Statements", InitTweak::fixReturnStatements( transUnit ) ); PASS( "Implement Concurrent Keywords", Concurrency::implementKeywords( transUnit ) ); translationUnit = convert( move( transUnit ) ); } else { PASS( "Validate-C", SymTab::validate_C( translationUnit ) ); PASS( "Validate-D", SymTab::validate_D( translationUnit ) ); PASS( "Validate-E", SymTab::validate_E( translationUnit ) );
Note: See TracChangeset for help on using the changeset viewer.