Changeset 365c8dcb
- Timestamp:
- Apr 14, 2022, 3:00:28 PM (2 years ago)
- Branches:
- ADT, ast-experimental, enum, master, pthread-emulation, qualifiedEnum
- Children:
- bfd5512
- Parents:
- 30d91e4 (diff), 4ec9513 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files:
-
- 12 added
- 1 deleted
- 29 edited
- 2 moved
-
doc/theses/mike_brooks_MMath/.gitignore
r30d91e4 r365c8dcb 1 1 !Makefile 2 2 build 3 uw-thesis.pdf 3 uw-ethesis.pdf -
doc/theses/mike_brooks_MMath/array.tex
r30d91e4 r365c8dcb 156 156 enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b ) 157 157 \end{eqnarray*} 158 158 159 160 \section{Bound checks, added and removed} 161 162 \CFA array subscripting is protected with runtime bound checks. Having dependent typing causes the optimizer to remove more of these bound checks than it would without them. This section provides a demonstration of the effect. 163 164 The experiment compares the \CFA array system with the padded-room system [todo:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array. The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based. The experiment compares with the C++ version to keep access to generated assembly code simple. 165 166 As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions. When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code. But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake. 167 168 TODO: paste source and assembly codes 169 170 Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized. The case is naive matrix multiplication over a row-major encoding. 171 172 TODO: paste source codes 173 174 175 176 177 178 \section{Comparison with other arrays} 179 180 \CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C. Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference. These systems, therefore, ask the programmer to convince the typechecker that every pointer dereference is valid. \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter. 181 182 \CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns. Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user-provided pointer arithmetic is self-consistent. The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program. 183 184 \subsection{Safety in a padded room} 185 186 Java's array [todo:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties. Consider the array parameter declarations in: 187 188 \begin{tabular}{rl} 189 C & @void f( size_t n, size_t m, float a[n][m] );@ \\ 190 Java & @void f( float[][] a );@ 191 \end{tabular} 192 193 Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access. If a value of @i@ outside this range is used, a runtime error is guaranteed. In these respects, C offers no guarantees at all. Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only. Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@ as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention. Moreover, even if @a[0][0]@ is valid for the purpose intended, C's infamous basic feature is the possibility of an @i@, such that @a[i][0]@ is not valid for the same purpose, and yet, its evaluation does not produce an error. 194
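To make the silent-failure contrast concrete, the following is a minimal C sketch (illustrative only, not the experiment's code; the function and sizes are hypothetical):
\begin{lstlisting}
#include <stddef.h>
void f( size_t n, size_t m, float a[n][m] ) {
	a[n - 1][m - 1] = 1.0f;    // in bounds: fine
	a[n][0] = 1.0f;            // out of bounds: undefined behaviour, no error raised
}
int main() {
	float a[2][3];
	f( 2, 3, a );              // typically runs to completion, silently corrupting memory
}
\end{lstlisting}
The corresponding out-of-bound access in Java would raise a runtime exception at the second assignment.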
195 Java's lack of expressiveness for more applied properties means these outcomes are possible: 196 \begin{itemize} 197 \item @a[0][17]@ and @a[2][17]@ are valid accesses, yet @a[1][17]@ is a runtime error, because @a[1]@ is a null pointer 198 \item the same observation, now because @a[1]@ refers to an array of length 5 199 \item execution times vary, because the @float@ values within @a@ are sometimes stored nearly contiguously, and other times, not at all 200 \end{itemize} 201 C's array has none of these limitations, nor do any of the ``array language'' comparators discussed in this section. 202 203 This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [todo:cite example], for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array. The advice is that, while a vector is also more powerful (and quirky) than an array, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors. Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values. Allowing this scheme the same referential integrity assumption that \CFA enjoys [todo:xref], this scheme matches Java's safety and expressiveness exactly. [TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.] 204 205 \subsection{Levels of dependently typed arrays} 206 207 The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as: 208 \begin{itemize} 209 \item a \emph{zip}-style operation that consumes two arrays of equal length (sketched below) 210 \item a \emph{map}-style operation whose produced length matches the consumed length 211 \item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions 212 \end{itemize} 213 Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and ability to opt out. Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C. 214 215 216
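As a sketch of the first example, a hypothetical \CFA signature using the @forall( [N] )@ dimension parameter discussed below (the @array@ type and loop form are illustrative, not a fixed design):
\begin{lstlisting}
forall( [N] )
void zip( array( float, N ) & a, array( float, N ) & b, array( float, N ) & out ) {
	for ( i; N ) out[i] = a[i] + b[i];  // equal lengths guaranteed by the type system
}
\end{lstlisting}
A call supplying arrays of unequal static length is rejected at compile time, rather than checked at each subscript.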
217 Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted. In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example. The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination. In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate. [TODO: clarify how even Idris type checking terminates] 218 219 Idris is a current, general-purpose dependently typed programming language. Length checking is a common benchmark for full dependent type systems. Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started. [todo: finish explaining what Data.Vect is and then the essence of the comparison] 220 221 POINTS: 222 here is how our basic checks look (on a system that doesn't have to compromise); 223 it can also do these other cool checks, but watch how I can mess with its conservativeness and termination 224 225 Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length. Unlike \CFA, both are garbage-collected functional languages. Because they are garbage-collected, referential integrity is built-in, meaning the heavyweight analysis that \CFA aims to avoid is unnecessary. So, like \CFA, the checking in question is a lightweight bounds-only analysis. Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression. 226 227 228 229 The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics. There is a particular emphasis on an existential type, enabling callee-determined return shapes. 230 231 Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type. 232 233 Futhark and full-strength dependently typed languages treat array sizes as ordinary values. Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not. 234 235 \CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances. Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark. Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark. 236 237 \subsection{Static safety in C extensions} 238 239 240 \section{Future Work} 241 242 \subsection{Declaration syntax} 243 244 \subsection{Range slicing} 245 246 \subsection{With a module system} 247 248 \subsection{With described enumerations} 249 250 A project in \CFA's current portfolio will improve enumerations. In the incumbent state, \CFA has C's enumerations, unmodified. I will not discuss the core of this project, which already has a tall mission: to improve type safety, maintain appropriate C compatibility, and offer more flexibility about storage use.
It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations: 251 \begin{lstlisting} 252 forall( T | is_enum(T) ) 253 void show_in_context( T val ) { 254 for( T i ) { 255 string decorator = ""; 256 if ( i == val-1 ) decorator = "< ready"; 257 if ( i == val ) decorator = "< go" ; 258 sout | i | decorator; 259 } 260 } 261 enum weekday { mon, tue, wed = 500, thu, fri }; 262 show_in_context( wed ); 263 \end{lstlisting} 264 with output 265 \begin{lstlisting} 266 mon 267 tue < ready 268 wed < go 269 thu 270 fri 271 \end{lstlisting} 272 The details in this presentation aren't meant to be taken too precisely as suggestions for how it should look in \CFA. But the example shows these abilities: 273 \begin{itemize} 274 \item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type 275 \item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@) 276 \item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@) 277 \item a provision for looping (the @for@ form used) over the values of the type. 278 \end{itemize} 279 280 If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level. 281 282 [TODO: introduce Ada in the comparators] 283 284 In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers. The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing. 285 286 This change of perspective also lets us remove ubiquitous dynamic bound checks. [TODO: xref] discusses how automatically inserted bound checks can often be optimized away. But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation. To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet. 287 288 [TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.] 289 290 The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values. Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above. 291 \begin{lstlisting} 292 interface Ix n 293 get_size n : Unit -> Int 294 ordinal : n -> Int 295 unsafe_from_ordinal n : Int -> n 296 \end{lstlisting} 297 298 Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes. This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples. Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result). In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size. 299 300 The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic. In the audio instantiation, the duration-of-clip type argument is used for the domain. In the photograph instantiation, it's the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $. This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements 301 \begin{lstlisting} 302 instance {a b} [Ix a, Ix b] Ix (a & b) 303 get_size = \(). size a * size b 304 ordinal = \(i, j). (ordinal i * size b) + ordinal j 305 unsafe_from_ordinal = \o. 306 bs = size b 307 (unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs)) 308 \end{lstlisting} 309 and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time 310 \begin{lstlisting} 311 img_trans :: (img_wd,img_ht)=>Real 312 img_trans.(i,j) = img.i.j 313 result = pairwise img_trans 314 \end{lstlisting} 315 [TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4] 316 317 In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary. 318 319 \subsection{Retire pointer arithmetic} -
doc/theses/mike_brooks_MMath/uw-ethesis.bib
r30d91e4 r365c8dcb 2 2 % For use with BibTeX 3 3 4 % -------------------------------------------------- 5 % Cforall 6 @misc{cfa:frontpage, 7 url = {https://cforall.uwaterloo.ca/} 8 } 9 @article{cfa:typesystem, 10 author = {Aaron Moss and Robert Schluntz and Peter A. Buhr}, 11 title = {{\CFA} : Adding modern programming language features to {C}}, 12 journal = {Softw. Pract. Exp.}, 13 volume = {48}, 14 number = {12}, 15 pages = {2111--2146}, 16 year = {2018}, 17 url = {https://doi.org/10.1002/spe.2624}, 18 doi = {10.1002/spe.2624}, 19 timestamp = {Thu, 09 Apr 2020 17:14:14 +0200}, 20 biburl = {https://dblp.org/rec/journals/spe/MossSB18.bib}, 21 bibsource = {dblp computer science bibliography, https://dblp.org} 22 } 23 24 25 % -------------------------------------------------- 26 % Array prior work 27 28 @inproceedings{arr:futhark:tytheory, 29 author = {Henriksen, Troels and Elsman, Martin}, 30 title = {Towards Size-Dependent Types for Array Programming}, 31 year = {2021}, 32 isbn = {9781450384667}, 33 publisher = {Association for Computing Machinery}, 34 address = {New York, NY, USA}, 35 url = {https://doi.org/10.1145/3460944.3464310}, 36 doi = {10.1145/3460944.3464310}, 37 abstract = {We present a type system for expressing size constraints on array types in an ML-style type system. The goal is to detect shape mismatches at compile-time, while being simpler than full dependent types. The main restrictions is that the only terms that can occur in types are array sizes, and syntactically they must be variables or constants. For those programs where this is not sufficient, we support a form of existential types, with the type system automatically managing the requisite book-keeping. We formalise a large subset of the type system in a small core language, which we prove sound. We also present an integration of the type system in the high-performance parallel functional language Futhark, and show on a collection of 44 representative programs that the restrictions in the type system are not too problematic in practice.}, 38 booktitle = {Proceedings of the 7th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming}, 39 pages = {1–14}, 40 numpages = {14}, 41 keywords = {functional programming, parallel programming, type systems}, 42 location = {Virtual, Canada}, 43 series = {ARRAY 2021} 44 } 45 46 @article{arr:dex:long, 47 author = {Adam Paszke and 48 Daniel D. Johnson and 49 David Duvenaud and 50 Dimitrios Vytiniotis and 51 Alexey Radul and 52 Matthew J. Johnson and 53 Jonathan Ragan{-}Kelley and 54 Dougal Maclaurin}, 55 title = {Getting to the Point. Index Sets and Parallelism-Preserving Autodiff 56 for Pointful Array Programming}, 57 journal = {CoRR}, 58 volume = {abs/2104.05372}, 59 year = {2021}, 60 url = {https://arxiv.org/abs/2104.05372}, 61 eprinttype = {arXiv}, 62 eprint = {2104.05372}, 63 timestamp = {Mon, 25 Oct 2021 07:55:47 +0200}, 64 biburl = {https://dblp.org/rec/journals/corr/abs-2104-05372.bib}, 65 bibsource = {dblp computer science bibliography, https://dblp.org} 66 } -
doc/theses/mubeen_zulfiqar_MMath/allocator.tex
r30d91e4 r365c8dcb 1 1 \chapter{Allocator} 2 2 3 \section{uHeap} 4 uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements). 5 6 The objective of uHeap's new design was to fulfill the following requirements: 7 \begin{itemize} 8 \item It should be concurrent and thread-safe for multi-threaded programs. 9 \item It should avoid global locks, on resources shared across all threads, as much as possible. 10 \item Its performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators). 11 \item It should be a lightweight memory allocator. 12 \end{itemize} 3 This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading). 4 The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}. 5 6 7 \section{llheap} 8 9 The primary design objective for llheap is low-latency across all allocator calls independent of application access-patterns and/or number of threads, \ie very seldom does the allocator have a delay during an allocator call. 10 (Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.) 11 A direct consequence of this objective is very simple or no storage coalescing; 12 hence, llheap's design is willing to use more storage to lower latency. 13 This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM memory. 14 Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{Performance}). 15 16 % The objective of llheap's new design was to fulfill the following requirements: 17 % \begin{itemize} 18 % \item It should be concurrent and thread-safe for multi-threaded programs. 19 % \item It should avoid global locks, on resources shared across all threads, as much as possible. 20 % \item Its performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators). 21 % \item It should be a lightweight memory allocator. 22 % \end{itemize} 13 23 14 24 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 15 25 16 \section{Design choices for uHeap} 17 uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, the following designs of uHeapLmm were proposed: 18 19 \paragraph{Design 1: Centralized} 20 One heap, but lower bucket sizes are N-shared across KTs. 21 This design leverages the fact that 95\% of allocation requests are less than 512 bytes and there are only 3--5 different request sizes. 22 When KTs $\le$ N, the important bucket sizes are uncontended. 23 When KTs $>$ N, the free buckets are contended. 24 Therefore, threads are only contending for a small number of buckets, which are distributed among them to reduce contention.
25 \begin{cquote} 26 \section{Design Choices} 27 28 llheap's design was reviewed and changed multiple times throughout the thesis. 29 Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}). 30 Note, a few simple tests for a design choice were compared with the current best allocators to determine the viability of a design. 31 32 33 \subsection{Allocation Fastpath} 34 35 These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced. 36 \paragraph{T:1 model} 37 \VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs. 38 This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes. 39 When KTs $\le$ N, the common bucket sizes are uncontended; 40 when KTs $>$ N, the free buckets are contended and latency increases significantly. 41 In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared. 42 Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency; 43 however, picking N is workload specific. 44 45 \begin{figure} 46 \centering 47 \input{AllocDS1} 48 \caption{T:1 with Shared Buckets} 49 \label{f:T1SharedBuckets} 50 \end{figure} 51 52 Problems: 53 \begin{itemize} 54 \item 55 Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator. 56 \item 57 When no thread is assigned a bucket number, its free storage is unavailable. 58 \item 59 All KTs contend for the global-pool lock for initial allocations, before free-lists get populated. 60 \end{itemize} 61 Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. 62 63 \paragraph{T:H model} 64 \VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs. 65 A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket. 66 When KTs $\le$ N, the heaps are uncontended; 67 when KTs $>$ N, the heaps are contended. 68 In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared. 69 By adjusting N upwards, this approach reduces contention but increases storage (time versus space); 70 however, picking N is workload specific. 71 72 \begin{figure} 26 73 \centering 27 74 \input{AllocDS2}
38 In all cases, a thread acquires/releases a lock, contended or uncontended. 39 \begin{cquote} 40 \centering 41 \input{AllocDS1} 42 \end{cquote} 43 Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT. 44 45 \paragraph{Design 3: Decentralized Per-thread Heaps} 46 Design 3 is similar to Design 2 but instead of having an M:N model, it uses a 1:1 model. So, instead of having N heaps and sharing them among M KTs, Design 3 has one heap for each KT. 47 Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area. 48 Each KT will have its own exclusive thread-local heap. The heap will be uncontended between KTs regardless of how many KTs have been created. 49 Operations on the @sbrk@ area will still be protected by locks. 50 %\begin{cquote} 51 %\centering 52 %\input{AllocDS3} FIXME add figs 53 %\end{cquote} 54 Problems: We cannot destroy the heap when a KT exits because our dynamic objects have ownership and they are returned to the heap that created them when the program frees a dynamic object. All dynamic objects point back to their owner heap. If a thread A creates an object O, passes it to another thread B, and A itself exits, then when B frees object O, O should return to A's heap, so A's heap should be preserved for the lifetime of the whole program, as there might be objects in use by other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT. 55 56 \paragraph{Design 4: Decentralized Per-CPU Heaps} 57 Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU. 58 Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area. 59 Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be serviced by the heap of the CPU where the process is currently running. 60 Each CPU will have its own exclusive heap. Just like Design 3 (FIXME cite), the heap will be uncontended between KTs regardless of how many KTs have been created. 61 Operations on the @sbrk@ area will still be protected by locks. 62 To deal with preemption during a dynamic memory operation, librseq (FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences can make it possible to re-run a critical section and undo the current writes if a preemption happened during the critical section's execution. 63 %\begin{cquote} 64 %\centering 65 %\input{AllocDS4} FIXME add figs 66 %\end{cquote} 67 68 Problems: This approach was slower than the per-thread model. Also, librseq does not provide such restartable sequences to detect preemptions in a user-level threading system, which is important to us as CFA (FIXME cite) has its own threading system that we want to support. 69 70 Out of the four designs, Design 3 was chosen because of the following reasons. 71 \begin{itemize} 72 \item 73 Decentralized designs are better in general as compared to a centralized design because their concurrency is better across all bucket-sizes, as Design 1 shards a few buckets of selected sizes while the other designs shard all the buckets. Decentralized designs shard the whole heap, which has all the buckets, with the addition of sharding the sbrk area. So Design 1 was eliminated.
74 \item 75 Design 2 was eliminated because it has a possibility of contention in case of KTs > N, while Designs 3 and 4 have no contention in any scenario. 76 \item 77 Design 4 was eliminated because it was slower than Design 3 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety, which has some cost to it. Design 4 was already slower than Design 3; adding the cost of interruption handling on top of that would have made it even slower. 78 \end{itemize} 79 80 81 \subsection{Advantages of distributed design} 82 83 The distributed design of uHeap is concurrent to work in multi-threaded applications. 84 85 Some key benefits of the distributed design of uHeap are as follows: 86 87 \begin{itemize} 88 \item 89 The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as bump allocation reserve. The call to sbrk will be protected using locks but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned. 90 \item 91 Low or almost no contention on heap resources. 92 \item 93 It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty. 94 \item 95 Distributed design avoids unnecessary locks on resources shared across all KTs. 96 \end{itemize} 75 \caption{T:H with Shared Heaps} 76 \label{f:THSharedHeaps} 77 \end{figure} 78 79 Problems: 80 \begin{itemize} 81 \item 82 Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator. 83 \item 84 When no thread is assigned to a heap, its free storage is unavailable. 85 \item 86 Ownership issues arise (see \VRef{s:Ownership}). 87 \item 88 All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated. 89 \end{itemize} 90 Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency. 91 92 \paragraph{T:H model, H = number of CPUs} 93 This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@. 94 (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.) 95 Hence, each CPU logically has its own private heap and local pool. 96 A memory operation is serviced from the heap associated with the CPU executing the operation. 97 This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership). 98 This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs. 99 100 Problems: 101 \begin{itemize} 102 \item 103 Need to know when a CPU is added/removed from the @taskset@. 104 \item 105 Need a fast way to determine the CPU a KT is executing on to access the appropriate heap. 106 \item 107 Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
108 \begin{quote} 109 A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable} 110 \end{quote} 111 If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness. 112 Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable. 113 Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler. 114 115 \noindent 116 Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted. 117 \end{itemize} 118 Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs. 119 Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs. 120 For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs. 121 The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation. 122 Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section. 123 Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap. 124 A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection. 125 126 127 \paragraph{1:1 model} 128 This design is the T:H model with T = H, where there is one thread-local heap for each KT. 129 (See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.) 130 Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted. 131 Heaps are uncontended for a KT's memory operations to its heap (modulo operations on the global pool and ownership). 132 133 Problems: 134 \begin{itemize} 135 \item 136 Need to know when a KT starts/terminates to create/delete its heap. 137 138 \noindent 139 It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates (see the sketch following this list). 140 \item 141 There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap. 142 143 \noindent 144 The classic solution only deletes a heap after all referents are returned, which is complex. 145 The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees. 146 If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
147 To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation). 148 In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT. 149 \item 150 There can be significant external fragmentation as the number of KTs increases. 151 152 \noindent 153 In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs. 154 Since the number of CPUs is relatively small, $<$~1024, and a heap relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs. 155 \item 156 There is the same serially-reusable problem with UTs migrating across KTs. 157 \end{itemize} 158 Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
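The following C sketch illustrates the thread-local lifetime hooks mentioned above, using a @pthread@ key destructor; the names @recycleHeap@ and @acquireHeap@ are hypothetical, not llheap's actual internals:
\begin{lstlisting}
#include <pthread.h>
#include <stdlib.h>

struct Heap { struct Heap * next; /* buckets, pools ... */ };

static pthread_key_t heapKey;                   // its destructor runs at KT termination
static pthread_once_t once = PTHREAD_ONCE_INIT;
static __thread struct Heap * myHeap = NULL;    // this KT's private heap

static void recycleHeap( void * h ) {           // hypothetical: KT terminated
	/* push h onto the global free-heap stack for reuse by a new KT */
}
static void makeKey( void ) { pthread_key_create( &heapKey, recycleHeap ); }

static struct Heap * acquireHeap( void ) {
	if ( myHeap == NULL ) {                     // this KT's first allocation
		pthread_once( &once, makeKey );
		myHeap = malloc( sizeof( struct Heap ) );  // hypothetical: or reuse a free heap
		pthread_setspecific( heapKey, myHeap ); // register the termination hook
	}
	return myHeap;
}
\end{lstlisting}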
159 160 161 \vspace{5pt} 162 \noindent 163 The conclusion from this design exercise is: any atomic fence, instruction (lock free), or lock along the allocation fastpath produces significant slowdown. 164 For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N. 165 For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath. 166 However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs. 167 More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading. 168 This leaves the 1:1 model, with no atomic actions along the fastpath and no special operating-system support required. 169 The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns. 170 171 172 % \begin{itemize} 173 % \item 174 % A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated. 175 % \item 176 % Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario. 177 % \item 178 % Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it. 179 % that because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower. 180 % \end{itemize} 181 % Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons: 182 183 % \subsection{Advantages of distributed design} 184 % 185 % The distributed design of llheap is concurrent to work in multi-threaded applications. 186 % Some key benefits of the distributed design of llheap are as follows: 187 % \begin{itemize} 188 % \item 189 % The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned. 190 % \item 191 % Low or almost no contention on heap resources. 192 % \item 193 % It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty. 194 % \item 195 % Distributed design avoids unnecessary locks on resources shared across all KTs. 196 % \end{itemize} 197 198 \subsection{Allocation Latency} 199 200 A primary goal of llheap is low latency. 201 Two forms of latency are internal and external. 202 Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the operating system. 203 Ideally latency is $O(1)$ with a small constant. 204 205 Obtaining $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and leads to external fragmentation. 206 The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger). 207 208 Obtaining $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation. 209 Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable. 210 The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}). 211 Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent. 212 97 213 98 214 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 99 215 100 \section{uHeap Structure} 101 102 As described in (FIXME cite 2.4) uHeap uses the following features of multi-threaded memory allocators. 103 \begin{itemize} 104 \item 105 uHeap has multiple heaps without a global heap and uses a 1:1 model. (FIXME cite 2.5 1:1 model) 106 \item 107 uHeap uses object ownership. (FIXME cite 2.5.2) 108 \item 109 uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead each dynamic object allocated by uHeap has a header that contains bookkeeping information. 110 \item 111 Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using the sbrk() call. (FIXME cite 2.7) 112 \item 113 Unless a heap is freeing an object that is owned by another thread's heap or the heap is using the sbrk() system call, uHeap is mostly lock-free, which eliminates most of the contention on shared resources. (FIXME cite 2.8) 114 \end{itemize} 115 116 As uHeap uses a heap per-thread model to reduce contention on heap resources, we manage a list of heaps (heap-list) that can be used by threads. The list is empty at the start of the program.
When a kernel thread (KT) is created, we check if the heap-list is empty. If not, a heap is removed from the heap-list and given to this new KT to use exclusively. If it is empty, a new heap object is created in dynamic memory and given to this new KT to use exclusively. When a KT exits, its heap is not destroyed but instead put on the heap-list, ready to be reused by new KTs. 117 118 This reduces the memory footprint as the objects on the free-lists of a KT that has exited can be reused by a new KT. Also, we preserve all the heaps that were created during the lifetime of the program till the end of the program. uHeap uses object ownership, where an object is freed to the free-buckets of the heap that allocated it. Even after a KT A has exited, its heap has to be preserved, as there might be objects in use by other threads that were initially allocated by A and then passed to other threads. 216 \section{llheap Structure} 217 218 \VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features: 219 \begin{itemize} 220 \item 221 1:1 multiple-heap model to minimize the fastpath, 222 \item 223 can be built with or without heap ownership, 224 \item 225 headers per allocation versus containers, 226 \item 227 no coalescing to minimize latency, 228 \item 229 local reserved memory (pool) obtained from the operating system using @sbrk@ call, 230 \item 231 global reserved memory (pool) obtained from the operating system using @mmap@ call to create and reuse heaps needed by threads. 232 \end{itemize} 119 233 120 234 \begin{figure} 121 235 \centering 122 \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps} 123 \caption{HeapStructure} 124 \label{fig:heapStructureFig} 236 % \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps} 237 \input{llheap} 238 \caption{llheap Structure} 239 \label{f:llheapStructure} 125 240 \end{figure} 126 241 127 Each heap uses segregated free-buckets that have free objects of a specific size. Each free-bucket of a specific size has the following two lists in it: 128 \begin{itemize} 129 \item 130 The free list is used when a thread is freeing an object that is owned by its own heap, so the free list does not use any locks/atomic-operations, as it is only used by the owner KT. 131 \item 132 The away list is used when a thread A is freeing an object that is owned by another KT B's heap. This object should be freed to the owner heap (B's heap), so A places the object on the away list of B. The away list is lock protected, as it is shared by all other threads. 133 \end{itemize} 134 135 When a dynamic object of size S is requested, the thread-local heap checks if S is greater than or equal to the mmap threshold. Any request larger than the mmap threshold is fulfilled by allocating an mmap area of that size; such requests are not allocated in the sbrk area. The value of this threshold can be changed using the mallopt routine, but the new value should not be larger than our biggest free-bucket size. 136 137 Algorithm~\ref{alg:heapObjectAlloc} briefly shows how an allocation request is fulfilled. 242 llheap starts by creating an array of $N$ global heaps from storage obtained by @mmap@, where $N$ is the number of computer cores. 243 There is a global bump-pointer to the next free heap in the array. 244 When this array is exhausted, another array is allocated. 245 There is a global top pointer to a heap intrusive link that chains free heaps from terminated threads, where these heaps are reused by new threads.
246 When statistics are turned on, there is a global top pointer to a heap intrusive link that chains \emph{all} the heaps, which is traversed to accumulate statistics counters across heaps (see @malloc_stats@ \VRef{}). 247 248 When a KT starts, a heap is allocated from the current array for exclusive use by the KT. 249 When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps. 250 The free-heap list is a stack, so hot storage is reused first. 251 Preserving all heaps created during the program lifetime solves the storage lifetime problem. 252 This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially. 253 llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap. 254 255 Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M. 256 The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation (see @mallopt@ \VRef{}), \ie small objects managed by the program and large objects managed by the operating system. 257 Each free bucket of a specific size has the following two lists: 258 \begin{itemize} 259 \item 260 A free stack used solely by the KT heap-owner, so push/pop operations do not require locking. 261 Because the free objects form a stack, hot storage is reused first. 262 \item 263 For ownership, a shared away-stack for KTs to return storage allocated by other KTs, so push/pop operations require locking. 264 When the free stack is empty, the entire away stack is removed and becomes the head of the corresponding free stack. 265 \end{itemize} 266 267 Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$. 268 First, the allocation is divided into small (@sbrk@) or large (@mmap@). 269 For small allocations, $S$ is quantized into a bucket size. 270 Quantizing is performed with a binary search over the ordered bucket array. 271 An optional optimization is $O(1)$ fast lookup for sizes < 64K from a 64K array of type @char@, where each element holds the index of the corresponding bucket. 272 (Type @char@ restricts the number of bucket sizes to 256.) 273 For $S$ > 64K, the binary search is used.
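A minimal C sketch of this quantization step (illustrative only; the bucket values are abbreviated and the names are hypothetical):
\begin{lstlisting}
#include <stddef.h>

// 91 bucket sizes from 16 bytes to 4M (values abbreviated here).
static const unsigned int bucketSizes[91] = { 16, 32, 48, 64, /* ... */ 4 * 1024 * 1024 };
// lookup[s] holds the bucket index for request size s; char limits buckets to 256.
static unsigned char lookup[64 * 1024];

static unsigned int size2bucket( size_t s ) {
	if ( s < sizeof(lookup) ) return lookup[s];   // O(1) fast path for sizes < 64K
	unsigned int lo = 0, hi = 90;                 // binary search of the ordered array
	while ( lo < hi ) {
		unsigned int mid = (lo + hi) / 2;
		if ( bucketSizes[mid] < s ) lo = mid + 1; else hi = mid;
	}
	return lo;                                    // index of smallest bucket >= s
}
\end{lstlisting}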
274 Then, the allocation storage is obtained from the following locations (in order), with increasing latency. 275 \begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt] 276 \item 277 bucket's free stack, 278 \item 279 bucket's away stack, 280 \item 281 heap's local pool, 282 \item 283 global pool, 284 \item 285 operating system (@sbrk@). 286 \end{enumerate} 138 287 139 288 \begin{algorithm} 140 \caption{Dynamic object allocation of size S}\label{alg:heapObjectAlloc} 289 \caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc} 141 290 \begin{algorithmic}[1] 142 291 \State $\textit{O} \gets \text{NULL}$ 143 292 \If {$S < \textit{mmap-threshold}$} 144 \State $\textit{B} \gets (\text{smallest free-bucket} \geq S)$ 293 \State $\textit{B} \gets \text{smallest free-bucket} \geq S$ 145 294 \If {$\textit{B's free-list is empty}$} 146 295 \If {$\textit{B's away-list is empty}$} 147 296 \If {$\textit{heap's allocation buffer} < S$} 148 \State $\text{get allocation buffer using system call sbrk()}$ 297 \State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$ 149 298 \EndIf 150 299 \State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$ … … 164 313 \end{algorithm} 165 314 315 Algorithm~\ref{alg:heapObjectFree} shows the de-allocation (free) outline for an object at address $A$. 316 317 \begin{algorithm}[h] 318 \caption{Dynamic object free at address $A$}\label{alg:heapObjectFree} 319 %\begin{algorithmic}[1] 320 %\State write this algorithm 321 %\end{algorithmic} 322 \end{algorithm} 323
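For comparison, a hypothetical C sketch of the free fastpath implied by the bucket description above (the types and helpers @Header@, @headerOf@, @pushFreeStack@, and @pushAwayStack@ are illustrative names, not llheap's actual internals):
\begin{lstlisting}
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

struct Heap;  struct Bucket;
struct Header { struct Heap * heap; struct Bucket * bucket; size_t size; bool mmapped; };
extern __thread struct Heap * myHeap;                   // this KT's heap
extern struct Header * headerOf( void * );              // per-object bookkeeping header
extern void pushFreeStack( struct Bucket *, void * );   // lock-free owner push
extern void pushAwayStack( struct Bucket *, void * );   // locked non-owner push

void heapFree( void * addr ) {
	struct Header * h = headerOf( addr );
	if ( h->mmapped ) { munmap( h, h->size ); return; }        // large object: return to OS
	if ( h->heap == myHeap ) pushFreeStack( h->bucket, addr ); // owner KT: no locking
	else pushAwayStack( h->bucket, addr );                     // another KT's heap
}
\end{lstlisting}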
166 324 167 325 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 168 326 169 327 \section{Added Features and Methods} 170 To improve the uHeap allocator (FIX ME: cite uHeap) interface and make it more user friendly, we added a few more routines to the C allocator. Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator. 328 To improve the llheap allocator (FIX ME: cite llheap) interface and make it more user friendly, we added a few more routines to the C allocator. 329 Also, we built a \CFA (FIX ME: cite cforall) interface on top of the C interface to increase the usability of the allocator. 171 330 172 331 \subsection{C Interface} 173 We added a few more features and routines to the allocator's C interface that can make the allocator more usable to programmers. These features give the programmer more control over dynamic memory allocation. 332 We added a few more features and routines to the allocator's C interface that can make the allocator more usable to programmers. 333 These features give the programmer more control over dynamic memory allocation. 174 334 175 335 \subsection{Out of Memory} … … 183 343 184 344 \subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}} 185 @aalloc@ is an extension of @malloc@. It allows the programmer to allocate a dynamic array of objects without calculating the total size of the array explicitly. 345 @aalloc@ is an extension of @malloc@. 346 It allows the programmer to allocate a dynamic array of objects without calculating the total size of the array explicitly. 347 The only alternative to this routine in the other allocators is @calloc@, but @calloc@ also fills the dynamic memory with 0, which makes it slower for a programmer who only wants to dynamically allocate an array of objects without filling it with 0. 186 348 \paragraph{Usage} 187 349 @aalloc@ takes two parameters. … … 193 355 @elemSize@: size of the object in the array. 194 356 \end{itemize} 195 It returns the address of a dynamic object allocated on the heap that can contain @dim@ objects of size @elemSize@. On failure, it returns a @NULL@ pointer. 357 It returns the address of a dynamic object allocated on the heap that can contain @dim@ objects of size @elemSize@. 358 On failure, it returns a @NULL@ pointer. 196 359 197 360 \subsection{\lstinline{void * resize( void * oaddr, size_t size )}} 198 @resize@ is an extension of @realloc@. It allows the programmer to reuse a currently allocated dynamic object with a new size requirement. Its alternative in the other allocators is @realloc@, but @realloc@ also copies the data in the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data from the old object in the new object. 361 @resize@ is an extension of @realloc@. 362 It allows the programmer to reuse a currently allocated dynamic object with a new size requirement. 363 Its alternative in the other allocators is @realloc@, but @realloc@ also copies the data in the old object to the new object, which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data from the old object in the new object. 199 364 \paragraph{Usage} 200 365 @resize@ takes two parameters. … … 206 371 @size@: the new size requirement to which the old object needs to be resized. 207 372 \end{itemize} 208 It returns an object of the given size, but it does not preserve the data in the old object. On failure, it returns a @NULL@ pointer. 373 It returns an object of the given size, but it does not preserve the data in the old object. 374 On failure, it returns a @NULL@ pointer. 209 375 210 376 \subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}} 211 This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize). In addition to resizing an old object, it can also realign the old object to a new alignment requirement. 377 This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize). 378 In addition to resizing an old object, it can also realign the old object to a new alignment requirement. 379 \paragraph{Usage} 380 This resize takes three parameters. 381 It takes an additional parameter, nalign, as compared to the above resize (FIX ME: cite above resize). 212 382 213 383 \begin{itemize} … … 221 389 @size@: the new size requirement to which the old object needs to be resized. 222 390 \end{itemize} 223 It returns an object with the size and alignment given in the parameters. On failure, it returns a @NULL@ pointer. 391 It returns an object with the size and alignment given in the parameters. 392 On failure, it returns a @NULL@ pointer.
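A short hypothetical usage sketch of these routines (shown in \CFA, where the two @resize@ overloads can coexist; in plain C, the aligned variant would need a distinct name):
\begin{lstlisting}
struct S { int x, y; };
struct S * a = aalloc( 100, sizeof(struct S) );  // array of 100 objects, not zero filled
a = resize( a, 200 * sizeof(struct S) );         // reuse/grow storage, data not preserved
a = resize( a, 64, 200 * sizeof(struct S) );     // additionally realign to a 64-byte boundary
free( a );
\end{lstlisting}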
224 393 225 394 \subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}} 226 395 @amemalign@ is a hybrid of @memalign@ and @aalloc@. 396 It allows the programmer to allocate an aligned dynamic array of objects without calculating the total size of the array explicitly. 227 398 \paragraph{Usage} 228 399 @amemalign@ takes three parameters. … … 236 407 @elemSize@: size of the object in the array. 237 408 \end{itemize} 238 It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@. The returned dynamic array is aligned to the given alignment. On failure, it returns a @NULL@ pointer. 409 It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@. 410 The returned dynamic array is aligned to the given alignment. 411 On failure, it returns a @NULL@ pointer. 239 412 240 413 \subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}} 241 @cmemalign@ is a hybrid of @amemalign@ and @calloc@. It allows the programmer to allocate an aligned dynamic array of objects that is 0 filled. The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then fill it with 0 explicitly. This routine provides both features of aligning and 0 filling, implicitly. 414 @cmemalign@ is a hybrid of @amemalign@ and @calloc@. 415 It allows the programmer to allocate an aligned dynamic array of objects that is 0 filled. 416 The current way to do this in other allocators is to allocate an aligned object with @memalign@ and then fill it with 0 explicitly. 417 This routine provides both features of aligning and 0 filling, implicitly. 242 418 \paragraph{Usage} 243 419 @cmemalign@ takes three parameters. … … 251 427 @elemSize@: size of the object in the array. 252 428 \end{itemize} 253 It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@. The returned dynamic array is aligned to the given alignment and is 0 filled. On failure, it returns a @NULL@ pointer. 429 It returns a dynamic array of objects that has the capacity to contain @dim@ objects of size @elemSize@. 430 The returned dynamic array is aligned to the given alignment and is 0 filled. 431 On failure, it returns a @NULL@ pointer. 254 432 255 433 \subsection{\lstinline{size_t malloc_alignment( void * addr )}} 256 @malloc_alignment@ returns the alignment of a currently allocated dynamic object. It assists the programmer in memory management and personal bookkeeping. It helps the programmer in verifying the alignment of a dynamic object, especially in a scenario similar to producer-consumer, where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was allocated with the required alignment. 434 @malloc_alignment@ returns the alignment of a currently allocated dynamic object. 435 It assists the programmer in memory management and personal bookkeeping.
436 It helps the programmer verify the alignment of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was allocated with the required alignment. 257 437 \paragraph{Usage} 258 438 @malloc_alignment@ takes one parameter. … … 262 442 @addr@: the address of the currently allocated dynamic object. 263 443 \end{itemize} 264 @malloc_alignment@ returns the alignment of the given dynamic object. On failure, it return the value of default alignment of the uHeap allocator. 444 @malloc_alignment@ returns the alignment of the given dynamic object. 445 On failure, it returns the default alignment of the llheap allocator. 265 446 266 447 \subsection{\lstinline{bool malloc_zero_fill( void * addr )}} 267 @malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verifying the zero filled property of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was zero filled at the time of allocation. 448 @malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero-filled at the time of allocation. 449 It aids the programmer in memory management and personal bookkeeping. 450 It helps the programmer verify the zero-fill property of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was zero-filled at the time of allocation. 268 451 \paragraph{Usage} 269 452 @malloc_zero_fill@ takes one parameter. … … 273 456 @addr@: the address of the currently allocated dynamic object. 274 457 \end{itemize} 275 @malloc_zero_fill@ returns true if the dynamic object was initially zero filled and return false otherwise. On failure, it returns false. 458 @malloc_zero_fill@ returns true if the dynamic object was initially zero-filled and false otherwise. 459 On failure, it returns false. 276 460 277 461 \subsection{\lstinline{size_t malloc_size( void * addr )}} 278 @malloc_size@ returns the allocation size of a currently allocated dynamic object. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verofying the alignment of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required size. Its current alternate in the other allocators is @malloc_usable_size@. But, @malloc_size@ is different from @malloc_usable_size@ as @malloc_usabe_size@ returns the total data capacity of dynamic object including the extra space at the end of the dynamic object. On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object. This size is updated when an object is realloced, resized, or passed through a similar allocator routine. 462 @malloc_size@ returns the allocation size of a currently allocated dynamic object. 463 It aids the programmer in memory management and personal bookkeeping.
464 It helps the programmer verify the size of a dynamic object, especially in a producer-consumer scenario where a producer allocates a dynamic object and the consumer needs to ensure that the dynamic object was allocated with the required size. 465 Its closest alternative in the other allocators is @malloc_usable_size@. 466 However, @malloc_size@ is different from @malloc_usable_size@, as @malloc_usable_size@ returns the total data capacity of a dynamic object, including the extra space at the end of the dynamic object. 467 On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object. 468 This size is updated when an object is reallocated, resized, or passed through a similar allocator routine. 279 469 \paragraph{Usage} 280 470 @malloc_size@ takes one parameter. … … 284 474 @addr@: the address of the currently allocated dynamic object. 285 475 \end{itemize} 286 @malloc_size@ returns the allocation size of the given dynamic object. On failure, it return zero. 476 @malloc_size@ returns the allocation size of the given dynamic object. 477 On failure, it returns zero. 287 478 288 479 \subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}} 289 This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@). In addition to reallocating an old object and preserving the data in old object, it can also realign the old object to a new alignment requirement. 480 This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@). 481 In addition to reallocating an old object and preserving its data, it can also realign the old object to a new alignment requirement. 290 \paragraph{Usage} 291 This @realloc@ takes three parameters. It takes an additional parameter of nalign as compared to the default @realloc@. 482 \paragraph{Usage} 483 This @realloc@ takes three parameters. 484 It takes an additional @nalign@ parameter compared to the default @realloc@. 292 485 293 486 \begin{itemize} … … 299 492 @size@: the new size to which the old object needs to be resized. 300 493 \end{itemize} 301 It returns an object with the size and alignment given in the parameters that preserves the data in the old object. On failure, it returns a @NULL@ pointer. 494 It returns an object with the given size and alignment that preserves the data in the old object. 495 On failure, it returns a @NULL@ pointer. 302 496 303 497 \subsection{\CFA Malloc Interface} 304 We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not in our standalone uHeap allocator as these routines use some features that are only provided by \CFA and not by C. It makes the allocator even more usable to the programmers. 498 We added some routines to the @malloc@ interface of \CFA. 499 These routines can only be used in \CFA and not in our stand-alone llheap allocator, as they use some features that are only provided by \CFA and not by C. 500 These routines make the allocator even more usable for programmers. 501 \CFA gives the allocator the liberty to know the return type of an allocation call.
502 So, in these added routines, we removed the object-size parameter, as the allocator can calculate the size of the object from the return type. 306 503 307 504 \subsection{\lstinline{T * malloc( void )}} 308 This malloc is a simplified polymorphic form of defualt malloc (FIX ME: cite malloc). It does not take any parameter as compared to default malloc that takes one parameter. 505 This @malloc@ is a simplified polymorphic form of the default @malloc@ (FIX ME: cite malloc). 506 It takes no parameters, compared to the default @malloc@ that takes one parameter. 309 507 \paragraph{Usage} 310 508 This @malloc@ takes no parameters. 311 It returns a dynamic object of the size of type @T@. On failure, it returns a @NULL@ pointer. 509 It returns a dynamic object of the size of type @T@. 510 On failure, it returns a @NULL@ pointer. 312 511 313 512 \subsection{\lstinline{T * aalloc( size_t dim )}} 314 This aalloc is a simplified polymorphic form of above aalloc (FIX ME: cite aalloc). It takes one parameter as compared to the above aalloc that takes two parameters. 513 This @aalloc@ is a simplified polymorphic form of the above @aalloc@ (FIX ME: cite aalloc). 514 It takes one parameter, compared to the above @aalloc@ that takes two parameters. 315 515 \paragraph{Usage} 316 516 @aalloc@ takes one parameter. … … 320 520 @dim@: required number of objects in the array. 321 521 \end{itemize} 322 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer. 522 It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@. 523 On failure, it returns a @NULL@ pointer. 323 524 324 525 \subsection{\lstinline{T * calloc( size_t dim )}} 325 This calloc is a simplified polymorphic form of defualt calloc (FIX ME: cite calloc). It takes one parameter as compared to the default calloc that takes two parameters. 526 This @calloc@ is a simplified polymorphic form of the default @calloc@ (FIX ME: cite calloc). 527 It takes one parameter, compared to the default @calloc@ that takes two parameters. 326 528 \paragraph{Usage} 327 529 This @calloc@ takes one parameter. 328 530 329 531 \begin{itemize} … … 331 533 @dim@: required number of objects in the array. 332 534 \end{itemize} 333 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer. 535 It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@. 536 On failure, it returns a @NULL@ pointer. 334 537 335 538 \subsection{\lstinline{T * resize( T * ptr, size_t size )}} 336 This resize is a simplified polymorphic form of above resize (FIX ME: cite resize with alignment). It takes two parameters as compared to the above resize that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type. 539 This @resize@ is a simplified polymorphic form of the above @resize@ (FIX ME: cite resize with alignment). 540 It takes two parameters, compared to the above @resize@ that takes three parameters.
541 It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type. 337 542 \paragraph{Usage} 338 543 This @resize@ takes two parameters. … … 344 549 @size@: the required size of the new object. 345 550 \end{itemize} 346 It returns a dynamic object of the size given in paramters. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer. 551 It returns a dynamic object of the size given in the parameters. 552 The returned object is aligned to the alignment of type @T@. 553 On failure, it returns a @NULL@ pointer. 347 554 348 555 \subsection{\lstinline{T * realloc( T * ptr, size_t size )}} 349 This @realloc@ is a simplified polymorphic form of defualt @realloc@ (FIX ME: cite @realloc@ with align). It takes two parameters as compared to the above @realloc@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type. 556 This @realloc@ is a simplified polymorphic form of the default @realloc@ (FIX ME: cite @realloc@ with align). 557 It takes two parameters, compared to the above @realloc@ that takes three parameters. 558 It frees the programmer from explicitly mentioning the alignment of the allocation, as \CFA gives the allocator the liberty to get the alignment from the return type. 350 559 \paragraph{Usage} 351 560 This @realloc@ takes two parameters. … … 357 566 @size@: the required size of the new object. 358 567 \end{itemize} 359 It returns a dynamic object of the size given in paramters that preserves the data in the given object. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer. 568 It returns a dynamic object of the size given in the parameters that preserves the data in the given object. 569 The returned object is aligned to the alignment of type @T@. 570 On failure, it returns a @NULL@ pointer. 360 571 361 572 \subsection{\lstinline{T * memalign( size_t align )}} 362 This memalign is a simplified polymorphic form of defualt memalign (FIX ME: cite memalign). It takes one parameters as compared to the default memalign that takes two parameters. 573 This @memalign@ is a simplified polymorphic form of the default @memalign@ (FIX ME: cite memalign). 574 It takes one parameter, compared to the default @memalign@ that takes two parameters. 363 575 \paragraph{Usage} 364 576 @memalign@ takes one parameter. … … 368 580 @align@: the required alignment of the dynamic object. 369 581 \end{itemize} 370 It returns a dynamic object of the size of type @T@ that is aligned to given parameter align. On failure, it returns a @NULL@ pointer. 582 It returns a dynamic object of the size of type @T@ that is aligned to the given parameter @align@. 583 On failure, it returns a @NULL@ pointer. 371 584 372 585 \subsection{\lstinline{T * amemalign( size_t align, size_t dim )}} 373 This amemalign is a simplified polymorphic form of above amemalign (FIX ME: cite amemalign). It takes two parameter as compared to the above amemalign that takes three parameters. 586 This @amemalign@ is a simplified polymorphic form of the above @amemalign@ (FIX ME: cite amemalign). 587 It takes two parameters, compared to the above @amemalign@ that takes three parameters. 374 588 \paragraph{Usage} 375 589 @amemalign@ takes two parameters. … … 381 595 @dim@: required number of objects in the array.
382 596 \end{itemize} 383 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. The returned object is aligned to the given parameter align. On failure, it returns a @NULL@ pointer. 597 It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@. 598 The returned object is aligned to the given parameter @align@. 599 On failure, it returns a @NULL@ pointer. 384 600 385 601 \subsection{\lstinline{T * cmemalign( size_t align, size_t dim )}} 386 This cmemalign is a simplified polymorphic form of above cmemalign (FIX ME: cite cmemalign). It takes two parameter as compared to the above cmemalign that takes three parameters. 602 This @cmemalign@ is a simplified polymorphic form of the above @cmemalign@ (FIX ME: cite cmemalign). 603 It takes two parameters, compared to the above @cmemalign@ that takes three parameters. 387 604 \paragraph{Usage} 388 605 @cmemalign@ takes two parameters. … … 394 611 @dim@: required number of objects in the array. 395 612 \end{itemize} 396 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. The returned object is aligned to the given parameter align and is zero filled. On failure, it returns a @NULL@ pointer. 613 It returns a dynamic object that has the capacity to contain @dim@ objects, each of the size of type @T@. 614 The returned object is aligned to the given parameter @align@ and is zero-filled. 615 On failure, it returns a @NULL@ pointer. 397 616 398 617 \subsection{\lstinline{T * aligned_alloc( size_t align )}} 399 This @aligned_alloc@ is a simplified polymorphic form of defualt @aligned_alloc@ (FIX ME: cite @aligned_alloc@). It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters. 618 This @aligned_alloc@ is a simplified polymorphic form of the default @aligned_alloc@ (FIX ME: cite @aligned_alloc@). 619 It takes one parameter, compared to the default @aligned_alloc@ that takes two parameters. 400 620 \paragraph{Usage} 401 621 This @aligned_alloc@ takes one parameter. … … 405 625 @align@: required alignment of the dynamic object. 406 626 \end{itemize} 407 It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. On failure, it returns a @NULL@ pointer. 627 It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. 628 On failure, it returns a @NULL@ pointer. 408 629 409 630 \subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}} 410 This @posix_memalign@ is a simplified polymorphic form of defualt @posix_memalign@ (FIX ME: cite @posix_memalign@). It takes two parameters as compared to the default @posix_memalign@ that takes three parameters. 631 This @posix_memalign@ is a simplified polymorphic form of the default @posix_memalign@ (FIX ME: cite @posix_memalign@). 632 It takes two parameters, compared to the default @posix_memalign@ that takes three parameters. 411 633 \paragraph{Usage} 412 634 This @posix_memalign@ takes two parameters. … … 419 641 \end{itemize} 420 642 421 It stores address of the dynamic object of the size of type @T@ in given parameter ptr. This object is aligned to the given parameter. On failure, it returns a @NULL@ pointer. 643 It stores the address of a dynamic object of the size of type @T@ in the given parameter @ptr@. 644 This object is aligned to the given parameter. 645 On failure, it returns a nonzero error code.
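A short sketch of these polymorphic routines, where the return type drives the inferred size; @Point@ is an invented example type:
\begin{lstlisting}
#include <stdlib.h>                     // free
struct Point { double x, y; };
Point * p = malloc();                   // one Point; size inferred from the return type
Point * v = aalloc( 10 );               // array of 10 Points
Point * c = cmemalign( 64, 10 );        // 64-byte-aligned, zero-filled array of 10 Points
Point * q;
int rc = posix_memalign( &q, 64 );      // stores a 64-byte-aligned Point in q; 0 on success
free( p ); free( v ); free( c );
if ( rc == 0 ) free( q );
\end{lstlisting}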
422 646 423 647 \subsection{\lstinline{T * valloc( void )}} 424 This @valloc@ is a simplified polymorphic form of defualt @valloc@ (FIX ME: cite @valloc@). It takes no parameters as compared to the default @valloc@ that takes one parameter. 648 This @valloc@ is a simplified polymorphic form of the default @valloc@ (FIX ME: cite @valloc@). 649 It takes no parameters, compared to the default @valloc@ that takes one parameter. 425 650 \paragraph{Usage} 426 651 @valloc@ takes no parameters. 427 It returns a dynamic object of the size of type @T@ that is aligned to the page size. On failure, it returns a @NULL@ pointer. 652 It returns a dynamic object of the size of type @T@ that is aligned to the page size. 653 On failure, it returns a @NULL@ pointer. 428 654 429 655 \subsection{\lstinline{T * pvalloc( void )}} 430 656 \paragraph{Usage} 431 657 @pvalloc@ takes no parameters. 432 It returns a dynamic object of the size that is calcutaed by rouding the size of type @T@. The returned object is also aligned to the page size. On failure, it returns a @NULL@ pointer. 658 It returns a dynamic object of a size calculated by rounding the size of type @T@ up to a multiple of the page size. 659 The returned object is also aligned to the page size. 660 On failure, it returns a @NULL@ pointer. 433 661 434 662 \subsection{Alloc Interface} 435 In addition to improve allocator interface both for \CFA and our standalone allocator uHeap in C. We also added a new alloc interface in \CFA that increases usability of dynamic memory allocation. 663 In addition to improving the allocator interface both for \CFA and for our stand-alone allocator llheap in C, 664 we also added a new alloc interface in \CFA that increases the usability of dynamic memory allocation. 436 665 This interface helps programmers in three major ways. 437 666 438 667 \begin{itemize} 439 668 \item 440 Routine Name: alloc interfce frees programmers from remmebring different routine names for different kind of dynamic allocations. 441 \item 442 Parametre Positions: alloc interface frees programmers from remembering parameter postions in call to routines. 443 \item 444 Object Size: alloc interface does not require programmer to mention the object size as \CFA allows allocator to determince the object size from returned type of alloc call. 445 \end{itemize} 446 447 Alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to the programmers. The new interfece has just one routine name alloc that can be used to perform a wide range of dynamic allocations. The parameters use backtick functions to provide a similar-to named parameters feature for our alloc interface so that programmers do not have to remember parameter positions in alloc call except the position of dimension (dim) parameter. 448 449 \subsection{Routine: \lstinline{T * alloc( ... )}} 450 Call to alloc wihout any parameter returns one object of size of type @T@ allocated dynamically. 451 Only the dimension (dim) parameter for array allocation has the fixed position in the alloc routine. If programmer wants to allocate an array of objects that the required number of members in the array has to be given as the first parameter to the alloc routine. 452 alocc routine accepts six kinds of arguments. Using different combinations of tha parameters, different kind of allocations can be performed.
Any combincation of parameters can be used together except @`realloc@ and @`resize@ that should not be used simultanously in one call to routine as it creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @`resize@ and @`realloc@ are used in a call to alloc then the latter one will take effect or unexpected resulted might be produced. 669 Routine Name: the alloc interface frees programmers from remembering different routine names for different kinds of dynamic allocations. 670 \item 671 Parameter Positions: the alloc interface frees programmers from remembering parameter positions in calls to routines. 672 \item 673 Object Size: the alloc interface does not require the programmer to mention the object size, as \CFA allows the allocator to determine the object size from the return type of the alloc call. 674 \end{itemize} 675 676 The alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic-memory-allocation interface to programmers. 677 The new interface has just one routine name, @alloc@, that can be used to perform a wide range of dynamic allocations. 678 The parameters use backtick functions to provide a named-parameter-like feature for our alloc interface, so that programmers do not have to remember parameter positions in an alloc call, except for the position of the dimension (@dim@) parameter. 679 680 \subsection{Routine: \lstinline{T * alloc( ... )}} 682 A call to @alloc@ without any parameters returns one dynamically allocated object of the size of type @T@. 683 Only the dimension (@dim@) parameter for array allocation has a fixed position in the @alloc@ routine. 684 If the programmer wants to allocate an array of objects, the required number of members in the array has to be given as the first parameter to the @alloc@ routine. 685 The @alloc@ routine accepts six kinds of arguments. 686 Using different combinations of these parameters, different kinds of allocations can be performed. 687 Any combination of parameters can be used together except @`realloc@ and @`resize@, which should not be used simultaneously in one call to the routine, as this creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. 688 If both @`resize@ and @`realloc@ are used in a call to @alloc@, then the latter one takes effect or unexpected results might be produced. 453 689 454 690 \paragraph{Dim} 455 This is the only parameter in the alloc routine that has a fixed-position and it is also the only parameter that does not use a backtick function. It has to be passed at the first position to alloc call in-case of an array allocation of objects of type @T@. 691 This is the only parameter in the @alloc@ routine that has a fixed position, and it is also the only parameter that does not use a backtick function. 692 It has to be passed in the first position of the @alloc@ call in case of an array allocation of objects of type @T@. 456 693 It represents the required number of members in the array allocation as in \CFA's @aalloc@ (FIX ME: cite aalloc). 457 694 This parameter should be of type @size_t@. 458 695 … … 461 698 462 699 \paragraph{Align} 463 This parameter is position-free and uses a backtick routine align (@`align@). The parameter passed with @`align@ should be of type @size_t@.
If the alignment parameter is not a power of two or is less than the default alignment of the allocator (that can be found out using routine libAlign in \CFA) then the passed alignment parameter will be rejected and the default alignment will be used. 700 This parameter is position-free and uses a backtick routine align (@`align@). 701 The parameter passed with @`align@ should be of type @size_t@. 702 If the alignment parameter is not a power of two or is less than the default alignment of the allocator (which can be found using the routine @libAlign@ in \CFA), then the passed alignment parameter is rejected and the default alignment is used. 465 703 466 704 Example: @int * b = alloc( 5 , 64`align )@ 467 This call will return a dynamic array of five integers. It will align the allocated object to 64. 705 This call returns a dynamic array of five integers. 706 The allocated object is aligned to 64. 468 707 469 708 \paragraph{Fill} 470 This parameter is position-free and uses a backtick routine fill (@`fill@). In case of @realloc@, only the extra space after copying the data in the old object will be filled with given parameter. 709 This parameter is position-free and uses a backtick routine fill (@`fill@). 710 In case of @realloc@, only the extra space after copying the data in the old object is filled with the given parameter. 471 711 Three types of parameters can be passed using @`fill@. 472 712 … … 476 716 Object of return type: An object of the return type can be passed with @`fill@ to fill the whole dynamic allocation with the given object, repeated until the end of the required allocation. 477 717 \item 478 Dynamic object of returned type: A dynamic object of type of returned type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object. In this case, the allocated memory is not filled recursively till the end of allocation. The filling happen untill the end object passed to @`fill@ or the end of requested allocation reaches. 718 Dynamic object of return type: A dynamic object of the return type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object. 719 In this case, the allocated memory is not filled repeatedly until the end of the allocation. 720 The filling happens until the end of the object passed to @`fill@ or the end of the requested allocation is reached. 479 721 \end{itemize} 480 722 481 723 Example: @int * b = alloc( 5 , 'a'`fill )@ 482 This call will return a dynamic array of five integers. It will fill the allocated object with character 'a' recursively till the end of requested allocation size. 724 This call returns a dynamic array of five integers. 725 The allocated object is filled with the character 'a', repeated to the end of the requested allocation size. 483 726 484 727 Example: @int * b = alloc( 5 , 4`fill )@ 485 This call will return a dynamic array of five integers. It will fill the allocated object with integer 4 recursively till the end of requested allocation size. 728 This call returns a dynamic array of five integers. 729 The allocated object is filled with the integer 4, repeated to the end of the requested allocation size. 486 730 487 731 Example: @int * b = alloc( 5 , a`fill )@ where @a@ is an int pointer 488 This call will return a dynamic array of five integers.
734 It copies the data in @a@ to the returned object, without repetition, until the end of @a@ or the end of the newly allocated object is reached. 490 736 \paragraph{Resize} 491 This parameter is position-free and uses a backtick routine resize (@`resize@). It represents the old dynamic object (oaddr) that the programmer wants to 737 This parameter is position-free and uses a backtick routine resize (@`resize@). 738 It represents the old dynamic object (@oaddr@) that the programmer wants to 492 739 \begin{itemize} 493 740 \item … … 498 745 fill with something. 499 746 \end{itemize} 500 The data in old dynamic object will not be preserved in the new object. The type of object passed to @`resize@ and the returned type of alloc call can be different. 747 The data in the old dynamic object is not preserved in the new object. 748 The type of the object passed to @`resize@ and the return type of the @alloc@ call can be different. 501 749 502 750 Example: @int * b = alloc( 5 , a`resize )@ … … 504 752 505 753 Example: @int * b = alloc( 5 , a`resize , 32`align )@ 506 This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. 754 This call resizes object @a@ into a dynamic array that can contain 5 integers. 755 The returned object is also aligned to 32. 508 757 Example: @int * b = alloc( 5 , a`resize , 32`align , 2`fill )@ 509 This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32 and will be filled with 2. 758 This call resizes object @a@ into a dynamic array that can contain 5 integers. 759 The returned object is also aligned to 32 and is filled with 2. 510 760 511 761 \paragraph{Realloc} 512 This parameter is position-free and uses a backtick routine @realloc@ (@`realloc@). It represents the old dynamic object (oaddr) that the programmer wants to 762 This parameter is position-free and uses a backtick routine @realloc@ (@`realloc@). 763 It represents the old dynamic object (@oaddr@) that the programmer wants to 513 764 \begin{itemize} 514 765 \item … … 519 770 fill with something. 520 771 \end{itemize} 521 The data in old dynamic object will be preserved in the new object. The type of object passed to @`realloc@ and the returned type of alloc call cannot be different. 772 The data in the old dynamic object is preserved in the new object. 773 The type of the object passed to @`realloc@ and the return type of the @alloc@ call cannot be different. 522 774 523 775 Example: @int * b = alloc( 5 , a`realloc )@ … … 525 777 526 778 Example: @int * b = alloc( 5 , a`realloc , 32`align )@ 527 This call will realloc object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. 779 This call reallocates object @a@ into a dynamic array that can contain 5 integers. 780 The returned object is also aligned to 32. 529 781 Example: @int * b = alloc( 5 , a`realloc , 32`align , 2`fill )@ 530 This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. The extra space after copying data of a to the returned object will be filled with 2. 783 This call reallocates object @a@ into a dynamic array that can contain 5 integers. 784 The returned object is also aligned to 32. 785 The extra space after copying the data of @a@ to the returned object is filled with 2. -
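Pulling these pieces together, the following hypothetical \CFA sketch exercises the fixed-position @dim@ parameter and several position-free backtick parameters in one place:
\begin{lstlisting}
int * a = alloc();                          // one int
int * b = alloc( 5 );                       // array of 5 ints; dim is the only fixed-position parameter
int * c = alloc( 5, 64`align, 0`fill );     // 64-byte aligned and zero-filled
int * d = alloc( 10, c`realloc, 2`fill );   // grow c to 10 ints; the extra space is filled with 2
free( a ); free( b ); free( d );            // c was consumed by `realloc
\end{lstlisting}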
doc/theses/mubeen_zulfiqar_MMath/background.tex
r30d91e4 r365c8dcb 54 54 The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object. 55 55 An object may be preceded by padding to ensure proper alignment. 56 Some algorithms quantize allocation requests into distinct sizes resulting in additional spacing after objects less than the quantized value. 56 Some algorithms quantize allocation requests into distinct sizes, called \newterm{buckets}, resulting in additional spacing after objects less than the quantized value. 57 (Note, the buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management area, where each bucket points to the freed objects of that size.) 58 When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists. 59 A free object also contains management data, \eg size, chaining, etc. -
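As an illustration of the bucket search just described, the lookup can be a binary search over the ascending size array; the following C sketch uses invented bucket sizes:
\begin{lstlisting}
#include <stddef.h>
static const size_t bucket_sizes[] = { 16, 32, 48, 64, 96, 128, 192, 256 };
enum { NBUCKETS = sizeof( bucket_sizes ) / sizeof( bucket_sizes[0] ) };

// Return the index of the smallest bucket that fits request, or -1 if too large.
static int find_bucket( size_t request ) {
	int lo = 0, hi = NBUCKETS - 1, fit = -1;
	while ( lo <= hi ) {
		int mid = lo + ( hi - lo ) / 2;
		if ( bucket_sizes[mid] >= request ) { fit = mid; hi = mid - 1; }  // candidate; search left
		else lo = mid + 1;                                                // too small; search right
	}
	return fit;  // bucket_sizes[fit] - request is the wasted spacing
}
\end{lstlisting}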
doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig
r30d91e4 r365c8dcb 8 8 -2 9 9 1200 2 10 6 4200 1575 4500 172511 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 1650 20 20 4275 1650 4295 165012 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4350 1650 20 20 4350 1650 4370 165013 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4425 1650 20 20 4425 1650 4445 165010 6 2850 2100 3150 2250 11 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 2925 2175 20 20 2925 2175 2945 2175 12 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 2175 20 20 3000 2175 3020 2175 13 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3075 2175 20 20 3075 2175 3095 2175 14 14 -6 15 6 2850 2475 3150 2850 15 6 4050 2100 4350 2250 16 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 2175 20 20 4125 2175 4145 2175 17 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 2175 20 20 4200 2175 4220 2175 18 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 2175 20 20 4275 2175 4295 2175 19 -6 20 6 4650 2100 4950 2250 21 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4725 2175 20 20 4725 2175 4745 2175 22 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4800 2175 20 20 4800 2175 4820 2175 23 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4875 2175 20 20 4875 2175 4895 2175 24 -6 25 6 3450 2100 3750 2250 26 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3525 2175 20 20 3525 2175 3545 2175 27 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3600 2175 20 20 3600 2175 3620 2175 28 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3675 2175 20 20 3675 2175 3695 2175 29 -6 30 6 3300 2175 3600 2550 16 31 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 17 32 1 1 1.00 45.00 90.00 18 2925 2475 2925 270033 3375 2175 3375 2400 19 34 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 20 2850 2700 3150 2700 3150 2850 2850 2850 2850 270035 3300 2400 3600 2400 3600 2550 3300 2550 3300 2400 21 36 -6 22 6 4350 2475 4650 2850 37 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 38 3150 1800 3150 2250 39 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 40 2850 1800 2850 2250 41 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 42 4650 1800 4650 2250 43 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 44 4950 1800 4950 2250 45 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 46 4500 1725 4500 2250 47 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 48 5100 1725 5100 2250 49 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 50 3450 1800 3450 2250 51 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 52 3750 1800 3750 2250 53 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 54 3300 1725 3300 2250 55 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 56 3900 1725 3900 2250 57 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 58 5250 1800 5250 2250 59 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 60 5400 1800 5400 2250 61 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 62 5550 1800 5550 2250 63 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 64 5700 1800 5700 2250 65 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 66 5850 1800 5850 2250 67 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 68 2700 1725 2700 2250 23 69 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 24 70 1 1 1.00 45.00 90.00 25 4425 2475 4425 2700 26 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 27 4350 2700 4650 2700 4650 2850 4350 2850 4350 2700 28 -6 29 6 3600 2475 3825 3150 71 3375 1275 3375 1575 30 72 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 31 73 1 1 1.00 45.00 90.00 32 3675 2475 3675 2700 33 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 34 3600 2700 3825 2700 3825 2850 3600 2850 3600 2700 35 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 36 3600 3000 3825 3000 3825 3150 3600 3150 3600 3000 74 2700 1275 2700 1575 75 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2 76 1 1 1.00 45.00 90.00 77 2775 1275 2775 1575 37 78 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 38 79 1 1 1.00 45.00 90.00 39 3675 2775 3675 3000 40 -6 41 6 4875 3600 5175 3750 42 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 3675 20 20 4950 
3675 4970 3675 43 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 3675 20 20 5025 3675 5045 3675 44 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 3675 20 20 5100 3675 5120 3675 45 -6 46 6 4875 2325 5175 2475 47 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 2400 20 20 4950 2400 4970 2400 48 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 2400 20 20 5025 2400 5045 2400 49 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 2400 20 20 5100 2400 5120 2400 50 -6 51 6 5625 2325 5925 2475 52 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5700 2400 20 20 5700 2400 5720 2400 53 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5775 2400 20 20 5775 2400 5795 2400 54 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5850 2400 20 20 5850 2400 5870 2400 55 -6 56 6 5625 3600 5925 3750 57 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5700 3675 20 20 5700 3675 5720 3675 58 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5775 3675 20 20 5775 3675 5795 3675 59 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5850 3675 20 20 5850 3675 5870 3675 60 -6 61 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 62 2400 2100 2400 2550 63 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 64 2550 2100 2550 2550 65 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 66 2700 2100 2700 2550 67 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 68 2850 2100 2850 2550 69 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 70 3000 2100 3000 2550 71 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 72 3600 2100 3600 2550 73 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 74 3900 2100 3900 2550 75 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 76 4050 2100 4050 2550 77 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 78 4200 2100 4200 2550 79 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 80 4350 2100 4350 2550 81 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 82 4500 2100 4500 2550 83 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 84 3300 1500 3300 1800 85 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 86 3600 1500 3600 1800 87 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 88 3900 1500 3900 1800 89 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 90 3000 1500 4800 1500 4800 1800 3000 1800 3000 1500 80 5175 1275 5175 1575 81 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 82 1 1 1.00 45.00 90.00 83 5625 1275 5625 1575 84 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 85 1 1 1.00 45.00 90.00 86 3750 1275 3750 1575 91 87 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2 92 88 1 1 1.00 45.00 90.00 93 3225 1650 2625 2100 89 3825 1275 3825 1575 90 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 91 2700 1950 6000 1950 92 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 93 2700 2100 6000 2100 94 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 95 2700 1800 6000 1800 6000 2250 2700 2250 2700 1800 94 96 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 95 97 1 1 1.00 45.00 90.00 96 3150 1650 2550 210098 2775 2175 2775 2400 97 99 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 98 100 1 1 1.00 45.00 90.00 99 3450 1650 4050 2100 100 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2 101 1 1 1.00 45.00 90.00 102 3375 1650 3975 2100 103 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 104 2100 2100 2100 2550 105 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 106 1950 2250 3150 2250 107 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 108 3450 2250 4650 2250 101 2775 2475 2775 2700 109 102 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 110 1950 2100 3150 2100 3150 2550 1950 2550 1950 2100103 2700 2700 2850 2700 2850 2850 2700 2850 2700 2700 111 104 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 112 3450 2100 4650 2100 4650 2550 3450 2550 3450 2100 113 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 114 2250 2100 2250 2550 115 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 116 3750 2100 3750 2550 105 2700 2400 2850 2400 2850 2550 2700 2550 2700 2400 117 106 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 118 107 1 1 
1.00 45.00 90.00 119 2025 2475 2025 2700 120 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 121 1 1 1.00 45.00 90.00 122 2025 2775 2025 3000 108 4575 2175 4575 2400 123 109 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 124 1950 3000 2100 3000 2100 3150 1950 3150 1950 3000 125 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 126 1950 2700 2100 2700 2100 2850 1950 2850 1950 2700 110 4500 2400 5025 2400 5025 2550 4500 2550 4500 2400 127 111 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 128 112 1 1 1.00 45.00 90.00 129 1950 3750 2700 3750 2700 3525113 3600 3375 4350 3375 4350 3150 130 114 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 131 1950 3525 3150 3525 3150 3900 1950 3900 1950 3525 132 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 133 1 1 1.00 45.00 90.00 134 3450 3750 4200 3750 4200 3525 135 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 136 3450 3525 4650 3525 4650 3900 3450 3900 3450 3525 137 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 138 1 1 1.00 45.00 90.00 139 3150 4650 4200 4650 4200 4275 140 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 141 3150 4275 4650 4275 4650 4875 3150 4875 3150 4275 142 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 143 1950 2400 3150 2400 144 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 145 3450 2400 4650 2400 146 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 147 5400 2100 5400 3900 148 4 2 0 50 -1 0 11 0.0000 2 120 300 1875 2250 lock\001 149 4 1 0 50 -1 0 12 0.0000 2 135 1935 3900 1425 N kernel-thread buckets\001 150 4 1 0 50 -1 0 12 0.0000 2 195 810 4425 2025 heap$_2$\001 151 4 1 0 50 -1 0 12 0.0000 2 195 810 2175 2025 heap$_1$\001 152 4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2400 size\001 153 4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2550 free\001 154 4 1 0 50 -1 0 12 0.0000 2 180 825 2550 3450 local pool\001 155 4 0 0 50 -1 0 12 0.0000 2 135 360 3525 3700 lock\001 156 4 0 0 50 -1 0 12 0.0000 2 135 360 3225 4450 lock\001 157 4 2 0 50 -1 0 12 0.0000 2 135 600 1875 3000 free list\001 158 4 1 0 50 -1 0 12 0.0000 2 180 825 4050 3450 local pool\001 159 4 1 0 50 -1 0 12 0.0000 2 180 1455 3900 4200 global pool (sbrk)\001 160 4 0 0 50 -1 0 12 0.0000 2 135 360 2025 3700 lock\001 161 4 1 0 50 -1 0 12 0.0000 2 180 720 6450 3150 free pool\001 162 4 1 0 50 -1 0 12 0.0000 2 180 390 6450 2925 heap\001 115 3600 3150 5100 3150 5100 3525 3600 3525 3600 3150 116 4 2 0 50 -1 0 11 0.0000 2 135 300 2625 1950 lock\001 117 4 1 0 50 -1 0 11 0.0000 2 150 1155 3000 1725 N$\\times$S$_1$\001 118 4 1 0 50 -1 0 11 0.0000 2 150 1155 3600 1725 N$\\times$S$_2$\001 119 4 1 0 50 -1 0 12 0.0000 2 180 390 4425 1500 heap\001 120 4 2 0 50 -1 0 12 0.0000 2 135 1140 2550 1425 kernel threads\001 121 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2100 size\001 122 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2250 free\001 123 4 2 0 50 -1 0 12 0.0000 2 135 600 2625 2700 free list\001 124 4 0 0 50 -1 0 12 0.0000 2 135 360 3675 3325 lock\001 125 4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 3075 global pool (sbrk)\001 126 4 1 0 50 -1 0 11 0.0000 2 150 1110 4800 1725 N$\\times$S$_t$\001 -
doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig
r30d91e4 r365c8dcb 8 8 -2 9 9 1200 2 10 6 2850 2100 3150 2250 11 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 2925 2175 20 20 2925 2175 2945 2175 12 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 2175 20 20 3000 2175 3020 2175 13 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3075 2175 20 20 3075 2175 3095 2175 14 -6 15 6 4050 2100 4350 2250 16 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 2175 20 20 4125 2175 4145 2175 17 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 2175 20 20 4200 2175 4220 2175 18 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 2175 20 20 4275 2175 4295 2175 19 -6 20 6 4650 2100 4950 2250 21 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4725 2175 20 20 4725 2175 4745 2175 22 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4800 2175 20 20 4800 2175 4820 2175 23 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4875 2175 20 20 4875 2175 4895 2175 24 -6 25 6 3450 2100 3750 2250 26 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3525 2175 20 20 3525 2175 3545 2175 27 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3600 2175 20 20 3600 2175 3620 2175 28 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3675 2175 20 20 3675 2175 3695 2175 29 -6 30 6 3300 2175 3600 2550 10 6 2850 2475 3150 2850 31 11 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 32 12 1 1 1.00 45.00 90.00 33 3375 2175 3375 240013 2925 2475 2925 2700 34 14 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 35 3300 2400 3600 2400 3600 2550 3300 2550 3300 2400 15 2850 2700 3150 2700 3150 2850 2850 2850 2850 2700 16 -6 17 6 4350 2475 4650 2850 18 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 19 1 1 1.00 45.00 90.00 20 4425 2475 4425 2700 21 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 22 4350 2700 4650 2700 4650 2850 4350 2850 4350 2700 23 -6 24 6 3600 2475 3825 3150 25 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 26 1 1 1.00 45.00 90.00 27 3675 2475 3675 2700 28 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 29 3600 2700 3825 2700 3825 2850 3600 2850 3600 2700 30 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 31 3600 3000 3825 3000 3825 3150 3600 3150 3600 3000 32 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 33 1 1 1.00 45.00 90.00 34 3675 2775 3675 3000 35 -6 36 6 1950 3525 3150 3900 37 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 38 1 1 1.00 45.00 90.00 39 1950 3750 2700 3750 2700 3525 40 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 41 1950 3525 3150 3525 3150 3900 1950 3900 1950 3525 42 4 0 0 50 -1 0 12 0.0000 2 135 360 2025 3700 lock\001 43 -6 44 6 4050 1575 4350 1725 45 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 1650 20 20 4125 1650 4145 1650 46 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 1650 20 20 4200 1650 4220 1650 47 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 1650 20 20 4275 1650 4295 1650 48 -6 49 6 4875 2325 6150 3750 50 6 4875 2325 5175 2475 51 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 2400 20 20 4950 2400 4970 2400 52 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 2400 20 20 5025 2400 5045 2400 53 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 2400 20 20 5100 2400 5120 2400 54 -6 55 6 4875 3600 5175 3750 56 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 3675 20 20 4950 3675 4970 3675 57 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 3675 20 20 5025 3675 5045 3675 58 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 3675 20 20 5100 3675 5120 3675 59 -6 60 4 1 0 50 -1 0 12 0.0000 2 180 900 5700 3150 local pools\001 61 4 1 0 50 -1 0 12 0.0000 2 180 465 5700 2925 heaps\001 62 -6 63 6 3600 4050 5100 4650 64 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 65 1 1 1.00 45.00 90.00 66 3600 4500 4350 4500 4350 4275 67 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 68 3600 4275 5100 4275 5100 4650 3600 4650 3600 4275 69 4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 4200 global pool (sbrk)\001 70 4 0 0 50 -1 0 12 0.0000 2 135 360 3675 4450 
lock\001 36 71 -6 37 72 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 38 3150 1800 3150 225073 2400 2100 2400 2550 39 74 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 40 2 850 1800 2850 225075 2550 2100 2550 2550 41 76 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 42 4650 1800 4650 225077 2700 2100 2700 2550 43 78 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 44 4950 1800 4950 2250 45 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 46 4500 1725 4500 2250 47 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 48 5100 1725 5100 2250 79 2850 2100 2850 2550 49 80 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 50 3 450 1800 3450 225081 3000 2100 3000 2550 51 82 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 52 3750 1800 3750 2250 53 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 54 3300 1725 3300 2250 55 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 56 3900 1725 3900 2250 83 3600 2100 3600 2550 57 84 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 58 5250 1800 5250 225085 3900 2100 3900 2550 59 86 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 60 5400 1800 5400 225087 4050 2100 4050 2550 61 88 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 62 5550 1800 5550 225089 4200 2100 4200 2550 63 90 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 64 5700 1800 5700 225091 4350 2100 4350 2550 65 92 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 66 5850 1800 5850 2250 67 2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 68 2700 1725 2700 2250 93 4500 2100 4500 2550 94 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 95 3300 1500 3300 1800 96 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 97 3600 1500 3600 1800 98 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 99 3000 1500 4800 1500 4800 1800 3000 1800 3000 1500 69 100 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 70 101 1 1 1.00 45.00 90.00 71 3 375 1275 3375 1575102 3150 1650 2550 2100 72 103 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 73 104 1 1 1.00 45.00 90.00 74 2700 1275 2700 1575 75 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2 76 1 1 1.00 45.00 90.00 77 2775 1275 2775 1575 105 3450 1650 4050 2100 106 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 107 2100 2100 2100 2550 108 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 109 1950 2250 3150 2250 110 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 111 3450 2250 4650 2250 112 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 113 1950 2100 3150 2100 3150 2550 1950 2550 1950 2100 114 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 115 3450 2100 4650 2100 4650 2550 3450 2550 3450 2100 116 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 117 2250 2100 2250 2550 118 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 119 3750 2100 3750 2550 78 120 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 79 121 1 1 1.00 45.00 90.00 80 5175 1275 5175 1575122 2025 2475 2025 2700 81 123 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 82 124 1 1 1.00 45.00 90.00 83 5625 1275 5625 1575 84 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 85 1 1 1.00 45.00 90.00 86 3750 1275 3750 1575 87 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2 88 1 1 1.00 45.00 90.00 89 3825 1275 3825 1575 90 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 91 2700 1950 6000 1950 92 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 93 2700 2100 6000 2100 125 2025 2775 2025 3000 94 126 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 95 2700 1800 6000 1800 6000 2250 2700 2250 2700 1800 96 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 97 1 1 1.00 45.00 90.00 98 2775 2175 2775 2400 99 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 100 1 1 1.00 45.00 90.00 101 2775 2475 2775 2700 127 1950 3000 2100 3000 2100 3150 1950 3150 1950 3000 102 128 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 103 2700 2700 2850 2700 2850 2850 2700 2850 2700 2700 104 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 105 2700 2400 2850 2400 2850 2550 2700 2550 2700 2400 106 2 1 0 1 
0 7 50 -1 -1 0.000 0 0 -1 1 0 2 107 1 1 1.00 45.00 90.00 108 4575 2175 4575 2400 109 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 110 4500 2400 5025 2400 5025 2550 4500 2550 4500 2400 129 1950 2700 2100 2700 2100 2850 1950 2850 1950 2700 111 130 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 112 131 1 1 1.00 45.00 90.00 113 3 600 3525 4650 3525 4650 3150132 3450 3750 4200 3750 4200 3525 114 133 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 115 3600 3150 5100 3150 5100 3750 3600 3750 3600 3150 116 4 2 0 50 -1 0 11 0.0000 2 120 300 2625 1950 lock\001 117 4 1 0 50 -1 0 10 0.0000 2 150 1155 3000 1725 N$\\times$S$_1$\001 118 4 1 0 50 -1 0 10 0.0000 2 150 1155 3600 1725 N$\\times$S$_2$\001 119 4 1 0 50 -1 0 12 0.0000 2 180 390 4425 1500 heap\001 120 4 2 0 50 -1 0 12 0.0000 2 135 1140 2550 1425 kernel threads\001 121 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2100 size\001 122 4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2250 free\001 123 4 2 0 50 -1 0 12 0.0000 2 135 600 2625 2700 free list\001 124 4 0 0 50 -1 0 12 0.0000 2 135 360 3675 3325 lock\001 125 4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 3075 global pool (sbrk)\001 126 4 1 0 50 -1 0 10 0.0000 2 150 1110 4800 1725 N$\\times$S$_t$\001 134 3450 3525 4650 3525 4650 3900 3450 3900 3450 3525 135 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 136 1950 2400 3150 2400 137 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 138 3450 2400 4650 2400 139 4 2 0 50 -1 0 11 0.0000 2 135 300 1875 2250 lock\001 140 4 1 0 50 -1 0 12 0.0000 2 180 1245 3900 1425 H heap buckets\001 141 4 1 0 50 -1 0 12 0.0000 2 180 810 4425 2025 heap$_2$\001 142 4 1 0 50 -1 0 12 0.0000 2 180 810 2175 2025 heap$_1$\001 143 4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2400 size\001 144 4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2550 free\001 145 4 1 0 50 -1 0 12 0.0000 2 180 825 2550 3450 local pool\001 146 4 0 0 50 -1 0 12 0.0000 2 135 360 3525 3700 lock\001 147 4 2 0 50 -1 0 12 0.0000 2 135 600 1875 3000 free list\001 148 4 1 0 50 -1 0 12 0.0000 2 180 825 4050 3450 local pool\001 -
doc/theses/mubeen_zulfiqar_MMath/performance.tex
r30d91e4 r365c8dcb 1 1 \chapter{Performance} 2 \label{c:Performance} 2 3 3 4 \noindent -
doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib
r30d91e4 r365c8dcb 124 124 } 125 125 126 @misc{nedmalloc, 127 author = {Niall Douglas}, 128 title = {nedmalloc version 1.06 Beta}, 129 month = jan, 130 year = 2010, 131 note = {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}}, 126 @misc{ptmalloc2, 127 author = {Wolfram Gloger}, 128 title = {ptmalloc version 2}, 129 month = jun, 130 year = 2006, 131 note = {\href{http://www.malloc.de/malloc/ptmalloc2-current.tar.gz}{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}}, 132 } 133 134 @misc{GNUallocAPI, 135 author = {GNU}, 136 title = {Summary of malloc-Related Functions}, 137 year = 2020, 138 note = {\href{https://www.gnu.org/software/libc/manual/html\_node/Summary-of-Malloc.html}{https://www.gnu.org/\-software/\-libc/\-manual/\-html\_node/\-Summary-of-Malloc.html}}, 139 } 140 141 @misc{SeriallyReusable, 142 author = {IBM}, 143 title = {Serially reusable programs}, 144 month = mar, 145 year = 2021, 146 note = {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-serially-reusable-programs}}, 147 } 148 149 @misc{librseq, 150 author = {Mathieu Desnoyers}, 151 title = {Library for Restartable Sequences}, 152 month = mar, 153 year = 2022, 154 note = {\href{https://github.com/compudj/librseq}{https://github.com/compudj/librseq}}, 132 155 } 133 156 -
doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex
r30d91e4 r365c8dcb 95 95 % Use the "hyperref" package 96 96 % N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE 97 \usepackage[pagebackref=true]{hyperref} % with basic options 97 \usepackage{url} 98 \usepackage[dvips,pagebackref=true]{hyperref} % with basic options 98 99 %\usepackage[pdftex,pagebackref=true]{hyperref} 99 100 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing. … … 114 115 citecolor=blue, % color of links to bibliography 115 116 filecolor=magenta, % color of file links 116 urlcolor=blue % color of external links 117 urlcolor=blue, % color of external links 118 breaklinks=true 117 119 } 118 120 \ifthenelse{\boolean{PrintVersion}}{ % for improved print quality, change some hyperref options … … 123 125 urlcolor=black 124 126 }}{} % end of ifthenelse (no else) 127 %\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,pagebackref=true,breaklinks=true,colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref} 128 \usepackage{breakurl} 129 \urlstyle{sf} 125 130 126 131 %\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package -
doc/theses/thierry_delisle_PhD/thesis/Makefile
r30d91e4 r365c8dcb 30 30 base \ 31 31 base_avg \ 32 cache-share \ 33 cache-noshare \ 32 34 empty \ 33 35 emptybit \ -
doc/theses/thierry_delisle_PhD/thesis/local.bib
r30d91e4 r365c8dcb 685 685 note = "[Online; accessed 9-February-2021]" 686 686 } 687 688 @misc{wiki:rcu, 689 author = "{Wikipedia contributors}", 690 title = "Read-copy-update --- {W}ikipedia{,} The Free Encyclopedia", 691 year = "2022", 692 url = "https://en.wikipedia.org/wiki/Read-copy-update", 693 note = "[Online; accessed 12-April-2022]" 694 } 695 696 @misc{wiki:rwlock, 697 author = "{Wikipedia contributors}", 698 title = "Readers-writer lock --- {W}ikipedia{,} The Free Encyclopedia", 699 year = "2021", 700 url = "https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock", 701 note = "[Online; accessed 12-April-2022]" 702 }
doc/theses/thierry_delisle_PhD/thesis/text/core.tex
r30d91e4 r365c8dcb 32 32 \item Faster than other schedulers that have equal or better fairness. 33 33 \end{itemize} 34 35 \subsection{Fairness Goals} 36 For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing. 37 38 \paragraph{True starvation freedom} is more easily defined: As long as at least one \proc continues to dequeue \ats, all ready \ats should be able to run eventually. 39 In any running system, \procs can stop dequeuing \ats if they start running a \at that simply never parks. 40 Traditional work-stealing schedulers do not have starvation freedom in these cases. 41 This requirement raises the question, what about preemption? 42 Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing. 43 44 \paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow. 45 For interactive applications that need to run at 60, 90, or 120 frames per second, \ats having to wait for several milliseconds to run are effectively starved. 46 Therefore load balancing should be done at a faster pace, one that can detect starvation at the microsecond scale. 47 With that said, this is a much fuzzier requirement, since it depends on the number of \procs, the number of \ats, and the general load of the system. 34 48 35 49 \subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal} … … 223 237 Therefore this unprotected read of the timestamp and average satisfies the limited correctness that is required. 224 239 \begin{figure} 240 \centering 241 \input{cache-share.pstex_t} 242 \caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A very simple CPU with 4 \glspl{hthrd}. L1 and L2 are private to each \gls{hthrd} but the L3 is shared across the entire CPU.} 243 \label{fig:cache-share} 244 \end{figure} 245 246 \begin{figure} 247 \centering 248 \input{cache-noshare.pstex_t} 249 \caption[CPU design with narrower L3 sharing]{CPU design with narrower L3 sharing \smallskip\newline A different CPU design, still with 4 \glspl{hthrd}. L1 and L2 are still private to each \gls{hthrd}, but the L3 is shared by only a subset of the \glspl{hthrd}, so there are two distinct L3 instances.} 250 \label{fig:cache-noshare} 251 \end{figure} 252 253 With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on some machines. 254 The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}. 255 For example, on machines where the motherboard holds multiple CPUs, cache misses can be satisfied from a cache that belongs to the CPU that missed, the \emph{local} CPU, or by a different CPU, a \emph{remote} one. 256 Cache misses that are satisfied by a remote CPU have notably higher latency than those satisfied by the local CPU. 257 However, this is not specific to systems with multiple CPUs. 258 Depending on the cache structure, cache misses can have different latencies on the same CPU. 259 The AMD EPYC 7662 CPU described in Chapter~\ref{microbench} is an example of this. 260 Figure~\ref{fig:cache-share} and Figure~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference. 261 In Figure~\ref{fig:cache-share}, all cache instances are either private to a \gls{hthrd} or shared across the entire system, which means the latency of cache misses is likely fairly consistent.
262 By comparison, in Figure~\ref{fig:cache-noshare}, misses in the L2 cache can be satisfied by a hit in either instance of the L3. 263 However, the memory access latency to the remote L3 instance is notably higher than the memory access latency to the local L3. 264 The impact of these different designs on this algorithm is that scheduling scales very well on architectures similar to Figure~\ref{fig:cache-share}, but has notably worse scaling with many narrower L3 instances. 265 This is simply because, as the number of L3 instances grows, so too does the chance that random helping incurs significant latency. 266 The solution is to make the scheduler aware of the cache topology. 267 225 268 \subsection{Per CPU Sharding} 269 Building a scheduler that is aware of cache topology poses two main challenges: discovering the cache topology and matching \procs to cache instances. 270 Sadly, there is no standard portable way to discover cache topology in C. 271 Therefore, while this is a significant portability challenge, it is outside the scope of this thesis to design a cross-platform cache-discovery mechanism. 272 The rest of this work assumes the cache topology is discovered from Linux's \texttt{/sys/devices/system/cpu} directory; a sketch of such a query appears at the end of this section. 273 This leaves the challenge of matching \procs to cache instances, or more precisely, identifying which subqueues of the ready queue are local to which cache instance. 274 Once this matching is available, the helping algorithm can be changed to add a bias so that \procs more often help subqueues local to the same cache instance 275 \footnote{Note that, like other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}. 276 277 The obvious approach to mapping cache instances to subqueues is to statically tie subqueues to CPUs. 278 Instead of having each subqueue local to a specific \proc, the system is initialized with one subqueue per \gls{hthrd} up front. 279 Then \procs dequeue and enqueue by first asking which CPU id they are local to, in order to identify which subqueues are the local ones. 280 \Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}. 281 282 This approach solves the performance problems on systems with topologies similar to Figure~\ref{fig:cache-noshare}. 283 However, it actually causes some subtle fairness problems on some systems, specifically systems with few \procs and many \glspl{hthrd}. 284 In these cases, the large number of subqueues and the bias against subqueues tied to different cache instances make it very unlikely that any given subqueue is picked. 285 To make things worse, the small number of \procs means that few helping attempts are made. 286 This combination of few attempts and low chances means a \at stranded on a subqueue that is not actively dequeued from may wait a very long time before it gets randomly helped. 287 On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can actually take multiple seconds for a \at to get dequeued from a remote queue. 288 Therefore, a more dynamic matching of subqueues to cache instances is needed. 226 289 227 290 \subsection{Topological Work Stealing} 228 229 291 The approach used in the \CFA scheduler is to keep per-\proc subqueues, but have an explicit data structure track which cache instance each subqueue is tied to.
292 This requires some finesse because reading this data structure must lead to fewer cache misses than not having the data structure in the first place. 293 A key element, however, is that, like the timestamps for helping, reading the cache-instance mapping only needs to give the correct result \emph{often enough}. 294 Therefore, the algorithm can be built as follows: before enqueuing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance. 295 Since subqueues are tied to \procs, each \proc can then update the cache instance mapped to the local subqueue(s). 296 To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes. 297
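Both subsections above depend on querying which cache instance a given CPU belongs to. As a concrete illustration of the Linux sysfs-based discovery mentioned earlier, the following is a minimal sketch, not the actual \CFA runtime code: the helper name @cache_instance_of@ is hypothetical, error handling is elided, and @index3@ is assumed to denote the L3 cache, which can vary by platform.

\begin{lstlisting}
// Hypothetical sketch: read which L3 instance a CPU belongs to, using
// Linux's cacheinfo sysfs layout (index3 assumed to be the L3 level).
#include <stdio.h>

static int cache_instance_of( int cpu ) {
	char path[128];
	snprintf( path, sizeof(path),
	          "/sys/devices/system/cpu/cpu%d/cache/index3/id", cpu );
	FILE * f = fopen( path, "r" );
	if ( ! f ) return -1;          // topology unknown; error handling elided
	int id = -1;
	fscanf( f, "%d", &id );        // the instance id of this cache level
	fclose( f );
	return id;
}
\end{lstlisting}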
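The update path itself can then be sketched as follows, assuming a @cpu_to_cache@ table precomputed at startup (e.g., filled using the sysfs query above); the structure and helper names are illustrative only, not the actual \CFA runtime symbols.

\begin{lstlisting}
#define _GNU_SOURCE
#include <sched.h>                    // sched_getcpu

extern int cpu_to_cache[];            // CPU id -> cache-instance id, filled at startup

struct subqueue {
	volatile int cache_instance;  // instance this subqueue was last used from
	// ... intrusive list of ready threads ...
};

static void update_mapping( struct subqueue * q ) {
	// The CPU id may be stale by the time it is used; as noted above,
	// it only needs to be correct often enough.
	int inst = cpu_to_cache[ sched_getcpu() ];
	if ( q->cache_instance != inst ) {   // write only on change, to avoid
		q->cache_instance = inst;    // needless cache-line invalidation
	}
}
\end{lstlisting}
-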
doc/theses/thierry_delisle_PhD/thesis/text/io.tex
r30d91e4 r365c8dcb 406 406 Finally, the last important part of the \io subsystem is its interface. There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. The new \io subsystem can replace the C runtime's API or extend it. In the latter case, the interface can go from very similar to vastly different. The following sections discuss some useful options using @read@ as an example. The standard Linux interface for C is: 407 407 408 @ssize_t read(int fd, void *buf, size_t count);@ . 408 @ssize_t read(int fd, void *buf, size_t count);@ 409 409 410 410 \subsection{Replacement} 411 Replacing the C \glsxtrshort{api} 411 Replacing the C \glsxtrshort{api} is the more intrusive and draconian approach. 412 The goal is to convince the compiler and linker to redirect any calls to @read@ to the \CFA implementation instead of glibc's. 413 This has the advantage of potentially working transparently and supporting existing binaries without needing recompilation. 414 It also offers a, presumably, well-known and familiar API that C programmers can simply continue to work with. 415 However, this approach also entails a plethora of subtle technical challenges, which generally boil down to making a perfect replacement. 416 If the \CFA interface replaces only \emph{some} of the calls to glibc, then this can easily lead to esoteric concurrency bugs. 417 Since the gcc ecosystem does not offer a scheme for such perfect replacement, this approach was rejected as being laudable but infeasible. 412 418 413 419 \subsection{Synchronous Extension} 420 Another interface option is to offer an interface that is different in name only. For example: 421 422 @ssize_t cfa_read(int fd, void *buf, size_t count);@ 423 424 \noindent This is much more feasible and remains familiar to C programmers. 425 It comes with the caveat that any code attempting to use it must be recompiled, which can be a big problem considering the number of existing legacy C binaries. 426 However, it has the advantage of implementation simplicity; a sketch of such an implementation over \texttt{io\_uring} appears at the end of this section. 414 427 415 428 \subsection{Asynchronous Extension} 429 It is important to mention that there is a certain irony to using only synchronous, therefore blocking, interfaces for a feature often referred to as ``non-blocking'' \io. 430 A fairly traditional way of doing this is to use futures\cit{wikipedia futures}. 431 A simple way of doing so is as follows: 432 433 @future(ssize_t) read(int fd, void *buf, size_t count);@ 434 435 \noindent Note that this approach is not necessarily the most idiomatic usage of futures. 436 The definition of read above ``returns'' the read content through an output parameter, which cannot be synchronized on. 437 A more classical asynchronous API could look more like: 438 439 @future([ssize_t, void *]) read(int fd, size_t count);@ 440 441 \noindent However, this interface immediately introduces memory-lifetime challenges since the call must effectively allocate a buffer to be returned. 442 Because of the performance implications of this, the first approach is considered preferable as it is more familiar to C programmers. 416 443 417 444 \subsection{Interface directly to \lstinline{io_uring}} 445 Finally, another relevant option is to directly expose the underlying \texttt{io\_uring} interface.
For example: 446 447 @array(SQE, want) cfa_io_allocate(int want);@ 448 449 @void cfa_io_submit( const array(SQE, have) & );@ 450 451 \noindent This offers more flexibility to users wanting to fully use all of the \texttt{io\_uring} features. 452 However, it is not the most user-friendly option. 453 It obviously imposes a strong dependency between user code and \texttt{io\_uring}, while at the same time restricting users to usages that are compatible with how \CFA internally uses \texttt{io\_uring}. 454 455
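To ground these options, the following is a minimal user-space sketch of how a synchronous @cfa_read@-style call can be backed by \texttt{io\_uring} via liburing. It illustrates the mechanics only and is not the actual \CFA runtime implementation, which parks the user-level thread instead of blocking the kernel thread.

\begin{lstlisting}
#include <liburing.h>

// Sketch: synchronous read over io_uring. A real runtime would park the
// user-level thread here rather than blocking in io_uring_wait_cqe.
ssize_t uring_read( struct io_uring * ring, int fd, void * buf, size_t count ) {
	struct io_uring_sqe * sqe = io_uring_get_sqe( ring );  // reserve an SQE
	if ( ! sqe ) return -1;                                // submission ring full
	io_uring_prep_read( sqe, fd, buf, count, -1 );         // -1: current file offset
	io_uring_submit( ring );                               // hand the SQE to the kernel
	struct io_uring_cqe * cqe;
	if ( io_uring_wait_cqe( ring, &cqe ) < 0 ) return -1;  // block for the completion
	ssize_t ret = cqe->res;                                // bytes read or -errno
	io_uring_cqe_seen( ring, cqe );                        // mark the CQE consumed
	return ret;
}
\end{lstlisting}
-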
doc/theses/thierry_delisle_PhD/thesis/text/practice.tex
r30d91e4 r365c8dcb 2 2 The scheduling algorithm described in Chapter~\ref{core} addresses scheduling in a stable state. 3 3 However, it does not address problems that occur when the system changes state. 4 Indeed the \CFA runtime, supports expanding and shrinking the number of KTHREAD\_place \todo{add kthrd to glossary}, both manually and, to some extentautomatically. 4 Indeed, the \CFA runtime supports expanding and shrinking the number of \procs, both manually and, to some extent, automatically. 5 5 This entails that the scheduling algorithm must support these transitions. 6 6 7 \section{Resizing} 7 More precisely, \CFA supports adding \procs using the RAII object @processor@. 8 These objects can be created at any time and can be destroyed at any time. 9 They are normally created as automatic stack variables, but this is not a requirement. 10 11 The consequence is that the scheduler and \io subsystems must support \procs coming in and out of existence. 12 13 \section{Manual Resizing} 14 The consequence of dynamically changing the number of \procs is that all internal arrays that are sized based on the number of \procs need to be \texttt{realloc}ed. 15 This also means that any references into these arrays, pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing because there is no guarantee the \proc causing the shrink had the highest index. Therefore, indexes need to be reassigned to preserve contiguous indexes.}. 16 17 There are no performance requirements, within reason, for resizing since this is usually considered as part of setup and teardown. 18 However, this operation has strict correctness requirements since shrinking and idle sleep can easily lead to deadlocks. 19 It should also avoid as much as possible any effect on performance when the number of \procs remains constant. 20 This latter requirement prohibits simple solutions, such as simply adding a global lock to these arrays. 21 22 \subsection{Read-Copy-Update} 23 One solution is to use the Read-Copy-Update\cite{wiki:rcu} pattern. 24 In this pattern, resizing is done by creating a copy of the internal data structures, updating the copy with the desired changes, and then attempting an Indiana Jones switch to replace the original with the copy. 25 This approach potentially has the advantage that it may not need any synchronization to do the switch. 26 The switch definitely implies a race where \procs could still use the previous, original, data structure after the copy was switched in. 27 The important question then becomes whether or not this race can be recovered from. 28 If the changes that arrived late can be transferred from the original to the copy, then this solution works. 29 30 For linked lists, dequeuing is somewhat of a problem. 31 Dequeuing from the original will not necessarily update the copy, which could lead to multiple \procs dequeuing the same \at. 32 Fixing this requires making the array contain pointers to subqueues rather than the subqueues themselves. 33 34 Another challenge is that the original must be kept until all \procs have witnessed the change. 35 This is a straightforward memory-reclamation challenge, but it does mean that every operation needs \emph{some} form of synchronization. 36 If each of these operations needs synchronization, then it is possible that a simpler solution achieves the same performance. 37 This is because, in addition to the classic challenge of memory reclamation, transferring the original data to the copy before reclaiming it poses additional challenges,
38 especially merging subqueues while having a minimal impact on fairness and locality. 39 40 \subsection{Readers-Writer Lock} 41 A simpler approach is to use a \newterm{Readers-Writer Lock}\cite{wiki:rwlock}, where resizing requires acquiring the lock as a writer, while simply enqueuing/dequeuing \ats requires acquiring the lock as a reader. 42 Using a Readers-Writer lock solves the problem of dynamically resizing and leaves the challenge of finding or building a lock with sufficiently good read-side performance. 43 Since this is not a very complex challenge, and an ad-hoc solution is perfectly acceptable, building a Readers-Writer lock was the path taken. 44 45 To maximize reader scalability, readers should not contend with each other when attempting to acquire and release the critical sections. 46 This effectively requires that each reader have its own piece of memory to mark as locked and unlocked. 47 Readers then acquire the lock by waiting for writers to finish the critical section and then acquiring their local spinlocks. 48 Writers acquire the global lock, so writers have mutual exclusion among themselves, and then acquire each of the local reader locks. 49 Acquiring all the local locks guarantees mutual exclusion between the readers and the writer, while the wait on the read side prevents readers from continuously starving the writer. 50 \todo{reference listings} 51 52 \begin{lstlisting} 53 void read_lock() { 54 // Step 1 : make sure no writer is in 55 while ( write_lock ) { Pause(); } 56 57 // May need fence here 58 59 // Step 2 : acquire our local lock 60 while ( atomic_xchg( tls.lock ) ) { 61 Pause(); 62 } 63 } 64 65 void read_unlock() { 66 tls.lock = false; 67 } 68 \end{lstlisting} 69 70 \begin{lstlisting} 71 void write_lock() { 72 // Step 1 : lock global lock 73 while ( atomic_xchg( write_lock ) ) { 74 Pause(); 75 } 76 77 // Step 2 : lock per-proc locks 78 for t in all_tls { 79 while ( atomic_xchg( t.lock ) ) { 80 Pause(); 81 } 82 } 83 } 84 85 void write_unlock() { 86 // Step 1 : release local locks 87 for t in all_tls { 88 t.lock = false; 89 } 90 91 // Step 2 : release global lock 92 write_lock = false; 93 } 94 \end{lstlisting} 8 95 9 96 \section{Idle-Sleep} 97 98 \subsection{Tracking Sleepers} 99 100 \subsection{Event FDs} 101 102 \subsection{Epoll} 103 104 \subsection{\texttt{io\_uring}} 105 106 \subsection{Reducing Latency}
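As a usage sketch of the lock above, with illustrative names rather than the actual \CFA runtime symbols, the hot-path queue operations take the lock as readers while resizing takes it as a writer:

\begin{lstlisting}
#include <stdlib.h>                          // realloc

// Hypothetical types/helpers: ready_queue, thread_desc, local_subqueue, push.
void enqueue( struct ready_queue * rq, struct thread_desc * t ) {
	read_lock();                         // fast path: touches only this proc's flag
	push( local_subqueue( rq ), t );
	read_unlock();
}

void resize( struct ready_queue * rq, unsigned nprocs ) {
	write_lock();                        // excludes all readers and other writers
	rq->queues = realloc( rq->queues, nprocs * sizeof( rq->queues[0] ) );
	// ... fix any pointers/indexes into the array here ...
	write_unlock();
}
\end{lstlisting}
-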
libcfa/src/containers/array.hfa
r30d91e4 r365c8dcb 1 #include <assert.h> 1 2 2 3 … … 34 35 35 36 static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, int i ) { 37 assert( i < N ); 36 38 return (Timmed &) a.strides[i]; 37 39 } 38 40 39 41 static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, unsigned int i ) { 42 assert( i < N ); 40 43 return (Timmed &) a.strides[i]; 41 44 } 42 45 43 46 static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, long int i ) { 47 assert( i < N ); 44 48 return (Timmed &) a.strides[i]; 45 49 } 46 50 47 51 static inline Timmed & ?[?]( arpk(N, S, Timmed, Tbase) & a, unsigned long int i ) { 52 assert( i < N ); 48 53 return (Timmed &) a.strides[i]; 49 54 }
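For illustration only, a speculative \CFA usage of these checked subscripts might look as follows; the @array( T, N )@ declaration syntax and the include path are assumptions based on how the array is used elsewhere in the thesis, not a confirmed API:

\begin{lstlisting}
#include <array.hfa>                  // include path illustrative

int main() {
	array( float, 5 ) a;          // the dimension 5 is tracked in the type
	for ( i; 5 ) a[i] = 0.0f;     // in-bounds accesses pass assert( i < N )
	// a[7] would trip assert( i < N ) at runtime
}
\end{lstlisting}
-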
src/AST/Convert.cpp
r30d91e4 r365c8dcb 951 951 } 952 952 953 const ast::Expr * visit( const ast::DimensionExpr * node ) override final { 954 auto expr = visitBaseExpr( node, new DimensionExpr( node->name ) ); 955 this->node = expr; 956 return nullptr; 957 } 958 953 959 const ast::Expr * visit( const ast::AsmExpr * node ) override final { 954 960 auto expr = visitBaseExpr( node, … … 2463 2469 2464 2470 virtual void visit( const DimensionExpr * old ) override final { 2465 // DimensionExpr gets desugared away in Validate. 2466 // As long as new-AST passes don't use it, this cheap-cheerful error 2467 // detection helps ensure that these occurrences have been compiled 2468 // away, as expected. To move the DimensionExpr boundary downstream 2469 // or move the new-AST translation boundary upstream, implement 2470 // DimensionExpr in the new AST and implement a conversion. 2471 (void) old; 2472 assert(false && "DimensionExpr should not be present at new-AST boundary"); 2471 this->node = visitBaseExpr( old, 2472 new ast::DimensionExpr( old->location, old->name ) 2473 ); 2473 2474 } 2474 2475 -
src/AST/Expr.hpp
r30d91e4 r365c8dcb 604 604 }; 605 605 606 class DimensionExpr final : public Expr { 607 public: 608 std::string name; 609 610 DimensionExpr( const CodeLocation & loc, std::string name ) 611 : Expr( loc ), name( name ) {} 612 613 const Expr * accept( Visitor & v ) const override { return v.visit( this ); } 614 private: 615 DimensionExpr * clone() const override { return new DimensionExpr{ *this }; } 616 MUTATE_FRIEND 617 }; 618 606 619 /// A GCC "asm constraint operand" used in an asm statement, e.g. `[output] "=f" (result)`. 607 620 /// https://gcc.gnu.org/onlinedocs/gcc-4.7.1/gcc/Machine-Constraints.html#Machine-Constraints -
src/AST/Fwd.hpp
r30d91e4 r365c8dcb 84 84 class CommaExpr; 85 85 class TypeExpr; 86 class DimensionExpr; 86 87 class AsmExpr; 87 88 class ImplicitCopyCtorExpr; -
src/AST/Pass.hpp
r30d91e4 r365c8dcb 184 184 const ast::Expr * visit( const ast::CommaExpr * ) override final; 185 185 const ast::Expr * visit( const ast::TypeExpr * ) override final; 186 const ast::Expr * visit( const ast::DimensionExpr * ) override final; 186 187 const ast::Expr * visit( const ast::AsmExpr * ) override final; 187 188 const ast::Expr * visit( const ast::ImplicitCopyCtorExpr * ) override final; -
src/AST/Pass.impl.hpp
r30d91e4 r365c8dcb 575 575 __pass::symtab::addId( core, 0, func ); 576 576 if ( __visit_children() ) { 577 // parameter declarations 577 maybe_accept( node, &FunctionDecl::type_params ); 578 maybe_accept( node, &FunctionDecl::assertions ); 578 579 maybe_accept( node, &FunctionDecl::params ); 579 580 maybe_accept( node, &FunctionDecl::returns ); 580 // type params and assertions 581 maybe_accept( node, &FunctionDecl::type_params ); 582 maybe_accept( node, &FunctionDecl::assertions ); 581 maybe_accept( node, &FunctionDecl::type ); 583 582 // First remember that we are now within a function. 584 583 ValueGuard< bool > oldInFunction( inFunction ); … … 1522 1521 1523 1522 //-------------------------------------------------------------------------- 1523 // DimensionExpr 1524 template< typename core_t > 1525 const ast::Expr * ast::Pass< core_t >::visit( const ast::DimensionExpr * node ) { 1526 VISIT_START( node ); 1527 1528 if ( __visit_children() ) { 1529 guard_symtab guard { *this }; 1530 maybe_accept( node, &DimensionExpr::result ); 1531 } 1532 1533 VISIT_END( Expr, node ); 1534 } 1535 1536 //-------------------------------------------------------------------------- 1524 1537 // AsmExpr 1525 1538 template< typename core_t > … … 1859 1872 1860 1873 if ( __visit_children() ) { 1861 // xxx - should PointerType visit/mutate dimension?1874 maybe_accept( node, &PointerType::dimension ); 1862 1875 maybe_accept( node, &PointerType::base ); 1863 1876 } -
src/AST/Pass.proto.hpp
r30d91e4 r365c8dcb 26 26 27 27 struct PureVisitor; 28 29 template<typename node_t> 30 node_t * deepCopy( const node_t * localRoot ); 28 31 29 32 namespace __pass { … … 396 399 static inline auto addStructFwd( core_t & core, int, const ast::StructDecl * decl ) -> decltype( core.symtab.addStruct( decl ), void() ) { 397 400 ast::StructDecl * fwd = new ast::StructDecl( decl->location, decl->name ); 398 fwd->params = decl->params; 401 for ( const auto & param : decl->params ) { 402 fwd->params.push_back( deepCopy( param.get() ) ); 403 } 399 404 core.symtab.addStruct( fwd ); 400 405 } … … 405 410 template<typename core_t> 406 411 static inline auto addUnionFwd( core_t & core, int, const ast::UnionDecl * decl ) -> decltype( core.symtab.addUnion( decl ), void() ) { 407 UnionDecl * fwd = new UnionDecl( decl->location, decl->name ); 408 fwd->params = decl->params; 412 ast::UnionDecl * fwd = new ast::UnionDecl( decl->location, decl->name ); 413 for ( const auto & param : decl->params ) { 414 fwd->params.push_back( deepCopy( param.get() ) ); 415 } 409 416 core.symtab.addUnion( fwd ); 410 417 } -
src/AST/Print.cpp
r30d91e4 r365c8dcb 1101 1101 } 1102 1102 1103 virtual const ast::Expr * visit( const ast::DimensionExpr * node ) override final { 1104 os << "Type-Sys Value: " << node->name; 1105 postprint( node ); 1106 1107 return node; 1108 } 1109 1103 1110 virtual const ast::Expr * visit( const ast::AsmExpr * node ) override final { 1104 1111 os << "Asm Expression:" << endl; -
src/AST/Visitor.hpp
r30d91e4 r365c8dcb 76 76 virtual const ast::Expr * visit( const ast::CommaExpr * ) = 0; 77 77 virtual const ast::Expr * visit( const ast::TypeExpr * ) = 0; 78 virtual const ast::Expr * visit( const ast::DimensionExpr * ) = 0; 78 79 virtual const ast::Expr * visit( const ast::AsmExpr * ) = 0; 79 80 virtual const ast::Expr * visit( const ast::ImplicitCopyCtorExpr * ) = 0; -
src/Common/CodeLocationTools.cpp
r30d91e4 r365c8dcb 147 147 macro(CommaExpr, Expr) \ 148 148 macro(TypeExpr, Expr) \ 149 macro(DimensionExpr, Expr) \ 149 150 macro(AsmExpr, Expr) \ 150 151 macro(ImplicitCopyCtorExpr, Expr) \ -
src/InitTweak/GenInit.cc
r30d91e4 r365c8dcb 402 402 retVal->location, "?{}", retVal, stmt->expr ); 403 403 assertf( ctorStmt, 404 "ReturnFixer: genCtorDtor returned n llptr: %s / %s",404 "ReturnFixer: genCtorDtor returned nullptr: %s / %s", 405 405 toString( retVal ).c_str(), 406 406 toString( stmt->expr ).c_str() ); 407 407 stmtsToAddBefore.push_back( ctorStmt ); 408 408 409 409 // Return the retVal object. … … 421 421 void genInit( ast::TranslationUnit & transUnit ) { 422 422 ast::Pass<HoistArrayDimension_NoResolve_New>::run( transUnit ); 423 ast::Pass<ReturnFixer_New>::run( transUnit ); 424 } 425 426 void fixReturnStatements( ast::TranslationUnit & transUnit ) { 423 427 ast::Pass<ReturnFixer_New>::run( transUnit ); 424 428 } -
src/InitTweak/GenInit.h
r30d91e4 r365c8dcb 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Fri Oct 22 16:08:00 202113 // Update Count : 612 // Last Modified On : Fri Mar 18 14:22:00 2022 13 // Update Count : 7 14 14 // 15 15 … … 31 31 /// Converts return statements into copy constructor calls on the hidden return variable 32 32 void fixReturnStatements( std::list< Declaration * > & translationUnit ); 33 void fixReturnStatements( ast::TranslationUnit & translationUnit ); 33 34 34 35 /// generates a single ctor/dtor statement using objDecl as the 'this' parameter and arg as the optional argument -
src/Validate/module.mk
r30d91e4 r365c8dcb 22 22 Validate/ForallPointerDecay.cpp \ 23 23 Validate/ForallPointerDecay.hpp \ 24 Validate/GenericParameter.cpp \ 25 Validate/GenericParameter.hpp \ 24 26 Validate/HandleAttributes.cc \ 25 27 Validate/HandleAttributes.h \ … … 28 30 Validate/LabelAddressFixer.cpp \ 29 31 Validate/LabelAddressFixer.hpp \ 32 Validate/ReturnCheck.cpp \ 33 Validate/ReturnCheck.hpp \ 30 34 Validate/FindSpecialDeclsNew.cpp \ 31 35 Validate/FindSpecialDecls.cc \ -
src/main.cc
r30d91e4 r365c8dcb 10 10 // Created On : Fri May 15 23:12:02 2015 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Fri Mar 11 10:39:00 202213 // Update Count : 67 112 // Last Modified On : Wed Apr 13 11:11:00 2022 13 // Update Count : 672 14 14 // 15 15 … … 75 75 #include "Tuples/Tuples.h" // for expandMemberTuples, expan... 76 76 #include "Validate/Autogen.hpp" // for autogenerateRoutines 77 #include "Validate/GenericParameter.hpp" // for fillGenericParameters, tr... 77 78 #include "Validate/FindSpecialDecls.h" // for findGlobalDecls 78 79 #include "Validate/ForallPointerDecay.hpp" // for decayForallPointers … … 80 81 #include "Validate/InitializerLength.hpp" // for setLengthFromInitializer 81 82 #include "Validate/LabelAddressFixer.hpp" // for fixLabelAddresses 83 #include "Validate/ReturnCheck.hpp" // for checkReturnStatements 82 84 #include "Virtual/ExpandCasts.h" // for expandCasts 83 85 … … 327 329 PASS( "Validate-A", SymTab::validate_A( translationUnit ) ); 328 330 PASS( "Validate-B", SymTab::validate_B( translationUnit ) ); 329 PASS( "Validate-C", SymTab::validate_C( translationUnit ) );330 331 331 332 CodeTools::fillLocations( translationUnit ); … … 341 342 342 343 forceFillCodeLocations( transUnit ); 344 345 // Check as early as possible. Can't happen before 346 // LinkReferenceToType, observed failing when attempted 347 // before eliminateTypedef 348 PASS( "Validate Generic Parameters", Validate::fillGenericParameters( transUnit ) ); 349 350 PASS( "Translate Dimensions", Validate::translateDimensionParameters( transUnit ) ); 351 PASS( "Check Function Returns", Validate::checkReturnStatements( transUnit ) ); 352 353 // Must happen before Autogen. 354 PASS( "Fix Return Statements", InitTweak::fixReturnStatements( transUnit ) ); 343 355 344 356 PASS( "Implement Concurrent Keywords", Concurrency::implementKeywords( transUnit ) ); … … 426 438 translationUnit = convert( move( transUnit ) ); 427 439 } else { 440 PASS( "Validate-C", SymTab::validate_C( translationUnit ) ); 428 441 PASS( "Validate-D", SymTab::validate_D( translationUnit ) ); 429 442 PASS( "Validate-E", SymTab::validate_E( translationUnit ) );