Index: doc/theses/mike_brooks_MMath/.gitignore
===================================================================
--- doc/theses/mike_brooks_MMath/.gitignore	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/.gitignore	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,3 @@
+!Makefile
+build
+uw-ethesis.pdf
Index: doc/theses/mike_brooks_MMath/Makefile
===================================================================
--- doc/theses/mike_brooks_MMath/Makefile	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/Makefile	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,58 @@
+# Configuration variables
+
+Build = build
+Pictures = pictures
+Programs = programs
+
+TeXSRC = ${wildcard *.tex}
+PicSRC = ${notdir ${wildcard ${Pictures}/*.png}}
+DemoSRC = ${notdir ${wildcard ${Programs}/*-demo.cfa}}
+PgmSRC = ${notdir ${wildcard ${Programs}/*.cfa}}
+BibSRC = ${wildcard *.bib}
+
+TeXLIB = .:../../LaTeXmacros:${Build}:		# common latex macros
+BibLIB = .:../../bibliography			# common citation repository
+
+MAKEFLAGS = --no-print-directory # --silent
+VPATH = ${Build} ${Pictures} ${Programs} # extra search path for file names used in document
+
+DOCUMENT = uw-ethesis.pdf
+BASE = ${basename ${DOCUMENT}}			# remove suffix
+
+# Commands
+
+LaTeX = TEXINPUTS=${TeXLIB} && export TEXINPUTS && pdflatex -halt-on-error -output-directory=${Build}
+BibTeX = BIBINPUTS=${BibLIB} && export BIBINPUTS && bibtex
+CFA = cfa
+
+# Rules and Recipes
+
+.PHONY : all clean				# not file names
+.ONESHELL :
+
+all : ${DOCUMENT}
+
+clean :
+	@rm -frv ${DOCUMENT} ${Build}
+
+# File Dependencies
+
+%.pdf : ${TeXSRC} ${DemoSRC:%.cfa=%.tex} ${PicSRC} ${PgmSRC} ${BibSRC} Makefile | ${Build}
+	${LaTeX} ${BASE}
+	${BibTeX} ${Build}/${BASE}
+	${LaTeX} ${BASE}
+	# if needed, run latex again to get citations
+	if fgrep -s "LaTeX Warning: Citation" ${Build}/${BASE}.log ; then ${LaTeX} ${BASE} ; fi
+#	${Glossary} ${Build}/${BASE}
+#	${LaTeX} ${BASE}
+	cp ${Build}/$@ $@
+
+${Build}:
+	mkdir -p $@
+
+%-demo.tex: %-demo | ${Build}
+	${Build}/$< > ${Build}/$@
+
+%-demo: %-demo.cfa
+	${CFA} $< -o ${Build}/$@
+
Index: doc/theses/mike_brooks_MMath/array.tex
===================================================================
--- doc/theses/mike_brooks_MMath/array.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/array.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,319 @@
+\chapter{Array}
+
+\section{Features Added}
+
+The present work adds a type @array@ to the \CFA standard library~\cite{Cforall}.
+
+This array's length is statically governed and dynamically valued.  This static governance achieves argument safety and suggests a path to subscript safety as future work (TODO: cross reference).  In its present state, this work is a runtime library accessed through a system of macros, while section [TODO: discuss C coexistence] discusses a path for the new array type to be accessed directly by \CFA's array syntax, replacing the lifted C array that this syntax currently exposes.
+
+This section presents motivating examples of the new array type's usage, and follows up with definitions of the notations that appear.
+
+The core of the new array governance is tracking all array lengths in the type system.  Dynamically valued lengths are represented using type variables.  The stratification of type variables preceding object declarations makes a length referenceable everywhere that it is needed.  For example, a declaration can share one length, @N@, among a pair of parameters and the return.
+\lstinputlisting[language=CFA, firstline=50, lastline=59]{hello-array.cfa}
+Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated bool array.
+
+The array type uses the parameterized length information in its @sizeof(-)@ determination, illustrated in the example's call to @alloc@.  That call requests an allocation of type @array(bool, N)@, which the type system deduces from the left-hand side of the initialization, into the return type of the @alloc@ call.  Preexisting \CFA behaviour is leveraged here, both in the return-type-only polymorphism, and the @sized(T)@-aware standard-library @alloc@ routine.  The new @array@ type plugs into this behaviour by implementing the @sized@/@sizeof(-)@ assertion to have the intuitive meaning.  As a result, this design avoids an opportunity for programmer error by making the size/length communication to a called routine implicit, compared with C's @calloc@ (or the low-level \CFA analog @aalloc@), which takes an explicit length parameter not managed by the type system.
+
+A harness for this @f@ function shows how dynamic values are fed into the system.
+\lstinputlisting[language=CFA, firstline=100, lastline=119]{hello-array.cfa}
+Here, the @a@ sequence is loaded with decreasing values, and the @b@ sequence with amounts off by a constant, giving relative differences within tolerance at first and out of tolerance later.  The driver program is run with two different inputs of sequence length.
+
+The loops in the driver follow the more familiar pattern of using the ordinary variable @n@ to convey the length.  The type system implicitly captures this value at the call site (@main@ calling @f@) and makes it available within the callee (@f@'s loop bound).
+
+The two parts of the example show @Z(n)@ adapting a variable into a type-system governed length (at @main@'s declarations of @a@, @b@, and @result@), @z(N)@ adapting in the opposite direction (at @f@'s loop bound), and a pass-through use of a governed length (at @f@'s declaration of @ret@).  It is hoped that future language integration will allow the macros @Z@ and @z@ to be omitted entirely from the user's notation, creating the appearance of seamlessly interchanging numeric values with appropriate generic parameters.
+
+The macro-assisted notation, @forall...ztype@, participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and in the function body.  So future language integration only sweetens this form and does not seek to eliminate the declaration.  The present form is chosen to parallel, as closely as a macro allows, the existing forall forms:
+\begin{lstlisting}
+  forall( dtype T  ) ...
+  forall( otype T  ) ...
+  forall( ztype(N) ) ...
+\end{lstlisting}
+
+The notation @array(thing, N)@ is also macro-assisted, though only in service of enabling multidimensional uses discussed further in section \ref{toc:mdimpl}.  In a single-dimensional case, the macro expansion gives a generic type instance, exactly as the original form suggests.
+
+
+
+In summary:
+
+\begin{tabular}{p{15em}p{20em}}
+  @ztype( N )@ & within a forall, declares the type variable @N@ to be a governed length \\[0.25em]
+  @Z( @ $e$ @ )@ & a type representing the value of $e$ as a governed length, where $e$ is a @size_t@-typed expression \\[0.25em]
+  @z( N )@ & an expression of type @size_t@, whose value is the governed length @N@ \\[0.25em]
+  @array( thing, N0, N1, ... )@
+  &  a type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects
+\end{tabular}
+
+Unsigned integers have a special status in this type system.  Unlike C++, which allows @template< size_t N, char * msg, typename T >...@ declarations, this system does not accommodate values of arbitrary user-provided types.  TODO: discuss connection with dependent types.
+
+
+An example of a type error demonstrates argument safety.  The running example has @f@ expecting two arrays of the same length.  A compile-time error occurs when attempting to call @f@ with arrays whose lengths may differ.
+\lstinputlisting[language=CFA, firstline=150, lastline=155]{hello-array.cfa}
+As is common practice in C, the programmer is free to cast, that is, to assert knowledge not shared with the type system.
+\lstinputlisting[language=CFA, firstline=200, lastline=202]{hello-array.cfa}
+
+Argument safety, and the associated implicit communication of length, work with \CFA's generic types too.  As a structure can be defined over a parameterized element type, so can it be defined over a parameterized length.  Doing so gives a refinement of C's ``flexible array member'' pattern that allows nesting structures with array members anywhere within other structures.
+\lstinputlisting[language=CFA, firstline=20, lastline=26]{hello-accordion.cfa}
+This structure's layout has the starting offset of @cost_contribs@ varying in @Nclients@, and the offset of @total_cost@ varying in both generic parameters.  For a function that operates on a @request@ structure, the type system handles this variation transparently.
+\lstinputlisting[language=CFA, firstline=50, lastline=57]{hello-accordion.cfa}
+In the example runs of a driver program, different offset values are navigated in the two cases.
+\lstinputlisting[language=CFA, firstline=100, lastline=115]{hello-accordion.cfa}
+The output values show that @summarize@ and its caller agree on both the offsets (where the callee starts reading @cost_contribs@ and where the callee writes @total_cost@).  Yet the call site still says just, ``pass the request.''
+
+
+\section{Multidimensional implementation}
+\label{toc:mdimpl}
+
+
+TODO: introduce multidimensional array feature and approaches
+
+The new \CFA standard library @array@ datatype supports multidimensional uses more richly than the C array.  The new array's multidimensional interface and implementation follow an array-of-arrays setup, meaning that, like C's @float[n][m]@ type, it is one contiguous object, with coarsely-strided dimensions directly wrapping finely-strided dimensions.  This setup is in contrast with the pattern of an array of pointers, each referring to a separately allocated sub-array.  Beyond what C's type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.  C and C++ require a programmer with such a need to manage pointer/offset arithmetic manually.
+
+Examples are shown using a $5 \times 7$ float array, @a@, loaded with increments of $0.1$ when stepping across the length-7 finely-strided dimension shown on columns, and with increments of $1.0$ when stepping across the length-5 coarsely-strided dimension shown on rows.
+\lstinputlisting[language=CFA, firstline=120, lastline=128]{hello-md.cfa}
+The memory layout of @a@ has strictly increasing numbers along its 35 contiguous positions.
+
+A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.  As with the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions.  This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions.
+\lstinputlisting[language=CFA, firstline=60, lastline=66]{hello-md.cfa}
+\lstinputlisting[language=CFA, firstline=140, lastline=140]{hello-md.cfa}
+
+This function declaration asserts too much knowledge about its parameter @c@ for it to be usable for printing either a row slice or a column slice.  Specifically, declaring the parameter @c@ with type @array@ means that @c@ is contiguous.  However, the function does not use this fact.  For the function to do its job, @c@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@), with governed length @N@.  The new-array library provides the trait @ix@, so defined.  With it, the original declaration can be generalized, while still implemented with the same body, to the latter declaration:
+\lstinputlisting[language=CFA, firstline=40, lastline=44]{hello-md.cfa}
+\lstinputlisting[language=CFA, firstline=145, lastline=145]{hello-md.cfa}
+
+Nontrivial slicing, in this example, means passing a noncontiguous slice to @print1d@.  The new-array library provides a ``subscript by all'' operation for this purpose.  In a multi-dimensional subscript operation, any dimension given as @all@ is left ``not yet subscripted by a value,'' implementing the @ix@ trait, waiting for such a value.
+\lstinputlisting[language=CFA, firstline=150, lastline=151]{hello-md.cfa}
+
+The example has shown that @a[2]@ and @a[[2, all]]@ both refer to the same ``2.*'' slice.  Indeed, the various @print1d@ calls under discussion access the entry with value 2.3 as @a[2][3]@, @a[[2,all]][3]@, and @a[[all,3]][2]@.  This design preserves (and extends) C array semantics by defining @a[[i,j]]@ to be @a[i][j]@ for numeric subscripts, but also for ``subscripting by all''.  That is:
+
+\begin{tabular}{cccccl}
+    @a[[2,all]][3]@  &  $=$  &  @a[2][all][3]@  & $=$  &  @a[2][3]@  & (here, @all@ is redundant)  \\
+    @a[[all,3]][2]@  &  $=$  &  @a[all][3][2]@  & $=$  &  @a[2][3]@  & (here, @all@ is effective)
+\end{tabular}
+
+Narrating progress through each of the @-[-][-][-]@ expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
+
+\noindent Where @all@ is redundant:
+
+\begin{tabular}{ll}
+    @a@  & 2-dimensional, want subscripts for coarse then fine \\
+    @a[2]@  & 1-dimensional, want subscript for fine; lock coarse = 2 \\
+    @a[2][all]@  & 1-dimensional, want subscript for fine \\
+    @a[2][all][3]@  & 0-dimensional; lock fine = 3
+\end{tabular}
+
+\noindent Where @all@ is effective:
+
+\begin{tabular}{ll}
+    @a@  & 2-dimensional, want subscripts for coarse then fine \\
+    @a[all]@  & 2-dimensional, want subscripts for fine then coarse \\
+    @a[all][3]@  & 1-dimensional, want subscript for coarse; lock fine = 3 \\
+    @a[all][3][2]@  & 0-dimensional; lock coarse = 2
+\end{tabular}
+
+The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.  The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
+
+Contiguous arrays, and slices of them, are all realized by the same underlying parameterized type.  It includes stride information in its metadata.  The @-[all]@ operation is a conversion from a reference to one instantiation, to a reference to another instantiation.  The running example's @all@-effective step, stated more concretely, is:
+
+\begin{tabular}{ll}
+    @a@       & : 5 of ( 7 of float each spaced 1 float apart ) each spaced 7 floats apart \\
+    @a[all]@  & : 7 of ( 5 of float each spaced 7 floats apart ) each spaced 1 float apart
+\end{tabular}
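+
+The two descriptions can be checked against each other with offset arithmetic.  Under either reading, the entry with value 2.3 sits 17 floats from the start of @a@:
+
+\begin{tabular}{lll}
+    @a[2][3]@       & via @a@:       & $2 \cdot 7 + 3 \cdot 1 = 17$ \\
+    @a[all][3][2]@  & via @a[all]@:  & $3 \cdot 1 + 2 \cdot 7 = 17$
+\end{tabular}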
+
+\begin{figure}
+    \includegraphics{measuring-like-layout}
+    \caption{Visualization of subscripting by value and by \lstinline[language=CFA,basicstyle=\ttfamily]{all}, for \lstinline[language=CFA,basicstyle=\ttfamily]{a} of type \lstinline[language=CFA,basicstyle=\ttfamily]{array( float, Z(5), Z(7) )}. The horizontal dimension represents memory addresses while vertical layout is conceptual.}
+    \label{fig:subscr-all}
+\end{figure}
+
+\noindent While the latter description implies overlapping elements, Figure \ref{fig:subscr-all} shows that the overlaps only occur with unused spaces between elements.  Its depictions of @a[all][...]@ show the navigation of a memory layout with nontrivial strides, that is, with ``spaced \_ floats apart'' values that are greater or smaller than the true count of valid indices times the size of a logically indexed element.  Reading from the bottom up, the expression @a[all][3][2]@ shows a float, masquerading as a @float[7]@, for the purpose of being arranged among its peers; five such occurrences form @a[all][3]@.  The tail of flatter boxes extending to the right of a proper element represents this stretching.  At the next level of containment, the structure @a[all][3]@ masquerades as a @float[1]@, for the purpose of being arranged among its peers; seven such occurrences form @a[all]@.  The vertical staircase arrangement represents this compression, and the resulting overlap.
+
+The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.  The private @arpk@ structure (array with explicit packing) is generic over these two types (and more): the contained element, and what it is masquerading as.  This structure's public interface is the @array(...)@ construction macro and the two subscript operators.  Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.  Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.  Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
+
+The @arpk@ structure and its @-[i]@ operator are thus defined as:
+\begin{lstlisting}
+forall( ztype(N),               // length of current dimension
+        dtype(S) | sized(S),    // masquerading-as
+        dtype E_im,             // immediate element, often another array
+        dtype E_base            // base element, e.g. float, never array
+      ) {
+    struct arpk {
+        S strides[z(N)];        // so that sizeof(this) is N of S
+    };
+
+    // expose E_im, stride by S
+    E_im & ?[?]( arpk(N, S, E_im, E_base) & a, ptrdiff_t i ) {
+        return (E_im &) a.strides[i];
+    }
+}
+\end{lstlisting}
+
+An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.  In the base case, @array(E_base)@ is just @E_base@.  Because this construction uses the same value for the generic parameters @S@ and @E_im@, the resulting layout has trivial strides.
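+
+As a concrete sketch of this expansion, the running example's type @array( float, Z(5), Z(7) )@ unfolds as follows (the library's actual private spelling of these intermediate types may differ):
+\begin{lstlisting}
+array( float, Z(5), Z(7) )
+  = arpk( Z(5), R, R, float )
+    where R = array( float, Z(7) )
+            = arpk( Z(7), float, float, float )
+\end{lstlisting}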
+
+Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @E_im@ chain, which carries @S@ instantiations, intact, to new positions.  Expressed as an operation on types, this rotation is:
+\begin{eqnarray*}
+suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
+enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
+enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
+\end{eqnarray*}
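+
+Applied to the running example, with $R$ abbreviating @arpk( Z(7), float, float, float )@, these rules derive the layout of @a[all]@ stated above:
+\begin{eqnarray*}
+suball( arpk( Z(5), R, R, float ) ) & = & enq( Z(5), R, R, float ) \\
+ & = & arpk( Z(7), float, enq( Z(5), R, float, float ), float ) \\
+ & = & arpk( Z(7), float, arpk( Z(5), R, float, float ), float )
+\end{eqnarray*}
+The result reads as seven elements spaced @sizeof(float)@ apart, each exposing five floats spaced @sizeof(R)@, that is, seven floats, apart.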
+
+
+\section{Bound checks, added and removed}
+
+\CFA array subscripting is protected with runtime bound checks.  Having dependent typing causes the optimizer to remove more of these bound checks than it would without the dependent typing.  This section provides a demonstration of the effect.
+
+The experiment compares the \CFA array system with the padded-room system [todo:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.  The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.  The experiment compares with the C++ version to keep access to generated assembly code simple.
+
+As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.  When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.  But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
+
+TODO: paste source and assembly codes
+
+Incorporating reuse among dimension sizes is seen to give \CFA an optimization advantage.  The case is naive matrix multiplication over a row-major encoding.
+
+TODO: paste source codes
+
+
+
+
+
+\section{Comparison with other arrays}
+
+\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.  Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference.  These systems, therefore, ask the programmer to convince the typechecker that every pointer dereference is valid.  \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
+
+\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.  Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.  The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.
+
+\subsection{Safety in a padded room}
+
+Java's array [todo:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.  Consider the array parameter declarations in:
+
+\begin{tabular}{rl}
+    C      &  @void f( size_t n, size_t m, float a[n][m] );@ \\
+    Java   &  @void f( float[][] a );@
+\end{tabular}
+
+Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access.  If a value of @i@ outside this range is used, a runtime error is guaranteed.  In these respects, C offers no guarantees at all.  Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only.  Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@ as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention.  Moreover, even if @a[0][0]@ is valid for the purpose intended, C infamously permits an @i@ such that @a[i][0]@ is not valid for the same purpose, and yet its evaluation does not produce an error.
+
+Java's lack of expressiveness for more applied properties means these outcomes are possible:
+\begin{itemize}
+    \item @a[0][17]@ and @a[2][17]@ are valid accesses, yet @a[1][17]@ is a runtime error, because @a[1]@ is a null pointer
+    \item the same observation, now because @a[1]@ refers to an array of length 5
+    \item execution times vary, because the @float@ values within @a@ are sometimes stored nearly contiguously, and other times, not at all
+\end{itemize}
+C's array has none of these limitations, nor do any of the ``array language'' comparators discussed in this section.
+
+This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [todo: cite example] for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array.  The advice is that, while a vector is also more powerful (and quirky) than an array, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors.  Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.  Allowing this scheme the same referential integrity assumption that \CFA enjoys [todo:xref], this scheme matches Java's safety and expressiveness exactly.  [TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.]
+
+\subsection{Levels of dependently typed arrays}
+
+The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
+\begin{itemize}
+    \item a \emph{zip}-style operation that consumes two arrays of equal length
+    \item a \emph{map}-style operation whose produced length matches the consumed length
+    \item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
+\end{itemize}
+Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying degrees of static/dynamic checking and of ability to opt out.  Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
+
+
+
+Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.  In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.  The tradeoff of this expressiveness is complexity in the checker, typically including a potential for nontermination.  In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.  [TODO: clarify how even Idris type checking terminates]
+
+Idris is a current, general-purpose dependently typed programming language.  Length checking is a common benchmark for full dependent type systems.  Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.  [todo: finish explaining what Data.Vect is and then the essence of the comparison]
+
+POINTS:
+here is how our basic checks look (on a system that doesn't have to compromise);
+it can also do these other cool checks, but watch how I can mess with its conservativeness and termination
+
+Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.  Unlike \CFA, both are garbage-collected functional languages.  Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.  So, like \CFA, the checking in question is a lightweight bounds-only analysis.  Like \CFA's, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
+
+
+
+The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.  There is a particular emphasis on an existential type, enabling callee-determined return shapes.  
+
+Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.
+
+Futhark and full-strength dependently typed languages treat array sizes as ordinary values.  Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
+
+\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.  Belonging to the type system means it is inferred at a call site and communicated implicitly, as in Dex and unlike in Futhark.  Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.
+
+\subsection{Static safety in C extensions}
+
+
+\section{Future Work}
+
+\subsection{Declaration syntax}
+
+\subsection{Range slicing}
+
+\subsection{With a module system}
+
+\subsection{With described enumerations}
+
+A project in \CFA's current portfolio will improve enumerations.  In the incumbent state, \CFA has C's enumerations, unmodified.  I will not discuss the core of this project, which has a tall mission already, to improve type safety, maintain appropriate C compatibility and offer more flexibility about storage use.  It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
+\begin{lstlisting}
+    forall( T | is_enum(T) )
+    void show_in_context( T val ) {
+        for( T i ) {
+            string decorator = "";
+            if ( i == val-1 ) decorator = "< ready";
+            if ( i == val   ) decorator = "< go"   ;
+            sout | i | decorator;
+        }
+    }
+    enum weekday { mon, tue, wed = 500, thu, fri };
+    show_in_context( wed );
+\end{lstlisting}
+with output
+\begin{lstlisting}
+    mon
+    tue < ready
+    wed < go
+    thu
+    fri
+\end{lstlisting}
+The details in this presentation aren't meant to be taken too precisely as suggestions for how it should look in \CFA.  But the example shows these abilities:
+\begin{itemize}
+    \item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
+    \item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
+    \item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
+    \item a provision for looping (the @for@ form used) over the values of the type.
+\end{itemize}
+
+If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.
+
+[TODO: introduce Ada in the comparators]
+
+In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.  The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.
+
+This change of perspective also lets us remove ubiquitous dynamic bound checks.  [TODO: xref] discusses how automatically inserted bound checks can often be optimized away.  But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.  To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
+
+[TODO, fix confusion:  Idris has this arrangement of checks, but still uses the natural numbers as the domain.]
+
+The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.  Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
+\begin{lstlisting}
+interface Ix n
+  get_size n : Unit -> Int
+  ordinal : n -> Int
+  unsafe_from_ordinal n : Int -> n
+\end{lstlisting}
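+
+A \CFA analogue of @Ix@ might be expressed as a trait in the style of the @is_enum@ proposal; all names here are assumptions for illustration, with only the correspondence to Dex's @Ix@ being the point:
+\begin{lstlisting}
+trait is_enum( E ) {
+    size_t get_size( tag( E ) );  // number of values in the type
+    size_t ordinal( E );          // two-way conversion with the integers
+    E from_ordinal( size_t );
+};
+\end{lstlisting}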
+
+Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.  This flavour of polymorphism lets a function be generic over how many dimensions (and in what order) a caller uses when interacting with arrays communicated with this function.  Dex's example is a routine that calculates pointwise differences between two samples.  Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).  In both cases, with the respectively dimensioned interpretation of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.
+
+The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.  In the audio instantiation, the duration-of-clip type argument is used for the domain.  In the photograph instantiation, it's the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.  This use of a tuple as index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
+\begin{lstlisting}
+instance {a b} [Ix a, Ix b] Ix (a & b)
+  get_size = \(). size a * size b
+  ordinal = \(i, j). (ordinal i * size b) + ordinal j
+  unsafe_from_ordinal = \o.
+    bs = size b
+    (unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
+\end{lstlisting}
+and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
+\begin{lstlisting}
+    img_trans :: (img_wd,img_ht)=>Real
+    img_trans.(i,j) = img.i.j
+    result = pairwise img_trans
+\end{lstlisting}
+[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]
+
+In adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.
+
+\subsection{Retire pointer arithmetic}
Index: doc/theses/mike_brooks_MMath/background.tex
===================================================================
--- doc/theses/mike_brooks_MMath/background.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/background.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,5 @@
+\chapter{Background}
+
+\section{Arrays}
+
+\section{Strings}
Index: doc/theses/mike_brooks_MMath/conclusion.tex
===================================================================
--- doc/theses/mike_brooks_MMath/conclusion.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/conclusion.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,7 @@
+\chapter{Conclusion}
+
+\section{Arrays}
+
+\section{Strings}
+
+\section{Future Work}
Index: doc/theses/mike_brooks_MMath/glossary.tex
===================================================================
--- doc/theses/mike_brooks_MMath/glossary.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/glossary.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,33 @@
+% Define Glossary terms (This is properly done here, in the preamble. Could be \input{} from a file...)
+% Main glossary entries -- definitions of relevant terminology
+\newglossaryentry{computer}
+{
+name=computer,
+description={A programmable machine that receives input data,
+               stores and manipulates the data, and provides
+               formatted output}
+}
+
+% Nomenclature glossary entries -- New definitions, or unusual terminology
+\newglossary*{nomenclature}{Nomenclature}
+\newglossaryentry{dingledorf}
+{
+type=nomenclature,
+name=dingledorf,
+description={A person of supposed average intelligence who makes incredibly brainless misjudgments}
+}
+
+% List of Abbreviations (abbreviations type is built in to the glossaries-extra package)
+\newabbreviation{aaaaz}{AAAAZ}{American Association of Amateur Astronomers and Zoologists}
+
+% List of Symbols
+\newglossary*{symbols}{List of Symbols}
+\newglossaryentry{rvec}
+{
+name={$\mathbf{v}$},
+sort={label},
+type=symbols,
+description={Random vector: a location in n-dimensional Cartesian space, where each dimensional component is determined by a random process}
+}
+ 
+\makeglossaries
Index: doc/theses/mike_brooks_MMath/intro.tex
===================================================================
--- doc/theses/mike_brooks_MMath/intro.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/intro.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,7 @@
+\chapter{Introduction}
+
+\section{Arrays}
+
+\section{Strings}
+
+\section{Contributions}
Index: doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-matmul.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-matmul.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-matmul.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,13 @@
+#include <array.hfa>
+
+// traditional "naive" loops
+forall( [M], [N], [P] )
+void matmul( array(float, M, P) & src1,
+             array(float, P, N) & src2, 
+             array(float, M, N) & tgt ) {
+    for (i; M) for (j; N) {
+        tgt[i][j] = 0;
+        for (k; P)
+            tgt[i][j] += src1[i][k] * src2[k][j];
+    }
+}
Index: doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-stdvec.cpp
===================================================================
--- doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-stdvec.cpp	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal-stdvec.cpp	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,121 @@
+/*
+// traditional "naive" loops
+forall( [M], [N], [P] )
+void matmul( array(float, M, P) & src1,
+             array(float, P, N) & src2, 
+             array(float, M, N) & tgt ) {
+    for (i; M) for (j; N) {
+        tgt[i][j] = 0;
+        for (k; P)
+            tgt[i][j] += src1[i][k] * src2[k][j];
+    }
+}
+*/
+
+#if defined BADBOUNDS
+#define BOUND(X) 17
+#else
+#define BOUND(X) X
+#endif
+
+#include <vector>
+using namespace std;
+
+
+#ifndef TESTCASE
+#define TESTCASE 1
+#endif
+
+#if TESTCASE==1
+
+float f( vector<float> & a ) {
+    float result = 0;
+    for( int i = 0; i < BOUND(a.size()); i++ ) {
+        result += a.at(i);
+        // hilarious that, while writing THIS VERY DEMO, on first go, I actually wrote it a[i]
+    }
+    return result;
+}
+
+#ifdef WITHHARNESS
+#include <iostream>
+int main( int argc, char ** argv ) {
+    vector<float> v(5);
+    v.at(0) = 3.14;
+    v.at(1) = 3.14;
+    v.at(2) = 3.14;
+    v.at(3) = 3.14;
+    v.at(4) = 3.14;
+
+    float answer = f(v);
+
+    cout << "answer: " << answer << endl;
+
+}
+#endif 
+
+// g++ array-boundcheck-removal-stdvec.cpp -DWITHHARNESS -DBADBOUNDS
+// ./a.out
+// Aborted: terminate called after throwing an instance of 'std::out_of_range'
+
+
+// g++ array-boundcheck-removal-stdvec.cpp -DWITHHARNESS
+// ./a.out
+// answer: 15.7
+
+// g++ array-boundcheck-removal-stdvec.cpp -S -O2 -DBADBOUNDS
+// loop has cmp-jmp
+// jmp target has call _ZSt24__throw_out_of_range_fmtPKcz@PLT
+
+// g++ array-boundcheck-removal-stdvec.cpp -S -O2
+// loop is clean
+
+
+#elif TESTCASE==2
+
+//#include <cassert>
+#define assert(prop) if (!(prop)) return
+
+typedef vector<vector<float> > mat;
+
+void matmul( mat & a, mat & b, mat & rslt ) {
+    size_t m = rslt.size();
+    assert( m == a.size() );
+    size_t p = b.size();
+    for ( int i = 0; i < BOUND(m); i++ ) {
+        assert( p == a.at(i).size() );
+        size_t n = rslt.at(i).size();
+        for ( int j = 0; j < BOUND(n); j++ ) {
+            rslt.at(i).at(j) = 0.0;
+            for ( int k = 0; k < BOUND(p); k++ ) {
+                assert(b.at(k).size() == n); // asking to check it too often
+                rslt.at(i).at(j) += a.at(i).at(k) * b.at(k).at(j);
+            }
+        }
+    }
+}
+
+#ifdef WITHHARNESS
+#include <iostream>
+int main( int argc, char ** argv ) {
+    mat a(5, vector<float>(6));
+    mat b(6, vector<float>(7));
+    mat r(5, vector<float>(7));
+    matmul(a, b, r);
+}
+#endif 
+
+// (modify the declarations in main to be off by one, each of the 6 values at a time)
+// g++ array-boundcheck-removal-stdvec.cpp -DWITHHARNESS -DTESTCASE=2
+// ./a.out
+// (see an assertion failure)
+
+// g++ array-boundcheck-removal-stdvec.cpp -DWITHHARNESS -DTESTCASE=2
+// ./a.out
+// (runs fine)
+
+// g++ array-boundcheck-removal-stdvec.cpp -S -O2 -DTESTCASE=2
+// hypothesis: loop bound checks are clean because the assertions took care of it
+// actual so far:  both assertions and bound checks survived
+
+#endif
Index: doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/array-boundcheck-removal.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,55 @@
+#include <array.hfa>
+
+#ifndef DIMSVERSION
+#define DIMSVERSION 1
+#endif
+
+#if DIMSVERSION == 1
+
+forall( [N] )
+size_t foo( array( size_t, N ) & a ) {
+    size_t retval = 0;
+    for( i; N ) {
+        retval += a[i];
+    }
+    return retval;
+}
+
+#elif DIMSVERSION == 2 
+
+forall( [N], [M] )
+size_t foo( array( size_t, N, M ) & a ) {
+    size_t retval = 0;
+    for( i; N ) for( j; M ) {
+        retval += a[i][j];
+    }
+    return retval;
+}
+
+#elif DIMSVERSION == 3
+
+forall( [N] )
+size_t foo( array( size_t, N ) & a, array( size_t, N ) & b ) {
+    size_t retval = 0;
+    for( i; N ) {
+        retval += a[i] - b[i];
+    }
+    return retval;
+}
+
+#elif DIMSVERSION == 4
+
+forall( [M], [N], [P] )
+void foo   ( array(size_t, M, P) & src1,
+             array(size_t, P, N) & src2, 
+             array(size_t, M, N) & tgt ) {
+    for (i; M) for (j; N) {
+        tgt[i][j] = 0;
+        for (k; P)
+            tgt[i][j] += src1[i][k] * src2[k][j];
+    }
+}
+
+#else
+#error Bad Version 
+#endif
Index: doc/theses/mike_brooks_MMath/programs/hello-accordion.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/hello-accordion.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/hello-accordion.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,120 @@
+#include "stdlib.hfa"
+#include "array.hfa"
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype(Nclients), ztype(Ncosts) )
+struct request {
+    unsigned int requestor_id;
+    array( unsigned int, Nclients ) impacted_client_ids;
+    array( float, Ncosts ) cost_contribs;
+    float total_cost;
+};
+
+
+// TODO: understand (fix?) why these are needed (autogen seems to be failing ... is typeof as a struct member not ok?)
+
+forall( ztype(Nclients), ztype(Ncosts) )
+void ?{}( request(Nclients, Ncosts) & this ) {}
+
+forall( ztype(Nclients), ztype(Ncosts) )
+void ^?{}( request(Nclients, Ncosts) & this ) {}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype(Nclients), ztype(Ncosts) )
+void summarize( request(Nclients, Ncosts) & r ) {
+    r.total_cost = 0;
+    for( i; z(Ncosts) )
+        r.total_cost += r.cost_contribs[i];
+    // say the cost is per-client, to make output vary
+    r.total_cost *= z(Nclients);
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+int main( int argc, char ** argv ) {
+
+
+
+const int ncl = atoi(argv[1]);
+const int nco = 2;
+
+request( Z(ncl), Z(nco) ) r;
+r.cost_contribs[0] = 100;
+r.cost_contribs[1] = 0.1;
+
+summarize(r);
+printf("Total cost: %.1f\n", r.total_cost);
+
+/*
+./a.out 5
+Total cost: 500.5
+./a.out 6
+Total cost: 600.6
+*/
+
+
+
+
+}
Index: doc/theses/mike_brooks_MMath/programs/hello-array.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/hello-array.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/hello-array.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,204 @@
+
+#include <common.hfa>
+#include <bits/align.hfa>
+
+extern "C" {
+    int atoi(const char *str);
+}
+
+
+#include "stdlib.hfa"
+#include "array.hfa" // learned: has to come after stdlib, which uses the word tag
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+// Usage:
+// ./a.out 5
+// example does an unchecked reference to argv[1]
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype( N ) )
+array(bool, N) & f( array(float, N) & a, array(float, N) & b ) {
+    array(bool, N) & ret = *alloc();
+    for( i; z(N) ) {
+        float fracdiff = 2 * abs( a[i] - b[i] )
+                       / ( abs( a[i] ) + abs( b[i] ) );
+        ret[i] = fracdiff < 0.005;
+    }
+    return ret;
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+// TODO: standardize argv
+
+int main( int argc, char ** argv ) {
+    int n = atoi(argv[1]);
+    array(float, Z(n)) a, b;
+    for (i; n) {
+        a[i] = 3.14 / (i+1);
+        b[i] = a[i] + 0.005 ;
+    }
+    array(bool, Z(n)) & answer = f( a, b );
+    printf("answer:");
+    for (i; n)
+        printf(" %d", answer[i]);
+    printf("\n");
+    free( & answer );
+}
+/*
+$ ./a.out 5
+answer: 1 1 1 0 0 
+$ ./a.out 7
+answer: 1 1 1 0 0 0 0 
+*/
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype(M), ztype(N) )
+void not_so_bad(array(float, M) &a, array(float, N) &b ) {
+    f( a, a );
+    f( b, b );
+}
+
+
+
+
+
+
+
+#ifdef SHOWERR1
+
+forall( ztype(M), ztype(N) )
+void bad( array(float, M) &a, array(float, N) &b ) {
+    f( a, a ); // ok
+    f( b, b ); // ok
+    f( a, b ); // error
+}
+
+#endif
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype(M), ztype(N) )
+void bad_fixed( array(float, M) &a, array(float, N) &b ) {
+    
+
+    if ( z(M) == z(N) ) {
+        f( a, ( array(float, M) & ) b ); // fixed
+    }
+
+}
Index: doc/theses/mike_brooks_MMath/programs/hello-md.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/hello-md.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/hello-md.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,172 @@
+#include "array.hfa"
+
+
+trait ix( C &, E &, ztype(N) ) {
+    E & ?[?]( C &, ptrdiff_t );
+    void __taglen( tag(C), tag(N) );
+};
+
+forall( ztype(Zn), ztype(S), Timmed &, Tbase & )
+void __taglen( tag(arpk(Zn, S, Timmed, Tbase)), tag(Zn) ) {}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype( N ) )
+void print1d_cstyle( array(float, N) & c );
+
+forall( C &, ztype( N ) | ix( C, float, N ) )
+void print1d( C & c );
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( ztype( N ) )
+void print1d_cstyle( array(float, N) & c ) {
+    for( i; z(N) ) {
+        printf("%.1f  ", c[i]);
+    }
+    printf("\n");
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+forall( C &, ztype( N ) | ix( C, float, N ) )
+void print1d( C & c ) {
+    for( i; z(N) ) {
+        printf("%.1f  ", c[i]);
+    }
+    printf("\n");
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+void fill( array(float, Z(5), Z(7)) & a ) {
+    for ( i; (ptrdiff_t) 5 ) {
+        for ( j; 7 ) {
+            a[[i,j]] = 1.0 * i + 0.1 * j;
+            printf("%.1f  ", a[[i,j]]);
+        }
+        printf("\n");
+    }
+    printf("\n");
+}
+
+int main() {
+
+
+
+
+
+
+
+array( float, Z(5), Z(7) ) a;
+fill(a);
+/*
+0.0  0.1  0.2  0.3  0.4  0.5  0.6  
+1.0  1.1  1.2  1.3  1.4  1.5  1.6  
+2.0  2.1  2.2  2.3  2.4  2.5  2.6  
+3.0  3.1  3.2  3.3  3.4  3.5  3.6  
+4.0  4.1  4.2  4.3  4.4  4.5  4.6
+*/
+    
+
+
+
+
+
+
+
+
+
+
+print1d_cstyle( a[ 2 ] );  // 2.0  2.1  2.2  2.3  2.4  2.5  2.6
+
+
+
+
+print1d( a[ 2 ] );  // 2.0  2.1  2.2  2.3  2.4  2.5  2.6
+
+
+
+
+print1d( a[[ 2, all ]] );  // 2.0  2.1  2.2  2.3  2.4  2.5  2.6
+print1d( a[[ all, 3 ]] );  // 0.3  1.3  2.3  3.3  4.3
+
+
+
+print1d_cstyle( a[[ 2, all ]] );
+
+
+
+
+
+
+
+#ifdef SHOWERR1
+
+print1d_cstyle( a[[ all, 2 ]] );  // bad
+
+#endif
+
+}
+
+
+
Index: doc/theses/mike_brooks_MMath/programs/sharectx-demo.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/sharectx-demo.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/sharectx-demo.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,35 @@
+#include <string.hfa>
+#include <string_sharectx.hfa>
+
+void a() {}
+void b() {}
+void c() {}
+void d() {}
+void e() {}
+
+
+
+
+
+
+
+
+
+
+
+void helper2() {
+   c();
+   string_sharectx c = {NEW_SHARING};
+   d();
+}
+void helper1() {
+   a();
+   string_sharectx c = {NO_SHARING};
+   b();
+   helper2();
+   e();
+}
+int main() {
+   helper1();
+}
+
Index: doc/theses/mike_brooks_MMath/programs/sharing-demo.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/sharing-demo.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/programs/sharing-demo.cfa	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,382 @@
+#include <string.hfa>
+#include <assert.h>
+
+#define xstr(s) str(@s;@)
+#define str(s) #s
+
+void demo1() {
+	sout | sepDisable;;
+	sout | "Consider two strings @s1@ and @s1a@ that are in an aliasing relationship, and a third, @s2@, made by a simple copy from @s1@.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t\t\t& @s1@\t& @s1a@\t& @s2@\t\\\\";
+
+	#define S1 string s1  = "abc"
+	#define S1A string s1a = s1`shareEdits
+	#define S2 string s2  = s1
+	S1;
+	S1A;
+	S2;
+	assert( s1 == "abc" );
+	assert( s1a == "abc" );
+	assert( s2 == "abc" );
+	sout | xstr(S1) | "\t\\\\";
+	sout | xstr(S1A) | "\t\\\\";
+	sout | xstr(S2) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2;
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+
+	sout | "Aliasing (@`shareEdits@) means that changes flow in both directions; with a simple copy, they do not.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t& @s1@\t& @s1a@\t& @s2@\t\\\\";
+	sout | "\t\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+
+	#define S1s1 s1 [1] = '+'
+	S1s1;
+	assert( s1 == "a+c" );
+	sout | xstr(S1s1) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+      
+	#define S1As1 s1a[1] = '-'
+	S1As1;
+	assert( s1a == "a-c" );
+	sout | xstr(S1As1) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+      
+	#define S2s1 s2 [1] = '|'
+	S2s1;
+	assert( s2 == "a|c" );
+	sout | xstr(S2s1) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2;
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+      
+	sout | "Assignment of a value is just a modification."
+		   "\nThe aliasing relationship is established at construction and is unaffected by assignment of a value.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t& @s1@\t& @s1a@\t& @s2@\t\\\\";
+	sout | "\t\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+
+	#define S1qrs s1  = "qrs"
+	S1qrs;
+	assert( s1 == "qrs" );
+	sout | xstr(S1qrs) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+    
+	#define S1Atuv s1a = "tuv"
+	S1Atuv;
+	assert( s1a == "tuv" );
+	sout | xstr(S1Atuv) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+    
+	#define S2wxy s2  = "wxy"
+	S2wxy;
+	assert( s2 == "wxy" );
+	sout | xstr(S2wxy) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2;
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+
+	sout | "Assignment from a string is just assignment of a value."
+		   "\nWhether or not the RHS participates in aliasing is irrelevant.  Any aliasing of the LHS is unaffected.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t& @s1@\t& @s1a@\t& @s2@\t\\\\";
+	sout | "\t\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+
+	#define S1S2 s1  = s2
+	S1S2;
+	assert( s1 == "wxy" );
+	assert( s1a == "wxy" );
+	assert( s2 == "wxy" );
+	sout | xstr(S1S2) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+    
+	#define S1aaa s1  = "aaa"
+	S1aaa;
+	assert( s1 == "aaa" );
+	assert( s1a == "aaa" );
+	assert( s2 == "wxy" );
+	sout | xstr(S1aaa) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+    
+	#define S2S1 s2  = s1
+	S2S1;
+	assert( s1 == "aaa" );
+	assert( s1a == "aaa" );
+	assert( s2 == "aaa" );
+	sout | xstr(S2S1) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+    
+	#define S2bbb s2  = "bbb"
+	S2bbb;
+	assert( s1 == "aaa" );
+	assert( s1a == "aaa" );
+	assert( s2 == "bbb" );
+	sout | xstr(S2bbb) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+      
+    #define S2S1a s2  = s1a
+	S2S1a;
+	assert( s1 == "aaa" );
+	assert( s1a == "aaa" );
+	assert( s2 == "aaa" );
+	sout | xstr(S2S1a) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+
+	#define S2ccc s2  = "ccc"
+	S2ccc;
+	assert( s1 == "aaa" );
+	assert( s1a == "aaa" );
+	assert( s2 == "ccc" );
+	sout | xstr(S2ccc) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+      
+	#define S1xxx s1  = "xxx"
+	S1xxx;
+	assert( s1 == "xxx" );
+	assert( s1a == "xxx" );
+	assert( s2 == "ccc" );
+	sout | xstr(S1xxx) | "\t& " | s1 | "\t& " | s1a | "\t& " | s2 | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par";
+}
+
+
+void demo2() {
+	sout | "Consider new strings @s1_mid@ being an alias for a run in the middle of @s1@, along with @s2@, made by a simple copy from the middle of @s1@.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_mid@\t& @s2@\t\\\\";
+
+	#define D2_s1_abcd string s1     = "abcd"
+	D2_s1_abcd;
+	sout | xstr(D2_s1_abcd) | "\t\\\\";
+
+	#define D2_s1mid_s1 string s1_mid = s1(1,3)`shareEdits
+	D2_s1mid_s1;
+	sout | xstr(D2_s1mid_s1) | "\t\\\\";
+
+	#define D2_s2_s1 string s2     = s1(1,3)
+	D2_s2_s1;      
+	assert( s1 == "abcd" );
+	assert( s1_mid == "bc" );
+	assert( s2 == "bc" );
+	sout | xstr(D2_s2_s1) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+
+    sout | "Again, @`shareEdits@ passes changes in both directions; copy does not.  Note the difference in index values, with the \\emph{b} position being 1 in the longer string and 0 in the shorter strings.  In the case of @s1@ aliasing with @s1_mid@, the very same character is being accessed by different positions.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_mid@\t& @s2@\t\\\\";
+	sout | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+
+	#define D2_s1_plus s1    [1] = '+'
+	D2_s1_plus;
+	assert( s1 == "a+cd" );
+	assert( s1_mid == "+c" );
+	assert( s2 == "bc" );
+	sout | xstr(D2_s1_plus) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+
+	#define D2_s1mid_minus s1_mid[0] = '-'
+	D2_s1mid_minus;
+	assert( s1 == "a-cd" );
+	assert( s1_mid == "-c" );
+	assert( s2 == "bc" );
+	sout | xstr(D2_s1mid_minus) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+      
+    #define D2_s2_pipe s2    [0] = '|'
+	D2_s2_pipe;
+	assert( s1 == "a-cd" );
+	assert( s1_mid == "-c" );
+	assert( s2 == "|c" );
+	sout | xstr(D2_s2_pipe) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+
+    sout | "Once again, assignment of a value is a modification that flows through the aliasing relationship, without affecting its structure.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_mid@\t& @s2@\t\\\\";
+	sout | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+    
+	#define D2_s1mid_ff s1_mid = "ff"
+	D2_s1mid_ff;
+	assert( s1 == "affd" );
+	assert( s1_mid == "ff" );
+	assert( s2 == "|c" );
+	sout | xstr(D2_s1mid_ff) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+      
+	#define D2_s2_gg s2     = "gg"
+	D2_s2_gg;
+	assert( s1 == "affd" );
+	assert( s1_mid == "ff" );
+	assert( s2 == "gg" );
+	sout | xstr(D2_s2_gg) | "\t& " | s1 | "\t& " | s1_mid | "\t& " | s2 | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+    
+    sout | "In the \\emph{ff} step, which is a positive example of flow across an aliasing relationship, the result is straightforward to accept because the flow direction is from contained (small) to containing (large).  The following rules for edits through aliasing substrings will guide how to flow in the opposite direction.";
+	sout | "\\par";
+
+
+    sout | "Growth and shrinkage are natural extensions.  An empty substring is a real thing, at a well-defined location, whose meaning is extrapolated from the examples so far.  The intended metaphor is operating a GUI text editor.  Having an aliasing substring is like using the mouse to select a few words.  Assigning onto an aliasing substring is like typing with a few words selected:  depending on how much you type, the file being edited can get shorter or longer.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{lll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_mid@\t\\\\";
+	sout | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+
+	assert( s1 == "affd" );
+	assert( s1_mid == "fc" );                                                     // ????????? bug?
+	sout | xstr(D2_s2_gg) | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+
+	#define D2_s1mid_hhhh s1_mid = "hhhh"
+	D2_s1mid_hhhh;
+	assert( s1 == "ahhhhd" );
+	assert( s1_mid == "hhhh" );
+	sout  | xstr(D2_s1mid_hhhh)  | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+      
+	#define D2_s1mid_i s1_mid = "i"
+	D2_s1mid_i;
+	assert( s1 == "aid" );
+	assert( s1_mid == "i" );
+	sout  | xstr(D2_s1mid_i)  | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+    
+	#define D2_s1mid_empty s1_mid = ""
+	D2_s1mid_empty;
+	assert( s1 == "ad" );
+	// assert( s1_mid == "" );    ------ Should be so, but fails
+	sout  | xstr(D2_s1mid_empty)  | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+
+	#define D2_s1mid_jj s1_mid = "jj"
+	D2_s1mid_jj;
+	assert( s1 == "ajjd" );
+	assert( s1_mid == "jj" );
+	sout  | xstr(D2_s1mid_jj)  | "\t& " | s1 | "\t& " | s1_mid | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+    
+    sout | "Multiple portions can be aliased.  When there are several aliasing substrings at once, the text editor analogy becomes an online multi-user editor.  I should be able to edit a paragraph in one place (changing the document's length), without my edits affecting which letters are within a mouse-selection that you had made previously, somewhere else.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{lllll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_bgn@\t& @s1_mid@\t& @s1_end@\t\\\\";
+
+	#define D2_s1bgn_s1	string s1_bgn = s1(0, 1)`shareEdits
+	D2_s1bgn_s1;
+	sout  | xstr(D2_s1bgn_s1)  | "\t\\\\";
+
+	#define D2_s1end_s1 string s1_end = s1(3, 4)`shareEdits
+	D2_s1end_s1;
+	assert( s1 == "ajjd" );
+	assert( s1_bgn == "a" );
+	assert( s1_mid == "jj" );
+	assert( s1_end == "d" );
+	sout  | xstr(D2_s1end_s1)  | "\t& " | s1 | "\t& " | s1_bgn | "\t& " | s1_mid | "\t& " | s1_end | "\t\\\\";
+      
+	#define D1_s1bgn_zzz s1_bgn = "zzzz"
+	D1_s1bgn_zzz;
+	assert( s1 == "zzzzjjd" );
+	assert( s1_bgn == "zzzz" );
+	assert( s1_mid == "jj" );
+	assert( s1_end == "d" );
+	sout  | xstr(D1_s1bgn_zzz)  | "\t& " | s1 | "\t& " | s1_bgn | "\t& " | s1_mid | "\t& " | s1_end | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+
+    sout | "When an edit happens on an aliasing substring that overlaps another, an effect is unavoidable.  Here, the passive party sees its selection shortened, to exclude the characters that were not part of the original selection.";
+	sout | "\\par\\noindent";
+	sout | "\\begin{tabular}{llllll}";
+	sout | "\t\t\t\t& @s1@\t& @s1_bgn@\t& @s1_crs@\t& @s1_mid@\t& @s1_end@\t\\\\";
+    
+	#define D2_s1crs_s1 string s1_crs = s1(3, 5)`shareEdits
+	D2_s1crs_s1;
+	assert( s1 == "zzzzjjd" );
+	assert( s1_bgn == "zzzz" );
+	assert( s1_crs == "zj" );
+	assert( s1_mid == "jj" );
+	assert( s1_end == "d" );  
+	sout  | xstr(D2_s1crs_s1)  | "\t& " | s1 | "\t& " | s1_bgn | "\t& " | s1_crs | "\t& " | s1_mid | "\t& " | s1_end | "\t\\\\";
+    
+	#define D2_s1crs_ppp s1_crs = "+++"
+	D2_s1crs_ppp;
+	assert( s1 == "zzz+++jd" );
+	assert( s1_bgn == "zzz" );
+	assert( s1_crs == "+++" );
+	assert( s1_mid == "j" );
+	assert( s1_end == "d" );
+	sout  | xstr(D2_s1crs_ppp)  | "\t& " | s1 | "\t& " | s1_bgn | "\t& " | s1_crs | "\t& " | s1_mid | "\t& " | s1_end | "\t\\\\";
+	sout | "\\end{tabular}";
+	sout | "\\par\\noindent";
+	sout | "TODO: finish typesetting the demo";
+
+    // "This shortening behaviour means that a modification has to occur entirely inside a substring, to show up in that substring.  Sharing changes through the intersection of partially overlapping aliases is still possible, so long as the receiver's boundary is not inside the edit."
+
+	string word = "Phi";
+	string consonants = word(0,2)`shareEdits;
+	string miniscules = word(1,3)`shareEdits;
+	assert( word == "Phi" );
+	assert( consonants == "Ph" );
+	assert( miniscules == "hi" );
+      
+	consonants[1] = 's';
+	assert( word == "Psi" );
+	assert( consonants == "Ps" );
+	assert( miniscules == "si" );
+
+	// "The extreme form of this shortening happens when a bystander alias is a proper substring of an edit.  The bystander becomes an empty substring."
+    
+	string all = "They said hello again";
+	string greet     = all(10,15)`shareEdits;
+	string greet_bgn = all(10,11)`shareEdits;
+	string greet_end = all(14,15)`shareEdits;
+      
+	assert( all == "They said hello again" );
+	assert( greet == "hello" );
+	assert( greet_bgn == "h" );
+	assert( greet_end == "o" );
+      
+    
+	greet = "sup";
+	assert( all == "They said sup again" );
+	assert( greet == "sup" );
+	// assert( greet_bgn == "" );    ------ Should be so, but fails
+	// assert( greet_end == "" );
+      
+    
+  
+
+  
+    /* As in the earlier step where \emph{ad} becomes \emph{ajjd}, such empty substrings maintain their places in the total string, and can be used for filling it.  Because @greet_bgn@ was originally at the start of the edit, in the outcome, the empty @greet_bgn@ sits just before the written value.  Similarly @greet_end@ goes after.  Though not shown, an overwritten substring at neither side goes arbitrarily to the before side. */
+  
+
+  
+    
+	greet_bgn = "what"; 
+      
+      
+	assert( all == "They said whatsup again" );
+	assert( greet == "sup" );
+	assert( greet_bgn == "what" );
+	// assert( greet_end == "" );    ------ Should be so, but fails
+
+	greet_end = "...";
+
+	assert( all == "They said whatsup... again" );
+	assert( greet == "sup" );
+	assert( greet_bgn == "what" );
+	assert( greet_end == "..." );
+
+	/* Though these empty substrings hold their places in the total string, an empty string belongs to a larger string only when it occurs completely inside it.  There is no state that includes an empty substring at an edge.  For this reason, @all@ gains the characters added by assigning to @greet_bgn@ and @greet_end@, but the string @greet@ does not. */
+
+}
+
+
+int main(int argc, char ** argv) {
+
+    demo1();
+    demo2();
+    printf("%% %s done running\n", argv[0]);
+}
Index: doc/theses/mike_brooks_MMath/string.tex
===================================================================
--- doc/theses/mike_brooks_MMath/string.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/string.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,226 @@
+\chapter{String}
+
+\section{String}
+
+\subsection{Logical Overlap}
+
+\input{sharing-demo.tex}
+
+\subsection{RAII limitations}
+
+Earlier work on \CFA [to cite Schluntz] implemented constructors and destructors.  A constructor is a user-defined function that runs implicitly when control passes an object's declaration, while a destructor runs at the exit of the declaration's lexical scope.  The feature allows programmers to assume that, whenever a runtime object of a certain type is accessible, the system called one of the programmer's constructor functions on that object, and a matching destructor call will happen in the future.  The feature helps programmers ensure that their programs' invariants hold.
+
+The purposes of such invariants go beyond ensuring authentic values for the bits inside the object.  These invariants can track occurrences of the managed objects in other data structures.  Reference counting is a typical application of the latter invariant type.  With a reference-counting smart pointer, the constructor and destructor \emph{of the pointer type} track the lifecycles of occurrences of these pointers by incrementing and decrementing a counter (usually) on the referent object; that is, they maintain state separate from the objects to whose lifecycles they are attached.  Both the C++ and \CFA RAII systems are powerful enough to achieve such reference counting.
+
+The C++ RAII system supports a more advanced application.  A lifecycle function has access to the object under management, by location; constructors and destructors receive a @this@ parameter providing its memory address.  A lifecycle-function implementation can then add its objects to a collection upon creation and remove them at destruction.  A module that provides such objects, by using and encapsulating such a collection, can traverse the collection at relevant times to keep the objects ``good.''  Then, for the user of such a module, declaring an object of its type means not only receiving an authentically ``good'' value at initialization, but also a subscription to a service that keeps the value ``good'' until the user is done with it.
+
+In many cases, the relationship between memory location and lifecycle is simple.  But with stack-allocated objects being used as parameters and returns, there is a sender version in one stack frame and a receiver version in another.  C++ is able to treat those versions as distinct objects and guarantees a copy-constructor call for communicating the value from one to the other.  This ability has implications for the language's calling convention.  Consider an ordinary function @void f( Vehicle x )@, which receives an aggregate by value.  If the type @Vehicle@ has custom lifecycle functions, then a call to a user-provided copy constructor occurs after the caller evaluates its argument expression, and after the callee's stack frame exists, with room for its variable @x@ (which is the location that the copy constructor must target), but before the user-provided body of @f@ begins executing.  C++ achieves this ordering by changing the function signature, in the compiled form, to pass-by-reference, and by having the callee invoke the copy constructor in its preamble.  On the other hand, if @Vehicle@ is a simple structure, then the C calling convention is applied as the code originally appeared: the call-site implementation code performs a bitwise copy from the caller's expression result into the callee's @x@.
+
+TODO: learn the correction for this inconsistency: the discussion above says the callee invokes the copy constructor, but only the caller knows which copy constructor to use!
+
+TODO: discuss the return-value piece of this pattern
+
+The \CFA RAII system has limited support for using lifecycle functions to provide a ``stay good'' service.  It works in restricted settings, including on dynamically allocated objects.  It does not work for communicating arguments and returns by value because the system does not produce a constructor call that tracks the implied move from a sender frame to a receiver frame.  This limitation does not prevent a typical reference-counting design from using call-with-value/return-of-value, because the constructor--destructor calls are correctly balanced.  But it impedes a ``stay-good'' service from supporting call-with-value/return-of-value, because the lifecycles presented to the constructor/destructor calls do not keep stable locations.  A ``stay-good'' service is achievable so long as call-with-value/return-of-value does not occur.  The original presentation [to cite Schluntz section] acknowledges this limitation; the present discussion makes its consequences more apparent.
+
+The \CFA team sees this limitation as part of a tactical interim state that should some day be improved.  The \CFA compiler is currently a source-to-source translator that targets relatively portable C.  Several details of its features are provisionally awkward or under-performant until finer control of its code generation is feasible.  In the present state, all calls that appear in \CFA source code as call-with-value/return-of-value are emitted this way to the underlying C calling convention.  SO WHAT?
+
+The present string-API contribution has both the ``stay good'' promise and call-with-value/return-of-value as essential features.  The main string API uses a workaround to achieve the full feature set, at a runtime performance penalty.  An alternative API sacrifices call-with-value/return-of-value functionality to recover full runtime performance.  These APIs are layered, with the slower, friendlier High-Level API (HL) wrapping the faster, more primitive Low-Level API (LL).  They present the same features, up to lifecycle management, with call-with-value/return-of-value being disabled in LL and implemented with the workaround in HL.  The intention is for most future code to target HL.  In a more distant future state, where \CFA has an RAII system that can handle the problematic quadrant, the HL layer can be abolished, LL can be renamed to match today's HL, and LL can have its call-with-value/return-of-value permission re-enabled.  Then, programs written originally against HL will simply run faster.  In the meantime, two use cases for LL exist.  Performance-critical sections of applications have LL as an option.  Within [Xref perf experiments], though HL-vs-LL penalties are measured, typical comparisons of the contributed string library against similar systems are made using LL.  This measurement gives a fair estimate of the goal state for \CFA while it remains an evolving work in progress.
+
+
+
+\subsection{Memory Management}
+
+A centrepiece of the string module is its memory manager.  The management scheme defines a large shared buffer for strings' text.  Allocation in this buffer is always bump-pointer; the buffer is compacted and/or relocated with growth when it fills.  A string is a smart pointer into this buffer.
+
+This cycle of frequent cheap allocations, interspersed with infrequent expensive compactions, has obvious similarities to a general-purpose memory manager based on garbage collection (GC).  A few differences are noteworthy.  First, in a general-purpose manager, the objects of allocation contain pointers to other such objects, making transitive reachability of these objects a critical property.  Here, the allocations are of buffers of text, never pointers, so one allocation never keeps another one alive.  Second, in a general-purpose manager, where the handle that keeps an allocation alive is the same as the program's general-purpose inter-object reference, an extremely lean representation of this reference is required.  Here, a fatter representation is acceptable because [why??].
+
+
+Figure [memmgr-basix.vsdx] shows the representation.  A heap header, with its text buffer, defines a sharing context.  Often, one global sharing context is appropriate for an entire program; exceptions are discussed in [xref TBD].  Strings are handles into the buffer.  They are members of a linked list whose order matches the order of their buffer fragments (exactly, where there is no overlapping, and approximately, where there is).  The header maintains a next-allocation pointer (@alloc@, in the figure) after the last live allocation in the buffer.  No external references into the buffer are allowed, and the management procedure relocates the text allocations as needed.  The string handles contain explicit length fields; null-termination characters are not used, and all string text is kept in contiguous storage.  When strings (the inter-linked handles) are allocated on the program's call stack, a sustained period with no use of the program's dynamic memory allocator can ensue, during which the program nonetheless creates strings, destroys them, and runs length-increasing modifications on existing ones.
+
+Compaction happens when the heap fills.  It is the first of two uses of the linked list.  The list allows discovering all live string handles, and from them, the ranges of the character buffer that are in use.  With these ranges known, their total character count gives the amount of space in use.  When that amount is small, compared with the current buffer size, an in-place compaction occurs, which entails copying the in-use ranges, to be adjacent, at the front of the buffer.  When the in-use amount is large, a larger buffer is allocated (using the program's general-purpose dynamic allocator), the in-use strings are copied to be adjacent at the front of it, and the original buffer is freed back to the program's general allocator.  Either way, navigating the links between the handles provides the pointers into the buffer, first read, to find the source fragment, then written with the location of the resultant fragment.  The linkage across the structure is unaffected during a compaction; only the pointers from the handles to the buffer are modified.  This modification, along with the grooming/provisioning of the text-storage resource that it represents, is an example, in the language of [xref RAII limitations], of the string module providing a ``stay good'' service.
+
+Object lifecycle events are the subscription-management triggers in such a service.  There are two fundamental string-creation routines: importing from external text, like a C-string, and initialization from an existing \CFA string.  When importing, a fresh allocation occurs at the free end of the buffer, into which the text is copied.  The resultant handle is then inserted into the list at the position after the incumbent last handle, a position given by the heap manager's ``last handle'' pointer.  When initializing from text already on the \CFA heap, the resultant handle is a second reference onto the original run of characters.  In this case, the resultant handle's linked-list position is beside the original handle.  Both string-initialization styles preserve the string module's internal invariant that the linked-list order matches the buffer order.  For string destruction, the list being doubly linked provides for easy removal of the disappearing handle.
+
+While a string handle is live, it accepts modification operations, some of which make it reference a different portion of the underlying buffer, and accordingly, move the handle to a different position in the inter-handle list.  While special cases have more optimal handling, the general case requires a fresh buffer run.  In this case, the new run is allocated at the bump-pointer end and filled with the required value.  Then, handles that originally referenced the old location and need to see the new value are pointed at the new buffer location, unlinked from their original positions in the handles' list, and linked in at the end of the list.  An optimal case, when the target is not a substring of something larger and the source is text from elsewhere in the managed buffer, allows the target to be re-pointed at the source characters, and accordingly, to move its list position to be beside the source.  Cases where in-place editing happens, addressed further in [xref: TBD], leave affected handles in their original list positions.  In analogy to the two cases of string initialization, the two cases of realizing assignment, by moving either to a fresh buffer run or to overlapped references with the source, maintain the invariant of linked-list order matching buffer order.
+
+
+To explain: GCing allocator doing bump-pointer with compaction
+
+
+
+At the level of the memory manager, these modifications can always be explained as assignments; for example, an append is an assignment into the empty substring at the end.
+
+While favourable conditions allow for in-place editing, the general case requires a fresh buffer run.  For example, if the new value does not fit in the old place, or if other handles are still using the old value, then the new value will use a fresh buffer run.
+
+where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location should see the new value, 
+
+
+always boiled down to assignment and appendment.  Assignment has special cases that happen in-place, but in the general case, it is implemented as a sequence of appends onto a fresh allocation at the end of the buffer.  (The sequence has multiple steps when the assignment target is a substring: old before, new middle, old after.)  Similarly, an append request can be serviced in-place when there is room, or as a pair of appends 
+
+
+
+\subsection{Sharing implementation}
+
+The \CFA string module has two manners in which several string handles can share an underlying run of characters.
+
+The first type of sharing is user-requested, following [xref Logical Overlap].  Here, the user requests, explicitly, that both handles be views of the same logical, modifiable string.  This state is typically produced by the substring operation.  In a typical substring call, the source string-handle references an entire string, and the resulting, newly made string handle references a portion of the original.  In this state, a subsequent modification made through either handle is visible in both.
+
+The second type of sharing happens when the system implicitly delays the physical execution of a logical \emph{copy} operation, as part of its copy-on-write optimization.  This state is typically produced by constructing a new string, using an original string as its initialization source.  In this state, a subsequent modification done on one handle triggers the deferred copy action, leaving the handles referencing different runs within the buffer, holding distinct values.
+
+A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing.  A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric, and transitive closure of the relationship of one handle being constructed from another with the ``share edits'' opt-in given.  It is represented by a second linked list among the handles.  A string that shares edits with no other is in a SES by itself.  Inside a SES, a logical modification of one substring portion may change the logical value in another, depending on whether the two actually overlap.  Conversely, no logical value change can flow outside of a SES.  Even if a modification on one string handle does not reveal itself \emph{logically} to another handle in the same SES (because they do not overlap), if the modification is length-changing, completing the modification requires visiting the second handle to adjust its location in the sliding text.
+
+
+\subsection{Avoiding Implicit Sharing}
+
+There are tradeoffs associated with the copy-on-write mechanism.  Several quantitative matters are detailed in the [xref: Performance Assessment] section, and the qualitative issue of multi-threaded support is introduced here.  The \CFA string library provides a switch to disable the sharing mechanism for situations where it is inappropriate.
+
+Because of the inter-linked string handles, any participant managing one string is also managing, directly, the neighbouring strings, and from there, a data structure of the ``set of all strings.''  This data structure is intended for sequential access.  A negative consequence of this decision is that multiple threads using strings need to be set up so that they avoid attempting to modify (concurrently) an instance of this structure.  A positive consequence is that a single-threaded program, or a program with several independent threads, can use the sharing context without an overhead from locking.
+
+The \CFA string library provides the @string_sharectx@ type to control an ambient sharing context for the current thread.  It allows two adjustments: to opt out of sharing entirely, or to begin sharing within a private context.  Either way, the chosen mode applies to the current thread for the duration of the lifetime of the created @string_sharectx@ object, up to being suspended by child lifetimes of different contexts.  The intended use is with stack-managed lifetimes, in which the established context lasts until the current function returns, and affects all called functions that do not create their own contexts.
+\lstinputlisting[language=CFA, firstline=20, lastline=34]{sharectx-demo.cfa}
+In this example, the single-letter functions are called in alphabetic order.  The functions @a@ and @d@ share string character ranges within themselves, but not with each other.  The functions @b@, @c@ and @e@ never share anything.
+
+[ TODO: true up with ``is thread local'' (implement that and expand this discussion to give a concurrent example, or adjust this wording) ]
+
+When the string library is running with sharing disabled, it runs without implicit thread-safety challenges (the same as the STL) and with performance goals similar to the STL's.  This thread-safety quality means concurrent users of one string object must still bring their own mutual exclusion, but the string library will not add any cross-thread uses that were not apparent in the user's code.
+
+Running with sharing disabled can be thought of as STL-emulation mode.
+
+
+
+\subsection{Future Work}
+
+To discuss: Unicode
+
+To discuss: Small-string optimization
+
+
+\subsection{Performance Assessment}
+
+I assessed the \CFA string library's speed and memory usage.  The results show penalties in otherwise equivalent cases, due to either micro-optimizations foregone or fundamental costs of the added functionality.  They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified.  The final test shows the overall win of the \CFA text-sharing mechanism.  It exercises several operations together, showing that \CFA enables clean user code to achieve performance that STL requires less-clean user code to achieve.
+
+To discuss: general goal of ... while STL makes you think about memory management, all the time, and if you do, your performance can be great ... \CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management.  [Does this position cover all of it?]
+
+To discuss: revisit HL v LL APIs
+
+To discuss: revisit nosharing as STL emulation modes
+
+These tests use randomly generated text fragments of varying lengths.  A collection of such fragments is a \emph{corpus}.  The mean length of a fragment from a corpus is a typical explanatory variable.  Such a length is used in one of three modes:
+\begin{description}
+    \item [Fixed-size] means all string fragments are of the stated size
+    \item [Varying from 1] means string lengths are drawn from a geometric distribution with the stated mean, and all lengths occur
+    \item [Varying from 16] means string lengths are drawn from a geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean will be above 16.
+\end{description}
+The geometric distribution implies that lengths much longer than the mean occur frequently.  The special treatment of length 16 deals with comparison to STL, given that STL has short-string optimization (see [todo: write and cross-ref future-work SSO]), currently not implemented in \CFA.  When success notwithstanding SSO is being illustrated, a fixed-size or from-16 distribution ensures that these extra-optimized cases are not part of the mix on the STL side.  In all experiments that use a corpus, its text is generated and loaded into the SUT before the timed phase begins.
+
+To discuss: vocabulary for reused case variables
+
+To discuss: common approach to iteration and quoted rates
+
+To discuss: hardware and such
+
+To discuss: memory allocator
+
+
+\subsubsection{Test: Append}
+
+This test measures the speed of appending fragments of text onto a growing string.  Its subcases include both cases where \CFA and STL perform similarly and cases where their designs offer a tradeoff.
+
+One experimental variable is the user's operation being @a = a + b@ vs. @a += b@.  While experienced programmers expect the latter to be ``what you obviously should do,'' controlling the penalty of the former both helps the API be accessible to beginners and offers confidence that, when a user composes operations, the forms most natural to that composition are viable.
+
+Another experimental variable is whether the user's logical allocation is fresh vs. reused.  Here, \emph{reusing a logical allocation} means that the program variable into which the user is concatenating previously held a long string:\\
+\begin{tabular}{ll}
+    Logical allocation fresh                   & Logical allocation reused                  \\
+                                               & @ string x; @                              \\
+    @ for( ... ) { @                           & @ for( ... ) { @                           \\
+    @     string x; @                          & @     x = ""; @                            \\
+    @     for( ... ) @                         & @     for( ... ) @                         \\
+    @        x += ... @                        & @        x += ... @                        \\
+    @ } @                                      & @ } @
+\end{tabular}\\
+These benchmark drivers have an outer loop for ``until a sample-worthy amount of execution has happened'' and an inner loop for ``build up the desired-length string.''  It is sensible to doubt that a user should have to care about this difference, yet the STL performs differently in these cases.  Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length.  For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely.  The reuse-vs-fresh distinction is only relevant in the current \emph{append} tests.
+
+The \emph{append} tests use the varying-from-1 corpus construction; that is, they do not assume away the STL's advantage from small-string optimization.
+
+To discuss: any other case variables introduced in the performance intro
+
+\begin{figure}
+    \includegraphics[width=\textwidth]{string-graph-peq-cppemu.png}
+    \caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA with STL implementations (given \CFA running in STL emulation mode), and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}
+    \label{fig:string-graph-peq-cppemu}
+\end{figure}
+
+Figure \ref{fig:string-graph-peq-cppemu} shows this behaviour, by the STL and by \CFA in STL-emulation mode.  \CFA reproduces STL's performance, up to a 15\% penalty averaged over the cases shown, diminishing with larger strings, and 50\% in the worst case.  This penalty characterizes the amount of implementation fine-tuning done with STL and not yet done with \CFA in its present state.  The larger inherent penalty, for a user mismanaging reuse, is 40\% averaged over the cases shown, is minimally 24\%, shows up consistently between the STL and \CFA implementations, and increases with larger strings.
+
+\begin{figure}
+    \includegraphics[width=\textwidth]{string-graph-peq-sharing.png}
+    \caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA (having implicit sharing activated) with STL, and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}
+    \label{fig:string-graph-peq-sharing}
+\end{figure}
+
+In sharing mode, \CFA makes the fresh/reuse difference disappear, as shown in Figure \ref{fig:string-graph-peq-sharing}.  At append lengths 5 and above, CFA not only splits the two baseline STL cases, but its slowdown of 16\% over (STL with user-managed reuse) is close to the \CFA-v-STL implementation difference seen with \CFA in STL-emulation mode.
+
+\begin{figure}
+    \includegraphics[width=\textwidth]{string-graph-pta-sharing.png}
+    \caption{Average time per iteration with one \lstinline{x = x + y} invocation (new, purple bands), comparing \CFA (having implicit sharing activated) with STL.  For context, the results from Figure \ref{fig:string-graph-peq-sharing} are repeated as the bottom bands.  While not a design goal, and not graphed out, \CFA in STL-emulation mode outperformed STL in this case; user-managed allocation reuse did not affect any of the implementations in this case.}
+    \label{fig:string-graph-pta-sharing}
+\end{figure}
+
+When the user takes a further step beyond the STL's optimal zone, by running @x = x + y@, as in Figure \ref{fig:string-graph-pta-sharing}, the STL's penalty is above $15 \times$ while CFA's (with sharing) is under $2 \times$, averaged across the cases shown here.  Moreover, the STL's gap increases with string size, while \CFA's converges.
+
+\subsubsection{Test: Pass argument}
+
+To have introduced:  STL string library forces users to think about memory management when communicating values across a function call
+
+STL charges a prohibitive penalty for passing a string by value.  With implicit sharing active, \CFA treats this operation as normal and supported.  This test illustrates a main advantage of the \CFA sharing algorithm.  It also has a case in which STL's small-string optimization provides a successful mitigation.
+
+\begin{figure}
+    \includegraphics[width=\textwidth]{string-graph-pbv.png}
+    \caption{Average time per iteration with one call to a function that takes a by-value string argument, comparing \CFA (having implicit sharing activated) with STL.  (a) With \emph{Varying-from-1} corpus construction, in which the STL-only benefit of small-string optimization occurs, in varying degrees, at all string sizes.  (b) With \emph{Fixed-size} corpus construction, in which this benefit applies exactly to strings with length below 16.  [TODO: show version (b)]}
+    \label{fig:string-graph-pbv}
+\end{figure}
+
+Figure \ref{fig:string-graph-pbv} shows the costs for calling a function that receives a string argument by value.  STL's performance worsens as string length increases, while \CFA has the same performance at all sizes.
+
+The \CFA cost to pass is nontrivial.  The contributor is adding and removing the callee's string handle from the global list.  This cost is $1.5 \times$ to $2 \times$ over STL's when small-string optimization applies, though this cost should be avoidable in the same case, given a \CFA realization of this optimization.  At the larger sizes, when STL has to manage storage for the string, STL runs more than $3 \times$ slower, mainly due to time spent in the general-purpose memory allocator.
+
+
+\subsubsection{Test: Allocate}
+
+This test directly compares the allocation scheme of the \CFA string, with sharing active, against that of the STL string.  It treats the \CFA scheme as a form of garbage collection and the STL scheme as an application of malloc-free.  The test shows that \CFA enables faster speed at a cost in memory usage.
+
+A garbage collector, afforded the freedom of managed memory, often runs faster than malloc-free (in an amortized analysis, even though it must occasionally stop to collect) because it is able to use its collection time to move objects.  (In the case of the mini-allocator powering the \CFA string library, objects are runs of text.)  Moving objects lets fresh allocations consume from a large contiguous store of available memory; the ``bump pointer'' book-keeping for such a scheme is very light.  A malloc-free implementation without the freedom to move objects must, in the general case, allocate in the spaces between existing objects; doing so entails the heavier book-keeping needed to navigate and maintain a linked structure.
+
+A garbage collector keeps allocations around after the program can no longer reach them.  By contrast, a program using malloc-free (correctly) releases allocations exactly when they are no longer reachable.  Therefore, the same harness will use more memory while running under garbage collection.  A garbage collector can minimize this memory overhead by searching for dead allocations aggressively, that is, by collecting more often.  Tuned in this way, it spends a lot of time collecting, easily so much as to overwhelm its speed advantage from bump-pointer allocation.  If it is tuned to collect rarely, then it leaves a lot of garbage allocated (waiting to be collected) but gains the advantage of little time spent doing collection.
+
+[TODO: find citations for the above knowledge]
+
+The speed-for-memory tradeoff is, therefore, standard for comparisons like \CFA--STL string allocations.  The test verifies that it holds and quantifies the returns available.
+
+These tests manipulate a tuning knob that controls how much extra space to use.  Specific values of this knob are not user-visible and are not presented in the results here.  Instead, its two effects (amount of space used and time per operation) are shown.  The independent variable is the liveness target, which is the fraction of the text buffer that is in use at the end of a collection.  The allocator will expand its text buffer during a collection if the actual fraction live exceeds this target.
+
+This experiment's driver allocates strings by constructing a string handle as a local variable and then looping over recursive calls.  The time measurement is of nanoseconds per such allocating call.  The arrangement of recursive calls and their fan-out (iterations per recursion level) makes some of the strings long-lived and some of them short-lived.  String lifetime (measured in number of subsequent string allocations) is ?? distributed, because each node in the call tree survives as long as its descendant calls.  The run presented in this section used a call depth of 1000 and a fan-out of 1.006, which means that approximately one call in 167 makes two recursive calls, while the rest make one.  This sizing was chosen to keep the amount of consumed memory within the machine's last-level cache.
+
+\begin{figure}
+    \includegraphics[width=\textwidth]{string-graph-allocn.png}
+    \caption{Space and time performance, under varying fraction-live targets, for the five string lengths shown, under \emph{Fixed-size} corpus construction.  [MISSING] The identified clusters are for the default fraction-live target, which is 30\%.  MISSING: STL results, typically just below the 0.5--0.9 \CFA segment.  All runs keep an average of 836 strings live, and the median string lifetime is ?? allocations.}
+    \label{fig:string-graph-allocn}
+\end{figure}
+
+Figure~\ref{fig:string-graph-allocn} shows the results of this experiment.  At all string sizes, varying the liveness threshold offers speed-for-space tradeoffs relative to STL.  At the default liveness threshold, all measured string sizes see a ??\%--??\% speedup for a ??\%--??\% increase in memory footprint.
+
+
+
+\subsubsection{Test: Normalize}
+
+This test is more applied than the earlier ones: it combines the effects of several operations.  It also demonstrates a case where the \CFA API lets user code perform well without overt memory management, whereas achieving similar performance with the STL string requires adding memory-management complexity.
+
+To motivate: edits being rare
+
+The program performs a specialized find-replace operation on a large body of text.  In the program under test, the replacement simply erases a magic character.  In the larger software problem represented, however, the rewrite logic belongs to a module that was originally intended to operate on simple, modest-length strings.  The challenge is to apply this packaged function across chunks taken from the large body.  Using the \CFA string library, the most natural way to write the helper module's function also works well in the adapted context.  Using the STL string, the most natural ways to write the helper module's function, given its requirements in isolation, slow down when driven in the adapted context.
+
+\begin{lstlisting}
+void processItem( string & item ) {
+    // find issues in item and fix them
+}
+\end{lstlisting}
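To make the adaptation concrete, the following hypothetical C++ sketch shows a natural STL driver: each chunk is copied out of the large body with \lstinline{substr}, fixed in isolation, and spliced back.  These per-chunk copies are the overhead a shared-buffer design avoids.  All names are illustrative, and \lstinline{processItem} here simply erases a magic character \lstinline{'@'}; this is not the thesis benchmark itself:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the helper module's function: erase every
// occurrence of the magic character '@' from one modest-length item.
static void processItem( std::string & item ) {
    for ( std::size_t i = item.find( '@' ); i != std::string::npos; i = item.find( '@' ) )
        item.erase( i, 1 );
}

// Natural STL adaptation: each chunk is copied out of the large body with
// substr, processed in isolation, and spliced back in with replace.
static void normalize( std::string & body, std::size_t chunkLen ) {
    for ( std::size_t pos = 0; pos < body.size(); ) {
        std::string chunk = body.substr( pos, chunkLen );  // copy out
        processItem( chunk );                              // fix in isolation
        body.replace( pos, chunkLen, chunk );              // splice back
        pos += chunk.size();                               // advance past result
    }
}
```

Under this arrangement, every chunk incurs an allocation and two copies even when no edit occurs, which is what makes the rarity of edits relevant to the comparison.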
Index: doc/theses/mike_brooks_MMath/style/master.tex
===================================================================
--- doc/theses/mike_brooks_MMath/style/master.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/style/master.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,10 @@
+\input{style/uw-top}
+
+\input{common}                                          % CForall team's common
+
+\CFAStyle						                        % CFA code-style for all languages
+\lstset{language=CFA,basicstyle=\linespread{0.9}\tt}	% CFA default language
+
+\usepackage[T1]{fontenc}                                % means a | character should be that, not an em dash
+
+\input{style/uw-bot}
Index: doc/theses/mike_brooks_MMath/style/uw-bot.tex
===================================================================
--- doc/theses/mike_brooks_MMath/style/uw-bot.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/style/uw-bot.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,81 @@
+% \usepackage[pdftex,pagebackref=false]{hyperref} % with basic options
+
+\usepackage[pagebackref=false]{hyperref} % with basic options
+		% N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing.
+\hypersetup{
+    plainpages=false,       % needed if Roman numbers in frontpages
+    unicode=false,          % non-Latin characters in Acrobat’s bookmarks
+    pdftoolbar=true,        % show Acrobat’s toolbar?
+    pdfmenubar=true,        % show Acrobat’s menu?
+    pdffitwindow=false,     % window fit to page when opened
+    pdfstartview={FitH},    % fits the width of the page to the window
+    pdftitle={uWaterloo\ LaTeX\ Thesis\ Template},    % title: CHANGE THIS TEXT!
+%    pdfauthor={Author},    % author: CHANGE THIS TEXT! and uncomment this line
+%    pdfsubject={Subject},  % subject: CHANGE THIS TEXT! and uncomment this line
+%    pdfkeywords={keyword1} {key2} {key3}, % list of keywords, and uncomment this line if desired
+    pdfnewwindow=true,      % links in new window
+    colorlinks=true,        % false: boxed links; true: colored links
+    linkcolor=blue,         % color of internal links
+    citecolor=green,        % color of links to bibliography
+    filecolor=magenta,      % color of file links
+    urlcolor=cyan           % color of external links
+}
+\ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
+\hypersetup{	% override some previously defined hyperref options
+%    colorlinks,%
+    citecolor=black,%
+    filecolor=black,%
+    linkcolor=black,%
+    urlcolor=black}
+}{} % end of ifthenelse (no else)
+
+% from uw template
+% Exception to the rule of hyperref being the last add-on package
+\usepackage[automake,toc,abbreviations]{glossaries-extra} 
+
+% Setting up the page margins...
+% uWaterloo thesis requirements specify a minimum of 1 inch (72pt) margin at the
+% top, bottom, and outside page edges and a 1.125 in. (81pt) gutter
+% margin (on binding side). While this is not an issue for electronic
+% viewing, a PDF may be printed, and so we have the same page layout for
+% both printed and electronic versions, we leave the gutter margin in.
+% Set margins to minimum permitted by uWaterloo thesis regulations:
+\setlength{\marginparwidth}{0pt} % width of margin notes
+% N.B. If margin notes are used, you must adjust \textwidth, \marginparwidth
+% and \marginparsep so that the space left between the margin notes and page
+% edge is less than 15 mm (0.6 in.)
+\setlength{\marginparsep}{0pt} % width of space between body text and margin notes
+\setlength{\evensidemargin}{0.125in} % Adds 1/8 in. to binding side of all 
+% even-numbered pages when the "twoside" printing option is selected
+\setlength{\oddsidemargin}{0.125in} % Adds 1/8 in. to the left of all pages
+% when "oneside" printing is selected, and to the left of all odd-numbered
+% pages when "twoside" printing is selected
+\setlength{\textwidth}{6.375in} % assuming US letter paper (8.5 in. x 11 in.) and 
+% side margins as above
+\raggedbottom
+
+% The following statement specifies the amount of space between
+% paragraphs. Other reasonable specifications are \bigskipamount and \smallskipamount.
+\setlength{\parskip}{\medskipamount}
+
+% Peter's fix.  UW being broken.
+\setlength{\textheight}{9in}
+\setlength{\topmargin}{-0.45in}
+\setlength{\headsep}{0.25in}
+
+% The following statement controls the line spacing.  The default
+% spacing corresponds to good typographic conventions and only slight
+% changes (e.g., perhaps "1.2"), if any, should be made.
+\renewcommand{\baselinestretch}{1} % this is the default line space setting
+
+% By default, each chapter will start on a recto (right-hand side)
+% page.  We also force each section of the front pages to start on 
+% a recto page by inserting \cleardoublepage commands.
+% In many cases, this will require that the verso page be
+% blank and, while it should be counted, a page number should not be
+% printed.  The following statements ensure a page number is not
+% printed on an otherwise blank verso page.
+\let\origdoublepage\cleardoublepage
+\newcommand{\clearemptydoublepage}{%
+  \clearpage{\pagestyle{empty}\origdoublepage}}
+\let\cleardoublepage\clearemptydoublepage
Index: doc/theses/mike_brooks_MMath/style/uw-top.tex
===================================================================
--- doc/theses/mike_brooks_MMath/style/uw-top.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/style/uw-top.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,11 @@
+\newcommand{\package}[1]{\textbf{#1}} % package names in bold text
+\newcommand{\cmmd}[1]{\textbackslash\texttt{#1}} % command name in tt font 
+\newcommand{\href}[1]{#1} % does nothing, but defines the command so the print-optimized version will ignore \href tags (redefined by hyperref pkg).
+
+\usepackage{ifthen}
+\newboolean{PrintVersion}
+\setboolean{PrintVersion}{false} 
+
+%\usepackage{nomencl} % For a nomenclature (optional; available from ctan.org)
+\usepackage{amsmath,amssymb,amstext}
+%\usepackage[pdftex]{graphicx}
Index: doc/theses/mike_brooks_MMath/uw-ethesis-frontpgs.tex
===================================================================
--- doc/theses/mike_brooks_MMath/uw-ethesis-frontpgs.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/uw-ethesis-frontpgs.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,177 @@
+% T I T L E   P A G E
+% -------------------
+% Last updated October 23, 2020, by Stephen Carr, IST-Client Services
+% The title page is counted as page `i' but we need to suppress the
+% page number. Also, we don't want any headers or footers.
+\pagestyle{empty}
+\pagenumbering{roman}
+
+% The contents of the title page are specified in the "titlepage"
+% environment.
+\begin{titlepage}
+        \begin{center}
+        \vspace*{1.0cm}
+
+  % TODO: punch up the title, thinking getting interest in the department-wide posting of my presentation
+  % Modern collections for C
+        {\Huge\bf \CFA Container Library}
+
+        \vspace*{1.0cm}
+
+        by \\
+
+        \vspace*{1.0cm}
+
+        {\Large Michael Leslie Brooks} \\
+
+        \vspace*{3.0cm}
+
+        A thesis \\
+        presented to the University of Waterloo \\
+        in fulfillment of the \\
+        thesis requirement for the degree of \\
+        Master of Mathematics \\
+        in \\
+        Computer Science \\
+
+        \vspace*{2.0cm}
+
+        Waterloo, Ontario, Canada, \the\year \\
+
+        \vspace*{1.0cm}
+
+        \copyright{} Michael Leslie Brooks \the\year \\
+        \end{center}
+\end{titlepage}
+
+% The rest of the front pages should contain no headers and be numbered using
+% Roman numerals starting with `ii'.
+\pagestyle{plain}
+\setcounter{page}{2}
+
+\cleardoublepage % Ends the current page and causes all figures and tables
+% that have so far appeared in the input to be printed. In a two-sided
+% printing style, it also makes the next page a right-hand (odd-numbered)
+% page, producing a blank page if necessary.
+
+\begin{comment}
+% E X A M I N I N G   C O M M I T T E E (Required for Ph.D. theses only)
+% Remove or comment out the lines below to remove this page
+\begin{center}\textbf{Examining Committee Membership}\end{center}
+  \noindent
+The following served on the Examining Committee for this thesis.
+The decision of the Examining Committee is by majority vote.
+  \bigskip
+
+  \noindent
+\begin{tabbing}
+Internal-External Member: \=  \kill % using longest text to define tab length
+External Examiner: \>  Bruce Bruce \\
+\> Professor, Dept. of Philosophy of Zoology, University of Wallamaloo \\
+\end{tabbing}
+  \bigskip
+
+  \noindent
+\begin{tabbing}
+Internal-External Member: \=  \kill % using longest text to define tab length
+Supervisor(s): \> Ann Elk \\
+\> Professor, Dept. of Zoology, University of Waterloo \\
+\> Andrea Anaconda \\
+\> Professor Emeritus, Dept. of Zoology, University of Waterloo \\
+\end{tabbing}
+  \bigskip
+
+  \noindent
+  \begin{tabbing}
+Internal-External Member: \=  \kill % using longest text to define tab length
+Internal Member: \> Pamela Python \\
+\> Professor, Dept. of Zoology, University of Waterloo \\
+\end{tabbing}
+  \bigskip
+
+  \noindent
+\begin{tabbing}
+Internal-External Member: \=  \kill % using longest text to define tab length
+Internal-External Member: \> Meta Meta \\
+\> Professor, Dept. of Philosophy, University of Waterloo \\
+\end{tabbing}
+  \bigskip
+
+  \noindent
+\begin{tabbing}
+Internal-External Member: \=  \kill % using longest text to define tab length
+Other Member(s): \> Leeping Fang \\
+\> Professor, Dept. of Fine Art, University of Waterloo \\
+\end{tabbing}
+
+\cleardoublepage
+\end{comment}
+
+% D E C L A R A T I O N   P A G E
+% -------------------------------
+  % The following is a sample Declaration Page as provided by the GSO
+  % December 13th, 2006.  It is designed for an electronic thesis.
+ \begin{center}\textbf{Author's Declaration}\end{center}
+
+ \noindent
+I hereby declare that I am the sole author of this thesis. This is a true copy
+of the thesis, including any required final revisions, as accepted by my
+examiners.
+
+  \bigskip
+
+  \noindent
+I understand that my thesis may be made electronically available to the public.
+
+\cleardoublepage
+
+% A B S T R A C T
+% ---------------
+
+\begin{center}\textbf{Abstract}\end{center}
+
+This is the abstract.
+
+\cleardoublepage
+
+% A C K N O W L E D G E M E N T S
+% -------------------------------
+
+\begin{center}\textbf{Acknowledgements}\end{center}
+
+I would like to thank all the little people who made this thesis possible.
+\cleardoublepage
+
+\begin{comment}
+% D E D I C A T I O N
+% -------------------
+
+\begin{center}\textbf{Dedication}\end{center}
+
+This is dedicated to the one I love.
+\cleardoublepage
+\end{comment}
+
+% T A B L E   O F   C O N T E N T S
+% ---------------------------------
+\renewcommand\contentsname{Table of Contents}
+\tableofcontents
+\cleardoublepage
+\phantomsection    % allows hyperref to link to the correct page
+
+% L I S T   O F   F I G U R E S
+% -----------------------------
+\addcontentsline{toc}{chapter}{List of Figures}
+\listoffigures
+\cleardoublepage
+\phantomsection		% allows hyperref to link to the correct page
+
+% L I S T   O F   T A B L E S
+% ---------------------------
+\addcontentsline{toc}{chapter}{List of Tables}
+\listoftables
+\cleardoublepage
+\phantomsection		% allows hyperref to link to the correct page
+
+% Change page numbering back to Arabic numerals
+\pagenumbering{arabic}
Index: doc/theses/mike_brooks_MMath/uw-ethesis.bib
===================================================================
--- doc/theses/mike_brooks_MMath/uw-ethesis.bib	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/uw-ethesis.bib	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,66 @@
+% Bibliography of key references for "LaTeX for Thesis and Large Documents"
+% For use with BibTeX
+
+% --------------------------------------------------
+% Cforall
+@misc{cfa:frontpage,
+  url = {https://cforall.uwaterloo.ca/}
+}
+@article{cfa:typesystem,
+  author    = {Aaron Moss and Robert Schluntz and Peter A. Buhr},
+  title     = {{\CFA} : Adding modern programming language features to {C}},
+  journal   = {Softw. Pract. Exp.},
+  volume    = {48},
+  number    = {12},
+  pages     = {2111--2146},
+  year      = {2018},
+  url       = {https://doi.org/10.1002/spe.2624},
+  doi       = {10.1002/spe.2624},
+  timestamp = {Thu, 09 Apr 2020 17:14:14 +0200},
+  biburl    = {https://dblp.org/rec/journals/spe/MossSB18.bib},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+
+
+% --------------------------------------------------
+% Array prior work
+
+@inproceedings{arr:futhark:tytheory,
+    author = {Henriksen, Troels and Elsman, Martin},
+    title = {Towards Size-Dependent Types for Array Programming},
+    year = {2021},
+    isbn = {9781450384667},
+    publisher = {Association for Computing Machinery},
+    address = {New York, NY, USA},
+    url = {https://doi.org/10.1145/3460944.3464310},
+    doi = {10.1145/3460944.3464310},
+    abstract = {We present a type system for expressing size constraints on array types in an ML-style type system. The goal is to detect shape mismatches at compile-time, while being simpler than full dependent types. The main restrictions is that the only terms that can occur in types are array sizes, and syntactically they must be variables or constants. For those programs where this is not sufficient, we support a form of existential types, with the type system automatically managing the requisite book-keeping. We formalise a large subset of the type system in a small core language, which we prove sound. We also present an integration of the type system in the high-performance parallel functional language Futhark, and show on a collection of 44 representative programs that the restrictions in the type system are not too problematic in practice.},
+    booktitle = {Proceedings of the 7th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming},
+    pages = {1–14},
+    numpages = {14},
+    keywords = {functional programming, parallel programming, type systems},
+    location = {Virtual, Canada},
+    series = {ARRAY 2021}
+}
+
+@article{arr:dex:long,
+  author    = {Adam Paszke and
+               Daniel D. Johnson and
+               David Duvenaud and
+               Dimitrios Vytiniotis and
+               Alexey Radul and
+               Matthew J. Johnson and
+               Jonathan Ragan{-}Kelley and
+               Dougal Maclaurin},
+  title     = {Getting to the Point. Index Sets and Parallelism-Preserving Autodiff
+               for Pointful Array Programming},
+  journal   = {CoRR},
+  volume    = {abs/2104.05372},
+  year      = {2021},
+  url       = {https://arxiv.org/abs/2104.05372},
+  eprinttype = {arXiv},
+  eprint    = {2104.05372},
+  timestamp = {Mon, 25 Oct 2021 07:55:47 +0200},
+  biburl    = {https://dblp.org/rec/journals/corr/abs-2104-05372.bib},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
+}
Index: doc/theses/mike_brooks_MMath/uw-ethesis.tex
===================================================================
--- doc/theses/mike_brooks_MMath/uw-ethesis.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mike_brooks_MMath/uw-ethesis.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,260 @@
+%======================================================================
+% University of Waterloo Thesis Template for LaTeX
+% Last Updated November, 2020
+% by Stephen Carr, IST Client Services,
+% University of Waterloo, 200 University Ave. W., Waterloo, Ontario, Canada
+% FOR ASSISTANCE, please send mail to request@uwaterloo.ca
+
+% DISCLAIMER
+% To the best of our knowledge, this template satisfies the current uWaterloo thesis requirements.
+% However, it is your responsibility to assure that you have met all requirements of the University and your particular department.
+
+% Many thanks for the feedback from many graduates who assisted the development of this template.
+% Also note that there are explanatory comments and tips throughout this template.
+%======================================================================
+% Some important notes on using this template and making it your own...
+
+% The University of Waterloo has required electronic thesis submission since October 2006. 
+% See the uWaterloo thesis regulations at
+% https://uwaterloo.ca/graduate-studies/thesis.
+% This thesis template is geared towards generating a PDF version optimized for viewing on an electronic display, including hyperlinks within the PDF.
+
+% DON'T FORGET TO ADD YOUR OWN NAME AND TITLE in the "hyperref" package configuration below. 
+% THIS INFORMATION GETS EMBEDDED IN THE PDF FINAL PDF DOCUMENT.
+% You can view the information if you view properties of the PDF document.
+
+% Many faculties/departments also require one or more printed copies.
+% This template attempts to satisfy both types of output.
+% See additional notes below.
+% It is based on the standard "book" document class which provides all necessary sectioning structures and allows multi-part theses.
+
+% If you are using this template in Overleaf (cloud-based collaboration service), then it is automatically processed and previewed for you as you edit.
+
+% For people who prefer to install their own LaTeX distributions on their own computers, and process the source files manually, the following notes provide the sequence of tasks:
+
+% E.g. to process a thesis called "mythesis.tex" based on this template, run:
+
+% pdflatex mythesis	-- first pass of the pdflatex processor
+% bibtex mythesis	-- generates bibliography from .bib data file(s)
+% makeindex         -- should be run only if an index is used 
+% pdflatex mythesis	-- fixes numbering in cross-references, bibliographic references, glossaries, index, etc.
+% pdflatex mythesis	-- it takes a couple of passes to completely process all cross-references
+
+% If you use the recommended LaTeX editor, Texmaker, you would open the mythesis.tex file, then click the PDFLaTeX button. Then run BibTeX (under the Tools menu).
+% Then click the PDFLaTeX button two more times. 
+% If you have an index as well, you'll need to run MakeIndex from the Tools menu as well, before running pdflatex
+% the last two times.
+
+% N.B. The "pdftex" program allows graphics in the following formats to be included with the "\includegraphics" command: PNG, PDF, JPEG, TIFF
+% Tip: Generate your figures and photos in the size you want them to appear in your thesis, rather than scaling them with \includegraphics options.
+% Tip: Any drawings you do should be in scalable vector graphic formats: SVG, PNG, WMF, EPS and then converted to PNG or PDF, so they are scalable in the final PDF as well.
+% Tip: Photographs should be cropped and compressed so as not to be too large.
+
+% To create a PDF output that is optimized for double-sided printing:
+% 1) comment-out the \documentclass statement in the preamble below, and un-comment the second \documentclass line.
+% 2) change the value assigned below to the boolean variable "PrintVersion" from " false" to "true".
+
+%======================================================================
+%   D O C U M E N T   P R E A M B L E
+% Specify the document class, default style attributes, and page dimensions, etc.
+% For hyperlinked PDF, suitable for viewing on a computer, use this:
+\documentclass[letterpaper,12pt,titlepage,oneside,final]{book}
+\usepackage[T1]{fontenc}	% Latin-1 => 256-character fonts, => | not dash, <> not Spanish question marks
+
+% For PDF, suitable for double-sided printing, change the PrintVersion variable below to "true" and use this \documentclass line instead of the one above:
+%\documentclass[letterpaper,12pt,titlepage,openright,twoside,final]{book}
+
+% Some LaTeX commands I define for my own nomenclature.
+% If you have to, it's easier to make changes to nomenclature once here than in a million places throughout your thesis!
+\newcommand{\package}[1]{\textbf{#1}} % package names in bold text
+\newcommand{\cmmd}[1]{\textbackslash\texttt{#1}} % command name in tt font
+\newcommand{\href}[1]{#1} % does nothing, but defines the command so the print-optimized version will ignore \href tags (redefined by hyperref pkg).
+%\newcommand{\texorpdfstring}[2]{#1} % does nothing, but defines the command
+% Anything defined here may be redefined by packages added below...
+
+% This package allows if-then-else control structures.
+\usepackage{ifthen}
+\newboolean{PrintVersion}
+\setboolean{PrintVersion}{false}
+% CHANGE THIS VALUE TO "true" as necessary, to improve printed results for hard copies by overriding some options of the hyperref package, called below.
+
+%\usepackage{nomencl} % For a nomenclature (optional; available from ctan.org)
+\usepackage{amsmath,amssymb,amstext} % Lots of math symbols and environments
+\usepackage{xcolor}
+\usepackage{epic,eepic}
+\usepackage{graphicx}
+\graphicspath{{pictures/}} % picture directory
+\usepackage{comment} % Removes large sections of the document.
+\usepackage{tabularx}
+\usepackage{subfigure}
+
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+
+% Hyperlinks make it very easy to navigate an electronic document.
+% In addition, this is where you should specify the thesis title and author as they appear in the properties of the PDF document.
+% Use the "hyperref" package
+% N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE
+\usepackage[pagebackref=true]{hyperref} % with basic options
+%\usepackage[pdftex,pagebackref=true]{hyperref}
+% N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing.
+\hypersetup{
+    plainpages=false,       % needed if Roman numbers in frontpages
+    unicode=false,          % non-Latin characters in Acrobat's bookmarks
+    pdftoolbar=true,        % show Acrobat's toolbar?
+    pdfmenubar=true,        % show Acrobat's menu?
+    pdffitwindow=false,     % window fit to page when opened
+    pdfstartview={FitH},    % fits the width of the page to the window
+    pdftitle={Cforall Container Library}, % title: matches the title page
+    pdfauthor={Michael Leslie Brooks},    % author: matches the title page
+    pdfsubject={Cforall},  % subject
+    pdfkeywords={Cforall} {containers} {C language}, % optional list of keywords
+    pdfnewwindow=true,      % links in new window
+    colorlinks=true,        % false: boxed links; true: colored links
+    linkcolor=blue,         % color of internal links
+    citecolor=blue,        % color of links to bibliography
+    filecolor=magenta,      % color of file links
+    urlcolor=blue           % color of external links
+}
+\ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
+\hypersetup{	% override some previously defined hyperref options
+    citecolor=black,%
+    filecolor=black,%
+    linkcolor=black,%
+    urlcolor=black
+}}{} % end of ifthenelse (no else)
+
+%\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package
+% If glossaries-extra is not in your LaTeX distribution, get it from CTAN (http://ctan.org/pkg/glossaries-extra),
+% although it's supposed to be in both the TeX Live and MikTeX distributions. There are also documentation and
+% installation instructions there.
+
+% Setting up the page margins...
+\setlength{\textheight}{9in}
+\setlength{\topmargin}{-0.45in}
+\setlength{\headsep}{0.25in}
+% uWaterloo thesis requirements specify a minimum of 1 inch (72pt) margin at the
+% top, bottom, and outside page edges and a 1.125 in. (81pt) gutter margin (on binding side).
+% While this is not an issue for electronic viewing, a PDF may be printed, and so we have the same page layout for both printed and electronic versions, we leave the gutter margin in.
+% Set margins to minimum permitted by uWaterloo thesis regulations:
+\setlength{\marginparwidth}{0pt} % width of margin notes
+% N.B. If margin notes are used, you must adjust \textwidth, \marginparwidth
+% and \marginparsep so that the space left between the margin notes and page
+% edge is less than 15 mm (0.6 in.)
+\setlength{\marginparsep}{0pt} % width of space between body text and margin notes
+\setlength{\evensidemargin}{0.125in} % Adds 1/8 in. to binding side of all
+% even-numbered pages when the "twoside" printing option is selected
+\setlength{\oddsidemargin}{0.125in} % Adds 1/8 in. to the left of all pages when "oneside" printing is selected, and to the left of all odd-numbered pages when "twoside" printing is selected
+\setlength{\textwidth}{6.375in} % assuming US letter paper (8.5 in. x 11 in.) and side margins as above
+\raggedbottom
+
+% The following statement specifies the amount of space between paragraphs. Other reasonable specifications are \bigskipamount and \smallskipamount.
+\setlength{\parskip}{\medskipamount}
+
+% The following statement controls the line spacing.
+% The default spacing corresponds to good typographic conventions and only slight changes (e.g., perhaps "1.2"), if any, should be made.
+\renewcommand{\baselinestretch}{1} % this is the default line space setting
+
+% By default, each chapter will start on a recto (right-hand side) page.
+% We also force each section of the front pages to start on a recto page by inserting \cleardoublepage commands.
+% In many cases, this will require that the verso (left-hand) page be blank, and while it should be counted, a page number should not be printed.
+% The following statements ensure a page number is not printed on an otherwise blank verso page.
+\let\origdoublepage\cleardoublepage
+\newcommand{\clearemptydoublepage}{%
+  \clearpage{\pagestyle{empty}\origdoublepage}}
+\let\cleardoublepage\clearemptydoublepage
+
+% Define Glossary terms (This is properly done here, in the preamble and
+% could also be \input{} from a separate file...)
+%\input{glossaries}
+%\makeglossaries
+
+% cfa macros used in the document
+\input{common}
+%\usepackageinput{common}
+\CFAStyle						% CFA code-style
+\lstset{language=CFA}					% default language
+\lstset{basicstyle=\linespread{0.9}\tt}			% CFA typewriter font
+\lstset{inputpath={programs}}
+\newcommand{\PAB}[1]{{\color{red}PAB: #1}}
+
+%======================================================================
+%   L O G I C A L    D O C U M E N T
+% The logical document contains the main content of your thesis.
+% Being a large document, it is a good idea to divide your thesis into several files, each one containing one chapter or other significant chunk of content, so you can easily shuffle things around later if desired.
+%======================================================================
+\begin{document}
+
+%----------------------------------------------------------------------
+% FRONT MATERIAL
+% title page,declaration, borrowers' page, abstract, acknowledgements,
+% dedication, table of contents, list of tables, list of figures, nomenclature, etc.
+%----------------------------------------------------------------------
+\input{uw-ethesis-frontpgs}
+
+%----------------------------------------------------------------------
+% MAIN BODY
+% We suggest using a separate file for each chapter of your thesis.
+% Start each chapter file with the \chapter command.
+% Only use \documentclass or \begin{document} and \end{document} commands in this master document.
+% Tip: Putting each sentence on a new line is a way to simplify later editing.
+%----------------------------------------------------------------------
+\begin{sloppypar}
+
+\input{intro}
+\input{background}
+\input{array}
+\input{string}
+\input{conclusion}
+
+\end{sloppypar}
+
+%----------------------------------------------------------------------
+% END MATERIAL
+% Bibliography, Appendices, Index, etc.
+%----------------------------------------------------------------------
+
+% Bibliography
+
+% The following statement selects the style to use for references.
+% It controls the sort order of the entries in the bibliography and also the formatting for the in-text labels.
+\bibliographystyle{plain}
+% This specifies the location of the file containing the bibliographic information.
+% It assumes you're using BibTeX to manage your references (if not, why not?).
+\cleardoublepage % This is needed if the "book" document class is used, to place the anchor in the correct page, because the bibliography will start on its own page.
+% Use \clearpage instead if the document class uses the "oneside" argument
+\phantomsection  % With hyperref package, enables hyperlinking from the table of contents to bibliography
+% The following statement causes the title "References" to be used for the bibliography section:
+\renewcommand*{\bibname}{References}
+
+% Add the References to the Table of Contents
+\addcontentsline{toc}{chapter}{\textbf{References}}
+
+\bibliography{pl,uw-ethesis}
+% Tip: You can create multiple .bib files to organize your references.
+% Just list them all in the \bibliography command, separated by commas (no spaces).
+
+% The following statement causes the specified references to be added to the bibliography even if they were not cited in the text.
+% The asterisk is a wildcard that causes all entries in the bibliographic database to be included (optional).
+% \nocite{*}
+%----------------------------------------------------------------------
+
+% Appendices
+
+% The \appendix statement indicates the beginning of the appendices.
+\appendix
+% Add an un-numbered title page before the appendices and a line in the Table of Contents
+% \chapter*{APPENDICES}
+% \addcontentsline{toc}{chapter}{APPENDICES}
+% Appendices are just more chapters, with different labeling (letters instead of numbers).
+% \input{appendix-matlab_plots.tex}
+
+% GLOSSARIES (Lists of definitions, abbreviations, symbols, etc.
+% provided by the glossaries-extra package)
+% -----------------------------
+%\printglossaries
+%\cleardoublepage
+%\phantomsection		% allows hyperref to link to the correct page
+
+%----------------------------------------------------------------------
+\end{document} % end of logical document
Index: doc/theses/mubeen_zulfiqar_MMath/Makefile
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/Makefile	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/Makefile	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -1,44 +1,49 @@
-# directory for latex clutter files
+# Configuration variables
+
 Build = build
 Figures = figures
 Pictures = pictures
+
+LaTMac = ../../LaTeXmacros
+BibRep = ../../bibliography
+
 TeXSRC = ${wildcard *.tex}
 FigSRC = ${notdir ${wildcard ${Figures}/*.fig}}
 PicSRC = ${notdir ${wildcard ${Pictures}/*.fig}}
-BIBSRC = ${wildcard *.bib}
-TeXLIB = .:../../LaTeXmacros:${Build}: # common latex macros
-BibLIB = .:../../bibliography # common citation repository
+BibSRC = ${wildcard *.bib}
+
+TeXLIB = .:${LaTMac}:${Build}:
+BibLIB = .:${BibRep}:
 
 MAKEFLAGS = --no-print-directory # --silent
 VPATH = ${Build} ${Figures} ${Pictures} # extra search path for file names used in document
 
-### Special Rules:
+DOCUMENT = uw-ethesis.pdf
+BASE = ${basename ${DOCUMENT}}			# remove suffix
 
-.PHONY: all clean
-.PRECIOUS: %.dvi %.ps # do not delete intermediate files
-
-### Commands:
+# Commands
 
 LaTeX = TEXINPUTS=${TeXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${Build}
-BibTeX = BIBINPUTS=${BibLIB} bibtex
+BibTeX = BIBINPUTS=${BibLIB} && export BIBINPUTS && bibtex
 #Glossary = INDEXSTYLE=${Build} makeglossaries-lite
 
-### Rules and Recipes:
+# Rules and Recipes
 
-DOC = uw-ethesis.pdf
-BASE = ${DOC:%.pdf=%} # remove suffix
+.PHONY : all clean				# not file names
+.PRECIOUS: %.dvi %.ps # do not delete intermediate files
+.ONESHELL :
 
-all: ${DOC}
+all : ${DOCUMENT}
 
-clean:
-	@rm -frv ${DOC} ${Build}
+clean :
+	@rm -frv ${DOCUMENT} ${Build}
 
-# File Dependencies #
+# File Dependencies
 
-${Build}/%.dvi : ${TeXSRC} ${FigSRC:%.fig=%.tex} ${PicSRC:%.fig=%.pstex} ${BIBSRC} Makefile | ${Build}
+%.dvi : ${TeXSRC} ${FigSRC:%.fig=%.tex} ${PicSRC:%.fig=%.pstex} ${BibSRC} ${BibRep}/pl.bib ${LaTMac}/common.tex Makefile | ${Build}
 	${LaTeX} ${BASE}
 	${BibTeX} ${Build}/${BASE}
 	${LaTeX} ${BASE}
-	# if nedded, run latex again to get citations
+	# if needed, run latex again to get citations
 	if fgrep -s "LaTeX Warning: Citation" ${basename $@}.log ; then ${LaTeX} ${BASE} ; fi
 #	${Glossary} ${Build}/${BASE}
@@ -46,5 +51,5 @@
 
 ${Build}:
-	mkdir $@
+	mkdir -p $@
 
 %.pdf : ${Build}/%.ps | ${Build}
Index: doc/theses/mubeen_zulfiqar_MMath/allocator.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/allocator.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/allocator.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -1,150 +1,330 @@
 \chapter{Allocator}
 
-\section{uHeap}
-uHeap is a lightweight memory allocator. The objective behind uHeap is to design a minimal concurrent memory allocator that has new features and also fulfills GNU C Library requirements (FIX ME: cite requirements).
-
-The objective of uHeap's new design was to fulfill following requirements:
-\begin{itemize}
-\item It should be concurrent and thread-safe for multi-threaded programs.
-\item It should avoid global locks, on resources shared across all threads, as much as possible.
-\item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
-\item It should be a lightweight memory allocator.
-\end{itemize}
+This chapter presents a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code), called llheap (low-latency heap), for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
+The new allocator fulfills the GNU C Library allocator API~\cite{GNUallocAPI}.
+
+
+\section{llheap}
+
+The primary design objective for llheap is low latency across all allocator calls, independent of application access-patterns and/or number of threads, \ie very seldom does the allocator delay during an allocator call.
+(Large allocations requiring initialization, \eg zero fill, and/or copying are not covered by the low-latency objective.)
+A direct consequence of this objective is very simple or no storage coalescing;
+hence, llheap's design is willing to use more storage to lower latency.
+This objective is apropos because systems research and industrial applications are striving for low latency and computers have huge amounts of RAM.
+Finally, llheap's performance should be comparable with the current best allocators (see performance comparison in \VRef[Chapter]{c:Performance}).
+
+% The objective of llheap's new design was to fulfill following requirements:
+% \begin{itemize}
+% \item It should be concurrent and thread-safe for multi-threaded programs.
+% \item It should avoid global locks, on resources shared across all threads, as much as possible.
+% \item It's performance (FIX ME: cite performance benchmarks) should be comparable to the commonly used allocators (FIX ME: cite common allocators).
+% \item It should be a lightweight memory allocator.
+% \end{itemize}
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
-\section{Design choices for uHeap}\label{sec:allocatorSec}
-uHeap's design was reviewed and changed to fulfill new requirements (FIX ME: cite allocator philosophy). For this purpose, following two designs of uHeapLmm were proposed:
+\section{Design Choices}
+
+llheap's design was reviewed and changed multiple times throughout the thesis.
+Some of the rejected designs are discussed because they show the path to the final design (see discussion in \VRef{s:MultipleHeaps}).
+Note, a few simple tests of each design choice were compared with the current best allocators to determine its viability.
+
+
+\subsection{Allocation Fastpath}
+\label{s:AllocationFastpath}
+
+These designs look at the allocation/free \newterm{fastpath}, \ie when an allocation can immediately return free storage or returned storage is not coalesced.
+
+\paragraph{T:1 model}
+\VRef[Figure]{f:T1SharedBuckets} shows one heap accessed by multiple kernel threads (KTs) using a bucket array, where smaller bucket sizes are N-shared across KTs.
+This design leverages the fact that 95\% of allocation requests are less than 1024 bytes and there are only 3--5 different request sizes.
+When KTs $\le$ N, the common bucket sizes are uncontended;
+when KTs $>$ N, the free buckets are contended and latency increases significantly.
+In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a bucket is shared.
+Therefore, while threads are contending for a small number of bucket sizes, the buckets are distributed among them to reduce contention, which lowers latency;
+however, picking N is workload specific.
+
+\begin{figure}
+\centering
+\input{AllocDS1}
+\caption{T:1 with Shared Buckets}
+\label{f:T1SharedBuckets}
+\end{figure}
+
+Problems:
+\begin{itemize}
+\item
+Need to know when a KT is created/destroyed to assign/unassign a shared bucket-number from the memory allocator.
+\item
+When no thread is assigned a bucket number, its free storage is unavailable.
+\item
+All KTs contend for the global-pool lock for initial allocations, before free-lists get populated.
+\end{itemize}
+Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
+
+\paragraph{T:H model}
+\VRef[Figure]{f:THSharedHeaps} shows a fixed number of heaps (N), each a local free pool, where the heaps are sharded across the KTs.
+A KT can point directly to its assigned heap or indirectly through the corresponding heap bucket.
+When KTs $\le$ N, the heaps are uncontended;
+when KTs $>$ N, the heaps are contended.
+In all cases, a KT must acquire/release a lock, contended or uncontended, along the fast allocation path because a heap is shared.
+By adjusting N upwards, this approach reduces contention but increases storage (time versus space);
+however, picking N is workload specific.
+
+\begin{figure}
 \centering
 \input{AllocDS2}
-\end{cquote}
-Problems: need to know when a kernel thread (KT) is created and destroyed to know when to assign a shared bucket-number.
-When no thread is assigned a bucket number, its free storage is unavailable. All KTs will be contended for one lock on sbrk for their initial allocations (before free-lists gets populated).
-
-\paragraph{Design 2: Decentralized N Heaps}
-Fixed number of heaps: shard the heap into N heaps each with a bump-area allocated from the @sbrk@ area.
-Kernel threads (KT) are assigned to the N heaps.
-When KTs $\le$ N, the heaps are uncontented.
-When KTs $>$ N, the heaps are contented.
-By adjusting N, this approach reduces storage at the cost of speed due to contention.
-In all cases, a thread acquires/releases a lock, contented or uncontented.
-\begin{cquote}
-\centering
-\input{AllocDS1}
-\end{cquote}
-Problems: need to know when a KT is created and destroyed to know when to assign/un-assign a heap to the KT.
-
-\paragraph{Design 3: Decentralized Per-thread Heaps}
-Design 3 is similar to design 2 but instead of having an M:N model, it uses a 1:1 model. So, instead of having N heaos and sharing them among M KTs, Design 3 has one heap for each KT.
-Dynamic number of heaps: create a thread-local heap for each kernel thread (KT) with a bump-area allocated from the @sbrk@ area.
-Each KT will have its own exclusive thread-local heap. Heap will be uncontended between KTs regardless how many KTs have been created.
-Operations on @sbrk@ area will still be protected by locks.
-%\begin{cquote}
-%\centering
-%\input{AllocDS3} FIXME add figs
-%\end{cquote}
-Problems: We cannot destroy the heap when a KT exits because our dynamic objects have ownership and they are returned to the heap that created them when the program frees a dynamic object. All dynamic objects point back to their owner heap. If a thread A creates an object O, passes it to another thread B, and A itself exits. When B will free object O, O should return to A's heap so A's heap should be preserved for the lifetime of the whole program as their might be objects in-use of other threads that were allocated by A. Also, we need to know when a KT is created and destroyed to know when to create/destroy a heap for the KT.
-
-\paragraph{Design 4: Decentralized Per-CPU Heaps}
-Design 4 is similar to Design 3 but instead of having a heap for each thread, it creates a heap for each CPU.
-Fixed number of heaps for a machine: create a heap for each CPU with a bump-area allocated from the @sbrk@ area.
-Each CPU will have its own CPU-local heap. When the program does a dynamic memory operation, it will be entertained by the heap of the CPU where the process is currently running on.
-Each CPU will have its own exclusive heap. Just like Design 3(FIXME cite), heap will be uncontended between KTs regardless how many KTs have been created.
-Operations on @sbrk@ area will still be protected by locks.
-To deal with preemtion during a dynamic memory operation, librseq(FIXME cite) will be used to make sure that the whole dynamic memory operation completes on one CPU. librseq's restartable sequences can make it possible to re-run a critical section and undo the current writes if a preemption happened during the critical section's execution.
-%\begin{cquote}
-%\centering
-%\input{AllocDS4} FIXME add figs
-%\end{cquote}
-
-Problems: This approach was slower than the per-thread model. Also, librseq does not provide such restartable sequences to detect preemtions in user-level threading system which is important to us as CFA(FIXME cite) has its own threading system that we want to support.
-
-Out of the four designs, Design 3 was chosen because of the following reasons.
-\begin{itemize}
-\item
-Decentralized designes are better in general as compared to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designes shard the whole heap which has all the buckets with the addition of sharding sbrk area. So Design 1 was eliminated.
-\item
-Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenerio.
-\item
-Design 4 was eliminated because it was slower than Design 3 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achive user-threading safety which has some cost to it. Desing 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
-\end{itemize}
-
-
-\subsection{Advantages of distributed design}
-
-The distributed design of uHeap is concurrent to work in multi-threaded applications.
-
-Some key benefits of the distributed design of uHeap are as follows:
-
-\begin{itemize}
-\item
-The bump allocation is concurrent as memory taken from sbrk is sharded across all heaps as bump allocation reserve. The call to sbrk will be protected using locks but bump allocation (on memory taken from sbrk) will not be contended once the sbrk call has returned.
-\item
-Low or almost no contention on heap resources.
-\item
-It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
-\item
-Distributed design avoids unnecassry locks on resources shared across all KTs.
-\end{itemize}
+\caption{T:H with Shared Heaps}
+\label{f:THSharedHeaps}
+\end{figure}
+
+Problems:
+\begin{itemize}
+\item
+Need to know when a KT is created/destroyed to assign/unassign a heap from the memory allocator.
+\item
+When no thread is assigned to a heap, its free storage is unavailable.
+\item
+Ownership issues arise (see \VRef{s:Ownership}).
+\item
+All KTs contend for the local/global-pool lock for initial allocations, before free-lists get populated.
+\end{itemize}
+Tests showed having locks along the allocation fast-path produced a significant increase in allocation costs and any contention among KTs produces a significant spike in latency.
+
+\paragraph{T:H model, H = number of CPUs}
+This design is the T:H model but H is set to the number of CPUs on the computer or the number restricted to an application, \eg via @taskset@.
+(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per CPU.)
+Hence, each CPU logically has its own private heap and local pool.
+A memory operation is serviced from the heap associated with the CPU executing the operation.
+This approach removes fastpath locking and contention, regardless of the number of KTs mapped across the CPUs, because only one KT is running on each CPU at a time (modulo operations on the global pool and ownership).
+This approach is essentially an M:N approach where M is the number of KTs and N is the number of CPUs.
+
+Problems:
+\begin{itemize}
+\item
+Need to know when a CPU is added/removed from the @taskset@.
+\item
+Need a fast way to determine the CPU a KT is executing on to access the appropriate heap.
+\item
+Need to prevent preemption during a dynamic memory operation because of the \newterm{serially-reusable problem}.
+\begin{quote}
+A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}
+\end{quote}
+If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
+Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
+Essentially, the serially-reusable problem is a race condition on an unprotected critical section, where the operating system is providing the second thread via the signal handler.
+
+Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical section after undoing its writes, if the critical section is preempted.
+\end{itemize}
+Tests showed that @librseq@ can determine the particular CPU quickly but setting up the restartable critical-section along the allocation fast-path produced a significant increase in allocation costs.
+Also, the number of undoable writes in @librseq@ is limited and restartable sequences cannot deal with user-level thread (UT) migration across KTs.
+For example, UT$_1$ is executing a memory operation by KT$_1$ on CPU$_1$ and a time-slice preemption occurs.
+The signal handler context switches UT$_1$ onto the user-level ready-queue and starts running UT$_2$ on KT$_1$, which immediately calls a memory operation.
+Since KT$_1$ is still executing on CPU$_1$, @librseq@ takes no action because it assumes KT$_1$ is still executing the same critical section.
+Then UT$_1$ is scheduled onto KT$_2$ by the user-level scheduler, and its memory operation continues in parallel with UT$_2$ using references into the heap associated with CPU$_1$, which corrupts CPU$_1$'s heap.
+If @librseq@ had an @rseq_abort@ that:
+\begin{enumerate}
+\item
+marks the current restartable critical-section as cancelled, so it restarts when attempting to commit, and
+\item
+does nothing if there is no restartable critical-section in progress,
+\end{enumerate}
+then @rseq_abort@ could be called on the back side of a user-level context-switch.
+A feature similar to this idea might exist for hardware transactional-memory.
+A significant effort was made to make this approach work but its complexity, lack of robustness, and performance costs resulted in its rejection.
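As a rough illustration of the cost being discussed (not llheap's code; all names are hypothetical), the T:H=CPU fastpath without restartable sequences can be sketched as follows. Because a KT can be preempted and migrated between determining the CPU and updating the heap, each per-CPU heap still needs a lock, which is exactly the fastpath cost rseq is meant to remove:

```c
// Hypothetical sketch of a per-CPU heap fastpath WITHOUT restartable
// sequences.  A per-heap lock stands in for the rseq protection that
// would otherwise guard against preemption/migration mid-operation.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stddef.h>

#define NCPUS 64
#define RESERVE 4096

typedef struct {
	pthread_mutex_t lock;               // stand-in for rseq protection
	size_t next;                        // bump-pointer offset
	char area[RESERVE];                 // per-CPU bump-allocation reserve
} Heap;

static Heap heaps[NCPUS] =              // GCC range-designator initialization
	{ [0 ... NCPUS - 1] = { .lock = PTHREAD_MUTEX_INITIALIZER } };

void * cpuAlloc( size_t size ) {
	int cpu = sched_getcpu();           // fast CPU-id lookup
	Heap * h = &heaps[( cpu < 0 ? 0 : cpu ) % NCPUS];
	void * p = NULL;
	pthread_mutex_lock( &h->lock );     // fastpath lock => latency under contention
	if ( h->next + size <= RESERVE ) { p = h->area + h->next; h->next += size; }
	pthread_mutex_unlock( &h->lock );
	return p;
}
```

The lock makes the operation correct under migration, but reintroduces the fastpath atomic action that the design exercise below concludes is the dominant slowdown.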
+
+\paragraph{1:1 model}
+This design is the T:H model with T = H, where there is one thread-local heap for each KT.
+(See \VRef[Figure]{f:THSharedHeaps} but with a heap bucket per KT and no bucket or local-pool lock.)
+Hence, immediately after a KT starts, its heap is created and just before a KT terminates, its heap is (logically) deleted.
+Heaps are uncontended for a KT's memory operations on its heap (modulo operations on the global pool and ownership).
+
+Problems:
+\begin{itemize}
+\item
+Need to know when a KT starts/terminates to create/delete its heap.
+
+\noindent
+It is possible to leverage constructors/destructors for thread-local objects to get a general handle on when a KT starts/terminates.
+\item
+There is a classic \newterm{memory-reclamation} problem for ownership because storage passed to another thread can be returned to a terminated heap.
+
+\noindent
+The classic solution only deletes a heap after all referents are returned, which is complex.
+The cheap alternative is for heaps to persist for program duration to handle outstanding referent frees.
+If old referents return storage to a terminated heap, it is handled in the same way as an active heap.
+To prevent heap blowup, terminated heaps can be reused by new KTs, where a reused heap may be populated with free storage from a prior KT (external fragmentation).
+In most cases, heap blowup is not a problem because programs have a small allocation set-size, so the free storage from a prior KT is apropos for a new KT.
+\item
+There can be significant external fragmentation as the number of KTs increases.
+
+\noindent
+In many concurrent applications, good performance is achieved with the number of KTs proportional to the number of CPUs.
+Since the number of CPUs is relatively small, usually fewer than $\approx$1024, and a heap is relatively small, $\approx$10K bytes (not including any associated freed storage), the worst-case external fragmentation is still small compared to the RAM available on large servers with many CPUs.
+\item
+There is the same serially-reusable problem with UTs migrating across KTs.
+\end{itemize}
+Tests showed this design produced the closest performance match with the best current allocators, and code inspection showed most of these allocators use different variations of this approach.
+
+
+\vspace{5pt}
+\noindent
+The conclusion from this design exercise is: any atomic fence, atomic instruction (lock free), or lock along the allocation fastpath produces significant slowdown.
+For the T:1 and T:H models, locking must exist along the allocation fastpath because the buckets or heaps may be shared by multiple threads, even when KTs $\le$ N.
+For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
+However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
+More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
+That leaves the 1:1 model, with no atomic actions along the fastpath and no special operating-system support required.
+The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
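The chosen 1:1 heap assignment and reuse can be sketched in C (hypothetical names, not llheap's code): each KT acquires an exclusive heap at start, and at termination the heap is pushed onto a global free stack rather than deleted, so outstanding ownership frees remain valid and hot heaps are reused first:

```c
// Hypothetical sketch of 1:1 heap acquisition/recycling.
#include <pthread.h>
#include <stdlib.h>

typedef struct Heap {
	struct Heap * next;                 // intrusive link for the free stack
	// ... free buckets, bump reserve, away stacks ...
} Heap;

static Heap * freeHeaps = NULL;         // stack of heaps from terminated KTs
static pthread_mutex_t heapLock = PTHREAD_MUTEX_INITIALIZER;
static __thread Heap * myHeap = NULL;   // uncontended fastpath access

void heapRelease( void * vh ) {         // KT termination: recycle, never delete
	Heap * h = vh;
	pthread_mutex_lock( &heapLock );
	h->next = freeHeaps;  freeHeaps = h;
	pthread_mutex_unlock( &heapLock );
	myHeap = NULL;
}

Heap * heapAcquire( void ) {            // KT start: pop a free heap or create one
	if ( myHeap != NULL ) return myHeap;
	pthread_mutex_lock( &heapLock );
	Heap * h = freeHeaps;
	if ( h != NULL ) freeHeaps = h->next;
	else h = calloc( 1, sizeof( Heap ) );
	pthread_mutex_unlock( &heapLock );
	return myHeap = h;
}
```

In practice, @heapRelease@ would be registered as a @pthread_key_create@ destructor (or a C++ @thread_local@ destructor) so it runs automatically at KT termination, matching the constructor/destructor observation above.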
+
+
+% \begin{itemize}
+% \item
+% A decentralized design is better to centralized design because their concurrency is better across all bucket-sizes as design 1 shards a few buckets of selected sizes while other designs shards all the buckets. Decentralized designs shard the whole heap which has all the buckets with the addition of sharding @sbrk@ area. So Design 1 was eliminated.
+% \item
+% Design 2 was eliminated because it has a possibility of contention in-case of KT > N while Design 3 and 4 have no contention in any scenario.
+% \item
+% Design 3 was eliminated because it was slower than Design 4 and it provided no way to achieve user-threading safety using librseq. We had to use CFA interruption handling to achieve user-threading safety which has some cost to it.
+% that  because of 4 was already slower than Design 3, adding cost of interruption handling on top of that would have made it even slower.
+% \end{itemize}
+% Of the four designs for a low-latency memory allocator, the 1:1 model was chosen for the following reasons:
+
+% \subsection{Advantages of distributed design}
+% 
+% The distributed design of llheap is concurrent to work in multi-threaded applications.
+% Some key benefits of the distributed design of llheap are as follows:
+% \begin{itemize}
+% \item
+% The bump allocation is concurrent as memory taken from @sbrk@ is sharded across all heaps as bump allocation reserve. The call to @sbrk@ will be protected using locks but bump allocation (on memory taken from @sbrk@) will not be contended once the @sbrk@ call has returned.
+% \item
+% Low or almost no contention on heap resources.
+% \item
+% It is possible to use sharing and stealing techniques to share/find unused storage, when a free list is unused or empty.
+% \item
+% Distributed design avoids unnecessary locks on resources shared across all KTs.
+% \end{itemize}
+
+\subsection{Allocation Latency}
+
+A primary goal of llheap is low latency.
+Two forms of latency are internal and external.
+Internal latency is the time to perform an allocation, while external latency is the time to obtain/return storage from/to the operating system.
+Ideally latency is $O(1)$ with a small constant.
+
+To obtain $O(1)$ internal latency means no searching on the allocation fastpath, which largely prohibits coalescing and leads to external fragmentation.
+The mitigating factor is that most programs have well-behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
+
+To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and risks large external fragmentation.
+Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
+The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VPageref{p:malloc_expansion}).
+Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
+
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
-\section{uHeap Structure}
-
-As described in (FIXME cite 2.4) uHeap uses following features of multi-threaded memory allocators.
-\begin{itemize}
-\item
-uHeap has multiple heaps without a global heap and uses 1:1 model. (FIXME cite 2.5 1:1 model)
-\item
-uHeap uses object ownership. (FIXME cite 2.5.2)
-\item
-uHeap does not use object containers (FIXME cite 2.6) or any coalescing technique. Instead each dynamic object allocated by uHeap has a header than contains bookkeeping information.
-\item
-Each thread-local heap in uHeap has its own allocation buffer that is taken from the system using sbrk() call. (FIXME cite 2.7)
-\item
-Unless a heap is freeing an object that is owned by another thread's heap or heap is using sbrk() system call, uHeap is mostly lock-free which eliminates most of the contention on shared resources. (FIXME cite 2.8)
-\end{itemize}
-
-As uHeap uses a heap per-thread model to reduce contention on heap resources, we manage a list of heaps (heap-list) that can be used by threads. The list is empty at the start of the program. When a kernel thread (KT) is created, we check if heap-list is empty. If no then a heap is removed from the heap-list and is given to this new KT to use exclusively. If yes then a new heap object is created in dynamic memory and is given to this new KT to use exclusively. When a KT exits, its heap is not destroyed but instead its heap is put on the heap-list and is ready to be reused by new KTs.
-
-This reduces the memory footprint as the objects on free-lists of a KT that has exited can be reused by a new KT. Also, we preserve all the heaps that were created during the lifetime of the program till the end of the program. uHeap uses object ownership where an object is freed to the free-buckets of the heap that allocated it. Even after a KT A has exited, its heap has to be preserved as there might be objects in-use of other threads that were initially allocated by A and the passed to other threads.
+\section{llheap Structure}
+
+\VRef[Figure]{f:llheapStructure} shows the design of llheap, which uses the following features:
+\begin{itemize}
+\item
+1:1 multiple-heap model to minimize the fastpath,
+\item
+can be built with or without heap ownership,
+\item
+headers per allocation versus containers,
+\item
+no coalescing to minimize latency,
+\item
+global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
+\item
+local reserved memory (pool) per heap obtained from global pool,
+\item
+global reserved memory (pool) obtained from the operating system using @sbrk@ call,
+\item
+optional fast-lookup table for converting allocation requests into bucket sizes,
+\item
+optional statistic-counters table for accumulating counts of allocation operations.
+\end{itemize}
 
 \begin{figure}
 \centering
-\includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
-\caption{uHeap Structure}
-\label{fig:heapStructureFig}
+% \includegraphics[width=0.65\textwidth]{figures/NewHeapStructure.eps}
+\input{llheap}
+\caption{llheap Structure}
+\label{f:llheapStructure}
 \end{figure}
 
-Each heap uses seggregated free-buckets that have free objects of a specific size. Each free-bucket of a specific size has following 2 lists in it:
-\begin{itemize}
-\item
-Free list is used when a thread is freeing an object that is owned by its own heap so free list does not use any locks/atomic-operations as it is only used by the owner KT.
-\item
-Away list is used when a thread A is freeing an object that is owned by another KT B's heap. This object should be freed to the owner heap (B's heap) so A will place the object on the away list of B. Away list is lock protected as it is shared by all other threads.
-\end{itemize}
-
-When a dynamic object of a size S is requested. The thread-local heap will check if S is greater than or equal to the mmap threshhold. Any request larger than the mmap threshhold is fulfilled by allocating an mmap area of that size and such requests are not allocated on sbrk area. The value of this threshhold can be changed using mallopt routine but the new value should not be larger than our biggest free-bucket size.
-
-Algorithm~\ref{alg:heapObjectAlloc} briefly shows how an allocation request is fulfilled.
-
-\begin{algorithm}
-\caption{Dynamic object allocation of size S}\label{alg:heapObjectAlloc}
+llheap starts by creating an array of $N$ global heaps from storage obtained using @mmap@, where $N$ is the number of computer cores, which persists for the program duration.
+There is a global bump-pointer to the next free heap in the array.
+When this array is exhausted, another array is allocated.
+There is a global top pointer to an intrusive linked list chaining the free heaps from terminated threads.
+When statistics are turned on, there is a second global top pointer to an intrusive linked list chaining \emph{all} the heaps, which is traversed by @malloc_stats@ to accumulate the statistics counters across heaps.
+
+When a KT starts, a heap is allocated from the current array for exclusive use by the KT.
+When a KT terminates, its heap is chained onto the heap free-list for reuse by a new KT, which prevents unbounded growth of heaps.
+The free heaps form a stack so hot storage is reused first.
+Preserving all heaps created during the program lifetime solves the storage-lifetime problem when ownership is used.
+This approach wastes storage if a large number of KTs are created/terminated at program start and then the program continues sequentially.
+llheap can be configured with object ownership, where an object is freed to the heap from which it is allocated, or object no-ownership, where an object is freed to the KT's current heap.
+
+Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
+The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
+Each free bucket of a specific size has the following two lists:
+\begin{itemize}
+\item
+A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
+The free objects are a stack so hot storage is reused first.
+\item
+For ownership, a shared away stack onto which other KTs return storage allocated by this heap, so push/pop operations require locking.
+When the free stack is empty, the entire away stack is removed in one operation and becomes the new free stack.
+\end{itemize}
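The two-stack scheme above can be sketched in C. This is a minimal illustration, not llheap's code: llheap may use lock-free operations for the away stack, whereas this sketch guards it with a pthread mutex, and all names are invented.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

// Intrusive free block: the link lives inside the freed storage itself.
typedef struct Block { struct Block * next; } Block;

typedef struct Bucket {
    Block * freeStack;              // owner KT only: no locking needed
    Block * awayStack;              // shared by other KTs: guarded by lock
    pthread_mutex_t lock;
} Bucket;

// Owner thread: push/pop without atomics.
static void pushFree( Bucket * b, Block * blk ) {
    blk->next = b->freeStack;
    b->freeStack = blk;
}

// Non-owner thread returning storage: push onto the away stack under lock.
static void pushAway( Bucket * b, Block * blk ) {
    pthread_mutex_lock( &b->lock );
    blk->next = b->awayStack;
    b->awayStack = blk;
    pthread_mutex_unlock( &b->lock );
}

// Owner thread allocation: try the free stack; if empty, take the ENTIRE
// away stack in one locked operation, making it the new free stack.
static Block * popFree( Bucket * b ) {
    if ( b->freeStack == NULL ) {
        pthread_mutex_lock( &b->lock );
        b->freeStack = b->awayStack;
        b->awayStack = NULL;
        pthread_mutex_unlock( &b->lock );
    }
    Block * blk = b->freeStack;
    if ( blk != NULL ) b->freeStack = blk->next;
    return blk;
}
```

Because both stacks are LIFO, the most recently freed (hot) storage is reused first, matching the prose above.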
+
+Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
+First, the allocation is divided into small (@sbrk@) or large (@mmap@).
+For large allocations, the storage is mapped directly from the operating system.
+For small allocations, $S$ is quantized into a bucket size.
+Quantizing is performed using a binary search over the ordered bucket array.
+An optional optimization provides fast $O(1)$ lookup for sizes $<$ 64K using a 64K array of type @char@, where each element holds the index of the corresponding bucket.
+(Type @char@ restricts the number of bucket sizes to 256.)
+For $S \geq$ 64K, a binary search is used.
+Then, the allocation storage is obtained from the following locations (in order), with increasing latency.
+\begin{enumerate}[topsep=0pt,itemsep=0pt,parsep=0pt]
+\item
+bucket's free stack,
+\item
+bucket's away stack,
+\item
+heap's local pool,
+\item
+global pool,
+\item
+operating system (@sbrk@).
+\end{enumerate}
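The quantization step (fast table for small sizes, binary search otherwise) can be sketched as follows. The bucket sizes and the 1K table bound are illustrative only, not llheap's actual 91 sizes and 64K table.

```c
#include <assert.h>
#include <stddef.h>

// Illustrative bucket sizes (llheap uses 91 sizes from 16 bytes to 4M).
static const size_t bucketSizes[] = { 16, 32, 48, 64, 96, 128, 256, 512, 1024 };
enum { NumBuckets = sizeof(bucketSizes) / sizeof(bucketSizes[0]) };

// O(1) fast path: one char index per request size up to the table bound.
// Type char restricts the design to at most 256 buckets.
static unsigned char lookup[1025];

static void buildLookup( void ) {
    size_t b = 0;
    for ( size_t s = 0; s <= 1024; s += 1 ) {
        while ( bucketSizes[b] < s ) b += 1;   // smallest bucket >= s
        lookup[s] = (unsigned char)b;
    }
}

// Slow path: binary search for the smallest bucket size >= request.
static size_t bucketSearch( size_t request ) {
    size_t lo = 0, hi = NumBuckets;
    while ( lo < hi ) {
        size_t mid = (lo + hi) / 2;
        if ( bucketSizes[mid] < request ) lo = mid + 1; else hi = mid;
    }
    return lo;
}

static size_t quantize( size_t request ) {
    // larger requests are handled by mmap, never quantized
    assert( request <= bucketSizes[NumBuckets - 1] );
    if ( request <= 1024 ) return bucketSizes[lookup[request]];  // O(1)
    return bucketSizes[bucketSearch(request)];                   // O(log n)
}
```

For example, a 42-byte request quantizes to the 48-byte bucket via the table.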
+
+\begin{figure}
+\vspace*{-10pt}
+\begin{algorithm}[H]
+\small
+\caption{Dynamic object allocation of size $S$}\label{alg:heapObjectAlloc}
 \begin{algorithmic}[1]
 \State $\textit{O} \gets \text{NULL}$
-\If {$S < \textit{mmap-threshhold}$}
-	\State $\textit{B} \gets (\text{smallest free-bucket} \geq S)$
+\If {$S \geq \textit{mmap-threshold}$}
+	\State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
+\Else
+	\State $\textit{B} \gets \text{smallest free-bucket} \geq S$
 	\If {$\textit{B's free-list is empty}$}
 		\If {$\textit{B's away-list is empty}$}
 			\If {$\textit{heap's allocation buffer} < S$}
-				\State $\text{get allocation buffer using system call sbrk()}$
+				\State $\text{get allocation from global pool (which might call \lstinline{sbrk})}$
 			\EndIf
 			\State $\textit{O} \gets \text{bump allocate an object of size S from allocation buffer}$
@@ -157,6 +337,4 @@
 	\EndIf
 	\State $\textit{O's owner} \gets \text{B}$
-\Else
-	\State $\textit{O} \gets \text{allocate dynamic memory using system call mmap with size S}$
 \EndIf
 \State $\Return \textit{ O}$
@@ -164,4 +342,5 @@
 \end{algorithm}
 
+\vspace*{-15pt}
+\begin{algorithm}[H]
+\small
+\caption{Dynamic object free at address $A$ with object ownership}\label{alg:heapObjectFreeOwn}
+\begin{algorithmic}[1]
+\If {$\textit{A mapped allocation}$}
+	\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
 \Else
 	\State $\text{B} \gets \textit{O's owner}$
@@ -181,4 +369,5 @@
 \end{algorithmic}
 \end{algorithm}
+
+\vspace*{-15pt}
+\begin{algorithm}[H]
+\small
+\caption{Dynamic object free at address $A$ without object ownership}\label{alg:heapObjectFreeNoOwn}
+\begin{algorithmic}[1]
+\If {$\textit{A mapped allocation}$}
+	\State $\text{return A's dynamic memory to system using system call \lstinline{munmap}}$
+\Else
+	\State $\text{B} \gets \textit{O's owner}$
+	\If {$\textit{B is thread-local heap's bucket}$}
+		\State $\text{push A to B's free-list}$
+	\Else
+		\State $\text{C} \gets \textit{thread local heap's bucket with same size as B}$
+		\State $\text{push A to C's free-list}$
+	\EndIf
+\EndIf
+\end{algorithmic}
+\end{algorithm}
+\end{figure}
+
+Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership.
+First, the address is divided into small (@sbrk@) or large (@mmap@).
+For large allocations, the storage is unmapped back to the operating system.
+For small allocations, the bucket associated with the request size is retrieved.
+If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
+If the bucket is not local to the thread, the allocation is pushed onto the owning thread's associated away stack.
+
+Algorithm~\ref{alg:heapObjectFreeNoOwn} shows the de-allocation (free) outline for an object at address $A$ without ownership.
+The algorithm is the same as for ownership except if the bucket is not local to the thread.
+Instead, the deallocating thread's bucket with the same size as the owner's bucket is computed, and the allocation is pushed onto that bucket's free stack.
+
+Finally, the llheap design funnels \label{p:FunnelRoutine} all allocation/deallocation operations through routines @malloc@/@free@, which are the only routines to directly access and manage the internal data structures of the heap.
+Other allocation operations, \eg @calloc@, @memalign@, and @realloc@, are composed of calls to @malloc@ and possibly @free@, and may manipulate header information after storage is allocated.
+This design simplifies heap-management code during development and maintenance.
+
+
+\subsection{Alignment}
+
+All dynamic memory allocations must have a minimum storage alignment for the contained object(s).
+Often the minimum memory alignment, M, is the bus width (32 or 64-bit), the largest register (double, long double), the largest atomic instruction (DCAS), or vector data (MMX).
+In general, the minimum storage alignment is an 8/16-byte boundary on 32/64-bit computers.
+For consistency, the object header is normally aligned at this same boundary.
+Larger alignments must be a power of 2, such as page alignment (4/8K).
+Any alignment request, N, $\le$ the minimum alignment is handled as a normal allocation with minimal alignment.
+
+For alignments greater than the minimum, the obvious approach for aligning to address @A@ is: compute the next address that is a multiple of @N@ after the current end of the heap, @E@, plus room for the header before @A@ and the size of the allocation after @A@, moving the end of the heap to @E'@.
+\begin{center}
+\input{Alignment1}
+\end{center}
+The storage between @E@ and @H@ is chained onto the appropriate free list for future allocations.
+This approach is also valid within any sufficiently large free block, where @E@ is the start of the free block, and any unused storage before @H@ or after the allocated object becomes free storage.
+In this approach, the aligned address @A@ is the same as the allocated storage address @P@, \ie @P@ $=$ @A@ for all allocation routines, which simplifies deallocation.
+However, if there are a large number of aligned requests, this approach leads to memory fragmentation from the small free areas around the aligned object.
+As well, it does not work for large allocations, where many memory allocators switch from program @sbrk@ to operating-system @mmap@.
+The reason is that @mmap@ only starts on a page boundary, and it is difficult to reuse the storage before the alignment boundary for other requests.
+Finally, this approach is incompatible with allocator designs that funnel allocation requests through @malloc@ as it directly manipulates management information within the allocator to optimize the space/time of a request.
+
+Instead, llheap alignment is accomplished by making a \emph{pessimistic} allocation request for sufficient storage to ensure that \emph{both} the alignment and size request are satisfied, \eg:
+\begin{center}
+\input{Alignment2}
+\end{center}
+The amount of storage necessary is @alignment - M + size@, which ensures there is an address, @A@, after the storage returned from @malloc@, @P@, that is a multiple of @alignment@ followed by sufficient storage for the data object.
+The approach is pessimistic because if @P@ already has the correct alignment @N@, the initial allocation has already requested sufficient space to move to the next multiple of @N@.
+For this special case, there is @alignment - M@ bytes of unused storage after the data object, which subsequently can be used by @realloc@.
+
+Note, the address returned is @A@, which is subsequently returned to @free@.
+However, to correctly free the allocated object, the value @P@ must be computable, since that is the value generated by @malloc@ and returned within @memalign@.
+Hence, there must be a mechanism to detect when @P@ $\neq$ @A@ and how to compute @P@ from @A@.
+
+The llheap approach uses two headers:
+the \emph{original} header associated with a memory allocation from @malloc@, and a \emph{fake} header within this storage before the alignment boundary @A@, which is returned from @memalign@, \eg:
+\begin{center}
+\input{Alignment2Impl}
+\end{center}
+Since @malloc@ has a minimum alignment of @M@, @P@ $\neq$ @A@ only holds for alignments of @M@ or greater.
+When @P@ $\neq$ @A@, the minimum distance between @P@ and @A@ is @M@ bytes, due to the pessimistic storage-allocation.
+Therefore, there is always room for an @M@-byte fake header before @A@.
+
+The fake header must supply an indicator to distinguish it from a normal header and the location of address @P@ generated by @malloc@.
+This information is encoded as an offset from @A@ to @P@ and the initial alignment (discussed in \VRef{s:ReallocStickyProperties}).
+To distinguish a fake header from a normal header, the least-significant bit of the alignment is used because the offset participates in multiple calculations, while the alignment is just remembered data.
+\begin{center}
+\input{FakeHeader}
+\end{center}
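A minimal C sketch of the pessimistic allocation and fake header, assuming @malloc@ returns storage aligned to the minimum alignment M (16 bytes here) and using an invented two-word header; llheap's real header layout differs.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Invented header: only the fields needed for the alignment scheme appear.
typedef struct Header {
    uint64_t offsetOrBucket;   // fake header: offset from A back to P
    uint64_t alignment;        // low bit set marks a fake header
} Header;

enum { M = sizeof(Header) };   // assumed minimum alignment (16 bytes)

// Pessimistic request: alignment - M + size extra bytes guarantee an
// aligned address A, with room before it for a fake header, exists in the
// block returned at P. (The extra sizeof(Header) stands in for the real
// header that llheap's malloc carries internally.)
static void * alignedAlloc( size_t alignment, size_t size ) {
    assert( alignment >= M && (alignment & (alignment - 1)) == 0 );
    char * P = malloc( alignment - M + size + sizeof(Header) );
    if ( P == NULL ) return NULL;
    uintptr_t A = ((uintptr_t)P + sizeof(Header) + alignment - 1)
                  & ~(uintptr_t)(alignment - 1);
    Header * fake = (Header *)A - 1;          // fake header just before A
    fake->offsetOrBucket = A - (uintptr_t)P;  // distance back to P
    fake->alignment = alignment | 1;          // low bit => fake header
    return (void *)A;
}

// Free at A: detect the fake header and recover P for the real free.
static void alignedFree( void * addr ) {
    Header * h = (Header *)addr - 1;
    assert( h->alignment & 1 );               // sketch handles only fake headers
    free( (char *)addr - h->offsetOrBucket ); // compute P from A and the offset
}
```

In llheap, the same low-bit test distinguishes a fake header from a normal header, so @free@ works uniformly on storage from @malloc@ and @memalign@.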
+
+
+\subsection{\lstinline{realloc} and Sticky Properties}
+\label{s:ReallocStickyProperties}
+
+Allocation routine @realloc@ provides a memory-management pattern for shrinking/enlarging an existing allocation, while maintaining some or all of the object data, rather than performing the following steps manually.
+\begin{flushleft}
+\begin{tabular}{ll}
+\multicolumn{1}{c}{\textbf{realloc pattern}} & \multicolumn{1}{c}{\textbf{manually}} \\
+\begin{lstlisting}
+T * naddr = realloc( oaddr, newSize );
+
+
+
+\end{lstlisting}
+&
+\begin{lstlisting}
+T * naddr = (T *)malloc( newSize ); $\C[2.4in]{// new storage}$
+memcpy( naddr, addr, oldSize );	 $\C{// copy old bytes}$
+free( addr );				$\C{// free old storage}$
+addr = naddr;				$\C{// change pointer}\CRT$
+\end{lstlisting}
+\end{tabular}
+\end{flushleft}
+The realloc pattern leverages available storage at the end of an allocation due to bucket sizes, possibly eliminating a new allocation and copying.
+However, this pattern is underused by programmers, forgoing its reduction in storage-management costs.
+In fact, if @oaddr@ is @nullptr@, @realloc@ does a @malloc@, so even the initial @malloc@ can be a @realloc@ for consistency in the pattern.
+
+The hidden problem for this pattern is the effect of zero fill and alignment with respect to reallocation.
+Are these properties transient or persistent (``sticky'')?
+For example, when memory is initially allocated by @calloc@ or @memalign@ with zero fill or alignment properties, respectively, what happens when those allocations are given to @realloc@ to change size?
+That is, if @realloc@ logically extends storage into unused bucket space or allocates new storage to satisfy a size change, are the initial allocation properties preserved?
+Currently, allocation properties are not preserved, so subsequent use of @realloc@ storage may cause inefficient execution or errors due to lack of zero fill or alignment.
+This silent problem is unintuitive to programmers and difficult to locate because it is transient.
+To prevent these problems, llheap preserves initial allocation properties for the lifetime of an allocation and the semantics of @realloc@ are augmented to preserve these properties, with additional query routines.
+This change makes the realloc pattern efficient and safe.
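The zero-fill half of the sticky-property problem can be illustrated with a small wrapper. This is not llheap's implementation, which records the property in the allocation header rather than passing it explicitly; it only shows what "sticky zero fill" must guarantee.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Growing a calloc'd buffer with plain realloc leaves the new tail
// indeterminate; a sticky zero-fill property requires the allocator to
// zero the extension so calloc semantics survive resizing.
static void * reallocZero( void * oaddr, size_t oldSize, size_t newSize,
                           int zeroFill ) {
    void * naddr = realloc( oaddr, newSize );          // may move the data
    if ( naddr != NULL && zeroFill && newSize > oldSize )
        memset( (char *)naddr + oldSize, 0, newSize - oldSize ); // zero the tail
    return naddr;
}
```

With the property preserved, code that assumes zeroed storage keeps working after any number of resizes.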
+
+
+\subsection{Header}
+
+Preserving allocation properties requires storing additional information with an allocation.
+The only available location is the header, where \VRef[Figure]{f:llheapNormalHeader} shows the llheap storage layout.
+The header has two data fields sized appropriately for 32/64-bit alignment requirements.
+The first field is a union of three values:
+\begin{description}
+\item[bucket pointer]
+is for allocated storage and points back to the bucket associated with this storage request (see \VRef[Figure]{f:llheapStructure} for the fields accessible in a bucket).
+\item[mapped size]
+is for mapped storage and is the storage size for use in unmapping.
+\item[next free block]
+is for free storage and is an intrusive pointer chaining same-size free blocks onto a bucket's free stack.
+\end{description}
+The second field remembers the request size versus the allocation (bucket) size, \eg a 42-byte request is rounded up to a 64-byte bucket.
+Since programmers think in request sizes rather than allocation sizes, the request size allows better generation of statistics or errors.
+
+\begin{figure}
+\centering
+\input{Header}
+\caption{llheap Normal Header}
+\label{f:llheapNormalHeader}
+\end{figure}
+
+The low-order 3-bits of the first field are \emph{unused} for any stored values, whereas the second field may use all of its bits.
+The 3 unused bits are used to represent mapped allocation, zero filled, and alignment, respectively.
+Note, the alignment bit is not used in the normal header and the zero-filled/mapped bits are not used in the fake header.
+This implementation allows a fast test if any of the lower 3-bits are on (@&@ and compare).
+If no bits are on, it implies a basic allocation, which is handled quickly;
+otherwise, the bits are analysed and appropriate actions are taken for the complex cases.
+Since most allocations are basic, this implementation results in a significant performance gain along the allocation and free fastpath.
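The flag encoding and fastpath test described above can be sketched as follows, with invented flag names; the bits are free because buckets, free blocks, and mapped sizes are all at least 8-byte aligned.

```c
#include <assert.h>
#include <stdint.h>

// Low-order 3 bits of the first header field hold allocation properties.
enum {
    MappedBit = 0x1,    // storage was allocated with mmap
    ZeroBit   = 0x2,    // sticky zero-fill property
    AlignBit  = 0x4,    // sticky alignment property (fake header)
    FlagMask  = 0x7,
};

// Fastpath test: one AND and compare decides basic versus complex cases.
static int isBasic( uint64_t field ) { return (field & FlagMask) == 0; }

// Recover the stored pointer/size by stripping the flag bits.
static uint64_t clearFlags( uint64_t field ) {
    return field & ~(uint64_t)FlagMask;
}
```

A basic allocation (no bits set) takes the quick path; otherwise the bits select the complex handling.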
+
+
+\section{Statistics and Debugging}
+
+llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour.
+Incrementing statistic counters must appear on the allocation fastpath.
+As noted, any atomic operation along the fastpath produces a significant increase in allocation costs.
+To make statistics performant enough for use on running systems, each heap has its own set of statistic counters, so heap operations do not require atomic operations.
+
+To locate all statistic counters, heaps are linked together in statistics mode, and this list is locked and traversed to sum all counters across heaps.
+Note, the list is locked to prevent errors traversing an active list;
+the statistics counters are not locked and can flicker during accumulation, which is not an issue with atomic read/write.
+\VRef[Figure]{f:StatiticsOutput} shows an example of statistics output, which covers all allocation operations and information about deallocating storage not owned by a thread.
+No other memory allocator studied provides as comprehensive statistical information.
+Finally, these statistics were invaluable during the development of this thesis for debugging and verifying correctness, and hence, should be equally valuable to application developers.
+
+\begin{figure}
+\begin{lstlisting}
+Heap statistics: (storage request / allocation)
+  malloc >0 calls 2,766; 0 calls 2,064; storage 12,715 / 13,367 bytes
+  aalloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  calloc >0 calls 6; 0 calls 0; storage 1,008 / 1,104 bytes
+  memalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  amemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  cmemalign >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  resize >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  realloc >0 calls 0; 0 calls 0; storage 0 / 0 bytes
+  free !null calls 2,766; null calls 4,064; storage 12,715 / 13,367 bytes
+  away pulls 0; pushes 0; storage 0 / 0 bytes
+  sbrk calls 1; storage 10,485,760 bytes
+  mmap calls 10,000; storage 10,000 / 10,035 bytes
+  munmap calls 10,000; storage 10,000 / 10,035 bytes
+  threads started 4; exited 3
+  heaps new 4; reused 0
+\end{lstlisting}
+\caption{Statistics Output}
+\label{f:StatiticsOutput}
+\end{figure}
+
+llheap can also be built with debug checking, which inserts many asserts along all allocation paths.
+These assertions detect incorrect allocation usage, like double frees, unfreed storage, or memory corruptions because internal values (like header fields) are overwritten.
+These checks are best effort as opposed to complete allocation checking as in @valgrind@.
+Nevertheless, the checks detect many allocation problems.
+There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations persist for the program's lifetime, and hence, do not free their storage.
+For example, @printf@ allocates a 1024-byte buffer on the first call and never frees it.
+To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \VPageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference.
+Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.
+
+Tests indicate only a 30\% increase in execution time when statistics \emph{and} debugging are enabled, and the latency cost for accumulating statistics is mitigated by the limited number of accumulation calls, often only one at the end of the program.
+
+
+\section{User-level Threading Support}
+\label{s:UserlevelThreadingSupport}
+
+The serially-reusable problem (see \VRef{s:AllocationFastpath}) occurs for kernel threads in the ``T:H, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
+The solution is to prevent interrupts that can result in CPU or KT change during operations that are logically critical sections.
+Locking these critical sections negates any attempt for a quick fastpath and results in high contention.
+For user-level threading, the serially-reusable problem appears with time slicing for preemptable scheduling, as the signal handler context switches to another user-level thread.
+Without time slicing, a user thread performing a long computation can prevent execution (starve) other threads.
+To prevent starvation for an allocation-active thread, \ie the time slice always triggers in an allocation critical-section for one thread, a thread-local \newterm{rollforward} flag is set in the signal handler when it aborts a time slice.
+The rollforward flag is tested at the end of each allocation funnel routine (see \VPageref{p:FunnelRoutine}), and if set, it is reset and a voluntary yield (context switch) is performed to allow other threads to execute.
+
+llheap uses two techniques to detect when execution is in an allocation operation or a routine called from an allocation operation, to abort any time slice during this period.
+On the slowpath when executing expensive operations, like @sbrk@ or @mmap@, interrupts are disabled/enabled by setting thread-local flags so the signal handler aborts immediately.
+On the fastpath, disabling/enabling interrupts is too expensive as accessing thread-local storage can be expensive and not thread-safe.
+For example, the ARM processor stores the thread-local pointer in a coprocessor register that cannot perform atomic base-displacement addressing.
+Hence, there is a window between loading the thread-local pointer from the coprocessor register into a normal register and adding the displacement when a time slice can move a thread.
+
+The fast technique defines a special code section and places all non-interruptible routines in this section.
+The linker places all code in this section into a contiguous block of memory, but the order of routines within the block is unspecified.
+Then, the signal handler compares the program counter at the point of interrupt with the start and end addresses of the non-interruptible section, and if executing within this section, aborts the time slice and sets the rollforward flag.
+This technique is fragile because any calls in the non-interruptible code outside of the non-interruptible section (like @sbrk@) must be bracketed with disable/enable interrupts and these calls must be along the slowpath.
+Hence, for correctness, this approach requires inspection of generated assembler code for routines placed in the non-interruptible section.
+This issue is mitigated by the llheap funnel design so only funnel routines and a few statistics routines are placed in the non-interruptible section and their assembler code examined.
+These techniques are used in both the \uC and \CFA versions of llheap, where both of these systems have user-level threading.
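On GNU/Linux, the non-interruptible-section technique can be sketched with a named code section and the linker's automatic `__start_`/`__stop_` bounding symbols, which the GNU linker synthesizes for any section whose name is a valid C identifier. The section and routine names here are hypothetical, not llheap's.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Place a routine's code in a dedicated, contiguously linked section.
#define NON_INTERRUPTIBLE __attribute__(( section( "llheap_funnel" ), noinline ))

// Bounding symbols supplied by the GNU linker for section llheap_funnel.
extern const char __start_llheap_funnel[], __stop_llheap_funnel[];

// A funnel routine placed in the special section; the allocation body is
// elided because only code placement matters for this sketch.
NON_INTERRUPTIBLE void * funnel_malloc( size_t size ) {
    (void)size;
    return NULL;
}

// Signal-handler check: is the interrupted program counter inside the
// non-interruptible section? If so, the handler aborts the time slice and
// sets the rollforward flag instead of context switching.
static int inNonInterruptible( uintptr_t pc ) {
    return pc >= (uintptr_t)__start_llheap_funnel &&
           pc <  (uintptr_t)__stop_llheap_funnel;
}
```

As the text notes, any call out of the section (like @sbrk@) must still be bracketed with disable/enable interrupts on the slowpath.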
+
+
+\section{Bootstrapping}
+
+There are problems bootstrapping a memory allocator.
+\begin{enumerate}
+\item
+Programs can be statically or dynamically linked.
+\item
+The order the linker schedules startup code is poorly supported.
+\item
+Knowing a KT's start and end independently from the KT code is difficult.
+\end{enumerate}
+
+For static linking, the allocator is loaded with the program.
+Hence, allocation calls immediately invoke the allocator operation defined by the loaded allocation library and there is only one memory allocator used in the program.
+This approach allows allocator substitution by placing an allocation library before any other in the linked/load path.
+
+Allocator substitution is similar for dynamic linking, but the problem is that the dynamic loader starts first and needs to perform dynamic allocations \emph{before} the substitution allocator is loaded.
+As a result, the dynamic loader uses a default allocator until the substitution allocator is loaded, after which all allocation operations are handled by the substitution allocator, including from the dynamic loader.
+Hence, some part of the @sbrk@ area may be used by the default allocator and statistics about allocation operations cannot be correct.
+Furthermore, dynamic linking goes through trampolines, so there is an additional cost along the allocator fastpath for all allocation operations.
+Testing showed up to a 5\% performance cost for dynamic linking over static linking, even when using @tls_model("initial-exec")@ so the dynamic loader can obtain tighter binding.
+
+All allocator libraries need to perform startup code to initialize data structures, such as the heap array for llheap.
+The problem is getting this initialization done before the first allocator call.
+However, there does not seem to be a mechanism to tell either the static or dynamic loader to run initialization code before any calls into a loaded library.
+As a result, calls to allocation routines occur without initialization.
+To deal with this problem, it is necessary to put a conditional initialization check along the allocation fastpath to trigger initialization (singleton pattern).
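The conditional-initialization (singleton) check can be sketched as follows; names are illustrative, and the real version also needs the recursion guard discussed below.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

// No loader mechanism guarantees library startup code runs before the
// first allocation call, so every entry point tests an init flag.
static bool heapInitialized = false;
static int ctorCalls = 0;            // demonstration counter only

static void heapManagerCtor( void ) {
    // build heap array, buckets, statistics ... (elided)
    ctorCalls += 1;
    heapInitialized = true;
}

static void * llheap_malloc( size_t size ) {
    if ( ! heapInitialized ) heapManagerCtor();  // singleton check on fastpath
    (void)size;
    return NULL;   // allocation proper elided
}
```

The check costs one predictable branch per call, which is cheap relative to the rest of the allocation fastpath.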
+
+Two other important execution points are program startup and termination, which include prologue and epilogue code to bootstrap a program, of which programmers are unaware.
+For example, dynamic-memory allocations before/after the application starts should not be considered in statistics because the application does not make these calls.
+llheap establishes these two points using routines:
+\begin{lstlisting}
+__attribute__(( constructor( 100 ) )) static void startup( void ) {
+	// clear statistic counters
+	// reset allocUnfreed counter
+}
+__attribute__(( destructor( 100 ) )) static void shutdown( void ) {
+	// sum allocUnfreed for all heaps
+	// subtract global unfreed storage
+	// if allocUnfreed > 0 then print warning message
+}
+\end{lstlisting}
+which use global constructor/destructor priority 100, where the linker calls these routines at program prologue/epilogue in increasing/decreasing order of priority.
+Application programs may only use global constructor/destructor priorities greater than 100.
+Hence, @startup@ is called after the program prologue but before the application starts, and @shutdown@ is called after the program terminates but before the program epilogue.
+By resetting counters in @startup@, prologue allocations are ignored, and checking unfreed storage in @shutdown@ checks only application memory management, ignoring the program epilogue.
+
+While @startup@/@shutdown@ apply to the program KT, a concurrent program creates additional KTs that do not trigger these routines.
+However, it is essential for the allocator to know when each KT is started/terminated.
+One approach is to create a thread-local object with a constructor/destructor, which is triggered after a new KT starts and before it terminates, respectively.
+\begin{lstlisting}
+struct ThreadManager {
+	volatile bool pgm_thread;
+	ThreadManager() {} // unusable
+	~ThreadManager() { if ( pgm_thread ) heapManagerDtor(); }
+};
+static thread_local ThreadManager threadManager;
+\end{lstlisting}
+Unfortunately, thread-local variables are created lazily, \ie on the first dereference of @threadManager@, which then triggers its constructor.
+Therefore, the constructor is useless for knowing when a KT starts because the KT must reference it, and the allocator does not control the application KT.
+Fortunately, the singleton pattern needed for initializing the program KT also triggers KT allocator initialization, which can then reference @pgm_thread@ to call @threadManager@'s constructor, otherwise its destructor is not called.
+Now when a KT terminates, @~ThreadManager@ is called to chain it onto the global-heap free-stack, where @pgm_thread@ is set to true only for the program KT.
+The conditional destructor call prevents closing down the program heap, which must remain available because epilogue code may free more storage.
+
+Finally, there is a recursive problem when the singleton pattern dereferences @pgm_thread@ to initialize the thread-local object, because its initialization calls @atExit@, which immediately calls @malloc@ to obtain storage.
+This recursion is handled with another thread-local flag to prevent double initialization.
+A similar problem exists when the KT terminates and calls member @~ThreadManager@, because immediately afterwards, the terminating KT calls @free@ to deallocate the storage obtained from the @atExit@.
+In the meantime, the terminated heap has been put on the global-heap free-stack, and may be in use by a new KT, so the @atExit@ free is handled as a free to another heap and put onto the away list using locking.
+
+For user threading systems, the KTs are controlled by the runtime, and hence, KT start/end points are known and interact directly with the llheap allocator for \uC and \CFA, which eliminates or simplifies several of these problems.
+The following API was created to provide interaction between the language runtime and the allocator.
+\begin{lstlisting}
+void startTask();			$\C{// KT starts}$
+void finishTask();			$\C{// KT ends}$
+void startup();				$\C{// when application code starts}$
+void shutdown();			$\C{// when application code ends}$
+bool traceHeap();			$\C{// enable allocation/free printing for debugging}$
+bool traceHeapOn();			$\C{// start printing allocation/free calls}$
+bool traceHeapOff();			$\C{// stop printing allocation/free calls}$
+\end{lstlisting}
+This kind of API is necessary to allow concurrent runtime systems to interact with different memory allocators in a consistent way.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
 \section{Added Features and Methods}
-To improve the uHeap allocator (FIX ME: cite uHeap) interface and make it more user friendly, we added a few more routines to the C allocator. Also, we built a \CFA (FIX ME: cite cforall) interface on top of C interface to increase the usability of the allocator.
-
-\subsection{C Interface}
-We added a few more features and routines to the allocator's C interface that can make the allocator more usable to the programmers. THese features will programmer more control on the dynamic memory allocation.
+
+The C dynamic-allocation API (see \VRef[Figure]{f:CDynamicAllocationAPI}) is neither orthogonal nor complete.
+For example,
+\begin{itemize}
+\item
+It is possible to zero fill or align an allocation but not both.
+\item
+It is \emph{only} possible to zero fill an array allocation.
+\item
+It is not possible to resize a memory allocation without data copying.
+\item
+@realloc@ does not preserve initial allocation properties.
+\end{itemize}
+As a result, programmers must implement the missing combinations themselves, which is error prone, and results in blaming the entire programming language for a poor dynamic-allocation API.
+Furthermore, newer programming languages have better type systems that can provide safer and more powerful APIs for memory allocation.
+
+\begin{figure}
+\begin{lstlisting}
+void * malloc( size_t size );
+void * calloc( size_t nmemb, size_t size );
+void * realloc( void * ptr, size_t size );
+void * reallocarray( void * ptr, size_t nmemb, size_t size );
+void free( void * ptr );
+void * memalign( size_t alignment, size_t size );
+void * aligned_alloc( size_t alignment, size_t size );
+int posix_memalign( void ** memptr, size_t alignment, size_t size );
+void * valloc( size_t size );
+void * pvalloc( size_t size );
+
+struct mallinfo mallinfo( void );
+int mallopt( int param, int val );
+int malloc_trim( size_t pad );
+size_t malloc_usable_size( void * ptr );
+void malloc_stats( void );
+int malloc_info( int options, FILE * fp );
+\end{lstlisting}
+\caption{C Dynamic-Allocation API}
+\label{f:CDynamicAllocationAPI}
+\end{figure}
+
+The following presents design and API changes for C, \CC (\uC), and \CFA, all of which are implemented in llheap.
+
 
 \subsection{Out of Memory}
@@ -212,354 +753,337 @@
 Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory;
 hence the need to return an alternate value for a zero-sized allocation.
-The alternative is to abort a program when out of memory.
-In theory, notifying the programmer allows recovery;
-in practice, it is almost impossible to gracefully when out of memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen.
-
-
-\subsection{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
-@aalloc@ is an extension of malloc. It allows programmer to allocate a dynamic array of objects without calculating the total size of array explicitly. The only alternate of this routine in the other allocators is calloc but calloc also fills the dynamic memory with 0 which makes it slower for a programmer who only wants to dynamically allocate an array of objects without filling it with 0.
-\paragraph{Usage}
+A different approach allowed by the C API is to abort a program when out of memory and return @nullptr@ for a zero-sized allocation.
+In theory, notifying the programmer of memory failure allows recovery;
+in practice, it is almost impossible to gracefully recover when out of memory.
+Hence, the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen because no pseudo allocation is necessary.
+
+
+\subsection{C Interface}
+
+For C, it is possible to increase the functionality and orthogonality of the dynamic-memory API to make allocation easier for programmers.
+
+For existing C allocation routines:
+\begin{itemize}
+\item
+@calloc@ sets the sticky zero-fill property.
+\item
+@memalign@, @aligned_alloc@, @posix_memalign@, @valloc@, and @pvalloc@ set the sticky alignment property.
+\item
+@realloc@ and @reallocarray@ preserve sticky properties.
+\end{itemize}
+
+The C dynamic-memory API is extended with the following routines:
+
+\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
+extends @calloc@ for allocating a dynamic array of objects, without explicitly calculating the total array size, but \emph{without} zero-filling the memory.
+@aalloc@ is significantly faster than @calloc@, which is the only alternative in the standard C API.
+
+\noindent\textbf{Usage}
 @aalloc@ takes two parameters.
-
-\begin{itemize}
-\item
-@dim@: number of objects in the array
-\item
-@elemSize@: size of the object in the array.
-\end{itemize}
-It returns address of dynamic object allocatoed on heap that can contain dim number of objects of the size elemSize. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{void * resize( void * oaddr, size_t size )}}
-@resize@ is an extension of relloc. It allows programmer to reuse a cuurently allocated dynamic object with a new size requirement. Its alternate in the other allocators is @realloc@ but relloc also copy the data in old object to the new object which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data in the old object to the new object.
-\paragraph{Usage}
+\begin{itemize}
+\item
+@dim@: number of array objects
+\item
+@elemSize@: size of array object
+\end{itemize}
+It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
+
+\paragraph{\lstinline{void * resize( void * oaddr, size_t size )}}
+extends @realloc@ for resizing an existing allocation \emph{without} copying previous data into the new allocation or preserving sticky properties.
+@resize@ is significantly faster than @realloc@, which is the only alternative.
+
+\noindent\textbf{Usage}
 @resize@ takes two parameters.
-
-\begin{itemize}
-\item
-@oaddr@: the address of the old object that needs to be resized.
-\item
-@size@: the new size requirement of the to which the old object needs to be resized.
-\end{itemize}
-It returns an object that is of the size given but it does not preserve the data in the old object. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
-This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize). In addition to resizing the size of of an old object, it can also realign the old object to a new alignment requirement.
-\paragraph{Usage}
-This resize takes three parameters. It takes an additional parameter of nalign as compared to the above resize (FIX ME: cite above resize).
-
-\begin{itemize}
-\item
-@oaddr@: the address of the old object that needs to be resized.
-\item
-@nalign@: the new alignment to which the old object needs to be realigned.
-\item
-@size@: the new size requirement of the to which the old object needs to be resized.
-\end{itemize}
-It returns an object with the size and alignment given in the parameters. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
-amemalign is a hybrid of memalign and aalloc. It allows programmer to allocate an aligned dynamic array of objects without calculating the total size of the array explicitly. It frees the programmer from calculating the total size of the array.
-\paragraph{Usage}
-amemalign takes three parameters.
-
-\begin{itemize}
-\item
-@alignment@: the alignment to which the dynamic array needs to be aligned.
-\item
-@dim@: number of objects in the array
-\item
-@elemSize@: size of the object in the array.
-\end{itemize}
-It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize. The returned dynamic array is aligned to the given alignment. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
-cmemalign is a hybrid of amemalign and calloc. It allows programmer to allocate an aligned dynamic array of objects that is 0 filled. The current way to do this in other allocators is to allocate an aligned object with memalign and then fill it with 0 explicitly. This routine provides both features of aligning and 0 filling, implicitly.
-\paragraph{Usage}
-cmemalign takes three parameters.
-
-\begin{itemize}
-\item
-@alignment@: the alignment to which the dynamic array needs to be aligned.
-\item
-@dim@: number of objects in the array
-\item
-@elemSize@: size of the object in the array.
-\end{itemize}
-It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize. The returned dynamic array is aligned to the given alignment and is 0 filled. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{size_t malloc_alignment( void * addr )}}
-@malloc_alignment@ returns the alignment of a currently allocated dynamic object. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verofying the alignment of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required alignment.
-\paragraph{Usage}
-@malloc_alignment@ takes one parameters.
-
-\begin{itemize}
-\item
-@addr@: the address of the currently allocated dynamic object.
-\end{itemize}
-@malloc_alignment@ returns the alignment of the given dynamic object. On failure, it return the value of default alignment of the uHeap allocator.
-
-\subsection{\lstinline{bool malloc_zero_fill( void * addr )}}
-@malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verifying the zero filled property of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was zero filled at the time of allocation.
-\paragraph{Usage}
+\begin{itemize}
+\item
+@oaddr@: address to be resized
+\item
+@size@: new allocation size (smaller or larger than previous)
+\end{itemize}
+It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero.
+
+\paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
+extends @aalloc@ and @memalign@ for allocating an aligned dynamic array of objects.
+Sets the sticky alignment property.
+
+\noindent\textbf{Usage}
+@amemalign@ takes three parameters.
+\begin{itemize}
+\item
+@alignment@: alignment requirement
+\item
+@dim@: number of array objects
+\item
+@elemSize@: size of array object
+\end{itemize}
+It returns the address of the aligned dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
+
+\paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
+extends @amemalign@ with zero fill and has the same usage as @amemalign@.
+Sets the sticky zero-fill and alignment properties.
+It returns the address of the aligned, zero-filled dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
+
+\paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
+returns the alignment of the dynamic object for use in aligning similar allocations.
+
+\noindent\textbf{Usage}
+@malloc_alignment@ takes one parameter.
+\begin{itemize}
+\item
+@addr@: address of an allocated object.
+\end{itemize}
+It returns the alignment of the given object, where objects not allocated with alignment return the minimal allocation alignment.
+
+\paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
+returns true if the object has the zero-fill sticky property for use in zero filling similar allocations.
+
+\noindent\textbf{Usage}
 @malloc_zero_fill@ takes one parameter.
 
 \begin{itemize}
 \item
-@addr@: the address of the currently allocated dynamic object.
-\end{itemize}
-@malloc_zero_fill@ returns true if the dynamic object was initially zero filled and return false otherwise. On failure, it returns false.
-
-\subsection{\lstinline{size_t malloc_size( void * addr )}}
-@malloc_size@ returns the allocation size of a currently allocated dynamic object. It allows the programmer in memory management and personal bookkeeping. It helps the programmer in verofying the alignment of a dynamic object especially in a scenerio similar to prudcer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required size. Its current alternate in the other allocators is @malloc_usable_size@. But, @malloc_size@ is different from @malloc_usable_size@ as @malloc_usabe_size@ returns the total data capacity of dynamic object including the extra space at the end of the dynamic object. On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object. This size is updated when an object is realloced, resized, or passed through a similar allocator routine.
-\paragraph{Usage}
+@addr@: address of an allocated object.
+\end{itemize}
+It returns true if the zero-fill sticky property is set and false otherwise.
+
+\paragraph{\lstinline{size_t malloc_size( void * addr )}}
+returns the request size of the dynamic object (updated when an object is resized) for use in similar allocations.
+See also @malloc_usable_size@.
+
+\noindent\textbf{Usage}
 @malloc_size@ takes one parameter.
-
-\begin{itemize}
-\item
-@addr@: the address of the currently allocated dynamic object.
-\end{itemize}
-@malloc_size@ returns the allocation size of the given dynamic object. On failure, it return zero.
-
-\subsection{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
-This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@). In addition to reallocating an old object and preserving the data in old object, it can also realign the old object to a new alignment requirement.
-\paragraph{Usage}
-This @realloc@ takes three parameters. It takes an additional parameter of nalign as compared to the default @realloc@.
-
-\begin{itemize}
-\item
-@oaddr@: the address of the old object that needs to be reallocated.
-\item
-@nalign@: the new alignment to which the old object needs to be realigned.
-\item
-@size@: the new size requirement of the to which the old object needs to be resized.
-\end{itemize}
-It returns an object with the size and alignment given in the parameters that preserves the data in the old object. On failure, it returns a @NULL@ pointer.
-
-\subsection{\CFA Malloc Interface}
-We added some routines to the malloc interface of \CFA. These routines can only be used in \CFA and not in our standalone uHeap allocator as these routines use some features that are only provided by \CFA and not by C. It makes the allocator even more usable to the programmers.
-\CFA provides the liberty to know the returned type of a call to the allocator. So, mainly in these added routines, we removed the object size parameter from the routine as allocator can calculate the size of the object from the returned type.
-
-\subsection{\lstinline{T * malloc( void )}}
-This malloc is a simplified polymorphic form of defualt malloc (FIX ME: cite malloc). It does not take any parameter as compared to default malloc that takes one parameter.
-\paragraph{Usage}
-This malloc takes no parameters.
-It returns a dynamic object of the size of type @T@. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * aalloc( size_t dim )}}
-This aalloc is a simplified polymorphic form of above aalloc (FIX ME: cite aalloc). It takes one parameter as compared to the above aalloc that takes two parameters.
-\paragraph{Usage}
-aalloc takes one parameters.
-
-\begin{itemize}
-\item
-@dim@: required number of objects in the array.
-\end{itemize}
-It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * calloc( size_t dim )}}
-This calloc is a simplified polymorphic form of defualt calloc (FIX ME: cite calloc). It takes one parameter as compared to the default calloc that takes two parameters.
-\paragraph{Usage}
-This calloc takes one parameter.
-
-\begin{itemize}
-\item
-@dim@: required number of objects in the array.
-\end{itemize}
-It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * resize( T * ptr, size_t size )}}
-This resize is a simplified polymorphic form of above resize (FIX ME: cite resize with alignment). It takes two parameters as compared to the above resize that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
-\paragraph{Usage}
-This resize takes two parameters.
-
-\begin{itemize}
-\item
-@ptr@: address of the old object.
-\item
-@size@: the required size of the new object.
-\end{itemize}
-It returns a dynamic object of the size given in paramters. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
-This @realloc@ is a simplified polymorphic form of defualt @realloc@ (FIX ME: cite @realloc@ with align). It takes two parameters as compared to the above @realloc@ that takes three parameters. It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
-\paragraph{Usage}
-This @realloc@ takes two parameters.
-
-\begin{itemize}
-\item
-@ptr@: address of the old object.
-\item
-@size@: the required size of the new object.
-\end{itemize}
-It returns a dynamic object of the size given in paramters that preserves the data in the given object. The returned object is aligned to the alignemtn of type @T@. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * memalign( size_t align )}}
-This memalign is a simplified polymorphic form of defualt memalign (FIX ME: cite memalign). It takes one parameters as compared to the default memalign that takes two parameters.
-\paragraph{Usage}
-memalign takes one parameters.
-
-\begin{itemize}
-\item
-@align@: the required alignment of the dynamic object.
-\end{itemize}
-It returns a dynamic object of the size of type @T@ that is aligned to given parameter align. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
-This amemalign is a simplified polymorphic form of above amemalign (FIX ME: cite amemalign). It takes two parameter as compared to the above amemalign that takes three parameters.
-\paragraph{Usage}
-amemalign takes two parameters.
-
-\begin{itemize}
-\item
-@align@: required alignment of the dynamic array.
-\item
-@dim@: required number of objects in the array.
-\end{itemize}
-It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. The returned object is aligned to the given parameter align. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * cmemalign( size_t align, size_t dim  )}}
-This cmemalign is a simplified polymorphic form of above cmemalign (FIX ME: cite cmemalign). It takes two parameter as compared to the above cmemalign that takes three parameters.
-\paragraph{Usage}
-cmemalign takes two parameters.
-
-\begin{itemize}
-\item
-@align@: required alignment of the dynamic array.
-\item
-@dim@: required number of objects in the array.
-\end{itemize}
-It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@. The returned object is aligned to the given parameter align and is zero filled. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * aligned_alloc( size_t align )}}
-This @aligned_alloc@ is a simplified polymorphic form of defualt @aligned_alloc@ (FIX ME: cite @aligned_alloc@). It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters.
-\paragraph{Usage}
-This @aligned_alloc@ takes one parameter.
-
-\begin{itemize}
-\item
-@align@: required alignment of the dynamic object.
-\end{itemize}
-It returns a dynamic object of the size of type @T@ that is aligned to the given parameter. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
-This @posix_memalign@ is a simplified polymorphic form of defualt @posix_memalign@ (FIX ME: cite @posix_memalign@). It takes two parameters as compared to the default @posix_memalign@ that takes three parameters.
-\paragraph{Usage}
-This @posix_memalign@ takes two parameter.
-
-\begin{itemize}
-\item
-@ptr@: variable address to store the address of the allocated object.
-\item
-@align@: required alignment of the dynamic object.
-\end{itemize}
-
-It stores address of the dynamic object of the size of type @T@ in given parameter ptr. This object is aligned to the given parameter. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * valloc( void )}}
-This @valloc@ is a simplified polymorphic form of defualt @valloc@ (FIX ME: cite @valloc@). It takes no parameters as compared to the default @valloc@ that takes one parameter.
-\paragraph{Usage}
-@valloc@ takes no parameters.
-It returns a dynamic object of the size of type @T@ that is aligned to the page size. On failure, it returns a @NULL@ pointer.
-
-\subsection{\lstinline{T * pvalloc( void )}}
-\paragraph{Usage}
-@pvalloc@ takes no parameters.
-It returns a dynamic object of the size that is calcutaed by rouding the size of type @T@. The returned object is also aligned to the page size. On failure, it returns a @NULL@ pointer.
-
-\subsection{Alloc Interface}
-In addition to improve allocator interface both for \CFA and our standalone allocator uHeap in C. We also added a new alloc interface in \CFA that increases usability of dynamic memory allocation.
-This interface helps programmers in three major ways.
-
-\begin{itemize}
-\item
-Routine Name: alloc interfce frees programmers from remmebring different routine names for different kind of dynamic allocations.
-\item
-Parametre Positions: alloc interface frees programmers from remembering parameter postions in call to routines.
-\item
-Object Size: alloc interface does not require programmer to mention the object size as \CFA allows allocator to determince the object size from returned type of alloc call.
-\end{itemize}
-
-Alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to the programmers. The new interfece has just one routine name alloc that can be used to perform a wide range of dynamic allocations. The parameters use backtick functions to provide a similar-to named parameters feature for our alloc interface so that programmers do not have to remember parameter positions in alloc call except the position of dimension (dim) parameter.
-
-\subsection{Routine: \lstinline{T * alloc( ... )}}
-Call to alloc wihout any parameter returns one object of size of type @T@ allocated dynamically.
-Only the dimension (dim) parameter for array allocation has the fixed position in the alloc routine. If programmer wants to allocate an array of objects that the required number of members in the array has to be given as the first parameter to the alloc routine.
-alocc routine accepts six kinds of arguments. Using different combinations of tha parameters, different kind of allocations can be performed. Any combincation of parameters can be used together except @`realloc@ and @`resize@ that should not be used simultanously in one call to routine as it creates ambiguity about whether to reallocate or resize a currently allocated dynamic object. If both @`resize@ and @`realloc@ are used in a call to alloc then the latter one will take effect or unexpected resulted might be produced.
-
-\paragraph{Dim}
-This is the only parameter in the alloc routine that has a fixed-position and it is also the only parameter that does not use a backtick function. It has to be passed at the first position to alloc call in-case of an array allocation of objects of type @T@.
-It represents the required number of members in the array allocation as in \CFA's aalloc (FIX ME: cite aalloc).
-This parameter should be of type @size_t@.
-
-Example: @int a = alloc( 5 )@
-This call will return a dynamic array of five integers.
-
-\paragraph{Align}
-This parameter is position-free and uses a backtick routine align (@`align@). The parameter passed with @`align@ should be of type @size_t@. If the alignment parameter is not a power of two or is less than the default alignment of the allocator (that can be found out using routine libAlign in \CFA) then the passed alignment parameter will be rejected and the default alignment will be used.
-
-Example: @int b = alloc( 5 , 64`align )@
-This call will return a dynamic array of five integers. It will align the allocated object to 64.
-
-\paragraph{Fill}
-This parameter is position-free and uses a backtick routine fill (@`fill@). In case of @realloc@, only the extra space after copying the data in the old object will be filled with given parameter.
-Three types of parameters can be passed using `fill.
-
-\begin{itemize}
-\item
-@char@: A char can be passed with @`fill@ to fill the whole dynamic allocation with the given char recursively till the end of required allocation.
-\item
-Object of returned type: An object of type of returned type can be passed with @`fill@ to fill the whole dynamic allocation with the given object recursively till the end of required allocation.
-\item
-Dynamic object of returned type: A dynamic object of type of returned type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object. In this case, the allocated memory is not filled recursively till the end of allocation. The filling happen untill the end object passed to @`fill@ or the end of requested allocation reaches.
-\end{itemize}
-
-Example: @int b = alloc( 5 , 'a'`fill )@
-This call will return a dynamic array of five integers. It will fill the allocated object with character 'a' recursively till the end of requested allocation size.
-
-Example: @int b = alloc( 5 , 4`fill )@
-This call will return a dynamic array of five integers. It will fill the allocated object with integer 4 recursively till the end of requested allocation size.
-
-Example: @int b = alloc( 5 , a`fill )@ where @a@ is a pointer of int type
-This call will return a dynamic array of five integers. It will copy data in a to the returned object non-recursively untill end of a or the newly allocated object is reached.
-
-\paragraph{Resize}
-This parameter is position-free and uses a backtick routine resize (@`resize@). It represents the old dynamic object (oaddr) that the programmer wants to
-\begin{itemize}
-\item
-resize to a new size.
-\item
-realign to a new alignment
-\item
-fill with something.
-\end{itemize}
-The data in old dynamic object will not be preserved in the new object. The type of object passed to @`resize@ and the returned type of alloc call can be different.
-
-Example: @int b = alloc( 5 , a`resize )@
-This call will resize object a to a dynamic array that can contain 5 integers.
-
-Example: @int b = alloc( 5 , a`resize , 32`align )@
-This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32.
-
-Example: @int b = alloc( 5 , a`resize , 32`align , 2`fill )@
-This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32 and will be filled with 2.
-
-\paragraph{Realloc}
-This parameter is position-free and uses a backtick routine @realloc@ (@`realloc@). It represents the old dynamic object (oaddr) that the programmer wants to
-\begin{itemize}
-\item
-realloc to a new size.
-\item
-realign to a new alignment
-\item
-fill with something.
-\end{itemize}
-The data in old dynamic object will be preserved in the new object. The type of object passed to @`realloc@ and the returned type of alloc call cannot be different.
-
-Example: @int b = alloc( 5 , a`realloc )@
-This call will realloc object a to a dynamic array that can contain 5 integers.
-
-Example: @int b = alloc( 5 , a`realloc , 32`align )@
-This call will realloc object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32.
-
-Example: @int b = alloc( 5 , a`realloc , 32`align , 2`fill )@
-This call will resize object a to a dynamic array that can contain 5 integers. The returned object will also be aligned to 32. The extra space after copying data of a to the returned object will be filled with 2.
+\begin{itemize}
+\item
+@addr@: address of an allocated object.
+\end{itemize}
+It returns the request size or zero if @addr@ is @NULL@.
+
+\paragraph{\lstinline{int malloc_stats_fd( int fd )}}
+changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@).
+
+\noindent\textbf{Usage}
+@malloc_stats_fd@ takes one parameter.
+\begin{itemize}
+\item
+@fd@: file descriptor.
+\end{itemize}
+It returns the previous file descriptor.
+
+\paragraph{\lstinline{size_t malloc_expansion()}}
+\label{p:malloc_expansion}
+sets the amount (in bytes) to extend the heap when there is insufficient free storage to service an allocation request.
+It returns the heap extension size used throughout a program, \ie it is called once at heap initialization.
+
+\paragraph{\lstinline{size_t malloc_mmap_start()}}
+sets the crossover size between allocations occurring in the @sbrk@ area or separately mapped.
+It returns the crossover point used throughout a program, \ie it is called once at heap initialization.
+
+\paragraph{\lstinline{size_t malloc_unfreed()}}
+\label{p:malloc_unfreed}
+sets the amount subtracted to adjust for unfreed program storage (debug only).
+It returns the new subtraction amount and is called by @malloc_stats@.
+
+
+\subsection{\CC Interface}
+
+The following extensions take advantage of overload polymorphism in the \CC type-system.
+
+\paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
+extends @resize@ with an alignment re\-quirement.
+
+\noindent\textbf{Usage}
+@resize@ takes three parameters.
+\begin{itemize}
+\item
+@oaddr@: address to be resized
+\item
+@nalign@: alignment requirement
+\item
+@size@: new allocation size (smaller or larger than previous)
+\end{itemize}
+It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.
+
+\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
+extends @realloc@ with an alignment re\-quirement and has the same usage as aligned @resize@.
+
+
+\subsection{\CFA Interface}
+
+The following extensions take advantage of overload polymorphism in the \CFA type-system.
+The key safety advantage of the \CFA type system is using the return type to select overloads;
+hence, a polymorphic routine knows the returned type and its size.
+This capability is used to remove the object size parameter and correctly cast the return storage to match the result type.
+For example, the following is the \CFA wrapper for C @malloc@:
+\begin{cfa}
+forall( T & | sized(T) ) {
+	T * malloc( void ) {
+		if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation
+		else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation
+	} // malloc
+} // distribution
+\end{cfa}
+and is used as follows:
+\begin{lstlisting}
+int * i = malloc();
+double * d = malloc();
+struct Spinlock { ... } __attribute__(( aligned(128) ));
+Spinlock * sl = malloc();
+\end{lstlisting}
+where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type.
+This interface removes many of the common allocation errors in C programs.
+\VRef[Figure]{f:CFADynamicAllocationAPI} shows the \CFA wrappers for the equivalent C/\CC allocation routines with the same semantic behaviour.
+
+\begin{figure}
+\begin{lstlisting}
+T * malloc( void );
+T * aalloc( size_t dim );
+T * calloc( size_t dim );
+T * resize( T * ptr, size_t size );
+T * realloc( T * ptr, size_t size );
+T * memalign( size_t align );
+T * amemalign( size_t align, size_t dim );
+T * cmemalign( size_t align, size_t dim  );
+T * aligned_alloc( size_t align );
+int posix_memalign( T ** ptr, size_t align );
+T * valloc( void );
+T * pvalloc( void );
+\end{lstlisting}
+\caption{\CFA C-Style Dynamic-Allocation API}
+\label{f:CFADynamicAllocationAPI}
+\end{figure}
+
+In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase orthogonality and usability of dynamic-memory allocation.
+This interface helps programmers in three ways.
+\begin{itemize}
+\item
+naming: \CFA regular and @ttype@ polymorphism is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations.
+\item
+named arguments: individual allocation properties are specified using postfix function call, so programmers do not have to remember parameter positions in allocation calls.
+\item
+object size: like the \CFA C-style interface, programmers do not have to specify object size or cast allocation results.
+\end{itemize}
+Note, postfix function call is an alternative call syntax, using backtick @`@, where the argument appears before the function name, \eg
+\begin{cfa}
+duration ?@`@h( int h );		// ? denotes the position of the function operand
+duration ?@`@m( int m );
+duration ?@`@s( int s );
+duration dur = 3@`@h + 42@`@m + 17@`@s;
+\end{cfa}
+@ttype@ polymorphism is similar to \CC variadic templates.
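+For comparison, a rough \CC analogue of such an interface using variadic templates might be declared as (an illustrative sketch, not the thesis implementation):
+\begin{lstlisting}
+template< typename T, typename... Props >
+T * alloc( Props... props );	// properties passed as a parameter pack
+\end{lstlisting}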
+
+\paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}}
+is overloaded with a variable number of specific allocation properties, or an integer dimension parameter followed by a variable number of specific allocation properties.
+A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
+A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
+The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations.
+The only restriction is for properties @realloc@ and @resize@, which cannot be combined.
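+For example, assuming the properties compose as described, a single call can both align and fill an array (an illustrative sketch):
+\begin{cfa}
+int * p = alloc( 5, @4096`align@, @7`fill@ );	// 5 ints on a 4096-byte boundary, each filled with 7
+\end{cfa}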
+
+The allocation property functions are:
+\subparagraph{\lstinline{T_align ?`align( size_t alignment )}}
+to align the allocation.
+The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two, \eg:
+\begin{cfa}
+int * i0 = alloc( @4096`align@ );  sout | i0 | nl;
+int * i1 = alloc( 3, @4096`align@ );  sout | i1; for ( i; 3 ) sout | &i1[i]; sout | nl;
+
+0x555555572000
+0x555555574000 0x555555574000 0x555555574004 0x555555574008
+\end{cfa}
+returns a dynamic object and object array aligned on a 4096-byte boundary.
+
+\subparagraph{\lstinline{S_fill(T) ?`fill( /* various types */ )}}
+to initialize storage.
+There are three ways to fill storage:
+\begin{enumerate}
+\item
+A char fills each byte of each object.
+\item
+An object of the returned type fills each object.
+\item
+An object array pointer fills some or all of the corresponding object array.
+\end{enumerate}
+For example:
+\begin{cfa}[numbers=left]
+int * i0 = alloc( @0n`fill@ );  sout | *i0 | nl;  // disambiguate 0
+int * i1 = alloc( @5`fill@ );  sout | *i1 | nl;
+int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl;
+int * i3 = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | i3[i]; sout | nl;
+int * i4 = alloc( 5, @0xdeadbeefN`fill@ );  for ( i; 5 ) sout | hex( i4[i] ); sout | nl;
+int * i5 = alloc( 5, @i3`fill@ );  for ( i; 5 ) sout | i5[i]; sout | nl;
+int * i6 = alloc( 5, @[i3, 3]`fill@ );  for ( i; 5 ) sout | i6[i]; sout | nl;
+\end{cfa}
+\begin{lstlisting}[numbers=left]
+0
+5
+0xfefefefe
+5 5 5 5 5
+0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef
+5 5 5 5 5
+5 5 5 -555819298 -555819298  // two undefined values
+\end{lstlisting}
+Examples 1 to 3 fill an object with a value or characters.
+Examples 4 to 7 fill an array of objects with values, another array, or part of an array.
+
+\subparagraph{\lstinline{S_resize(T) ?`resize( void * oaddr )}}
+is used to resize, realign, and fill, where the old object data is not copied to the new object.
+The old object type may be different from the new object type, since the values are not used.
+For example:
+\begin{cfa}[numbers=left]
+int * i = alloc( @5`fill@ );  sout | i | *i;
+i = alloc( @i`resize@, @256`align@, @7`fill@ );  sout | i | *i;
+double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ );  sout | d | *d;
+\end{cfa}
+\begin{lstlisting}[numbers=left]
+0x55555556d5c0 5
+0x555555570000 7
+0x555555571000 13.5
+\end{lstlisting}
+Examples 2 and 3 change the alignment, fill, and size for the initial storage of @i@.
+
+\begin{cfa}[numbers=left]
+int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
+ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
+sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;
+ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
+\end{cfa}
+\begin{lstlisting}[numbers=left]
+5 5 5 5 5
+7 7 7 7 7 7 7 7 7 7
+0x55555556d560 0x555555571a00 13 13 13 13 13
+0x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2
+\end{lstlisting}
+Examples 2 to 4 change the array size, alignment, and fill for the initial storage of @ia@.
+
+\subparagraph{\lstinline{S_realloc(T) ?`realloc( T * a )}}
+is used to resize, realign, and fill, where the old object data is copied to the new object.
+The old object type must be the same as the new object type, since the values are used.
+Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter.
+For example:
+\begin{cfa}[numbers=left]
+int * i = alloc( @5`fill@ );  sout | i | *i;
+i = alloc( @i`realloc@, @256`align@ );  sout | i | *i;
+i = alloc( @i`realloc@, @4096`align@, @13`fill@ );  sout | i | *i;
+\end{cfa}
+\begin{lstlisting}[numbers=left]
+0x55555556d5c0 5
+0x555555570000 5
+0x555555571000 5
+\end{lstlisting}
+Examples 2 and 3 change the alignment for the initial storage of @i@.
+The @13`fill@ for example 3 does nothing because no extra space is added.
+
+\begin{cfa}[numbers=left]
+int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
+ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
+sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;
+ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
+\end{cfa}
+\begin{lstlisting}[numbers=left]
+5 5 5 5 5
+5 5 5 5 5 7 7 7 7 7
+0x55555556c560 0x555555570a00 5
+0x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2
+\end{lstlisting}
+Examples 2 to 4 change the array size, alignment, and fill for the initial storage of @ia@.
+The @13`fill@ for example 3 does nothing because no extra space is added.
+
+These \CFA allocation features are used extensively in the development of the \CFA runtime.
Index: doc/theses/mubeen_zulfiqar_MMath/background.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/background.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/background.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -34,7 +34,7 @@
 \VRef[Figure]{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
 The \newterm{management data} is a data structure located at a known memory address and contains all information necessary to manage the storage data.
-The management data starts with fixed-sized information in the static-data memory that flows into the dynamic-allocation memory.
+The management data starts with fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
 The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
-Allocated objects (white) are variable sized, and allocated and maintained by the program;
+Allocated objects (light grey) are variable sized, and allocated and maintained by the program;
 \ie only the program knows the location of allocated storage, not the memory allocator.
 \begin{figure}[h]
@@ -44,5 +44,5 @@
 \label{f:AllocatorComponents}
 \end{figure}
-Freed objects (light grey) are memory deallocated by the program, which are linked into one or more lists facilitating easy location for new allocations.
+Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
 Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
 Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
@@ -54,5 +54,6 @@
 The trailer may be used to simplify an allocation implementation, \eg coalescing, and/or for security purposes to mark the end of an object.
 An object may be preceded by padding to ensure proper alignment.
-Some algorithms quantize allocation requests into distinct sizes resulting in additional spacing after objects less than the quantized value.
+Some algorithms quantize allocation requests into distinct sizes, called \newterm{buckets}, resulting in additional spacing after objects less than the quantized value.
+(Note, the buckets are often organized as an array of ascending bucket sizes for fast searching, \eg binary search, and the array is stored in the heap management-area, where each bucket points to the freed objects of that size.)
 When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
 A free object also contains management data, \eg size, chaining, etc.
@@ -81,5 +82,5 @@
 Fragmentation is memory requested from the operating system but not used by the program;
 hence, allocated objects are not fragmentation.
-\VRef[Figure]{f:InternalExternalFragmentation}) shows fragmentation is divided into two forms: internal or external.
+\VRef[Figure]{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.
 
 \begin{figure}
@@ -96,5 +97,5 @@
 An allocator should strive to keep internal management information to a minimum.
 
-\newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes freed objects, all external management data, and reserved memory.
+\newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
 This memory is problematic in two ways: heap blowup and highly fragmented memory.
 \newterm{Heap blowup} occurs when memory freed by the program is not reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
@@ -125,5 +126,5 @@
 \end{figure}
 
-For a single-threaded memory allocator, three basic approaches for controlling fragmentation have been identified~\cite{Johnstone99}.
+For a single-threaded memory allocator, three basic approaches for controlling fragmentation are identified~\cite{Johnstone99}.
 The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
 Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
@@ -132,5 +133,5 @@
 
 The second approach is a \newterm{segregated} or \newterm{binning algorithm} with a set of lists for different sized freed objects.
-When an object is allocated, the requested size is rounded up to the nearest bin-size, possibly with spacing after the object.
+When an object is allocated, the requested size is rounded up to the nearest bin-size, often leading to spacing after the object.
 A binning algorithm is fast at finding free memory of the appropriate size and allocating it, since the first free object on the free list is used.
 The fewer bin-sizes, the fewer lists need to be searched and maintained;
@@ -158,5 +159,5 @@
 Temporal locality commonly occurs during an iterative computation with a fix set of disjoint variables, while spatial locality commonly occurs when traversing an array.
 
-Hardware takes advantage of temporal and spatial locality through multiple levels of caching (\ie memory hierarchy).
+Hardware takes advantage of temporal and spatial locality through multiple levels of caching, \ie memory hierarchy.
 When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
 For example, entire cache lines are transferred between memory and cache and entire virtual-memory pages are transferred between disk and memory.
@@ -171,5 +172,5 @@
 
 There are a number of ways a memory allocator can degrade locality by increasing the working set.
-For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request (\eg sequential-fit algorithm).
+For example, a memory allocator may access multiple free objects before finding one to satisfy an allocation request, \eg sequential-fit algorithm.
 If there are a (large) number of objects accessed in very different areas of memory, the allocator may perturb the program's memory hierarchy causing multiple cache or page misses~\cite{Grunwald93}.
 Another way locality can be degraded is by spatially separating related data.
@@ -181,5 +182,5 @@
 
 A multi-threaded memory-allocator does not run any threads itself, but is used by a multi-threaded program.
-In addition to single-threaded design issues of locality and fragmentation, a multi-threaded allocator may be simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup.
+In addition to single-threaded design issues of fragmentation and locality, a multi-threaded allocator is simultaneously accessed by multiple threads, and hence, must deal with concurrency issues such as mutual exclusion, false sharing, and additional forms of heap blowup.
 
 
@@ -192,8 +193,13 @@
 Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
 Contention can be reduced in a number of ways:
+\begin{itemize}[itemsep=0pt]
+\item
 using multiple fine-grained locks versus a single lock, spreading the contention across a number of locks;
+\item
 using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff;
+\item
 using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
-However, all of these approaches have degenerate cases where contention occurs.
+\end{itemize}
+However, all of these approaches have degenerate cases when program contention is high, which arises outside of the allocator.
 
 
@@ -275,5 +281,5 @@
 \label{s:MultipleHeaps}
 
-A single-threaded allocator has at most one thread and heap, while a multi-threaded allocator has potentially multiple threads and heaps.
+A multi-threaded allocator has potentially multiple threads and heaps.
 The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
 The spectrum ranges from multiple threads using a single heap, denoted as T:1 (see \VRef[Figure]{f:SingleHeap}), to multiple threads sharing multiple heaps, denoted as T:H (see \VRef[Figure]{f:SharedHeaps}), to one thread per heap, denoted as 1:1 (see \VRef[Figure]{f:PerThreadHeap}), which is almost back to a single-threaded allocator.
@@ -339,5 +345,5 @@
 An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
 Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
-Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area.
+Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the operating system.
 
 \begin{figure}
@@ -368,5 +374,5 @@
 
 
-\paragraph{1:1 model (thread heaps)} where each thread has its own heap, which eliminates most contention and locking because threads seldom accesses another thread's heap (see ownership in \VRef{s:Ownership}).
+\paragraph{1:1 model (thread heaps)} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see ownership in \VRef{s:Ownership}).
 An additional benefit of thread heaps is improved locality due to better memory layout.
 As each thread only allocates from its heap, all objects for a thread are consolidated in the storage area for that heap, better utilizing each CPUs cache and accessing fewer pages.
@@ -380,5 +386,5 @@
 Second is to place the thread heap on a list of available heaps and reuse it for a new thread in the future.
 Destroying the thread heap immediately may reduce external fragmentation sooner, since all free objects are freed to the global heap and may be reused by other threads.
-Alternatively, reusing thread heaps may improve performance if the inheriting thread makes similar allocation requests as the thread that previously held the thread heap.
+Alternatively, reusing thread heaps may improve performance if the inheriting thread makes similar allocation requests as the thread that previously held the thread heap, because any unfreed storage is immediately accessible.
 
 
@@ -388,5 +394,5 @@
 However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the operating system, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
 It is difficult to retain this goal, if the user-threading model is directly involved with the heap model.
-\VRef[Figure]{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model provided by the language runtime.
+\VRef[Figure]{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.
 Hence, a user thread allocates/deallocates from/to the heap of the kernel thread on which it is currently executing.
 
@@ -400,6 +406,5 @@
 Adopting this model results in a subtle problem with shared heaps.
 With kernel threading, an operation that is started by a kernel thread is always completed by that thread.
-For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted.
-Any correctness locking associated with the shared heap is preserved across preemption.
+For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
 
 However, this correctness property is not preserved for user-level threading.
@@ -409,5 +414,5 @@
 However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is rare (10--100 milliseconds).
 Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
-Occasionally ignoring a preemption should be benign.
+Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in both short and long term starvation.
 
 
@@ -430,6 +435,6 @@
 
 \newterm{Ownership} defines which heap an object is returned-to on deallocation.
-If a thread returns an object to the heap it was originally allocated from, the heap has ownership of its objects.
-Alternatively, a thread can return an object to the heap it is currently allocating from, which can be any heap accessible during a thread's lifetime.
+If a thread returns an object to the heap it was originally allocated from, a heap has ownership of its objects.
+Alternatively, a thread can return an object to the heap it is currently associated with, which can be any heap accessible during a thread's lifetime.
 \VRef[Figure]{f:HeapsOwnership} shows an example of multiple heaps (minus the global heap) with and without ownership.
 Again, the arrows indicate the direction memory conceptually moves for each kind of operation.
@@ -539,4 +544,5 @@
 Only with the 1:1 model and ownership is active and passive false-sharing avoided (see \VRef{s:Ownership}).
 Passive false-sharing may still occur, if delayed ownership is used.
+Finally, a completely free container can become reserved storage and be reset to allocate objects of a new size or freed to the global heap.
 
 \begin{figure}
@@ -553,10 +559,4 @@
 \caption{Free-list Structure with Container Ownership}
 \end{figure}
-
-A fragmented heap has multiple containers that may be partially or completely free.
-A completely free container can become reserved storage and be reset to allocate objects of a new size.
-When a heap reaches a threshold of free objects, it moves some free storage to the global heap for reuse to prevent heap blowup.
-Without ownership, when a heap frees objects to the global heap, individual objects must be passed, and placed on the global-heap's free-list.
-Containers cannot be freed to the global heap unless completely free because
 
 When a container changes ownership, the ownership of all objects within it change as well.
@@ -569,5 +569,5 @@
 Note, once the object is freed by Task$_1$, no more false sharing can occur until the container changes ownership again.
 To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
-One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area.
+One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the operating system.
 
 \begin{figure}
@@ -700,5 +700,5 @@
 \end{figure}
 
-As mentioned, an implementation may have only one heap deal with the global heap, so the other heap can be simplified.
+As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
 For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
 To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
@@ -721,11 +721,11 @@
 An allocation buffer is reserved memory (see~\VRef{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
 That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
-Both any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
+Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
 The allocation buffer reduces contention and the number of global/operating-system calls.
 For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
 
-Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts.
+Allocation buffers are useful initially when there are no freed objects in a heap because many allocations usually occur when a thread starts (simple bump allocation).
 Furthermore, to prevent heap blowup, objects should be reused before allocating a new allocation buffer.
-Thus, allocation buffers are often allocated more frequently at program/thread start, and then their use often diminishes.
+Thus, allocation buffers are often allocated more frequently at program/thread start, and then allocations often diminish.
 
 Using an allocation buffer with a thread heap avoids active false-sharing, since all objects in the allocation buffer are allocated to the same thread.
@@ -746,14 +746,14 @@
 \label{s:LockFreeOperations}
 
-A lock-free algorithm guarantees safe concurrent-access to a data structure, so that at least one thread can make progress in the system, but an individual task has no bound to execution, and hence, may starve~\cite[pp.~745--746]{Herlihy93}.
-% A wait-free algorithm puts a finite bound on the number of steps any thread takes to complete an operation, so an individual task cannot starve
+A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual task has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}.
+(A \newterm{wait-free algorithm} puts a bound on the number of steps any thread takes to complete an operation to prevent starvation.)
 Lock-free operations can be used in an allocator to reduce or eliminate the use of locks.
-Locks are a problem for high contention or if the thread holding the lock is preempted and other threads attempt to use that lock.
-With respect to the heap, these situations are unlikely unless all threads makes extremely high use of dynamic-memory allocation, which can be an indication of poor design.
+While locks and lock-free data-structures often have equal performance, lock-free has the advantage of not holding a lock across preemption so other threads can continue to make progress.
+With respect to the heap, these situations are unlikely unless all threads make extremely high use of dynamic-memory allocation, which can be an indication of poor design.
 Nevertheless, lock-free algorithms can reduce the number of context switches, since a thread does not yield/block while waiting for a lock;
-on the other hand, a thread may busy-wait for an unbounded period.
+on the other hand, a thread may busy-wait for an unbounded period holding a processor.
 Finally, lock-free implementations have greater complexity and hardware dependency.
 Lock-free algorithms can be applied most easily to simple free-lists, \eg remote free-list, to allow lock-free insertion and removal from the head of a stack.
-Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is more complex.
+Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly more complex.
 Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator.
 
Index: doc/theses/mubeen_zulfiqar_MMath/figures/Alignment1.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/Alignment1.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/Alignment1.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,35 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+5 1 0 1 0 7 50 -1 -1 0.000 0 1 1 0 4350.000 -13893.750 2175 1725 4200 1875 6525 1725
+	1 1 1.00 45.00 90.00
+6 6525 1575 7650 1800
+4 0 0 50 -1 4 12 0.0000 2 195 1095 6525 1725 E$^{\\prime}$\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 4
+	 1200 1200 2100 1200 2100 1500 1200 1500
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 5700 1200 5700 1500
+2 2 0 0 0 7 60 -1 18 0.000 0 0 -1 0 0 5
+	 5700 1200 6600 1200 6600 1500 5700 1500 5700 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4200 1200 4200 1500
+2 2 0 1 0 7 50 -1 18 0.000 0 0 -1 0 0 5
+	 2100 1200 3300 1200 3300 1500 2100 1500 2100 1200
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3300 1200 6600 1200 6600 1500 3300 1500 3300 1200
+4 1 0 50 -1 4 12 0.0000 2 150 135 2100 1725 E\001
+4 1 0 50 -1 0 12 0.0000 2 180 510 4800 1425 object\001
+4 1 0 50 -1 0 12 0.0000 2 135 585 6150 1425 unused\001
+4 1 0 50 -1 0 12 0.0000 2 180 1185 1650 1425 $\\cdots$  heap\001
+4 0 0 50 -1 4 12 0.0000 2 180 390 4200 1725 A(P)\001
+4 1 0 50 -1 0 12 0.0000 2 135 540 3750 1425 header\001
+4 1 0 50 -1 0 12 0.0000 2 135 300 2700 1425 free\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 3300 1725 H\001
+4 0 0 50 -1 0 12 0.0000 2 180 1200 4650 1725 (multiple of N)\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,31 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+2 1 1 1 0 7 25 -1 -1 4.000 0 0 -1 0 0 2
+	 2100 1500 2100 1800
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 5700 1500 5700 1800
+2 2 0 0 0 7 60 -1 18 0.000 0 0 -1 0 0 5
+	 2100 1500 4200 1500 4200 1800 2100 1800 2100 1500
+2 2 0 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 5
+	 1200 1500 6600 1500 6600 1800 1200 1800 1200 1500
+2 1 1 1 0 7 25 -1 -1 4.000 0 0 -1 0 0 2
+	 4200 1500 4200 1800
+2 2 0 0 0 7 60 -1 18 0.000 0 0 -1 0 0 5
+	 5700 1500 6600 1500 6600 1800 5700 1800 5700 1500
+4 1 0 50 -1 0 12 0.0000 2 135 540 1650 1725 header\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 1200 2025 H\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 2100 2025 P\001
+4 0 0 50 -1 0 12 0.0000 2 180 1575 2175 2025 (min. alignment M)\001
+4 1 0 50 -1 0 12 0.0000 2 180 510 4950 1725 object\001
+4 1 0 50 -1 0 12 0.0000 2 135 315 4950 1425 size\001
+4 1 0 50 -1 0 12 0.0000 2 180 1815 3150 1425 internal fragmentation\001
+4 1 0 50 -1 0 12 0.0000 2 135 585 6150 1725 unused\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 4200 2025 A\001
+4 0 0 50 -1 0 12 0.0000 2 180 1200 4275 2025 (multiple of N)\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2Impl.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2Impl.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/Alignment2Impl.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,35 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 2100 1500 2100 1875
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4200 1500 4200 1875
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3300 1500 3300 1875
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3300 1725 2100 1725
+2 2 0 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 5
+	 1200 1500 5700 1500 5700 1875 1200 1875 1200 1500
+2 2 0 0 0 7 60 -1 18 0.000 0 0 -1 0 0 5
+	 2100 1500 3300 1500 3300 1875 2100 1875 2100 1500
+4 1 0 50 -1 0 12 0.0000 2 180 1815 2550 1425 internal fragmentation\001
+4 1 0 50 -1 0 12 0.0000 2 180 510 4950 1725 object\001
+4 1 0 50 -1 0 12 0.0000 2 135 315 4950 1425 size\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 1200 2100 H\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 2100 2100 P\001
+4 0 0 50 -1 0 12 0.0000 2 180 1575 2175 2100 (min. alignment M)\001
+4 1 0 50 -1 4 12 0.0000 2 150 135 4200 2100 A\001
+4 0 0 50 -1 0 12 0.0000 2 180 1200 4275 2100 (multiple of N)\001
+4 1 0 50 -1 0 12 0.0000 2 135 540 3750 1850 header\001
+4 1 0 50 -1 0 12 0.0000 2 135 345 3750 1700 fake\001
+4 1 0 50 -1 0 12 0.0000 2 135 450 2700 1700 offset\001
+4 1 0 50 -1 0 12 0.0000 2 135 540 1650 1850 header\001
+4 1 0 50 -1 0 12 0.0000 2 135 570 1650 1675 normal\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS1.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -8,155 +8,119 @@
 -2
 1200 2
-6 4200 1575 4500 1725
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 1650 20 20 4275 1650 4295 1650
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4350 1650 20 20 4350 1650 4370 1650
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4425 1650 20 20 4425 1650 4445 1650
+6 2850 2100 3150 2250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 2925 2175 20 20 2925 2175 2945 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 2175 20 20 3000 2175 3020 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3075 2175 20 20 3075 2175 3095 2175
 -6
-6 2850 2475 3150 2850
+6 4050 2100 4350 2250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 2175 20 20 4125 2175 4145 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 2175 20 20 4200 2175 4220 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 2175 20 20 4275 2175 4295 2175
+-6
+6 4650 2100 4950 2250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4725 2175 20 20 4725 2175 4745 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4800 2175 20 20 4800 2175 4820 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4875 2175 20 20 4875 2175 4895 2175
+-6
+6 3450 2100 3750 2250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3525 2175 20 20 3525 2175 3545 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3600 2175 20 20 3600 2175 3620 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3675 2175 20 20 3675 2175 3695 2175
+-6
+6 3300 2175 3600 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 2925 2475 2925 2700
+	 3375 2175 3375 2400
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2850 2700 3150 2700 3150 2850 2850 2850 2850 2700
+	 3300 2400 3600 2400 3600 2550 3300 2550 3300 2400
 -6
-6 4350 2475 4650 2850
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3150 1800 3150 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2850 1800 2850 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4650 1800 4650 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4950 1800 4950 2250
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4500 1725 4500 2250
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5100 1725 5100 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3450 1800 3450 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3750 1800 3750 2250
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3300 1725 3300 2250
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3900 1725 3900 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5250 1800 5250 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5400 1800 5400 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5550 1800 5550 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5700 1800 5700 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5850 1800 5850 2250
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2700 1725 2700 2250
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 4425 2475 4425 2700
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 4350 2700 4650 2700 4650 2850 4350 2850 4350 2700
--6
-6 3600 2475 3825 3150
+	 3375 1275 3375 1575
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3675 2475 3675 2700
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3600 2700 3825 2700 3825 2850 3600 2850 3600 2700
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3600 3000 3825 3000 3825 3150 3600 3150 3600 3000
+	 2700 1275 2700 1575
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2775 1275 2775 1575
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3675 2775 3675 3000
--6
-6 4875 3600 5175 3750
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 3675 20 20 4950 3675 4970 3675
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 3675 20 20 5025 3675 5045 3675
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 3675 20 20 5100 3675 5120 3675
--6
-6 4875 2325 5175 2475
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 2400 20 20 4950 2400 4970 2400
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 2400 20 20 5025 2400 5045 2400
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 2400 20 20 5100 2400 5120 2400
--6
-6 5625 2325 5925 2475
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5700 2400 20 20 5700 2400 5720 2400
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5775 2400 20 20 5775 2400 5795 2400
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5850 2400 20 20 5850 2400 5870 2400
--6
-6 5625 3600 5925 3750
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5700 3675 20 20 5700 3675 5720 3675
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5775 3675 20 20 5775 3675 5795 3675
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5850 3675 20 20 5850 3675 5870 3675
--6
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2400 2100 2400 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2550 2100 2550 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2700 2100 2700 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2850 2100 2850 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3000 2100 3000 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3600 2100 3600 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3900 2100 3900 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4050 2100 4050 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4200 2100 4200 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4350 2100 4350 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4500 2100 4500 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3300 1500 3300 1800
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3600 1500 3600 1800
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3900 1500 3900 1800
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3000 1500 4800 1500 4800 1800 3000 1800 3000 1500
+	 5175 1275 5175 1575
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5625 1275 5625 1575
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3750 1275 3750 1575
 2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3225 1650 2625 2100
+	 3825 1275 3825 1575
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2700 1950 6000 1950
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2700 2100 6000 2100
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2700 1800 6000 1800 6000 2250 2700 2250 2700 1800
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3150 1650 2550 2100
+	 2775 2175 2775 2400
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3450 1650 4050 2100
-2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 3375 1650 3975 2100
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2100 2100 2100 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 1950 2250 3150 2250
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3450 2250 4650 2250
+	 2775 2475 2775 2700
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 1950 2100 3150 2100 3150 2550 1950 2550 1950 2100
+	 2700 2700 2850 2700 2850 2850 2700 2850 2700 2700
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3450 2100 4650 2100 4650 2550 3450 2550 3450 2100
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2250 2100 2250 2550
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3750 2100 3750 2550
+	 2700 2400 2850 2400 2850 2550 2700 2550 2700 2400
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 2025 2475 2025 2700
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2025 2775 2025 3000
+	 4575 2175 4575 2400
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 1950 3000 2100 3000 2100 3150 1950 3150 1950 3000
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 1950 2700 2100 2700 2100 2850 1950 2850 1950 2700
+	 4500 2400 5025 2400 5025 2550 4500 2550 4500 2400
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
 	1 1 1.00 45.00 90.00
-	 1950 3750 2700 3750 2700 3525
+	 3600 3375 4350 3375 4350 3150
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 1950 3525 3150 3525 3150 3900 1950 3900 1950 3525
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
-	1 1 1.00 45.00 90.00
-	 3450 3750 4200 3750 4200 3525
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3450 3525 4650 3525 4650 3900 3450 3900 3450 3525
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
-	1 1 1.00 45.00 90.00
-	 3150 4650 4200 4650 4200 4275
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3150 4275 4650 4275 4650 4875 3150 4875 3150 4275
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 1950 2400 3150 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3450 2400 4650 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5400 2100 5400 3900
-4 2 0 50 -1 0 11 0.0000 2 120 300 1875 2250 lock\001
-4 1 0 50 -1 0 12 0.0000 2 135 1935 3900 1425 N kernel-thread buckets\001
-4 1 0 50 -1 0 12 0.0000 2 195 810 4425 2025 heap$_2$\001
-4 1 0 50 -1 0 12 0.0000 2 195 810 2175 2025 heap$_1$\001
-4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2400 size\001
-4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2550 free\001
-4 1 0 50 -1 0 12 0.0000 2 180 825 2550 3450 local pool\001
-4 0 0 50 -1 0 12 0.0000 2 135 360 3525 3700 lock\001
-4 0 0 50 -1 0 12 0.0000 2 135 360 3225 4450 lock\001
-4 2 0 50 -1 0 12 0.0000 2 135 600 1875 3000 free list\001
-4 1 0 50 -1 0 12 0.0000 2 180 825 4050 3450 local pool\001
-4 1 0 50 -1 0 12 0.0000 2 180 1455 3900 4200 global pool (sbrk)\001
-4 0 0 50 -1 0 12 0.0000 2 135 360 2025 3700 lock\001
-4 1 0 50 -1 0 12 0.0000 2 180 720 6450 3150 free pool\001
-4 1 0 50 -1 0 12 0.0000 2 180 390 6450 2925 heap\001
+	 3600 3150 5100 3150 5100 3525 3600 3525 3600 3150
+4 2 0 50 -1 0 11 0.0000 2 135 300 2625 1950 lock\001
+4 1 0 50 -1 0 11 0.0000 2 150 1155 3000 1725 N$\\times$S$_1$\001
+4 1 0 50 -1 0 11 0.0000 2 150 1155 3600 1725 N$\\times$S$_2$\001
+4 1 0 50 -1 0 12 0.0000 2 180 390 4425 1500 heap\001
+4 2 0 50 -1 0 12 0.0000 2 135 1140 2550 1425 kernel threads\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2100 size\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2250 free\001
+4 2 0 50 -1 0 12 0.0000 2 135 600 2625 2700 free list\001
+4 0 0 50 -1 0 12 0.0000 2 135 360 3675 3325 lock\001
+4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 3075 global pool (sbrk)\001
+4 1 0 50 -1 0 11 0.0000 2 150 1110 4800 1725 N$\\times$S$_t$\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/AllocDS2.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -8,119 +8,141 @@
 -2
 1200 2
-6 2850 2100 3150 2250
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 2925 2175 20 20 2925 2175 2945 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 2175 20 20 3000 2175 3020 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3075 2175 20 20 3075 2175 3095 2175
--6
-6 4050 2100 4350 2250
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 2175 20 20 4125 2175 4145 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 2175 20 20 4200 2175 4220 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 2175 20 20 4275 2175 4295 2175
--6
-6 4650 2100 4950 2250
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4725 2175 20 20 4725 2175 4745 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4800 2175 20 20 4800 2175 4820 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4875 2175 20 20 4875 2175 4895 2175
--6
-6 3450 2100 3750 2250
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3525 2175 20 20 3525 2175 3545 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3600 2175 20 20 3600 2175 3620 2175
-1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3675 2175 20 20 3675 2175 3695 2175
--6
-6 3300 2175 3600 2550
+6 2850 2475 3150 2850
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3375 2175 3375 2400
+	 2925 2475 2925 2700
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3300 2400 3600 2400 3600 2550 3300 2550 3300 2400
+	 2850 2700 3150 2700 3150 2850 2850 2850 2850 2700
+-6
+6 4350 2475 4650 2850
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4425 2475 4425 2700
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4350 2700 4650 2700 4650 2850 4350 2850 4350 2700
+-6
+6 3600 2475 3825 3150
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3675 2475 3675 2700
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3600 2700 3825 2700 3825 2850 3600 2850 3600 2700
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3600 3000 3825 3000 3825 3150 3600 3150 3600 3000
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3675 2775 3675 3000
+-6
+6 1950 3525 3150 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+	1 1 1.00 45.00 90.00
+	 1950 3750 2700 3750 2700 3525
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 1950 3525 3150 3525 3150 3900 1950 3900 1950 3525
+4 0 0 50 -1 0 12 0.0000 2 135 360 2025 3700 lock\001
+-6
+6 4050 1575 4350 1725
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4125 1650 20 20 4125 1650 4145 1650
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 1650 20 20 4200 1650 4220 1650
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4275 1650 20 20 4275 1650 4295 1650
+-6
+6 4875 2325 6150 3750
+6 4875 2325 5175 2475
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 2400 20 20 4950 2400 4970 2400
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 2400 20 20 5025 2400 5045 2400
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 2400 20 20 5100 2400 5120 2400
+-6
+6 4875 3600 5175 3750
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4950 3675 20 20 4950 3675 4970 3675
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5025 3675 20 20 5025 3675 5045 3675
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 3675 20 20 5100 3675 5120 3675
+-6
+4 1 0 50 -1 0 12 0.0000 2 180 900 5700 3150 local pools\001
+4 1 0 50 -1 0 12 0.0000 2 180 465 5700 2925 heaps\001
+-6
+6 3600 4050 5100 4650
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+	1 1 1.00 45.00 90.00
+	 3600 4500 4350 4500 4350 4275
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3600 4275 5100 4275 5100 4650 3600 4650 3600 4275
+4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 4200 global pool (sbrk)\001
+4 0 0 50 -1 0 12 0.0000 2 135 360 3675 4450 lock\001
 -6
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3150 1800 3150 2250
+	 2400 2100 2400 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2850 1800 2850 2250
+	 2550 2100 2550 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4650 1800 4650 2250
+	 2700 2100 2700 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4950 1800 4950 2250
-2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 4500 1725 4500 2250
-2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5100 1725 5100 2250
+	 2850 2100 2850 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3450 1800 3450 2250
+	 3000 2100 3000 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3750 1800 3750 2250
-2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3300 1725 3300 2250
-2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 3900 1725 3900 2250
+	 3600 2100 3600 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5250 1800 5250 2250
+	 3900 2100 3900 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5400 1800 5400 2250
+	 4050 2100 4050 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5550 1800 5550 2250
+	 4200 2100 4200 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5700 1800 5700 2250
+	 4350 2100 4350 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 5850 1800 5850 2250
-2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2700 1725 2700 2250
+	 4500 2100 4500 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3300 1500 3300 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 1500 3600 1800
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3000 1500 4800 1500 4800 1800 3000 1800 3000 1500
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 3375 1275 3375 1575
+	 3150 1650 2550 2100
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 2700 1275 2700 1575
-2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2775 1275 2775 1575
+	 3450 1650 4050 2100
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2100 2100 2100 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 1950 2250 3150 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3450 2250 4650 2250
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 1950 2100 3150 2100 3150 2550 1950 2550 1950 2100
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3450 2100 4650 2100 4650 2550 3450 2550 3450 2100
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2250 2100 2250 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3750 2100 3750 2550
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 5175 1275 5175 1575
+	 2025 2475 2025 2700
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
 	1 1 1.00 45.00 90.00
-	 5625 1275 5625 1575
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 3750 1275 3750 1575
-2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 3825 1275 3825 1575
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2700 1950 6000 1950
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
-	 2700 2100 6000 2100
+	 2025 2775 2025 3000
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2700 1800 6000 1800 6000 2250 2700 2250 2700 1800
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2775 2175 2775 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 2775 2475 2775 2700
+	 1950 3000 2100 3000 2100 3150 1950 3150 1950 3000
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2700 2700 2850 2700 2850 2850 2700 2850 2700 2700
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 2700 2400 2850 2400 2850 2550 2700 2550 2700 2400
-2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
-	1 1 1.00 45.00 90.00
-	 4575 2175 4575 2400
-2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 4500 2400 5025 2400 5025 2550 4500 2550 4500 2400
+	 1950 2700 2100 2700 2100 2850 1950 2850 1950 2700
 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
 	1 1 1.00 45.00 90.00
-	 3600 3525 4650 3525 4650 3150
+	 3450 3750 4200 3750 4200 3525
 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
-	 3600 3150 5100 3150 5100 3750 3600 3750 3600 3150
-4 2 0 50 -1 0 11 0.0000 2 120 300 2625 1950 lock\001
-4 1 0 50 -1 0 10 0.0000 2 150 1155 3000 1725 N$\\times$S$_1$\001
-4 1 0 50 -1 0 10 0.0000 2 150 1155 3600 1725 N$\\times$S$_2$\001
-4 1 0 50 -1 0 12 0.0000 2 180 390 4425 1500 heap\001
-4 2 0 50 -1 0 12 0.0000 2 135 1140 2550 1425 kernel threads\001
-4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2100 size\001
-4 2 0 50 -1 0 11 0.0000 2 120 270 2625 2250 free\001
-4 2 0 50 -1 0 12 0.0000 2 135 600 2625 2700 free list\001
-4 0 0 50 -1 0 12 0.0000 2 135 360 3675 3325 lock\001
-4 1 0 50 -1 0 12 0.0000 2 180 1455 4350 3075 global pool (sbrk)\001
-4 1 0 50 -1 0 10 0.0000 2 150 1110 4800 1725 N$\\times$S$_t$\001
+	 3450 3525 4650 3525 4650 3900 3450 3900 3450 3525
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 1950 2400 3150 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3450 2400 4650 2400
+4 2 0 50 -1 0 11 0.0000 2 135 300 1875 2250 lock\001
+4 1 0 50 -1 0 12 0.0000 2 180 1245 3900 1425 H heap buckets\001
+4 1 0 50 -1 0 12 0.0000 2 180 810 4425 2025 heap$_2$\001
+4 1 0 50 -1 0 12 0.0000 2 180 810 2175 2025 heap$_1$\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2400 size\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 1875 2550 free\001
+4 1 0 50 -1 0 12 0.0000 2 180 825 2550 3450 local pool\001
+4 0 0 50 -1 0 12 0.0000 2 135 360 3525 3700 lock\001
+4 2 0 50 -1 0 12 0.0000 2 135 600 1875 3000 free list\001
+4 1 0 50 -1 0 12 0.0000 2 180 825 4050 3450 local pool\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/FakeHeader.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/FakeHeader.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/FakeHeader.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,24 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2700 1500 2700 1800
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 1200 1500 4200 1500 4200 1800 1200 1800 1200 1500
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 2550 1500 2550 1800
+2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 3
+	1 1 1.00 45.00 90.00
+	 2925 1950 2625 1950 2625 1800
+4 1 0 50 -1 0 12 0.0000 2 135 450 3450 1725 offset\001
+4 1 0 50 -1 0 12 0.0000 2 180 825 1950 1725 alignment\001
+4 1 0 50 -1 0 12 0.0000 2 135 105 2625 1725 1\001
+4 0 0 50 -1 0 12 0.0000 2 180 1920 3000 2025 alignment (fake header)\001
+4 1 0 50 -1 0 12 0.0000 2 180 765 1950 1425 4/8-bytes\001
+4 1 0 50 -1 0 12 0.0000 2 180 765 3450 1425 4/8-bytes\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/Header.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/Header.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/Header.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,40 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 1800 1800 4200 1800
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 1800 2100 4200 2100
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 1800 1500 4200 1500 4200 2400 1800 2400 1800 1500
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3900 1500 3900 2400
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3600 1500 3600 2400
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3300 1500 3300 2400
+2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 3
+	1 1 1.00 45.00 90.00
+	 4050 2625 3750 2625 3750 2400
+2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 1 0 3
+	1 1 1.00 45.00 90.00
+	 4050 2850 3450 2850 3450 2400
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4200 1800 6600 1800 6600 2100 4200 2100 4200 1800
+4 0 0 50 -1 0 12 0.0000 2 180 1185 1875 1725 bucket pointer\001
+4 0 0 50 -1 0 12 0.0000 2 180 1005 1875 2025 mapped size\001
+4 0 0 50 -1 0 12 0.0000 2 135 1215 1875 2325 next free block\001
+4 2 0 50 -1 0 12 0.0000 2 135 480 1725 2025 union\001
+4 1 0 50 -1 0 12 0.0000 2 135 270 3775 2325 0/1\001
+4 1 0 50 -1 0 12 0.0000 2 135 270 3475 2325 0/1\001
+4 1 0 50 -1 0 12 0.0000 2 180 945 5400 2025 request size\001
+4 1 0 50 -1 0 12 0.0000 2 180 765 5400 1425 4/8-bytes\001
+4 1 0 50 -1 0 12 0.0000 2 180 765 3000 1425 4/8-bytes\001
+4 0 0 50 -1 0 12 0.0000 2 135 825 4125 2700 zero filled\001
+4 0 0 50 -1 0 12 0.0000 2 180 1515 4125 2925 mapped allocation\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/UserKernelHeaps.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/UserKernelHeaps.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/UserKernelHeaps.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,59 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+6 1500 1200 2100 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 1800 1350 150 150 1800 1350 1950 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 1800 1425 U$_2$\001
+-6
+6 1050 1200 1650 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 1350 1350 150 150 1350 1350 1500 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 1350 1425 U$_1$\001
+-6
+6 1950 1200 2550 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2250 1350 150 150 2250 1350 2400 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 2250 1425 U$_3$\001
+-6
+6 2850 1200 3450 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3150 1350 150 150 3150 1350 3300 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 3150 1425 U$_4$\001
+-6
+6 2400 1200 3000 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2700 1350 150 150 2700 1350 2850 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 2700 1425 U$_5$\001
+-6
+6 3300 1200 3900 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3600 1350 150 150 3600 1350 3750 1350
+4 1 0 50 -1 0 11 0.0000 2 165 495 3600 1425 U$_6$\001
+-6
+6 2175 1800 2775 2100
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2475 1950 150 150 2475 1950 2625 1950
+4 1 0 50 -1 0 11 0.0000 2 165 495 2475 2025 K$_2$\001
+-6
+6 1725 1800 2325 2100
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2025 1950 150 150 2025 1950 2175 1950
+4 1 0 50 -1 0 11 0.0000 2 165 495 2025 2025 K$_1$\001
+-6
+6 2625 1800 3225 2100
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2925 1950 150 150 2925 1950 3075 1950
+4 1 0 50 -1 0 11 0.0000 2 165 495 2925 2025 K$_3$\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	0 0 1.00 45.00 90.00
+	0 0 1.00 45.00 90.00
+	 2025 2100 2025 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	0 0 1.00 45.00 90.00
+	0 0 1.00 45.00 90.00
+	 2475 2100 2475 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	0 0 1.00 45.00 90.00
+	0 0 1.00 45.00 90.00
+	 2925 2100 2925 2400
+4 1 0 50 -1 0 11 0.0000 2 135 2235 2475 1725 scheduled across kernel threads\001
+4 1 0 50 -1 0 11 0.0000 2 180 2145 2475 2625 K:1 or K:H or 1:1 heap model\001
Index: doc/theses/mubeen_zulfiqar_MMath/figures/llheap.fig
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/figures/llheap.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/mubeen_zulfiqar_MMath/figures/llheap.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,177 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+6 1275 1950 1725 2250
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 1275 1950 1725 1950 1725 2250 1275 2250 1275 1950
+4 1 0 50 -1 0 12 0.0000 2 135 360 1500 2175 lock\001
+-6
+6 4125 4050 4275 4350
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 4125 20 20 4200 4125 4220 4125
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 4200 20 20 4200 4200 4220 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 4200 4275 20 20 4200 4275 4220 4275
+-6
+6 5025 3825 5325 3975
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5100 3900 20 20 5100 3900 5120 3900
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5175 3900 20 20 5175 3900 5195 3900
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 5250 3900 20 20 5250 3900 5270 3900
+-6
+6 6150 2025 6450 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6225 2100 20 20 6225 2100 6245 2100
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6300 2100 20 20 6300 2100 6320 2100
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6375 2100 20 20 6375 2100 6395 2100
+-6
+6 3225 4650 3675 4950
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3225 4650 3675 4650 3675 4950 3225 4950 3225 4650
+4 1 0 50 -1 0 12 0.0000 2 135 360 3450 4875 lock\001
+-6
+6 3750 2325 3900 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3825 2325 3825 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3750 2550 3900 2550 3900 2700 3750 2700 3750 2550
+-6
+6 6750 2025 7050 2175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6825 2100 20 20 6825 2100 6845 2100
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6900 2100 20 20 6900 2100 6920 2100
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6975 2100 20 20 6975 2100 6995 2100
+-6
+6 2550 3150 3450 4350
+6 2925 4050 3075 4350
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 4125 20 20 3000 4125 3020 4125
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 4200 20 20 3000 4200 3020 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 3000 4275 20 20 3000 4275 3020 4275
+-6
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2550 3375 3450 3375 3450 3600 2550 3600 2550 3375
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2550 3750 3450 3750 3450 3975 2550 3975 2550 3750
+4 1 0 50 -1 0 12 0.0000 2 180 900 3000 3300 local pools\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2850 1800 2850 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3000 1800 3000 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3150 1800 3150 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3300 1800 3300 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3450 1800 3450 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2550 1800 2550 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 1950 3600 1950
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2700 1800 2700 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 2100 3600 2100
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2400 1800 3600 1800 3600 2400 2400 2400 2400 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 2250 3600 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2475 2325 2475 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2475 2625 2475 2850
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2400 2850 2550 2850 2550 3000 2400 3000 2400 2850
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2400 2550 2550 2550 2550 2700 2400 2700 2400 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2925 2175 2925 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2925 2625 2925 2850
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2850 2850 3000 2850 3000 3000 2850 3000 2850 2850
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2850 2550 3000 2550 3000 2700 2850 2700 2850 2550
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 1650 3600 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3375 2325 3375 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3225 2550 3525 2550 3525 2700 3225 2700 3225 2550
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4050 1800 4050 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4200 1800 4200 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4350 1800 4350 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4500 1800 4500 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4650 1800 4650 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3750 1800 3750 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 1950 4800 1950
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3900 1800 3900 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 2100 4800 2100
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3600 1800 4800 1800 4800 2400 3600 2400 3600 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 2250 4800 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4125 2175 4125 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4050 2550 4200 2550 4200 2700 4050 2700 4050 2550
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4800 1650 4800 2550
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5400 1650 5400 2550
+2 1 0 3 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 6000 1650 6000 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4800 1800 6600 1800 6600 2400 4800 2400 4800 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4575 2625 4575 2850
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4575 2325 4575 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4425 2550 4725 2550 4725 2700 4425 2700 4425 2550
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4425 2850 4725 2850 4725 3000 4425 3000 4425 2850
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3750 3375 4650 3375 4650 3600 3750 3600 3750 3375
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3750 3750 4650 3750 4650 3975 3750 3975 3750 3750
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3825 4650 5325 4650 5325 4950 3825 4950 3825 4650
+2 2 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 5
+	 1200 3900 1950 3900 1950 4425 1200 4425 1200 3900
+2 2 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 5
+	 1200 3000 1800 3000 1800 3525 1200 3525 1200 3000
+4 2 0 50 -1 0 11 0.0000 2 135 300 2325 1950 lock\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 2325 2100 size\001
+4 2 0 50 -1 0 11 0.0000 2 120 270 2325 2400 free\001
+4 2 0 50 -1 0 11 0.0000 2 165 495 2325 2250 (away)\001
+4 1 0 50 -1 0 12 0.0000 2 180 1455 4575 4575 global pool (sbrk)\001
+4 1 0 50 -1 0 12 0.0000 2 180 900 4200 3300 local pools\001
+4 1 0 50 -1 0 12 0.0000 2 180 1695 4350 1425 global heaps (mmap)\001
+4 1 0 50 -1 0 12 0.0000 2 180 810 3000 1725 heap$_1$\001
+4 1 0 50 -1 0 12 0.0000 2 180 810 4200 1725 heap$_2$\001
+4 1 0 50 -1 0 11 0.0000 2 120 255 1500 3150 fast\001
+4 1 0 50 -1 0 11 0.0000 2 180 495 1500 3300 lookup\001
+4 1 0 50 -1 0 11 0.0000 2 135 330 1500 3450 table\001
+4 1 0 50 -1 0 11 0.0000 2 120 315 1575 4050 stats\001
+4 1 0 50 -1 0 11 0.0000 2 120 600 1575 4200 counters\001
+4 1 0 50 -1 0 11 0.0000 2 135 330 1575 4350 table\001
Index: doc/theses/mubeen_zulfiqar_MMath/intro.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/intro.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/intro.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -48,5 +48,5 @@
 Attempts have been made to perform quasi garbage collection in C/\CC~\cite{Boehm88}, but it is a compromise.
 This thesis only examines dynamic memory-management with \emph{explicit} deallocation.
-While garbage collection and compaction are not part this work, many of the results are applicable to the allocation phase in any memory-management approach.
+While garbage collection and compaction are not part of this work, many of its results are applicable to the allocation phase in any memory-management approach.
 
 Most programs use a general-purpose allocator, often the one provided implicitly by the programming-language's runtime.
@@ -65,14 +65,22 @@
 \begin{enumerate}[leftmargin=*]
 \item
-Implementation of a new stand-lone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
-
-\item
-Adopt returning of @nullptr@ for a zero-sized allocation, rather than an actual memory address, both of which can be passed to @free@.
-
-\item
-Extended the standard C heap functionality by preserving with each allocation its original request size versus the amount allocated, if an allocation is zero fill, and the allocation alignment.
-
-\item
-Use the zero fill and alignment as \emph{sticky} properties for @realloc@, to realign existing storage, or preserve existing zero-fill and alignment when storage is copied.
+Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running over multiple kernel threads (M:N threading).
+
+\item
+Adopt a @nullptr@ return for a zero-sized allocation, rather than an actual memory address, either of which can be passed to @free@.
+
+\item
+Extend the standard C heap functionality by preserving with each allocation:
+\begin{itemize}[itemsep=0pt]
+\item
+its request size and the amount allocated,
+\item
+whether an allocation is zero fill,
+\item
+and allocation alignment.
+\end{itemize}
+
+\item
+Use the preserved zero fill and alignment as \emph{sticky} properties for @realloc@ to zero-fill and align when storage is extended or copied.
 Without this extension, it is unsafe to @realloc@ storage initially allocated with zero-fill/alignment as these properties are not preserved when copying.
 This silent generation of a problem is unintuitive to programmers and difficult to locate because it is transient.
@@ -86,5 +94,5 @@
 @resize( oaddr, alignment, size )@ re-purpose an old allocation with new alignment but \emph{without} preserving fill.
 \item
-@realloc( oaddr, alignment, size )@ same as previous @realloc@ but adding or changing alignment.
+@realloc( oaddr, alignment, size )@ same as @realloc@ but adding or changing alignment.
 \item
 @aalloc( dim, elemSize )@ same as @calloc@ except memory is \emph{not} zero filled.
@@ -96,5 +104,5 @@
 
 \item
-Provide additional heap wrapper functions in \CFA to provide a complete orthogonal set of allocation operations and properties.
+Provide additional heap wrapper functions in \CFA to create an orthogonal set of allocation operations and properties.
 
 \item
@@ -109,5 +117,5 @@
 @malloc_size( addr )@ returns the size of the memory allocation pointed-to by @addr@.
 \item
-@malloc_usable_size( addr )@ returns the usable size of the memory pointed-to by @addr@, i.e., the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
+@malloc_usable_size( addr )@ returns the usable (total) size of the memory pointed-to by @addr@, i.e., the bin size containing the allocation, where @malloc_size( addr )@ $\le$ @malloc_usable_size( addr )@.
 \end{itemize}
 
@@ -116,5 +124,5 @@
 
 \item
-Provide complete, fast, and contention-free allocation statistics to help understand program behaviour:
+Provide complete, fast, and contention-free allocation statistics to help understand allocation behaviour:
 \begin{itemize}
 \item
Index: doc/theses/mubeen_zulfiqar_MMath/performance.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/performance.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/performance.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -1,3 +1,4 @@
 \chapter{Performance}
+\label{c:Performance}
 
 \section{Machine Specification}
Index: doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.bib	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -124,10 +124,33 @@
 }
 
-@misc{nedmalloc,
-    author	= {Niall Douglas},
-    title	= {nedmalloc version 1.06 Beta},
-    month	= jan,
-    year	= 2010,
-    note	= {\textsf{http://\-prdownloads.\-sourceforge.\-net/\-nedmalloc/\-nedmalloc\_v1.06beta1\_svn1151.zip}},
+@misc{ptmalloc2,
+    author	= {Wolfram Gloger},
+    title	= {ptmalloc version 2},
+    month	= jun,
+    year	= 2006,
+    note	= {\href{http://www.malloc.de/malloc/ptmalloc2-current.tar.gz}{http://www.malloc.de/\-malloc/\-ptmalloc2-current.tar.gz}},
+}
+
+@misc{GNUallocAPI,
+    author	= {GNU},
+    title	= {Summary of malloc-Related Functions},
+    year	= 2020,
+    note	= {\href{https://www.gnu.org/software/libc/manual/html\_node/Summary-of-Malloc.html}{https://www.gnu.org/\-software/\-libc/\-manual/\-html\_node/\-Summary-of-Malloc.html}},
+}
+
+@misc{SeriallyReusable,
+    author	= {IBM},
+    title	= {Serially reusable programs},
+    month	= mar,
+    year	= 2021,
+    note	= {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures-serially-reusable-programs}},
+}
+
+@misc{librseq,
+    author	= {Mathieu Desnoyers},
+    title	= {Library for Restartable Sequences},
+    month	= mar,
+    year	= 2022,
+    note	= {\href{https://github.com/compudj/librseq}{https://github.com/compudj/librseq}},
 }
 
Index: doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/mubeen_zulfiqar_MMath/uw-ethesis.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -60,4 +60,5 @@
 % For hyperlinked PDF, suitable for viewing on a computer, use this:
 \documentclass[letterpaper,12pt,titlepage,oneside,final]{book}
+\usepackage[T1]{fontenc}	% Latin-1 => 8-bit characters, => | not dash, <> not Spanish question marks
 
 % For PDF, suitable for double-sided printing, change the PrintVersion variable below to "true" and use this \documentclass line instead of the one above:
@@ -94,5 +95,6 @@
 % Use the "hyperref" package
 % N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE
-\usepackage[pagebackref=true]{hyperref} % with basic options
+\usepackage{url}
+\usepackage[dvips,pagebackref=true]{hyperref} % with basic options
 %\usepackage[pdftex,pagebackref=true]{hyperref}
 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing.
@@ -113,5 +115,6 @@
     citecolor=blue,        % color of links to bibliography
     filecolor=magenta,      % color of file links
-    urlcolor=blue           % color of external links
+    urlcolor=blue,           % color of external links
+    breaklinks=true
 }
 \ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
@@ -122,4 +125,7 @@
     urlcolor=black
 }}{} % end of ifthenelse (no else)
+%\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,pagebackref=true,breaklinks=true,colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}
+\usepackage{breakurl}
+\urlstyle{sf}
 
 %\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package
@@ -171,5 +177,6 @@
 \input{common}
 %\usepackageinput{common}
-\CFAStyle						% CFA code-style for all languages
+\CFAStyle						% CFA code-style
+\lstset{language=CFA}					% default language
 \lstset{basicstyle=\linespread{0.9}\sf}			% CFA typewriter font
 \newcommand{\uC}{$\mu$\CC}
Index: doc/theses/thierry_delisle_PhD/thesis/Makefile
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/Makefile	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/Makefile	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -29,4 +29,7 @@
 PICTURES = ${addsuffix .pstex, \
 	base \
+	base_avg \
+	cache-share \
+	cache-noshare \
 	empty \
 	emptybit \
@@ -38,4 +41,5 @@
 	system \
 	cycle \
+	result.cycle.jax.ops \
 }
 
@@ -112,4 +116,10 @@
 	python3 $< $@
 
+build/result.%.ns.svg : data/% | ${Build}
+	../../../../benchmark/plot.py -f $< -o $@ -y "ns per ops"
+
+build/result.%.ops.svg : data/% | ${Build}
+	../../../../benchmark/plot.py -f $< -o $@ -y "Ops per second"
+
 ## pstex with inverted colors
 %.dark.pstex : fig/%.fig Makefile | ${Build}
Index: doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/thierry_delisle_PhD/thesis/data/cycle.jax	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,1 @@
+[["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 43606897.0, "Ops per second": 8720908.73, "ns per ops": 114.67, "Ops per threads": 2180344.0, "Ops per procs": 10901724.0, "Ops/sec/procs": 2180227.18, "ns per ops/procs": 458.67}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5010.922033, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 93993568.0, "Total blocks": 93993209.0, "Ops per second": 18757739.07, "ns per ops": 53.31, "Ops per threads": 1174919.0, "Ops per procs": 5874598.0, "Ops/sec/procs": 1172358.69, "ns per ops/procs": 852.98}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 136763517.0, "Ops per second": 27351079.35, "ns per ops": 36.56, "Ops per threads": 1709543.0, "Ops per procs": 8547719.0, "Ops/sec/procs": 1709442.46, "ns per ops/procs": 584.99}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 27778961.0, "Ops per second": 5555545.09, "ns per ops": 180.0, "Ops per threads": 5555792.0, "Ops per procs": 27778961.0, "Ops/sec/procs": 5555545.09, "ns per ops/procs": 180.0}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5009.290878, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 43976310.0, "Total blocks": 43976217.0, "Ops per second": 8778949.17, "ns per ops": 113.91, "Ops per threads": 2198815.0, "Ops per procs": 10994077.0, "Ops/sec/procs": 2194737.29, "ns per ops/procs": 455.64}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", 
{"Duration (ms)": 5009.151542, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44132300.0, "Total blocks": 44132201.0, "Ops per second": 8810334.37, "ns per ops": 113.5, "Ops per threads": 2206615.0, "Ops per procs": 11033075.0, "Ops/sec/procs": 2202583.59, "ns per ops/procs": 454.01}],["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 46353896.0, "Ops per second": 9270294.11, "ns per ops": 107.87, "Ops per threads": 2317694.0, "Ops per procs": 11588474.0, "Ops/sec/procs": 2317573.53, "ns per ops/procs": 431.49}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 27894379.0, "Ops per second": 5578591.58, "ns per ops": 179.26, "Ops per threads": 5578875.0, "Ops per procs": 27894379.0, "Ops/sec/procs": 5578591.58, "ns per ops/procs": 179.26}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.743463, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32825528.0, "Total blocks": 32825527.0, "Ops per second": 6553645.29, "ns per ops": 152.59, "Ops per threads": 6565105.0, "Ops per procs": 32825528.0, "Ops/sec/procs": 6553645.29, "ns per ops/procs": 152.59}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 138213098.0, "Ops per second": 27640977.5, "ns per ops": 36.18, "Ops per threads": 1727663.0, "Ops per procs": 8638318.0, "Ops/sec/procs": 1727561.09, "ns per ops/procs": 578.85}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5007.914168, "Number of processors": 4.0, "Number of 
threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44109513.0, "Total blocks": 44109419.0, "Ops per second": 8807961.06, "ns per ops": 113.53, "Ops per threads": 2205475.0, "Ops per procs": 11027378.0, "Ops/sec/procs": 2201990.27, "ns per ops/procs": 454.13}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5012.121876, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 94130673.0, "Total blocks": 94130291.0, "Ops per second": 18780603.37, "ns per ops": 53.25, "Ops per threads": 1176633.0, "Ops per procs": 5883167.0, "Ops/sec/procs": 1173787.71, "ns per ops/procs": 851.94}],["rdq-cycle-go", "./rdq-cycle-go -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 140936367.0, "Ops per second": 28185668.38, "ns per ops": 35.48, "Ops per threads": 1761704.0, "Ops per procs": 8808522.0, "Ops/sec/procs": 1761604.27, "ns per ops/procs": 567.66}],["rdq-cycle-go", "./rdq-cycle-go -t 4 -p 4 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 4.0, "Number of threads": 20.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 44279585.0, "Ops per second": 8855475.01, "ns per ops": 112.92, "Ops per threads": 2213979.0, "Ops per procs": 11069896.0, "Ops/sec/procs": 2213868.75, "ns per ops/procs": 451.7}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.37392, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32227534.0, "Total blocks": 32227533.0, "Ops per second": 6434730.02, "ns per ops": 155.41, "Ops per threads": 6445506.0, "Ops per procs": 32227534.0, "Ops/sec/procs": 6434730.02, "ns per ops/procs": 155.41}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 16 -p 16 -d 5 -r 5", {"Duration (ms)": 5011.019789, "Number of processors": 16.0, "Number of threads": 80.0, "Cycle size 
(# thrds)": 5.0, "Total Operations(ops)": 90600569.0, "Total blocks": 90600173.0, "Ops per second": 18080265.66, "ns per ops": 55.31, "Ops per threads": 1132507.0, "Ops per procs": 5662535.0, "Ops/sec/procs": 1130016.6, "ns per ops/procs": 884.94}],["rdq-cycle-cfa", "./rdq-cycle-cfa -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5008.52474, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 32861776.0, "Total blocks": 32861775.0, "Ops per second": 6561168.75, "ns per ops": 152.41, "Ops per threads": 6572355.0, "Ops per procs": 32861776.0, "Ops/sec/procs": 6561168.75, "ns per ops/procs": 152.41}],["rdq-cycle-go", "./rdq-cycle-go -t 1 -p 1 -d 5 -r 5", {"Duration (ms)": 5000.0, "Number of processors": 1.0, "Number of threads": 5.0, "Cycle size (# thrds)": 5.0, "Total Operations(ops)": 28097680.0, "Ops per second": 5619274.9, "ns per ops": 177.96, "Ops per threads": 5619536.0, "Ops per procs": 28097680.0, "Ops/sec/procs": 5619274.9, "ns per ops/procs": 177.96}]]
Index: doc/theses/thierry_delisle_PhD/thesis/fig/base.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/base.fig	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/base.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -89,4 +89,10 @@
 	 5700 5210 5550 4950 5250 4950 5100 5210 5250 5470 5550 5470
 	 5700 5210
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3600 5700 3600 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4800 5700 4800 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 6000 5700 6000 1200
 4 2 -1 50 -1 0 12 0.0000 2 135 630 2100 3075 Threads\001
 4 2 -1 50 -1 0 12 0.0000 2 165 450 2100 2850 Ready\001
Index: doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/base_avg.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,107 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+6 6750 4125 7050 4275
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6825 4200 20 20 6825 4200 6845 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6900 4200 20 20 6900 4200 6920 4200
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6975 4200 20 20 6975 4200 6995 4200
+-6
+6 6375 5100 6675 5250
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6450 5175 20 20 6450 5175 6470 5175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6525 5175 20 20 6525 5175 6545 5175
+1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6600 5175 20 20 6600 5175 6620 5175
+-6
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3900 2400 300 300 3900 2400 4200 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3900 3300 300 300 3900 3300 4200 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 1500 300 300 5100 1500 5400 1500
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 2400 300 300 5100 2400 5400 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 5100 3300 300 300 5100 3300 5400 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6300 2400 300 300 6300 2400 6600 2400
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6300 3300 300 300 6300 3300 6600 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 4509 3302 300 300 4509 3302 4809 3302
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2700 3300 300 300 2700 3300 3000 3300
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2700 2400 300 300 2700 2400 3000 2400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3000 3900 3000 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 3600 3900 3600 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4200 3900 4200 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 4800 3900 4800 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 5400 3900 5400 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 6000 3900 6000 4500
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 6600 3900 6600 4500
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2400 3900 7200 3900 7200 4500 2400 4500 2400 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2700 3300 2700 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3900 3300 3900 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 3900 3975 3900 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 2475 5100 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 3300 5100 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 5100 3975 5100 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 6300 3300 6300 2700
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 6300 3975 6300 3600
+2 1 0 1 -1 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 2700 3975 2700 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 4275 3000 4275
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+	1 1 1.00 45.00 90.00
+	 4500 3975 4500 3600
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 3375 3000 3375
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 2475 3000 2475
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 3300 5210 3150 4950 2850 4950 2700 5210 2850 5470 3150 5470
+	 3300 5210
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 4500 5210 4350 4950 4050 4950 3900 5210 4050 5470 4350 5470
+	 4500 5210
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7
+	 5700 5210 5550 4950 5250 4950 5100 5210 5250 5470 5550 5470
+	 5700 5210
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 3600 5700 3600 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 4800 5700 4800 1200
+2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2
+	 6000 5700 6000 1200
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+	 2400 4050 3000 4050
+4 2 -1 50 -1 0 12 0.0000 2 135 630 2100 3075 Threads\001
+4 2 -1 50 -1 0 12 0.0000 2 165 450 2100 2850 Ready\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 4450 MA\001
+4 2 -1 50 -1 0 12 0.0000 2 165 720 2100 4200 Array of\001
+4 2 -1 50 -1 0 12 0.0000 2 150 540 2100 4425 Queues\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 3550 TS\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 2650 TS\001
+4 2 -1 50 -1 0 12 0.0000 2 135 900 2100 5175 Processors\001
+4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 4200 TS\001
Index: doc/theses/thierry_delisle_PhD/thesis/fig/cache-noshare.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/cache-noshare.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/cache-noshare.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,99 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2550 2550 456 456 2550 2550 2100 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3750 2550 456 456 3750 2550 3300 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 4950 2550 456 456 4950 2550 4500 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6150 2550 456 456 6150 2550 5700 2475
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 3300 3000 3300 3000 3600 2100 3600 2100 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 3900 3000 3900 3000 4500 2100 4500 2100 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3300 3300 4200 3300 4200 3600 3300 3600 3300 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3300 3900 4200 3900 4200 4500 3300 4500 3300 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4500 3300 5400 3300 5400 3600 4500 3600 4500 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4500 3900 5400 3900 5400 4500 4500 4500 4500 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 5700 3300 6600 3300 6600 3600 5700 3600 5700 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 5700 3900 6600 3900 6600 4500 5700 4500 5700 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 4800 4200 4800 4200 5700 2100 5700 2100 4800
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4500 4800 6600 4800 6600 5700 4500 5700 4500 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 3000 2550 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 3000 6150 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 3600 6150 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 3000 3750 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 3000 4950 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 3600 4950 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 3600 3750 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 3600 2550 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 4500 2550 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 4500 3750 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 4500 4950 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 4500 6150 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4200 5250 4500 5250
+4 0 0 50 -1 0 11 0.0000 2 135 360 4725 2625 CPU2\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 2325 2625 CPU0\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 5925 2625 CPU3\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 3525 2625 CPU1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 2475 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 4875 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 6075 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 2400 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 4875 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 3675 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 6075 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 3675 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 3000 5250 L3\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 5475 5250 L3\001
Index: doc/theses/thierry_delisle_PhD/thesis/fig/cache-share.fig
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/fig/cache-share.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
+++ doc/theses/thierry_delisle_PhD/thesis/fig/cache-share.fig	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -0,0 +1,92 @@
+#FIG 3.2  Produced by xfig version 3.2.7b
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 2550 2550 456 456 2550 2550 2100 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3750 2550 456 456 3750 2550 3300 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 4950 2550 456 456 4950 2550 4500 2475
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 6150 2550 456 456 6150 2550 5700 2475
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 3300 3000 3300 3000 3600 2100 3600 2100 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 3900 3000 3900 3000 4500 2100 4500 2100 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3300 3300 4200 3300 4200 3600 3300 3600 3300 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 3300 3900 4200 3900 4200 4500 3300 4500 3300 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4500 3300 5400 3300 5400 3600 4500 3600 4500 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 4500 3900 5400 3900 5400 4500 4500 4500 4500 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 5700 3300 6600 3300 6600 3600 5700 3600 5700 3300
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 5700 3900 6600 3900 6600 4500 5700 4500 5700 3900
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+	 2100 4800 6600 4800 6600 5775 2100 5775 2100 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 3000 2550 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 3000 3750 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 3000 4950 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 3000 6150 3300
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 3600 6150 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 3600 4950 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 3600 3750 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 3600 2550 3900
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 2550 4500 2550 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 3750 4500 3750 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 4950 4500 4950 4800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
+	1 1 1.00 60.00 45.00
+	1 1 1.00 60.00 45.00
+	 6150 4500 6150 4800
+4 0 0 50 -1 0 11 0.0000 2 135 360 4725 2625 CPU2\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 2325 2625 CPU0\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 5925 2625 CPU3\001
+4 0 0 50 -1 0 11 0.0000 2 135 360 3525 2625 CPU1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 2475 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 4875 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 6075 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 2400 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 4875 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 3675 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 6075 4275 L2\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 3675 3525 L1\001
+4 0 0 50 -1 0 11 0.0000 2 135 180 4275 5325 L3\001
Index: doc/theses/thierry_delisle_PhD/thesis/glossary.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/glossary.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/glossary.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -101,5 +101,5 @@
 
 \longnewglossaryentry{at}
-{name={fred}}
+{name={task}}
 {
 Abstract object representing a unit of work. Systems offer one or more concrete implementations of this concept (\eg \gls{kthrd}, \gls{job}); however, most scheduling concepts are independent of the particular implementation of the work representation. For this reason, this document uses the term \Gls{at} to mean any representation and not one in particular.
Index: doc/theses/thierry_delisle_PhD/thesis/local.bib
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/local.bib	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/local.bib	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -685,2 +685,18 @@
   note = "[Online; accessed 9-February-2021]"
 }
+
+@misc{wiki:rcu,
+  author = "{Wikipedia contributors}",
+  title = "Read-copy-update --- {W}ikipedia{,} The Free Encyclopedia",
+  year = "2022",
+  url = "https://en.wikipedia.org/wiki/Read-copy-update",
+  note = "[Online; accessed 12-April-2022]"
+}
+
+@misc{wiki:rwlock,
+  author = "{Wikipedia contributors}",
+  title = "Readers-writer lock --- {W}ikipedia{,} The Free Encyclopedia",
+  year = "2021",
+  url = "https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock",
+  note = "[Online; accessed 12-April-2022]"
+}
Index: doc/theses/thierry_delisle_PhD/thesis/text/core.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/text/core.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -3,5 +3,5 @@
 Before discussing scheduling in general, where it is important to address systems that are changing states, this document discusses scheduling in a somewhat ideal scenario, where the system has reached a steady state. For this purpose, a steady state is loosely defined as a state where there are always \glspl{thrd} ready to run and the system has the resources necessary to accomplish the work, \eg, enough workers. In short, the system is neither overloaded nor underloaded.
 
-I believe it is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state can to be pervasive in all states.
+It is important to discuss the steady state first because it is the easiest case to handle and, relatedly, the case in which the best performance is to be expected. As such, when the system is either overloaded or underloaded, a common approach is to try to adapt the system to this new load and return to the steady state, \eg, by adding or removing workers. Therefore, flaws in scheduling the steady state tend to be pervasive in all states.
 
 \section{Design Goals}
@@ -25,5 +25,5 @@
 It is important to note that these guarantees are expected only up to a point. \Glspl{thrd} that are ready to run should not be prevented to do so, but they still share the limited hardware resources. Therefore, the guarantee is considered respected if a \gls{thrd} gets access to a \emph{fair share} of the hardware resources, even if that share is very small.
 
-Similarly the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is equivalent to or lower than other popular languages, I consider the guarantee achieved.
+Similarly, the performance guarantee, the lack of interference among threads, is only relevant up to a point. Ideally, the cost of running and blocking should be constant regardless of contention, but the guarantee is considered satisfied if the cost is not \emph{too high} with or without contention. How much is an acceptable cost is obviously highly variable. For this document, the performance experimentation attempts to show the cost of scheduling is at worst equivalent to existing algorithms used in popular languages. This demonstration can be made by comparing applications built in \CFA to applications built with other languages or other models. Recall programmer expectation is that the impact of the scheduler can be ignored. Therefore, if the cost of scheduling is competitive with other popular languages, the guarantee is considered achieved.
 
 More precisely the scheduler should be:
@@ -33,8 +33,22 @@
 \end{itemize}
 
-\subsection{Fairness vs Scheduler Locality}
+\subsection{Fairness Goals}
+For this work, fairness is considered to have two strongly related requirements: true starvation freedom and ``fast'' load balancing.
+
+\paragraph{True starvation freedom} is more easily defined: as long as at least one \proc continues to dequeue \ats, all ready \ats should eventually run.
+In any running system, \procs can stop dequeuing \ats if they start running a \at that simply never parks.
+Traditional work-stealing schedulers do not have starvation freedom in these cases.
+This requirement raises the question of preemption.
+Generally speaking, preemption happens on the timescale of several milliseconds, which brings us to the next requirement: ``fast'' load balancing.
+
+\paragraph{Fast load balancing} means that load balancing should happen faster than preemption would normally allow.
+For interactive applications that need to run at 60, 90, or 120 frames per second, \ats having to wait several milliseconds to run are effectively starved.
+Therefore load balancing should be done at a faster pace, one that can detect starvation at the microsecond scale.
+That said, this is a much fuzzier requirement since it depends on the number of \procs, the number of \ats, and the general load of the system.
+
+\subsection{Fairness vs Scheduler Locality} \label{fairnessvlocal}
 An important performance factor in modern architectures is cache locality. Waiting for data at lower levels or not present in the cache can have a major impact on performance. Having multiple \glspl{hthrd} writing to the same cache lines also leads to cache lines that must be waited on. It is therefore preferable to divide data among each \gls{hthrd}\footnote{This partitioning can be an explicit division up front or using data structures where different \glspl{hthrd} are naturally routed to different cache lines.}.
 
-For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in part~\ref{Evaluation} on evaluation.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
+For a scheduler, having good locality\footnote{This section discusses \emph{internal locality}, \ie, the locality of the data used by the scheduler versus \emph{external locality}, \ie, how the data used by the application is affected by scheduling. External locality is a much more complicated subject and is discussed in the next section.}, \ie, having the data local to each \gls{hthrd}, generally conflicts with fairness. Indeed, good locality often requires avoiding the movement of cache lines, while fairness requires dynamically moving a \gls{thrd}, and as consequence cache lines, to a \gls{hthrd} that is currently available.
 
 However, I claim that in practice it is possible to strike a balance between fairness and performance because these goals do not necessarily overlap temporally, where Figure~\ref{fig:fair} shows a visual representation of this behaviour. As mentioned, some unfairness is acceptable; therefore it is desirable to have an algorithm that prioritizes cache locality as long as thread delay does not exceed the execution mental-model.
@@ -48,23 +62,43 @@
 \end{figure}
 
-\section{Design}
+\subsection{Performance Challenges}\label{pref:challenge}
+While there exist a multitude of potential scheduling algorithms, they generally all have to contend with the same performance challenges. Since these challenges are recurring themes in the design of a scheduler, it is relevant to describe the central ones here before looking at the design.
+
+\subsubsection{Scalability}
+The most basic performance challenge of a scheduler is scalability.
+Given a large number of \procs and an even larger number of \ats, scalability measures how fast \procs can enqueue and dequeue \ats.
+One could expect that doubling the number of \procs would double the rate at which \ats are dequeued, but contention on the internal data structures of the scheduler can lead to worse improvements.
+While the ready queue itself can be sharded to alleviate the main source of contention, auxiliary scheduling features, \eg counting ready \ats, can also be sources of contention.
+
+\subsubsection{Migration Cost}
+Another important source of latency in scheduling is migration.
+An \at is said to have migrated if it is executed by two different \procs consecutively, which is the process discussed in \ref{fairnessvlocal}.
+Migrations can have many different causes, but in certain programs it can be all but impossible to limit them.
+Chapter~\ref{microbench}, for example, has a benchmark where any \at can potentially unblock any other \at, which can lead to \ats migrating more often than not.
+Because of this, it is important to design the internal data structures of the scheduler to limit the latency penalty of migrations.
+
+
+\section{Inspirations}
 In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue : create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering.
 
-Before going into the design of \CFA's scheduler proper, I want to discuss two sharding solutions which served as the inspiration scheduler in this thesis.
+Before going into the design of \CFA's scheduler proper, it is relevant to discuss two sharding solutions which served as inspiration for the scheduler in this thesis.
 
 \subsection{Work-Stealing}
 
-As I mentioned in \ref{existing:workstealing}, a popular pattern shard the ready-queue is work-stealing. As mentionned, in this pattern each \gls{proc} has its own ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work.
-The interesting aspect of workstealing happen in easier scheduling cases, \ie enough work for everyone but no more and no load balancing needed. In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
+As mentioned in \ref{existing:workstealing}, a popular pattern for sharding the ready-queue is work-stealing.
+In this pattern, each \gls{proc} has its own local ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work on their local one.
+The interesting aspects of work-stealing happen in the easier scheduling cases, \ie enough work for everyone but no more, and no load balancing needed.
+In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention.
 On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work.
-This means that the scheduler may never balance unfairness that does not result in a \gls{proc} running out of work.
+This means that the scheduler never balances unfair loads unless they result in a \gls{proc} running out of work.
 Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation.
 
 
-Based on these observation, I conclude that \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
+Based on these observations, the conclusion is that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises.
 
 \subsection{Relaxed-Fifo}
 An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. This approach forgos any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick from.
 \Glspl{proc} choose ready-queues at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeuing the oldest element.
+All subqueues are protected by TryLocks and \procs simply pick a different subqueue if they fail to acquire the TryLock.
 The result is a queue that has both decent scalability and sufficient fairness.
 The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element.
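As a concrete illustration, the pick-two dequeue described above can be sketched in C as follows. This is a minimal, hypothetical sketch, not the implementation from the cited paper: @subqueue_t@, @older@, and @relaxed_pop@ are illustrative names, and the retry loop is left to the caller.

```c
#include <stdint.h>
#include <stdlib.h>
#include <pthread.h>

typedef struct node { uint64_t ts; struct node *next; } node_t;
typedef struct { pthread_mutex_t lock; node_t *head; } subqueue_t;

enum { NQUEUES = 32 };                 // wide sharding, several queues per hardware thread
static subqueue_t queues[NQUEUES];

// Of two sampled subqueues, prefer the non-empty one with the oldest head.
static subqueue_t *older(subqueue_t *a, subqueue_t *b) {
	if (!a->head) return b;
	if (!b->head) return a;
	return (a->head->ts <= b->head->ts) ? a : b;
}

// One attempt of the relaxed-FIFO pop: pick two subqueues at random and
// dequeue the element with the oldest timestamp.  A failed trylock or an
// empty pick returns NULL, and the caller simply retries with fresh picks.
static node_t *relaxed_pop(void) {
	subqueue_t *q = older(&queues[rand() % NQUEUES], &queues[rand() % NQUEUES]);
	if (!q->head || pthread_mutex_trylock(&q->lock) != 0) return NULL;
	node_t *n = q->head;
	q->head = n->next;
	pthread_mutex_unlock(&q->lock);
	return n;
}
```

Because no \proc owns a subqueue, any \proc that keeps dequeuing drains old elements from the whole pool, which is where the fairness comes from.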
@@ -75,87 +109,189 @@
 
 While the fairness of this scheme is good, it does suffer in terms of performance.
-It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and the randomness means locality can suffer significantly and finding non-empty queues can be difficult.
-
-\section{\CFA}
-The \CFA is effectively attempting to merge these two approaches, keeping the best of both.
-It is based on the
+It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and finding non-empty queues can be difficult if there are too few ready \ats.
+
+\section{Relaxed-FIFO++}
+Since it has inherent fairness qualities and decent performance in the presence of many \ats, the relaxed-FIFO queue appears to be a good candidate to form the basis of a scheduler.
+The most obvious problem is workloads where the number of \ats is barely greater than the number of \procs.
+In these situations, the wide sharding means most of the subqueues from which the relaxed queue is formed will be empty.
+The consequence is that when a dequeue operation attempts to pick a subqueue at random, it is likely to pick an empty subqueue and have to pick again.
+This problem can repeat an unbounded number of times.
+
+As this is the most obvious challenge, it is worth addressing first.
+The obvious solution is to supplement each subqueue with some sharded data structure that keeps track of which subqueues are empty.
+This data structure can take many forms, for example a simple bitmask or a binary tree that tracks which branches are empty.
+Following a binary tree on each pick has fairly good big-O complexity, and many modern architectures have powerful bitmask-manipulation instructions.
+However, precisely tracking which subqueues are empty is actually fundamentally problematic.
+The reason is that each subqueue is already a form of sharding, and the sharding width has presumably been chosen to avoid contention.
+If the tracking mechanism uses denser sharding than the subqueues, it will invariably create a new source of contention.
+But if the tracking mechanism is not denser than the subqueues, it will generally not be useful, because reading this new data structure risks being as costly as simply picking a subqueue at random.
+Early experiments with this approach have shown that even with low success rates, randomly picking a sub-queue can be faster than a simple tree walk.
+
+The exception to this rule is using local tracking.
+If each \proc keeps track locally of which sub-queue is empty, then this can be done with a very dense data structure without introducing a new source of contention.
+The consequence of local tracking, however, is that the information is not complete.
+Each \proc is only aware of the last state it saw for each subqueue but does not have any information about freshness.
+Even on systems with low \gls{hthrd} count, \eg 4 or 8, this can quickly lead to the local information being no better than the random pick.
+This is due in part to the cost of maintaining this information and to its poor quality.
+
+However, using a very low-cost approach to local tracking may actually be beneficial.
+If the local tracking is no more costly than the random pick, then \emph{any} improvement to the success rate, however low, leads to a performance benefit.
+This leads to the following approach:
+
+\subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
+The Relaxed-FIFO approach can be made to handle the case of mostly empty sub-queues by tweaking the \glsxtrlong{prng}.
+The \glsxtrshort{prng} state can be seen as containing a list of all the future sub-queues that will be accessed.
+While this is not particularly useful on its own, the consequence is that if the \glsxtrshort{prng} algorithm can be run \emph{backwards}, then the state also contains a list of all the subqueues that were accessed.
+Luckily, bidirectional \glsxtrshort{prng} algorithms do exist, for example some Linear Congruential Generators\cit{https://en.wikipedia.org/wiki/Linear\_congruential\_generator} support running the algorithm backwards while offering good quality and performance.
+This particular \glsxtrshort{prng} can be used as follows:
+
+Each \proc maintains two \glsxtrshort{prng} states, which will be referred to as \texttt{F} and \texttt{B}.
+
+When a \proc attempts to dequeue a \at, it picks a subqueue by running \texttt{B} backwards.
+When a \proc attempts to enqueue a \at, it runs \texttt{F} forwards to pick the subqueue to enqueue to.
+If the enqueue is successful, the state \texttt{B} is overwritten with the content of \texttt{F}.
+
+The result is that each \proc will tend to dequeue \ats that it has itself enqueued.
+When most sub-queues are empty, this technique increases the odds of finding \ats at very low cost, while also offering an improvement on locality in many cases.
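To make the \texttt{F}/\texttt{B} mechanism concrete, the following C sketch shows a 64-bit linear congruential generator that can step both forwards and backwards, together with the enqueue/dequeue discipline described above. The constants are Knuth's MMIX LCG parameters; the names @fwd@, @bck@, @pick_enqueue@, and @pick_dequeue@ are illustrative, not the thesis's actual implementation.

```c
#include <stdint.h>

// Reversible 64-bit LCG (Knuth's MMIX constants; the modulus 2^64 is implicit).
static const uint64_t A = 6364136223846793005ULL;
static const uint64_t C = 1442695040888963407ULL;

// Modular inverse of the odd multiplier A modulo 2^64, via Newton iteration:
// each step doubles the number of correct low-order bits (3 -> 6 -> ... -> 96).
static uint64_t inv64(uint64_t a) {
	uint64_t x = a;                         // correct to 3 bits since a is odd
	for (int i = 0; i < 5; i++) x *= 2 - a * x;
	return x;
}

static uint64_t fwd(uint64_t s) { return A * s + C; }            // next state
static uint64_t bck(uint64_t s) { return inv64(A) * (s - C); }   // previous state

// Per-proc discipline: F generates future picks, B replays past ones.
typedef struct { uint64_t F, B; } prng_pair_t;

static unsigned pick_enqueue(prng_pair_t *p, unsigned nqueues) {
	p->F = fwd(p->F);                // run F forwards to pick a subqueue;
	return p->F % nqueues;           // on success the caller copies F into B
}

static unsigned pick_dequeue(prng_pair_t *p, unsigned nqueues) {
	unsigned q = p->B % nqueues;     // revisit the most recently used subqueue
	p->B = bck(p->B);                // then walk further into the past
	return q;
}
```

Since @bck(fwd(s)) == s@, the \texttt{B} state replays exactly the sequence of subqueues that \texttt{F} produced, from latest to oldest.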
+
+However, while this approach does notably improve performance in many cases, this algorithm is still not competitive with work-stealing algorithms.
+The fundamental problem is that the constant randomness limits how much locality the scheduler offers.
+This becomes problematic both because the scheduler is likely to get cache misses on internal data structures and because migrations become very frequent.
+Therefore, since modifying the relaxed-FIFO algorithm to behave more like work stealing does not seem to pan out, the alternative is to do it the other way around.
+
+\section{Work Stealing++}
+Adding stronger fairness guarantees to work-stealing requires a few changes.
+First, the relaxed-FIFO algorithm has fundamentally better fairness because each \proc always monitors all subqueues.
+Therefore the work-stealing algorithm must be prepended with some monitoring.
+Before attempting to dequeue from a \proc's local queue, the \proc must make some effort to make sure remote queues are not being neglected.
+To make this possible, \procs must be able to determine which \at has been on the ready queue the longest.
+This is the second aspect that must be added.
+The relaxed-FIFO approach uses timestamps for each \at and this is also what is done here.
+
 \begin{figure}
 	\centering
 	\input{base.pstex_t}
-	\caption[Base \CFA design]{Base \CFA design \smallskip\newline A list of sub-ready queues offers the sharding, two per \glspl{proc}. However, \glspl{proc} can access any of the sub-queues.}
+	\caption[Base \CFA design]{Base \CFA design \smallskip\newline A pool of sub-ready queues offers the sharding, two per \gls{proc}. Each \gls{proc} has local subqueues; however, \glspl{proc} can access any of the subqueues. Each \at is timestamped when enqueued.}
 	\label{fig:base}
 \end{figure}
-
-
-
-% The common solution to the single point of contention is to shard the ready-queue so each \gls{hthrd} can access the ready-queue without contention, increasing performance.
-
-% \subsection{Sharding} \label{sec:sharding}
-% An interesting approach to sharding a queue is presented in \cit{Trevors paper}. This algorithm presents a queue with a relaxed \glsxtrshort{fifo} guarantee using an array of strictly \glsxtrshort{fifo} sublists as shown in Figure~\ref{fig:base}. Each \emph{cell} of the array has a timestamp for the last operation and a pointer to a linked-list with a lock. Each node in the list is marked with a timestamp indicating when it is added to the list. A push operation is done by picking a random cell, acquiring the list lock, and pushing to the list. If the cell is locked, the operation is simply retried on another random cell until a lock is acquired. A pop operation is done in a similar fashion except two random cells are picked. If both cells are unlocked with non-empty lists, the operation pops the node with the oldest timestamp. If one of the cells is unlocked and non-empty, the operation pops from that cell. If both cells are either locked or empty, the operation picks two new random cells and tries again.
-
-% \begin{figure}
-% 	\centering
-% 	\input{base.pstex_t}
-% 	\caption[Relaxed FIFO list]{Relaxed FIFO list \smallskip\newline List at the base of the scheduler: an array of strictly FIFO lists. The timestamp is in all nodes and cell arrays.}
-% 	\label{fig:base}
-% \end{figure}
-
-% \subsection{Finding threads}
-% Once threads have been distributed onto multiple queues, identifying empty queues becomes a problem. Indeed, if the number of \glspl{thrd} does not far exceed the number of queues, it is probable that several of the cell queues are empty. Figure~\ref{fig:empty} shows an example with 2 \glspl{thrd} running on 8 queues, where the chances of getting an empty queue is 75\% per pick, meaning two random picks yield a \gls{thrd} only half the time. This scenario leads to performance problems since picks that do not yield a \gls{thrd} are not useful and do not necessarily help make more informed guesses.
-
-% \begin{figure}
-% 	\centering
-% 	\input{empty.pstex_t}
-% 	\caption[``More empty'' Relaxed FIFO list]{``More empty'' Relaxed FIFO list \smallskip\newline Emptier state of the queue: the array contains many empty cells, that is strictly FIFO lists containing no elements.}
-% 	\label{fig:empty}
-% \end{figure}
-
-% There are several solutions to this problem, but they ultimately all have to encode if a cell has an empty list. My results show the density and locality of this encoding is generally the dominating factor in these scheme. Classic solutions to this problem use one of three techniques to encode the information:
-
-% \paragraph{Dense Information} Figure~\ref{fig:emptybit} shows a dense bitmask to identify the cell queues currently in use. This approach means processors can often find \glspl{thrd} in constant time, regardless of how many underlying queues are empty. Furthermore, modern x86 CPUs have extended bit manipulation instructions (BMI2) that allow searching the bitmask with very little overhead compared to the randomized selection approach for a filled ready queue, offering good performance even in cases with many empty inner queues. However, this technique has its limits: with a single word\footnote{Word refers here to however many bits can be written atomically.} bitmask, the total amount of ready-queue sharding is limited to the number of bits in the word. With a multi-word bitmask, this maximum limit can be increased arbitrarily, but the look-up time increases. Finally, a dense bitmap, either single or multi-word, causes additional contention problems that reduces performance because of cache misses after updates. This central update bottleneck also means the information in the bitmask is more often stale before a processor can use it to find an item, \ie mask read says there are available \glspl{thrd} but none on queue when the subsequent atomic check is done.
-
-% \begin{figure}
-% 	\centering
-% 	\vspace*{-5pt}
-% 	{\resizebox{0.75\textwidth}{!}{\input{emptybit.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with bitmask]{Underloaded queue with bitmask indicating array cells with items.}
-% 	\label{fig:emptybit}
-
-% 	\vspace*{10pt}
-% 	{\resizebox{0.75\textwidth}{!}{\input{emptytree.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with binary search-tree]{Underloaded queue with binary search-tree indicating array cells with items.}
-% 	\label{fig:emptytree}
-
-% 	\vspace*{10pt}
-% 	{\resizebox{0.95\textwidth}{!}{\input{emptytls.pstex_t}}}
-% 	\vspace*{-5pt}
-% 	\caption[Underloaded queue with per processor bitmask]{Underloaded queue with per processor bitmask indicating array cells with items.}
-% 	\label{fig:emptytls}
-% \end{figure}
-
-% \paragraph{Sparse Information} Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention and has been shown to work in similar cases~\cite{ellen2007snzi}. However, this approach may lead to poorer performance due to the inherent pointer chasing cost while still allowing significant contention on the nodes of the tree if the tree is shallow.
-
-% \paragraph{Local Information} Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but each \gls{hthrd} keeps its own independent copy. While this approach can offer good scalability \emph{and} low latency, the liveliness and discovery of the information can become a problem. This case is made worst in systems with few processors where even blind random picks can find \glspl{thrd} in a few tries.
-
-% I built a prototype of these approaches and none of these techniques offer satisfying performance when few threads are present. All of these approach hit the same 2 problems. First, randomly picking sub-queues is very fast. That speed means any improvement to the hit rate can easily be countered by a slow-down in look-up speed, whether or not there are empty lists. Second, the array is already sharded to avoid contention bottlenecks, so any denser data structure tends to become a bottleneck. In all cases, these factors meant the best cases scenario, \ie many threads, would get worst throughput, and the worst-case scenario, few threads, would get a better hit rate, but an equivalent poor throughput. As a result I tried an entirely different approach.
-
-% \subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/}
-% In the worst-case scenario there are only few \glspl{thrd} ready to run, or more precisely given $P$ \glspl{proc}\footnote{For simplicity, this assumes there is a one-to-one match between \glspl{proc} and \glspl{hthrd}.}, $T$ \glspl{thrd} and $\epsilon$ a very small number, than the worst case scenario can be represented by $T = P + \epsilon$, with $\epsilon \ll P$. It is important to note in this case that fairness is effectively irrelevant. Indeed, this case is close to \emph{actually matching} the model of the ``Ideal multi-tasking CPU'' on page \pageref{q:LinuxCFS}. In this context, it is possible to use a purely internal-locality based approach and still meet the fairness requirements. This approach simply has each \gls{proc} running a single \gls{thrd} repeatedly. Or from the shared ready-queue viewpoint, each \gls{proc} pushes to a given sub-queue and then pops from the \emph{same} subqueue. The challenge is for the the scheduler to achieve good performance in both the $T = P + \epsilon$ case and the $T \gg P$ case, without affecting the fairness guarantees in the later.
-
-% To handle this case, I use a \glsxtrshort{prng}\todo{Fix missing long form} in a novel way. There exist \glsxtrshort{prng}s that are fast, compact and can be run forward \emph{and} backwards.  Linear congruential generators~\cite{wiki:lcg} are an example of \glsxtrshort{prng}s of such \glsxtrshort{prng}s. The novel approach is to use the ability to run backwards to ``replay'' the \glsxtrshort{prng}. The scheduler uses an exclusive \glsxtrshort{prng} instance per \gls{proc}, the random-number seed effectively starts an encoding that produces a list of all accessed subqueues, from latest to oldest. Replaying the \glsxtrshort{prng} to identify cells accessed recently and which probably have data still cached.
-
-% The algorithm works as follows:
-% \begin{itemize}
-% 	\item Each \gls{proc} has two \glsxtrshort{prng} instances, $F$ and $B$.
-% 	\item Push and Pop operations occur as discussed in Section~\ref{sec:sharding} with the following exceptions:
-% 	\begin{itemize}
-% 		\item Push operations use $F$ going forward on each try and on success $F$ is copied into $B$.
-% 		\item Pop operations use $B$ going backwards on each try.
-% 	\end{itemize}
-% \end{itemize}
-
-% The main benefit of this technique is that it basically respects the desired properties of Figure~\ref{fig:fair}. When looking for work, a \gls{proc} first looks at the last cell they pushed to, if any, and then move backwards through its accessed cells. As the \gls{proc} continues looking for work, $F$ moves backwards and $B$ stays in place. As a result, the relation between the two becomes weaker, which means that the probablisitic fairness of the algorithm reverts to normal. Chapter~\ref{proofs} discusses more formally the fairness guarantees of this algorithm.
-
-% \section{Details}
+The algorithm is structured as shown in Figure~\ref{fig:base}.
+This is very similar to classic work-stealing except the local queues are placed in an array so \procs can access each other's queues in constant time.
+Sharding width can be adjusted based on need.
+When a \proc attempts to dequeue a \at, it first picks a random remote queue and compares its timestamp to the timestamps of the local queue(s), dequeuing from the remote queue if needed.
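In C, this dequeue decision might look like the following sketch. It is hypothetical: @head_ts@ stands in for the per-subqueue head timestamps (with @UINT64_MAX@ marking an empty subqueue), and all names are illustrative rather than the thesis's implementation.

```c
#include <stdint.h>
#include <stdlib.h>

enum { NQUEUES = 8 };
// Timestamp of the task at the head of each subqueue; UINT64_MAX if empty.
static uint64_t head_ts[NQUEUES];

// Help the remote subqueue only if its head has waited longer, i.e. carries
// an older (smaller) timestamp; otherwise stay local for cache locality.
static int choose(int local, int remote) {
	return (head_ts[remote] < head_ts[local]) ? remote : local;
}

// Full pick: sample one random remote subqueue and compare against the local one.
static int pick_queue(int local) {
	return choose(local, rand() % NQUEUES);
}
```

Note that the same comparison also finds work when the local subqueue is empty, since an empty queue's sentinel timestamp loses every comparison.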
+
+Implemented as naively stated above, this approach has some obvious performance problems.
+First, it is necessary to have some damping effect on helping.
+Random effects like cache misses and preemption can add spurious but short bursts of latency for which helping is not helpful, pun intended.
+The effect of these bursts would be to cause more migrations than needed and make this work-stealing approach slow down to match the relaxed-FIFO approach.
+
+\begin{figure}
+	\centering
+	\input{base_avg.pstex_t}
+	\caption[\CFA design with Moving Average]{\CFA design with Moving Average \smallskip\newline A moving average is added to each subqueue.}
+	\label{fig:base-ma}
+\end{figure}
+
+A simple solution to this problem is to compare an exponential moving average\cit{https://en.wikipedia.org/wiki/Moving\_average\#Exponential\_moving\_average} instead of the raw timestamps, shown in Figure~\ref{fig:base-ma}.
+Note that this is slightly more complex than it sounds, because the \at at the head of a subqueue is still waiting, so its wait time has not ended.
+Therefore, the exponential moving average is actually an exponential moving average of how long each \emph{already dequeued} \at has waited.
+To compare subqueues, the timestamp at the head must be compared to the current time, yielding the best-case wait time for the \at at the head of the queue.
+This new wait time is averaged with the stored average.
+To further limit unnecessary migration, a bias can be added towards the local queue, where a remote queue is helped only if its moving average is more than \emph{X} times the local queue's average.
+None of the experiments I have run with this scheduler indicate that the choice of the weight for the moving average or the choice of bias is particularly important.
+Weights and biases of similar \emph{magnitudes} have similar effects.
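The averaging and the biased comparison can be sketched as follows (a sketch under stated assumptions: the constants \texttt{W} and \texttt{BIAS}, and the names \texttt{update\_avg}, \texttt{score} and \texttt{should\_help}, are hypothetical placeholders, since the source states only their magnitude matters):

```c
#include <stdint.h>

// Hypothetical constants: only their order of magnitude matters.
#define W    0.125   // EMA weight of a new sample
#define BIAS 4.0     // help remote only if it looks BIAS times worse

struct subqueue {
	uint64_t head_ts;  // enqueue timestamp of the current head, 0 if empty
	double   avg;      // EMA of how long already-dequeued elements waited
};

// Called on each dequeue with the wait time of the element just removed.
void update_avg(struct subqueue *q, uint64_t waited) {
	q->avg = W * (double)waited + (1.0 - W) * q->avg;
}

// The head is still waiting, so its wait time has not ended: use the
// best-case wait (now - head_ts), averaged with the stored EMA.
double score(const struct subqueue *q, uint64_t now) {
	double head_wait = (q->head_ts == 0) ? 0.0 : (double)(now - q->head_ts);
	return (head_wait + q->avg) / 2.0;
}

int should_help(const struct subqueue *local, const struct subqueue *remote,
                uint64_t now) {
	return score(remote, now) > BIAS * score(local, now);
}
```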
+
+With these additions to work stealing, scheduling can be made as fair as the relaxed-FIFO approach, while avoiding the majority of unnecessary migrations.
+Unfortunately, the performance of this approach does suffer in the cases with no risk of starvation.
+The problem is that the constant polling of remote subqueues generally entails a cache miss.
+To make things worse, the more active a remote subqueue is, \ie the more frequently \ats are enqueued onto and dequeued from it, the higher the chance that polling it incurs a cache miss.
+Conversely, active subqueues do not benefit much from helping, since starvation is already a non-issue for them.
+This puts the algorithm in an awkward situation where it is paying a cost, but the cost itself suggests the operation was unnecessary.
+The good news is that this problem can be mitigated.
+
+\subsection{Redundant Timestamps}
+The problem with polling remote queues stems from a tension in the consistency requirements on the subqueues.
+For the subqueues themselves, correctness is critical: there must be a consensus among \procs on which subqueues hold which \ats.
+Since the timestamps are used for fairness, it is also important to have consensus on which \at is the oldest.
+However, when deciding if a remote subqueue is worth polling, correctness is much less of a problem.
+Since the only requirement is that a subqueue is eventually polled, some staleness in the data is acceptable.
+This leads to a tension where stale timestamps are only problematic in some cases.
+Furthermore, stale timestamps can even be somewhat desirable, since lower freshness requirements mean less pressure on the cache-coherence protocol.
+
+
+\begin{figure}
+	\centering
+	% \input{base_ts2.pstex_t}
+	\caption[\CFA design with Redundant Timestamps]{\CFA design with Redundant Timestamps \smallskip\newline An array is added containing a copy of the timestamps. These timestamps are written with relaxed atomics, without fencing, leading to fewer cache invalidations.}
+	\label{fig:base-ts2}
+\end{figure}
+A solution to this is to create a second array containing a copy of the timestamps and average.
+This copy is updated \emph{after} the subqueue's critical sections using relaxed atomics.
+\Glspl{proc} now check if polling is needed by comparing the copy of the remote timestamp instead of the actual timestamp.
+The result is that since there is no fencing, the writes can be buffered and cause fewer cache invalidations.
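This scheme can be sketched with C11 relaxed atomics (a minimal sketch: the names \texttt{publish} and \texttt{peek} are hypothetical, and the subqueue lock protecting the authoritative fields is not shown):

```c
#include <stdatomic.h>
#include <stdint.h>

#define NQUEUES 8

// Authoritative state, protected by the subqueue's lock (not shown);
// the moving average would get the same treatment as the timestamp.
struct subqueue {
	uint64_t head_ts;   // timestamp of the head element
};

// Redundant copy, updated AFTER the critical section with relaxed
// atomics: no fencing, so the stores can stay buffered and cause
// fewer cache invalidations.
struct ts_copy {
	_Atomic uint64_t head_ts;
};

struct subqueue queues[NQUEUES];
struct ts_copy  copies[NQUEUES];

// Owner publishes its timestamp after leaving the critical section.
void publish(int i) {
	atomic_store_explicit(&copies[i].head_ts, queues[i].head_ts,
	                      memory_order_relaxed);
}

// Remote procs poll the copy, never the authoritative field.
uint64_t peek(int i) {
	return atomic_load_explicit(&copies[i].head_ts, memory_order_relaxed);
}
```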
+
+The correctness argument here is somewhat subtle.
+The data used for deciding whether or not to poll a queue can be stale as long as this does not cause starvation.
+Therefore, it is acceptable if stale data makes queues appear older than they really are, but not fresher.
+For the timestamps, this means that missed writes are acceptable, since they make the head \at look older.
+For the moving average, as long as the operations are RW-safe, the average is guaranteed to yield a value between the oldest and newest values written.
+Therefore, these unprotected reads of the timestamp and average satisfy the limited correctness that is required.
+
+\begin{figure}
+	\centering
+	\input{cache-share.pstex_t}
+	\caption[CPU design with wide L3 sharing]{CPU design with wide L3 sharing \smallskip\newline A very simple CPU with 4 \glspl{hthrd}. L1 and L2 are private to each \gls{hthrd}, but the L3 is shared across the entire CPU.}
+	\label{fig:cache-share}
+\end{figure}
+
+\begin{figure}
+	\centering
+	\input{cache-noshare.pstex_t}
+	\caption[CPU design with a narrower L3 sharing]{CPU design with a narrower L3 sharing \smallskip\newline A different CPU design, still with 4 \glspl{hthrd}. L1 and L2 are still private to each \gls{hthrd}, but the L3 is shared by only some of the \glspl{hthrd}: there are two distinct L3 instances.}
+	\label{fig:cache-noshare}
+\end{figure}
+
+With redundant timestamps, this scheduling algorithm achieves both the fairness and performance requirements on some machines.
+The problem is that the cost of polling and helping is not necessarily consistent across each \gls{hthrd}.
+For example, on machines where the motherboard holds multiple CPUs, a cache miss can be satisfied from a cache belonging to the CPU that missed, the \emph{local} CPU, or from a different CPU, a \emph{remote} one.
+Cache misses satisfied by a remote CPU have higher latency than misses satisfied by the local CPU.
+However, this is not specific to systems with multiple CPUs.
+Depending on the cache structure, cache misses can have different latencies on the same CPU.
+The AMD EPYC 7662 CPU described in Chapter~\ref{microbench} is an example of this.
+Figures~\ref{fig:cache-share} and~\ref{fig:cache-noshare} show two different cache topologies that highlight this difference.
+In Figure~\ref{fig:cache-share}, all cache instances are either private to a \gls{hthrd} or shared across the entire system, so cache-miss latencies are likely fairly consistent.
+By comparison, in Figure~\ref{fig:cache-noshare}, misses in the L2 cache can be satisfied by a hit in either instance of the L3.
+However, the memory-access latency to the remote L3 instance is notably higher than the latency to the local L3.
+The impact of these different designs on this algorithm is that scheduling scales very well on architectures similar to Figure~\ref{fig:cache-share}, but has notably worse scaling with many narrower L3 instances.
+This is simply because, as the number of L3 instances grows, so too does the chance that random helping will cause significant latency.
+The solution is to make the scheduler aware of the cache topology.
+
+\subsection{Per CPU Sharding}
+Building a scheduler that is aware of cache topology poses two main challenges: discovering the cache topology and matching \procs to cache instances.
+Sadly, there is no standard portable way to discover cache topology in C.
+Therefore, while this is a significant portability challenge, designing a cross-platform cache-discovery mechanism is outside the scope of this thesis.
+The rest of this work assumes the cache topology is discovered from Linux's \texttt{/sys/devices/system/cpu} directory.
+This leaves the challenge of matching \procs to cache instances, or more precisely, identifying which subqueues of the ready queue are local to which cache instance.
+Once this matching is available, the helping algorithm can be changed to add a bias so that \procs more often help subqueues local to the same cache instance%
+\footnote{Note that, like the other biases mentioned in this section, the actual bias value does not appear to need precise tuning.}.
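On Linux, each \texttt{shared\_cpu\_list} file under \texttt{/sys/devices/system/cpu} describes which \glspl{hthrd} share a given cache instance using the kernel's ``cpulist'' format, \eg \texttt{0-3,8-11}. A minimal parser for this format might look as follows (a hypothetical helper, assuming well-formed kernel output; the name \texttt{parse\_cpulist} is not from the \CFA runtime):

```c
#include <stdio.h>

// Parse a Linux "cpulist" string, e.g. "0-3,8-11", into a flat array of
// CPU ids. Returns the count, or -1 on malformed input or overflow.
int parse_cpulist(const char *list, int *out, int max) {
	int n = 0;
	const char *p = list;
	while (*p) {
		int lo, hi, used;
		if (sscanf(p, "%d-%d%n", &lo, &hi, &used) == 2) {
			// range "lo-hi"
		} else if (sscanf(p, "%d%n", &lo, &used) == 1) {
			hi = lo;   // single CPU id
		} else {
			return -1;
		}
		for (int c = lo; c <= hi; c++) {
			if (n >= max) return -1;
			out[n++] = c;
		}
		p += used;
		if (*p == ',') { p++; continue; }
		if (*p == '\0' || *p == '\n') break;
		return -1;
	}
	return n;
}
```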
+
+The obvious approach to mapping cache instances to subqueues is to statically tie subqueues to CPUs.
+Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each \gls{hthrd} up front.
+Then \procs dequeue and enqueue by first asking which CPU id they are running on, in order to identify which subqueues are the local ones.
+\Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.
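A sketch of locating the local subqueues with \texttt{sched\_getcpu} (the \texttt{SHARD} factor and the name \texttt{local\_subqueue} are hypothetical; this is not the actual \CFA runtime code):

```c
#define _GNU_SOURCE
#include <sched.h>

// Hypothetical sharding factor: subqueues per hardware thread.
#define SHARD 2

// With subqueues statically tied to hthrds, a proc locates its local
// subqueues from its current CPU id. sched_getcpu() is only advisory:
// the proc can be migrated immediately after the call, which is fine
// because the matching only needs to be right "often enough".
int local_subqueue(int ncpu) {
	int cpu = sched_getcpu();
	if (cpu < 0 || cpu >= ncpu) cpu = 0;   // error or out-of-range fallback
	return cpu * SHARD;   // index of this hthrd's first subqueue
}
```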
+
+This approach solves the performance problems on systems with topologies similar to Figure~\ref{fig:cache-noshare}.
+However, it causes some subtle fairness problems on certain systems, specifically systems with few \procs and many \glspl{hthrd}.
+In these cases, the large number of subqueues and the bias against subqueues tied to different cache instances make it very unlikely that any single subqueue is picked.
+To make things worse, the small number of \procs means that few helping attempts are made.
+This combination of few attempts and low odds means that a \at stranded on a subqueue that is not actively dequeued from may wait a very long time before it is randomly helped.
+On a system with 2 \procs, 256 \glspl{hthrd} with narrow cache sharing, and a 100:1 bias, it can actually take multiple seconds for a \at to get dequeued from a remote queue.
+Therefore, a more dynamic matching of subqueues to cache instances is needed.
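A back-of-envelope illustration of the odds involved (an assumption-laden sketch, not a measurement: it assumes each helping attempt picks uniformly among the 256 subqueues and models the 100:1 bias as a 100-fold reduction in selection probability):

```latex
\[
	p \approx \frac{1}{256} \times \frac{1}{100} \approx 3.9 \times 10^{-5}
	\qquad\Rightarrow\qquad
	\frac{1}{p} \approx 25\,600 \text{ expected attempts}
\]
```

With only 2 \procs generating helping attempts, tens of thousands of expected attempts can easily stretch into seconds of wait time.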
+
+\subsection{Topological Work Stealing}
+The approach used in the \CFA scheduler is to have per-\proc subqueues, but with an explicit data structure tracking which cache instance each subqueue is tied to.
+This requires some finesse, because reading this data structure must lead to fewer cache misses than not having the data structure in the first place.
+A key element, however, is that, like the timestamps used for helping, reading the cache-instance mapping only needs to give the correct result \emph{often enough}.
+Therefore, the algorithm can be built as follows: before enqueuing or dequeuing a \at, each \proc queries the CPU id and the corresponding cache instance.
+Since subqueues are tied to \procs, each \proc can then update the cache instance mapped to its local subqueue(s).
+To avoid unnecessary cache-line invalidation, the map is only written to if the mapping changes.
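The write-only-on-change update can be sketched as follows (a minimal sketch with hypothetical names; relaxed ordering suffices because, as above, readers only need the mapping to be right often enough):

```c
#include <stdatomic.h>

#define NQUEUES 8

// Map from subqueue to the cache instance its owning proc last ran on.
// Remote procs read it when biasing their helping; the owner refreshes
// it before each enqueue/dequeue.
_Atomic int cache_of[NQUEUES];

// Only write when the mapping actually changes: in the common case the
// proc has not migrated, so no store happens and the readers' cached
// copies of this line are never invalidated.
void note_cache_instance(int queue, int instance) {
	if (atomic_load_explicit(&cache_of[queue], memory_order_relaxed) != instance)
		atomic_store_explicit(&cache_of[queue], instance, memory_order_relaxed);
}
```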
+
Index: doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -3,4 +3,22 @@
 The first step of evaluation is always to test out small controlled cases, to ensure that the basics are working properly.
 This section presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler.
+
+\section{Benchmark Environment}
+All of these benchmarks are run on two distinct hardware environments: an AMD and an Intel machine.
+
+\paragraph{AMD} The AMD machine is a server with two AMD EPYC 7662 CPUs and 256GB of DDR4 RAM.
+The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+These EPYCs have 64 cores per CPU and 2 \glspl{hthrd} per core, for a total of 256 \glspl{hthrd}.
+The CPUs each have 4 MB, 64 MB and 512 MB of L1, L2 and L3 cache, respectively.
+Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared by 4 cores, therefore 8 \glspl{hthrd}.
+
+\paragraph{Intel} The Intel machine is a server with four Intel Xeon Platinum 8160 CPUs and 384GB of DDR4 RAM.
+The server runs Ubuntu 20.04.2 LTS on top of Linux Kernel 5.8.0-55.
+These Xeon Platinums have 24 cores per CPU and 2 \glspl{hthrd} per core, for a total of 192 \glspl{hthrd}.
+The CPUs each have 3 MB, 96 MB and 132 MB of L1, L2 and L3 cache, respectively.
+Each L1 and L2 instance is only shared by the \glspl{hthrd} on a given core, but each L3 instance is shared across the entire CPU, therefore 48 \glspl{hthrd}.
+
+This limited sharing of the last-level cache on the AMD machine is markedly different from the Intel machine. Indeed, while on both architectures L2 cache misses that are served by an L3 cache on a different CPU incur a significant latency, on AMD cache misses served by a different L3 instance on the same CPU also incur high latency.
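The scale of this difference follows directly from the sharing figures above, counting L3 instances on each machine:

```latex
% AMD:   256 hthrds at 8 hthrds per L3 instance
% Intel: 192 hthrds at 48 hthrds per L3 instance
\[
	\underbrace{256 / 8 = 32}_{\text{AMD L3 instances}}
	\qquad \text{versus} \qquad
	\underbrace{192 / 48 = 4}_{\text{Intel L3 instances}}
\]
```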
+
 
 \section{Cycling latency}
@@ -31,12 +49,10 @@
 \end{figure}
 
-\todo{check term ``idle sleep handling''}
 To prevent this benchmark from being dominated by the idle sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available.
 Beyond this point, adding more rings further mitigates the idle sleep handling.
-This is to avoid the case where one of the worker \glspl{at} runs out of work because of the variation on the number of ready \glspl{at} mentionned above.
+This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation in the number of ready \glspl{at} mentioned above.
 
 The actual benchmark is more complicated in order to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition.
 
-\todo{code, setup, results}
 \begin{lstlisting}
 	Thread.main() {
@@ -52,4 +68,10 @@
 \end{lstlisting}
 
+\begin{figure}
+	\centering
+	\input{result.cycle.jax.ops.pstex_t}
+	\vspace*{-10pt}
+	\caption[Cycle benchmark]{Results of the cycle benchmark}
+	\label{fig:cycle:ns:jax}
+\end{figure}
 
 \section{Yield}
Index: doc/theses/thierry_delisle_PhD/thesis/text/existing.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/existing.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/text/existing.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -2,6 +2,6 @@
 Scheduling is the process of assigning resources to incoming requests.
 A very common form of this is assigning available workers to work-requests.
-The need for scheduling is very common in Computer Science, \eg Operating Systems and Hypervisors schedule available CPUs, NICs schedule available bamdwith, but it is also common in other fields.
-For example, assmebly lines are an example of scheduling where parts needed assembly are assigned to line workers.
+The need for scheduling is very common in Computer Science, \eg Operating Systems and Hypervisors schedule available CPUs and NICs schedule available bandwidth, but scheduling is also common in other fields.
+For example, in assembly lines, assigning parts in need of assembly to line workers is a form of scheduling.
 
 In all these cases, the choice of a scheduling algorithm generally depends first and foremost on how much information is available to the scheduler.
@@ -15,8 +15,8 @@
 
 \section{Naming Convention}
-Scheduling has been studied by various different communities concentrating on different incarnation of the same problems. As a result, their is no real naming convention for scheduling that is respected across these communities. For this document, I will use the term \newterm{task} to refer to the abstract objects being scheduled and the term \newterm{worker} to refer to the objects which will execute these tasks.
+Scheduling has been studied by various communities concentrating on different incarnations of the same problems. As a result, there is no real naming convention for scheduling that is respected across these communities. For this document, I use the term \newterm{\Gls{at}} to refer to the abstract objects being scheduled and the term \newterm{\Gls{proc}} to refer to the objects that execute these \glspl{at}.
 
 \section{Static Scheduling}
-Static schedulers require that tasks have their dependencies and costs explicitly and exhaustively specified prior schedule.
+Static schedulers require that \glspl{at} have their dependencies and costs explicitly and exhaustively specified prior to scheduling.
 The scheduler then processes this input ahead of time and produces a \newterm{schedule} to which the system can later adhere.
 This approach is generally popular in real-time systems since the need for strong guarantees justifies the cost of supplying this information.
@@ -26,24 +26,25 @@
 
 \section{Dynamic Scheduling}
-It may be difficult to fulfill the requirements of static scheduler if dependencies are conditionnal. In this case, it may be preferable to detect dependencies at runtime. This detection effectively takes the form of halting or suspending a task with unfulfilled dependencies and adding one or more new task(s) to the system. The new task(s) have the responsability of adding the dependent task back in the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only tasks we no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.
+It may be difficult to fulfill the requirements of a static scheduler if dependencies are conditional. In this case, it may be preferable to detect dependencies at runtime. This detection effectively takes the form of adding one or more new \glspl{at} to the system as their dependencies are resolved, as well as potentially halting or suspending a \gls{at} that dynamically detects unfulfilled dependencies. Each \gls{at} has the responsibility of adding the dependent \glspl{at} back into the system once completed. As a consequence, the scheduler may have an incomplete view of the system, seeing only \glspl{at} with no pending dependencies. Schedulers that support this detection at runtime are referred to as \newterm{Dynamic Schedulers}.
 
 \subsection{Explicitly Informed Dynamic Schedulers}
-While dynamic schedulers do not have access to an exhaustive list of dependencies for a task, they may require to provide more or less information about each task, including for example: expected duration, required ressources, relative importance, etc. The scheduler can then use this information to direct the scheduling decisions. \cit{Examples of schedulers with more information} Precisely providing this information can be difficult for programmers, especially \emph{predicted} behaviour, and the scheduler may need to support some amount of imprecision in the provided information. For example, specifying that a tasks takes approximately 5 seconds to complete, rather than exactly 5 seconds. User provided information can also become a significant burden depending how the effort to provide the information scales with the number of tasks and there complexity. For example, providing an exhaustive list of files read by 5 tasks is an easier requirement the providing an exhaustive list of memory addresses accessed by 10'000 distinct tasks.
+While dynamic schedulers do not have access to an exhaustive list of dependencies for a \gls{at}, they may require the programmer to provide more or less information about each \gls{at}, including for example: expected duration, required resources, relative importance, etc. The scheduler can then use this information to direct the scheduling decisions. \cit{Examples of schedulers with more information} Precisely providing this information can be difficult for programmers, especially \emph{predicted} behaviour, and the scheduler may need to support some amount of imprecision in the provided information. For example, specifying that a \gls{at} takes approximately 5 seconds to complete, rather than exactly 5 seconds. User-provided information can also become a significant burden, depending on how the effort to provide the information scales with the number of \glspl{at} and their complexity. For example, providing an exhaustive list of files read by 5 \glspl{at} is an easier requirement than providing an exhaustive list of memory addresses accessed by 10'000 distinct \glspl{at}.
 
 Since the goal of this thesis is to provide a scheduler as a replacement for \CFA's existing \emph{uninformed} scheduler, Explicitly Informed schedulers are less relevant to this project. Nevertheless, some strategies are worth mentioning.
 
 \subsubsection{Priority Scheduling}
-A commonly used information that schedulers used to direct the algorithm is priorities. Each Task is given a priority and higher-priority tasks are preferred to lower-priority ones. The simplest priority scheduling algorithm is to simply require that every task have a distinct pre-established priority and always run the available task with the highest priority. Asking programmers to provide an exhaustive set of unique priorities can be prohibitive when the system has a large number of tasks. It can therefore be diserable for schedulers to support tasks with identical priorities and/or automatically setting and adjusting priorites for tasks.
+Priorities are a commonly used form of information that schedulers use to direct their algorithm. Each \gls{at} is given a priority, and higher-priority \glspl{at} are preferred to lower-priority ones. The simplest priority-scheduling algorithm is to require that every \gls{at} have a distinct pre-established priority and to always run the available \gls{at} with the highest priority. Asking programmers to provide an exhaustive set of unique priorities can be prohibitive when the system has a large number of \glspl{at}. It can therefore be desirable for schedulers to support \glspl{at} with identical priorities and/or to automatically set and adjust priorities for \glspl{at}. Most common operating systems use some variation on priorities with overlaps and dynamic priority adjustments. For example, Microsoft Windows uses a pair of priorities
+\cit{https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities,https://docs.microsoft.com/en-us/windows/win32/taskschd/taskschedulerschema-priority-settingstype-element}, one specified by users out of ten possible options and one adjusted by the system.
 
 \subsection{Uninformed and Self-Informed Dynamic Schedulers}
-Several scheduling algorithms do not require programmers to provide additionnal information on each task, and instead make scheduling decisions based solely on internal state and/or information implicitly gathered by the scheduler.
+Several scheduling algorithms do not require programmers to provide additional information on each \gls{at}, and instead make scheduling decisions based solely on internal state and/or information implicitly gathered by the scheduler.
 
 
 \subsubsection{Feedback Scheduling}
-As mentionned, Schedulers may also gather information about each tasks to direct their decisions. This design effectively moves the scheduler to some extent into the realm of \newterm{Control Theory}\cite{wiki:controltheory}. This gathering does not generally involve programmers and as such does not increase programmer burden the same way explicitly provided information may. However, some feedback schedulers do offer the option to programmers to offer additionnal information on certain tasks, in order to direct scheduling decision. The important distinction being whether or not the scheduler can function without this additionnal information.
+As mentioned, schedulers may also gather information about each \gls{at} to direct their decisions. This design effectively moves the scheduler, to some extent, into the realm of \newterm{Control Theory}\cite{wiki:controltheory}. This gathering does not generally involve programmers, and as such does not increase programmer burden the way explicitly provided information may. However, some feedback schedulers do offer programmers the option to provide additional information on certain \glspl{at}, in order to direct scheduling decisions. The important distinction is whether or not the scheduler can function without this additional information.
 
 
 \section{Work Stealing}\label{existing:workstealing}
-One of the most popular scheduling algorithm in practice (see~\ref{existing:prod}) is work-stealing. This idea, introduce by \cite{DBLP:conf/fpca/BurtonS81}, effectively has each worker work on its local tasks first, but allows the possibility for other workers to steal local tasks if they run out of tasks. \cite{DBLP:conf/focs/Blumofe94} introduced the more familiar incarnation of this, where each workers has queue of tasks to accomplish and workers without tasks steal tasks from random workers. (The Burton and Sleep algorithm had trees of tasks and stole only among neighbours). Blumofe and Leiserson also prove worst case space and time requirements for well-structured computations.
+One of the most popular scheduling algorithms in practice (see~\ref{existing:prod}) is work stealing. This idea, introduced by \cite{DBLP:conf/fpca/BurtonS81}, effectively has each worker work on its local \glspl{at} first, but allows the possibility for other workers to steal local \glspl{at} if they run out of \glspl{at}. \cite{DBLP:conf/focs/Blumofe94} introduced the more familiar incarnation of this, where each worker has a queue of \glspl{at} to accomplish and workers without \glspl{at} steal \glspl{at} from random workers. (The Burton and Sleep algorithm had trees of \glspl{at} and stole only among neighbours.) Blumofe and Leiserson also prove worst-case space and time requirements for well-structured computations.
 
 Many variations of this algorithm have been proposed over the years\cite{DBLP:journals/ijpp/YangH18}, both optimizations of existing implementations and approaches that account for new metrics.
@@ -51,5 +52,5 @@
 \paragraph{Granularity} A significant portion of early work-stealing research concentrated on \newterm{Implicit Parallelism}\cite{wiki:implicitpar}. Since the system is responsible for splitting the work, granularity is a challenge that cannot be left to the programmers (as opposed to \newterm{Explicit Parallelism}\cite{wiki:explicitpar}, where the burden can be left to programmers). In general, fine granularity is better for load balancing and coarse granularity reduces communication overhead. The best performance generally means finding a middle ground between the two. Several methods can be employed, but I believe these are less relevant for threads, which are generally explicit and more coarse grained.
 
-\paragraph{Task Placement} Since modern computers rely heavily on cache hierarchies\cit{Do I need a citation for this}, migrating tasks from one core to another can be .  \cite{DBLP:journals/tpds/SquillanteL93}
+\paragraph{Task Placement} Since modern computers rely heavily on cache hierarchies\cit{Do I need a citation for this}, migrating \glspl{at} from one core to another can be expensive due to the loss of cache affinity. \cite{DBLP:journals/tpds/SquillanteL93}
 
 \todo{The survey is not great on this subject}
@@ -58,10 +59,10 @@
 
 \subsection{Theoretical Results}
-There is also a large body of research on the theoretical aspects of work stealing. These evaluate, for example, the cost of migration\cite{DBLP:conf/sigmetrics/SquillanteN91,DBLP:journals/pe/EagerLZ86}, how affinity affects performance\cite{DBLP:journals/tpds/SquillanteL93,DBLP:journals/mst/AcarBB02,DBLP:journals/ipl/SuksompongLS16} and theoretical models for heterogenous systems\cite{DBLP:journals/jpdc/MirchandaneyTS90,DBLP:journals/mst/BenderR02,DBLP:conf/sigmetrics/GastG10}. \cite{DBLP:journals/jacm/BlellochGM99} examine the space bounds of Work Stealing and \cite{DBLP:journals/siamcomp/BerenbrinkFG03} show that for underloaded systems, the scheduler will complete computations in finite time, \ie is \newterm{stable}. Others show that Work-Stealing is applicable to various scheduling contexts\cite{DBLP:journals/mst/AroraBP01,DBLP:journals/anor/TchiboukdjianGT13,DBLP:conf/isaac/TchiboukdjianGTRB10,DBLP:conf/ppopp/AgrawalLS10,DBLP:conf/spaa/AgrawalFLSSU14}. \cite{DBLP:conf/ipps/ColeR13} also studied how Randomized Work Stealing affects false sharing among tasks.
+There is also a large body of research on the theoretical aspects of work stealing. These evaluate, for example, the cost of migration\cite{DBLP:conf/sigmetrics/SquillanteN91,DBLP:journals/pe/EagerLZ86}, how affinity affects performance\cite{DBLP:journals/tpds/SquillanteL93,DBLP:journals/mst/AcarBB02,DBLP:journals/ipl/SuksompongLS16} and theoretical models for heterogeneous systems\cite{DBLP:journals/jpdc/MirchandaneyTS90,DBLP:journals/mst/BenderR02,DBLP:conf/sigmetrics/GastG10}. \cite{DBLP:journals/jacm/BlellochGM99} examine the space bounds of work stealing and \cite{DBLP:journals/siamcomp/BerenbrinkFG03} show that for underloaded systems, the scheduler completes computations in finite time, \ie is \newterm{stable}. Others show that work stealing is applicable to various scheduling contexts\cite{DBLP:journals/mst/AroraBP01,DBLP:journals/anor/TchiboukdjianGT13,DBLP:conf/isaac/TchiboukdjianGTRB10,DBLP:conf/ppopp/AgrawalLS10,DBLP:conf/spaa/AgrawalFLSSU14}. \cite{DBLP:conf/ipps/ColeR13} also studied how randomized work stealing affects false sharing among \glspl{at}.
 
 However, as \cite{DBLP:journals/ijpp/YangH18} highlights, it is worth mentioning that this theoretical research has mainly focused on ``fully-strict'' computations, \ie workloads that can be fully represented with a Directed Acyclic Graph. It is unclear how well these distributions represent workloads in real-world scenarios.
 
 \section{Preemption}
-One last aspect of scheduling worth mentionning is preemption since many schedulers rely on it for some of their guarantees. Preemption is the idea of interrupting tasks that have been running for too long, effectively injecting suspend points in the applications. There are multiple techniques to achieve this but they all aim to have the effect of guaranteeing that suspend points in a task are never further apart than some fixed duration. While this helps schedulers guarantee that no tasks will unfairly monopolize a worker, preemption can effectively added to any scheduler. Therefore, the only interesting aspect of preemption for the design of scheduling is whether or not to require it.
+One last aspect of scheduling worth mentioning is preemption, since many schedulers rely on it for some of their guarantees. Preemption is the idea of interrupting \glspl{at} that have been running for too long, effectively injecting suspend points into the applications. There are multiple techniques to achieve this, but they all aim to guarantee that suspend points in a \gls{at} are never further apart than some fixed duration. While this helps schedulers guarantee that no \gls{at} will unfairly monopolize a worker, preemption can effectively be added to any scheduler. Therefore, the only interesting aspect of preemption for the design of scheduling is whether or not to require it.
 
 \section{Schedulers in Production}\label{existing:prod}
@@ -69,12 +70,12 @@
 
 \subsection{Operating System Schedulers}
-Operating System Schedulers tend to be fairly complex schedulers, they generally support some amount of real-time, aim to balance interactive and non-interactive tasks and support for multiple users sharing hardware without requiring these users to cooperate. Here are more details on a few schedulers used in the common operating systems: Linux, FreeBsd, Microsoft Windows and Apple's OS X. The information is less complete for operating systems behind closed source.
+Operating-system schedulers tend to be fairly complex; they generally support some amount of real-time scheduling, aim to balance interactive and non-interactive \glspl{at}, and support multiple users sharing hardware without requiring these users to cooperate. Here are more details on a few schedulers used in common operating systems: Linux, FreeBSD, Microsoft Windows and Apple's OS X. The information is less complete for closed-source operating systems.
 
 \paragraph{Linux's CFS}
-The default scheduler used by Linux (the Completely Fair Scheduler)\cite{MAN:linux/cfs,MAN:linux/cfs2} is a feedback scheduler based on CPU time. For each processor, it constructs a Red-Black tree of tasks waiting to run, ordering them by amount of CPU time spent. The scheduler schedules the task that has spent the least CPU time. It also supports the concept of \newterm{Nice values}, which are effectively multiplicative factors on the CPU time spent. The ordering of tasks is also impacted by a group based notion of fairness, where tasks belonging to groups having spent less CPU time are preferred to tasks beloning to groups having spent more CPU time. Linux achieves load-balancing by regularly monitoring the system state\cite{MAN:linux/cfs/balancing} and using some heuristic on the load (currently CPU time spent in the last millisecond plus decayed version of the previous time slots\cite{MAN:linux/cfs/pelt}.).
+The default scheduler used by Linux (the Completely Fair Scheduler)\cite{MAN:linux/cfs,MAN:linux/cfs2} is a feedback scheduler based on CPU time. For each processor, it constructs a Red-Black tree of \glspl{at} waiting to run, ordering them by the amount of CPU time spent. The scheduler schedules the \gls{at} that has spent the least CPU time. It also supports the concept of \newterm{Nice values}, which are effectively multiplicative factors on the CPU time spent. The ordering of \glspl{at} is also impacted by a group-based notion of fairness, where \glspl{at} belonging to groups having spent less CPU time are preferred to \glspl{at} belonging to groups having spent more CPU time. Linux achieves load-balancing by regularly monitoring the system state\cite{MAN:linux/cfs/balancing} and using some heuristic on the load (currently CPU time spent in the last millisecond plus a decayed version of the previous time slots\cite{MAN:linux/cfs/pelt}).
 
-\cite{DBLP:conf/eurosys/LoziLFGQF16} shows that Linux's CFS also does work-stealing to balance the workload of each processors, but the paper argues this aspect can be improved significantly. The issues highlighted sem to stem from Linux's need to support fairness across tasks \emph{and} across users\footnote{Enforcing fairness across users means, for example, that given two users: one with a single task and the other with one thousand tasks, the user with a single task does not receive one one thousandth of the CPU time.}, increasing the complexity.
+\cite{DBLP:conf/eurosys/LoziLFGQF16} shows that Linux's CFS also does work-stealing to balance the workload of each processor, but the paper argues this aspect can be improved significantly. The issues highlighted seem to stem from Linux's need to support fairness across \glspl{at} \emph{and} across users\footnote{Enforcing fairness across users means, for example, that given two users: one with a single \gls{at} and the other with one thousand \glspl{at}, the user with a single \gls{at} does not receive one one thousandth of the CPU time.}, increasing the complexity.
 
-Linux also offers a FIFO scheduler, a real-time schedulerwhich runs the highest-priority task, and a round-robin scheduler, which is an extension of the fifo-scheduler that adds fixed time slices. \cite{MAN:linux/sched}
+Linux also offers a FIFO scheduler, a real-time scheduler which runs the highest-priority \gls{at}, and a round-robin scheduler, which is an extension of the FIFO scheduler that adds fixed time slices. \cite{MAN:linux/sched}
 
 \paragraph{FreeBSD}
@@ -82,5 +83,5 @@
 
 \paragraph{Windows(OS)}
-Microsoft's Operating System's Scheduler\cite{MAN:windows/scheduler} is a feedback scheduler with priorities. It supports 32 levels of priorities, some of which are reserved for real-time and prviliged applications. It schedules tasks based on the highest priorities (lowest number) and how much cpu time each tasks have used. The scheduler may also temporarily adjust priorities after certain effects like the completion of I/O requests.
+Microsoft's Operating System's Scheduler\cite{MAN:windows/scheduler} is a feedback scheduler with priorities. It supports 32 levels of priorities, some of which are reserved for real-time and privileged applications. It schedules \glspl{at} based on the highest priority (lowest number) and how much CPU time each \gls{at} has used. The scheduler may also temporarily adjust priorities after certain events, such as the completion of I/O requests.
 
 \todo{load balancing}
@@ -99,5 +100,5 @@
 
 \subsection{User-Level Schedulers}
-By comparison, user level schedulers tend to be simpler, gathering fewer metrics and avoid complex notions of fairness. Part of the simplicity is due to the fact that all tasks have the same user, and therefore cooperation is both feasible and probable.
+By comparison, user-level schedulers tend to be simpler, gathering fewer metrics and avoiding complex notions of fairness. Part of the simplicity is due to the fact that all \glspl{at} have the same user, and therefore cooperation is both feasible and probable.
 \paragraph{Go}
 Go's scheduler uses a Randomized Work Stealing algorithm that has a global runqueue(\emph{GRQ}) and each processor(\emph{P}) has both a fixed-size runqueue(\emph{LRQ}) and a high-priority next ``chair'' holding a single element.\cite{GITHUB:go,YTUBE:go} Preemption is present, but only at function call boundaries.
@@ -116,5 +117,5 @@
 
 \paragraph{Intel\textregistered ~Threading Building Blocks}
-\newterm{Thread Building Blocks}(TBB) is Intel's task parellelism\cite{wiki:taskparallel} framework. It runs tasks or \newterm{jobs}, schedulable objects that must always run to completion, on a pool of worker threads. TBB's scheduler is a variation of Randomized Work Stealing that also supports higher-priority graph-like dependencies\cite{MAN:tbb/scheduler}. It schedules tasks as follows (where \textit{t} is the last task completed):
+\newterm{Threading Building Blocks}~(TBB) is Intel's task parallelism\cite{wiki:taskparallel} framework. It runs \newterm{jobs}, which are uninterruptible \glspl{at} that must always run to completion, on a pool of worker threads. TBB's scheduler is a variation of Randomized Work Stealing that also supports higher-priority graph-like dependencies\cite{MAN:tbb/scheduler}. It schedules \glspl{at} as follows (where \textit{t} is the last \gls{at} completed):
 \begin{displayquote}
 	\begin{enumerate}
@@ -136,5 +137,5 @@
 
 \paragraph{Grand Central Dispatch}
-This is an API produce by Apple\cit{Official GCD source} that offers task parellelism\cite{wiki:taskparallel}. Its distinctive aspect is that it uses multiple ``Dispatch Queues'', some of which are created by programmers. These queues each have their own local ordering guarantees, \eg tasks on queue $A$ are executed in \emph{FIFO} order.
+This is an API produced by Apple\cit{Official GCD source} that offers task parallelism\cite{wiki:taskparallel}. Its distinctive aspect is its use of multiple ``Dispatch Queues'', some of which are created by programmers. Each queue has its own local ordering guarantees, \eg \glspl{at} on queue $A$ are executed in \emph{FIFO} order.
 
 \todo{load balancing and scheduling}
Index: doc/theses/thierry_delisle_PhD/thesis/text/io.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/text/io.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -173,5 +173,5 @@
 The consequence is that the amount of parallelism used to prepare submissions for the next system call is limited.
 Beyond this limit, the length of the system call is the throughput limiting factor.
-I concluded from early experiments that preparing submissions seems to take about as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}.
+I concluded from early experiments that preparing submissions seems to take at most as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}.
 Therefore the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances.
 Similarly to scheduling, this sharding can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two.
@@ -200,5 +200,5 @@
 The only added complexity is that the number of SQEs is fixed, which means allocation can fail.
 
-Allocation failures need to be pushed up to the routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available.
+Allocation failures need to be pushed up to a routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available.
 Furthermore, the routing algorithm should block operations up-front if none of the instances have available SQEs.
 
@@ -214,5 +214,5 @@
 
 In the case of designating a \gls{thrd}, ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests would be batched together and one of the \glspl{thrd} would do the system call on behalf of the others, referred to as the \newterm{submitter}.
-In practice however, it is important that the \io requests are not left pending indefinitely and as such, it may be required to have a current submitter and a next submitter.
+In practice however, it is important that \io requests are not left pending indefinitely and as such, it may be required to have a ``next submitter'' that guarantees everything missed by the current submitter is seen by the next one.
 Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call will include their request.
 Once the system call is done, the submitter must also free SQEs so that the allocator can reuse them.
@@ -223,17 +223,16 @@
 If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events.
 A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled.
-This design is especially convenient for reasons explained in Chapter~\ref{practice}.
 
 With this pool of instances approach, the big advantage is that it is fairly flexible.
 It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions.
-It also can gracefully handles running out of ressources, SQEs or the kernel returning @EBUSY@.
+It can also gracefully handle running out of resources (SQEs) or the kernel returning @EBUSY@.
 The down side to this is that many of the steps used for submitting need complex synchronization to work properly.
 The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs and handle SQEs being freed.
 The submission side needs to safely append SQEs to the ring buffer, correctly handle chains, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused and handle the kernel returning @EBUSY@.
-All this synchronization may have a significant cost and, compare to the next approach presented, this synchronization is entirely overhead.
+All this synchronization may have a significant cost and, compared to the next approach presented, this synchronization is entirely overhead.
 
 \subsubsection{Private Instances}
 Another approach is to simply create one ring instance per \gls{proc}.
-This alleviate the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps.
+This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps.
 This is effectively the same requirement as using @thread_local@ variables.
 Since SQEs that are allocated must be submitted to the same ring, on the same \gls{proc}, this effectively forces the application to submit SQEs in allocation order
@@ -331,5 +330,18 @@
 \paragraph{Pending Allocations} can be more complicated to handle.
 If the arbiter has available instances, the arbiter can attempt to directly hand over the instance and satisfy the request.
-Otherwise
+Otherwise it must hold onto the list of threads until SQEs are made available again.
+This handling becomes that much more complex if pending allocations require more than one SQE, since the arbiter must decide between satisfying requests in FIFO order or satisfying requests for fewer SQEs first.
+
+While this arbiter has the potential to solve many of the problems mentioned above, it also introduces a significant amount of complexity.
+Tracking which processors are borrowing which instances and which instances have SQEs available ends up adding a significant synchronization prelude to any I/O operation.
+Any submission must start with a handshake that pins the currently borrowed instance, if available.
+An attempt to allocate is then made, but the arbiter can concurrently be attempting to allocate from the same instance from a different \gls{hthrd}.
+Once the allocation is completed, the submission must still check that the instance is still borrowed before attempting to flush.
+These extra synchronization steps end up having a similar cost to the multiple shared instances approach.
+Furthermore, if the number of instances does not match the number of processors actively submitting I/O, the system can fall into a state where instances are constantly being revoked, cycling among the processors and leading to significant cache deterioration.
+For these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.
+
+\subsubsection{Private Instances V2}
+
 
 
@@ -394,12 +406,50 @@
 Finally, the last important part of the \io subsystem is its interface. There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. The new \io subsystem can replace the C runtime's API or extend it, and in the latter case the interface can range from very similar to vastly different. The following sections discuss some useful options using @read@ as an example. The standard Linux interface for C is :
 
-@ssize_t read(int fd, void *buf, size_t count);@.
+@ssize_t read(int fd, void *buf, size_t count);@
 
 \subsection{Replacement}
-Replacing the C \glsxtrshort{api}
+Replacing the C \glsxtrshort{api} is the most intrusive and draconian approach.
+The goal is to convince the compiler and linker to redirect any calls to @read@ to the \CFA implementation instead of glibc's.
+This has the advantage of potentially working transparently and supporting existing binaries without needing recompilation.
+It also offers a, presumably, well-known and familiar API that C programmers can simply continue to work with.
+However, this approach also entails a plethora of subtle technical challenges, which generally boil down to making a perfect replacement.
+If the \CFA interface replaces only \emph{some} of the calls to glibc, then this can easily lead to esoteric concurrency bugs.
+Since the gcc ecosystem does not offer a scheme for such a perfect replacement, this approach was rejected as being laudable but infeasible.
 
 \subsection{Synchronous Extension}
+Another interface option is to offer an interface that is different in name only. For example:
+
+@ssize_t cfa_read(int fd, void *buf, size_t count);@
+
+\noindent This is much more feasible and still familiar to C programmers.
+It comes with the caveat that any code attempting to use it must be recompiled, which can be a big problem considering the amount of existing legacy C binaries.
+However, it has the advantage of implementation simplicity.
 
 \subsection{Asynchronous Extension}
+It is important to mention that there is a certain irony to using only synchronous, therefore blocking, interfaces for a feature often referred to as ``non-blocking'' \io.
+A fairly traditional way of doing this is using futures\cit{wikipedia futures}.
+A simple way of doing so is as follows:
+
+@future(ssize_t) read(int fd, void *buf, size_t count);@
+
+\noindent Note that this approach is not necessarily the most idiomatic usage of futures.
+The definition of read above ``returns'' the read content through an output parameter, which cannot be synchronized on.
+A more classical asynchronous API could look more like:
+
+@future([ssize_t, void *]) read(int fd, size_t count);@
+
+\noindent However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned.
+Because of the performance implications of this, the first approach is considered preferable as it is more familiar to C programmers.
 
 \subsection{Interface directly to \lstinline{io_uring}}
+Finally, another relevant interface option is to directly expose the underlying \texttt{io\_uring} interface. For example:
+
+@array(SQE, want) cfa_io_allocate(int want);@
+
+@void cfa_io_submit( const array(SQE, have) & );@
+
+\noindent This offers more flexibility to users wanting to fully use all of the \texttt{io\_uring} features.
+However, it is not the most user-friendly option.
+It obviously imposes a strong dependency between user code and \texttt{io\_uring}, while at the same time restricting users to usages that are compatible with how \CFA internally uses \texttt{io\_uring}.
+
+
Index: doc/theses/thierry_delisle_PhD/thesis/text/practice.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/text/practice.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/text/practice.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -2,8 +2,126 @@
 The scheduling algorithm described in Chapter~\ref{core} addresses scheduling in a stable state.
 However, it does not address problems that occur when the system changes state.
-Indeed the \CFA runtime, supports expanding and shrinking the number of KTHREAD\_place \todo{add kthrd to glossary}, both manually and, to some extent automatically.
+Indeed, the \CFA runtime supports expanding and shrinking the number of \procs, both manually and, to some extent, automatically.
 This entails that the scheduling algorithm must support these transitions.
 
-\section{Resizing}
+More precisely, \CFA supports adding \procs using the RAII object @processor@.
+These objects can be created and destroyed at any time.
+They are normally created as automatic stack variables, but this is not a requirement.
+
+The consequence is that the scheduler and \io subsystems must support \procs coming in and out of existence.
+
+\section{Manual Resizing}
+The consequence of dynamically changing the number of \procs is that all internal arrays that are sized based on the number of \procs need to be \texttt{realloc}ed.
+This also means that any references into these arrays, pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing because there is no guarantee the \proc causing the shrink had the highest index. Therefore indexes need to be reassigned to preserve contiguous indexes.}.
+
+There are no performance requirements, within reason, for resizing since this is usually considered as part of setup and teardown.
+However, this operation has strict correctness requirements since shrinking and idle sleep can easily lead to deadlocks.
+It should also avoid, as much as possible, any effect on performance when the number of \procs remains constant.
+This latter requirement prohibits simple solutions, like adding a global lock to these arrays.
+
+\subsection{Read-Copy-Update}
+One solution is to use the Read-Copy-Update\cite{wiki:rcu} pattern.
+In this pattern, resizing is done by creating a copy of the internal data structures, updating the copy with the desired changes, and then attempting an Indiana Jones switch to replace the original with the copy.
+This approach potentially has the advantage that it may not need any synchronization to do the switch.
+The switch definitely implies a race where \procs could still use the previous, original, data structure after the copy was switched in.
+The important question then becomes whether or not this race can be recovered from.
+If the changes that arrived late can be transferred from the original to the copy then this solution works.
+
+For linked lists, dequeuing is somewhat of a problem.
+Dequeuing from the original will not necessarily update the copy, which could lead to multiple \procs dequeuing the same \at.
+Fixing this requires making the array contain pointers to subqueues rather than the subqueues themselves.
+
+Another challenge is that the original must be kept until all \procs have witnessed the change.
+This is a straightforward memory reclamation challenge, but it does mean that every operation will need \emph{some} form of synchronization.
+If each of these operations does need synchronization, then it is possible a simpler solution achieves the same performance.
+In addition to the classic challenge of memory reclamation, transferring the original data to the copy before reclaiming it poses additional challenges,
+especially merging subqueues while having a minimal impact on fairness and locality.
+
+\subsection{Read-Writer Lock}
+A simpler approach would be to use a \newterm{Readers-Writer Lock}\cite{wiki:rwlock}, where resizing requires acquiring the lock as a writer while simply enqueuing/dequeuing \ats requires acquiring the lock as a reader.
+Using a Readers-Writer lock solves the problem of dynamically resizing and leaves the challenge of finding or building a lock with sufficiently good read-side performance.
+Since this is not a very complex challenge and an ad-hoc solution is perfectly acceptable, building a Readers-Writer lock was the path taken.
+
+To maximize reader scalability, readers should not contend with each other when attempting to acquire and release the critical section.
+This effectively requires that each reader have its own piece of memory to mark as locked and unlocked.
+Readers then wait for writers to finish the critical section and acquire their local spinlocks.
+Writers acquire the global lock, so writers have mutual exclusion among themselves, and then acquire each of the local reader locks.
+Acquiring all the local locks guarantees mutual exclusion between the readers and the writer, while the wait on the read side prevents readers from continuously starving the writer.
+\todo{reference listings}
+
+\begin{lstlisting}
+void read_lock() {
+	// Step 1 : make sure no writers in
+	while write_lock { Pause(); }
+
+	// May need fence here
+
+	// Step 2 : acquire our local lock
+	while atomic_xchg( tls.lock ) {
+		Pause();
+	}
+}
+
+void read_unlock() {
+	tls.lock = false;
+}
+\end{lstlisting}
+
+\begin{lstlisting}
+void write_lock()  {
+	// Step 1 : lock global lock
+	while atomic_xchg( write_lock ) {
+		Pause();
+	}
+
+	// Step 2 : lock per-proc locks
+	for t in all_tls {
+		while atomic_xchg( t.lock ) {
+			Pause();
+		}
+	}
+}
+
+void write_unlock() {
+	// Step 1 : release local locks
+	for t in all_tls {
+		t.lock = false;
+	}
+
+	// Step 2 : release global lock
+	write_lock = false;
+}
+\end{lstlisting}
 
 \section{Idle-Sleep}
+In addition to users manually changing the number of \procs, it is desirable to support ``removing'' \procs when there are not enough \ats for all the \procs to be useful.
+While manual resizing is expected to be rare, the number of \ats is expected to vary much more which means \procs may need to be ``removed'' for only short periods of time.
+Furthermore, race conditions that spuriously lead to the impression no \ats are ready are actually common in practice.
+Therefore \procs should not be actually \emph{removed} but simply put into an idle state where the \gls{kthrd} is blocked until more \ats become ready.
+This state is referred to as \newterm{Idle-Sleep}.
+
+Idle sleep effectively encompasses several challenges.
+First, some data structure needs to keep track of all \procs that are in idle sleep.
+Because idle sleep can be spurious, this data structure has strict performance requirements, in addition to the strict correctness requirements.
+Next, some tool must be used to block \glspl{kthrd}, \eg \texttt{pthread\_cond\_wait} or pthread semaphores.
+The complexity here is to support \at parking and unparking, timers, \io operations and all other \CFA features with minimal complexity.
+Finally, idle sleep also includes a heuristic to determine the appropriate number of \procs to be in idle sleep at any given time.
+This third challenge is, however, outside the scope of this thesis because developing a general heuristic is involved enough to justify its own work.
+The \CFA scheduler simply follows the ``Race-to-Idle''\cit{https://doi.org/10.1137/1.9781611973099.100} approach, where a sleeping \proc is woken any time an \at becomes ready and \procs go to idle sleep any time they run out of work.
+
+
+\section{Tracking Sleepers}
+Tracking which \procs are in idle sleep requires a data structure holding all the sleeping \procs, but more importantly it requires a concurrent \emph{handshake} so that no \at is stranded on a ready-queue with no active \proc.
+The classic challenge occurs when a \at is made ready while a \proc is going to sleep: there is a race where the new \at may not see the sleeping \proc and the sleeping \proc may not see the ready \at.
+
+Furthermore, the ``Race-to-Idle'' approach means that there is some
+
+\section{Sleeping}
+
+\subsection{Event FDs}
+
+\subsection{Epoll}
+
+\subsection{\texttt{io\_uring}}
+
+\section{Reducing Latency}
Index: doc/theses/thierry_delisle_PhD/thesis/thesis.tex
===================================================================
--- doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision ba897d2136bc02c8b8a01751cd212c9a145a8df7)
+++ doc/theses/thierry_delisle_PhD/thesis/thesis.tex	(revision 2e9b59be1a4f6206dcfcb10cb3b400d3a5948c6c)
@@ -202,4 +202,8 @@
 
 \newcommand\io{\glsxtrshort{io}\xspace}%
+\newcommand\at{\gls{at}\xspace}%
+\newcommand\ats{\glspl{at}\xspace}%
+\newcommand\proc{\gls{proc}\xspace}%
+\newcommand\procs{\glspl{proc}\xspace}%
 
 %======================================================================
