Index: doc/theses/mike_brooks_MMath/list.tex
===================================================================
--- doc/theses/mike_brooks_MMath/list.tex	(revision 502ded0c74289414ef90b7ae2e4fa7ea3dfde332)
+++ doc/theses/mike_brooks_MMath/list.tex	(revision 9367cd64c947b7dabfcc1d50fbb8d3fb76854182)
@@ -7,4 +7,78 @@
 
 
+\section{Plan-9 Inheritance}
+\label{s:Plan9Inheritance}
+
+This chapter uses a form of inheritance from the Plan-9 C dialect~\cite[\S~3.3]{Thompson90new}, which is supported by @gcc@ and @clang@ using @-fplan9-extensions@.
+\CFA has its own variation of the Plan-9 mechanism, where the nested type is denoted using @inline@.
+\begin{cfa}
+union U {  int x;  double y;  char z; };
+struct W { int i;  double j;  char k; };
+struct S {
+	@inline@ struct W;  $\C{// extended Plan-9 inheritance}$
+	unsigned int tag;
+	@inline@ U;			$\C{// extended Plan-9 inheritance}$
+} s;
+\end{cfa}
+Inline inheritance is containment, where the inlined field is unnamed and the type's internal fields are hoisted into the containing structure.
+Hence, the field names must be unique, unlike \CC nested types, but any inlined type-names are at a nested scope level, unlike aggregate nesting in C.
+Note, the position of the containment is normally unimportant, unless there is some form of memory or @union@ overlay.
+Finally, the inheritance declaration of @U@ is not prefixed with @union@.
+Like \CC, \CFA allows optional prefixing of type names with their kind, \eg @struct@, @union@, and @enum@, unless there is ambiguity with variable names in the same scope.
+
+\VRef[Figure]{f:Plan9Polymorphism} shows the key polymorphic feature of Plan-9 inheritance: implicit conversion of values and pointers for nested types.
+In the example, there are implicit conversions from @S@ to @U@ and @S@ to @W@, extracting the appropriate value or pointer for the substructure.
+\VRef[Figure]{f:DiamondInheritancePattern} shows that complex multiple-inheritance patterns are possible, like the \newterm{diamond pattern}~\cite[\S~6.1]{Stroustrup89}\cite[\S~4]{Cargill91}.
+Currently, the \CFA type-system does not support @virtual@ inheritance.
+
+\begin{figure}
+\begin{cfa}
+void f( U, U * ); $\C{// value, pointer}$
+void g( W, W * ); $\C{// value, pointer}$
+U u, * up;   W w, * wp;   S s, * sp;
+u = s;   up = sp; $\C{// value, pointer}$
+w = s;   wp = sp; $\C{// value, pointer}$
+f( s, &s ); $\C{// value, pointer}$
+g( s, &s ); $\C{// value, pointer}$
+\end{cfa}
+\caption{Plan-9 Polymorphism}
+\label{f:Plan9Polymorphism}
+\end{figure}
+
+\begin{figure}
+\setlength{\tabcolsep}{10pt}
+\begin{tabular}{ll@{}}
+\multicolumn{1}{c}{\CC}	& \multicolumn{1}{c}{\CFA}	\\
+\begin{c++}
+struct B { int b; };
+struct L : @public B@ { int l, w; };
+struct R : @public B@ { int r, w; };
+struct T : @public L, R@ { int t; };
+
+T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
+t.t = 42;
+t.l = 42;
+t.r = 42;
+((L &)t).b = 42;  // disambiguate
+\end{c++}
+&
+\begin{cfa}
+struct B { int b; };
+struct L { @inline B;@ int l, w; };
+struct R { @inline B;@ int r, w; };
+struct T { @inline L; inline R;@ int t; };
+
+T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
+t.t = 42;
+t.l = 42;
+t.r = 42;
+((L &)t).b = 42;  // disambiguate, proposed solution t.L.b = 42;
+\end{cfa}
+\end{tabular}
+\caption{Diamond Non-Virtual Inheritance Pattern}
+\label{f:DiamondInheritancePattern}
+\end{figure}
+
+
 \section{Features}
 
@@ -17,16 +91,11 @@
 This design covers system and data management issues stated in \VRef{toc:lst:issue}.
 
-\VRef[Figure]{fig:lst-features-intro} continues the running @req@ example from \VRef[Figure]{fig:lst-issues-attach} using the \CFA list.
+\VRef[Figure]{fig:lst-features-intro} continues the running @req@ example from \VRef[Figure]{fig:lst-issues-attach} using the \CFA list @dlist@.
 The \CFA link attachment is intrusive so the resulting memory layout is per user node, as for the LQ version of \VRef[Figure]{f:Intrusive}.
-The \CFA framework provides generic type @dlink( T, T )@ for the two link fields (front and back).
-A user inserts the links into the @req@ structure via \CFA inline-inheritance from the Plan-9 C dialect~\cite[\S~3.3]{Thompson90new}.
-Inline inheritance is containment, where the inlined field is unnamed but the type's internal fields are hoisted into the containing structure.
-Hence, the field names must be unique, unlike \CC nested types, but the type names are at a nested scope level, unlike aggregate nesting in C.
-Note, the position of the containment is normally unimportant, unless there is some form of memory or @union@ overlay.
-The key feature of inlined inheritance is that a pointer to the containing structure is automatically converted to a pointer to any anonymous inline field for assignments and function calls, providing containment inheritance with implicit subtyping.
-Therefore, a reference to a @req@ is implicitly convertible to @dlink@ in assignments and function calls.
-% These links have a nontrivial, user-specified location within the @req@ structure;
-% this convention encapsulates the implied pointer arithmetic safely.
-The links in @dlist@ point at (links) in the containing node, know the offsets of all links (data is abstract), and any field-offset arithmetic or link-value changes are safe and abstract.
+The \CFA framework provides generic type @dlink( T )@ (not to be confused with @dlist@) for the two link fields (front and back).
+A user inserts the links into the @req@ structure via \CFA inline-inheritance \see{\VRef{s:Plan9Inheritance}}.
+Lists leverage the automatic conversion of a pointer to anonymous inline field for assignments and function calls.
+Therefore, a reference to a @req@ is implicitly convertible to @dlink@ in both contexts.
+The links in a @dlink@ point at a neighbouring embedded @dlink@, know the offsets of all links (data is abstract but accessible), and any field-offset arithmetic or link-value changes are safe and abstract.
 
 \begin{figure}
@@ -39,13 +108,32 @@
 \end{figure}
 
+\VRef[Figure]{f:dlistOutline} shows the outline for types @dlink@ and @dlist@.
+Note, the first @forall@ clause is distributed across all the declarations in its containing block, eliminating repetition on each declaration.
+The second nested @forall@ on @dlist@ is also distributed and adds an optional second type parameter, @tLinks@, denoting the linking axis \see{\VRef{s:Axis}}, \ie the kind of list this node can appear on.
+
+\begin{figure}
+\begin{cfa}
+forall( tE & ) {					$\C{// distributed}$
+	struct @dlink@ { ...  };		$\C{// abstract type}$
+	static inline void ?{}( dlink( tE ) & this );  $\C{// constructor}$
+
+	forall( tLinks & = dlink( tE ) ) { $\C{// distributed, default type for axis}$
+		struct @dlist@ { ... };		$\C{// abstract type}$
+		static inline void ?{}( dlist( tE, tLinks ) & this ); $\C{// constructor}$
+	}
+}
+\end{cfa}
+\caption{\lstinline{dlink} / \lstinline{dlist} Outline}
+\label{f:dlistOutline}
+\end{figure}
+
 \VRef[Figure]{fig:lst-features-multidir} shows how the \CFA library supports multi-inline links, so a node can be on one or more lists simultaneously.
 The declaration of @req@ has two inline-inheriting @dlink@ occurrences.
 The first of these gives a type named @req.by_pri@, @req@ inherits from it, and it inherits from @dlink@.
 The second line @req.by_rqr@ is similar to @req.by_pri@.
-Thus, there is a diamond, non-virtual, inheritance from @req@ to @dlink@, with @by_pri@ and @by_rqr@ being the mid-level types.
-
-Disambiguation occurs in the declarations of the list-head objects: @reqs_pri_global@, @reqs_rqr_42@, @reqs_rqr_17@, and @reqs_rqr_99@.
-The type of the variable @reqs_pri_global@ is @dlist(req, req.by_pri)@, meaning operations called on @reqs_pri_global@ are implicitly disambiguated.
-In the example, the calls @insert_first(reqs_pri_global, ...)@ imply, ``here, we are working by priority.''
+Thus, there is a diamond, non-virtual, inheritance from @req@ to @dlink@, with @by_pri@ and @by_rqr@ being the mid-level types \see{\VRef[Figure]{f:DiamondInheritancePattern}}.
+
+The declarations of the list-head objects, @reqs_pri@, @reqs_rqr_42@, @reqs_rqr_17@, and @reqs_rqr_99@, bind which link nodes in @req@ are used by this list.
+Hence, the type of the variable @reqs_pri@, @dlist(req, req.by_pri)@, means operations called on @reqs_pri@ implicitly select (disambiguate) the correct @dlink@s, \eg the calls @insert_first(reqs_pri, ...)@ imply, ``here, we are working by priority.''
 As in \VRef[Figure]{fig:lst-issues-multi-static}, three lists are constructed, a priority list containing all nodes, a list with only nodes containing the value 42, and a list with only nodes containing the value 17.
 
@@ -69,62 +157,49 @@
 \end{figure}
 
-The \CFA library also supports the common case, of single directionality, more naturally than LQ.  
-\VRef[Figure]{fig:lst-features-intro} shows a single-direction list done with no contrived name for the link direction,
-where \VRef[Figure]{f:Intrusive} adds the unnecessary field name, @d@.
-In \CFA, a user doing a single direction (\VRef[Figure]{fig:lst-features-intro}) sets up a simple inheritance with @dlink@, and declares a list head to have the simpler type @dlist( T )@.
-In contrast, (\VRef[Figure]{fig:lst-features-multidir}) sets up a diamond inheritance with @dlink@, and declares a list head to have the more-informed type @dlist( T, DIR )@.
-
-The directionality issue also has an advanced corner-case that needs treatment.
-When working with multiple directions, calls like @insert_first@ benefit from implicit direction disambiguation;
-however, other calls like @insert_after@ still require explicit disambiguation, \eg the call
-\begin{cfa}
-insert_after(r1, r2);
-\end{cfa}
-does not have enough information to clarify which of a request's simultaneous list directions is intended.
+The list library also supports the common case of single directionality more naturally than LQ.  
+Returning to \VRef[Figure]{fig:lst-features-intro}, the single-direction list has no contrived name for the link direction as it uses the default type in the definition of @dlist@;
+in contrast, the LQ list in \VRef[Figure]{f:Intrusive} adds the unnecessary field name @d@.
+In \CFA, a single direction list sets up a single inheritance with @dlink@, and the default list axis is the @dlink@ itself.
+
+When operating on a list with several directions and operations that do not take the list head, the list axis can be ambiguous.
+For example, a call like @insert_after( r1, r2 )@ does not have enough information to know which axes to select implicitly.
 Is @r2@ supposed to be the next-priority request after @r1@, or is @r2@ supposed to join the same-requester list of @r1@?
-As such, the \CFA compiler gives an ambiguity error for this call.
-To resolve the ambiguity, the list library provides a hook for applying the \CFA language's scoping and priority rules.
-It applies as:
-\begin{cfa}
-with ( DLINK_VIA(req, req.pri) ) insert_after(r1, r2);
+As such, the \CFA type-system gives an ambiguity error for this call.
+There are multiple ways to resolve the ambiguity.
+The simplest is an explicit cast on each call to select the specific axis, \eg @insert_after( (by_pri)r1, r2 )@.
+However, multiple explicit casts are tedious and error-prone.
+To mitigate this issue, the list library provides a hook for applying the \CFA language's scoping and priority rules.
+\begin{cfa}
+with ( DLINK_VIA( req, req.by_pri ) ) insert_after( r1, r2 );
 \end{cfa}
 Here, the @with@ statement opens the scope of the object type for the expression;
 hence, the @DLINK_VIA@ result causes one of the list directions to become a more attractive candidate to \CFA's overload resolution.
-This boost applies within the scope of the following statement, but could also be a custom block or an entire function body.
+This boost can be applied across multiple statements in a block or an entire function body.
 \begin{cquote}
 \setlength{\tabcolsep}{15pt}
 \begin{tabular}{@{}ll@{}}
 \begin{cfa}
-void f() @with( DLINK_VIA(req, req.pri) )@ {
-	...
-
-	insert_after(r1, r2);
-
-	...
-}
+@with( DLINK_VIA( req, req.by_pri ) ) {@
+	...  insert_after( r1, r2 );  ...
+@}@
 \end{cfa}
 &
 \begin{cfa}
-void f() {
-	...
-	@with( DLINK_VIA(req, req.pri) )@ {
-		...  insert_after(r1, r2);  ...
-	}
-	...
-}
+void f() @with( DLINK_VIA( req, req.by_pri ) ) {@
+	... insert_after( r1, r2 ); ...
+@}@
 \end{cfa}
 \end{tabular}
 \end{cquote}
-By using a larger scope, a user can put code within that acts as if there is only one list direction.
-This boost is needed only when operating on a list with several directions, using operations that do not take the list head.
-Otherwise, the sole applicable list direction \emph{just works}.
-
-Unlike \CC templates container-types, the \CFA library works completely within the type system;
-both @dlink@ and @dlist@ are ordinary types.
+Within the @with@, the code acts as if there is only one list direction.
+
+Unlike the \CC template container-types, the \CFA library works completely within the type system;
+both @dlink@ and @dlist@ are ordinary types, not language macros.
 There is no textual expansion other than header-included static-inline functions for performance.
-Errors in user code are reported only with mention of the library's declarations.
-Finally, the library is separately compiled from the usage code.
-
-The \CFA library works in headed and headless modes.  TODO: elaborate.
+Hence, errors in user code are reported with reference only to the library's declarations.
+Finally, the library is separately compiled from the usage code, modulo inlining.
+
+The \CFA library works in headed and headless modes.
+\PAB{TODO: elaborate.}
 
 
@@ -142,9 +217,11 @@
 @last@ returns a reference to the last node of the list without removing it or @0p@ if the list is empty.
 \item
-@insert_before@ adds a node before the specified node and returns the added node for cascading.
-\item
-@insert_after@ adds a node after the specified node and returns the added node for cascading.
-\item
-@remove@ removes the specified node from the list (any location) and returns a reference to the node.
+@insert_before@ adds a node before a specified node \see{\lstinline{insert_last} for insertion at the end}\footnote{
+Some list packages allow \lstinline{0p} (\lstinline{nullptr}) for the before/after node implying insert/remove at the start/end of the list, respectively.
+However, this inserts an \lstinline{if} statement in the fastpath of a potentially commonly used list operation.}
+\item
+@insert_after@ adds a node after a specified node \see{\lstinline{insert_first} for insertion at the start}\footnotemark[\value{footnote}].
+\item
+@remove@ removes a specified node from the list (any location) and returns a reference to the node.
 \item
 @iter@ creates an iterator for the list.
@@ -158,7 +235,7 @@
 @isLast@ returns true if the node is the last node in the list and false otherwise.
 \item
-@pred@ returns a reference to the previous (predecessor, towards first) node before the specified node or @0p@ if the specified node is the first node in the list.
-\item
-@next@ returns a reference to the next (successor, towards last) node after the specified node or @0p@ if the specified node is the last node in the list.
+@pred@ returns a reference to the previous (predecessor, towards first) node before a specified node or @0p@ if the specified node is the first node in the list.
+\item
+@next@ returns a reference to the next (successor, towards last) node after a specified node or @0p@ if the specified node is the last node in the list.
 \item
 @insert_first@ adds a node to the start of the list so it becomes the first node and returns a reference to the node for cascading.
@@ -236,5 +313,5 @@
 The choice depends on how the programmer wants to access the fields: @it->f@ or @it.f@.
 The following examples use a reference because the loop body manipulates the node values rather than the list pointers.
-The end of iteration is denoted by the loop cursor having the null pointer, denoted @0p@ in \CFA.
+The end of iteration is denoted by the loop cursor returning @0p@.
 
 \noindent
@@ -443,7 +520,7 @@
 
 \VRef[Figure]{fig:lst-impl-links} continues the running @req@ example, now showing the \CFA list library's internal representation.
-The @dlink@ structure contains exactly two pointers: @next@ and @prev@, which are opaque to a user.
+The @dlink@ structure contains exactly two pointers: @next@ and @prev@, which are opaque.
 Even though the user-facing list model is linear, the CFA library implements all listing as circular.
-This choice helps achieve uniform end treatment and TODO finish summarizing benefit.
+This choice helps achieve uniform end treatment and \PAB{TODO finish summarizing benefit}.
 A link pointer targets a neighbouring @dlink@ structure, rather than a neighbouring @req@.
 (Recall, the running example has the user putting a @dlink@ within a @req@.)
@@ -451,5 +528,5 @@
 \begin{figure}
 	\centering
-	\includegraphics{lst-impl-links.pdf}
+	\includegraphics[width=\textwidth]{lst-impl-links.pdf}
 	\caption{
 		\CFA list library representations for the cases under discussion.
@@ -458,16 +535,16 @@
 \end{figure}
 
-System-added link pointers (dashed lines) are internally tagged to indicate linear endpoints that are the circular pointers.
+Circular link-pointers (dashed lines) are tagged internally in the pointer to indicate linear endpoints.
 Links among neighbour nodes are not tagged.
-Iteration reports ``has more elements'' when crossing natural links, and ``no more elements'' upon reaching a tagged link.
+Iteration reports ``has more elements'' when accessing untagged links, and ``no more elements'' when accessing tagged links.
+Hence, the tags are set on the links that a user cannot navigate.
 
 In a headed list, the list head (@dlist(req)@) acts as an extra element in the implementation-level circularly-linked list.
-The content of a @dlist@ is a (private) @dlink@, with the @next@ pointer to the first element, and the @prev@ pointer to the last element.
-Since the head wraps a @dlink@, as does a @req@ does too, and since a link-pointer targets a @dlink@, the resulting cycle is among @dlink@ structures, situated inside of header/node.
+The content of a @dlist@ header is a (private) @dlink@, with the @next@ pointer to the first element, and the @prev@ pointer to the last element.
+Since the head wraps a @dlink@, as does @req@, and since a link-pointer targets a @dlink@, the resulting cycle is among @dlink@ structures, situated inside the header and nodes.
 An untagged pointer points within a @req@, while a tagged pointer points within a list head.
 In a headless list, the circular backing list is only among @dlink@s within @req@s.
-The tags are set on the links that a user cannot navigate.
-
-No distinction is made between an unlisted item under a headed model and a singleton list under a headless model.
+
+No distinction is made between an unlisted node (top left and middle) under a headed model and a singleton list under a headless model.
 Both are represented as an item referring to itself, with both tags set.
 
@@ -476,8 +553,12 @@
 \label{toc:lst:assess}
 
+This section examines the performance of the discussed list implementations.
+The goal is to show that the \CFA lists are competitive with other designs; however, the different list designs may not have equivalent functionality, so it is impossible to select a winner encompassing both functionality and execution performance.
+
+
 \subsection{Add-Remove Performance}
 
-The fundamental job of a linked-list library is to manage the links that connect users' items.
-Any link management is an action that causes pair(s) of elements to become, or cease to be, adjacent.
+The fundamental job of a linked-list library is to manage the links that connect nodes.
+Any link management is an action that causes pair(s) of elements to become, or cease to be, adjacent in the list.
 Thus, adding and removing an element are the sole primitive actions.
 
@@ -485,15 +566,17 @@
 These instruction sequences may have cases that proceed (in a modern, deep pipeline) without a stall.
 
-This experiment takes the position that
+This experiment takes the position that:
 \begin{itemize}
-    \item The total time to add and remove is relevant; an attribution of time spent adding vs.\ removing is not.
-          Any use case for which addition speed matters necessarily has removes paired with adds.
-		  For otherwise, the alleged usage would exhaust the amount of work expressable as a main-memory full of nodes within a few seconds.
-    \item A relevant breakdown ``by operation'' is, rather, one that considers the structural context of these requests.
+    \item The total time to add and remove is relevant \vs individual times for add and remove.
+          Adds without removes quickly fill memory;
+		  removes without adds are meaningless.
+    \item A relevant breakdown ``by operation'' is:
 		\begin{description}
 		\item[movement]
-          Is the add/remove order that of a stack, a queue, or something else?
+          Is the add/remove applied to a stack, queue, or something else?
 		\item[polarity]
-		  In which direction does the movement's action apply?  For a queue, do items flow from first to last or last to first?  For a stack, is the first-end or the last-end used for adding and removing?
+		  In which direction does the action apply?
+		  For a queue, do items flow from first to last or last to first?
+		  For a stack, is the first or last end used for adding and removing?
 		\item[accessor]
           Is an add/remove location given by a list head's ``first''/``last'', or by a reference to an individual element?
@@ -503,17 +586,12 @@
 \end{itemize}
 
-This experiment measures the mean duration of a list addition and removal.
+The experiment measures the mean duration of a sequence of additions and removals.
 Confidence bounds, on this mean, are discussed.
 The distribution of speeds experienced by an individual add-remove pair (tail latency) is not discussed.
-
 Space efficiency is shown only indirectly, by way of caches' impact on speed.
-
-%~MONITOR
-% If able to show cases with CFA doing better, reword.
-The goal is to show the \CFA library performing comparably to other intrusive libraries,
-in an experimental context sensitive enough to show also:
+The experiment is sensitive enough to show:
 \begin{itemize}
-    \item intrusive lists performing (majorly) differently than wrapped lists
-    \item a space of (minor) performance differences typical of existing intrusive lists
+    \item intrusive lists performing (majorly) differently than wrapped lists,
+    \item a space of (minor) performance differences among the intrusive lists.
 \end{itemize}
 
@@ -521,250 +599,288 @@
 \subsubsection{Experiment setup}
 
-The experiment defines a user's datatype and considers
-the speed of building, and tearing down, a list of $n$ instances of the user's type.
-
-The timings are taken with a fixed-duration method based on checks @clock()@.
-In a typical 5-sec run, the outer looping checks the clock about 200 times.
-A number of experimental rounds per clock check is precalculated to be appropriate to the value of $n$.
-
+The experiment driver defines an (intrusive) node type:
+\begin{cfa}
+struct Node {
+	int i, j, k;  // fields
+	// possible intrusive links
+};
+\end{cfa}
+and considers the speed of building and tearing down a list of $n$ instances of it.
+% A number of experimental rounds per clock check is precalculated to be appropriate to the value of $n$.
 \begin{cfa}
 // simplified harness: CFA implementation,
 // stack movement, insert-first polarity, head-mediated access
 size_t totalOpsDone = 0;
-dlist( item_t ) lst;
-item_t items[ n ];
+dlist( node_t ) lst;
+node_t nodes[ n ];						$\C{// preallocated list storage}$
 startTimer();
-while ( SHOULD_CONTINUE ) {
-	for ( i; n ) insert_first( lst, items[i] );
-	for ( i; n ) remove_first( lst );
+while ( CONTINUE ) {					$\C{// \(\approx\) 20 second duration}$
+	for ( i; n ) insert_first( lst, nodes[i] );  $\C{// build up}$
+	for ( i; n ) remove_first( lst );	$\C{// tear down}$
 	totalOpsDone += n;
 }
 stopTimer();
-reportedDuration = getTimerDuration() / totalOpsDone;
-\end{cfa}
-
-One experimental round is, first, a tight loop of inserting $n$ elements into a list, followed by another, to remove these $n$ elements.
-A counter is incremented by $n$ each round.
-When the whole experiment is done, the total elapsed time, divided by final value of the operation counter,
-is reported as the observed mean operation duration.
-In a scatterplot presentation, each dot would be one such reported mean duration.
-So, ``operation'' really means insert + remove + harness overhead.
-
-The harness overheads are held constant when comparing linked-list implementations.
-The remainder of the setup section discusses the choices that affected the harness overhead.
-
-An \emph{iterators' array} provides support for element-level operations on non-intrusive lists.
-As elaborated in Section \ref{toc:lst:issue:attach},
-wrapped-attachment lists use a distinct type (at a distinct memory location) to represent ``an item that's in the list.''
-Operations like insert-after and remove-here consume iterators.
-In the STL implementation, an iterator is a pointer to a \lstinline{std::_List_node}.
-For the STL case, the driver obtains an iterator value
-at the time of adding to the list, and stores the iterator in an array, for consumption by subsequent element-oriented operations.
-For intrusive-list cases, the driver stores the user object's address in the iterators' array.
-
+reportedDuration = getTimerDuration() / totalOpsDone;  $\C{// mean time per insert/remove}$
+\end{cfa}
+To reduce administrative overhead, the $n$ nodes for each experiment list are preallocated in an array (on the stack), which removes dynamic allocations for this storage.
+For intrusive lists, the nodes contain the intrusive links; for a wrapper list, a pointer to each node is stored in a dynamically-allocated internal node.
+Copying the node into wrapped lists would skew the results with administration costs.
+The list insertion/removal operations are repeated for a typical 20+ second duration.
+After each round, a counter is incremented by $n$ (for throughput).
+Time is measured outside the loop because a large $n$ can overrun the time duration before the @CONTINUE@ flag is tested.
+The loop duration is divided by the counter and this mean operation-time is reported.
+In a scatter-plot, each dot is one such mean, covering insert + remove + harness overhead.
+The harness overhead is constant when comparing linked-list implementations and kept as small as possible.
+% The remainder of the setup section discusses the choices that affected the harness overhead.
+
+To test list operations, the experiment performs the inserts/removes in different patterns, \eg insert and remove from front, insert from front and remove from back, random insert and remove, \etc.
+Unfortunately, the @std::list@ does \emph{not} support direct insert/remove from a node without an iterator, \ie no @erase( node )@, even though the list is doubly-linked.
+To eliminate this additional cost in the harness, a trick is used for random insertions without replacement.
+The @i@ fields in the nodes are initialized to @0..n-1@ and then shuffled among the nodes; each @i@ value is then an indirection to a node for insertion, so the nodes are inserted, and hence removed, in the same random order.
+$\label{p:Shuffle}$
 \begin{c++}
-// further simplified harness (bookkeeping elided): STL implementation,
-// stack movement, insert-first polarity, element-based remove access
-list< item_t * > lst;
-item_t items[ n ];
-while ( SHOULD_CONTINUE ) {
-	@list< item_t * >::iterator iters[ n ];@
-	for ( int i = 0; i < n; i += 1 ) {
-		lst.push_front( & items[i] );
-		@iters[i]@ = lst.begin();
+	for ( i; n ) @nodes[i].i = i@;		$\C{// indirection}$
+	shuffle( nodes, n );				$\C{// random shuffle indirects within nodes}$
+
+	while ( CONTINUE ) {
+		for ( i; n ) insert_first( lst, nodes[ @nodes[i].i@ ] ); $\C{// build up}$
+		for ( i; n ) pass( &remove_first( lst )	);	$\C{// tear down}$
+		totalOpsDone += n;
 	}
-	for ( int i = 0; i < n; i += 1 ) {
-		lst.erase( @iters[i]@ );
-	}
-}
 \end{c++}
-
-%~MONITOR
-% If running insert-random scenarios, revise the assessment
-
-A \emph{shuffling array} helps control the memory layout of user items.
-The control required is when choosing a next item to insert.
-The user items are allocated in a contiguous array.
-Without shuffling, the driver's insert phase visits these items in order, producing a list whose adjavency links hop uniform strides.
-With shuffling active, the driver's insert phase visits only the shuffling array in order,
-which applies pseudo-random indirection to the selection of a next-to-insert element from the user-item array.
-The result is a list whose links travel randomly far.
-
-\begin{cfa}
-// harness (bookkeeping and iterators elided): CFA implementation,
-// stack movement, insert-first polarity, head-mediated access
-dlist( item_t ) lst;
-item_t items[ n ];
-size_t insert_ord[ n ];  // elided: populate with shuffled [0,n)
-while ( SHOULD_CONTINUE ) {
-	for ( i; n ) insert_first( lst, items[ @insert_ord[@ i @]@ ] );
-	for ( i; n ) remove_first( lst );
-}
-\end{cfa}
-
-\emph{Interleaving} allows for movements other than pure stack and queue.
-Note that the earlier example of using the iterators' array is still a pure stack: the item selected for @erase(...)@ is always the first.
-Including a less predictable movement is important because real applications that justify doubly linked lists use them.
-Freedom to remove from arbitrary places (and to insert under more relaxed assumptions) is the characteristic function of a doubly linked list.
-A queue with drop-out is an example of such a movement.
-A list implementation can show unrepresentative speed under a simple movement, for example, by enjoying unchallenged ``Is first element?'' branch predictions.
-
-Interleaving brings ``at middle of list'' cases into a stream of add or remove invocations, which would otherwise be exclusively ``at end''.
-A chosen split, like half middle and half end, populates a boolean array, which is then shuffled.
-These booleans then direct the action to end-\vs-middle.
-
-\begin{cfa}
-// harness (bookkeeping and shuffling elided): CFA implementation,
-// stack movement, insert-first polarity, interleaved element-based remove access
-dlist( item_t ) lst;
-item_t items[ n ];
-@bool interl[ n ];@  // elided: populate with weighted, shuffled [0,1]
-while ( SHOULD_CONTINUE ) {
-	item_t * iters[ n ];
-	for ( i; n ) {
-		insert_first( items[i] );
-		iters[i] = & items[i];
-	}
-	@item_t ** crsr[ 2 ]@ = { // two cursors into iters
-		& iters[ @0@ ], // at stack-insert-first's removal end
-		& iters[ @n / interl_frac@ ]  // in middle
-	};
-	for ( i; n ) {
-		item *** crsr_use = & crsr[ interl[ i ] ]@;
-		remove( *** crsr_use );  // removing from either middle or end
-		*crsr_use += 1;  // that item is done
-	}
-	assert( crsr[0] == & iters[ @n / interl_frac@ ] ); // through second's start
-	assert( crsr[1] == & iters[ @n@ ] );  // did the rest
-}
-\end{cfa}
-
-By using the pair of cursors, the harness avoids branches, which could incur prediction stall times themselves, or prime a branch in the SUT.
-This harness avoids telling the hardware what the SUT is about to do.
-
-These experiments are single threaded.  They run on a PC with a 64-bit eight-core AMD FX-8370E, with ``taskset'' pinning to core \#6.  The machine has 16 GB of RAM and 8 MB of last-level cache.
+This approach works across intrusive and wrapped lists.
+
+% \emph{Interleaving} allows for movements other than pure stack and queue.
+% Note that the earlier example of using the iterators' array is still a pure stack: the item selected for @erase(...)@ is always the first.
+% Including a less predictable movement is important because real applications that justify doubly linked lists use them.
+% Freedom to remove from arbitrary places (and to insert under more relaxed assumptions) is the characteristic function of a doubly linked list.
+% A queue with drop-out is an example of such a movement.
+% A list implementation can show unrepresentative speed under a simple movement, for example, by enjoying unchallenged ``Is first element?'' branch predictions.
+
+% Interleaving brings ``at middle of list'' cases into a stream of add or remove invocations, which would otherwise be exclusively ``at end''.
+% A chosen split, like half middle and half end, populates a boolean array, which is then shuffled.
+% These booleans then direct the action to end-\vs-middle.
+% 
+% \begin{cfa}
+% // harness (bookkeeping and shuffling elided): CFA implementation,
+% // stack movement, insert-first polarity, interleaved element-based remove access
+% dlist( item_t ) lst;
+% item_t items[ n ];
+% @bool interl[ n ];@  // elided: populate with weighted, shuffled [0,1]
+% while ( CONTINUE ) {
+% 	item_t * iters[ n ];
+% 	for ( i; n ) {
+% 		insert_first( items[i] );
+% 		iters[i] = & items[i];
+% 	}
+% 	@item_t ** crsr[ 2 ]@ = { // two cursors into iters
+% 		& iters[ @0@ ], // at stack-insert-first's removal end
+% 		& iters[ @n / interl_frac@ ]  // in middle
+% 	};
+% 	for ( i; n ) {
+% 		item *** crsr_use = & crsr[ interl[ i ] ]@;
+% 		remove( *** crsr_use );  // removing from either middle or end
+% 		*crsr_use += 1;  // that item is done
+% 	}
+% 	assert( crsr[0] == & iters[ @n / interl_frac@ ] ); // through second's start
+% 	assert( crsr[1] == & iters[ @n@ ] );  // did the rest
+% }
+% \end{cfa}
+% 
+% By using the pair of cursors, the harness avoids branches, which could incur prediction stall times themselves, or prime a branch in the SUT.
+% This harness avoids telling the hardware what the SUT is about to do.
 
 The comparator linked-list implementations are:
 \begin{description}
-\item[lq-list]  The @list@ type of LQ from glibc of GCC-11.
+\item[std::list]  The @list@ type of g++.
+\item[lq-list]  The @list@ type of LQ from gcc's glibc.
 \item[lq-tailq] The @tailq@ type of the same.
-\item[upp-upp]  uC++ provided @uSequence@
+\item[upp-upp]  \uC's provided @uSequence@.
+\item[cfa-cfa]  \CFA's @dlist@.
 \end{description}
 
 
+\subsection{Experimental Environment}
+\label{s:ExperimentalEnvironment}
+
+The performance experiments are run on:
+\begin{description}[leftmargin=*,topsep=3pt,itemsep=2pt,parsep=0pt]
+%\item[PC]
+%with a 64-bit eight-core AMD FX-8370E, with ``taskset'' pinning to core \#6.  The machine has 16 GB of RAM and 8 MB of last-level cache.
+%\item[ARM]
+%Gigabyte E252-P31 128-core socket 3.0 GHz, WO memory model
+\item[AMD]
+Supermicro AS-1125HS-TNR EPYC 9754 128-core socket, hyper-threading $\times$ 2 sockets (512 processing units) 2.25 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 1024KB L2, 16MB L3, where each L3 cache covers 16 processors.
+%\item[Intel]
+%Supermicro SYS-121H-TNR Xeon Gold 6530 32--core, hyper-threading $\times$ 2 sockets (128 processing units) 2.1 GHz, TSO memory model
+\end{description}
+The experiments are single threaded and pinned to a single core to prevent any OS migration, which might cause cache or NUMA effects that perturb the experiment.
+
+The compiler is gcc/g++-14.2.0 running on the Linux v6.8.0-52-generic OS.
+Switching between the default memory allocators @glibc@ and @llheap@ is done with @LD_PRELOAD@.
+To prevent eliding certain code patterns, crucial parts of a test are wrapped by the function @pass@:
+\begin{cfa}
+// prevent eliding, cheaper than volatile
+static inline void * pass( void * v ) {  __asm__  __volatile__( "" : "+r"(v) );  return v;  }
+...
+pass( &remove_first( lst ) );			// wrap call to prevent elision, insert cannot be elided now
+\end{cfa}
+The call to @pass@ can prevent a small number of compiler optimizations, but this cost is the same for all lists.
+
+
 \subsubsection{Result: Coarse comparison of styles}
 
-This comparison establishes how an intrusive list performs, compared with a wrapped-reference list.
-It also establishes the context within which it is meaningful to compare one intrusive list to another.
-
-%These goals notwithstanding, the effect of the host machine's memory hierarchy is more significant here than linked-list implementation.
+This comparison establishes how an intrusive list performs compared with a wrapped-reference list.
+\VRef[Figure]{fig:plot-list-zoomout} presents throughput at various list lengths for a linear and random (shuffled) insert/remove test.
+Other kinds of scans were made, but the results are similar in many cases, so it is sufficient to discuss these two scans, representing different ends of the access spectrum.
+In the graphs, all four intrusive lists are plotted with the same symbol;
+these symbols clump on top of each other, showing the performance difference among the intrusive lists is small in comparison to the wrapped list \see{\VRef{s:ComparingIntrusiveImplementations} for details among intrusive lists}.
+
+The list lengths start at 10 due to the short insert/remove times of 2--4 ns for intrusive lists \vs 15--20 ns for STL's wrapped-reference list.
+For very short lists, like length 4, the experiment time of 4 $\times$ 2.5 ns and the experiment overhead (loops) of 2--4 ns result in an artificial administrative bump at the start of the graph, having nothing to do with the insert/remove times.
+As the list size grows, the administrative overhead for intrusive lists quickly disappears.
+
+\begin{figure}
+  \centering
+  \subfloat[Linear List Nodes]{\label{f:Linear}
+    \includegraphics{plot-list-zoomout-noshuf.pdf}
+  } % subfigure
+  \\
+  \subfloat[Shuffled List Nodes]{\label{f:Shuffled}
+    \includegraphics{plot-list-zoomout-shuf.pdf}
+  } % subfigure
+  \caption{Insert/remove duration \vs list length.
+  Lengths go as large as possible without error.
+  One example operation is shown: stack movement, insert-first polarity and head-mediated access.}
+  \label{fig:plot-list-zoomout}
+\end{figure}
+
+The large difference in performance between intrusive and wrapped-reference lists results from the dynamic allocation for the wrapped nodes.
+In effect, this experiment largely measures the cost of malloc/free rather than insert/remove, and is affected by the allocator's layout of memory.
+Fundamentally, insert/remove for a doubly linked-list has a basic cost, which is seen by the similar results for the intrusive lists.
+Hence, the costs for a wrapped-reference list are: allocating/deallocating a wrapped node, copying an external-node pointer into the wrapped node for insertion, and linking the wrapped node to/from the list;
+the dynamic allocation dominates these costs.
+For example, the experiment was run with both glibc and llheap memory allocators, where performance from llheap reduced the cost from 20 to 16 ns, which is still far from the 2--4 ns for intrusive lists.
+Unfortunately, there is no way to tease apart the linking and allocation costs for wrapped lists, as there is no way to preallocate the list nodes without writing a mini-allocator to manage the storage.
+Hence, dynamic allocation cost is fundamental for wrapped lists.
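These itemized costs can be made concrete with a hypothetical C sketch (not the benchmark code): both styles perform the same link writes, but the wrapped style adds a per-insert allocation and pointer copy.

```c
// Hypothetical sketch contrasting the itemized costs: an intrusive insert only
// rewires links embedded in the item, while a wrapped insert must also
// allocate a node and copy the external-node pointer into it.
#include <assert.h>
#include <stdlib.h>

struct item { struct item * next, * prev; int payload; };        // intrusive links
struct wrap { struct wrap * next, * prev; struct item * elem; }; // wrapped node

// intrusive insert-first: no allocation, just pointer writes
static void intr_insert_first( struct item ** head, struct item * n ) {
	n->prev = NULL;
	n->next = *head;
	if ( *head ) (*head)->prev = n;
	*head = n;
}

// wrapped insert-first: same link writes, plus malloc and the elem copy
static struct wrap * wrap_insert_first( struct wrap ** head, struct item * e ) {
	struct wrap * n = malloc( sizeof( struct wrap ) );  // per-insert allocation
	n->elem = e;                                        // external-node pointer copy
	n->prev = NULL;
	n->next = *head;
	if ( *head ) (*head)->prev = n;
	*head = n;
	return n;
}
```

The link writes are identical in the two paths; only the allocation and pointer copy differ, consistent with allocation dominating the wrapped cost.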
+
+In detail, \VRef[Figure]{f:Linear} shows linear insertion of all the nodes and then linear removal, both in the same direction.
+For intrusive lists, the nodes are adjacent because they are preallocated in an array.
+For wrapped lists, the wrapped nodes are still adjacent because the memory allocator happens to use bump allocation for the small fixed-sized wrapped nodes.
+As a result, these memory layouts provide high spatial and temporal locality for both kinds of lists during the linear array traversal.
+With address look-ahead, the hardware does an excellent job of managing the multi-level cache.
+Hence, performance is largely constant for both kinds of lists, until cache and NUMA boundaries are crossed for longer lists and the costs increase consistently for both kinds of lists.
+
+In detail, \VRef[Figure]{f:Shuffled} shows shuffled insertion of all the nodes and then linear removal, both in the same direction.
+As for linear, there are issues with the wrapped list and memory allocation.
+For intrusive lists, it is possible to link the nodes randomly, so consecutive nodes in memory seldom point at adjacent nodes.
+For wrapped lists, the placement of wrapped nodes is dictated by the memory allocator, and results in memory layout virtually the same as for linear, \ie the wrapped nodes are laid out consecutively in memory, but each wrapped node points at a randomly located external node.
+As seen on \VPageref{p:Shuffle}, the random access reads data from external nodes, which are located in random order.
+So while the wrapped nodes are accessed linearly, the external nodes are touched randomly for both kinds of lists, resulting in similar cache events.
+The slowdown of shuffled occurs as the experiment's length grows from last-level cache into main memory.
+% Insert and remove operations act on both sides of a link.
+%Both a next unlisted item to insert (found in the items' array, seen through the shuffling array), and a next listed item to remove (found by traversing list links), introduce a new user-item location.
+Each time a next item is processed, the memory access is a hop to a randomly far address.
+The target is unavailable in the cache and a slowdown results.
+Note, the external-data no-touch assumption is often unrealistic: decisions like ``Should I remove this item?'' need to look at the item.
+Therefore, under realistic assumptions, both intrusive and wrapped-reference lists suffer similar caching issues for very large lists.
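The shuffled intrusive layout described above can be sketched in hypothetical C (not the harness code): preallocated, memory-adjacent nodes are inserted through a shuffled index array, decorrelating list order from memory order.

```c
// Hypothetical sketch: link preallocated, memory-adjacent intrusive nodes in a
// random order, so consecutive nodes in memory seldom point at adjacent nodes.
#include <assert.h>
#include <stdlib.h>

struct item { struct item * next, * prev; int payload; };

static void insert_first( struct item ** head, struct item * n ) {
	n->prev = NULL;
	n->next = *head;
	if ( *head ) (*head)->prev = n;
	*head = n;
}

// Fisher-Yates shuffle of the index array idx[0..n-1]
static void shuffle( size_t idx[], size_t n ) {
	for ( size_t i = n - 1; i > 0; i -= 1 ) {
		size_t j = (size_t)rand() % ( i + 1 );
		size_t t = idx[i];  idx[i] = idx[j];  idx[j] = t;
	}
}

// insert the array's nodes into the list in shuffled order
static void link_shuffled( struct item ** head, struct item items[], size_t n ) {
	size_t * idx = malloc( n * sizeof( size_t ) );
	for ( size_t i = 0; i < n; i += 1 ) idx[i] = i;
	shuffle( idx, n );
	for ( size_t i = 0; i < n; i += 1 ) insert_first( head, &items[ idx[i] ] );
	free( idx );
}
```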
+% In an odd scenario where this intuition is incorrect, and where furthermore the program's total use of the memory allocator is sufficiently limited to yield approximately adjacent allocations for successive list insertions, a non-intrusive list may be preferred for lists of approximately the cache's size.
+
+The takeaway from this experiment is that wrapped-list operations are expensive because memory allocation is expensive at this fine-grained level of execution.
+Hence, when possible, using intrusive links can produce a significant performance gain, even if nodes must be dynamically allocated, because the wrapping allocations are eliminated.
+Even when space is a consideration, intrusive links may not use any more storage if a node spends most of its lifetime on a list.
+Unfortunately, many programmers are unaware of intrusive lists for dynamically-sized data-structures.
+
+% Note, linear access may not be realistic unless dynamic size changes may occur;
+% if the nodes are known to be adjacent, use an array.
+
+% In a wrapped-reference list, list nodes are allocated separately from the items put into the list.
+% Intrusive beats wrapped at the smaller lengths, and when shuffling is avoided, because intrusive avoids dynamic memory allocation for list nodes.
+
+% STL's performance is not affected by element order in memory.
+%The field of intrusive lists begins with length-1 operations costing around 10 ns and enjoys a ``sweet spot'' in lengths 10--100 of 5--7-ns operations.
+% This much is also unaffected by element order.
+% Beyond this point, shuffled-element list performance worsens drastically, losing to STL beyond about half a million elements, and never particularly leveling off.
+% In the same range, an unshuffled list sees some degradation, but holds onto a 1--2 $\times$ speedup over STL.
+
+% The apparent intrusive ``sweet spot,'' particularly its better-than-length-1 speed, is not because of list operations truly running faster.
+% Rather, the worsening as length decreases reflects the per-operation share of harness overheads incurred at the outer-loop level.
+% Disabling the harness's ability to drive interleaving, even though the current scenario is using a ``never work in middle'' interleave, made this rise disappear.
+% Subsequent analyses use length-controlled relative performance when comparing intrusive implementations, making this curiosity disappear.
+
+% The remaining big-swing comparison points say more about a computer's memory hierarchy than about linked lists.
+% The tests in this chapter are only inserting and removing.
+% They are not operating on any user payload data that is being listed.
+% The drastic differences at large list lengths reflect differences in link-field storage density and in correlation of link-field order to element order.
+% These differences are inherent to the two list models.
+
+% A wrapped-reference list's separate nodes are allocated right beside each other in this experiment, because no other memory allocation action is happening.
+% As a result, the interlinked nodes of the STL list are generally referencing their immediate neighbours.
+% This pattern occurs regardless of user-item shuffling because this test's ``use'' of the user-items' array is limited to storing element addresses.  
+% This experiment, driving an STL list, is simply not touching the memory that holds the user data.
+% Because the interlinked nodes, being the only touched memory, are generally adjacent, this case too has high memory locality and stays fast.
+
+% But the comparison of unshuffled intrusive with wrapped-reference gives the performance of these two styles, with their the common impediment of overfilling the cache removed.
+% Intrusive consistently beats wrapped-reference by about 20 ns, at all sizes.  
+% This difference is appreciable below list length 0.5 M, and enormous below 10 K.
+
+
+\section{Result: Comparing intrusive implementations}
+\label{s:ComparingIntrusiveImplementations}
+
+The preceding result shows that all the intrusive implementations examined have noteworthy performance compared to wrapped lists.
+This analysis picks the list-length region 10--150 and zooms in to differentiate among the different intrusive implementations.
+The data is selected from the start of \VRef[Figure]{f:Linear}, but the start of \VRef[Figure]{f:Shuffled} would be largely the same.
 
 \begin{figure}
 \centering
-  \begin{tabular}{c}
-  \includegraphics{plot-list-zoomout-shuf.pdf} \\
-  (a) \\
-  \includegraphics{plot-list-zoomout-noshuf.pdf} \\
-  (b) \\
-  \end{tabular}
-  \caption{Operation duration \vs list length at full spectrum of list lengths.  One example operation is shown: stack movement, insert-first polarity and head-mediated access.  Lengths go as large as completes without error.  Version (a) uses shuffled items, while version (b) links items with their physical neighbours.}
-  \label{fig:plot-list-zoomout}
-\end{figure}
-
-\VRef[Figure]{fig:plot-list-zoomout} presents the speed measures at various list lengths.
-STL's wrapped-reference list begins with operations on a length-1 list costing around 30 ns.
-This time grows modetly as list length increases, apart from more drastic worsening at the largest lengths.
-STL's performance is not affected by element order in memory.
-The field of intrusive lists begins with length-1 operations costing around 10 ns and enjoys a ``sweet spot'' in lengths 10--100 of 5--7-ns operations.
-This much is also unaffected by element order.
-Beyond this point, shuffled-element list performance worsens drastically, losing to STL beyond about half a million elements, and never particularly leveling off.
-In the same range, an unshuffled list sees some degradation, but holds onto a 1--2 $\times$ speedup over STL.
-
-The apparent intrusive ``sweet spot,'' particularly its better-than-length-1 speed, is not because of list operations truly running faster.
-Rather, the worsening as length decreases reflects the per-operation share of harness overheads incurred at the outer-loop level.
-Disabling the harness's ability to drive interleaving, even though the current scenario is using a ``never work in middle'' interleave, made this rise disappear.
-Subsequent analyses use length-controlled relative performance when comparing intrusive implementations, making this curiosity disappear.
-
-In a wrapped-reference list, list nodes are allocated separately from the items put into the list.
-Intrusive beats wrapped at the smaller lengths, and when shuffling is avoided, because intrusive avoids dynamic memory allocation for list nodes.
-
-The remaining big-swing comparison points say more about a computer's memory hierarchy than about linked lists.
-The tests in this chapter are only inserting and removing.
-They are not operating on any user payload data that is being listed.
-The drastic differences at large list lengths reflect differences in link-field storage density and in correlation of link-field order to element order.
-These differences are inherent to the two list models.
-
-The slowdown of shuffled intrusive occurs as the experiment's length grows from last-level cache, into main memory.
-Insert and remove operations act on both sides of a link.
-Both a next unlisted item to insert (found in the items' array, seen through the shuffling array), and a next listed item to remove (found by traversing list links), introduce a new user-item location.
-Each time a next item is processed, the memory access is a hop to a randomly far address.
-The target is not available in cache and a slowdown results.
-
-With the unshuffled intrusive list, each link connects to an adjacent location.  So, this case has high memory locality and stays fast.  But the unshuffled assumption is simply not realistic: if you know items are adjacent, you don't need a linked list.
-
-A wrapped-reference list's separate nodes are allocated right beside each other in this experiment, because no other memory allocation action is happening.
-As a result, the interlinked nodes of the STL list are generally referenceing their immediate neighbours.
-This pattern occurs regardless of user-item suffling because this test's ``use'' of the user-items' array is limited to storing element addresses.  
-This experiment, driving an STL list, is simply not touching the memory that holds the user data.
-Because the interlinked nodes, being the only touched memory, are generally adjacent, this case too has high memory locality and stays fast.
-But the user-data no-touch assumption is often unrealistic: decisions like,``Should I remove this item?'' need to look at the item.
-In an odd scenario where this intuition is incorrect, and where furthermore the program's total use of the memory allocator is sufficiently limited to yield approximately adjacent allocations for successive list insertions, a nonintrusive list may be preferred for lists of approximately the chache's size.
-
-Therefore, under clearly typical situational assumptions, both intrusive and wrapped-reference lists will suffer similarly from a large list overfilling the memory cache, experiencing degradation like shuffled intrusive shows here.
-
-But the comparison of unshuffled intrusive with wrapped-reference gives the peformance of these two styles, with their the common impediment of overfilling the cache removed.
-Intrusive consistently beats wrapped-reference by about 20 ns, at all sizes.  
-This difference is appreciable below list length 0.5 M, and enormous below 10 K.
-
-
-\section{Result: Comparing intrusive implementations}
-
-The preceding result shows that intrusive implementations have noteworthy performance differences below 150 nodes.
-This analysis zooms in on this area and identifies the participants.
-
-\begin{figure}
-\centering
-  \begin{tabular}{c}
-  \includegraphics{plot-list-zoomin-abs.pdf} \\
-  (a) \\
-  \includegraphics{plot-list-zoomin-rel.pdf} \\
-  (b) \\
-  \end{tabular}
+  \subfloat[Absolute Time]{\label{f:AbsoluteTime}
+  \includegraphics{plot-list-zoomin-abs.pdf}
+  } % subfigure
+  \\
+  \subfloat[Relative Time]{\label{f:RelativeTime}
+  \includegraphics{plot-list-zoomin-rel.pdf}
+  } % subfigure
   \caption{Operation duration \vs list length at small-medium lengths.  One example operation is shown: stack movement, insert-first polarity and head-mediated access.  (a) has absolute times.  (b) has times relative to those of LQ-\lstinline{tailq}.}
   \label{fig:plot-list-zoomin}
 \end{figure}
 
-In \VRef{fig:plot-list-zoomin} part (a) shows exactly this zoom-in.
-The same scenario as the coarse comparison is used: a stack, with insertions and removals happening at the end called ``first,'' ``head'' or ``front,'' and all changes occuring through a head-provided insert/remove operation.
+\VRef[Figure]{fig:plot-list-zoomin} shows the zone from 10--150 blown up, both in absolute and relative time.
+% The same scenario as the coarse comparison is used: a stack, with insertions and removals happening at the end called ``first,'' ``head'' or ``front,'' and all changes occurring through a head-provided insert/remove operation.
 The error bars show fastest and slowest time seen on five trials, and the central point is the mean of the remaining three trials.
-For readability, the frameworks are slightly staggered in the horizontal, but all trials near a given size were run at the same size.
-
-For this particular operation, uC++ fares the worst, followed by \CFA, then LQ's @tailq@.
-Its @list@ does the best at smaller lengths but loses its edge above a dozen elements.
-
-Moving toward being able to consider several scenarios, \VRef{fig:plot-list-zoomin} part (b) shows the same result, adjusted to treat @tailq@ as a benchmark, and expressing all the results relative to it.
+For readability, the points are slightly staggered at a given horizontal value, where they might otherwise appear on top of each other.
+
+While preparing experiment results, I first tested on my old office PC, AMD FX-8370E Eight-Core, before switching to the large new server for final testing.
+For this experiment, the results flipped in my favour when running on the server.
+New CPU architectures are now amazingly good at branch prediction and micro-parallelism in the pipelines.
+Specifically, on the PC, my \CFA and companion \uC lists are slower than lq-tailq and lq-list by 10\% to 20\%.
+On the server, the \CFA and \uC lists can be faster by up to 100\%.
+Overall, LQ-tailq does the best at short lengths but loses out above a dozen elements.
+
+\VRef[Figure]{f:RelativeTime} shows the percentage difference by treating @tailq@ as the control benchmark, and showing the other list results relative to it.
+This change does not affect the who-wins statements; it just removes the ``sweet spot'' bend that the earlier discussion dismissed as incidental.
-Runs faster than @tailq@'s are below the zero and slower runs are above; @tailq@'s mean is always zero by definition, but its error bars, representing a single scenario's re-run stability, are still meaningful.
+Runs faster than @tailq@'s are below the zero and slower runs are above;
+@tailq@'s mean is always zero by definition, but its error bars, representing a single scenario's re-run stability, are still meaningful.
 With this bend straightened out, aggregating across lengths is feasible.
 
 \begin{figure}
 \centering
-  \begin{tabular}{c}
-  \includegraphics{plot-list-cmp-exout.pdf} \\
-  (a) \\
-  \includegraphics{plot-list-cmp-survey.pdf} \\
-  (b) \\
-  \end{tabular}
+  \subfloat[Supersets]{\label{f:Superset}
+  \includegraphics{plot-list-cmp-exout.pdf}
+  } % subfigure
+  \\
+  \subfloat[1st Level Slice]{\label{f:1stLevelSlice}
+  \includegraphics{plot-list-cmp-survey.pdf}
+  } % subfigure
   \caption{Operation duration ranges across operational scenarios.  (a) has the supersets of the running example operation.  (b) has the first-level slices of the full space of operations.}
   \label{fig:plot-list-cmp-overall}
 \end{figure}
 
-\VRef{fig:plot-list-cmp-overall} introduces the resulting format.
+\VRef[Figure]{fig:plot-list-cmp-overall} introduces alternative views of the data.
 Part (a)'s first column summarizes all the data of \VRef{fig:plot-list-zoomin}-(b).
 Its x-axis label, ``stack/insfirst/allhead,'' names the concrete scenario that has been discussed until now.
 Moving across the columns, the next three each stretch to include more scenarios on each of the operation dimensions, one at a time.
 The second column considers the scenarios $\{\mathrm{stack}\} \times \{\mathrm{insfirst}\} \times \{\mathrm{allhead}, \mathrm{inselem}, \mathrm{remelem}\}$,
-while the third stretches polarity and the fourth streches accessor.
+while the third stretches polarity and the fourth stretches accessor.
 The next three columns each stretch two scenario dimensions and the last column stretches all three.
 The \CFA bar in the last column is summarizing 840 test-program runs: 14 list lengths, 2 movements, 2 polarities, 3 accessors and 5 repetitions.
@@ -778,5 +894,5 @@
 
 The chosen benchmark of LQ-@tailq@ is not shown in this format because it would be trivial here.
-With iter-scenario differences dominating the bar size, and @tailq@'s mean performance defined to be zero in all scenarios, a @tailq@ bar on this plot would only show @tailq@'s re-run stabiity, which is of no comparison value.
+With inter-scenario differences dominating the bar size, and @tailq@'s mean performance defined to be zero in all scenarios, a @tailq@ bar on this plot would only show @tailq@'s re-run stability, which is of no comparison value.
 
 The LQ-@list@ implementation does not support all scenarios, only stack movement with insert-first polarity.
@@ -805,7 +921,11 @@
 % \end{figure}
 
+
 \section{Result: CFA cost attribution}
 
-This comparison loosely itemizes the reasons that the \CFA implementation runs 15--20\% slower than LQ.  Each reason provides for safer programming.  For each reaon, a version of the \CFA list was measured that forgoes its safety and regains some performance.  These potential sacrifices are:
+This comparison loosely itemizes the reasons that the \CFA implementation runs 15--20\% slower than LQ.
+Each reason provides for safer programming.
+For each reason, a version of the \CFA list was measured that forgoes its safety and regains some performance.
+These potential sacrifices are:
 \newcommand{\mandhead}{\emph{mand-head}}
 \newcommand{\nolisted}{\emph{no-listed}}
@@ -814,5 +934,5 @@
 \item[mand(atory)-head] Removing support for headless lists.
   A specific explanation of why headless support causes a slowdown is not offered.
-  But it is reasonable for a cost to result from making one pieceof code handle multiple cases; the subset of the \CFA list API that applies to headless lists shares its implementation with headed lists.
+  But it is reasonable for a cost to result from making one piece of code handle multiple cases; the subset of the \CFA list API that applies to headless lists shares its implementation with headed lists.
   In the \mandhead case, disabling the feature in \CFA means using an older version of the implementation, from before headless support was added.
   In the pre-headless library, trying to form a headless list (instructing, ``Insert loose element B after loose element A,'') is a checked runtime error.
@@ -822,5 +942,5 @@
 	For \lstinline{list}, this usage causes an ``uncaught'' runtime crash.}.
 \item[no-listed] Removing support for the @is_listed@ API query.
-  Along with it goes error checking such as ``When instering an element, it must not already be listed, \ie be referred to from somewhere else.''
+  Along with it goes error checking such as ``When inserting an element, it must not already be listed, \ie be referred to from somewhere else.''
   These abilities have a cost because, in order to support them, a listed element that is being removed must be written to, to record its change in state.
   In \CFA's representation, this cost is two pointer writes.
@@ -835,5 +955,5 @@
   Without this termination marking, repeated requests for a next valid item will always provide a positive response; when it should be negative, the indicated next element is garbage data at an address unlikely to trigger a memory error.
   LQ has a well-terminating iteration for listed elements.
-  In the \noiter case, the slowdown is not inherent; it represents a \CFA optimization opporunity.
+  In the \noiter case, the slowdown is not inherent; it represents a \CFA optimization opportunity.
 \end{description}
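The state-recording cost behind \nolisted can be shown with a hypothetical C sketch (not the \CFA implementation): with a circular sentinel head, a listed element always has non-NULL links, so removal must null them, the two pointer writes noted above, to keep an @is_listed@-style query accurate.

```c
// Hypothetical sketch of no-listed's cost: a circular list with a sentinel
// head, where an unlisted element is marked by NULL links.
#include <assert.h>
#include <stddef.h>

struct item { struct item * next, * prev; int payload; };

static void init_head( struct item * h ) { h->next = h->prev = h; } // empty list

// a listed element always has non-NULL links
static int is_listed( const struct item * n ) { return n->next != NULL; }

static void insert_first( struct item * h, struct item * n ) {
	n->next = h->next;  n->prev = h;
	h->next->prev = n;  h->next = n;
}

static void remove_item( struct item * n ) {
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = NULL;  // the two extra pointer writes recording "unlisted"
}
```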
 \MLB{Ensure benefits are discussed earlier and cross-reference}  % an LQ programmer must know not to ask, ``Who's next?'' about an unlisted element; an LQ programmer cannot write assertions about an item being listed; LQ requiring a head parameter is an opportunity for the user to provide inconsistent data
@@ -871,5 +991,5 @@
 For element removals, \attribParity is the heavy hitter, with \noiter contributing modestly.
 
-The couterproductive shift outside of element removals is likely due to optimization done in the \attribFull version after implementing headless support, \ie not present in the \mandhead version.
+The counterproductive shift outside of element removals is likely due to optimization done in the \attribFull version after implementing headless support, \ie not present in the \mandhead version.
 This work streamlined both head-based operations (head-based removal being half the work of the element-insertion test).
 This improvement could be ported to a \mandhead-style implementation, which would bring down the \attribParity time in these cases.
@@ -880,5 +1000,5 @@
 
 \VRef[Figure]{fig:plot-list-cfa-attrib}-(b) addresses element removal, which is the overall \CFA slow spot and has a peculiar shape in the (a) analysis.
-Here, the \attribParity sacrifice bundle is broken out into its two consituents.
+Here, the \attribParity sacrifice bundle is broken out into its two constituents.
 The result is the same regardless of the operation.
 All three individual sacrifices contribute noteworthy improvements (\nolisted slightly less).
@@ -891,5 +1011,5 @@
 The \noiter speed improvement would bring \CFA to +5\% of LQ overall, and from high twenties to high teens, in the worst case of element removal.
 
-Utimately, this analysis provides options for a future effort that needs to get the most speed out of the \CFA list.
+Ultimately, this analysis provides options for a future effort that needs to get the most speed out of the \CFA list.
 
 
