% source: doc/theses/mike_brooks_MMath/list.tex @ 5faa3a5
% Last change on this file since 5faa3a5 was 5faa3a5, checked in by Peter A. Buhr, 14 hours ago
% spelling corrections, and final proofreading

\chapter{Linked List}
\label{ch:list}

This chapter presents my work on designing and building a linked-list library for \CFA.
Due to time limitations and the needs expressed by the \CFA runtime developers, I focussed on providing a doubly-linked list, and its bidirectional iterators for traversal.
Simpler data-structures, like stack and queue, can be built from the doubly-linked mechanism with only a slight storage/performance cost because of the unused link field.
Reducing to data-structures with a single link follows directly from the more complex double links and their iterators.


\section{Plan-9 Inheritance}
\label{s:Plan9Inheritance}

This chapter uses a form of inheritance from the Plan-9 C dialect~\cite[\S~3.3]{Thompson90new}, which is supported by @gcc@ and @clang@ using @-fplan9-extensions@.
\CFA has its own variation of the Plan-9 mechanism, where the nested type is denoted using @inline@.
\begin{cfa}
union U { int x; double y; char z; };
struct W { int i; double j; char k; };
struct S {
	@inline@ struct W;		$\C{// extended Plan-9 inheritance}$
	unsigned int tag;
	@inline@ U;				$\C{// extended Plan-9 inheritance}$
} s;
\end{cfa}
Inline inheritance is containment, where the inlined field is unnamed and the type's internal fields are hoisted into the containing structure.
Hence, the field names must be unique, unlike \CC nested types, but any inlined type-names are at a nested scope level, unlike aggregate nesting in C.
Note, the position of the containment is normally unimportant, unless there is some form of memory or @union@ overlay.
Finally, the inheritance declaration of @U@ is not prefixed with @union@.
Like \CC, \CFA allows optional prefixing of type names with their kind, \eg @struct@, @union@, and @enum@, unless there is ambiguity with variable names in the same scope.

\VRef[Figure]{f:Plan9Polymorphism} shows the key polymorphic feature of Plan-9 inheritance: implicit conversion of values and pointers for nested types.
In the example, there are implicit conversions from @S@ to @U@ and @S@ to @W@, extracting the appropriate value or pointer for the substructure.
\VRef[Figure]{f:DiamondInheritancePattern} shows complex multiple inheritance patterns are possible, like the \newterm{diamond pattern}~\cite[\S~6.1]{Stroustrup89}\cite[\S~4]{Cargill91}.
Currently, the \CFA type-system does not support @virtual@ inheritance.

\begin{figure}
\begin{cfa}
void f( U, U * );		$\C{// value, pointer}$
void g( W, W * );		$\C{// value, pointer}$
U u, * up; W w, * wp; S s, * sp;
u = s; up = sp;			$\C{// value, pointer}$
w = s; wp = sp;			$\C{// value, pointer}$
f( s, &s );				$\C{// value, pointer}$
g( s, &s );				$\C{// value, pointer}$
\end{cfa}
\caption{Plan-9 polymorphism}
\label{f:Plan9Polymorphism}
\end{figure}

\begin{figure}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
struct B { int b; };
struct L : @public B@ { int l, w; };
struct R : @public B@ { int r, w; };
struct T : @public L, R@ { int t; };
		T
	 B L B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate
\end{c++}
&
\begin{cfa}
struct B { int b; };
struct L { @inline B;@ int l, w; };
struct R { @inline B;@ int r, w; };
struct T { @inline L; inline R;@ int t; };
		T
	 B L B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate, proposed solution t.L.b = 42;
\end{cfa}
\end{tabular}
\caption{Diamond non-virtual inheritance pattern}
\label{f:DiamondInheritancePattern}
\end{figure}


\section{Features}

The following features directed this project, where the goal is high-performance list operations required by \CFA runtime components, like the threading library.


\subsection{Core Design Issues}

The doubly-linked list attaches links intrusively in a node, allows a node to appear on multiple lists (axes) simultaneously, integrates with user code via the type system, treats its ends uniformly, and identifies a list using an explicit head.
This design covers the system and data-management issues stated in \VRef{toc:lst:issue}.

\VRef[Figure]{fig:lst-features-intro} continues the running @req@ example from \VRef[Figure]{fig:lst-issues-attach} using the \CFA list @dlist@.
The \CFA link attachment is intrusive so the resulting memory layout is per user node, as for the LQ version of \VRef[Figure]{f:Intrusive}.
The \CFA framework provides the generic type @dlink( T )@ (not to be confused with @dlist@) for the two link fields (front and back).
A user inserts the links into the @req@ structure via \CFA inline-inheritance \see{\VRef{s:Plan9Inheritance}}.
Lists leverage the automatic conversion of a pointer to an anonymous inline field for assignments and function calls.
Therefore, a reference to a @req@ is implicitly convertible to @dlink@ in both contexts.
The links in @dlink@ point at another embedded @dlink@ node, know the offsets of all links (data is abstract but accessible), and any field-offset arithmetic or link-value changes are safe and abstract.

\begin{figure}
	\lstinput{20-30}{lst-features-intro.run.cfa}
	\caption[Multiple link axes in \CFA list library]{
		Demonstration of the running \lstinline{req} example, done using the \CFA list library.
		This example is equivalent to the three approaches in \VRef[Figure]{fig:lst-issues-attach}.
	}
	\label{fig:lst-features-intro}
\end{figure}

\VRef[Figure]{f:dlistOutline} shows the outline for types @dlink@ and @dlist@.
Note, the first @forall@ clause is distributed across all the declarations in its containing block, eliminating repetition on each declaration.
The second nested @forall@ on @dlist@ is also distributed and adds an optional second type parameter, @tLinks@, denoting the linking axis \see{\VRef{s:Axis}}, \ie the kind of list this node can appear on.

\begin{figure}
\begin{cfa}
forall( tE & ) {					$\C{// distributed}$
	struct @dlink@ { ... };			$\C{// abstract type}$
	static inline void ?{}( dlink( tE ) & this );	$\C{// constructor}$

	forall( tLinks & = dlink( tE ) ) {	$\C{// distributed, default type for axis}$
		struct @dlist@ { ... };		$\C{// abstract type}$
		static inline void ?{}( dlist( tE, tLinks ) & this );	$\C{// constructor}$
	}
}
\end{cfa}
\caption[Sketch of the dlink and dlist type definitions]{Sketch of the \lstinline{dlink} and \lstinline{dlist} type definitions}
\label{f:dlistOutline}
\end{figure}

\VRef[Figure]{fig:lst-features-multidir} shows how the \CFA library supports multiple inline links, so a node has multiple axes.
The declaration of @req@ has two inline-inheriting @dlink@ occurrences.
The first of these gives a type named @req.by_pri@, @req@ inherits from it, and it inherits from @dlink@.
The second line @req.by_rqr@ is similar to @req.by_pri@.
Thus, there is a diamond, non-virtual, inheritance from @req@ to @dlink@, with @by_pri@ and @by_rqr@ being the mid-level types \see{\VRef[Figure]{f:DiamondInheritancePattern}}.

The declarations of the list-head objects, @reqs_pri@, @reqs_rqr_42@, @reqs_rqr_17@, and @reqs_rqr_99@, bind which link nodes in @req@ are used by each list.
Hence, the type of the variable @reqs_pri@, @dlist(req, req.by_pri)@, means operations on @reqs_pri@ implicitly select (disambiguate) the correct @dlink@s, \eg the call @insert_first(reqs_pri, ...)@ implies, ``here, we are working by priority.''
As in \VRef[Figure]{fig:lst-issues-multi-static}, three lists are constructed: a priority list containing all nodes, a list with only nodes containing the value 42, and a list with only nodes containing the value 17.

\begin{figure}
\centering

\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{C} \\
	\begin{tabular}{@{}l@{}}
	\lstinput{20-31}{lst-features-multidir.run.cfa} \\
	\lstinput{43-71}{lst-features-multidir.run.cfa}
	\end{tabular}
&
	\lstinput[language=C++]{20-60}{lst-issues-multi-static.run.c}
\end{tabular}

\caption[Demonstration of multiple static link axes in \CFA]{
	Demonstration of multiple static link axes in \CFA.
	The right example is from \VRef[Figure]{fig:lst-issues-multi-static}.
	The left \CFA example does the same job.
}
\label{fig:lst-features-multidir}
\end{figure}

The list library also supports the common case of single directionality more naturally than LQ.
Returning to \VRef[Figure]{fig:lst-features-intro}, the single-axis list has no contrived name for the link axis as it uses the default type in the definition of @dlist@;
in contrast, the LQ list in \VRef[Figure]{f:Intrusive} adds the unnecessary field name @d@.
In \CFA, a single-axis list sets up a single inheritance with @dlink@, and the default list axis is the @dlink@ itself.

When operating on a list with several axes using operations that do not take the list head, the list axis can be ambiguous.
For example, a call like @insert_after( r1, r2 )@ does not have enough information to know which axis to select implicitly.
Is @r2@ supposed to be the next-priority request after @r1@, or is @r2@ supposed to join the same-requester list of @r1@?
As such, the \CFA type-system gives an ambiguity error for this call.
There are multiple ways to resolve the ambiguity.
The simplest is an explicit cast on each call to select the specific axis, \eg @insert_after( (by_pri)r1, r2 )@.
However, multiple explicit casts are tedious and error-prone.
To mitigate this issue, the list library provides a hook for applying the \CFA language's scoping and priority rules.
\begin{cfa}
with ( DLINK_VIA( req, req.pri ) ) insert_after( r1, r2 );
\end{cfa}
Here, the @with@ statement opens the scope of the object type for the expression;
hence, the @DLINK_VIA@ result causes one of the list axes to become a more attractive candidate to \CFA's overload resolution.
This boost can be applied across multiple statements in a block or an entire function body.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
@with( DLINK_VIA( req, req.pri ) ) {@
	... insert_after( r1, r2 ); ...
@}@
\end{cfa}
&
\begin{cfa}
void f() @with( DLINK_VIA( req, req.pri ) ) {@
	... insert_after( r1, r2 ); ...
@}@
\end{cfa}
\end{tabular}
\end{cquote}
Within the @with@, the code acts as if there is only one list axis, without explicit casting.

Unlike the \CC template collection types, the \CFA library works completely within the type system;
both @dlink@ and @dlist@ are ordinary types, not language macros.
There is no textual expansion other than header-included static-inline functions for performance.
Hence, errors in user code are reported only with mention of the library's declarations, versus long template names in error messages.
Finally, the library is separately compiled from the usage code, modulo inlining.


\section{List API}

\VRef[Figure]{f:ListAPI} shows the API for the doubly-linked list operations, where each is explained.
\begin{itemize}[leftmargin=*]
\item
@listed@ returns true if the node is an element of a list and false otherwise.
\item
@empty@ returns true if the list has no nodes and false otherwise.
\item
@first@ returns a reference to the first node of the list without removing it or @0p@ if the list is empty.
\item
@last@ returns a reference to the last node of the list without removing it or @0p@ if the list is empty.
\item
@insert_before@ adds a node before a specified node \see{\lstinline{insert_last} for insertion at the end}.\footnote{
Some list packages allow \lstinline{0p} (\lstinline{nullptr}) for the before/after node implying insert/remove at the start/end of the list, respectively.
However, this inserts an \lstinline{if} statement in the fastpath of a potentially commonly used list operation.}
\item
@insert_after@ adds a node after a specified node \see{\lstinline{insert_first} for insertion at the start}.\footnotemark[\value{footnote}]
\item
@remove@ removes a specified node from the list (any location) and returns a reference to the node.
\item
@iter@ creates an iterator for the list.
\item
@recede@ returns true if the iterator cursor is advanced to the previous (predecessor, towards first) node before the prior cursor node and false otherwise.
\item
@advance@ returns true if the iterator cursor is advanced to the next (successor, towards last) node after the prior cursor node and false otherwise.
\item
@first@ returns true if the node is the first node in the list and false otherwise.
\item
@last@ returns true if the node is the last node in the list and false otherwise.
\item
@prev@ returns a reference to the previous (predecessor, towards first) node before a specified node or @0p@ if the specified node is the first node in the list.
\item
@next@ returns a reference to the next (successor, towards last) node after a specified node or @0p@ if the specified node is the last node in the list.
\item
@insert_first@ adds a node to the start of the list so it becomes the first node and returns a reference to the node for cascading.
\item
@insert_last@ adds a node to the end of the list so it becomes the last node and returns a reference to the node for cascading.
\item
@remove_first@ removes the first node and returns a reference to it or @0p@ if the list is empty.
\item
@remove_last@ removes the last node and returns a reference to it or @0p@ if the list is empty.
\item
@transfer@ transfers all nodes from the @from@ list to the end of the @to@ list; the @from@ list is empty after the transfer.
\item
@split@ transfers the @from@ list up to the specified node to the end of the @to@ list; the @from@ list becomes the list after the node.
The node must be in the @from@ list.
\end{itemize}
For operations @insert_*@, @insert@, and @remove@, a variable-sized list of nodes can be specified using \CFA's tuple type~\cite[\S~4.7]{Moss18} (not discussed), \eg @insert( list, n1, n2, n3 )@, which recursively invokes @insert( list, n@$_i$@ )@.\footnote{
Currently, a resolver bug between tuple types and references means tuple routines must use pointer parameters.
Nevertheless, the imaginary reference versions are used here as the code is cleaner, \eg no \lstinline{&} on call arguments.}

\begin{figure}
\begin{cfa}
bool listed( E & node );
bool empty( dlist( E ) & list );
E & first( dlist( E ) & list );
E & last( dlist( E ) & list );
E & insert_before( E & before, E & node );
E & insert_after( E & after, E & node );
E & remove( E & node );
E & iter( dlist( E ) & list );
bool advance( E && refx );
bool recede( E && refx );
bool first( E & node );
bool last( E & node );
E & prev( E & node );
E & next( E & node );
E & insert_first( dlist( E ) & list, E & node );
E & insert_last( dlist( E ) & list, E & node );	// synonym insert
E & remove_first( dlist( E ) & list );
E & remove_last( dlist( E ) & list );
void transfer( dlist( E ) & to, dlist( E ) & from );
void split( dlist( E ) & to, dlist( E ) & from, E & node );
\end{cfa}
\caption{\CFA list API}
\label{f:ListAPI}
\end{figure}


\subsection{Iteration}

It is possible to iterate through a list manually or using a set of standard macros.
\VRef[Figure]{f:IteratorDriver} shows the iterator driver, managing a list of nodes, used throughout the following iterator examples.
Each example assumes its loop body prints the value in the current node.

\begin{figure}
\begin{cfa}
#include <fstream.hfa>
#include <list.hfa>
struct node {
	int v;
	inline dlink(node);
};
int main() {
	dlist(node) list;
	node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
	insert( list, n1, n2, n3, n4 );	$\C{// insert in any order}$
	sout | nlOff;
	@for ( ... )@ sout | it.v | ","; sout | nl;	$\C{// iterator examples in text}$
	remove( n1, n2, n3, n4 );	$\C{// remove in any order}$
}
\end{cfa}
\caption{Iterator driver}
\label{f:IteratorDriver}
\end{figure}

The manual method is low level but allows complete control of the iteration.
The list cursor (index) can be either a pointer or a reference to a node in the list.
The choice depends on how the programmer wants to access the fields: @it->f@ or @it.f@.
The following examples use a reference because the loop body manipulates the node values rather than the list pointers.
The end of iteration is denoted by the loop cursor returning @0p@.

\noindent
Iterating forward and reverse through the entire list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @first@( list ); &it /* != 0p */ ; &it = &@next@( it ) ) ...
for ( node & it = @last@( list ); &it; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse from a starting node through the remaining list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @n2@; &it; &it = &@next@( it ) ) ...
for ( node & it = @n3@; &it; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
2, 3, 4,
3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse from a starting node to an ending node through the contained list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @n2@; &it @!= &n4@; &it = &@next@( it ) ) ...
for ( node & it = @n4@; &it @!= &n2@; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
2, 3,
4, 3,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse through the entire list using the shorthand: start at the list head and pick an axis.
In this case, @advance@ and @recede@ return a boolean, like \CC @while ( cin >> i )@.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @iter@( list ); @advance@( it ); ) ...
for ( node & it = @iter@( list ); @recede@( it ); ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Additionally, there are convenience macros that look like @foreach@ in other programming languages.
Iterating forward and reverse through the entire list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
FOREACH( list, it ) ...
FOREACH_REV( list, it ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse through the entire list or until a predicate is triggered.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
FOREACH_COND( list, it, @it.v == 3@ ) ...
FOREACH_REV_COND( list, it, @it.v == 1@ ) ...
\end{cfa}
&
\begin{cfa}
1, 2,
4, 3, 2,
\end{cfa}
\end{tabular}
\end{cquote}
Macros are not ideal, so future work is to provide a language-level @foreach@ statement, like \CC.
Finally, a predicate can be added to any of the manual iteration loops.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = first( list ); &it @&& !(it.v == 3)@; &it = &next( it ) ) ...
for ( node & it = last( list ); &it @&& !(it.v == 1)@; &it = &prev( it ) ) ...
for ( node & it = iter( list ); advance( it ) @&& !(it.v == 3)@; ) ...
for ( node & it = iter( list ); recede( it ) @&& !(it.v == 1)@; ) ...
\end{cfa}
&
\begin{cfa}
1, 2,
4, 3, 2,
1, 2,
4, 3, 2,
\end{cfa}
\end{tabular}
\end{cquote}

\begin{comment}
Many languages offer an iterator interface for collections, and a corresponding for-each loop syntax for consuming the items through implicit interface calls.
\CFA does not yet have a general-purpose form of such a feature, though it has a form that addresses some use cases.
This section shows why the incumbent \CFA pattern does not work for linked lists and gives the alternative now offered by the linked-list library.
Chapter 5 [TODO: deal with optimism here] presents a design that satisfies both uses and accommodates even more complex collections.

The current \CFA extensible loop syntax is:
\begin{cfa}
for( elem; end )
for( elem; begin ~ end )
for( elem; begin ~ end ~ step )
\end{cfa}
Many derived forms of @begin ~ end@ exist, but are used for defining numeric ranges, so they are excluded from the linked-list discussion.
These three forms rely on the iterative trait:
\begin{cfa}
forall( T ) trait Iterate {
	void ?{}( T & t, zero_t );
	int ?<?( T t1, T t2 );
	int ?<=?( T t1, T t2 );
	int ?>?( T t1, T t2 );
	int ?>=?( T t1, T t2 );
	T ?+=?( T & t1, T t2 );
	T ?+=?( T & t, one_t );
	T ?-=?( T & t1, T t2 );
	T ?-=?( T & t, one_t );
}
\end{cfa}
where @zero_t@ and @one_t@ are constructors for the constants 0 and 1.
The simple loops above are abbreviations for:
\begin{cfa}
for( typeof(end) elem = @0@; elem @<@ end; elem @+=@ @1@ )
for( typeof(begin) elem = begin; elem @<@ end; elem @+=@ @1@ )
for( typeof(begin) elem = @0@; elem @<@ end; elem @+=@ @step@ )
\end{cfa}
which use a subset of the trait operations.
The shortened loop works well for iterating a number of times or through an array.
\begin{cfa}
for ( 20 )	// 20 iterations
for ( i: 1 ~= 21 ~ 2 )	// odd numbers
for ( i; n ) total += a[i];	// subscripts
\end{cfa}
which is similar to other languages, like JavaScript.
\begin{cfa}
for ( i in a ) total += a[i];
\end{cfa}
Albeit with different mechanisms for expressing the array's length.
It might be possible to take the \CC iterator:
\begin{c++}
for ( list<int>::iterator it=mylist.begin(); it != mylist.end(); ++it )
\end{c++}
and convert it to the \CFA form
\begin{cfa}
for ( it; begin() ~= end() )
\end{cfa}
by having a list operator @<=@ that just looks for equality, and @+=@ that moves to the next node, \etc.

However, the list usage is contrived, because a list does not use its data values for relational comparison, only links for equality comparison.
Hence, the focus of a list iterator's stopping condition is fundamentally different.
So, iteration of a linked list via the existing loop syntax is to ask whether this syntax can also do double-duty for iterating values.
That is, to be an analog of JavaScript's @for..of@ syntax:
\begin{cfa}
for ( e of a ) total += e;
\end{cfa}

The \CFA team will likely implement an extension of this functionality that moves the @~@ syntax from being part of the loop, to being a first-class operator (with associated multi-place operators for the elided derived forms).
With this change, both @begin ~ end@ and @end@ (in context of the latter ``two-place for'' expression) parse as \emph{ranges}, and the loop syntax becomes, simply:
\begin{cfa}
	for( elem; rangeExpr )
\end{cfa}
The expansion and underlying API are under discussion.
TODO: explain pivot from ``is it done?'' to ``has more?''
Advantages of this change include being able to pass ranges to functions, for example, projecting a numerically regular subsequence of array entries, and being able to use the loop syntax to cover more collection types, such as looping over the keys of a hash table.

When iterating an empty list, the question, ``Is there a further element?'' needs to be posed once, receiving the answer, ``no.''
When iterating an $n$-item list, the same question gets $n$ ``yes'' answers (one for each element), plus one ``no'' answer, once there are no more elements; the question is posed $n+1$ times.

When iterating an empty list, the question, ``What is the value of the current element?'' is never posed, nor is the command, ``Move to the next element,'' issued. When iterating an $n$-item list, each happens $n$ times.

So, asking about the existence of an element happens once more than retrieving an element's value and advancing the position.

Many iteration APIs deal with this fact by splitting these steps across different functions, and relying on the user's knowledge of iterator state to know when to call each. In Java, the function @hasNext@ should be called $n+1$ times and @next@ should be called $n$ times (doing the double duty of advancing the iteration and returning a value). In \CC, the jobs are split among the three actions, @it != end@, @it++@ and @*it@, the latter two being called one time more than the first.

TODO: deal with simultaneous axes: @DLINK_VIA@ just works

TODO: deal with spontaneous simultaneity, like a single-axis req, put into an array: which ``axis'' is @&req++@ navigating: array-adjacency vs link dereference. It should stick according to how you got it in the first place: navigating dlist(req, req.pri) vs navigating array(req, 42). (prob. future work)
\end{comment}


\section[C++ Lists]{\CC Lists}

It is worth addressing two API issues in \CC lists avoided in \CFA.
First, \CC lists require two steps to remove a node versus one in \CFA.
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node> li;
node n = li.front(); // copy could raise exception
li.pop_front();
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n = remove_first( list );

\end{cfa}
\end{tabular}
\end{cquote}
The argument for two steps is exception safety: returning an unknown T by value/move might throw an exception from T's copy/move constructor.
Hence, to be \emph{exception safe}, all internal list operations must complete before the copy/move so the list is consistent should the return fail.
This coding style can result in contrived code, but is usually possible;
however, it requires the collection designer to anticipate the potential throw.
(Note, this anticipation issue is pervasive in exception systems, not just with collections.)
The solution moves the coding complexity from the collection designer to the programmer~\cite[ch~10, part 3]{Sutter99}.
First, obtain the node, which might fail, but the collection is unmodified.
Second, remove the node, which modifies the collection without the possibility of an exception.
This movement of responsibility increases the cognitive effort for programmers.
Unfortunately, this \emph{single-responsibility principle}, \ie preferring separate operations, is often repeated as a necessary requirement rather than an optional one.
Separate operations should always be available, but their composition should also be available.
Interestingly, this issue does not apply to intrusive lists, because the node data is never copied/moved in or out of a list;
only the link fields are accessed in list operations.

Second, \VRef[Figure]{f:CCvsCFAListIssues} shows an example where a \CC list operation is $O(N)$ rather than $O(1)$ as in \CFA.
This issue is inherent to wrapped (non-intrusive) lists.
Specifically, removing a node requires access to the links that materialize its membership.
In a wrapped list, there is no access from node to links, and for abstraction reasons, no direct pointers to wrapped nodes, so the links must be found indirectly by navigating the list.
The \CC iterator is the abstraction to navigate wrapped links.
So an iterator is needed, not because it offers go-next, but for managing the list membership.
Note, attempting to keep an array of iterators to each node requires high complexity to ensure the list and array are harmonized.

\begin{figure}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node *> list;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
list.push_back( &n1 ); list.push_back( &n2 );
list.push_back( &n3 ); list.push_back( &n4 );
list<node *>::iterator it;
for ( it = list.begin(); it != list.end(); it++ )
	if ( *it == &n3 ) { list.erase( it ); break; }
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
insert( list, n1, n2, n3, n4 );



remove( list, n3 );
\end{cfa}
\end{tabular}
\caption{Obtaining a linked-list iterator in \CC \vs \CFA}
\label{f:CCvsCFAListIssues}
\end{figure}

\begin{comment}
Dave Dice:
There are what I'd call the "dark days" of C++, where the language \& libraries seemed to be getting progressively uglier - requiring more tokens to express something simple, and lots of arcana.
But over time, somehow, they seem to have mostly righted the ship and now I can write C++ code that's fairly terse, like python, by ignoring all the old constructs.
(They carry around the legacy baggage, of course, but seemed to have found a way to evolve away from it).

If you just want to traverse a std::list, then, using modern "for" loops, you never need to see an iterator.
I try hard never to need to write X.begin() or X.end().
(There are situations where I'll expose iterators for my own types, however, to enable modern "for" loops).
If I'm implementing simple linked lists, I'll usually skip std:: collections and do it myself, as it's less grief.
I just don't get that much advantage from std::list. And my code is certainly not any shorter.
On the other hand, @std::map@, @unordered_map@, @set@, and friends, are terrific, and I can usually still avoid seeing any iterators, which are blight to the eye.
So those are a win and I get to move up an abstraction level, and write terse but easily understood code that still performs well.
(One slight concern here is that all the C++ collection/container code is templated and lives in include files, and not traditional libraries.
So the only way to replace something - say with a better algorithm, or you have a bug in the collections code - is to boil the oceans and recompile everything.
But on the other hand the compiler can specialize to the specific use case, which is often a nice performance win).

And yeah, the method names are pretty terrible. I think they boxed themselves in early with a set of conventions that didn't age well, and they tried to stick with it and force regularity over the collection types.

I've never seen anything written up about the history, although lots of the cppcon talks make note of the "bad old days" idea and how collection \& container library design evolved. The issue is certainly recognized.
\end{comment}

614
615\section{Implementation}
616
617\VRef[Figure]{fig:lst-impl-links} continues the running @req@ example, showing the \CFA list library's internal representation.
618The @dlink@ structure contains exactly two pointers: @next@ and @prev@, which are opaque.
619Even though the user-facing list model is linear, the \CFA library implements all listing as circular.
620This choice helps achieve uniform end treatment.
621% and \PAB{TODO finish summarizing benefit}.
622A link pointer targets a neighbouring @dlink@ structure, rather than a neighbouring @req@.
623(Recall, the running example has the user putting a @dlink@ within a @req@.)
624
625\begin{figure}
626 \centering
627 \includegraphics[width=\textwidth]{lst-impl-links.pdf}
628 \caption{
629 \CFA list library representations for headed and headless lists
630 }
631 \label{fig:lst-impl-links}
632\end{figure}
633
634Circular link-pointers (dashed lines) are tagged internally in the pointer to indicate linear endpoints.
635Links among neighbour nodes are not tagged.
636Iteration reports ``has more elements'' when accessing untagged links, and ``no more elements'' when accessing tagged links.
637Hence, the tags are set on the links that a user cannot navigate.
638
639The \CFA library works in headed and headless modes.
640In a headed list, the list head (@dlist(req)@) acts as an extra element in the implementation-level circularly-linked list.
641The content of a @dlist@ header is a (private) @dlink@, with the @next@ pointer to the first element, and the @prev@ pointer to the last element.
Since the head wraps a @dlink@, as does @req@, and since a link-pointer targets a @dlink@, the resulting cycle is among @dlink@ structures, each situated inside a header or node.
643An untagged pointer points within a @req@, while a tagged pointer points within a list head.
644In a headless list, the circular backing list is only among @dlink@s within @req@s.
645
646No distinction is made between an unlisted node (top left and middle) under a headed model and a singleton list under a headless model.
647Both are represented as an item referring to itself, with both tags set.
648
649
650\section{Assessment}
651\label{toc:lst:assess}
652
653This section examines the performance of the discussed list implementations.
The goal is to show that the \CFA lists are competitive with other designs.
However, the different list designs do not have equivalent functionality, so it is impossible to select a single winner encompassing both functionality and execution performance.
655
656
657\subsection{Experiment Design}
658
659This section explains how the experiment is built.
660Many of the following parts define terminology concerning tuning knobs.
661\VRef[Table]{f:ListPerfGlossary} provides a consolidated reference.
662
663\begin{table}
664\caption{
665 Glossary of terms used in the list performance evaluation
666}
667\label{f:ListPerfGlossary}
668\noindent
669\begin{tabular}{p{1.75in}@{\ }p{4.5in}}
670Insert-Remove (IR)
	& The atomic unit of work being measured: one insertion plus one removal (plus all looping/tracking overheads) \\
672Use Case
673 & Pattern of add-remove calls. \\
674-- Movement & \\
675 \quad $\ni$ stack
676 & IRs happen at the same end. \\
677 \quad $\ni$ queue
678 & IRs happen at opposite ends. \\
679-- Polarity
680 & Which of the two orientations in which the movement happens. \\
681 \quad $\ni$ insert-first
682 & All inserts at front; stack removes at front; queue removes at back. \\
683 \quad $\ni$ insert-last
684 & All inserts at back; stack removes at back; queue removes at front. \\
685-- Accessor
686 & How an insertion position, or removal element, is specified. The same position/element is picked either way. \\
687 \quad $\ni$ all head
688 & IRs both through the head \\
689 \quad $\ni$ insert element
690 & insert by element and remove through the head \\
691 \quad $\ni$ remove element
692 & insert through head and remove by element\\
693Physical Context & \\
694-- Size (number) & Number of nodes being linked. Equals the length of the program's sole list. \\
695-- Size Zone
696 & Contiguous range of sizes, chosen to avoid known anomalies and to sample a brief plateau. Each zone buckets four specific sizes. \\
697 \quad $\ni$ small
698 & lists of 4--16 elements \\
699 \quad $\ni$ medium
700 & lists of 50--200 elements \\
701 \quad $\ni$ (other)
702 & Not used for comparing intrusive frameworks. \\
703-- Machine
704 & Computer running the experiment \\
705 \quad $\ni$ AMD
706 & smaller cache \\
707 \quad $\ni$ Intel
708 & bigger cache \\
709Framework & A particular linked-list implementation (within its host language) \\
710$\ni$ \CC & The @std::list@ type of g++. \\
711$\ni$ lq-list & The @list@ type of LQ from glibc of gcc. \\
712$\ni$ lq-tailq & The @tailq@ type of the same. \\
713$\ni$ \uCpp & \uCpp's @uSequence@ \\
714$\ni$ \CFA & \CFA's @dlist@ \\
715Individual config'n (IC)
716 & Specific situation in which an IR sequence is timed. One use case, on one machine, driving one exact list size, by one framework. \\
717Trial
718 & Timed run of the test program under a given IC, during which a few billion IRs occur. \\
719Explanation being
	& How an independent explanatory variable X is analyzed \\
721-- Marginalized
722 & Left alone, allowed to vary, yielding a more absolute measure. Shows the effect that X causes. If all explanations are marginalized, then absolute times are available and a relative time has a peer group that is the entire population. \\
723-- Conditioned
724 & Held constant, yielding a more relative measure. Hides the effect that X causes. Conditioning on X creates more, smaller relative-measure peer groups, by isolating each X-domain value. Resulting interpretation is, ``Assuming no change in X.'' \\
725\end{tabular}
726\end{table}
727
728
729\subsubsection{Add-Remove Performance}
730\label{s:AddRemovePerformance}
731
732The fundamental job of a linked-list library is to manage the links that connect nodes.
733Any link management is an action that causes pair(s) of elements to become, or cease to be, adjacent in the list.
734Thus, adding and removing an element are the sole primitive actions.
735
736Repeated adding and removing is necessary to measure timing because these operations can be as short as a dozen instructions.
737These instruction sequences may have cases that proceed (in a modern, deep pipeline) without a stall.
738
739This experiment takes the position that:
740\begin{enumerate}[leftmargin=*]
741 \item The total time to add and remove is relevant, as opposed to having one time for adding and a separate time for removing.
742 Adds without removes quickly fill memory;
743 removing without adding is impossible.
744 \item A relevant breakdown ``by operation'' is, rather, the usage pattern of the add/remove calls.
	An example pattern choice is adding and removing at the same end, making a stack, or at opposite ends, making a queue.
746 Another is pushing on the front by calling @insert_first(lst, e)@ \vs @insert(e, old_first_elm)@; this aspect provides the test's API coverage.
747 \VRef[Section]{s:UseCases} gives the full breakdown.
748 \item Speed differences caused by the host machine's memory hierarchy need to be identified and explained,
749 but do not represent advantages of one framework over another.
750\end{enumerate}
751
The experiment measures IR cost as the mean duration over a long sequence of additions and removals.
753The distribution of speeds experienced by an individual add-remove pair (tail latency) is not discussed.
754Space efficiency is shown only indirectly, by way of caches' impact on speed.
755The experiment is sensitive enough to show:
756\begin{itemize}
757 \item intrusive lists performing (majorly) differently than wrapped lists,
758 \item a space of (lesser) performance differences among the intrusive lists.
759\end{itemize}
760
761In all cases, the quantity discussed is the duration of one insert-remove (IR).
762An IR is the time taken to do one innermost insertion-loop iteration, one innermost removal-loop iteration, and its share of all overheads, amortized.
763Lower IR is better.
764This experiment typically does an IR in 1--10 ns.
765The short end of this range has durations of single-digit clock-cycle counts.
766Therefore, the situations that achieve the best times are saturating the instruction pipeline successfully.
767
768Often, an IR duration value needs to be considered relatively.
769For example, \VRef{s:SweetSoreSpots} asks whether one linked list implementation is more sensitive than another to the computer architecture.
770A finding might be that a machine slows implementation A by 10\% and B by 20\%.
771This finding is not saying that A is faster than B (on either machine).
772The finding could stand if B starts faster and then levels off, if B starts slower and gets worse, or in myriad other cases.
773The finding asserts that such distinctions are not what is immediately relevant.
The arithmetic that produces these differing answers removes the information about which one starts or ends up faster.
Each implementation's to-machine duration is stated relative to \emph{the same implementation's} from-machine duration.
776The resulting measure is still about a duration.
777The framework with the lower from-machine-relative duration handles the change better.
778
779
780\subsubsection{Test Program}
781\label{s:TetProgram}
782
The experiment driver defines an (intrusive) node type:
784\begin{cfa}
struct node_t {
	int i, j, k; // data fields
	// possible intrusive links
};
789\end{cfa}
790and considers the speed of building and tearing down a list of $n$ instances of it.
791% A number of experimental rounds per clock check is precalculated to be appropriate to the value of $n$.
792\begin{cfa}
793// simplified harness: CFA implementation,
794// stack movement, insert-first polarity, head-mediated access
795size_t totalOpsDone = 0;
796dlist( node_t ) lst;
797node_t nodes[ n ]; $\C{// preallocated list storage}$
798startTimer();
799while ( CONTINUE ) { $\C{// \(\approx\) 20 second duration}$
800 for ( i; n ) insert_first( lst, nodes[i] ); $\C{// build up}$
801 for ( i; n ) remove_first( lst ); $\C{// tear down}$
802 totalOpsDone += n;
803}
804stopTimer();
805reportedDuration = getTimerDuration() / totalOpsDone; // throughput per IR operation
806\end{cfa}
807To reduce administrative overhead, the $n$ nodes for each experiment list are preallocated in an array (on the stack), which removes dynamic allocations for this storage.
For an intrusive list, these nodes contain the intrusive link fields; for a wrapper list, a pointer to the node is stored in a dynamically-allocated internal node.
809Copying the node for wrapped lists skews the results with administration costs.
810The list insertion/removal operations are repeated for a typical 20+ second duration.
811After each round, a counter is incremented by $n$ (for throughput).
812Time is measured outside the loop because a large $n$ can overrun the time duration before the @CONTINUE@ flag is tested.
813Hence, there is a minimum of one outer (@CONTINUE@) loop iteration for large lists.
814The loop duration is divided by the counter and this throughput is reported.
815In a scatter-plot, each dot is one throughput, which means insert + remove + harness overhead.
816The harness overhead is constant when comparing linked-list frameworks and is kept as small as possible.
817% The remainder of the setup section discusses the choices that affected the harness overhead.
818
819To test list operations, the experiment performs the inserts/removes in different patterns, \eg insert and remove from front, insert from front and remove from back, random insert and remove, \etc.
820Unfortunately, the @std::list@ does \emph{not} support direct IR from a node without an iterator, \ie no @erase( node )@, even though the list is doubly-linked.
821To eliminate the iterator, a trick is used for random insertions without replacement, which takes advantage of the array nature of the nodes.
822The @i@ fields in each node are initialized from @0..n-1@.
823These @i@ values are then shuffled in the nodes, and the @i@ value is used to represent an indirection to that node for insertion.
824Hence, the nodes are inserted in random order and removed in the same random order.
825$\label{p:Shuffle}$
826\begin{cfa}
827 for ( i; n ) @nodes[i].i = i@; $\C[3.25in]{// indirection}$
828 shuffle( nodes, n ); $\C{// random shuffle indirects within nodes}$
829
830 while ( CONTINUE ) {
831 for ( i; n ) {
832 node_t & temp = nodes[ nodes[i].i ]; $\C{// select random node in array}$
833 @temp.j = 0;@ $\C{// only touch random node for wrapped nodes}$
834 insert_first( lst, temp ); $\C{// build up}$
835 }
836 for ( i; n ) pass( &remove_first( lst ) ); $\C{// tear down}\CRT$
837 totalOpsDone += n;
838 }
839\end{cfa}
Note, insertion traverses the array of nodes linearly, @nodes[i]@.
841For intrusive lists, the inserted (random) node is always touched because its link fields are read/written for insertion into the list.
842Hence, the array of nodes is being accessed both linearly and randomly during the traversal.
843For wrapped lists, the wrapped nodes are traversed linearly but the random node is not accessed, only a pointer to it is inserted into the linearly accessed wrapped node.
844Hence, the traversal is the same as the non-random traversal above.
To level the experiments, an explicit access to the random node, @temp.j = 0@, is added to the wrapped experiment.
846Furthermore, it is rare to IR nodes and not access them.
847
848% \emph{Interleaving} allows for movements other than pure stack and queue.
849% Note that the earlier example of using the iterators' array is still a pure stack: the item selected for @erase(...)@ is always the first.
850% Including a less predictable movement is important because real applications that justify doubly linked lists use them.
851% Freedom to remove from arbitrary places (and to insert under more relaxed assumptions) is the characteristic function of a doubly linked list.
852% A queue with drop-out is an example of such a movement.
853% A list implementation can show unrepresentative speed under a simple movement, for example, by enjoying unchallenged ``Is first element?'' branch predictions.
854
855% Interleaving brings ``at middle of list'' cases into a stream of add or remove invocations, which would otherwise be exclusively ``at end''.
856% A chosen split, like half middle and half end, populates a boolean array, which is then shuffled.
857% These booleans then direct the action to end-\vs-middle.
858%
859% \begin{cfa}
860% // harness (bookkeeping and shuffling elided): CFA implementation,
861% // stack movement, insert-first polarity, interleaved element-based remove access
862% dlist( item_t ) lst;
863% item_t items[ n ];
864% @bool interl[ n ];@ // elided: populate with weighted, shuffled [0,1]
865% while ( CONTINUE ) {
866% item_t * iters[ n ];
867% for ( i; n ) {
868% insert_first( items[i] );
869% iters[i] = & items[i];
870% }
871% @item_t ** crsr[ 2 ]@ = { // two cursors into iters
872% & iters[ @0@ ], // at stack-insert-first's removal end
873% & iters[ @n / interl_frac@ ] // in middle
874% };
875% for ( i; n ) {
876% item *** crsr_use = & crsr[ interl[ i ] ]@;
877% remove( *** crsr_use ); // removing from either middle or end
878% *crsr_use += 1; // that item is done
879% }
880% assert( crsr[0] == & iters[ @n / interl_frac@ ] ); // through second's start
881% assert( crsr[1] == & iters[ @n@ ] ); // did the rest
882% }
883% \end{cfa}
884%
885% By using the pair of cursors, the harness avoids branches, which could incur prediction stall times themselves, or prime a branch in the SUT.
886% This harness avoids telling the hardware what the SUT is about to do.
887
888
889\subsubsection{Use Cases}
890\label{s:UseCases}
891
892Where \VRef[Table]{f:ListPerfGlossary} enumerates the specific values, recall the use-case dimensions are:
893\begin{description}
894 \item[movement ($\times 2$)]
895 In these experiments, strict stack and queue patterns are tested.
896 \item[polarity ($\times 2$)]
897 Obtain one polarity from the other by reversing uses of first/last.
898 \item[accessor ($\times 3$)]
899 Giving an add/remove location by a list head's first/last, \vs by a preexisting reference to an individual element.
900\end{description}
901
902\begin{figure}
903\begin{comment}
904\centering
905\setlength{\tabcolsep}{8pt}
906\begin{tabular}{@{}ll@{}}
907\begin{tabular}{@{}c|c|c@{}}
908movement & polarity & accessor \\
909\hline
910\hline
911stack &
912 \begin{tabular}{@{}l@{}}
913 insert-first \\
914 \hline
915 insert-last
916 \end{tabular}
917 &
918 \begin{tabular}{@{}l@{}}
919 insert-head / remove-head \\
920 \hline
921 insert-list / remove-head \\
922 \hline
923 insert-head / remove-list
924 \end{tabular}
925 \\
926\hline
927queue &
928 \begin{tabular}{@{}l@{}}
929 insert-first \\
930 \hline
931 insert-last
932 \end{tabular}
933 &
934 \begin{tabular}{@{}l@{}}
935 insert-head / remove-head \\
936 \hline
937 insert-list / remove-head \\
938 \hline
939 insert-head / remove-list
940 \end{tabular}
941\end{tabular}
942&
943 \setlength{\tabcolsep}{3pt}
944 \small
945 \begin{tabular}{@{}ll@{}}
946 I: & stack, insert first, all head \\
947 II: & stack, insert first, insert element \\
948 III:& stack, insert first, remove element \\
949 IV: & stack, insert last, all head \\
950 V: & stack, insert last, insert element \\
951 VI: & stack, insert last, remove element \\
952 VII:& queue, insert first, all head \\
953 VIII:& queue, insert first, insert element \\
954 IX: & queue, insert first, remove element \\
955 X: & queue, insert last, all head \\
956 XI: & queue, insert last, iinsert element \\
957 XII:& queue, insert last, remove element \\
958 \end{tabular}
959\end{tabular}
960\end{comment}
961\setlength{\tabcolsep}{5pt}
962\small
963\begin{tabular}{rcccccccccccc}
964& I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII \\
965Movement &
966stack & stack & stack & stack & stack & stack &
967queue & queue & queue & queue & queue & queue \\
968Polarity &
969i-first & i-first & i-first & i-last & i-last & i-last &
970i-first & i-first & i-first & i-last & i-last & i-last \\
Accessor &
972all hd & ins-e & rem-e & all hd & ins-e & rem-e &
973all hd & ins-e & rem-e & all hd & ins-e & rem-e
974\end{tabular}
975\caption{Experiment use cases, numbered}
976\label{f:ExperimentOperations}
977\end{figure}
978
979A use case is a specific selection of movement, polarity and accessor.
980These experiments run twelve use cases.
When a comparison shows only what can happen when switching among use cases (as opposed to, \eg, how stacks differ from queues), the numbering scheme of \VRef[Figure]{f:ExperimentOperations} is used.
982
983With accessor, when an action names its insertion position or removal element, the harness either
984\begin{itemize}
985\item defers to the list-head's tracking of first/last (``through the head''), or
986\item applies its own knowledge of the current pattern, to name a position/element that happens to be first/last (``of known element'').
987\end{itemize}
988
989The accessor patterns, at the (\CFA) API level, are:
990\begin{description}
991 \item[all (through the) head:] Both IRs happen through the list head. The list head operations are @insert_first@, @insert_last@, @remove_first@ and @remove_last@.
992 \begin{sloppypar}
993 \item[insert (of known) element] \dots and remove through head: Inserts use @insert_before(e, first)@ or @insert_after(e, last)@, where @e@ is being inserted and @first@/@last@ are element references known by list-independent means.
994 \end{sloppypar}
995 \item[remove (of known) element] \dots and insert through the head: Removes use @remove(e)@, where @e@ is being removed. List-independent knowledge establishes that @e@ is first or last, as appropriate.
996\end{description}
997
998Comparing all-head with insert-element gives the relative performance of head-mediated \vs element-oriented insertion, because both use the same removal style.
999Comparing all-head with remove-element gives the relative performance of head-mediated \vs element-oriented removal, because both use the same insertion style.
1000
1001
1002\subsubsection{Sizing}
1003\label{s:experiment-sizing}
1004
Intuition suggests that measuring IR for different-sized lists should just be a multiple of the cost of a single linking/unlinking of a node.
1006However, there is a scaling issue as more memory is being read and written, where caching comes into play.
1007But caching is more than the amount of memory being accessed;
1008the access pattern is equally important.
1009Aggressive instruction-level parallelism scheduling, which enables short IR times, is the amplifier, \eg a data dependency is a critical path in one situation but not in another.
1010Therefore, the duration's response to size is not a steady worsening as size increases.
1011Often, each size-independent configuration responds to size increases in steps of slowdown.
1012Occasionally a slowdown step is followed by some performance increase, where an incurred penalty begins to amortize away.
1013Hence, performance results can have interesting jitter as size increases.
1014The analysis treats these behaviours as incidental.
1015It does not try to characterize various exact-size responses.
1016Rather, size zones are picked, specific effects inside of a zone are averaged away, and the story at one zone is compared to that at another.
1017
1018% It is true, but perhaps not obvious, that building and destroying long lists is slower than building and destroying short lists.
1019% Obviously, indeed, it takes longer to fuse and divide a hundred neighbours than five.
1020% But the key metric in this work, AII, is about a single link--unlink.
1021% So, critically, linking and unlinking a hundred neighbours actually takes \emph{more} than $20\times$ the time for five neighbours.
1022% The main reason is caching; when more neighbours are being manipulated, more memory is being read and written.
1023%
1024% But caching success is about more than the amount of memory worked on.
1025% Subtle changes in pattern become butterfly effects.
1026% Aggressive ILP scheduling, which enables short AIR times, is the amplifier.
1027% A data dependency, present in one framework but not another, is critical path in one situation but in not another.
1028% So, duration's response to size is not a steady worsening as size increases.
1029% Rather, each size-independent configuration often responds to size increases with leaps of worsening.
1030% Occasionally a leap is even followed size-run of retrograde response, where a suddenly incurred penalty has a chance to amortize away.
1031% The frameworks tend to leapfrog over each other, at different points, as size increases.
1032%
1033% The analysis treats these behaviours as incidental.
1034% It does not try to characterize various exact-size responses.
1035% Rather, size zones are picked, specific effects inside of a zone are averaged away, and the story at one zone is compared to that at another zone.
1036
1037\begin{figure}
1038\centering
1039 \setlength{\tabcolsep}{0pt}
1040 \begin{tabular}{p{3.4375in}p{0.125in}p{2.9375in}}
1041 \subfloat[ ]{\label{f:zoomin-abs-i-swift}
1042 \includegraphics{plot-list-zoomin-abs-i-swift.pdf}
1043 } & &
1044 \subfloat[ ]{\label{f:zoomin-abs-ix-java}
1045 \includegraphics{plot-list-zoomin-abs-ix-java.pdf}
1046 }
1047 \end{tabular}
1048 \caption[Variety of IR duration responses to list length, at small--medium lengths]{Variety of IR duration responses to list length, at small--medium lengths. Two example use cases are shown: I, stack movement with head-only access (plot a); IX, queue movement with element-oriented removal access (plot b); both use cases have insert-first polarity. One example is run on each machine: UC-I on AMD (plot a); UC-IX on Intel (plot b). Lower is better.}
1049 \label{fig:plot-list-zoomin-abs}
1050\end{figure}
1051
1052\VRef[Figure]{fig:plot-list-zoomin-abs} gives two example responses to size.
1053% The dataset here is a small portion of the overall result and it is premature to attempt conclusions about framework differences from it.
1054These two example cases show how differently a pair of configurations can behave.
1055% Of more immediate significance, they also have a pattern repeated, in all eight of their size responses.
1056% Note the ``small'' and ``medium'' overlaid boxes, which call out the size zones' definitions.
Outside of the size-zone boxes overlaid on the plots, the size response is erratic.
Inside a box, the size response is relatively smooth.
1059Within and among the boxes there are identifiable patterns, which occur throughout all the experimental results.
1060
1061Every data point is one Individual Configuration (IC).
1062Each IC is tested by five trials; a point's error bars give the best and worst trial results; the point's centre is the mean of the middle three.
1063The amount of error here is typical across all the configurations (beyond the two shown here).
1064With a few exceptions, it is modest, so these experiments are repeatable.
1065
To preview, \VRef{s:ResultCoarseComparisonStyles} dismisses large sizes (above 150 elements) and wrapped lists, because the performance story is dominated by the amount of memory touched, not by intrusive \vs wrapped lists.
1067At smaller sizes, \VRef{s:ComparingIntrusiveImplementations} shows differences appear among the intrusive-list implementations.
1068Among the ``not Large'' sizes ($\le$ 150), two zones, Small and Medium, are selected as representatives of what can vary when the scale is changed.
1069These particular ranges are chosen because each range tends to have one story repeated across its constituent sizes.
1070For example, if \CFA's duration increases across Small, then the other frameworks' usually do too, or if \CFA is beating \uCpp across Medium, then it usually is at the high end too.
1071% The leapfrogging tends to happen outside of these two ranges.
1072
1073Finally, on the AMD architecture, \CFA performed poorly at size 1, on queue movements only, and no other framework saw the same effect.
1074This extreme outlier is not plotted in graphs.
After exploring the phenomenon in depth (not presented), the conclusion is that it stems from a quirky interaction between the hardware and the testing harness.
1076A side experiment (that does not enrich the overall comparisons) saw user-induced gaps of $\approx$10 ns between same-list operations hide the effect completely.
1077These gaps are realistic because when an item goes on a list another action comes back to it \emph{later}.
1078The pattern that the general harness uses, concentrating time-adjacent operations on one list, is useful for measuring the ``small'' size zone, but is contrived, from the perspective of a data hazard that only this pattern exposes.
1079The general comparisons do not see the effect at all, because they use only the Small and Medium zones, with shortest length of 4.
1080
1081% A spot of poor performance appears in the general results for \CFA at size 1.
1082% Section \MLB{TODO:xref} explores the phenomenon and concludes that it is an anomaly due to a quirky interaction with the testing rig.
1083% To do so, two it considers size as either length or width.
1084% Length is the number of elements in a list.
1085% Width is a number of these lists being kept, worked upon in round-robin order.
1086% Outside of \MLB{TODO:xref}, size always means length, and width is 1.
1087
1088
1089\subsubsection{Execution Environment}
1090\label{s:ExperimentalEnvironment}
1091
1092The performance experiments are run on:
1093\begin{description}[leftmargin=*,topsep=3pt,itemsep=2pt,parsep=0pt]
1094%\item[PC]
1095%with a 64-bit eight-core AMD FX-8370E, with ``taskset'' pinning to core \#6. The machine has 16 GB of RAM and 8 MB of last-level cache.
1096%\item[ARM]
1097%Gigabyte E252-P31 128-core socket 3.0 GHz, WO memory model
1098\item[AMD]
1099Supermicro AS--1125HS--TNR EPYC 9754 128--core socket, hyper-threading $\times$ 2 sockets (512 processing units) 2.25 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 1024KB L2, 16MB L3, where each L3 cache covers 1 NUMA node and 8 cores (16 processors).
1100\item[Intel]
Supermicro SYS-121H-TNR Xeon Gold 6530 32--core, hyper-threading $\times$ 2 sockets (128 processing units) 2.1 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 2048KB L2, 160MB L3, where each L3 cache covers 2 NUMA nodes and 32 cores (64 processors).
1102\end{description}
The experiments are single-threaded and pinned to a single core to prevent any OS migration, which might cause cache or NUMA effects that perturb the experiment.
1104
1105The compiler is gcc/g++-14.2.0 running on the Linux v6.8.0-52-generic OS.
1106Switching between the default memory allocators @glibc@ and @llheap@ is done with @LD_PRELOAD@.
1107To prevent eliding certain code patterns, crucial parts of a test are wrapped by the function @pass@
1108\begin{cfa}
1109// prevent eliding, cheaper than volatile
1110static inline void * pass( void * v ) { __asm__ __volatile__( "" : "+r"(v) ); return v; }
1111...
1112pass( &remove_first( lst ) ); // wrap call to prevent elision, insert cannot be elided now
1113\end{cfa}
The call to @pass@ can prevent a small number of compiler optimizations, but this cost is the same for all lists.
1115
1116The main difference in the machines is their cache structure.
1117The AMD has smaller caches that are shared less, while the Intel shares larger caches among more processors.
1118This difference, while an interesting tradeoff for highly concurrent use, is rather one-sided for sequential use, such as this experiment.
1119Specifically, the Intel offers a single processor a bigger cache.
1120
1121
1122\subsubsection{Recap and Master Legend}
1123
1124For experiments performed in later sections, there are 12 use cases, which are all combinations of 2 movements, 2 polarities and 3 accessors.
1125There are 4 physical contexts, which are all combinations of 2 machines and 2 size (length) zones.
There are 3.25 frameworks: LQ-list supports only the movement--polarity combination ``stack, insert first,'' so it fills a quarter of the otherwise-orthogonal space.
1129Physical context, use case, and framework are the explanatory factors.
1130Each size zone summarizes samples of 4 specific sizes.
1131Taking all combinations of the explanatory factors and this sampling gives 4 $\times$ 12 $\times$ 3.25 $\times$ 4 = 624 individual configurations (ICs).
1132
1133% \[
1134% \textrm{624 individual configurations} =
1135% \sum_{\substack{
1136% \textrm{12 use cases}\\
1137% \textrm{4 physical contexts}\\
1138% \textrm{4 specific sizes}\\
1139% \textrm{3.25 frameworks}
1140% }}
1141% \textrm{1 individual configuration}
1142% \]
1143
1144\begin{figure}
1145\centering
1146 \setlength{\tabcolsep}{0pt}
1147 \includegraphics[trim={0in, 1.75in, 0in, 0in}, clip]{plot-list-legend-rel-i-swift.pdf}
1148 \begin{tabular}{p{2.75in}p{0.125in}p{3.625in}}
1149 \subfloat[ ]{\label{f:zoomin-rel-i-swift}
1150 \includegraphics{plot-list-zoomin-rel-i-swift.pdf}
1151 } & &
1152 \subfloat[ ]{\label{f:zoomin-histo-i-swift}
1153 \includegraphics{plot-list-histo-rel-i-swift.pdf}
1154 }
1155 \end{tabular}
1156 \caption[IR duration, transformed for general analysis]{
1157 IR duration, transformed for general analysis.
1158 The analysis follows the single example setup of \VRef[Figure]{f:zoomin-abs-i-swift}, \ie Use Case I on AMD; there, IR is given as absolute duration.
1159 Plot (a) transforms the source dataset by conditioning on specific size.
1160 Plot (b) takes the results from only the identified size zones, discards their specific-size information, and shows the resulting distribution.
1161 Lower is better.}
1162 \label{fig:plot-list-rel}
1163\end{figure}

\begin{comment}
32 of these individual configurations, plucked from \VRef[Figure]{f:zoomin-abs-i-swift}, are the subject of \VRef[Figure]{fig:plot-list-rel}, where they are now transformed into the format used for general analysis.
In \subref*{f:zoomin-rel-i-swift}, each of the 56 data points is an individual configuration; the subset within the two boxes has the 32 of interest.
In \subref*{f:zoomin-histo-i-swift}, the very leftmost histogram (Small, \CFA) summarizes the 4 Small--\CFA individual configurations.
Each remaining framework, at size zone Small, similarly summarizes 4 individual configurations.
This statement is the interpretation of the x-category label, ``Small (/4).''
Each line under the ``/4'' label has 4 individual configurations; so, this group has 16 individual configurations.
The interpretation of ``Medium (/4)'' is similar; it groups the remaining 16 configurations.
When both size zones are pooled together, ``Both (/8)'' results; this group revisits all 32 configurations.

This example's use of 32 individual configurations is in contrast with full-universe comparisons like the forthcoming \VRef[Figure]{fig:plot-list-1ord}, whose coverage further includes all use cases and both machines.

The transformation of \VRef[Figure]{f:zoomin-abs-i-swift} into \VRef[Figure]{f:zoomin-rel-i-swift} conditions on the configuration's specific size.
At a particular size, the average duration is computed across the three frameworks that work on all use cases: \CFA, \uCpp and LQ-@tailq@.
\VRef[Figure]{fig:plot-list-rel} shows a particular configuration's duration relative to this average.

The effect of conditioning on specific size erases the fact that \VRef[Figure]{fig:plot-list-zoomin-abs} shows, aside from the coarse hops, all frameworks getting smoothly slower as size increases.
This effect is particularly relevant \emph{within} a size zone, most noticeable as the data lines all going up across the Small box.
Now, with size conditioned, \VRef[Figure]{f:zoomin-rel-i-swift} has the trends inside a zone box being flat.
This flatness gives \subref*{f:zoomin-histo-i-swift} nicely separated histograms.

Specific size is the only factor conditioned in this view.
This choice was made to keep the relationship between \VRef[Figures]{f:zoomin-abs-i-swift} and \VRef[]{f:zoomin-rel-i-swift} perceptible.
By contrast, general comparisons like \VRef[Figure]{fig:plot-list-1ord} condition on more, generally, everything not presented.
Its physical-factor breakdown conditions on use case and framework, but not on physical factors; its other two breakdowns are defined similarly.

The noteworthy performance hop, in this example, is LQ-@list@, which \VRef[Figure]{f:zoomin-abs-i-swift} has as consistently slow in the Small range, and consistently fast in the Medium range.
Therefore, in \VRef{f:zoomin-histo-i-swift}, its two per-size-zone histograms are far apart, and its cross-size-zone histogram is bimodal.
Hops and distribution contributions, like this one, are common.
They are attention-grabbing curiosities when comparing nearly any pair of individual configurations.
They are the background noise of the all-in inter-configuration comparisons following.
With this one, \VRef[Figure]{fig:plot-list-1ord}'s LQ-@list@ distribution (farthest right) does have a perceptible bump at $1.8\times$, corresponding to the upper mode seen here.
But this UC-I--Intel contribution is only $1/6 = 8/48$ of the configurations summarized there.

Each individual configuration is tested by five trials, giving the error bars of \VRef[Figure]{f:zoomin-rel-i-swift} (at min and max).
This trial error is unaffected by the size conditioning.
Though this error is presented only for the narrow slice of the current examples, the amount of it seen here is typical across all the configurations.
With a few exceptions, it is modest; so, the experiment is repeatable.
The data points in \subref{f:zoomin-rel-i-swift} show mean excluding min and max.
This value, alone, is the contribution to the histogram in \subref{f:zoomin-histo-i-swift}.
That is, inter-configuration rollups discard the modest trial-repeatability error.
The girth of a histogram's distribution is entirely the inter-configuration variance of its configurations' expected performance.
\end{comment}

A condensed graphing style is used in subsequent plots to present this amount of data.
\VRef[Figure]{fig:plot-list-rel} shows how the condensed graphing style is generated from individual-configuration measures.
\VRef[Figure]{f:zoomin-rel-i-swift} is formed from the data in \VRef[Figure]{f:zoomin-abs-i-swift}, transformed on the Y-axis to show duration relative to the mean across all four frameworks, at each specific size.
\VRef[Figure]{f:zoomin-histo-i-swift} condenses the interesting data within the two boxes (Small/Medium) and their combination (Both).
This graph plots a vertical histogram for each of the 4 frameworks.
A data point on \VRef[Figure]{f:zoomin-abs-i-swift} is one-to-one with a point on \VRef[Figure]{f:zoomin-rel-i-swift}; each gives one IC.
Since repeatability of the experiment is established previously, trial-variance information (the error bar on an individual-configuration point) is discarded, and only the expected performance of an IC (the mean of its middle three out of five trials) is promoted into the histograms.
Each histogram bin (light-shaded area) counts the number of ICs whose expected performance falls in the bin's range.
A histogram's girth indicates the diversity of its qualifying configurations' performance expectations.
The overlaid tick symbol marks its group's mean performance.
An x label indicates the total number of ICs in a group, \eg ``/4'' $\Rightarrow$ 4 ICs per histogram (here, a size-zone/framework combination).
The totals are small here because attention is still restricted to the example slice of Use Case I on AMD.
But the graph format now scales to handle the full population of tested configurations, which is the subject of all subsequent plots.

\begin{comment}
The light-shaded histogram is the raw data (similar data values overlap), and the dark histogram is the geomean average when there are multiple experiments condensed in a column.

The vertical relationship among the averages gives a quick result, where lower is better.
The relative duration smooths the results, where smoothness diminishes as size increases.
This flatness gives nicely separated histograms.
Thus, in the forthcoming comparison plots:
\end{comment}

% \begin{itemize}[leftmargin=*]
% \item
% The measure is mean IR among the middle 3 trials out of 5, that occurred for an individual configuration.
% \item
% The number of individual configurations per histogram is stated as ``/N,'' at a relevant granularity.
% \item
% All reported averages are geometric means and all IR duration axes (verticals) are logarithmic.
% \item
% Unless indicated otherwise, all explanatory factors appearing on a plot are marginalized, while those not appearing on the plot are conditioned.
% \end{itemize}


\subsection{Result: Coarse Comparison of Styles}
\label{s:ResultCoarseComparisonStyles}

This comparison establishes how an intrusive list performs compared with a wrapped-reference list.
\VRef[Figure]{fig:plot-list-zoomout} presents throughput at various list lengths for a linear and random (shuffled) IR test.
Other kinds of scans were made, but the results are similar in many cases, so it is sufficient to discuss these two scans, representing different ends of the access spectrum.
In the graphs, all four intrusive lists (LQ-@list@, LQ-@tailq@, \uCpp, \CFA, see Framework in \VRef[Table]{f:ListPerfGlossary}) are plotted with the same symbol;
sometimes these symbols clump on top of each other, showing the performance difference among intrusive lists is small in comparison to the wrapped list (@std::list@).
See~\VRef{s:ComparingIntrusiveImplementations} for details among intrusive lists.

The list lengths start at 10 due to the short IR times: 2--4 ns for intrusive lists \vs 15--20 ns for STL's wrapped-reference list.
For very short lists, like 4, the experiment time of 4 $\times$ 2.5 ns and experiment overhead (loops) of 2--4 ns result in an artificial administrative bump at the start of the graph that has nothing to do with the IR times.
As the list size grows, the administrative overhead for intrusive lists quickly disappears.

\begin{figure}
 \centering
 \setlength{\tabcolsep}{0pt}
 \begin{tabular}{p{0.75in}p{2.75in}p{3in}}
 &
 \subfloat[Linear List Nodes, AMD]{\label{f:Linear-swift}
 \hspace*{-0.75in}
 \includegraphics{plot-list-zoomout-noshuf-swift.pdf}
 } % subfigure
 &
 \subfloat[Linear List Nodes, Intel]{\label{f:Linear-java}
 \includegraphics{plot-list-zoomout-noshuf-java.pdf}
 } % subfigure
 \\
 &
 \subfloat[Random List Nodes, AMD]{\label{f:Random-swift}
 \hspace*{-0.75in}
 \includegraphics{plot-list-zoomout-shuf-swift.pdf}
 } % subfigure
 &
 \subfloat[Random List Nodes, Intel]{\label{f:Random-java}
 \includegraphics{plot-list-zoomout-shuf-java.pdf}
 } % subfigure
 \end{tabular}
 \caption[IR duration \vs list length, all sizes]{IR duration \vs list length, all sizes.
 Lengths go as large as possible without error.
 One example use case is shown: stack movement, insert-first polarity and head-mediated access. Lower is better.}
 \label{fig:plot-list-zoomout}
\end{figure}

The key performance factor between intrusive and wrapped-reference lists is the dynamic allocation for the wrapped nodes.
Hence, this experiment is largely measuring the cost of @malloc@/\-@free@ rather than IR, and is sensitive to the layout of memory by the allocator.
For intrusive-list IR, the cost is manipulating the link fields, which is seen in the relatively similar results for the different intrusive lists.
For wrapped-reference IR, the costs are: dynamically allocating/deallocating a wrapped node, copying an external-node pointer into the wrapped node for insertion, and linking the wrapped node to/from the list;
the allocation dominates these costs.
For example, the experiment was run with both glibc and llheap memory allocators, where llheap reduced the cost from 20 to 16 ns, still far from the 2--4 ns for linking an intrusive node.
Hence, there is no way to tease apart the allocation, copying, and linking costs for wrapped lists, as there is no way to preallocate the list nodes without writing a mini-allocator to manage that storage.

In detail, \VRef[Figure]{f:Linear-swift}--\subref*{f:Linear-java} shows linear insertion of all the nodes and then linear removal, both in the same direction.
For intrusive lists, the nodes are adjacent in memory from being preallocated in an array.
For wrapped lists, the wrapped nodes happen to be adjacent because the memory allocator uses bump allocation during the initial phase of allocation.
As a result, these memory layouts result in high spatial and temporal locality for both kinds of lists during the linear array traversal.
With address look-ahead, the hardware does an excellent job of managing the multi-level cache.
Hence, performance is largely constant for both kinds of lists, until L3 cache and NUMA boundaries are crossed for longer lists and the costs increase consistently for both kinds of lists.
For example, on AMD (\VRef[Figure]{f:Linear-swift}), there is one NUMA node but many small L3 caches, so performance slows down quickly as multiple L3 caches come into play, and remains constant at that level, except for some anomalies for very large lists.
On Intel (\VRef[Figure]{f:Linear-java}), there are four NUMA nodes and four slowdown steps as list length increases.
At each step, the difference between the kinds of lists decreases as the NUMA effect increases.

In detail, \VRef[Figure]{f:Random-swift}--\subref*{f:Random-java} shows random insertion and removal of the nodes.
As for linear, there is the issue of memory allocation for the wrapped list.
As well, the consecutive storage-layout is the same (array and bump allocation).
Hence, the difference is the random linking among nodes, which produces random accesses even though the list is traversed linearly, and hence similar cache events for both kinds of lists.
Both \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} show the slowdown of random access as the list length grows, resulting from stepping out of caches into main memory and crossing NUMA nodes.
% Insert and remove operations act on both sides of a link.
%Both a next unlisted item to insert (found in the items' array, seen through the shuffling array), and a next listed item to remove (found by traversing list links), introduce a new user-item location.
As for linear, the Intel (\VRef[Figure]{f:Random-java}) graph shows steps from the four NUMA nodes.
Interestingly, after $10^6$ nodes, intrusive lists are slower than wrapped.
I did not have time to track down this anomaly, but I speculate it results from the difference in touching the data in the accessed node, as the data and links are together for intrusive and separated for wrapped.
For the llheap memory-allocator and the two tested architectures, intrusive lists outperform wrapped lists up to size $10^3$ for both linear and random, and performance begins to converge around $10^6$ nodes as architectural issues begin to dominate.
Clearly, the memory allocator and hardware architecture play a large role in the total cost and the crossover points as list size increases.
% In an odd scenario where this intuition is incorrect, and where furthermore the program's total use of the memory allocator is sufficiently limited to yield approximately adjacent allocations for successive list insertions, a non-intrusive list may be preferred for lists of approximately the cache's size.

The takeaway from this experiment is that wrapped-list operations are expensive because memory allocation is expensive at this fine-grained level of execution.
Hence, when possible, using intrusive links can produce a significant performance gain, even if nodes must be dynamically allocated, because the wrapping allocations are eliminated.
Even when space is a consideration, intrusive links may not use more storage if a node is often linked.
Unfortunately, many programmers are unaware of intrusive lists for dynamically-sized data-structures, or their tool-set does not provide them.

% Note, linear access may not be realistic unless dynamic size changes may occur;
% if the nodes are known to be adjacent, use an array.

% In a wrapped-reference list, list nodes are allocated separately from the items put into the list.
% Intrusive beats wrapped at the smaller lengths, and when shuffling is avoided, because intrusive avoids dynamic memory allocation for list nodes.

% STL's performance is not affected by element order in memory.
%The field of intrusive lists begins with length-1 operations costing around 10 ns and enjoys a ``sweet spot'' in lengths 10--100 of 5--7-ns operations.
% This much is also unaffected by element order.
% Beyond this point, shuffled-element list performance worsens drastically, losing to STL beyond about half a million elements, and never particularly leveling off.
% In the same range, an unshuffled list sees some degradation, but holds onto a 1--2 $\times$ speedup over STL.

% The apparent intrusive ``sweet spot,'' particularly its better-than-length-1 speed, is not because of list operations truly running faster.
% Rather, the worsening as length decreases reflects the per-operation share of harness overheads incurred at the outer-loop level.
% Disabling the harness's ability to drive interleaving, even though the current scenario is using a ``never work in middle'' interleave, made this rise disappear.
% Subsequent analyses use length-controlled relative performance when comparing intrusive implementations, making this curiosity disappear.

% The remaining big-swing comparison points say more about a computer's memory hierarchy than about linked lists.
% The tests in this chapter are only inserting and removing.
% They are not operating on any user payload data that is being listed.
% The drastic differences at large list lengths reflect differences in link-field storage density and in correlation of link-field order to element order.
% These differences are inherent to the two list models.

% A wrapped-reference list's separate nodes are allocated right beside each other in this experiment, because no other memory allocation action is happening.
% As a result, the interlinked nodes of the STL list are generally referencing their immediate neighbours.
% This pattern occurs regardless of user-item shuffling because this test's ``use'' of the user-items' array is limited to storing element addresses.
% This experiment, driving an STL list, is simply not touching the memory that holds the user data.
% Because the interlinked nodes, being the only touched memory, are generally adjacent, this case too has high memory locality and stays fast.

% But the comparison of unshuffled intrusive with wrapped-reference gives the performance of these two styles, with the common impediment of overfilling the cache removed.
% Intrusive consistently beats wrapped-reference by about 20 ns, at all sizes.
% This difference is appreciable below list length 0.5 M, and enormous below 10 K.


\subsection{Result: Intrusive Winners and Losers}
\label{s:ComparingIntrusiveImplementations}

The preceding result shows the intrusive frameworks have better performance than the wrapped lists for small- to medium-sized lists.
This analysis covers the experiment position taken in \VRef{s:AddRemovePerformance} for movement, polarity, and accessor.
\VRef[Figure]{f:ExperimentOperations} shows the experiment use cases tested, which results in 12 experiments (I--XII) for comparing intrusive frameworks.
To preclude hardware interference, only list sizes below 150 are examined to differentiate among the intrusive frameworks.
The data is selected from the start of \VRef[Figures]{f:Linear-swift}--\subref*{f:Linear-java}, but the start of \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} is largely the same.

\begin{figure}
 \centering
 \includegraphics{plot-list-1ord.pdf}
 \small{\textsuperscript{\textdagger} LQ-@list@ is (/48) by its incomplete (25\%) use case coverage. Its bars are scaled to match.}
 \caption[IR duration, decomposed by all first-order effects]{
 IR duration, decomposed by all first-order effects.
 Each of the three breakdowns divides the entire population of test results into its mutually disjoint constituents.
 Lower is better.
 }
 \label{fig:plot-list-1ord}
\end{figure}

\VRef[Figure]{fig:plot-list-1ord} gives the first-order effects.
The first breakdown, architecture/size-zone (left), shows the overall performance of all configurations, split by the two different hardware architectures and by small \vs medium lists (624 / 4 = 156 experiments per column).
% The relative experiment duration for each experiment is shown as a bar in each column and the black bar in that column shows the average of all 12 experiments.
By inspection of the averages, Intel runs faster than AMD.
Within an architecture, the small zone (lists of 4--16 elements) runs faster than the medium zone (lists of 50--200 elements).
The overall slower execution on the AMD results from its smaller cache \vs the larger cache on the Intel.
(No NUMA effects occur for these list sizes.)
Specifically, a 20\% standard deviation exists here between the means of the four physical-effect categories.
These hardware effects are accounted for when interpreting the following framework comparisons.
% The key takeaway for this comparison is the context it establishes for interpreting the following framework comparisons.
% Both the particulars of a machine's cache design, and a list length's effect on the program's cache friendliness, affect IR speed in the manner illustrated in this breakdown.
% That is, if you are running on an unknown machine, at a scale above anomaly-prone individuals, and below where major LLC caching effects take over the general intrusive-list advantage, but with an unknown relationship to the sizing of your fickle low-level caches, you are likely to experience an unpredictable speed impact on the order of 20\%.

The second breakdown, use case (middle), shows the overall performance for each of the 12 use cases from \VRef[Figure]{f:ExperimentOperations} (624 / 12 = 52 experiments per column).
% A similar situation comes from \VRef[Figure]{fig:plot-list-1ord}'s second comparison, by use case.
The standard deviation of the individual use cases' means is 10\%.
A more detailed analysis occurs in the discussion of \VRef[Figure]{fig:plot-list-2ord}.
% But they are so irrelevant to the issue of picking a winning framework that it is sufficient here to number the use cases opaquely.
% Whether a given list framework is suitable for a language's general library succeeds or fails without knowledge of whether your use will have stack or queue movement.
% So you face another lottery, with a likely win-loss range of the standard deviation of the individual use cases' means: 9\%.

The third breakdown, framework (right), shows the overall performance of the 4 list implementations (624 / 3.25 = 192 experiments per column).
Here, \CFA runs similarly to \uCpp, and LQ-@list@ runs similarly to LQ-@tailq@.
The standard deviation of the frameworks' means is 7\%.
% Framework choice has, therefore, less impact on your speed than the lottery tickets you already hold.
Now, \CFA/\uCpp run slower than LQ-@list@/@tailq@ by 15\%, a fact explored further in \VRef{s:SweetSoreSpots}.
But use case X also typically beats use case II by 38\%, and a small size on the Intel typically beats a medium size on the AMD by 66\%.
Hence, architecture and usage pattern have a more significant effect on speed than the selection of a framework.

\subsection{Result: Sweet and Sore Spots}
\label{s:SweetSoreSpots}

\begin{figure}
 \centering
 \includegraphics{plot-list-2ord.pdf}\\
 \small{
 \textsuperscript{\textdagger} LQ-@list@ is absent from Movement and Polarity comparisons because it does not support queue and insert-last, respectively.\\
 \textsuperscript{\textdaggerdbl} LQ-@list@ is (/24) by its incomplete (25\%) use case coverage. Its bars are scaled to match.\\
 \textsuperscript{*} The full population of 192 individual configurations applies (48 for LQ-@list@), but this analysis summarizes pairs of them, giving each histogram's 96 contributions (24 for LQ-@list@).
 }
 \caption[IR duration where framework selection interacts with other factors]{
 IR duration where framework selection interacts with other factors.
 Higher favours top option; lower favours bottom option.
 }
 \label{fig:plot-list-2ord}
\end{figure}

% \VRef[Figure]{fig:plot-list-1ord} is focused on only first-order effects in order to contextualize a winner/loser framework observation.
% But this perspective cannot address questions like, ``Where are \CFA's sore spots?''
% Moreover, the shallow treatment of use cases by ordinals said nothing about how stack usage compares with queues.

\VRef[Figure]{fig:plot-list-2ord} shows how frameworks react to a single other factor being varied across one pair of options.
Every (binned and mean-contributing) individual data point represents a pair of test setups: one with the criterion set to the option labelled at the top; the other setup uses the bottom option.
This point's y-axis score is the ratio of these setups' durations.
The point lands in a bin closer to the label of the option that performs better.

The first breakdown, size zone (left), refines the notion that a small size runs faster than a big size;
the issue here is by how much.
Indeed, all means favour small and few tails favour medium.
But the various frameworks do not respond to the different sizes and machines uniformly.
On the AMD, \CFA and \uCpp have a modest size sensitivity, LQ-@tailq@'s is moderate, and LQ-@list@ seems unaffected.
On the Intel, \CFA's increases to moderate, while \uCpp is now unaffected, and both LQs have a large effect.
Hence, the Intel is more sensitive to size than the AMD.

In the second breakdown, movement and polarity (middle), the responses are more subdued.
Note, LQ-@list@ has no representation in these comparisons because it only supports stacks that push and pop with the first element.
\CFA is completely stable under movement and polarity changes.
\uCpp and LQ show modest responses favouring queue movement and insert-last polarity.

In the third breakdown, accessor (right), the responses are close, except for \CFA.
Note the pair of two-way comparisons pulled from the three experiment setups used.
First, the all-head/insert-element comparison addresses which insertion style is better---by-head (top) or by-element (bottom).
Then, the all-head/remove-element comparison addresses which removal style is better---by-head (top) or by-element (bottom).
The LQs favour insertion by head and removal by element.
\CFA and \uCpp favour both operations by head.
The strongest effect is \CFA's aversion to removal by element---certainly an opportunity for improvement.


\begin{comment}
\subsection{\CFA Tiny-Size Anomaly}

The \CFA list occasionally showed a concerning slowdown at length 1.
The issue, seen in \VRef[Figure]{fig:plot-list-short} (top-left corner), has \CFA taking above 10 ns per IR.
It occurs only for the queue movement, only on the AMD machine, and only for the \CFA framework.

Length-1 performance is an important case.
Lists like those of waiting threads are frequently left empty, with the occasional thread (or few) momentarily joining.
These scenarios need to work.

A cause of the slowdown was never determined.
Speculation is that \CFA's increased data dependency, a result of the tagging scheme, pairs poorly with the aliasing implied by queue movement.
The aliasing, at length 1, is: the head's first element is the head's last element.
With stack movement, one of these names for the first element is reused for both insert and remove.
While with queue movement, both names are used in alternation.

The breakdowns earlier in the performance assessment vary length only.
That is, they see the story down the leftmost column in a triangle.
The insight for contextualizing this issue was to inspect both length and width.

The issue is seen as practically mitigated by noticing that the difficulty fades away as width increases.
This effect is seen both in \VRef[Figure]{fig:plot-list-short}'s easement across the top triangle rows, and, zoomed farther out, in \VRef[Figure]{fig:plot-list-wide}.

Increasing the width matters to the aliasing hypothesis.
In a narrow experiment, one element's insert and remove happen in rapid succession.
So, the two aliases are exercised closer together, making a data hazard (that lacks ideal hardware treatment) stretch the instruction-pipeline schedule more significantly.
Increasing the width adds harness-induced gaps between the uses of each alias, behind which a potential hazard can hide.

In the practical scenario that judges length-1 performance as relevant, width 1 is contrived.
A thread putting itself on an often-empty waiters' list is not doing so on one such list repeatedly, at least not without taking other situation-induced pauses.

Thus, the congestion at low width + length comes from the harness using repetition (in order to obtain a measurable time).
It does not reflect the situation that motivates the legitimate desire for good length-1 performance.

There likely is a real hazard, unique to the \CFA framework, when a queue movement is repeated on a tiny list \emph{without other intervening action}.
Doing so is believed to occur only in contrived situations.


\begin{figure}
\centering
 \includegraphics[trim={00in, 5.5in, 0in, 0in}, clip, scale=0.8]{plot-list-short-temp.pdf}
 \caption{Behaviour at very short lengths.}
 \label{fig:plot-list-short}
\end{figure}

\begin{figure}
\centering
 \includegraphics[trim={0.25in, 1in, 0.25in, 1in}, clip, scale=0.5]{plot-list-wide-temp.pdf}
 \caption{Length-1 anomaly resolving at modest width. Points are for varying widths, at fixed length 1.}
 \label{fig:plot-list-wide}
\end{figure}
\end{comment}

\begin{comment}
 These remarks are mostly about 3ord over 2ord.
This analysis does not provide more detail about one framework beating another; it offers different benefits.
These interactions further illustrate the lottery-ticket unpredictability that a linked-list user inevitably faces, by revealing stronger-still performance swings hidden from the first-order view.
They illustrate the difficult signal-to-noise ratio that I had to overcome in preparing this data.
They may serve as a reference guiding future \CFA linked-list work by informing on where to target improvements.
Finally, the findings offer the conclusion that \CFA's list offers more consistent performance across usage scenarios, than the other lists.
\end{comment}

\begin{comment}
Further to the interpretation guidance of \VRef[Figure]{fig:plot-list-2ord}'s caption, a comparison with the construction of \VRef[Figure]{fig:plot-list-1ord} may be helpful.
In the first-order graph, the factors being compared had many options: four, twelve and four.
# XXX Each option contributes its own, seemingly independent, mean and distribution.
# XXX But, in fact, they are not totally independent.
# WRONG: it's not just about binary, you also need a split on a conditioned factor
 I'm trying to get to:
Side by side in earlier style, but they're opposites, so they mirror each other, so you take option B, flip it over, and have option A---I'll just show A
\end{comment}
1532
1533\begin{comment}
1534\VRef[Figure]{fig:plot-list-zoomin} shows the sizes below 150 blown up.
1535% The same scenario as the coarse comparison is used: a stack, with insertions and removals happening at the end called ``first,'' ``head'' or ``front,'' and all changes occurring through a head-provided IR operation.
1536The error bars show fastest and slowest time seen on five trials, and the central point is the mean of the remaining three trials.
1537For readability, the points are slightly staggered at a given horizontal value, where the points might otherwise appear on top of each other.
1538The experiment runs twelve use cases;
1539the ones chosen for their variety are scenarios I and VIII from the listing of \VRef[Figure]{fig:plot-list-mchn-szz}, and their results appear in the rows.
1540As in the previous experiment, each hardware architecture appears in a column.
1541Note that LQ-list does not support queue operations, so this framework is absent from Operation VIII.
1542
1543At lengths 1 and 2, winner patterns seen at larger sizes generally do not apply.
Indeed, an issue with \CFA giving terrible performance on queues at length 1 is evident in \VRef[Figure]{f:AbsoluteTime-viii-swift}.
This phenomenon is elaborated in \MLB{TODO: xref}.
For the remainder of this section, these sizes are disregarded.

Even after excluding the very-small anomalies, the selections of operation and machine significantly affect how speed responds to size.
For example, Operation I on the AMD (\VRef[Figure]{f:AbsoluteTime-i-swift}) has \CFA generally winning over LQ, while the opposite is seen by switching either to Operation VIII (\VRef[Figure]{f:AbsoluteTime-viii-swift}) or to the Intel (\VRef[Figure]{f:AbsoluteTime-i-java}).
For another, Operation I has sore spots at middle lengths for \uCpp on AMD and LQ-list on Intel, which resolve at larger lengths; yet no such pattern appears with Operation VIII.

In spite of these complex interactions, a couple of stable spots can be analyzed.
In these examples, the two defined Size Zones (4--16 being ``small'' and above 50 being ``medium''), covering four specific sizes apiece, each tend to show a simple winner/loser story.
Manual inspection of other such plots (not detailed) showed that this quality is generally upheld.
So these zones are used as the basis for comparison.

\MLB{Peter, caution beyond here. I am reconsidering if this first-dismiss-physical approach to comparison is simplest.}

\begin{figure}
 \centering
 \includegraphics{plot-list-mchn-szz.pdf}
 \caption{Histogram of operation durations, decomposed by physical factors.
 Measurements are included from only the sizes in the ``small'' and ``medium'' stable zones.
 This breakdown divides the entire population of test results into four mutually disjoint constituents.
 \MLB{I see that I broke it. But we might be getting rid of it.}
 }
 \label{fig:plot-list-mchn-szz}
\end{figure}

\VRef[Figure]{fig:plot-list-mchn-szz} shows the effects of the physical factors of size zone and machine.
Each of these four histograms shows variation in duration coming from the four specific sizes in a size zone, from combining results of all twelve operations and all four frameworks.
Among the means of the four histograms, there is a standard deviation of 0.9 ns, which is 20\% of the global mean.
This variability is due solely to physical factors.

From the perspective of assessing winning/losing frameworks, these physical effects are noise.
So, subsequent analysis conditions on the physical effects.
That is, it supposes you are put into an unknown physical situation (one of the four being tested), then presents all the ways your outcome could change as a result of non-physical factors, assuming the physical situation is kept constant.
It does so by presenting results relative to the mean of the physical quadrant (\VRef[Figure]{fig:plot-list-mchn-szz} histogram) to which it belongs.
With this adjustment, absolute duration values (in nanoseconds) are lost.
In return, the physical quadrants are re-combined, enabling assessment of the non-physical factors.
\end{comment}

\begin{comment}
While preparing experiment results, I first tested on my old office PC, an AMD FX-8370E Eight-Core, before switching to the large new server for final testing.
For this experiment, the results flipped in my favour when running on the server.
New CPU architectures are now amazingly good at branch prediction and micro-parallelism in the pipelines.
Specifically, on the PC, my \CFA and companion \uCpp lists are slower than LQ-tailq and LQ-list by 10\% to 20\%.
On the server, the \CFA and \uCpp lists can be faster by up to 100\%.
Overall, LQ-tailq does the best at short lengths but loses out above a dozen elements.
\end{comment}

% \begin{figure}
% \centering
% \begin{tabular}{c}
% \includegraphics{plot-list-cmp-intrl-shift.pdf} \\
% (a) \\
% \includegraphics{plot-list-cmp-intrl-outcome.pdf} \\
% (b) \\
% \end{tabular}
% \caption{Caption TODO}
% \label{fig:plot-list-cmp-intrl}
% \end{figure}

\begin{comment}
\subsection{Result: \CFA cost attribution}

This comparison loosely itemizes the reasons that the \CFA implementation runs 15--20\% slower than LQ.
Each reason provides for safer programming.
For each reason, a version of the \CFA list was measured that forgoes its safety and regains some performance.
These potential sacrifices are:
\newcommand{\mandhead}{\emph{mand-head}}
\newcommand{\nolisted}{\emph{no-listed}}
\newcommand{\noiter}{\emph{no-iter}}
\begin{description}
\item[mand(atory)-head] Removing support for headless lists.
 A specific explanation of why headless support causes a slowdown is not offered.
 But it is reasonable for a cost to result from making one piece of code handle multiple cases; the subset of the \CFA list API that applies to headless lists shares its implementation with headed lists.
 In the \mandhead case, disabling the feature in \CFA means using an older version of the implementation, from before headless support was added.
 In the pre-headless library, trying to form a headless list (instructing, ``Insert loose element B after loose element A,'') is a checked runtime error.
 LQ does not support headless lists\footnote{
 Though its documentation does not mention the headless use case, this fact is due to one of its insert-before or insert-after routines being unusable in every list model.
 For \lstinline{tailq}, the API requires a head.
 For \lstinline{list}, this usage causes an ``uncaught'' runtime crash}.
\item[no-listed] Removing support for the @is_listed@ API query.
 Along with it goes error checking such as ``When inserting an element, it must not already be listed, \ie be referred to from somewhere else.''
 These abilities have a cost because, in order to support them, a listed element that is being removed must be written to, to record its change in state.
 In \CFA's representation, this cost is two pointer writes.
 To disable the feature, these writes, and the error checking that consumes their result, are put behind an @#ifdef@.
 The result is that a removed element sees itself as still having neighbours (though these quasi-neighbours see it differently).
 This state is how LQ leaves a removed element; LQ does not offer an is-listed query.
\item[no-iter(ation)] Removing support for well-terminating iteration.
 The \CFA list uses bit-manipulation tagging on link pointers (rather than \eg null links) to express, ``No more elements this way.''
 This tagging has the cost of submitting a retrieved value to the ALU, and awaiting this operation's completion, before dereferencing a link pointer.
 In some cases, the is-terminating bit is transferred from one link to another, or has a similar influence on a resulting link value; this logic adds register pressure and more data dependency.
 To disable the feature, the @#ifdef@-controlled tag manipulation logic compiles in answers like, ``No, that link is not a terminator,'' ``The dereferenceable pointer is the value you read from memory,'' and ``The terminator-marked value you need to write is the pointer you started with.''
 Without this termination marking, repeated requests for a next valid item will always provide a positive response; when it should be negative, the indicated next element is garbage data at an address unlikely to trigger a memory error.
 LQ has a well-terminating iteration for listed elements.
 In the \noiter case, the slowdown is not inherent; it represents a \CFA optimization opportunity.
\end{description}
\MLB{Ensure benefits are discussed earlier and cross-reference} % an LQ programmer must know not to ask, ``Who's next?'' about an unlisted element; an LQ programmer cannot write assertions about an item being listed; LQ requiring a head parameter is an opportunity for the user to provide inconsistent data

\begin{figure}
\centering
 \begin{tabular}{c}
 \includegraphics{plot-list-cfa-attrib-swift.pdf} \\
 (a) \\
 \includegraphics{plot-list-cfa-attrib-remelem-swift.pdf} \\
 (b) \\
 \end{tabular}
 \caption{Operation duration ranges for functionality-reduced \CFA list implementations. (a) has the top level slices. (b) has the next level of slicing within the slower element-based removal operation.}
 \label{fig:plot-list-cfa-attrib}
\end{figure}

\VRef[Figure]{fig:plot-list-cfa-attrib} shows the \CFA list performance with these features, and their combinations, turned on and off.
When a series name is one of the three sacrifices above, the series shows this sacrifice in isolation.
These further series names give combinations:
\newcommand{\attribFull}{\emph{full}}
\newcommand{\attribParity}{\emph{parity}}
\newcommand{\attribStrip}{\emph{strip}}
\begin{description}
 \item[full] No sacrifices. Same as measurements presented earlier.
 \item[parity] \mandhead + \nolisted. Feature parity with LQ.
 \item[strip] \mandhead + \nolisted + \noiter. All options set to ``faster.''
\end{description}
All list implementations are \CFA, possibly stripped.
The plot uses the same LQ-relative basis as earlier.
So getting to zero means matching LQ's @tailq@.

\VRef[Figure]{fig:plot-list-cfa-attrib}-(a) summarizes the time attribution across the main operating scenarios.
The \attribFull series is repeated from \VRef[Figure]{fig:plot-list-cmp-overall}, part (b), while the series showing feature sacrifices are new.
Going all the way to \attribStrip at least nearly matches LQ in all operating scenarios, beats LQ often, and slightly beats LQ overall.
Except within the accessor splits, both sacrifices contribute improvements individually, \noiter helps more than \attribParity, and the total \attribStrip benefit depends on both contributions.
When the accessor is not element removal, the \attribParity shift appears to be counterproductive, leaving \noiter to deliver most of the benefit.
For element removals, \attribParity is the heavy hitter, with \noiter contributing modestly.

The counterproductive shift outside of element removals is likely due to optimization done in the \attribFull version after implementing headless support, \ie not present in the \mandhead version.
This work streamlined both head-based operations (head-based removal being half the work of the element-insertion test).
This improvement could be ported to a \mandhead-style implementation, which would bring down the \attribParity time in these cases.

More significantly, missing this optimization affects every \attribParity result because they all use head-based inserts or removes for at least half their operations.
It is likely a reason that \attribParity is not delivering as well overall as \noiter.
It even represents plausible further improvements in \attribStrip.

\VRef[Figure]{fig:plot-list-cfa-attrib}-(b) addresses element removal being the overall \CFA slow spot and element removal having a peculiar shape in the (a) analysis.
Here, the \attribParity sacrifice bundle is broken out into its two constituents.
The result is the same regardless of the operation.
All three individual sacrifices contribute noteworthy improvements (\nolisted slightly less).
The fullest improvement requires all of them.

The \noiter feature sacrifice is unpalatable.
But because it is not an inherent slowdown, there may be room to pursue a \noiter-level speed improvement without the \noiter feature sacrifice.
The performance crux for \noiter is the pointer-bit tagging scheme.
Alternative designs that may offer speedup with acceptable consequences include keeping the tag information in a separate field, and (for 64-bit architectures) keeping it in the high-order byte, \ie using byte- rather than bit-oriented instructions to access it.
The \noiter speed improvement would bring \CFA to within 5\% of LQ overall, and from the high twenties to the high teens in the worst case of element removal.

Ultimately, this analysis provides options for a future effort that needs to get the most speed out of the \CFA list.
\end{comment}


\section{Future Work}
\label{toc:lst:futwork}

Not discussed in this chapter is a \CFA type-system issue with Plan-9 @inline@ declarations, which arises when implementing the trait @embedded@ to access the contained @dlist@ link fields.
This trait defines an implicit conversion from derived to base (the safe direction).
Such a conversion exists implicitly for concrete types using the Plan-9 inheritance feature.
However, this implicit conversion is only partially implemented for polymorphic types, such as @dlist@, which prevents the straightforward list interface shown throughout the chapter.

My workaround is the macro @P9_EMBEDDED@ placed before an intrusive node is used to declare a list.
\begin{cfa}
struct node {
 int v;
 inline dlink(node);
};
@P9_EMBEDDED( node, dlink(node) );@
dlist( node ) dlist;
\end{cfa}
The macro creates specialized access functions to explicitly extract the required information for the polymorphic @dlist@ type.
These access functions could have been generated implicitly for each intrusive node by adding another compiler pass.
However, that would have been substantial temporary work, when the correct solution is the compiler fix.
Hence, the macro workaround is only a small temporary inconvenience;
otherwise, all of the list API shown in this chapter works.


\begin{comment}
An API author has decided that the intended user experience is:
\begin{itemize}
\item
offer the user an opaque type that abstracts the API's internal state
\item
tell the user to extend this type
\item
provide functions with a ``user's type in, user's type out'' style
\end{itemize}
Fig XXX shows several attempts to provide this experience.
The types in (a) give the setup, achieving the first two points, while the pair of function declarations and calls of (b) are unsuccessful attempts at achieving the third point.
Both functions @f1@ and @f2@ allow calls of the form @f( d )@, but @f1@ has the wrong return type (@Base@) for initializing a @Derived@.
The @f1@ signature fails to meet ``user's type out;'' this signature does not give the \emph{user} a usable type.
On the other hand, the signature @f2@ offers the desired user experience, so the API author proceeds with trying to implement it.

\begin{figure}
\begin{cfa}
#ifdef SHOW_ERRS
#define E(...) __VA_ARGS__
#else
#define E(...)
#endif

// (a)
struct Base { /*...*/ }; // system offers
struct Derived { /*...*/ inline Base; }; // user writes

// (b)
// system offers
Base & f1( Base & );
forall( T ) T & f2( T & );
// user writes
void to_elide1() {
 Derived & d /* = ... */;
 f1( d );
 E( Derived & d1 = f1( d ); ) // error: return is not Derived
 Derived & d2 = f2( d );

 // (d), user could write
 Base & b = d;
}

// (c), system has
static void helper( Base & );
forall( T ) T & f2( T & param ) {
 E( helper( param ); ) // error: param is not Base
 return param;
}
static void helper( Base & ) {}

#include <list2.hfa>
// (e)
// system offers, has
forall( T | embedded(T, Base, Base) )
 T & f3( T & param ) {
 helper( param`inner ); // ok
 return param;
}
// user writes
struct DerivedRedux { /*...*/ inline Base; };
P9_EMBEDDED( DerivedRedux, Base )
void to_elide2() {
 DerivedRedux & dr /* = ... */;
 DerivedRedux & dr3 = f3( dr );

 // (f)
 // user writes
 Derived & xxx /* = ... */;
 E( Derived & yyy = f3( xxx ); ) // error: xxx is not embedded
}
\end{cfa}
\end{figure}


The \CFA list examples elide the @P9_EMBEDDED@ annotations that (TODO: xref P9E future work) proposes to obviate.
Thus, these examples illustrate a to-be state, free of what is to be historic clutter.
The elided portions are immaterial to the discussion and the examples work with the annotations provided.
The \CFA test suite (TODO:cite?) includes equivalent demonstrations, with the annotations included.
\begin{cfa}
struct mary {
 float anotherdatum;
 inline dlink(mary);
};
struct fred {
 float adatum;
 inline struct mine { inline dlink(fred); };
 inline struct yours { inline dlink(fred); };
};
\end{cfa}
like in the thesis examples. You have to say
\begin{cfa}
struct mary {
 float anotherdatum;
 inline dlink(mary);
};
P9_EMBEDDED(mary, dlink(mary))
struct fred {
 float adatum;
 inline struct mine { inline dlink(fred); };
 inline struct yours { inline dlink(fred); };
};
P9_EMBEDDED(fred, fred.mine)
P9_EMBEDDED(fred, fred.yours)
P9_EMBEDDED(fred.mine, dlink(fred))
P9_EMBEDDED(fred.yours, dlink(fred))
\end{cfa}


The function definition in (c) gives this system-implementation attempt.
The system needs to operate on its own data, stored in the @Base@ part of the user's @d@, now called @param@.
Calling @helper@ represents this attempt to look inside.
It fails, because the @f2@ signature does not state that @param@ has any relationship to @Base@;
this signature does not give the \emph{system} a usable type.
The fact that the user's argument can be converted to @Base@ is lost when going through this signature.

Moving forward needs an @f@ signature that conveys the relationship that the argument @d@ has to @Base@.
\CFA conveys type abilities, from caller to callee, by way of traits; so the challenge is to state the right trait member.
As initialization (d) illustrates, the @d@--@Base@ capability is an implicit conversion.
Unfortunately, in its present state, \CFA does not have a first-class representation of an implicit conversion, the way operator @?{}@ (which is a value), done with the right overload, is arguably an explicit conversion.
So \CFA cannot directly convey ``@T@ compatible with @Base@'' in a trait.

This work contributes a stand-in for such an ability, tunneled through the present-state trait system, shown in (e).
On the declaration/system side, the trait @embedded@, and its member, @`inner@, convey the ability to recover the @Base@, and thereby call @helper@.
On the user side, the @P9_EMBEDDED@ macro accompanies type definitions that work with @f3@-style declarations.
A present-state option is to have the compiler emit @P9_EMBEDDED@ annotations automatically upon each occurrence of an @inline@ member.
The choice not to is based on conversions in \CFA being a moving target because of separate, ongoing work.

An intended finished state for this area is achieved if \CFA's future efforts with conversions include:
\begin{itemize}
\item
treat conversion as operator(s), \ie values
\item
re-frame the compiler's current Plan-9 ``magic'' as seeking an implicit conversion operator, rather than seeking an @inline@ member
\item
make Plan-9 syntax cause an implementation of implicit conversion to exist (much as @struct@ syntax causes @forall(T)@ compliance to exist)
\end{itemize}
To this end, the contributed @P9_EMBEDDED@ expansion shows how to implement this conversion.


like in tests/list/dlist-insert-remove.
Future work should autogen those @P9_EMBEDDED@ declarations whenever it sees a Plan-9 declaration.
The exact scheme chosen should harmonize with general user-defined conversions.

Today's Plan-9 scheme is: @mary@ gets a function @`inner@ returning this as @dlink(mary)@.
@fred@ gets four of them in a diamond.
They are defined so that @`inner@ is transitive; \ie @fred@ has two further ambiguous overloads mapping @fred@ to @dlink(fred)@.
The scheme allows the @dlist@ functions to give the assertion, ``we work on any @T@ that gives a @`inner@ to @dlink(T)@.''


TODO: deal with: A doubly linked list is being designed.

TODO: deal with: Link fields are system-managed.
Links in GDB.
\end{comment}