\chapter{Linked List}

This chapter presents my work on designing and building a linked-list library for \CFA.
Due to time limitations and the needs expressed by the \CFA runtime developers, I focussed on providing a doubly-linked list and its bidirectional iterators for traversal.
Simpler data structures, like stacks and queues, can be built from the doubly-linked mechanism with only a slight storage/performance cost because of the unused link field.
Reducing to data structures with a single link follows directly from the more complex double links and their iterators.


\section{Plan-9 Inheritance}
\label{s:Plan9Inheritance}

This chapter uses a form of inheritance from the Plan-9 C dialect~\cite[\S~3.3]{Thompson90new}, which is supported by @gcc@ and @clang@ using @-fplan9-extensions@.
\CFA has its own variation of the Plan-9 mechanism, where the nested type is denoted using @inline@.
\begin{cfa}
union U { int x; double y; char z; };
struct W { int i; double j; char k; };
struct S {
	@inline@ struct W;			$\C{// extended Plan-9 inheritance}$
	unsigned int tag;
	@inline@ U;					$\C{// extended Plan-9 inheritance}$
} s;
\end{cfa}
Inline inheritance is containment, where the inlined field is unnamed and the type's internal fields are hoisted into the containing structure.
Hence, the field names must be unique, unlike \CC nested types, but any inlined type-names are at a nested scope level, unlike aggregate nesting in C.
Note, the position of the containment is normally unimportant, unless there is some form of memory or @union@ overlay.
Finally, the inheritance declaration of @U@ is not prefixed with @union@.
Like \CC, \CFA allows optional prefixing of type names with their kind, \eg @struct@, @union@, and @enum@, unless there is ambiguity with variable names in the same scope.
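For example, given the declaration above, the hoisted fields are accessed directly from an @S@ object (a small illustrative sketch):
\begin{cfa}
s.i = 3;  s.j = 3.5;			$\C{// W's fields, hoisted into S}$
s.tag = 1;
s.x = 42;						$\C{// U's fields, hoisted into S (x, y, z overlay)}$
\end{cfa}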

\VRef[Figure]{f:Plan9Polymorphism} shows the key polymorphic feature of Plan-9 inheritance: implicit conversion of values and pointers for nested types.
In the example, there are implicit conversions from @S@ to @U@ and @S@ to @W@, extracting the appropriate value or pointer for the substructure.
\VRef[Figure]{f:DiamondInheritancePattern} shows complex multiple-inheritance patterns are possible, like the \newterm{diamond pattern}~\cite[\S~6.1]{Stroustrup89}\cite[\S~4]{Cargill91}.
Currently, the \CFA type-system does not support @virtual@ inheritance.

\begin{figure}
\begin{cfa}
void f( U, U * );				$\C{// value, pointer}$
void g( W, W * );				$\C{// value, pointer}$
U u, * up;  W w, * wp;  S s, * sp;
u = s;  up = sp;				$\C{// value, pointer}$
w = s;  wp = sp;				$\C{// value, pointer}$
f( s, &s );						$\C{// value, pointer}$
g( s, &s );						$\C{// value, pointer}$
\end{cfa}
\caption{Plan-9 Polymorphism}
\label{f:Plan9Polymorphism}
\end{figure}

\begin{figure}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
struct B { int b; };
struct L : @public B@ { int l, w; };
struct R : @public B@ { int r, w; };
struct T : @public L, R@ { int t; };
          T
     B L     B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate
\end{c++}
&
\begin{cfa}
struct B { int b; };
struct L { @inline B;@ int l, w; };
struct R { @inline B;@ int r, w; };
struct T { @inline L; inline R;@ int t; };
          T
     B L     B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate, proposed solution t.L.b = 42;
\end{cfa}
\end{tabular}
\caption{Diamond Non-Virtual Inheritance Pattern}
\label{f:DiamondInheritancePattern}
\end{figure}


\section{Features}

The following features directed this project, where the goal is high-performance list operations required by \CFA runtime components, like the threading library.


\subsection{Core Design Issues}

The doubly-linked list attaches links intrusively in a node, allows a node to appear on multiple lists (axes) simultaneously, integrates with user code via the type system, treats its ends uniformly, and identifies a list using an explicit head.
This design covers the system and data-management issues stated in \VRef{toc:lst:issue}.

\VRef[Figure]{fig:lst-features-intro} continues the running @req@ example from \VRef[Figure]{fig:lst-issues-attach} using the \CFA list @dlist@.
The \CFA link attachment is intrusive, so the resulting memory layout is per user node, as for the LQ version of \VRef[Figure]{f:Intrusive}.
The \CFA framework provides the generic type @dlink( T )@ (not to be confused with @dlist@) for the two link fields (front and back).
A user inserts the links into the @req@ structure via \CFA inline-inheritance \see{\VRef{s:Plan9Inheritance}}.
Lists leverage the automatic conversion of a pointer to an anonymous inline field for assignments and function calls.
Therefore, a reference to a @req@ is implicitly convertible to @dlink@ in both contexts.
The links in @dlist@ point at another embedded @dlink@ node, the library knows the offsets of all links (data is abstract but accessible), and any field-offset arithmetic or link-value changes are safe and abstract.

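The following minimal sketch illustrates the declaration and basic use of an intrusive node (the field names and values are hypothetical stand-ins; the running example's actual code is in \VRef[Figure]{fig:lst-features-intro}):
\begin{cfa}
#include <list.hfa>
struct req {
	int pri, rqr;						$\C{// user data (hypothetical fields)}$
	inline dlink(req);					$\C{// intrusive link fields via inline inheritance}$
};
int main() {
	dlist(req) reqs;					$\C{// explicit list head}$
	req r1 = { 1, 42 }, r2 = { 2, 42 };
	insert_last( reqs, r1 );			$\C{// r1 implicitly converts to its dlink}$
	insert_last( reqs, r2 );
	remove( r1 );						$\C{// unlink from any position, no iterator needed}$
}
\end{cfa}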
103\begin{figure}
104 \lstinput{20-30}{lst-features-intro.run.cfa}
105 \caption[Multiple link axes in \CFA list library]{
106 Demonstration of the running \lstinline{req} example, done using the \CFA list library.
107 This example is equivalent to the three approaches in \VRef[Figure]{fig:lst-issues-attach}.
108 }
109 \label{fig:lst-features-intro}
110\end{figure}
111
\VRef[Figure]{f:dlistOutline} shows the outline for types @dlink@ and @dlist@.
Note, the first @forall@ clause is distributed across all the declarations in its containing block, eliminating repetition on each declaration.
The second nested @forall@ on @dlist@ is also distributed and adds an optional second type parameter, @tLinks@, denoting the linking axis \see{\VRef{s:Axis}}, \ie the kind of list this node can appear on.

\begin{figure}
\begin{cfa}
forall( tE & ) {						$\C{// distributed}$
	struct @dlink@ { ... };				$\C{// abstract type}$
	static inline void ?{}( dlink( tE ) & this );	$\C{// constructor}$

	forall( tLinks & = dlink( tE ) ) {	$\C{// distributed, default type for axis}$
		struct @dlist@ { ... };			$\C{// abstract type}$
		static inline void ?{}( dlist( tE, tLinks ) & this );	$\C{// constructor}$
	}
}
\end{cfa}
\caption{\lstinline{dlink} / \lstinline{dlist} Outline}
\label{f:dlistOutline}
\end{figure}

\VRef[Figure]{fig:lst-features-multidir} shows how the \CFA library supports multiple inline links, so a node has multiple axes.
The declaration of @req@ has two inline-inheriting @dlink@ occurrences.
The first of these gives a type named @req.by_pri@, @req@ inherits from it, and it inherits from @dlink@.
The second, @req.by_rqr@, is similar to @req.by_pri@.
Thus, there is a diamond, non-virtual, inheritance from @req@ to @dlink@, with @by_pri@ and @by_rqr@ being the mid-level types \see{\VRef[Figure]{f:DiamondInheritancePattern}}.

The declarations of the list-head objects, @reqs_pri@, @reqs_rqr_42@, @reqs_rqr_17@, and @reqs_rqr_99@, bind which link nodes in @req@ are used by each list.
Hence, the type of the variable @reqs_pri@, @dlist(req, req.by_pri)@, means operations on @reqs_pri@ implicitly select (disambiguate) the correct @dlink@s, \eg calls like @insert_first(reqs_pri, ...)@ imply, ``here, we are working by priority.''
As in \VRef[Figure]{fig:lst-issues-multi-static}, three lists are constructed: a priority list containing all nodes, a list with only nodes containing the value 42, and a list with only nodes containing the value 17.

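A hypothetical usage sketch follows (the actual declaration of @req@ with its two link axes appears in \VRef[Figure]{fig:lst-features-multidir}):
\begin{cfa}
// req declared with two inline dlink axes, by_pri and by_rqr (declaration elided)
dlist( req, req.by_pri ) reqs_pri;		$\C{// all requests, ordered by priority}$
dlist( req, req.by_rqr ) reqs_rqr_42;	$\C{// requests of requester 42}$
req r = { ... };
insert_last( reqs_pri, r );				$\C{// selects r's priority links}$
insert_last( reqs_rqr_42, r );			$\C{// selects r's requester links; r is now on two lists}$
\end{cfa}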
142\begin{figure}
143\centering
144
145\begin{tabular}{@{}ll@{}}
146\multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{C} \\
147 \begin{tabular}{@{}l@{}}
148 \lstinput{20-31}{lst-features-multidir.run.cfa} \\
149 \lstinput{43-71}{lst-features-multidir.run.cfa}
150 \end{tabular}
151&
152 \lstinput[language=C++]{20-60}{lst-issues-multi-static.run.c}
153\end{tabular}
154
155\caption{
156 Demonstration of multiple static link axes done in the \CFA list library.
157 The right example is from \VRef[Figure]{fig:lst-issues-multi-static}.
158 The left \CFA example does the same job.
159}
160\label{fig:lst-features-multidir}
161\end{figure}
162
163The list library also supports the common case of single directionality more naturally than LQ.
164Returning to \VRef[Figure]{fig:lst-features-intro}, the single-axis list has no contrived name for the link axis as it uses the default type in the definition of @dlist@;
165in contrast, the LQ list in \VRef[Figure]{f:Intrusive} adds the unnecessary field name @d@.
166In \CFA, a single axis list sets up a single inheritance with @dlink@, and the default list axis is to itself.
167
168When operating on a list with several axes and operations that do not take the list head, the list axis can be ambiguous.
For example, a call like @insert_after( r1, r2 )@ does not have enough information to know which axis to select implicitly.
170Is @r2@ supposed to be the next-priority request after @r1@, or is @r2@ supposed to join the same-requester list of @r1@?
171As such, the \CFA type-system gives an ambiguity error for this call.
172There are multiple ways to resolve the ambiguity.
173The simplest is an explicit cast on each call to select the specific axis, \eg @insert_after( (by_pri)r1, r2 )@.
174However, multiple explicit casts are tedious and error-prone.
175To mitigate this issue, the list library provides a hook for applying the \CFA language's scoping and priority rules.
176\begin{cfa}
177with ( DLINK_VIA( req, req.pri ) ) insert_after( r1, r2 );
178\end{cfa}
179Here, the @with@ statement opens the scope of the object type for the expression;
180hence, the @DLINK_VIA@ result causes one of the list axes to become a more attractive candidate to \CFA's overload resolution.
181This boost can be applied across multiple statements in a block or an entire function body.
182\begin{cquote}
183\setlength{\tabcolsep}{15pt}
184\begin{tabular}{@{}ll@{}}
185\begin{cfa}
186@with( DLINK_VIA( req, req.pri ) ) {@
187 ... insert_after( r1, r2 ); ...
188@}@
189\end{cfa}
190&
191\begin{cfa}
192void f() @with( DLINK_VIA( req, req.pri ) ) {@
193 ... insert_after( r1, r2 ); ...
194@}@
195\end{cfa}
196\end{tabular}
197\end{cquote}
198Within the @with@, the code acts as if there is only one list axis, without explicit casting.
199
Unlike the \CC template container-types, the \CFA library works completely within the type system;
both @dlink@ and @dlist@ are ordinary types, not language macros.
There is no textual expansion other than header-included static-inline functions for performance.
Hence, errors in user code are reported with mention of only the library's declarations, rather than the long template names that appear in \CC error messages.
Finally, the library is separately compiled from the usage code, modulo inlining.
205
206
\section{List API}

\VRef[Figure]{f:ListAPI} shows the API for the doubly-linked list operations, each of which is explained below.
210\begin{itemize}[leftmargin=*]
211\item
212@listed@ returns true if the node is an element of a list and false otherwise.
213\item
214@empty@ returns true if the list has no nodes and false otherwise.
215\item
216@first@ returns a reference to the first node of the list without removing it or @0p@ if the list is empty.
217\item
218@last@ returns a reference to the last node of the list without removing it or @0p@ if the list is empty.
219\item
220@insert_before@ adds a node before a specified node \see{\lstinline{insert_last} for insertion at the end}\footnote{
221Some list packages allow \lstinline{0p} (\lstinline{nullptr}) for the before/after node implying insert/remove at the start/end of the list, respectively.
222However, this inserts an \lstinline{if} statement in the fastpath of a potentially commonly used list operation.}
223\item
224@insert_after@ adds a node after a specified node \see{\lstinline{insert_first} for insertion at the start}\footnotemark[\value{footnote}].
225\item
226@remove@ removes a specified node from the list (any location) and returns a reference to the node.
227\item
@iter@ creates an iterator for the list.
\item
@recede@ moves the iterator cursor to the previous (predecessor, towards first) node and returns true, or returns false when there is no previous node.
\item
@advance@ moves the iterator cursor to the next (successor, towards last) node and returns true, or returns false when there is no next node.
233\item
234@first@ returns true if the node is the first node in the list and false otherwise.
235\item
236@last@ returns true if the node is the last node in the list and false otherwise.
237\item
@prev@ returns a reference to the previous (predecessor, towards first) node before a specified node or @0p@ if the specified node is the first node in the list.
239\item
240@next@ returns a reference to the next (successor, towards last) node after a specified node or @0p@ if a specified node is the last node in the list.
241\item
242@insert_first@ adds a node to the start of the list so it becomes the first node and returns a reference to the node for cascading.
243\item
@insert_last@ adds a node to the end of the list so it becomes the last node and returns a reference to the node for cascading.
245\item
246@remove_first@ removes the first node and returns a reference to it or @0p@ if the list is empty.
247\item
248@remove_last@ removes the last node and returns a reference to it or @0p@ if the list is empty.
249\item
250@transfer@ transfers all nodes from the @from@ list to the end of the @to@ list; the @from@ list is empty after the transfer.
251\item
252@split@ transfers the @from@ list up to node to the end of the @to@ list; the @from@ list becomes the list after the node.
253The node must be in the @from@ list.
254\end{itemize}
For operations @insert_*@, @insert@, and @remove@, a variable-sized list of nodes can be specified using \CFA's tuple type~\cite[\S~4.7]{Moss18} (not discussed), \eg @insert( list, n1, n2, n3 )@, which recursively invokes @insert( list, n@$_i$@ )@.\footnote{
Currently, a resolver bug between tuple types and references means tuple routines must use pointer parameters.
Nevertheless, the imaginary reference versions are used here as the code is cleaner, \eg no \lstinline{&} on call arguments.}

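As an illustration, the following hypothetical sketch composes the bulk operations @split@ and @transfer@ (the @node@ type is as declared in \VRef[Figure]{f:IteratorDriver}; the expected results in the comments follow the descriptions above, assuming @split@ moves the named node to the @to@ list):
\begin{cfa}
dlist(node) from, to;
node a = { 1 }, b = { 2 }, c = { 3 }, d = { 4 };
insert( from, a, b, c, d );			$\C{// from: a b c d}$
split( to, from, b );				$\C{// to: a b; from: c d}$
transfer( to, from );				$\C{// to: a b c d; from: empty}$
\end{cfa}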
\begin{figure}
\begin{cfa}
E & listed( E & node );
E & empty( dlist( E ) & list );
E & first( dlist( E ) & list );
E & last( dlist( E ) & list );
E & insert_before( E & before, E & node );
E & insert_after( E & after, E & node );
E & remove( E & node );
E & iter( dlist( E ) & list );
bool advance( E && refx );
bool recede( E && refx );
bool first( E & node );
bool last( E & node );
E & prev( E & node );
E & next( E & node );
E & insert_first( dlist( E ) & list, E & node );
E & insert_last( dlist( E ) & list, E & node ); // synonym insert
E & remove_first( dlist( E ) & list );
E & remove_last( dlist( E ) & list );
void transfer( dlist( E ) & to, dlist( E ) & from );
void split( dlist( E ) & to, dlist( E ) & from, E & node );
\end{cfa}
\caption{\CFA List API}
\label{f:ListAPI}
\end{figure}
285
286
287\subsection{Iteration}
288
289It is possible to iterate through a list manually or using a set of standard macros.
290\VRef[Figure]{f:IteratorDriver} shows the iterator outline, managing a list of nodes, used throughout the following iterator examples.
291Each example assumes its loop body prints the value in the current node.
292
\begin{figure}
\begin{cfa}
#include <fstream.hfa>
#include <list.hfa>
struct node {
	int v;
	inline dlink(node);
};
int main() {
	dlist(node) list;
	node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
	insert( list, n1, n2, n3, n4 );		$\C{// insert in any order}$
	sout | nlOff;
	@for ( ... )@ sout | it.v | ","; sout | nl;	$\C{// iterator examples in text}$
	remove( n1, n2, n3, n4 );			$\C{// remove in any order}$
}
\end{cfa}
\caption{Iterator Driver}
\label{f:IteratorDriver}
\end{figure}
313
314The manual method is low level but allows complete control of the iteration.
315The list cursor (index) can be either a pointer or a reference to a node in the list.
316The choice depends on how the programmer wants to access the fields: @it->f@ or @it.f@.
317The following examples use a reference because the loop body manipulates the node values rather than the list pointers.
318The end of iteration is denoted by the loop cursor returning @0p@.
319
320\noindent
321Iterating forward and reverse through the entire list.
322\begin{cquote}
323\setlength{\tabcolsep}{15pt}
324\begin{tabular}{@{}l|l@{}}
325\begin{cfa}
for ( node & it = @first@( list ); &it /* != 0p */ ; &it = &@next@( it ) ) ...
327for ( node & it = @last@( list ); &it; &it = &@prev@( it ) ) ...
328\end{cfa}
329&
330\begin{cfa}
3311, 2, 3, 4,
3324, 3, 2, 1,
333\end{cfa}
334\end{tabular}
335\end{cquote}
336Iterating forward and reverse from a starting node through the remaining list.
337\begin{cquote}
338\setlength{\tabcolsep}{15pt}
339\begin{tabular}{@{}l|l@{}}
340\begin{cfa}
341for ( node & it = @n2@; &it; &it = &@next@( it ) ) ...
342for ( node & it = @n3@; &it; &it = &@prev@( it ) ) ...
343\end{cfa}
344&
345\begin{cfa}
3462, 3, 4,
3473, 2, 1,
348\end{cfa}
349\end{tabular}
350\end{cquote}
351Iterating forward and reverse from a starting node to an ending node through the contained list.
352\begin{cquote}
353\setlength{\tabcolsep}{15pt}
354\begin{tabular}{@{}l|l@{}}
355\begin{cfa}
356for ( node & it = @n2@; &it @!= &n4@; &it = &@next@( it ) ) ...
357for ( node & it = @n4@; &it @!= &n2@; &it = &@prev@( it ) ) ...
358\end{cfa}
359&
360\begin{cfa}
3612, 3,
3624, 3,
363\end{cfa}
364\end{tabular}
365\end{cquote}
Iterating forward and reverse through the entire list using the shorthand that starts at the list head and picks an axis.
367In this case, @advance@ and @recede@ return a boolean, like \CC @while ( cin >> i )@.
368\begin{cquote}
369\setlength{\tabcolsep}{15pt}
370\begin{tabular}{@{}l|l@{}}
371\begin{cfa}
372for ( node & it = @iter@( list ); @advance@( it ); ) ...
373for ( node & it = @iter@( list ); @recede@( it ); ) ...
374\end{cfa}
375&
376\begin{cfa}
3771, 2, 3, 4,
3784, 3, 2, 1,
379\end{cfa}
380\end{tabular}
381\end{cquote}
In addition, there are convenience macros that look like @foreach@ in other programming languages.
383Iterating forward and reverse through the entire list.
384\begin{cquote}
385\setlength{\tabcolsep}{15pt}
386\begin{tabular}{@{}l|l@{}}
387\begin{cfa}
388FOREACH( list, it ) ...
389FOREACH_REV( list, it ) ...
390\end{cfa}
391&
392\begin{cfa}
3931, 2, 3, 4,
3944, 3, 2, 1,
395\end{cfa}
396\end{tabular}
397\end{cquote}
398Iterating forward and reverse through the entire list or until a predicate is triggered.
399\begin{cquote}
400\setlength{\tabcolsep}{15pt}
401\begin{tabular}{@{}l|l@{}}
402\begin{cfa}
403FOREACH_COND( list, it, @it.v == 3@ ) ...
404FOREACH_REV_COND( list, it, @it.v == 1@ ) ...
405\end{cfa}
406&
407\begin{cfa}
4081, 2,
4094, 3, 2,
410\end{cfa}
411\end{tabular}
412\end{cquote}
413Macros are not ideal, so future work is to provide a language-level @foreach@ statement, like \CC.
414Finally, a predicate can be added to any of the manual iteration loops.
415\begin{cquote}
416\setlength{\tabcolsep}{15pt}
417\begin{tabular}{@{}l|l@{}}
418\begin{cfa}
419for ( node & it = first( list ); &it @&& !(it.v == 3)@; &it = &next( it ) ) ...
420for ( node & it = last( list ); &it @&& !(it.v == 1)@; &it = &prev( it ) ) ...
421for ( node & it = iter( list ); advance( it ) @&& !(it.v == 3)@; ) ...
422for ( node & it = iter( list ); recede( it ) @&& !(it.v == 1)@; ) ...
423\end{cfa}
424&
425\begin{cfa}
4261, 2,
4274, 3, 2,
4281, 2,
4294, 3, 2,
430\end{cfa}
431\end{tabular}
432\end{cquote}
433
434\begin{comment}
435Many languages offer an iterator interface for collections, and a corresponding for-each loop syntax for consuming the items through implicit interface calls.
436\CFA does not yet have a general-purpose form of such a feature, though it has a form that addresses some use cases.
437This section shows why the incumbent \CFA pattern does not work for linked lists and gives the alternative now offered by the linked-list library.
438Chapter 5 [TODO: deal with optimism here] presents a design that satisfies both uses and accommodates even more complex collections.
439
440The current \CFA extensible loop syntax is:
441\begin{cfa}
442for( elem; end )
443for( elem; begin ~ end )
444for( elem; begin ~ end ~ step )
445\end{cfa}
446Many derived forms of @begin ~ end@ exist, but are used for defining numeric ranges, so they are excluded from the linked-list discussion.
447These three forms are rely on the iterative trait:
448\begin{cfa}
449forall( T ) trait Iterate {
450 void ?{}( T & t, zero_t );
451 int ?<?( T t1, T t2 );
452 int ?<=?( T t1, T t2 );
453 int ?>?( T t1, T t2 );
454 int ?>=?( T t1, T t2 );
455 T ?+=?( T & t1, T t2 );
456 T ?+=?( T & t, one_t );
457 T ?-=?( T & t1, T t2 );
458 T ?-=?( T & t, one_t );
459}
460\end{cfa}
461where @zero_t@ and @one_t@ are constructors for the constants 0 and 1.
462The simple loops above are abbreviates for:
463\begin{cfa}
464for( typeof(end) elem = @0@; elem @<@ end; elem @+=@ @1@ )
465for( typeof(begin) elem = begin; elem @<@ end; elem @+=@ @1@ )
466for( typeof(begin) elem = @0@; elem @<@ end; elem @+=@ @step@ )
467\end{cfa}
468which use a subset of the trait operations.
469The shortened loop works well for iterating a number of times or through an array.
470\begin{cfa}
471for ( 20 ) // 20 iterations
472for ( i: 1 ~= 21 ~ 2 ) // odd numbers
473for ( i; n ) total += a[i]; // subscripts
474\end{cfa}
475which is similar to other languages, like JavaScript.
476\begin{cfa}
477for ( i in a ) total += a[i];
478\end{cfa}
479Albeit with different mechanisms for expressing the array's length.
480It might be possible to take the \CC iterator:
481\begin{c++}
482for ( list<int>::iterator it=mylist.begin(); it != mylist.end(); ++it )
483\end{c++}
484and convert it to the \CFA form
485\begin{cfa}
486for ( it; begin() ~= end() )
487\end{cfa}
488by having a list operator @<=@ that just looks for equality, and @+=@ that moves to the next node, \etc.
489
490However, the list usage is contrived, because a list does use its data values for relational comparison, only links for equality comparison.
491Hence, the focus of a list iterator's stopping condition is fundamentally different.
492So, iteration of a linked list via the existing loop syntax is to ask whether this syntax can also do double-duty for iterating values.
493That is, to be an analog of JavaScript's @for..of@ syntax:
494\begin{cfa}
495for ( e of a ) total += e;
496\end{cfa}
497
498The \CFA team will likely implement an extension of this functionality that moves the @~@ syntax from being part of the loop, to being a first-class operator (with associated multi-pace operators for the elided derived forms).
499With this change, both @begin ~ end@ and @end@ (in context of the latter ``two-place for'' expression) parse as \emph{ranges}, and the loop syntax becomes, simply:
500\begin{cfa}
501 for( elem; rangeExpr )
502\end{cfa}
503The expansion and underlying API are under discussion.
504TODO: explain pivot from ``is at done?'' to ``has more?''
505Advantages of this change include being able to pass ranges to functions, for example, projecting a numerically regular subsequence of array entries, and being able to use the loop syntax to cover more collection types, such as looping over the keys of a hashtable.
506
507When iterating an empty list, the question, ``Is there a further element?'' needs to be posed once, receiving the answer, ``no.''
508When iterating an $n$-item list, the same question gets $n$ ``yes'' answers (one for each element), plus one ``no'' answer, once there are no more elements; the question is posed $n+1$ times.
509
510When iterating an empty list, the question, ``What is the value of the current element?'' is never posed, nor is the command, ``Move to the next element,'' issued. When iterating an $n$-item list, each happens $n$ times.
511
512So, asking about the existence of an element happens once more than retrieving an element's value and advancing the position.
513
514Many iteration APIs deal with this fact by splitting these steps across different functions, and relying on the user's knowledge of iterator state to know when to call each. In Java, the function @hasNext@ should be called $n+1$ times and @next@ should be called $n$ times (doing the double duty of advancing the iteration and returning a value). In \CC, the jobs are split among the three actions, @it != end@, @it++@ and @*it@, the latter two being called one time more than the first.
515
516TODO: deal with simultaneous axes: @DLINK_VIA@ just works
517
518TODO: deal with spontaneous simultaneity, like a single-axis req, put into an array: which ``axis'' is @&req++@ navigating: array-adjacency vs link dereference. It should sick according to how you got it in the first place: navigating dlist(req, req.pri) vs navigating array(req, 42). (prob. future work)
519\end{comment}
520
521
\section[C++ Lists]{\CC Lists}

It is worth addressing two API issues in \CC lists that are avoided in \CFA.
First, \CC lists require two steps to remove a node versus one in \CFA.
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node> li;
node n = li.front(); // assignment could raise exception
li.pop_front();
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n = remove_first( list );

\end{cfa}
\end{tabular}
\end{cquote}
The argument for two steps is exception safety: returning an unknown T by value/move might throw an exception from T's copy/move constructor.
Hence, to be \emph{exception safe}, all internal list operations must complete before the copy/move so the list is consistent should the return fail.
This coding style can result in contrived code, but is usually possible;
however, it requires the container designer to anticipate the potential throw.
(Note, this anticipation issue is pervasive in exception systems, not just with containers.)
The solution moves the coding complexity from the container designer to the programmer~\cite[ch~10, part 3]{Sutter99}.
First, obtain the node, which might fail, but the container is unmodified.
Second, remove the node, which modifies the container without the possibility of an exception.
This movement of responsibility increases the cognitive effort for programmers.
Unfortunately, this \emph{single-responsibility principle}, \ie preferring separate operations, is often repeated as a necessary requirement rather than an optional one.
Separate operations should always be available, but their composition should also be available.
Interestingly, this issue does not apply to intrusive lists, because the node data is never copied/moved in or out of a list;
only the link fields are accessed in list operations.
555
Second, \VRef[Figure]{f:CCvsCFAListIssues} shows an example where a \CC list operation is $O(N)$ rather than $O(1)$ as in \CFA.
This issue is inherent to wrapped (non-intrusive) lists.
Specifically, removing a node requires access to the links that materialize its membership.
In a wrapped list, there is no access from node to links, and for abstraction reasons, no direct pointers to wrapped nodes, so the links must be found indirectly by navigating the list.
The \CC iterator is the abstraction to navigate wrapped links.
So an iterator is needed, not because it offers go-next, but for managing the list membership.
Note, attempting to keep an array of iterators to each node requires high complexity to ensure the list and array stay consistent.
563
\begin{figure}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node *> lst;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
lst.push_back( &n1 ); lst.push_back( &n2 );
lst.push_back( &n3 ); lst.push_back( &n4 );
list<node *>::iterator it;
for ( it = lst.begin(); it != lst.end(); it ++ )
	if ( *it == &n3 ) { lst.erase( it ); break; }
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
insert( list, n1, n2, n3, n4 );



remove( list, n3 );
\end{cfa}
\end{tabular}
\caption{\CC \vs \CFA List Issues}
\label{f:CCvsCFAListIssues}
\end{figure}
590
591\begin{comment}
592Dave Dice:
593There are what I'd call the "dark days" of C++, where the language \& libraries seemed to be getting progressively uglier - requiring more tokens to express something simple, and lots of arcana.
594But over time, somehow, they seem to have mostly righted the ship and now I can write C++ code that's fairly terse, like python, by ignoring all the old constructs.
595(They carry around the legacy baggage, of course, but seemed to have found a way to evolve away from it).
596
597If you just want to traverse a std::list, then, using modern "for" loops, you never need to see an iterator.
598I try hard never to need to write X.begin() or X.end().
599(There are situations where I'll expose iterators for my own types, however, to enable modern "for" loops).
600If I'm implementing simple linked lists, I'll usually skip std:: collections and do it myself, as it's less grief.
601I just don't get that much advantage from std::list. And my code is certainly not any shorter.
602On the other hand, @std::map@, @unordered_map@, @set@, and friends, are terrific, and I can usually still avoid seeing any iterators, which are blight to the eye.
603So those are a win and I get to move up an abstraction level, and write terse but easily understood code that still performs well.
604(One slight concern here is that all the C++ collection/container code is templated and lives in include files, and not traditional libraries.
605So the only way to replace something - say with a better algorithm, or you have a bug in the collections code - is to boil the oceans and recompile everything.
606But on the other hand the compiler can specialize to the specific use case, which is often a nice performance win).
607
608And yeah, the method names are pretty terrible. I think they boxed themselves in early with a set of conventions that didn't age well, and they tried to stick with it and force regularity over the collection types.
609
610I've never seen anything written up about the history, although lots of the cppcon talks make note of the "bad old days" idea and how collection \& container library design evolved. The issue is certainly recognized.
611\end{comment}
612
613
\section{Implementation}

\VRef[Figure]{fig:lst-impl-links} continues the running @req@ example, showing the \CFA list library's internal representation.
The @dlink@ structure contains exactly two pointers: @next@ and @prev@, which are opaque.
Even though the user-facing list model is linear, the \CFA library implements all lists as circular.
This choice helps achieve uniform end treatment.
% and \PAB{TODO finish summarizing benefit}.
A link pointer targets a neighbouring @dlink@ structure, rather than a neighbouring @req@.
(Recall, the running example has the user putting a @dlink@ within a @req@.)

\begin{figure}
	\centering
	\includegraphics[width=\textwidth]{lst-impl-links.pdf}
	\caption{
	\CFA list library representations for headed and headless lists.
	}
	\label{fig:lst-impl-links}
\end{figure}

Circular link-pointers (dashed lines) are tagged internally in the pointer to indicate linear endpoints.
Links among neighbouring nodes are not tagged.
Iteration reports ``has more elements'' when accessing untagged links, and ``no more elements'' when accessing tagged links.
Hence, the tags are set on the links that a user cannot navigate.

The \CFA library works in headed and headless modes.
In a headed list, the list head (@dlist(req)@) acts as an extra element in the implementation-level circularly-linked list.
The content of a @dlist@ header is a (private) @dlink@, with the @next@ pointer to the first element, and the @prev@ pointer to the last element.
Since the head wraps a @dlink@, as does @req@, and since a link-pointer targets a @dlink@, the resulting cycle is among @dlink@ structures, situated inside a header or a node.
An untagged pointer points within a @req@, while a tagged pointer points within a list head.
In a headless list, the circular backing list is only among the @dlink@s within @req@s.

No distinction is made between an unlisted node (top left and middle) under a headed model and a singleton list under a headless model.
Both are represented as an item referring to itself, with both tags set.
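The tag encoding is not specified here; the following is a minimal sketch of one plausible scheme, assuming the tag occupies a link pointer's low-order bit (free because @dlink@ structures are word-aligned); the helper names are hypothetical.
\begin{cfa}
#include <stdint.h>
enum { END_TAG = 1 };							$\C{// hypothetical tag value}$
forall( tE & ) {
	static inline dlink(tE) * strip( dlink(tE) * p ) {		$\C{// clear the tag to follow the link}$
		return (dlink(tE) *)((uintptr_t)p & ~(uintptr_t)END_TAG);
	}
	static inline bool at_end( dlink(tE) * p ) {	$\C{// tagged link means no more elements}$
		return ((uintptr_t)p & END_TAG) != 0;
	}
}
\end{cfa}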
\section{Assessment}
\label{toc:lst:assess}

This section examines the performance of the discussed list implementations.
The goal is to show the \CFA list is competitive with other designs; however, the different list designs do not have equivalent functionality, so it is impossible to select a single winner encompassing both functionality and execution performance.

655\subsection{Experiment Design}
656
657\begin{figure}
658\noindent
659\begin{tabular}{p{1.75in}@{\ }p{4.5in}}
660Insert-Remove (IR)
	& The atomic unit of work being measured: one insertion plus one removal (plus all looping/tracking overheads) \\
662Use Case
663 & Pattern of add-remove calls. \\
664-- Movement & \\
665 \quad $\ni$ stack
666 & Inserts and removes happen at the same end. \\
667 \quad $\ni$ queue
668 & Inserts and removes happen at opposite ends. \\
669-- Polarity
670 & Which of the two orientations in which the movement happens. \\
671 \quad $\ni$ insert-first
672 & All inserts at front; stack removes at front; queue removes at back. \\
673 \quad $\ni$ insert-last
674 & All inserts at back; stack removes at back; queue removes at front. \\
675-- Accessor
676 & How an insertion position, or removal element, is specified. The same position/element is picked either way. \\
677 \quad $\ni$ all head
678 & inserts and removes both through the head \\
679 \quad $\ni$ insert element
680 & insert by element and remove through the head \\
681 \quad $\ni$ remove element
682 & insert through head and remove by element\\
683Physical Context & \\
684-- Size (number) & Number of nodes being linked. Unless specified, equals the \emph{length} of the program's sole list. \emph{Width}, rarely used, is the number of lists. \\
685-- Size Zone
686 & Contiguous range of sizes, chosen to avoid known anomalies and to sample a brief plateau. Each zone buckets four specific sizes. \\
687 \quad $\ni$ small
688 & lists of 4--16 elements \\
689 \quad $\ni$ medium
690 & lists of 50--200 elements \\
691 \quad $\ni$ (other)
692 & Not used for comparing intrusive frameworks. \\
-- Machine
694 & Computer running the experiment \\
695 \quad $\ni$ AMD
696 & smaller cache \\
697 \quad $\ni$ Intel
698 & bigger cache \\
699Framework & A particular linked-list implementation (within its host language) \\
700$\ni$ \CC & The @std::list@ type of g++. \\
701$\ni$ lq-list & The @list@ type of LQ from glibc of gcc. \\
702$\ni$ lq-tailq & The @tailq@ type of the same. \\
703$\ni$ \uCpp & \uCpp's @uSequence@ \\
704$\ni$ \CFA & \CFA's @dlist@ \\
705Explanation being
706 & How independent explanatory variable X is analyzed \\
707-- Marginalized
708 & Left alone, allowed to vary, yielding a more absolute measure. Shows the effect that X causes. If all explanations are marginalized, then absolute times are available and a relative time has a peer group that is the entire population. \\
709-- Conditioned
710 & Held constant, yielding a more relative measure. Hides the effect that X causes. Conditioning on X creates more, smaller relative-measure peer groups, by isolating each X-domain value. Resulting interpretation is, ``Assuming no change in X.'' \\
711\end{tabular}
712\caption{
713 Glossary of terms used in the list performance evaluation.
714}
715\label{f:ListPerfGlossary}
716\end{figure}
717
718
This section explains how the experiment is built.
Many of the following parts define terminology for the experiment's tuning knobs.
\VRef[Figure]{f:ListPerfGlossary} provides a consolidated reference.
722
723
724\subsubsection{Add-Remove Performance}
725\label{s:AddRemovePerformance}
726
727The fundamental job of a linked-list library is to manage the links that connect nodes.
728Any link management is an action that causes pair(s) of elements to become, or cease to be, adjacent in the list.
729Thus, adding and removing an element are the sole primitive actions.
730
731Repeated adding and removing is necessary to measure timing because these operations can be as short as a dozen instructions.
732These instruction sequences may have cases that proceed (in a modern, deep pipeline) without a stall.
733
734This experiment takes the position that:
735\begin{itemize}[leftmargin=*]
736 \item The total time to add and remove is relevant, as opposed to having one time for adding and a separate time for removing.
737 Adds without removes quickly fill memory;
738 removing without adding is impossible.
739 \item A relevant breakdown ``by operation'' is, rather, the usage pattern of the add/remove calls.
	An example pattern choice is adding and removing at the same end, making a stack, or at opposite ends, making a queue.
741 Another is pushing on the front by calling @insert_first(lst, e)@ \vs @insert(e, old_first_elm)@; this aspect provides the test's API coverage.
742 \VRef[Section]{s:UseCases} gives the full breakdown.
743 \item Speed differences caused by the host machine's memory hierarchy need to be identified and explained,
744 but do not represent advantages of one framework over another.
745\end{itemize}
746
The experiment measures the mean duration of a sequence of additions and removals.
The distribution of speeds experienced by an individual add-remove pair (tail latency) is not discussed.
Space efficiency is shown only indirectly, by way of caches' impact on speed.
750The experiment is sensitive enough to show:
751\begin{itemize}
752 \item intrusive lists performing (majorly) differently than wrapped lists,
753 \item a space of (lesser) performance differences among the intrusive lists.
754\end{itemize}
755
In all cases, the quantity discussed is the duration of one insert-remove (IR).
An IR is the time taken to do one innermost insertion-loop iteration, one innermost removal-loop iteration, and its share of all overheads, amortized.
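Concretely, the reported quantity is
\[ \mbox{IR duration} = \frac{\mbox{elapsed time of the timed loop}}{\mbox{number of insert-remove pairs completed}} , \]
matching the harness computation @getTimerDuration() / totalOpsDone@ shown below.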

Lower IR duration is better.
This experiment typically does an IR in 1--10 ns.
The short end of this range has durations of single-digit clock-cycle counts.
Therefore, the situations that achieve the best times are saturating the instruction pipeline successfully.
763
Often, an IR duration value needs to be considered relatively.
For example, \VRef[Section]{toc:sweet-sore} asks whether one linked-list implementation is more sensitive than another to changing which computer runs the test.
A finding might be that a machine change slows implementation A by 10\% and B by 20\%.
This finding is not saying that A is faster than B (on either machine).
The finding could stand if B started 10\% faster and the machine change levelled them off, if B started slower and got worse, or in myriad other cases.
The finding asserts that such distinctions are not what is immediately relevant.
The arithmetic that produces the 10\% and 20\% answers removes the information about which one starts, or ends up, faster.
Each implementation's to-machine duration is stated relative to \emph{the same implementation's} from-machine duration.
The resulting measure is still about a duration.
The framework with the lower from-machine-relative duration handles the change better.
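For instance, with hypothetical numbers: if A takes 2.0 ns per IR on the from-machine and 2.2 ns on the to-machine, while B takes 1.8 ns and 2.16 ns, then A's relative duration is $2.2/2.0 = 1.10$ (a 10\% slowdown) and B's is $2.16/1.8 = 1.20$ (a 20\% slowdown); A handles the machine change better, even though B is faster than A on both machines.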
774
775
\subsubsection{Test Program}
\label{s:TetProgram}

The experiment driver defines an (intrusive) node type:
\begin{cfa}
struct Node {
	int i, j, k; // fields
	// possible intrusive links
};
\end{cfa}
and considers the speed of building and tearing down a list of $n$ instances of it.
% A number of experimental rounds per clock check is precalculated to be appropriate to the value of $n$.
\begin{cfa}
// simplified harness: CFA implementation,
// stack movement, insert-first polarity, head-mediated access
size_t totalOpsDone = 0;
dlist( node_t ) lst;
node_t nodes[ n ];						$\C{// preallocated list storage}$
startTimer();
while ( CONTINUE ) {					$\C{// \(\approx\) 20 second duration}$
	for ( i; n ) insert_first( lst, nodes[i] );	$\C{// build up}$
	for ( i; n ) remove_first( lst );	$\C{// tear down}$
	totalOpsDone += n;
}
stopTimer();
reportedDuration = getTimerDuration() / totalOpsDone; // throughput per insert/remove operation
\end{cfa}
To reduce administrative overhead, the $n$ nodes for each experiment list are preallocated in an array (on the stack), which removes dynamic allocation for this storage.
For intrusive lists, these nodes contain the intrusive link fields; for a wrapped list, a pointer to the node is stored in a dynamically-allocated internal node.
Copying the node for wrapped lists would skew the results with administration costs.
The list insertion/removal operations are repeated for a typical 20+ second duration.
After each round, a counter is incremented by $n$ (for throughput).
Time is measured outside the loop because a large $n$ can overrun the time duration before the @CONTINUE@ flag is tested.
Hence, there is a minimum of one outer (@CONTINUE@) loop iteration for large lists.
The loop duration is divided by the counter and this throughput is reported.
In a scatter-plot, each dot is one throughput, which means insert + remove + harness overhead.
The harness overhead is constant when comparing linked-list frameworks and is kept as small as possible.
% The remainder of the setup section discusses the choices that affected the harness overhead.
814
To test list operations, the experiment performs the inserts/removes in different patterns, \eg insert and remove from the front, insert from the front and remove from the back, random insert and remove, \etc.
Unfortunately, @std::list@ does \emph{not} support direct insert/remove from a node without an iterator, \ie no @erase( node )@, even though the list is doubly-linked.
To eliminate the iterator, a trick is used for random insertions without replacement, which takes advantage of the array nature of the nodes.
The @i@ fields in the nodes are initialized from @0..n-1@.
These @i@ values are then shuffled among the nodes, and the @i@ value is used as an indirection to that node for insertion.
Hence, the nodes are inserted in random order and removed in the same random order.
$\label{p:Shuffle}$
\begin{cfa}
	for ( i; n ) @nodes[i].i = i@;			$\C[3.25in]{// indirection}$
	shuffle( nodes, n );					$\C{// random shuffle indirects within nodes}$

	while ( CONTINUE ) {
		for ( i; n ) {
			node_t & temp = nodes[ nodes[i].i ];	$\C{// select random node in array}$
			@temp.j = 0;@					$\C{// only touch random node for wrapped nodes}$
			insert_first( lst, temp );		$\C{// build up}$
		}
		for ( i; n ) pass( &remove_first( lst ) );	$\C{// tear down}\CRT$
		totalOpsDone += n;
	}
\end{cfa}
Note, insertion traverses the array of nodes linearly, @nodes[i]@.
For intrusive lists, the inserted (random) node is always touched because its link fields are read/written for insertion into the list.
Hence, the array of nodes is being accessed both linearly and randomly during the traversal.
For wrapped lists, the wrapped nodes are traversed linearly but the random node is not accessed; only a pointer to it is inserted into the linearly accessed wrapped node.
Hence, the traversal is the same as the non-random traversal above.
To level the experiments, an explicit access to the random node, @temp.j = 0@, is added for the wrapped experiment.
Furthermore, it is rare to insert/remove nodes and not access them.
843
844% \emph{Interleaving} allows for movements other than pure stack and queue.
845% Note that the earlier example of using the iterators' array is still a pure stack: the item selected for @erase(...)@ is always the first.
846% Including a less predictable movement is important because real applications that justify doubly linked lists use them.
847% Freedom to remove from arbitrary places (and to insert under more relaxed assumptions) is the characteristic function of a doubly linked list.
848% A queue with drop-out is an example of such a movement.
849% A list implementation can show unrepresentative speed under a simple movement, for example, by enjoying unchallenged ``Is first element?'' branch predictions.
850
851% Interleaving brings ``at middle of list'' cases into a stream of add or remove invocations, which would otherwise be exclusively ``at end''.
852% A chosen split, like half middle and half end, populates a boolean array, which is then shuffled.
853% These booleans then direct the action to end-\vs-middle.
854%
855% \begin{cfa}
856% // harness (bookkeeping and shuffling elided): CFA implementation,
857% // stack movement, insert-first polarity, interleaved element-based remove access
858% dlist( item_t ) lst;
859% item_t items[ n ];
860% @bool interl[ n ];@ // elided: populate with weighted, shuffled [0,1]
861% while ( CONTINUE ) {
862% item_t * iters[ n ];
863% for ( i; n ) {
864% insert_first( items[i] );
865% iters[i] = & items[i];
866% }
867% @item_t ** crsr[ 2 ]@ = { // two cursors into iters
868% & iters[ @0@ ], // at stack-insert-first's removal end
869% & iters[ @n / interl_frac@ ] // in middle
870% };
871% for ( i; n ) {
872% item *** crsr_use = & crsr[ interl[ i ] ]@;
873% remove( *** crsr_use ); // removing from either middle or end
874% *crsr_use += 1; // that item is done
875% }
876% assert( crsr[0] == & iters[ @n / interl_frac@ ] ); // through second's start
877% assert( crsr[1] == & iters[ @n@ ] ); // did the rest
878% }
879% \end{cfa}
880%
881% By using the pair of cursors, the harness avoids branches, which could incur prediction stall times themselves, or prime a branch in the SUT.
882% This harness avoids telling the hardware what the SUT is about to do.
883
884
885\subsubsection{Use Cases}
886\label{s:UseCases}
887
888\begin{figure}
889\begin{comment}
890\centering
891\setlength{\tabcolsep}{8pt}
892\begin{tabular}{@{}ll@{}}
893\begin{tabular}{@{}c|c|c@{}}
894movement & polarity & accessor \\
895\hline
896\hline
897stack &
898 \begin{tabular}{@{}l@{}}
899 insert-first \\
900 \hline
901 insert-last
902 \end{tabular}
903 &
904 \begin{tabular}{@{}l@{}}
905 insert-head / remove-head \\
906 \hline
907 insert-list / remove-head \\
908 \hline
909 insert-head / remove-list
910 \end{tabular}
911 \\
912\hline
913queue &
914 \begin{tabular}{@{}l@{}}
915 insert-first \\
916 \hline
917 insert-last
918 \end{tabular}
919 &
920 \begin{tabular}{@{}l@{}}
921 insert-head / remove-head \\
922 \hline
923 insert-list / remove-head \\
924 \hline
925 insert-head / remove-list
926 \end{tabular}
927\end{tabular}
928&
929 \setlength{\tabcolsep}{3pt}
930 \small
931 \begin{tabular}{@{}ll@{}}
932 I: & stack, insert first, all head \\
933 II: & stack, insert first, insert element \\
934 III:& stack, insert first, remove element \\
935 IV: & stack, insert last, all head \\
936 V: & stack, insert last, insert element \\
937 VI: & stack, insert last, remove element \\
938 VII:& queue, insert first, all head \\
939 VIII:& queue, insert first, insert element \\
940 IX: & queue, insert first, remove element \\
941 X: & queue, insert last, all head \\
942 XI: & queue, insert last, iinsert element \\
943 XII:& queue, insert last, remove element \\
944 \end{tabular}
945\end{tabular}
946\end{comment}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{rcccccccccccc}
& I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII \\
Movement &
stack & stack & stack & stack & stack & stack &
queue & queue & queue & queue & queue & queue \\
Polarity &
i-first & i-first & i-first & i-last & i-last & i-last &
i-first & i-first & i-first & i-last & i-last & i-last \\
Accessor &
all hd & ins-e & rem-e & all hd & ins-e & rem-e &
all hd & ins-e & rem-e & all hd & ins-e & rem-e
\end{tabular}
\caption{Experiment use cases, numbered.}
\label{f:ExperimentOperations}
\end{figure}
964
Where \VRef[Figure]{f:ListPerfGlossary} enumerates the specific values, recall the use-case dimensions are:
966\begin{description}
967 \item[movement ($\times 2$)]
968 In these experiments, strict stack and queue patterns are tested.
969 \item[polarity ($\times 2$)]
970 Obtain one polarity from the other by reversing uses of first/last.
971 \item[accessor ($\times 3$)]
972 Giving an add/remove location by a list head's first/last, \vs by a preexisting reference to an individual element?
973\end{description}
974
975A use case is a specific selection of movement, polarity and accessor.
976These experiments run twelve use cases.
977When a comparison is showing only what can happen when switching among use cases (as opposed to \eg how stacks are different from queues), the numbering scheme of \VRef[Figure]{f:ExperimentOperations} is used.
978
979With accessor, when an action names its insertion position or removal element, the harness either
980\begin{itemize}
981\item defers to the list-head's tracking of first/last (``through the head''), or
982\item applies its own knowledge of the current pattern, to name a position/element that happens to be first/last (``of known element'').
983\end{itemize}
984
985The accessor patterns, at the (\CFA) API level, are:
986\begin{description}
987 \item[all (through the) head:] Both inserts and removes happen through the list head. The list head operations are @insert_first@, @insert_last@, @remove_first@ and @remove_last@. \\
988 \item[insert (of known) element] \dots and remove through head: Inserts use @insert_before(e, first)@ or @insert_after(e, last)@, where @e@ is being inserted and @first@/@last@ are element references known by list-independent means.
989 \item[remove (of known) element] \dots and insert through the head: Removes use @remove(e)@, where @e@ is being removed. List-independent knowledge establishes that @e@ is first or last, as appropriate.
990\end{description}
991
992Comparing all-head with insert-element gives the relative performance of head-mediated \vs element-oriented insertion, because both use the same removal style.
993Comparing all-head with remove-element gives the relative performance of head-mediated \vs element-oriented removal, because both use the same insertion style.
994
\subsubsection{Sizing}

It is true, but perhaps not obvious, that building and destroying long lists is slower than building and destroying short lists.
Obviously, it takes longer to fuse and divide a hundred neighbours than five.
But the key metric in this work, IR, is about a single link--unlink.
So, critically, linking and unlinking a hundred neighbours actually takes \emph{more} than $20\times$ the time for five neighbours.
The main reason is caching; when more neighbours are being manipulated, more memory is being read and written.

But caching success is about more than the amount of memory worked on.
Subtle changes in access pattern become butterfly effects.
Aggressive instruction-level-parallel scheduling, which enables short IR times, is the amplifier.
A data dependency, present in one framework but not another, is on the critical path in one situation but not in another.
So, duration's response to size is not a steady worsening as size increases.
Rather, each size-independent configuration often responds to size increases with leaps of worsening.
Occasionally, a leap is even followed by a size-run of retrograde response, where a suddenly incurred penalty has a chance to amortize away.
The frameworks tend to leapfrog over each other, at different points, as size increases.

The analysis treats these behaviours as incidental.
It does not try to characterize various exact-size responses.
Rather, size zones are picked, specific effects inside of a zone are averaged away, and the story at one zone is compared to that at another zone.

To preview, \VRef[Section]{toc:coarse-compre} dismisses ``Large'' sizes (above 150 elements), where the performance story is dominated by the amount of memory touched, inherently, by the choice of intrusive list \vs wrapped, and where one intrusive framework is quite obviously as good as another.
At smaller sizes, comparing one intrusive framework to another makes sense; this comparison occurs in the remaining ``Result'' sections.
Among the ``not Large'' sizes used there, two further zones, Small and Medium, are selected as representatives of what can vary when the scale is changed.
These particular ranges were chosen because each range tends to have one story repeated across its constituent sizes.
If \CFA's duration increases across Small, then the other frameworks' usually do too.
If \CFA is beating \uCpp at the low end of Large, then it usually is at the high end too.
The leapfrogging tends to happen outside of these two ranges.

A spot of poor performance appears in the general results for \CFA at size 1.
Section \MLB{TODO:xref} explores the phenomenon and concludes that it is an anomaly due to a quirky interaction with the testing rig.
To do so, it considers size as either length or width.
Length is the number of elements in a list.
Width is the number of these lists being kept, worked upon in round-robin order.
Outside of \MLB{TODO:xref}, size always means length, and width is 1.
1031
1032
1033
1034\subsubsection{Execution Environment}
1035\label{s:ExperimentalEnvironment}
1036
1037The performance experiments are run on:
1038\begin{description}[leftmargin=*,topsep=3pt,itemsep=2pt,parsep=0pt]
1039%\item[PC]
1040%with a 64-bit eight-core AMD FX-8370E, with ``taskset'' pinning to core \#6. The machine has 16 GB of RAM and 8 MB of last-level cache.
1041%\item[ARM]
1042%Gigabyte E252-P31 128-core socket 3.0 GHz, WO memory model
1043\item[AMD]
1044Supermicro AS--1125HS--TNR EPYC 9754 128--core socket, hyper-threading $\times$ 2 sockets (512 processing units) 2.25 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 1024KB L2, 16MB L3, where each L3 cache covers 1 NUMA node and 8 cores (16 processors).
\item[Intel]
Supermicro SYS-121H-TNR Xeon Gold 6530 32--core, hyper-threading $\times$ 2 sockets (128 processing units) 2.1 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 2048KB L2, 160MB L3, where each L3 cache covers 2 NUMA nodes and 32 cores (64 processors).
\end{description}
The experiments are single threaded and pinned to a single core to prevent any OS movement, which might cause cache or NUMA effects perturbing the experiment.
1049
1050The compiler is gcc/g++-14.2.0 running on the Linux v6.8.0-52-generic OS.
1051Switching between the default memory allocators @glibc@ and @llheap@ is done with @LD_PRELOAD@.
1052To prevent eliding certain code patterns, crucial parts of a test are wrapped by the function @pass@
1053\begin{cfa}
1054// prevent eliding, cheaper than volatile
1055static inline void * pass( void * v ) { __asm__ __volatile__( "" : "+r"(v) ); return v; }
1056...
1057pass( &remove_first( lst ) ); // wrap call to prevent elision, insert cannot be elided now
1058\end{cfa}
1059The call to @pass@ can prevent a small number of compiler optimizations but this cost is the same for all lists.
1060
1061
1062The main difference in the machines is their cache structure.
1063The AMD has smaller caches that are shared less, while the Intel shares larger caches among more processors.
1064This difference, while an interesting tradeoff for highly concurrent use, is rather one-sided for sequential use, such as this experiment's.
1065The Intel offers a single processor a bigger cache.
1066
1067\subsubsection{Recap and Master Legend}
1068
There are 12 use cases, which are all combinations of 2 movements, 2 polarities, and 3 accessors.
There are 4 physical contexts, which are all combinations of 2 machines and 2 size (length) zones (and a single width, of value 1).
Each physical context samples 4 specific sizes.

There are 3.25 frameworks.
This accounting reflects that LQ-list supports only the movement--polarity combination ``stack, insert first.''
So LQ-list fills a quarter of the otherwise-orthogonal space.
1076
Use case, physical context, and framework are the explanatory factors.
1078Taking all combinations of the explanatory factors gives 624 individual configurations.
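One way to arrive at this count from the factor levels above is
\[
\underbrace{12 \times (4 \times 4) \times 3}_{\mbox{full frameworks}} + \underbrace{3 \times (4 \times 4) \times 1}_{\mbox{LQ-list}} = 576 + 48 = 624 ,
\]
that is, use cases $\times$ physical samples (4 contexts $\times$ 4 sizes each) $\times$ frameworks, with LQ-list contributing only its quarter of the use cases.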
1079
1080Though there are multiple experimental trials of each configuration (to assure repeatability), the usual measure is mean AIR among the trials, considered for each of the 624 individual configurations.
1081
1082All means reported in this analysis are geometric.
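For reference, the geometric mean of $n$ positive measurements $x_1, \ldots, x_n$ (durations or ratios) is
\[
\mathrm{GM}( x_1, \ldots, x_n ) = \Bigl( \prod_{i=1}^{n} x_i \Bigr)^{1/n} = \exp\Bigl( \frac{1}{n} \sum_{i=1}^{n} \ln x_i \Bigr) ,
\]
which aggregates multiplicative differences symmetrically, so a configuration running at half speed and another at double speed offset each other rather than skewing the mean upward.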
1083
1084\MLB{TODO: add example plots; explain histogram of 624}
1085
1086
1087\subsection{Result: Coarse comparison of styles}
1088
1089This comparison establishes how an intrusive list performs compared with a wrapped-reference list.
1090\VRef[Figure]{fig:plot-list-zoomout} presents throughput at various list lengths for a linear and random (shuffled) insert/remove test.
Other kinds of scans were made, but the results are similar in many cases, so it is sufficient to discuss these two scans, representing different ends of the access spectrum.
In the graphs, all four intrusive lists (lq-list, lq-tailq, upp-upp, cfa-cfa, see end of \VRef{s:Contenders}) are plotted with the same symbol;
sometimes these symbols clump on top of each other, showing that the performance difference among intrusive lists is small in comparison to the wrapped list (@std::list@).
1094See~\VRef{s:ComparingIntrusiveImplementations} for details among intrusive lists.
1095
The list lengths start at 10 due to the short insert/remove times of 2--4 ns for intrusive lists \vs 15--20 ns for STL's wrapped-reference list.
For very short lists, like 4, the experiment time of 4 $\times$ 2.5 ns and the experiment overhead (loops) of 2--4 ns result in an artificial administrative bump at the start of the graph, having nothing to do with the insert/remove times.
1098As the list size grows, the administrative overhead for intrusive lists quickly disappears.
1099
1100\begin{figure}
1101 \centering
1102 \setlength{\tabcolsep}{0pt}
1103 \begin{tabular}{p{0.75in}p{2.75in}p{3in}}
1104 &
1105 \subfloat[Linear List Nodes, AMD]{\label{f:Linear-swift}
1106 \hspace*{-0.75in}
1107 \includegraphics{plot-list-zoomout-noshuf-swift.pdf}
1108 } % subfigure
1109 &
1110 \subfloat[Linear List Nodes, Intel]{\label{f:Linear-java}
1111 \includegraphics{plot-list-zoomout-noshuf-java.pdf}
1112 } % subfigure
1113 \\
1114 &
1115 \subfloat[Random List Nodes, AMD]{\label{f:Random-swift}
1116 \hspace*{-0.75in}
1117 \includegraphics{plot-list-zoomout-shuf-swift.pdf}
1118 } % subfigure
1119 &
1120 \subfloat[Random List Nodes, Intel]{\label{f:Random-java}
1121 \includegraphics{plot-list-zoomout-shuf-java.pdf}
1122 } % subfigure
1123 \end{tabular}
1124 \caption{Insert/remove duration \vs list length.
	Lengths go as large as possible without error.
1126 One example use case is shown: stack movement, insert-first polarity and head-mediated access. Lower is better.}
1127 \label{fig:plot-list-zoomout}
1128\end{figure}
1129
1130The key performance factor between the intrusive and the wrapped-reference lists is the dynamic allocation for the wrapped nodes.
1131Hence, this experiment is largely measuring the cost of @malloc@/\-@free@ rather than insert/remove, and is sensitive to the layout of memory by the allocator.
1132For insert/remove of an intrusive list, the cost is manipulating the link fields, which is seen by the relatively similar results for the different intrusive lists.
For insert/remove of a wrapped-reference list, the costs are: dynamically allocating/deallocating a wrapped node, copying an external-node pointer into the wrapped node for insertion, and linking the wrapped node to/from the list;
the allocation dominates these costs.
For example, the experiment was run with both glibc and llheap memory allocators, where llheap reduced the cost from 20 to 16 ns, still far from the 2--4 ns for linking an intrusive node.
Unfortunately, there is no way to tease apart the allocation, copying, and linking costs for wrapped lists, as there is no way to preallocate the list nodes without writing a mini-allocator to manage that storage.
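To make the cost asymmetry concrete, the following sketch (illustrative only, not the measured implementations; the type and function names are hypothetical) contrasts the per-insert work of the two styles: an intrusive insert-first writes a few link fields, while a wrapped insert-first must first allocate its wrapper node.
\begin{cfa}
#include <stdlib.h>   // malloc
struct item { int data; struct item * prev, * next; };   // intrusive: links live in the item
void insert_first_intrusive( struct item ** head, struct item * e ) {
	e->next = *head;  e->prev = NULL;                     // cost: a few pointer writes
	if ( *head ) (*head)->prev = e;
	*head = e;
}
struct wrapper { struct item * ref; struct wrapper * prev, * next; };   // wrapped: node refers to the item
void insert_first_wrapped( struct wrapper ** head, struct item * e ) {
	struct wrapper * w = malloc( sizeof(struct wrapper) ); // dominant cost: dynamic allocation
	w->ref = e;                                            // plus copying the element pointer
	w->next = *head;  w->prev = NULL;
	if ( *head ) (*head)->prev = w;
	*head = w;
}
\end{cfa}
The corresponding removals mirror this asymmetry, with the wrapped version additionally paying for a @free@.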
1137
1138In detail, \VRef[Figure]{f:Linear-swift}--\subref*{f:Linear-java} shows linear insertion of all the nodes and then linear removal, both in the same direction.
1139For intrusive lists, the nodes are adjacent in memory from being preallocated in an array.
1140For wrapped lists, the wrapped nodes happen to be adjacent because the memory allocator uses bump allocation during the initial phase of allocation.
1141As a result, these memory layouts result in high spatial and temporal locality for both kinds of lists during the linear array traversal.
1142With address look-ahead, the hardware does an excellent job of managing the multi-level cache.
1143Hence, performance is largely constant for both kinds of lists, until L3 cache and NUMA boundaries are crossed for longer lists and the costs increase consistently for both kinds of lists.
1144For example, on AMD (\VRef[Figure]{f:Linear-swift}), there is one NUMA node but many small L3 caches, so performance slows down quickly as multiple L3 caches come into play, and remains constant at that level, except for some anomalies for very large lists.
On Intel (\VRef[Figure]{f:Linear-java}), there are four NUMA nodes and four slowdown steps as list length increases.
1146At each step, the difference between the kinds of lists decreases as the NUMA effect increases.
1147
1148In detail, \VRef[Figure]{f:Random-swift}--\subref*{f:Random-java} shows random insertion and removal of the nodes.
1149As for linear, there is the issue of memory allocation for the wrapped list.
1150As well, the consecutive storage-layout is the same (array and bump allocation).
Hence, the difference is the random linking among nodes: even though the list is traversed linearly, the accesses are random, producing similar cache events for both kinds of lists.
1152Both \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} show the slowdown of random access as the list-length grows resulting from stepping out of caches into main memory and crossing NUMA nodes.
1153% Insert and remove operations act on both sides of a link.
1154%Both a next unlisted item to insert (found in the items' array, seen through the shuffling array), and a next listed item to remove (found by traversing list links), introduce a new user-item location.
1155As for linear, the Intel (\VRef[Figure]{f:Random-java}) graph shows steps from the four NUMA nodes.
1156Interestingly, after $10^6$ nodes, intrusive lists are slower than wrapped.
1157I did not have time to track down this anomaly, but I speculate it results from the difference in touching the data in the accessed node, as the data and links are together for intrusive and separated for wrapped.
For the llheap memory-allocator and the two tested architectures, intrusive lists outperform wrapped lists up to size $10^3$ for both linear and random, and performance begins to converge around $10^6$ nodes as architectural issues begin to dominate.
Clearly, the memory allocator and hardware architecture play a large role in the total cost and the crossover points as list size increases.
1160% In an odd scenario where this intuition is incorrect, and where furthermore the program's total use of the memory allocator is sufficiently limited to yield approximately adjacent allocations for successive list insertions, a non-intrusive list may be preferred for lists of approximately the cache's size.
1161
The takeaway from this experiment is that wrapped-list operations are expensive because memory allocation is expensive at this fine-grained level of execution.
1163Hence, when possible, using intrusive links can produce a significant performance gain, even if nodes must be dynamically allocated, because the wrapping allocations are eliminated.
1164Even when space is a consideration, intrusive links may not use more storage if a node is often linked.
1165Unfortunately, many programmers are unaware of intrusive lists for dynamically-sized data-structures or their tool-set does not provide them.
1166
1167% Note, linear access may not be realistic unless dynamic size changes may occur;
1168% if the nodes are known to be adjacent, use an array.
1169
1170% In a wrapped-reference list, list nodes are allocated separately from the items put into the list.
1171% Intrusive beats wrapped at the smaller lengths, and when shuffling is avoided, because intrusive avoids dynamic memory allocation for list nodes.
1172
1173% STL's performance is not affected by element order in memory.
1174%The field of intrusive lists begins with length-1 operations costing around 10 ns and enjoys a ``sweet spot'' in lengths 10--100 of 5--7-ns operations.
1175% This much is also unaffected by element order.
1176% Beyond this point, shuffled-element list performance worsens drastically, losing to STL beyond about half a million elements, and never particularly leveling off.
1177% In the same range, an unshuffled list sees some degradation, but holds onto a 1--2 $\times$ speedup over STL.
1178
1179% The apparent intrusive ``sweet spot,'' particularly its better-than-length-1 speed, is not because of list operations truly running faster.
1180% Rather, the worsening as length decreases reflects the per-operation share of harness overheads incurred at the outer-loop level.
1181% Disabling the harness's ability to drive interleaving, even though the current scenario is using a ``never work in middle'' interleave, made this rise disappear.
1182% Subsequent analyses use length-controlled relative performance when comparing intrusive implementations, making this curiosity disappear.
1183
1184% The remaining big-swing comparison points say more about a computer's memory hierarchy than about linked lists.
1185% The tests in this chapter are only inserting and removing.
1186% They are not operating on any user payload data that is being listed.
1187% The drastic differences at large list lengths reflect differences in link-field storage density and in correlation of link-field order to element order.
1188% These differences are inherent to the two list models.
1189
1190% A wrapped-reference list's separate nodes are allocated right beside each other in this experiment, because no other memory allocation action is happening.
1191% As a result, the interlinked nodes of the STL list are generally referencing their immediate neighbours.
1192% This pattern occurs regardless of user-item shuffling because this test's ``use'' of the user-items' array is limited to storing element addresses.
1193% This experiment, driving an STL list, is simply not touching the memory that holds the user data.
1194% Because the interlinked nodes, being the only touched memory, are generally adjacent, this case too has high memory locality and stays fast.
1195
1196% But the comparison of unshuffled intrusive with wrapped-reference gives the performance of these two styles, with their the common impediment of overfilling the cache removed.
1197% Intrusive consistently beats wrapped-reference by about 20 ns, at all sizes.
1198% This difference is appreciable below list length 0.5 M, and enormous below 10 K.
1199
1200
1201\subsection{Result: Intrusive Winners and Losers}
1202\label{s:ComparingIntrusiveImplementations}
1203
1204The preceding result shows the intrusive frameworks have better performance than the wrapped lists for small to medium sized lists.
1205This analysis covers the experiment position taken in \VRef{s:AddRemovePerformance} for movement, polarity, and accessor.
1206\VRef[Figure]{f:ExperimentOperations} shows the experiment use cases tested, which results in 12 experiments (I--XII) for comparing intrusive frameworks.
To preclude hardware interference, only list sizes below 150 are examined to differentiate among the intrusive frameworks.
1208The data is selected from the start of \VRef[Figures]{f:Linear-swift}--\subref*{f:Linear-java}, but the start of \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} is largely the same.
1209
1210
1211\begin{figure}
1212 \centering
1213 \includegraphics{plot-list-1ord.pdf}
1214 \caption{Histogram of IR durations, decomposed by all first-order effects.
1215 Each of the three breakdowns divides the entire population of test results into its mutually disjoint constituents. The measure is duration; lower is better.}
1216 \label{fig:plot-list-1ord}
1217\end{figure}
1218
1219\VRef[Figure]{fig:plot-list-1ord} gives the first-order effects.
The first breakdown, architecture/size-zone (left), shows the overall performance of all 12 experiments on the two hardware architectures.
1221The relative experiment duration for each experiment is shown as a bar in each column and the black bar in that column shows the average of all 12 experiments.
1222By inspection, Intel runs faster than AMD.
1223As well, the small zone (lists of 4--16 elements) runs faster than the medium zone (lists of 50--200 elements).
1224The size effect is more pronounced on the AMD with its smaller L3 cache than it is on the Intel.
1225(No NUMA effects for these list sizes.)
1226Specifically, a 20\% standard deviation exists here, between the means of the four physical-effect categories.
1227The key takeaway for this comparison is the context it establishes for interpreting the following framework comparisons.
Both the particulars of a machine's cache design, and a list length's effect on the program's cache friendliness, affect insert/remove speed in the manner illustrated in this breakdown.
That is, if you are running on an unknown machine, at a scale above the anomaly-prone smallest sizes and below where major last-level-cache effects overtake the general intrusive-list advantage, but with an unknown relationship between list size and the low-level caches, you are likely to experience an unpredictable speed impact on the order of 20\%.
1230
1231A similar situation comes from \VRef[Figure]{fig:plot-list-1ord}'s second comparison, by use case.
1232Specific interactions do occur, like framework X doing better on stacks than on queues; a selection of these is addressed in \VRef[Figure]{fig:plot-list-2ord} and discussed shortly.
1233But they are so irrelevant to the issue of picking a winning framework that it is sufficient here to number the use cases opaquely.
A given list framework succeeds or fails as a language's general library without knowledge of whether your use will have stack or queue movement.
1235So you face another lottery, with a likely win-loss range of the standard deviation of the individual use cases' means: 9\%.
1236
1237This context helps interpret \VRef[Figure]{fig:plot-list-1ord}'s final comparison, by framework.
1238In this result, \CFA runs similarly to \uCpp and LQ-@list@ runs similarly to @tailq@.
1239The standard deviation of the frameworks' means is 8\%.
1240Framework choice has, therefore, less impact on your speed than the lottery tickets you already hold.
1241
1242Now, the LQs do indeed beat the UW languages by 15\%, a fact explored further in \MLB{TODO: xref}.
1243But so too does use case VIII typically beat use case IV by 38\%.
1244As does a small size on the Intel typically beat a medium size on the AMD by 66\%.
1245Framework choice is simply not where you stand to win or lose the most.
1246
1247
1248\subsection{Intrusive Sweet and Sore Spots}
1249
1250\begin{figure}
1251 \centering
1252 \includegraphics{plot-list-2ord.pdf}
1253 \caption{Histogram of IR durations, illustrating interactions with framework.
1254 Each distribution shows how its framework reacts to a single other factor being varied across one pair of options.
1255 Every (binned and mean-contributing) individual data point represents a pair of test setups, one with the criterion set to the option labelled at the top; the other setup uses the bottom option.
1256 This point's y-axis score is the ratio of these setups' durations.
1257 The point lands in a bin closer to the label of the option that performs better.
1258 }
1259 \label{fig:plot-list-2ord}
1260\end{figure}
1261
1262\VRef[Figure]{fig:plot-list-1ord} stays razor-focused on only first-order effects in order to contextualize a winner/loser framework observation.
1263But this perspective cannot address questions like, ``Where are \CFA's sore spots?''
Moreover, the shallow treatment of use cases by ordinals said nothing about how stack usage compares with queues'.
1265
1266\VRef[Figure]{fig:plot-list-2ord} provides such answers.
Its size-zone criterion refines the obvious notion that a small size runs faster than a big size; the issue is by how much.
Indeed, all means favour small and few tails favour medium.
But the various frameworks do not respond to the different sizes and machines uniformly.
On the AMD, \CFA and \uCpp have a modest size sensitivity, LQ-tailq's is moderate, and LQ-list seems unaffected.
1271On the Intel, \CFA's increases to moderate, while \uCpp is now unaffected, and both LQs have a dramatic response.
1272The Intel is more sensitive to size than the AMD.
1273
1274Turning next to movement and polarity, the responses appear more subdued.
Note that LQ-list has no representation in these comparisons because it only supports stacks that push and pop with the first element.
\CFA is completely stable under movement and polarity changes.
\uCpp and LQ show modest responses favouring queues and insert-last polarity.
1278
1279Finally, with accessor, a \CFA sore spot emerges.
1280Note the pair of two-way comparisons pulled from the three experiment setups used.
1281First, the all-head/insert-element opposition addresses which insertion style is better---by-head (top) and by-element (bottom).
1282Then, the all-head/remove-element opposition addresses which removal style is better---by-head (top) and by-element (bottom).
1283The LQs favour insertion by head and removal by element.
1284\CFA and \uCpp favour both operations by head.
1285The strongest effect is \CFA's aversion to removal by element---certainly an opportunity for improvement.
1286
1287
1288\subsection{\CFA Tiny-Size Anomaly}
1289
1290The \CFA list occasionally showed a concerning slowdown at length 1.
The issue, seen in \VRef[Figure]{fig:plot-list-short} (top-left corner), has \CFA taking above 10 ns per IR.
1292It occurs only for the queue movement, only on the AMD machine, and only for the \CFA framework.
1293
Length-1 performance is an important case.
Lists like those of waiting threads are frequently left empty, with the occasional thread (or few) momentarily joining.
These scenarios need to perform well.
1297
1298A cause of this behaviour was never determined.
1299Speculation is that \CFA's increased data dependency, a result of the tagging scheme, pairs poorly with the situation implied by queue movement.
1300The aliasing, at length 1, is: the head's first element is the head's last element.
1301With stack movement, one of these aliases is used twice, while with queue movement, both are used in alternation.
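To illustrate the kind of data dependency the tagging introduces (a sketch only, not \CFA's actual representation, whose encoding may differ), each link value must pass through the ALU before it can be dereferenced, lengthening the dependency chain between successive pointer-chasing loads:
\begin{cfa}
#include <stdint.h>   // uintptr_t
#include <stddef.h>   // NULL
struct tnode { uintptr_t next_tagged; int v; };   // low-order bit set means: no element this way
static inline struct tnode * next_elem( struct tnode * n ) {
	uintptr_t raw = n->next_tagged;                // load the link
	if ( raw & 1u ) return NULL;                   // ALU test depends on the load completing
	return (struct tnode *)( raw & ~(uintptr_t)1u );   // mask feeds the address of the next load
}
\end{cfa}
An untagged list dereferences the loaded value directly, so the tagged scheme adds work on exactly the path that length-1 queue movement exercises back-to-back.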
1302
The breakdowns earlier in the performance assessment work by varying length only.
1304That is, they see the story down the leftmost column in a triangle.
1305The insight for contextualizing this issue was to inspect both length and width.
1306
The issue is practically mitigated by noticing that the difficulty fades away as width increases.
1308This effect is seen both in \VRef[Figure]{fig:plot-list-short}'s easement across the top triangle rows, and, zoomed farther out, in \VRef[Figure]{fig:plot-list-wide}.
1309
1310Increasing the width matters to the aliasing hypothesis.
1311In a narrow experiment, one element's insert and remove happen in rapid succession.
So, the two aliases are exercised closer together, making a data hazard (that lacks ideal hardware treatment) stretch the instruction-pipeline schedule more significantly.
1313Increasing the width adds harness-induced gaps between the uses of each alias, behind which a potential hazard can hide.
1314
1315In the practical scenario that judges length-1 performance as relevant, width 1 is contrived.
A thread putting itself on an often-empty waiters' list is not doing so on one such list repeatedly, at least not without taking other situation-induced pauses.
1317
Thus, the congestion at low width and length comes from the harness using repetition (in order to obtain a measurable time).
1319It does not reflect the situation that motivates the legitimate desire for good length-1 performance.
1320
There likely is a real hazard, unique to the \CFA framework, when a queue movement is repeated on a tiny list \emph{without other intervening action}.
1322Doing so is believed to occur only in contrived situations.
1323
1324
1325\begin{figure}
1326\centering
1327 \includegraphics[trim={00in, 5.5in, 0in, 0in}, clip, scale=0.8]{plot-list-short-temp.pdf}
1328 \caption{Behaviour at very short lengths.}
1329 \label{fig:plot-list-short}
1330\end{figure}
1331
1332\begin{figure}
1333\centering
1334 \includegraphics[trim={0.25in, 1in, 0.25in, 1in}, clip, scale=0.5]{plot-list-wide-temp.pdf}
1335 \caption{Length-1 anomaly resolving at modest width. Points are for varying widths, at fixed length 1.}
1336 \label{fig:plot-list-wide}
1337\end{figure}
1338
1339\begin{comment}
1340 These remarks are mostly about 3ord over 2ord.
1341This analysis does not provide more detail about one framework beating another; it offers different benefits.
1342These interactions further illustrate the lottery-ticket unpredictability that a linked-list user inevitably faces, by revealing stronger-still performance swings hidden from the first-order view.
1343They illustrate the difficult signal-to-noise ratio that I had to overcome in preparing this data.
1344They may serve as a reference guiding future \CFA linked-list work by informing on where to target improvements.
1345Finally, the findings offer the conclusion that \CFA's list offers more consistent performance across usage scenarios, than the other lists.
1346\end{comment}
1347
1348\begin{comment}
1349Further to the interpretation guidance of \VRef[Figure]{fig:plot-list-2ord}'s caption, a comparison with the construction of \VRef[Figure]{fig:plot-list-1ord} may be helpful.
1350In the first-order graph, the factors being compared had many options: four, twelve and four.
1351# XXX Each option contributes its own, seemingly independent, mean and distribution.
1352# XXX But, in fact, they are not totally independent.
1353# WRONG: it's not just about binary, you also need a split on a conditioned factor
1354 I'm trying to get to:
1355Side by side in earlier style, but they're opposites, so they mirror each other, so you take option B, flip it over, and have option A---I'll just show A
1356\end{comment}
1357
1358\begin{comment}
1359
1360\begin{figure}
1361\centering
1362 \setlength{\tabcolsep}{0pt}
1363 \begin{tabular}{p{0.75in}p{2.75in}p{3in}}
1364 &
1365 \subfloat[Operation I, AMD]{\label{f:AbsoluteTime-i-swift}
1366 \hspace*{-0.75in}
1367 \includegraphics{plot-list-zoomin-abs-i-swift.pdf}
1368 } % subfigure
1369 &
1370 \subfloat[Operation I, Intel]{\label{f:AbsoluteTime-i-java}
1371 \includegraphics{plot-list-zoomin-abs-i-java.pdf}
1372 } % subfigure
1373 \\
1374 &
1375 \subfloat[Operation VIII, AMD]{\label{f:AbsoluteTime-viii-swift}
1376 \hspace*{-0.75in}
1377 \includegraphics{plot-list-zoomin-abs-viii-swift.pdf}
1378 } % subfigure
1379 &
1380 \subfloat[Operation VIII, Intel]{\label{f:AbsoluteTime-viii-java}
1381 \includegraphics{plot-list-zoomin-abs-viii-java.pdf}
1382 } % subfigure
1383 \end{tabular}
1384 \caption{Operation duration \vs list length at small-medium lengths. Two example operations are shown: [I] stack movement with head-only access (plots a and b); [VIII] queue movement with element-oriented removal access (plots c and d); both operations use insert-first polarity. Lower is better.}
1385 \label{fig:plot-list-zoomin}
1386\end{figure}
1387
1388\VRef[Figure]{fig:plot-list-zoomin} shows the sizes below 150 blown up.
1389% The same scenario as the coarse comparison is used: a stack, with insertions and removals happening at the end called ``first,'' ``head'' or ``front,'' and all changes occurring through a head-provided insert/remove operation.
1390The error bars show fastest and slowest time seen on five trials, and the central point is the mean of the remaining three trials.
1391For readability, the points are slightly staggered at a given horizontal value, where the points might otherwise appear on top of each other.
1392The experiment runs twelve use cases;
1393the ones chosen for their variety are scenarios I and VIII from the listing of \VRef[Figure]{fig:plot-list-mchn-szz}, and their results appear in the rows.
1394As in the previous experiment, each hardware architecture appears in a column.
1395Note that LQ-list does not support queue operations, so this framework is absent from Operation VIII.
1396
1397At lengths 1 and 2, winner patterns seen at larger sizes generally do not apply.
1398Indeed, an issue with \CFA giving terrible on queues at length 1 is evident in \VRef[Figure]{f:AbsoluteTime-viii-swift}.
1399This phenomenon is elaborated in \MLB{TODO: xref}.
1400For the remainder of this section, these sizes are disregarded.
1401
1402Even after the very-small anomalies, the selections of operation and machine significantly affect how speed responds to size.
1403For example, Operation I on the AMD (\VRef[Figure]{f:AbsoluteTime-i-swift}) has \CFA generally winning over LQ, while the opposite is seen by switching either to Operation VIII (\VRef[Figure]{f:AbsoluteTime-viii-swift}) or to the Intel (\VRef[Figure]{f:AbsoluteTime-i-java}).
1404For another, Operation I has sore spots at lengths in the middle for \uCpp on AMD and LQ-list on Intel, which resolve at larger lengths; yet no such pattern presents with Operation VIII.
1405
1406In spite of these complex interactions, a couple spots of stability can be analyzed.
1407In these examples, the two defined Size Zones (4--16 being ``small,'' and above 50 being ``medium,'') covering four specific sizes apiece, each tends to show a simple winner/loser story.
1408Manual inspection of other such plots (not detailed) showed that this quality is generally upheld.
1409So these zones are used for basing comparison.
1410
1411\MLB{Peter, caution beyond here. I am reconsidering if this first-dismiss-physical approach to comparison is simplest.}
1412
1413\begin{figure}
1414 \centering
1415 \includegraphics{plot-list-mchn-szz.pdf}
1416 \caption{Histogram of operation durations, decomposed by physical factors.
1417 Measurements are included from only the sizes in the ``small'' and ``medium'' stable zones.
1418 This breakdown divides the entire population of test results into four mutually disjoint constituents.
1419 \MLB{I see that I broke it. But we might be getting rid of it.}
1420 }
1421 \label{fig:plot-list-mchn-szz}
1422\end{figure}
1423
1424\VRef[Figure]{fig:plot-list-mchn-szz} shows the effects of the physical factors of size zone and machine.
1425Each of these four histograms shows variation in duration coming from the four specific sizes in a size zone, from combining results of all twelve operations and all four frameworks.
1426Among the means of the four histograms, there is a standard deviation of 0.9 ns, which is 20\% of the global mean.
1427This variability is due solely to physical factors.
1428
1429From the perspective of assessing winning/losing frameworks, these physical effects are noise.
1430So, subsequent analysis conditions on the phisical effects.
1431That is, it supposes you are put into an unknown physical situation (that is one of the four being tested), then presents all the ways your outcome could change as a result of non-physical factors, assuming that the physical situation is kept constant.
1432It does do by presenting results relative to the mean of the physical quadrant (\VRef[fig]{fig:plot-list-mchn-szz} histogram) to which it belogs.
1433With this adjustment, absolute duration values (in nonsecods) are lost.
1434In return, the physical quadrants are re-combined, enabling assessment of the non-physical factors.
1435\end{comment}
1436
1437\begin{comment}
1438While preparing experiment results, I first tested on my old office PC, AMD FX-8370E Eight-Core, before switching to the large new server for final testing.
1439For this experiment, the results flipped in my favour when running on the server.
1440New CPU architectures are now amazingly good at branch prediction and micro-parallelism in the pipelines.
1441Specifically, on the PC, my \CFA and companion \uCpp lists are slower than lq-tail and lq-list by 10\% to 20\%.
1442On the server, \CFA and \uCpp lists are can be fast by up to 100\%.
1443Overall, LQ-tailq does the best at short lengths but loses out above a dozen elements.
1444\end{comment}
1445
1446% \begin{figure}
1447% \centering
1448% \begin{tabular}{c}
1449% \includegraphics{plot-list-cmp-intrl-shift.pdf} \\
1450% (a) \\
1451% \includegraphics{plot-list-cmp-intrl-outcome.pdf} \\
1452% (b) \\
1453% \end{tabular}
1454% \caption{Caption TODO}
1455% \label{fig:plot-list-cmp-intrl}
1456% \end{figure}
1457
1458\begin{comment}
1459\subsection{Result: \CFA cost attribution}
1460
1461This comparison loosely itemizes the reasons that the \CFA implementation runs 15--20\% slower than LQ.
1462Each reason provides for safer programming.
1463For each reason, a version of the \CFA list was measured that forgoes its safety and regains some performance.
1464These potential sacrifices are:
1465\newcommand{\mandhead}{\emph{mand-head}}
1466\newcommand{\nolisted}{\emph{no-listed}}
1467\newcommand{\noiter}{\emph{no-iter}}
1468\begin{description}
1469\item[mand(atory)-head] Removing support for headless lists.
1470 A specific explanation of why headless support causes a slowdown is not offered.
1471 But it is reasonable for a cost to result from making one piece of code handle multiple cases; the subset of the \CFA list API that applies to headless lists shares its implementation with headed lists.
1472 In the \mandhead case, disabling the feature in \CFA means using an older version of the implementation, from before headless support was added.
1473 In the pre-headless library, trying to form a headless list (instructing, ``Insert loose element B after loose element A,'') is a checked runtime error.
1474 LQ does not support headless lists\footnote{
1475 Though its documentation does not mention the headless use case, this fact is due to one of its insert-before or insert-after routines being unusable in every list model.
1476 For \lstinline{tailq}, the API requires a head.
1477 For \lstinline{list}, this usage causes an ``uncaught'' runtime crash.}.
1478\item[no-listed] Removing support for the @is_listed@ API query.
1479 Along with it goes error checking such as ``When inserting an element, it must not already be listed, \ie be referred to from somewhere else.''
1480 These abilities have a cost because, in order to support them, a listed element that is being removed must be written to, to record its change in state.
1481 In \CFA's representation, this cost is two pointer writes.
1482 To disable the feature, these writes, and the error checking that consumes their result, are put behind an @#ifdef@.
1483 The result is that a removed element sees itself as still having neighbours (though these quasi-neighbours see it differently).
1484 This state is how LQ leaves a removed element; LQ does not offer an is-listed query.
1485\item[no-iter(ation)] Removing support for well-terminating iteration.
1486 The \CFA list uses bit-manipulation tagging on link poiters (rather than \eg null links) to express, ``No more elements this way.''
1487 This tagging has the cost of submitting a retrieved value to the ALU, and awaiting this operation's completion, before dereferencing a link pointer.
1488 In some cases, the is-terminating bit is transferred from one link to another, or has a similar influence on a resulting link value; this logic adds register pressure and more data dependency.
1489 To disable the feature, the @#ifdef@-controlled tag manipulation logic compiles in answers like, ``No, that link is not a terminator,'' ``The dereferenceable pointer is the value you read from memory,'' and ``The terminator-marked value you need to write is the pointer you started with.''
1490 Without this termination marking, repeated requests for a next valid item will always provide a positive response; when it should be negative, the indicated next element is garbage data at an address unlikely to trigger a memory error.
1491 LQ has a well-terminating iteration for listed elements.
1492 In the \noiter case, the slowdown is not inherent; it represents a \CFA optimization opportunity.
1493\end{description}
1494\MLB{Ensure benefits are discussed earlier and cross-reference} % an LQ programmer must know not to ask, ``Who's next?'' about an unlisted element; an LQ programmer cannot write assertions about an item being listed; LQ requiring a head parameter is an opportunity for the user to provide inconsistent data
1495
1496\begin{figure}
1497\centering
1498 \begin{tabular}{c}
1499 \includegraphics{plot-list-cfa-attrib-swift.pdf} \\
1500 (a) \\
1501 \includegraphics{plot-list-cfa-attrib-remelem-swift.pdf} \\
1502 (b) \\
1503 \end{tabular}
1504 \caption{Operation duration ranges for functionality-reduced \CFA list implementations. (a) has the top level slices. (b) has the next level of slicing within the slower element-based removal operation.}
1505 \label{fig:plot-list-cfa-attrib}
1506\end{figure}
1507
1508\VRef[Figure]{fig:plot-list-cfa-attrib} shows the \CFA list performance with these features, and their combinations, turned on and off. When a series name is one of the three sacrifices above, the series is showing this sacrifice in isolation. These further series names give combinations:
1509\newcommand{\attribFull}{\emph{full}}
1510\newcommand{\attribParity}{\emph{parity}}
1511\newcommand{\attribStrip}{\emph{strip}}
1512\begin{description}
1513 \item[full] No sacrifices. Same as measurements presented earlier.
1514 \item[parity] \mandhead + \nolisted. Feature parity with LQ.
1515 \item[strip] \mandhead + \nolisted + \noiter. All options set to ``faster.''
1516\end{description}
1517All list implementations are \CFA, possibly stripped.
1518The plot uses the same LQ-relative basis as earlier.
1519So getting to zero means matching LQ's @tailq@.
1520
1521\VRef[Figure]{fig:plot-list-cfa-attrib}-(a) summarizes the time attribution across the main operating scenarios.
1522The \attribFull series is repeated from \VRef[Figure]{fig:plot-list-cmp-overall}, part (b), while the series showing feature sacrifices are new.
1523Going all the way to \attribStrip at least nearly matches LQ in all operating scenarios, beats LQ often, and slightly beats LQ overall.
1524Except within the accessor splits, both sacrifices contribute improvements individually, \noiter helps more than \attribParity, and the total \attribStrip benefit depends on both contributions.
1525When the accessor is not element removal, the \attribParity shift appears to be counterproductive, leaving \noiter to deliver most of the benefit.
1526For element removals, \attribParity is the heavy hitter, with \noiter contributing modestly.
1527
1528The counterproductive shift outside of element removals is likely due to optimization done in the \attribFull version after implementing headless support, \ie not present in the \mandhead version.
1529This work streamlined both head-based operations (head-based removal being half the work of the element-insertion test).
1530This improvement could be ported to a \mandhead-style implementation, which would bring down the \attribParity time in these cases.
1531
1532More significantly, missing this optimization affects every \attribParity result because they all use head-based inserts or removes for at least half their operations.
1533It is likely a reason that \attribParity is not delivering as well overall as \noiter.
1534It even represents plausible further improvements in \attribStrip.
1535
1536\VRef[Figure]{fig:plot-list-cfa-attrib}-(b) addresses element removal being the overall \CFA slow spot and element removal having a peculiar shape in the (a) analysis.
1537Here, the \attribParity sacrifice bundle is broken out into its two constituents.
1538The result is the same regardless of the operation.
1539All three individual sacrifices contribute noteworthy improvements (\nolisted slightly less).
1540The fullest improvement requires all of them.
1541
1542The \noiter feature sacrifice is unpalatable.
1543But because it is not an inherent slowdown, there may be room to pursue a \noiter-level speed improvement without the \noiter feature sacrifice.
1544The performance crux for \noiter is the pointer-bit tagging scheme.
1545Alternative designs that may offer speedup with acceptable consequences include keeping the tag information in a separate field, and (for 64-bit architectures) keeping it in the high-order byte \ie using byte- rather than bit-oriented instructions to access it.
1546The \noiter speed improvement would bring \CFA to +5\% of LQ overall, and from high twenties to high teens, in the worst case of element removal.
1547
1548Ultimately, this analysis provides options for a future effort that needs to get the most speed out of the \CFA list.
1549
1550\end{comment}
1551
1552
1553\section{Future Work}
1554\label{toc:lst:futwork}
1555
Not discussed in the chapter is a \CFA type-system issue with Plan-9 @inline@ declarations, which concerns implementing the trait @embedded@ to access the contained @dlist@ link-fields.
1557This trait defines an implicit conversion from derived to base (the safe direction).
1558Such a conversion exists implicitly for concrete types using the Plan-9 inheritance feature.
1559However, this implicit conversion is only partially implemented for polymorphism types, such as @dlist@, which prevents the straightforward list interface shown throughout the chapter.
1560
1561My workaround is the macro @P9_EMBEDDED@ placed before an intrusive node is used to declare a list.
\begin{cfa}
struct node {
	int v;
	inline dlink(node);
};
@P9_EMBEDDED( node, dlink(node) );@
dlist( node ) dlist;
\end{cfa}
1570The macro creates specialized access functions to explicitly extract the required information for the polymorphic @dlist@ type.
1571These access functions could have been generated implicitly for each intrusive node by adding another compiler pass.
1572However, this would have been substantial temporary work, when the correct solution is the compiler fix.
1573Hence, the macro workaround is only a small temporary inconvenience;
1574otherwise, all the list API shown in this chapter works.
1575
1576
1577\begin{comment}
1578An API author has decided that the intended user experience is:
1579\begin{itemize}
1580\item
1581offer the user an opaque type that abstracts the API's internal state
1582\item
1583tell the user to extend this type
1584\item
1585provide functions with a ``user's type in, user's type out'' style
1586\end{itemize}
1587Fig XXX shows several attempts to provide this experience.
1588The types in (a) give the setup, achieving the first two points, while the pair of function declarations and calls of (b) are unsuccessful attempts at achieving the third point.
1589Both functions @f1@ and @f2@ allow calls of the form @f( d )@, but @f1@ has the wrong return type (@Base@) for initializing a @Derived@.
1590The @f1@ signature fails to meet ``user's type out;'' this signature does not give the \emph{user} a usable type.
1591On the other hand, the signature @f2@ offers the desired user experience, so the API author proceeds with trying to implement it.
1592
1593\begin{figure}
1594\begin{cfa}
1595#ifdef SHOW_ERRS
1596#define E(...) __VA_ARGS__
1597#else
1598#define E(...)
1599#endif
1600
1601// (a)
1602struct Base { /*...*/ }; // system offers
1603struct Derived { /*...*/ inline Base; }; // user writes
1604
1605// (b)
1606// system offers
1607Base & f1( Base & );
1608forall( T ) T & f2( T & );
1609// user writes
1610void to_elide1() {
1611 Derived & d /* = ... */;
1612 f1( d );
1613 E( Derived & d1 = f1( d ); ) // error: return is not Derived
1614 Derived & d2 = f2( d );
1615
1616 // (d), user could write
1617 Base & b = d;
1618}
1619
1620// (c), system has
1621static void helper( Base & );
1622forall( T ) T & f2( T & param ) {
1623 E( helper( param ); ) // error: param is not Base
1624 return param;
1625}
1626static void helper( Base & ) {}
1627
1628#include <list2.hfa>
1629// (e)
1630// system offers, has
1631forall( T | embedded(T, Base, Base) )
1632 T & f3( T & param ) {
1633 helper( param`inner ); // ok
1634 return param;
1635}
1636// user writes
1637struct DerivedRedux { /*...*/ inline Base; };
1638P9_EMBEDDED( DerivedRedux, Base )
1639void to_elide2() {
1640 DerivedRedux & dr /* = ... */;
1641 DerivedRedux & dr3 = f3( dr );
1642
1643 // (f)
1644 // user writes
1645 Derived & xxx /* = ... */;
1646 E( Derived & yyy = f3( xxx ); ) // error: xxx is not embedded
1647}
1648\end{cfa}
1649\end{figure}
1650
1651
1652The \CFA list examples elide the @P9_EMBEDDED@ annotations that (TODO: xref P9E future work) proposes to obviate.
1653Thus, these examples illustrate a to-be state, free of what is to be historic clutter.
1654The elided portions are immaterial to the discussion and the examples work with the annotations provided.
1655The \CFA test suite (TODO:cite?) includes equivalent demonstrations, with the annotations included.
1656\begin{cfa}
1657struct mary {
1658 float anotherdatum;
1659 inline dlink(mary);
1660};
1661struct fred {
1662 float adatum;
1663 inline struct mine { inline dlink(fred); };
1664 inline struct yours { inline dlink(fred); };
1665};
1666\end{cfa}
1667like in the thesis examples. You have to say
1668\begin{cfa}
1669struct mary {
1670 float anotherdatum;
1671 inline dlink(mary);
1672};
1673P9_EMBEDDED(mary, dlink(mary))
1674struct fred {
1675 float adatum;
1676 inline struct mine { inline dlink(fred); };
1677 inline struct yours { inline dlink(fred); };
1678};
1679P9_EMBEDDED(fred, fred.mine)
1680P9_EMBEDDED(fred, fred.yours)
1681P9_EMBEDDED(fred.mine, dlink(fred))
1682P9_EMBEDDED(fred.yours, dlink(fred))
1683\end{cfa}
1684
1685
1686The function definition in (c) gives this system-implementation attempt.
1687The system needs to operate on its own data, stored in the @Base@ part of the user's @d@, now called @param@.
1688Calling @helper@ represents this attempt to look inside.
1689It fails, because the @f2@ signature does not state that @param@ has any relationship to @Base@;
1690this signature does not give the \emph{system} a usable type.
1691The fact that the user's argument can be converted to @Base@ is lost when going through this signature.
1692
1693Moving forward needs an @f@ signature that conveys the relationship that the argument @d@ has to @Base@.
1694\CFA conveys type abilities, from caller to callee, by way of traits; so the challenge is to state the right trait member.
1695As initialization (d) illustrates, the @d@--@Base@ capability is an implicit conversion.
1696Unfortunately, in present state, \CFA does not have a first-class representation of an implicit conversion, the way operator @?{}@ (which is a value), done with the right overload, is arguably an explicit conversion.
1697So \CFA cannot directly convey ``@T@ compatible with @Base@'' in a trait.
1698
1699This work contributes a stand-in for such an ability, tunneled through the present-state trait system, shown in (e).
1700On the declaration/system side, the trait @embedded@, and its member, @`inner@, convey the ability to recover the @Base@, and thereby call @helper@.
1701On the user side, the @P9_EMBEDDED@ macro accompanies type definitions that work with @f3@-style declarations.
1702An option is presemt-state-available to compiler-emit @P9_EMBEDDED@ annotations automatically, upon each occurrence of an `inline` member.
1703The choice not to is based on conversions in \CFA being a moving target because of separate, ongoing work.
1704
1705An intended finished state for this area is achieved if \CFA's future efforts with conversions include:
1706\begin{itemize}
1707\item
1708treat conversion as operator(s), \ie values
1709\item
1710re-frame the compiler's current Plan-9 ``magic'' as seeking an implicit conversion operator, rather than seeking an @inline@ member
1711\item
1712make Plan-9 syntax cause an implementation of implicit conversion to exist (much as @struct@ syntax causes @forall(T)@ compliance to exist)
1713\end{itemize}
1714To this end, the contributed @P9_EMBEDDED@ expansion shows how to implement this conversion.
1715
1716
1717like in tests/list/dlist-insert-remove.
1718Future work should autogen those @P9_EMBEDDED@ declarations whenever it sees a plan-9 declaration.
1719The exact scheme chosen should harmonize with general user-defined conversions.
1720
1721Today's P9 scheme is: mary gets a function `inner returning this as dlink(mary).
1722Fred gets four of them in a diamond.
1723They're defined so that `inner is transitive; i.e. fred has two further ambiguous overloads mapping fred to dlink(fred).
1724The scheme allows the dlist functions to give the assertion, "we work on any T that gives a `inner to dlink(T)."
1725
1726
1727TODO: deal with: A doubly linked list is being designed.
1728
1729TODO: deal with: Link fields are system-managed.
1730Links in GDB.
1731\end{comment}