\chapter{Linked List}
\label{ch:list}

This chapter presents my work on designing and building a linked-list library for \CFA.
Due to time limitations and the needs expressed by the \CFA runtime developers, I focussed on providing a doubly-linked list and its bidirectional iterators for traversal.
Simpler data-structures, like stack and queue, can be built from the doubly-linked mechanism with only a slight storage/performance cost because of the unused link field.
Reducing to data-structures with a single link follows directly from the more complex double links and their iterators.


\section{Plan-9 Inheritance}
\label{s:Plan9Inheritance}

This chapter uses a form of inheritance from the Plan-9 C dialect~\cite[\S~3.3]{Thompson90new}, which is supported by @gcc@ and @clang@ using @-fplan9-extensions@.
\CFA has its own variation of the Plan-9 mechanism, where the nested type is denoted using @inline@.
\begin{cfa}
union U { int x; double y; char z; };
struct W { int i; double j; char k; };
struct S {
    @inline@ struct W;		$\C{// extended Plan-9 inheritance}$
    unsigned int tag;
    @inline@ U;				$\C{// extended Plan-9 inheritance}$
} s;
\end{cfa}
Inline inheritance is containment, where the inlined field is unnamed and the type's internal fields are hoisted into the containing structure.
Hence, the field names must be unique, unlike \CC nested types, but any inlined type-names are at a nested scope level, unlike aggregate nesting in C.
Note, the position of the containment is normally unimportant, unless there is some form of memory or @union@ overlay.
Finally, the inheritance declaration of @U@ is not prefixed with @union@.
Like \CC, \CFA allows optional prefixing of type names with their kind, \eg @struct@, @union@, and @enum@, unless there is ambiguity with variable names in the same scope.
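For instance, because the fields of @W@ and @U@ are hoisted, they are accessed directly through @s@; the following minimal sketch only restates the hoisting rule above.
\begin{cfa}
s.i = 3;  s.j = 3.5;  s.k = 'a';	$\C{// hoisted W fields}$
s.tag = 1;							$\C{// own field}$
s.x = 42;							$\C{// hoisted U field (overlaid union storage)}$
\end{cfa}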

\VRef[Figure]{f:Plan9Polymorphism} shows the key polymorphic feature of Plan-9 inheritance: implicit conversion of values and pointers for nested types.
In the example, there are implicit conversions from @S@ to @U@ and @S@ to @W@, extracting the appropriate value or pointer for the substructure.
\VRef[Figure]{f:DiamondInheritancePattern} shows complex multiple inheritance patterns are possible, like the \newterm{diamond pattern}~\cite[\S~6.1]{Stroustrup89}\cite[\S~4]{Cargill91}.
Currently, the \CFA type-system does not support @virtual@ inheritance.

\begin{figure}
\begin{cfa}
void f( U, U * );				$\C{// value, pointer}$
void g( W, W * );				$\C{// value, pointer}$
U u, * up;  W w, * wp;  S s, * sp;
u = s;  up = sp;				$\C{// value, pointer}$
w = s;  wp = sp;				$\C{// value, pointer}$
f( s, &s );						$\C{// value, pointer}$
g( s, &s );						$\C{// value, pointer}$
\end{cfa}
\caption{Plan-9 polymorphism}
\label{f:Plan9Polymorphism}
\end{figure}

\begin{figure}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
struct B { int b; };
struct L : @public B@ { int l, w; };
struct R : @public B@ { int r, w; };
struct T : @public L, R@ { int t; };
              T
       B L         B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate
\end{c++}
&
\begin{cfa}
struct B { int b; };
struct L { @inline B;@ int l, w; };
struct R { @inline B;@ int r, w; };
struct T { @inline L; inline R;@ int t; };
              T
       B L         B R
T t = { { { 5 }, 3, 4 }, { { 8 }, 6, 7 }, 3 };
t.t = 42;
t.l = 42;
t.r = 42;
((L &)t).b = 42; // disambiguate, proposed solution t.L.b = 42;
\end{cfa}
\end{tabular}
\caption{Diamond non-virtual inheritance pattern}
\label{f:DiamondInheritancePattern}
\end{figure}


\section{Features}

The following features directed this project, where the goal is high-performance list operations required by \CFA runtime components, like the threading library.


\subsection{Core Design Issues}

The doubly-linked list attaches links intrusively in a node, allows a node to appear on multiple lists (axes) simultaneously, integrates with user code via the type system, treats its ends uniformly, and identifies a list using an explicit head.
This design covers system and data management issues stated in \VRef{toc:lst:issue}.

\VRef[Figure]{fig:lst-features-intro} continues the running @req@ example from \VRef[Figure]{fig:lst-issues-attach} using the \CFA list @dlist@.
The \CFA link attachment is intrusive so the resulting memory layout is per user node, as for the LQ version of \VRef[Figure]{f:Intrusive}.
The \CFA framework provides generic type @dlink( T )@ (not to be confused with @dlist@) for the two link fields (front and back).
A user inserts the links into the @req@ structure via \CFA inline-inheritance \see{\VRef{s:Plan9Inheritance}}.
Lists leverage the automatic conversion of a pointer to an anonymous inline field for assignments and function calls.
Therefore, a reference to a @req@ is implicitly convertible to @dlink@ in both contexts.
The links in @dlink@ point at another embedded @dlink@ node, know the offsets of all links (data is abstract but accessible), and any field-offset arithmetic or link-value changes are safe and abstract.
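For example, a minimal sketch of a single-axis @req@ declaration (field names chosen only for illustration) matches the \lstinline{node} declaration used later in \VRef[Figure]{f:IteratorDriver}:
\begin{cfa}
struct req {
    int pri, rqr;				$\C{// user data (illustrative fields)}$
    @inline dlink(req);@		$\C{// intrusive link fields via inline inheritance}$
};
dlist(req) reqs;				$\C{// list head, default axis}$
req r;
insert_first( reqs, r );		$\C{// r implicitly converts to dlink(req)}$
\end{cfa}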

\begin{figure}
	\lstinput{20-30}{lst-features-intro.run.cfa}
	\caption[Multiple link axes in \CFA list library]{
		Demonstration of the running \lstinline{req} example, done using the \CFA list library.
		This example is equivalent to the three approaches in \VRef[Figure]{fig:lst-issues-attach}.
	}
	\label{fig:lst-features-intro}
\end{figure}

\VRef[Figure]{f:dlistOutline} shows the outline for types @dlink@ and @dlist@.
Note, the first @forall@ clause is distributed across all the declarations in its containing block, eliminating repetition on each declaration.
The second nested @forall@ on @dlist@ is also distributed and adds an optional second type parameter, @tLinks@, denoting the linking axis \see{\VRef{s:Axis}}, \ie the kind of list this node can appear on.

\begin{figure}
\begin{cfa}
forall( tE & ) {							$\C{// distributed}$
    struct @dlink@ { ... };					$\C{// abstract type}$
    static inline void ?{}( dlink( tE ) & this );	$\C{// constructor}$

    forall( tLinks & = dlink( tE ) ) {		$\C{// distributed, default type for axis}$
        struct @dlist@ { ... };				$\C{// abstract type}$
        static inline void ?{}( dlist( tE, tLinks ) & this );	$\C{// constructor}$
    }
}
\end{cfa}
\caption[Sketch of the dlink and dlist type definitions]{Sketch of the \lstinline{dlink} and \lstinline{dlist} type definitions }
\label{f:dlistOutline}
\end{figure}

\VRef[Figure]{fig:lst-features-multidir} shows how the \CFA library supports multiple inline links, so a node has multiple axes.
The declaration of @req@ has two inline-inheriting @dlink@ occurrences.
The first of these gives a type named @req.by_pri@, @req@ inherits from it, and it inherits from @dlink@.
The second occurrence similarly gives a type named @req.by_rqr@.
Thus, there is a diamond, non-virtual, inheritance from @req@ to @dlink@, with @by_pri@ and @by_rqr@ being the mid-level types \see{\VRef[Figure]{f:DiamondInheritancePattern}}.

The declarations of the list-head objects, @reqs_pri@, @reqs_rqr_42@, @reqs_rqr_17@, and @reqs_rqr_99@, bind which link nodes in @req@ are used by each list.
Hence, the type of the variable @reqs_pri@, @dlist(req, req.by_pri)@, means operations on @reqs_pri@ implicitly select (disambiguate) the correct @dlink@s, \eg a call @insert_first(reqs_pri, ...)@ implies, ``here, we are working by priority.''
As in \VRef[Figure]{fig:lst-issues-multi-static}, a priority list containing all nodes is constructed, along with requester lists containing only nodes with a given requester value (42, 17, and 99).
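A minimal sketch of the axis bindings follows; the complete, runnable version appears in \VRef[Figure]{fig:lst-features-multidir}, and the node initialization here is only illustrative.
\begin{cfa}
dlist(req, req.@by_pri@) reqs_pri;		$\C{// head bound to the priority axis}$
dlist(req, req.@by_rqr@) reqs_rqr_42;	$\C{// head bound to the requester axis}$
req r = { 5, 42 };						$\C{// illustrative: priority 5, requester 42}$
insert_first( reqs_pri, r );			$\C{// uses r's by\_pri links}$
insert_first( reqs_rqr_42, r );			$\C{// uses r's by\_rqr links}$
\end{cfa}
Because each head's type names its axis, the same node safely appears on both lists at once.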

\begin{figure}
\centering

\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{C} \\
	\begin{tabular}{@{}l@{}}
	\lstinput{20-31}{lst-features-multidir.run.cfa} \\
	\lstinput{43-71}{lst-features-multidir.run.cfa}
	\end{tabular}
&
	\lstinput[language=C++]{20-60}{lst-issues-multi-static.run.c}
\end{tabular}

\caption[Demonstration of multiple static link axes in \CFA]{
	Demonstration of multiple static link axes in \CFA.
	The right example is from \VRef[Figure]{fig:lst-issues-multi-static}.
	The left \CFA example does the same job.
}
\label{fig:lst-features-multidir}
\end{figure}

The list library also supports the common case of a single axis more naturally than LQ.
Returning to \VRef[Figure]{fig:lst-features-intro}, the single-axis list has no contrived name for the link axis as it uses the default type in the definition of @dlist@;
in contrast, the LQ list in \VRef[Figure]{f:Intrusive} adds the unnecessary field name @d@.
In \CFA, a single-axis list sets up a single inheritance with @dlink@, and the default list axis refers to it.

When operating on a list with several axes and operations that do not take the list head, the list axis can be ambiguous.
For example, a call like @insert_after( r1, r2 )@ does not have enough information to know which axis to select implicitly.
Is @r2@ supposed to be the next-priority request after @r1@, or is @r2@ supposed to join the same-requester list of @r1@?
As such, the \CFA type-system gives an ambiguity error for this call.
There are multiple ways to resolve the ambiguity.
The simplest is an explicit cast on each call to select the specific axis, \eg @insert_after( (by_pri)r1, r2 )@.
However, multiple explicit casts are tedious and error-prone.
To mitigate this issue, the list library provides a hook for applying the \CFA language's scoping and priority rules.
\begin{cfa}
with ( DLINK_VIA( req, req.by_pri ) ) insert_after( r1, r2 );
\end{cfa}
Here, the @with@ statement opens the scope of the object type for the expression;
hence, the @DLINK_VIA@ result causes one of the list axes to become a more attractive candidate to \CFA's overload resolution.
This boost can be applied across multiple statements in a block or an entire function body.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
@with( DLINK_VIA( req, req.by_pri ) ) {@
    ... insert_after( r1, r2 ); ...
@}@
\end{cfa}
&
\begin{cfa}
void f() @with( DLINK_VIA( req, req.by_pri ) ) {@
    ... insert_after( r1, r2 ); ...
@}@
\end{cfa}
\end{tabular}
\end{cquote}
Within the @with@, the code acts as if there is only one list axis, without explicit casting.

Unlike the \CC template container-types, the \CFA library works completely within the type system;
both @dlink@ and @dlist@ are ordinary types, not language macros.
There is no textual expansion other than header-included static-inline functions for performance.
Hence, errors in user code are reported only with mention of the library's declarations, versus long template names in error messages.
Finally, the library is separately compiled from the usage code, modulo inlining.


\section{List API}

\VRef[Figure]{f:ListAPI} shows the API for the doubly-linked list operations, where each is explained.
\begin{itemize}[leftmargin=*]
\item
@listed@ returns true if the node is an element of a list and false otherwise.
\item
@empty@ returns true if the list has no nodes and false otherwise.
\item
@first@ returns a reference to the first node of the list without removing it or @0p@ if the list is empty.
\item
@last@ returns a reference to the last node of the list without removing it or @0p@ if the list is empty.
\item
@insert_before@ adds a node before a specified node \see{\lstinline{insert_last} for insertion at the end}\footnote{
Some list packages allow \lstinline{0p} (\lstinline{nullptr}) for the before/after node implying insert/remove at the start/end of the list, respectively.
However, this inserts an \lstinline{if} statement in the fastpath of a potentially commonly used list operation.}
\item
@insert_after@ adds a node after a specified node \see{\lstinline{insert_first} for insertion at the start}\footnotemark[\value{footnote}].
\item
@remove@ removes a specified node from the list (any location) and returns a reference to the node.
\item
@iter@ creates an iterator for the list.
\item
@recede@ returns true if the iterator cursor is advanced to the previous (predecessor, towards first) node before the prior cursor node and false otherwise.
\item
@advance@ returns true if the iterator cursor is advanced to the next (successor, towards last) node after the prior cursor node and false otherwise.
\item
@first@ returns true if the node is the first node in the list and false otherwise.
\item
@last@ returns true if the node is the last node in the list and false otherwise.
\item
@prev@ returns a reference to the previous (predecessor, towards first) node before a specified node or @0p@ if the specified node is the first node in the list.
\item
@next@ returns a reference to the next (successor, towards last) node after a specified node or @0p@ if the specified node is the last node in the list.
\item
@insert_first@ adds a node to the start of the list so it becomes the first node and returns a reference to the node for cascading.
\item
@insert_last@ adds a node to the end of the list so it becomes the last node and returns a reference to the node for cascading.
\item
@remove_first@ removes the first node and returns a reference to it or @0p@ if the list is empty.
\item
@remove_last@ removes the last node and returns a reference to it or @0p@ if the list is empty.
\item
@transfer@ transfers all nodes from the @from@ list to the end of the @to@ list; the @from@ list is empty after the transfer.
\item
@split@ transfers the @from@ list up to node to the end of the @to@ list; the @from@ list becomes the list after the node.
The node must be in the @from@ list.
\end{itemize}
For operations @insert_*@, @insert@, and @remove@, a variable-sized list of nodes can be specified using \CFA's tuple type~\cite[\S~4.7]{Moss18} (not discussed), \eg @insert( list, n1, n2, n3 )@, which recursively invokes @insert( list, n@$_i$@ )@.\footnote{
Currently, a resolver bug between tuple types and references means tuple routines must use pointer parameters.
Nevertheless, the imaginary reference versions are used here as the code is cleaner, \eg no \lstinline{&} on call arguments.}

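As a sketch of the bulk operations @transfer@ and @split@, assuming the four nodes from \VRef[Figure]{f:IteratorDriver} and reading ``up to node'' as including the node:
\begin{cfa}
dlist(node) from, to;
insert( from, n1, n2, n3, n4 );		$\C{// from: 1 2 3 4}$
split( to, from, n2 );				$\C{// to: 1 2, from: 3 4}$
transfer( to, from );				$\C{// to: 1 2 3 4, from: empty}$
\end{cfa}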
\begin{figure}
\begin{cfa}
bool listed( E & node );
bool empty( dlist( E ) & list );
E & first( dlist( E ) & list );
E & last( dlist( E ) & list );
E & insert_before( E & before, E & node );
E & insert_after( E & after, E & node );
E & remove( E & node );
E & iter( dlist( E ) & list );
bool advance( E && refx );
bool recede( E && refx );
bool first( E & node );
bool last( E & node );
E & prev( E & node );
E & next( E & node );
E & insert_first( dlist( E ) & list, E & node );
E & insert_last( dlist( E ) & list, E & node );	// synonym insert
E & remove_first( dlist( E ) & list );
E & remove_last( dlist( E ) & list );
void transfer( dlist( E ) & to, dlist( E ) & from );
void split( dlist( E ) & to, dlist( E ) & from, E & node );
\end{cfa}
\caption{\CFA list API}
\label{f:ListAPI}
\end{figure}


\subsection{Iteration}

It is possible to iterate through a list manually or using a set of standard macros.
\VRef[Figure]{f:IteratorDriver} shows the iterator outline, managing a list of nodes, used throughout the following iterator examples.
Each example assumes its loop body prints the value in the current node.

\begin{figure}
\begin{cfa}
#include <fstream.hfa>
#include <list.hfa>
struct node {
    int v;
    inline dlink(node);
};
int main() {
    dlist(node) list;
    node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
    insert( list, n1, n2, n3, n4 );		$\C{// insert in any order}$
    sout | nlOff;
    @for ( ... )@ sout | it.v | ","; sout | nl;	$\C{// iterator examples in text}$
    remove( n1, n2, n3, n4 );			$\C{// remove in any order}$
}
\end{cfa}
\caption{Iterator driver}
\label{f:IteratorDriver}
\end{figure}

The manual method is low level but allows complete control of the iteration.
The list cursor (index) can be either a pointer or a reference to a node in the list.
The choice depends on how the programmer wants to access the fields: @it->f@ or @it.f@.
The following examples use a reference because the loop body manipulates the node values rather than the list pointers.
The end of iteration is denoted by the loop cursor returning @0p@.

\noindent
Iterating forward and reverse through the entire list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @first@( list ); &it /* != 0p */ ; &it = &@next@( it ) ) ...
for ( node & it = @last@( list ); &it; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse from a starting node through the remaining list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @n2@; &it; &it = &@next@( it ) ) ...
for ( node & it = @n3@; &it; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
2, 3, 4,
3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse from a starting node to an ending node through the contained list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @n2@; &it @!= &n4@; &it = &@next@( it ) ) ...
for ( node & it = @n4@; &it @!= &n2@; &it = &@prev@( it ) ) ...
\end{cfa}
&
\begin{cfa}
2, 3,
4, 3,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse through the entire list using the shorthand: start at the list head and pick an axis.
In this case, @advance@ and @recede@ return a boolean, like \CC @while ( cin >> i )@.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = @iter@( list ); @advance@( it ); ) ...
for ( node & it = @iter@( list ); @recede@( it ); ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Finally, there are convenience macros that look like @foreach@ in other programming languages.
Iterating forward and reverse through the entire list.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
FOREACH( list, it ) ...
FOREACH_REV( list, it ) ...
\end{cfa}
&
\begin{cfa}
1, 2, 3, 4,
4, 3, 2, 1,
\end{cfa}
\end{tabular}
\end{cquote}
Iterating forward and reverse through the entire list or until a predicate is triggered.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
FOREACH_COND( list, it, @it.v == 3@ ) ...
FOREACH_REV_COND( list, it, @it.v == 1@ ) ...
\end{cfa}
&
\begin{cfa}
1, 2,
4, 3, 2,
\end{cfa}
\end{tabular}
\end{cquote}
Macros are not ideal, so future work is to provide a language-level @foreach@ statement, like \CC.
Finally, a predicate can be added to any of the manual iteration loops.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}l|l@{}}
\begin{cfa}
for ( node & it = first( list ); &it @&& !(it.v == 3)@; &it = &next( it ) ) ...
for ( node & it = last( list ); &it @&& !(it.v == 1)@; &it = &prev( it ) ) ...
for ( node & it = iter( list ); advance( it ) @&& !(it.v == 3)@; ) ...
for ( node & it = iter( list ); recede( it ) @&& !(it.v == 1)@; ) ...
\end{cfa}
&
\begin{cfa}
1, 2,
4, 3, 2,
1, 2,
4, 3, 2,
\end{cfa}
\end{tabular}
\end{cquote}

\begin{comment}
Many languages offer an iterator interface for collections, and a corresponding for-each loop syntax for consuming the items through implicit interface calls.
\CFA does not yet have a general-purpose form of such a feature, though it has a form that addresses some use cases.
This section shows why the incumbent \CFA pattern does not work for linked lists and gives the alternative now offered by the linked-list library.
Chapter 5 [TODO: deal with optimism here] presents a design that satisfies both uses and accommodates even more complex collections.

The current \CFA extensible loop syntax is:
\begin{cfa}
for( elem; end )
for( elem; begin ~ end )
for( elem; begin ~ end ~ step )
\end{cfa}
Many derived forms of @begin ~ end@ exist, but are used for defining numeric ranges, so they are excluded from the linked-list discussion.
These three forms rely on the iterative trait:
\begin{cfa}
forall( T ) trait Iterate {
    void ?{}( T & t, zero_t );
    int ?<?( T t1, T t2 );
    int ?<=?( T t1, T t2 );
    int ?>?( T t1, T t2 );
    int ?>=?( T t1, T t2 );
    T ?+=?( T & t1, T t2 );
    T ?+=?( T & t, one_t );
    T ?-=?( T & t1, T t2 );
    T ?-=?( T & t, one_t );
}
\end{cfa}
where @zero_t@ and @one_t@ are constructors for the constants 0 and 1.
The simple loops above are abbreviations for:
\begin{cfa}
for( typeof(end) elem = @0@; elem @<@ end; elem @+=@ @1@ )
for( typeof(begin) elem = begin; elem @<@ end; elem @+=@ @1@ )
for( typeof(begin) elem = @0@; elem @<@ end; elem @+=@ @step@ )
\end{cfa}
which use a subset of the trait operations.
The shortened loop works well for iterating a number of times or through an array.
\begin{cfa}
for ( 20 )					// 20 iterations
for ( i: 1 ~= 21 ~ 2 )		// odd numbers
for ( i; n ) total += a[i];	// subscripts
\end{cfa}
which is similar to other languages, like JavaScript.
\begin{cfa}
for ( i in a ) total += a[i];
\end{cfa}
Albeit with different mechanisms for expressing the array's length.
It might be possible to take the \CC iterator:
\begin{c++}
for ( list<int>::iterator it=mylist.begin(); it != mylist.end(); ++it )
\end{c++}
and convert it to the \CFA form
\begin{cfa}
for ( it; begin() ~= end() )
\end{cfa}
by having a list operator @<=@ that just looks for equality, and @+=@ that moves to the next node, \etc.

However, the list usage is contrived, because a list does not use its data values for relational comparison, only links for equality comparison.
Hence, the focus of a list iterator's stopping condition is fundamentally different.
So, iteration of a linked list via the existing loop syntax is to ask whether this syntax can also do double-duty for iterating values.
That is, to be an analog of JavaScript's @for..of@ syntax:
\begin{cfa}
for ( e of a ) total += e;
\end{cfa}

The \CFA team will likely implement an extension of this functionality that moves the @~@ syntax from being part of the loop, to being a first-class operator (with associated multi-place operators for the elided derived forms).
With this change, both @begin ~ end@ and @end@ (in context of the latter ``two-place for'' expression) parse as \emph{ranges}, and the loop syntax becomes, simply:
\begin{cfa}
	for( elem; rangeExpr )
\end{cfa}
The expansion and underlying API are under discussion.
TODO: explain pivot from ``is it done?'' to ``has more?''
Advantages of this change include being able to pass ranges to functions, for example, projecting a numerically regular subsequence of array entries, and being able to use the loop syntax to cover more collection types, such as looping over the keys of a hash table.

When iterating an empty list, the question, ``Is there a further element?'' needs to be posed once, receiving the answer, ``no.''
When iterating an $n$-item list, the same question gets $n$ ``yes'' answers (one for each element), plus one ``no'' answer, once there are no more elements; the question is posed $n+1$ times.

When iterating an empty list, the question, ``What is the value of the current element?'' is never posed, nor is the command, ``Move to the next element,'' issued. When iterating an $n$-item list, each happens $n$ times.

So, asking about the existence of an element happens once more than retrieving an element's value and advancing the position.

Many iteration APIs deal with this fact by splitting these steps across different functions, and relying on the user's knowledge of iterator state to know when to call each. In Java, the function @hasNext@ should be called $n+1$ times and @next@ should be called $n$ times (doing the double duty of advancing the iteration and returning a value). In \CC, the jobs are split among the three actions, @it != end@, @it++@ and @*it@, the latter two being called one time more than the first.

TODO: deal with simultaneous axes: @DLINK_VIA@ just works

TODO: deal with spontaneous simultaneity, like a single-axis req, put into an array: which ``axis'' is @&req++@ navigating: array-adjacency vs link dereference. It should stick according to how you got it in the first place: navigating dlist(req, req.pri) vs navigating array(req, 42). (prob. future work)
\end{comment}


\section[C++ Lists]{\CC Lists}

It is worth addressing two API issues in \CC lists avoided in \CFA.
First, \CC lists require two steps to remove a node versus one in \CFA.
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node> li;
node n = li.front(); // assignment could raise exception
li.pop_front();
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n = remove_first( list );

\end{cfa}
\end{tabular}
\end{cquote}
The argument for two steps is exception safety: returning an unknown T by value/move might throw an exception from T's copy/move constructor.
Hence, to be \emph{exception safe}, all internal list operations must complete before the copy/move so the list is consistent should the return fail.
This coding style can result in contrived code, but is usually possible;
however, it requires the container designer to anticipate the potential throw.
(Note, this anticipation issue is pervasive in exception systems, not just with containers.)
The solution moves the coding complexity from the container designer to the programmer~\cite[ch~10, part 3]{Sutter99}.
First, obtain the node, which might fail, but the container is unmodified.
Second, remove the node, which modifies the container without the possibility of an exception.
This movement of responsibility increases the cognitive effort for programmers.
Unfortunately, this \emph{single-responsibility principle}, \ie preferring separate operations, is often repeated as a necessary requirement rather than an optional one.
Separate operations should always be available, but their composition should also be available.
Interestingly, this issue does not apply to intrusive lists, because the node data is never copied/moved in or out of a list;
only the link fields are accessed in list operations.

Second, \VRef[Figure]{f:CCvsCFAListIssues} shows an example where a \CC list operation is $O(N)$ rather than $O(1)$ in \CFA.
This issue is inherent to wrapped (non-intrusive) lists.
Specifically, to remove a node requires access to the links that materialize its membership.
In a wrapped list, there is no access from node to links, and for abstraction reasons, no direct pointers to wrapped nodes, so the links must be found indirectly by navigating the list.
The \CC iterator is the abstraction to navigate wrapped links.
So an iterator is needed, not because it offers go-next, but for managing the list membership.
Note, attempting to keep an array of iterators to each node requires considerable complexity to ensure the list and array are harmonized.

\begin{figure}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{\CFA} \\
\begin{c++}
list<node *> list;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
list.push_back( &n1 );  list.push_back( &n2 );
list.push_back( &n3 );  list.push_back( &n4 );
list<node *>::iterator it;
for ( it = list.begin(); it != list.end(); it++ )
    if ( *it == &n3 ) { list.erase( it ); break; }
\end{c++}
&
\begin{cfa}
dlist(node) list;
node n1 = { 1 }, n2 = { 2 }, n3 = { 3 }, n4 = { 4 };
insert( list, n1, n2, n3, n4 );




remove( list, n3 );
\end{cfa}
\end{tabular}
\caption{Obtaining a linked-list iterator in \CC \vs \CFA}
\label{f:CCvsCFAListIssues}
\end{figure}

\begin{comment}
Dave Dice:
There are what I'd call the "dark days" of C++, where the language \& libraries seemed to be getting progressively uglier - requiring more tokens to express something simple, and lots of arcana.
But over time, somehow, they seem to have mostly righted the ship and now I can write C++ code that's fairly terse, like python, by ignoring all the old constructs.
(They carry around the legacy baggage, of course, but seemed to have found a way to evolve away from it).

If you just want to traverse a std::list, then, using modern "for" loops, you never need to see an iterator.
I try hard never to need to write X.begin() or X.end().
(There are situations where I'll expose iterators for my own types, however, to enable modern "for" loops).
If I'm implementing simple linked lists, I'll usually skip std:: collections and do it myself, as it's less grief.
I just don't get that much advantage from std::list. And my code is certainly not any shorter.
On the other hand, @std::map@, @unordered_map@, @set@, and friends, are terrific, and I can usually still avoid seeing any iterators, which are blight to the eye.
So those are a win and I get to move up an abstraction level, and write terse but easily understood code that still performs well.
(One slight concern here is that all the C++ collection/container code is templated and lives in include files, and not traditional libraries.
So the only way to replace something - say with a better algorithm, or you have a bug in the collections code - is to boil the oceans and recompile everything.
But on the other hand the compiler can specialize to the specific use case, which is often a nice performance win).

And yeah, the method names are pretty terrible. I think they boxed themselves in early with a set of conventions that didn't age well, and they tried to stick with it and force regularity over the collection types.

I've never seen anything written up about the history, although lots of the cppcon talks make note of the "bad old days" idea and how collection \& container library design evolved. The issue is certainly recognized.
\end{comment}


\section{Implementation}

\VRef[Figure]{fig:lst-impl-links} continues the running @req@ example, showing the \CFA list library's internal representation.
The @dlink@ structure contains exactly two pointers: @next@ and @prev@, which are opaque.
Even though the user-facing list model is linear, the \CFA library implements every list as circular.
This choice helps achieve uniform end treatment.
% and \PAB{TODO finish summarizing benefit}.
A link pointer targets a neighbouring @dlink@ structure, rather than a neighbouring @req@.
(Recall, the running example has the user putting a @dlink@ within a @req@.)

\begin{figure}
	\centering
	\includegraphics[width=\textwidth]{lst-impl-links.pdf}
	\caption{
		\CFA list library representations for headed and headless lists
	}
	\label{fig:lst-impl-links}
\end{figure}

Circular link-pointers (dashed lines) are tagged internally in the pointer to indicate linear endpoints.
Links among neighbour nodes are not tagged.
Iteration reports ``has more elements'' when accessing untagged links, and ``no more elements'' when accessing tagged links.
Hence, the tags are set on the links that a user cannot navigate.
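The tag encoding is internal and not specified here; one minimal sketch, \emph{assuming} the tag occupies the otherwise-unused low bit of an aligned link pointer (hypothetical helpers, not the library's actual API), is:
\begin{cfa}
typedef uintptr_t tagged_link;					$\C{// assumes <stdint.h>}$
static inline tagged_link tag( dlink(req) * p ) { return (uintptr_t)p | 1; }
static inline bool is_tagged( tagged_link l ) { return l & 1; }	$\C{// endpoint link?}$
static inline dlink(req) * untag( tagged_link l ) { return (dlink(req) *)( l & ~(uintptr_t)1 ); }
\end{cfa}
Under this sketch, iteration reports ``has more elements'' exactly when @is_tagged@ is false for the link just followed.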

The \CFA library works in headed and headless modes.
In a headed list, the list head (@dlist(req)@) acts as an extra element in the implementation-level circularly-linked list.
The content of a @dlist@ header is a (private) @dlink@, with the @next@ pointer to the first element, and the @prev@ pointer to the last element.
Since the head wraps a @dlink@, as does @req@, and since a link-pointer targets a @dlink@, the resulting cycle is among @dlink@ structures, situated inside the header and nodes.
An untagged pointer points within a @req@, while a tagged pointer points within a list head.
In a headless list, the circular backing list is only among @dlink@s within @req@s.

No distinction is made between an unlisted node (top left and middle) under a headed model and a singleton list under a headless model.
Both are represented as an item referring to itself, with both tags set.


\section{Assessment}
\label{toc:lst:assess}

This section examines the performance of the discussed list implementations.
The goal is to show the \CFA lists are competitive with other designs, but the different list designs may not have equivalent functionality, so it is impossible to select a winner encompassing both functionality and execution performance.


\subsection{Experiment Design}

This section explains how the experiment is built.
Many of the following parts define terminology concerning tuning knobs.
\VRef[Figure]{f:ListPerfGlossary} provides a consolidated reference.

\begin{figure}
\noindent
\begin{tabular}{p{1.75in}@{\ }p{4.5in}}
Insert-Remove (IR)
	& The atomic unit of work being measured: one insertion plus one removal (plus all looping/tracking overheads) \\
Use Case
	& Pattern of add-remove calls. \\
-- Movement & \\
	\quad $\ni$ stack
	& IRs happen at the same end. \\
	\quad $\ni$ queue
	& IRs happen at opposite ends. \\
-- Polarity
	& Which of the two orientations in which the movement happens. \\
	\quad $\ni$ insert-first
	& All inserts at front; stack removes at front; queue removes at back. \\
	\quad $\ni$ insert-last
	& All inserts at back; stack removes at back; queue removes at front. \\
-- Accessor
	& How an insertion position, or removal element, is specified. The same position/element is picked either way. \\
	\quad $\ni$ all head
	& IRs both through the head \\
	\quad $\ni$ insert element
	& insert by element and remove through the head \\
	\quad $\ni$ remove element
	& insert through head and remove by element \\
Physical Context & \\
-- Size (number) & Number of nodes being linked. Unless specified, equals the \emph{length} of the program's sole list. \emph{Width}, rarely used, is the number of lists. \\
-- Size Zone
	& Contiguous range of sizes, chosen to avoid known anomalies and to sample a brief plateau. Each zone buckets four specific sizes. \\
	\quad $\ni$ small
	& lists of 4--16 elements \\
	\quad $\ni$ medium
	& lists of 50--200 elements \\
	\quad $\ni$ (other)
	& Not used for comparing intrusive frameworks. \\
-- Machine
	& Computer running the experiment \\
	\quad $\ni$ AMD
	& smaller cache \\
	\quad $\ni$ Intel
	& bigger cache \\
Framework & A particular linked-list implementation (within its host language) \\
$\ni$ \CC & The @std::list@ type of g++. \\
$\ni$ lq-list & The @list@ type of LQ from glibc of gcc. \\
$\ni$ lq-tailq & The @tailq@ type of the same. \\
$\ni$ \uCpp & \uCpp's @uSequence@ \\
$\ni$ \CFA & \CFA's @dlist@ \\
Explanation
	& How an independent explanatory variable X is analyzed \\
-- Marginalized
	& Left alone, allowed to vary, yielding a more absolute measure. Shows the effect that X causes. If all explanations are marginalized, then absolute times are available and a relative time has a peer group that is the entire population. \\
-- Conditioned
	& Held constant, yielding a more relative measure. Hides the effect that X causes. Conditioning on X creates more, smaller relative-measure peer groups, by isolating each X-domain value. Resulting interpretation is, ``Assuming no change in X.'' \\
\end{tabular}
\caption{
	Glossary of terms used in the list performance evaluation
}
\label{f:ListPerfGlossary}
\end{figure}


\subsubsection{Add-Remove Performance}
\label{s:AddRemovePerformance}

The fundamental job of a linked-list library is to manage the links that connect nodes.
Any link management is an action that causes pair(s) of elements to become, or cease to be, adjacent in the list.
Thus, adding and removing an element are the sole primitive actions.

Repeated adding and removing is necessary to measure timing because these operations can be as short as a dozen instructions.
These instruction sequences may have cases that proceed (in a modern, deep pipeline) without a stall.

This experiment takes the position that:
\begin{enumerate}[leftmargin=*]
	\item The total time to add and remove is relevant, as opposed to having one time for adding and a separate time for removing.
	Adds without removes quickly fill memory;
	removing without adding is impossible.
	\item A relevant breakdown ``by operation'' is, rather, the usage pattern of the add/remove calls.
	An example pattern choice is adding and removing at the same end, making a stack, or opposite ends, for a queue.
	Another is pushing on the front by calling @insert_first(lst, e)@ \vs @insert(e, old_first_elm)@; this aspect provides the test's API coverage.
	\VRef[Section]{s:UseCases} gives the full breakdown.
	\item Speed differences caused by the host machine's memory hierarchy need to be identified and explained,
	but do not represent advantages of one framework over another.
\end{enumerate}

The experiment measures the mean duration of a sequence of additions and removals.
The distribution of speeds experienced by an individual add-remove pair (tail latency) is not discussed.
Space efficiency is shown only indirectly, by way of caches' impact on speed.
The experiment is sensitive enough to show:
\begin{itemize}
	\item intrusive lists performing (majorly) differently than wrapped lists,
	\item a space of (lesser) performance differences among the intrusive lists.
\end{itemize}

In all cases, the quantity discussed is the duration of one insert-remove (IR).
An IR is the time taken to do one innermost insertion-loop iteration, one innermost removal-loop iteration, and its share of all overheads, amortized.
Lower IR is better.
This experiment typically does an IR in 1--10 ns.
The short end of this range has durations of single-digit clock-cycle counts.
Therefore, the situations that achieve the best times are saturating the instruction pipeline successfully.

Often, an IR duration value needs to be considered relatively.
For example, \VRef{s:SweetSoreSpots} asks whether one linked-list implementation is more sensitive than another to the computer architecture.
A finding might be that a machine slows implementation A by 10\% and B by 20\%.
This finding is not saying that A is faster than B (on either machine).
The finding could stand if B starts faster and then levels off, if B starts slower and gets worse, or in myriad other cases.
The finding asserts that such distinctions are not what is immediately relevant.
The arithmetic that produces the percentages removes the information about which implementation starts or ends up faster.
Each implementation's to-machine duration is stated relative to \emph{the same implementation's} from-machine duration.
The resulting measure is still about a duration.
The framework with the lower from-machine-relative duration handles the change better.
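As a worked example with hypothetical numbers: if A takes 2.0 ns on machine 1 and 2.2 ns on machine 2, its relative duration is $2.2/2.0 = 1.10$ (10\% slower); if B takes 1.5 ns and 1.8 ns, its relative duration is $1.8/1.5 = 1.20$ (20\% slower).
Here, B is the faster framework on both machines, yet A handles the machine change better.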


\subsubsection{Test Program}
\label{s:TetProgram}

The experiment driver defines an (intrusive) node type:
\begin{cfa}
struct Node {
    int i, j, k;  // fields
    // possible intrusive links
};
\end{cfa}
and considers the speed of building and tearing down a list of $n$ instances of it.
% A number of experimental rounds per clock check is precalculated to be appropriate to the value of $n$.
\begin{cfa}
// simplified harness: CFA implementation,
// stack movement, insert-first polarity, head-mediated access
size_t totalOpsDone = 0;
dlist( node_t ) lst;
node_t nodes[ n ];					$\C{// preallocated list storage}$
startTimer();
while ( CONTINUE ) {				$\C{// \(\approx\) 20 second duration}$
    for ( i; n ) insert_first( lst, nodes[i] );	$\C{// build up}$
    for ( i; n ) remove_first( lst );	$\C{// tear down}$
    totalOpsDone += n;
}
stopTimer();
reportedDuration = getTimerDuration() / totalOpsDone;  // throughput per IR operation
\end{cfa}
To reduce administrative overhead, the $n$ nodes for each experiment list are preallocated in an array (on the stack), which removes dynamic allocations for this storage.
These nodes contain an intrusive pointer for intrusive lists, or a pointer to a node is stored in a dynamically-allocated internal node for a wrapped list.
Copying the node for wrapped lists skews the results with administration costs.
The list insertion/removal operations are repeated for a typical 20+ second duration.
After each round, a counter is incremented by $n$ (for throughput).
Time is measured outside the loop because a large $n$ can overrun the time duration before the @CONTINUE@ flag is tested.
Hence, there is a minimum of one outer (@CONTINUE@) loop iteration for large lists.
The loop duration is divided by the counter and this throughput is reported.
In a scatter-plot, each dot is one throughput, which means insert + remove + harness overhead.
The harness overhead is constant when comparing linked-list frameworks and is kept as small as possible.
% The remainder of the setup section discusses the choices that affected the harness overhead.

To test list operations, the experiment performs the inserts/removes in different patterns, \eg insert and remove from front, insert from front and remove from back, random insert and remove, \etc.
Unfortunately, the @std::list@ does \emph{not} support direct IR from a node without an iterator, \ie no @erase( node )@, even though the list is doubly-linked.
To eliminate the iterator, a trick is used for random insertions without replacement, which takes advantage of the array nature of the nodes.
The @i@ fields in each node are initialized from @0..n-1@.
These @i@ values are then shuffled in the nodes, and the @i@ value is used to represent an indirection to that node for insertion.
Hence, the nodes are inserted in random order and removed in the same random order.
$\label{p:Shuffle}$
\begin{cfa}
    for ( i; n ) @nodes[i].i = i@;		$\C[3.25in]{// indirection}$
    shuffle( nodes, n );				$\C{// random shuffle indirects within nodes}$

    while ( CONTINUE ) {
        for ( i; n ) {
            node_t & temp = nodes[ nodes[i].i ];	$\C{// select random node in array}$
            @temp.j = 0;@				$\C{// only touch random node for wrapped nodes}$
            insert_first( lst, temp );	$\C{// build up}$
        }
        for ( i; n ) pass( &remove_first( lst ) );	$\C{// tear down}\CRT$
        totalOpsDone += n;
    }
\end{cfa}
Note, insertion is traversing the array of nodes linearly, @nodes[i]@.
For intrusive lists, the inserted (random) node is always touched because its link fields are read/written for insertion into the list.
Hence, the array of nodes is being accessed both linearly and randomly during the traversal.
For wrapped lists, the wrapped nodes are traversed linearly but the random node is not accessed; only a pointer to it is inserted into the linearly accessed wrapped node.
Hence, the traversal is the same as the non-random traversal above.
To level the experiments, an explicit access to the random node, @temp.j = 0@, is inserted after the insertion for the wrapped experiment.
Furthermore, it is rare to IR nodes and not access them.

% \emph{Interleaving} allows for movements other than pure stack and queue.
% Note that the earlier example of using the iterators' array is still a pure stack: the item selected for @erase(...)@ is always the first.
% Including a less predictable movement is important because real applications that justify doubly linked lists use them.
% Freedom to remove from arbitrary places (and to insert under more relaxed assumptions) is the characteristic function of a doubly linked list.
% A queue with drop-out is an example of such a movement.
% A list implementation can show unrepresentative speed under a simple movement, for example, by enjoying unchallenged ``Is first element?'' branch predictions.

% Interleaving brings ``at middle of list'' cases into a stream of add or remove invocations, which would otherwise be exclusively ``at end''.
% A chosen split, like half middle and half end, populates a boolean array, which is then shuffled.
% These booleans then direct the action to end-\vs-middle.
%
% \begin{cfa}
% // harness (bookkeeping and shuffling elided): CFA implementation,
% // stack movement, insert-first polarity, interleaved element-based remove access
% dlist( item_t ) lst;
% item_t items[ n ];
% @bool interl[ n ];@  // elided: populate with weighted, shuffled [0,1]
% while ( CONTINUE ) {
% 	item_t * iters[ n ];
% 	for ( i; n ) {
% 		insert_first( items[i] );
% 		iters[i] = & items[i];
% 	}
% 	@item_t ** crsr[ 2 ]@ = {  // two cursors into iters
% 		& iters[ @0@ ],  // at stack-insert-first's removal end
% 		& iters[ @n / interl_frac@ ]  // in middle
% 	};
% 	for ( i; n ) {
% 		item_t *** crsr_use = & crsr[ interl[ i ] ];
% 		remove( *** crsr_use );  // removing from either middle or end
% 		*crsr_use += 1;  // that item is done
% 	}
% 	assert( crsr[0] == & iters[ @n / interl_frac@ ] );  // through second's start
% 	assert( crsr[1] == & iters[ @n@ ] );  // did the rest
% }
% \end{cfa}
%
% By using the pair of cursors, the harness avoids branches, which could incur prediction stall times themselves, or prime a branch in the SUT.
% This harness avoids telling the hardware what the SUT is about to do.


\subsubsection{Use Cases}
\label{s:UseCases}

Where \VRef[Figure]{f:ListPerfGlossary} enumerates the specific values, recall the use-case dimensions are:
\begin{description}
	\item[movement ($\times 2$)]
	In these experiments, strict stack and queue patterns are tested.
	\item[polarity ($\times 2$)]
	Obtain one polarity from the other by reversing uses of first/last.
	\item[accessor ($\times 3$)]
	Giving an add/remove location by a list head's first/last, \vs by a preexisting reference to an individual element.
\end{description}

\begin{figure}
\begin{comment}
\centering
\setlength{\tabcolsep}{8pt}
\begin{tabular}{@{}ll@{}}
\begin{tabular}{@{}c|c|c@{}}
movement & polarity & accessor \\
\hline
\hline
stack &
	\begin{tabular}{@{}l@{}}
	insert-first \\
	\hline
	insert-last
	\end{tabular}
	&
	\begin{tabular}{@{}l@{}}
	insert-head / remove-head \\
	\hline
	insert-list / remove-head \\
	\hline
	insert-head / remove-list
	\end{tabular}
	\\
\hline
queue &
	\begin{tabular}{@{}l@{}}
	insert-first \\
	\hline
	insert-last
	\end{tabular}
	&
	\begin{tabular}{@{}l@{}}
	insert-head / remove-head \\
	\hline
	insert-list / remove-head \\
	\hline
	insert-head / remove-list
	\end{tabular}
\end{tabular}
&
	\setlength{\tabcolsep}{3pt}
	\small
	\begin{tabular}{@{}ll@{}}
	I: & stack, insert first, all head \\
	II: & stack, insert first, insert element \\
	III: & stack, insert first, remove element \\
	IV: & stack, insert last, all head \\
	V: & stack, insert last, insert element \\
	VI: & stack, insert last, remove element \\
	VII: & queue, insert first, all head \\
	VIII: & queue, insert first, insert element \\
	IX: & queue, insert first, remove element \\
	X: & queue, insert last, all head \\
	XI: & queue, insert last, insert element \\
	XII: & queue, insert last, remove element \\
	\end{tabular}
\end{tabular}
\end{comment}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{rcccccccccccc}
& I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII \\
Movement &
stack & stack & stack & stack & stack & stack &
queue & queue & queue & queue & queue & queue \\
Polarity &
i-first & i-first & i-first & i-last & i-last & i-last &
i-first & i-first & i-first & i-last & i-last & i-last \\
Accessor &
all hd & ins-e & rem-e & all hd & ins-e & rem-e &
all hd & ins-e & rem-e & all hd & ins-e & rem-e
\end{tabular}
\caption{Experiment use cases, numbered}
\label{f:ExperimentOperations}
\end{figure}

A use case is a specific selection of movement, polarity and accessor.
These experiments run twelve use cases.
When a comparison is showing only what can happen when switching among use cases (as opposed to, \eg how stacks are different from queues), the numbering scheme of \VRef[Figure]{f:ExperimentOperations} is used.

With accessor, when an action names its insertion position or removal element, the harness either
\begin{itemize}
\item defers to the list-head's tracking of first/last (``through the head''), or
\item applies its own knowledge of the current pattern, to name a position/element that happens to be first/last (``of known element'').
\end{itemize}

The accessor patterns, at the (\CFA) API level, are as follows, with a sketch after the list:
\begin{description}
	\item[all (through the) head:] Both IRs happen through the list head. The list head operations are @insert_first@, @insert_last@, @remove_first@ and @remove_last@.
	\begin{sloppypar}
	\item[insert (of known) element] \dots and remove through head: Inserts use @insert_before(e, first)@ or @insert_after(e, last)@, where @e@ is being inserted and @first@/@last@ are element references known by list-independent means.
	\end{sloppypar}
	\item[remove (of known) element] \dots and insert through the head: Removes use @remove(e)@, where @e@ is being removed. List-independent knowledge establishes that @e@ is first or last, as appropriate.
\end{description}
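A minimal sketch of the three accessors, for stack movement with insert-first polarity, where @known_first@ is a hypothetical element reference the harness tracks by list-independent means:
\begin{cfa}
insert_first( lst, e );  remove_first( lst );			$\C{// all head}$
insert_before( e, known_first );  remove_first( lst );	$\C{// insert element}$
insert_first( lst, e );  remove( known_first );			$\C{// remove element}$
\end{cfa}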

Comparing all-head with insert-element gives the relative performance of head-mediated \vs element-oriented insertion, because both use the same removal style.
Comparing all-head with remove-element gives the relative performance of head-mediated \vs element-oriented removal, because both use the same insertion style.


\subsubsection{Sizing}

Intuition suggests measuring IR for different-sized lists should just be a multiple of the single linking/unlinking of a node.
However, there is a scaling issue as more memory is being read and written, where caching comes into play.
But caching is more than the amount of memory being accessed;
the access pattern is equally important.
Aggressive instruction-level parallelism scheduling, which enables short IR times, is the amplifier, \eg a data dependency is a critical path in one situation but not in another.
Therefore, the duration response to size is not a steady worsening as size increases.
Often, each size-independent configuration responds to size increases in steps of slowdown.
Occasionally a slowdown step is followed by some performance increase, where an incurred penalty begins to amortize away.
Hence, performance results can have interesting jitter as size increases.
The analysis treats these behaviours as incidental.
It does not try to characterize various exact-size responses.
Rather, size zones are picked, specific effects inside of a zone are averaged away, and the story at one zone is compared to that at another.

% It is true, but perhaps not obvious, that building and destroying long lists is slower than building and destroying short lists.
% Obviously, indeed, it takes longer to fuse and divide a hundred neighbours than five.
% But the key metric in this work, IR, is about a single link--unlink.
% So, critically, linking and unlinking a hundred neighbours actually takes \emph{more} than $20\times$ the time for five neighbours.
% The main reason is caching; when more neighbours are being manipulated, more memory is being read and written.
%
% But caching success is about more than the amount of memory worked on.
% Subtle changes in pattern become butterfly effects.
% Aggressive ILP scheduling, which enables short IR times, is the amplifier.
% A data dependency, present in one framework but not another, is a critical path in one situation but not in another.
% So, duration's response to size is not a steady worsening as size increases.
% Rather, each size-independent configuration often responds to size increases with leaps of worsening.
% Occasionally a leap is even followed by a size-run of retrograde response, where a suddenly incurred penalty has a chance to amortize away.
% The frameworks tend to leapfrog over each other, at different points, as size increases.
%
% The analysis treats these behaviours as incidental.
% It does not try to characterize various exact-size responses.
% Rather, size zones are picked, specific effects inside of a zone are averaged away, and the story at one zone is compared to that at another zone.

\begin{figure}
\centering
	\setlength{\tabcolsep}{0pt}
	\begin{tabular}{p{3.4375in}p{0.125in}p{2.9375in}}
	\subfloat[ ]{\label{f:zoomin-abs-i-swift}
		\includegraphics{plot-list-zoomin-abs-i-swift.pdf}
	} & &
	\subfloat[ ]{\label{f:zoomin-abs-viii-java}
		\includegraphics{plot-list-zoomin-abs-viii-java.pdf}
	}
	\end{tabular}
	\caption[Variety of IR duration responses to list length, at small--medium lengths]{Variety of IR duration responses to list length, at small--medium lengths. Two example use cases are shown: I, stack movement with head-only access (plot a); VIII, queue movement with element-oriented removal access (plot b); both use cases have insert-first polarity. One example is run on each machine: UC-I on AMD (plot a); UC-VIII on Intel (plot b). Lower is better.}
	\label{fig:plot-list-zoomin-abs}
\end{figure}
1046
1047\VRef[Figure]{fig:plot-list-zoomin-abs} gives two example responses to size.
1048% The dataset here is a small portion of the overall result and it is premature to attempt conclusions about framework differences from it.
1049These two example cases show how differently a pair of individual configurations behave.
1050% Of more immediate significance, they also have a pattern repeated, in all eight of their size responses.
1051% Note the ``small'' and ``medium'' overlaid boxes, which call out the size zones' definitions.
The overlaid boxes mark the Small and Medium size zones.
Outside an identified box, the size response is erratic;
inside a box, it is relatively smooth.
Within and among boxes, there are identifiable patterns, which recur throughout the experimental results.
Each individual configuration is tested by five trials, giving the error bars at min and max.
The amount of error here is typical across the configurations.
With a few exceptions, it is modest, so the experiments are repeatable.
1058
To preview, \VRef{s:ResultCoarseComparisonStyles} dismisses large sizes (above 150 elements) and wrapped lists, because the performance story is dominated by the amount of memory touched, not by the choice of intrusive \vs wrapped list.
1060At smaller sizes, \VRef{s:ComparingIntrusiveImplementations} shows differences appear among the intrusive-list implementations.
1061Among the ``not Large'' sizes ($\le$ 150), two zones, Small and Medium, are selected as representatives of what can vary when the scale is changed.
1062These particular ranges are chosen because each range tends to have one story repeated across its constituent sizes.
For example, if \CFA's duration increases across Small, then the other frameworks' durations usually do too; or if \CFA is beating \uCpp across Medium, then it usually is at the high end too.
1064% The leapfrogging tends to happen outside of these two ranges.
1065
1066Finally, on the AMD architecture, \CFA performed poorly at size 1, on queue movements only, and no other framework saw the same effect.
1067This extreme outlier is not plotted in graphs.
After exploring the phenomenon in depth (not presented), the conclusion is that it is a quirky interaction between the hardware and the testing harness.
In a side experiment (which does not enrich the overall comparisons), user-induced gaps of $\approx$10 ns between same-list operations hid the effect completely.
These gaps are realistic because when an item goes on a list, another action comes back to it only \emph{later}.
The pattern the general harness uses, concentrating time-adjacent operations on one list, is useful for measuring the ``small'' size zone, but is contrived from the perspective of a data hazard that only this pattern exposes.
The general comparisons do not see the effect at all, because they use only the Small and Medium zones, with a shortest length of 4.
1073
1074% A spot of poor performance appears in the general results for \CFA at size 1.
1075% Section \MLB{TODO:xref} explores the phenomenon and concludes that it is an anomaly due to a quirky interaction with the testing rig.
1076% To do so, two it considers size as either length or width.
1077% Length is the number of elements in a list.
1078% Width is a number of these lists being kept, worked upon in round-robin order.
1079% Outside of \MLB{TODO:xref}, size always means length, and width is 1.
1080
1081
1082\subsubsection{Execution Environment}
1083\label{s:ExperimentalEnvironment}
1084
1085The performance experiments are run on:
1086\begin{description}[leftmargin=*,topsep=3pt,itemsep=2pt,parsep=0pt]
1087%\item[PC]
1088%with a 64-bit eight-core AMD FX-8370E, with ``taskset'' pinning to core \#6. The machine has 16 GB of RAM and 8 MB of last-level cache.
1089%\item[ARM]
1090%Gigabyte E252-P31 128-core socket 3.0 GHz, WO memory model
1091\item[AMD]
1092Supermicro AS--1125HS--TNR EPYC 9754 128--core socket, hyper-threading $\times$ 2 sockets (512 processing units) 2.25 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 1024KB L2, 16MB L3, where each L3 cache covers 1 NUMA node and 8 cores (16 processors).
1093\item[Intel]
Supermicro SYS-121H-TNR Xeon Gold 6530 32--core, hyper-threading $\times$ 2 sockets (128 processing units) 2.1 GHz, TSO memory model, with cache structure 32KB L1i/L1d, 2048KB L2, 160MB L3, where each L3 cache covers 2 NUMA nodes and 32 cores (64 processors).
1095\end{description}
The experiments are single threaded and pinned to a single core to prevent any OS movement, which might perturb the experiment through cache or NUMA effects.
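For illustration, a minimal sketch of in-process pinning; the helper name and core number are illustrative, and a harness can equally pin with the @taskset@ command:
\begin{cfa}
// illustrative pinning helper; requires _GNU_SOURCE for the cpu_set_t macros
#include <sched.h>
static void pin_to_core( int core ) {
	cpu_set_t set;
	CPU_ZERO( &set );
	CPU_SET( core, &set );
	sched_setaffinity( 0, sizeof(set), &set );	// pid 0 => calling thread
}
\end{cfa}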
1097
1098The compiler is gcc/g++-14.2.0 running on the Linux v6.8.0-52-generic OS.
1099Switching between the default memory allocators @glibc@ and @llheap@ is done with @LD_PRELOAD@.
1100To prevent eliding certain code patterns, crucial parts of a test are wrapped by the function @pass@
1101\begin{cfa}
1102// prevent eliding, cheaper than volatile
1103static inline void * pass( void * v ) { __asm__ __volatile__( "" : "+r"(v) ); return v; }
1104...
1105pass( &remove_first( lst ) ); // wrap call to prevent elision, insert cannot be elided now
1106\end{cfa}
The call to @pass@ can prevent a small number of compiler optimizations, but this cost is the same for all lists.
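For context, a minimal sketch of how a configuration might be timed; the loop shape and the names @N@, @lst@, and @n@ and the call @insert_first@ are illustrative stand-ins, not the exact harness:
\begin{cfa}
#include <time.h>
struct timespec begin, end;
clock_gettime( CLOCK_MONOTONIC, &begin );
for ( i; N ) {	// CFA loop: i ranges over [0, N)
	insert_first( lst, n );	// put the element on ...
	pass( &remove_first( lst ) );	// ... take it back off; wrapped so the pair cannot be elided
}
clock_gettime( CLOCK_MONOTONIC, &end );
// average ns per insert--remove pair, before accounting for loop overhead
double ns_per_IR = ((end.tv_sec - begin.tv_sec) * 1e9 + (end.tv_nsec - begin.tv_nsec)) / N;
\end{cfa}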
1108
The main difference between the machines is their cache structure.
The AMD has smaller caches that are shared less, while the Intel shares larger caches among more processors.
This difference, while an interesting tradeoff for highly concurrent use, is rather one-sided for sequential use, such as this experiment:
the Intel offers a single processor a bigger cache.
1113
1114
1115\subsubsection{Recap and Master Legend}
1116
For the experiments performed in later sections, there are 12 use cases, which are all combinations of 2 movements, 2 polarities, and 3 accessors.
There are 4 physical contexts, which are all combinations of 2 machines and 2 size (length) zones.
Each physical context samples 4 specific sizes.
There are 3.25 frameworks.
This accounting reflects that LQ-@list@ supports only the movement--polarity combination ``stack, insert first.''
So LQ-@list@ fills a quarter of the otherwise-orthogonal space.
1123Use case, physical context and framework are the explanatory factors.
Taking all combinations of the explanatory factors gives the total number of individual configurations:
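\[
\underbrace{12}_{\textrm{use cases}} \times \underbrace{4}_{\textrm{physical contexts}} \times \underbrace{4}_{\textrm{specific sizes}} \times \underbrace{3.25}_{\textrm{frameworks}} = \textrm{624 individual configurations}
\]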
1125
1126% \[
1127% \textrm{624 individual configurations} =
1128% \sum_{\substack{
1129% \textrm{12 use cases}\\
1130% \textrm{4 physical contexts}\\
1131% \textrm{4 specific sizes}\\
1132% \textrm{3.25 frameworks}
1133% }}
1134% \textrm{1 individual configuration}
1135% \]
1136
1137\begin{figure}
1138\centering
1139 \setlength{\tabcolsep}{0pt}
1140 \includegraphics[trim={0in, 1.75in, 0in, 0in}, clip]{plot-list-legend-rel-i-swift.pdf}
1141 \begin{tabular}{p{2.75in}p{0.125in}p{3.625in}}
1142 \subfloat[ ]{\label{f:zoomin-rel-i-swift}
1143 \includegraphics{plot-list-zoomin-rel-i-swift.pdf}
1144 } & &
1145 \subfloat[ ]{\label{f:zoomin-histo-i-swift}
1146 \includegraphics{plot-list-histo-rel-i-swift.pdf}
1147 }
1148 \end{tabular}
	\caption[IR duration, transformed for general analysis]{
	IR duration, transformed for general analysis.
1151 The analysis follows the single example setup of \VRef[Figure]{f:zoomin-abs-i-swift}, \ie Use Case I on AMD, where IR is given as absolute duration.
1152 Plot (a) transforms the source dataset by conditioning on specific size.
1153 Plot (b) takes the results from only the identified size zones, discards their specific-size information, and shows the resulting distribution.
1154 Lower is better.}
1155 \label{fig:plot-list-rel}
1156\end{figure}
1157
1158\begin{comment}
115932 of these individual configurations, plucked from \VRef[Figure]{f:zoomin-abs-i-swift}, are the subject of \VRef[Figure]{fig:plot-list-rel}, where they are now transformed into the format used for general analysis.
1160In \subref*{f:zoomin-rel-i-swift}, each of the 56 data points is an individual configuration; the subset within the two boxes has the 32 of interest.
1161In \subref*{f:zoomin-histo-i-swift}, the very leftmost histogram (Small, \CFA) summarizes the 4 Small--\CFA individual configurations.
1162Each remaining framework, at size zone Small, similarly summarizes 4 individual configurations.
1163This statement is the interpretation of the x-category label, ``Small (/4).''
1164Each line under the ``/4'' label has 4 individual configurations; so, this group has 16 individual configurations.
1165The interpretation of ``Medium (/4)'' is similar; it groups the remaining 16 configurations.
1166When both size zones are pooled together, ``Both (/8)'' results; this group revisits all 32 configurations.
1167
1168This example's use of 32 individual configurations is in contrast with full-universe comparisons like the forthcoming \VRef[Figure]{fig:plot-list-1ord}, whose coverage further includes all use cases and both machines.
1169
1170The transformation of \VRef[Figure]{f:zoomin-abs-i-swift} into \VRef[Figure]{f:zoomin-rel-i-swift} conditions on the configuration's specific size.
1171At a particular size, the average duration is computed, across the three frameworks that work on all use cases: \CFA, \uCpp and LQ-@tailq@.
1172\VRef[Figure]{fig:plot-list-rel} shows a particular configuration's duration relative to this average.
1173
1174The effect of conditioning on specific size erases the fact that \VRef[Figure]{fig:plot-list-zoomin-abs} shows, aside from the coarse hops, all frameworks getting smoothly slower as size increases.
1175This effect is partiularly relevant \emph{within} a size zone, most noticeable as the data lines all going up across the Small box.
1176Now, with size conditioned, \VRef[Figure]{f:zoomin-rel-i-swift} has the trends inside a zone box being flat.
1177This flatness gives \subref*{f:histo-rel-i-swift} nicely separated histograms.
1178
1179Specific size is the only factor conditioned in this view.
1180This choice was made to keep the relationship between \VRef[Figures]{f:zoomin-abs-i-swift} and \VRef[]{:zoomin-rel-i-swift} perceptible.
1181By contrast, general comparisons like \VRef[Figure]{fig:plot-list-1ord} condition on more, generally, everyting not presented.
1182Its physical-factor breakdown conditions on use case and framework, but not on physical factors; its other two breakdowns are defined similarly.
1183
1184The noteworthy performance hop, in this example, is LQ-@list@, which \VRef[Figures]{f:zoomin-abs-i-swift} has as consistently slow in the Small range, and consistently fast in the Medium range.
1185Therefore, in \VRef{f:zoomin-histo-i-swift}, its two per-size-zone histograms are far apart, and its cross-size-zone histogram is bimdal.
1186Hops and distribution contributions, like this one, are common.
1187They are attention-grabbing curiosities when comparing nearly any pair of individual configurations.
1188They are the background noise of the all-in inter-configuration comparisons following.
1189With this one, \VRef[Figure]{fig:plot-list-1ord}'s LQ-@list@ distribution (farthest right) does have a perceptible bump at $1.8\times$, corresponding to the upper mode seen here.
1190But this UC-I--Intel contribution is only $1/6 = 8/48$ of the configurations summarized there.
1191
1192Each individual configuration is tested by five trials, giving the error bars of \VRef[Figure]{:zoomin-rel-i-swift} (at min and max).
1193This trial error is unaffected by the size conditioning.
1194Though this error is presented only for the narrow slice of the current examples, the amount of it seen here is typical across all the configurations.
1195With a few exceptions, it is modest; so, the experiment is repeatable.
1196The data points in \subref{f:zoomin-rel-i-swift} show mean excluding min and max.
1197This value, alone, is the contribution to the histogram in \subref{f:zoomin-histo-i-swift}.
1198That is, inter-configuration rollups discard the modest trial-repeatability error.
1199The girth of a histogram's distribution is entirely the inter-configuration variance, of its configurations' expected performance.
1200\end{comment}
1201
It is impractical to present this large amount of information as individual graphs.
Therefore, a condensed graphing style is used in subsequent plots.
\VRef[Figure]{fig:plot-list-rel} shows how the condensed graphing style is generated from raw data.
\VRef[Figure]{f:zoomin-rel-i-swift} is formed from the data in \VRef[Figure]{f:zoomin-abs-i-swift}, restructured on the Y-axis using a relative duration.
\VRef[Figure]{f:zoomin-histo-i-swift} shows the interesting data within the two boxes (Small/Medium) and their combination (Both).
This graph plots a vertical histogram for each of the 4 lists.
The light-shaded histogram is the raw data (similar data values overlap), and the dark histogram is the geometric-mean average when there are multiple experiments condensed in a column.
The caption indicates the number of values condensed into a histogram, \eg ``/4'' $\Rightarrow$ 4 data points.
The vertical relationship among the averages gives a quick result for a specific experiment, where lower is better.
The relative duration flattens the smooth within-zone worsening that absolute duration shows as size increases.
This flatness gives nicely separated histograms.
Thus, in the forthcoming comparison plots: the measure is the mean IR duration among the middle 3 of 5 trials for an individual configuration;
the number of individual configurations per histogram is stated as ``/N,'' at a relevant granularity;
all reported averages are geometric means and all IR duration (vertical) axes are logarithmic;
and, unless indicated otherwise, explanatory factors appearing on a plot are marginalized, while those not appearing are conditioned.
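For reference, the geometric mean of $n$ relative durations $x_1, \ldots, x_n$ is
\[
\textrm{geomean}( x_1, \ldots, x_n ) = \Bigl( \prod_{i=1}^{n} x_i \Bigr)^{1/n} = \exp\Bigl( \frac{1}{n} \sum_{i=1}^{n} \ln x_i \Bigr),
\]
which is the natural average for ratio data and corresponds to a linear average on the logarithmic duration axes (a $2\times$ speedup and a $2\times$ slowdown average to parity).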
1214
1215% \begin{itemize}[leftmargin=*]
1216% \item
1217% The measure is mean IR among the middle 3 trials out of 5, that occurred for an individual configuration.
1218% \item
1219% The number of individual configurations per histogram is stated as ``/N,'' at a relevant granularity.
1220% \item
1221% All reported averages are geometric means and all IR duration axes (verticals) are logarithmic.
1222% \item
1223% Unless indicated otherwise, all explanatory factors appearing on a plot are marginalized, while those not appearing on the plot are conditioned.
1224% \end{itemize}
1225
1226
\subsection{Result: Coarse Comparison of Styles}
1228\label{s:ResultCoarseComparisonStyles}
1229
1230This comparison establishes how an intrusive list performs compared with a wrapped-reference list.
\VRef[Figure]{fig:plot-list-zoomout} presents IR duration at various list lengths for a linear and a random (shuffled) IR test.
Other kinds of scans were made, but the results are similar in many cases, so it is sufficient to discuss these two scans, representing different ends of the access spectrum.
In the graphs, all four intrusive lists (LQ-@list@, LQ-@tailq@, \uCpp, \CFA, see Framework in \VRef[Figure]{f:ListPerfGlossary}) are plotted with the same symbol;
sometimes these symbols clump on top of each other, showing the performance difference among intrusive lists is small in comparison to the wrapped list (@std::list@).
1235See~\VRef{s:ComparingIntrusiveImplementations} for details among intrusive lists.
1236
The list lengths start at 10 because of the short IR times: 2--4 ns for intrusive lists \vs 15--20 ns for STL's wrapped-reference list.
For very short lists, like 4, the experiment time of 4 $\times$ 2.5 ns and the experiment overhead (loops) of 2--4 ns result in an artificial administrative bump at the start of the graph, having nothing to do with the IR times.
As the list size grows, the administrative overhead for intrusive lists quickly disappears.
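To make the bump concrete, using the numbers above for a length-4 list:
\[
\textrm{apparent inflation} = \frac{\textrm{loop overhead}}{\textrm{IR work}} = \frac{\textrm{2--4 ns}}{4 \times 2.5~\textrm{ns}} = \textrm{20--40\%},
\]
an error independent of the list implementation, which vanishes as the per-pass IR work grows with list length.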
1240
1241\begin{figure}
1242 \centering
1243 \setlength{\tabcolsep}{0pt}
1244 \begin{tabular}{p{0.75in}p{2.75in}p{3in}}
1245 &
1246 \subfloat[Linear List Nodes, AMD]{\label{f:Linear-swift}
1247 \hspace*{-0.75in}
1248 \includegraphics{plot-list-zoomout-noshuf-swift.pdf}
1249 } % subfigure
1250 &
1251 \subfloat[Linear List Nodes, Intel]{\label{f:Linear-java}
1252 \includegraphics{plot-list-zoomout-noshuf-java.pdf}
1253 } % subfigure
1254 \\
1255 &
1256 \subfloat[Random List Nodes, AMD]{\label{f:Random-swift}
1257 \hspace*{-0.75in}
1258 \includegraphics{plot-list-zoomout-shuf-swift.pdf}
1259 } % subfigure
1260 &
1261 \subfloat[Random List Nodes, Intel]{\label{f:Random-java}
1262 \includegraphics{plot-list-zoomout-shuf-java.pdf}
1263 } % subfigure
1264 \end{tabular}
1265 \caption[IR duration \vs list length, all sizes]{IR duration \vs list length, all sizes.
	Lengths go as large as possible without error.
1267 One example use case is shown: stack movement, insert-first polarity and head-mediated access. Lower is better.}
1268 \label{fig:plot-list-zoomout}
1269\end{figure}
1270
1271The key performance factor between intrusive and wrapped-reference lists is the dynamic allocation for the wrapped nodes.
1272Hence, this experiment is largely measuring the cost of @malloc@/\-@free@ rather than IR, and is sensitive to the layout of memory by the allocator.
1273For intrusive-list IR, the cost is manipulating the link fields, which is seen by the relatively similar results for the different intrusive lists.
For wrapped-reference IR, the costs are: dynamically allocating/deallocating a wrapped node, copying an external-node pointer into the wrapped node for insertion, and linking the wrapped node to/from the list;
1275the allocation dominates these costs.
For example, the experiment was run with both glibc and llheap memory allocators, where llheap reduced the cost from 20 to 16 ns, still far from the 2--4 ns for linking an intrusive node.
1277Hence, there is no way to tease apart the allocation, copying, and linking costs for wrapped lists, as there is no way to preallocate the list nodes without writing a mini-allocator to manage that storage.
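For concreteness, a schematic contrast of the per-insertion work in the two styles; the types and function are illustrative sketches, not the benchmarked implementations:
\begin{cfa}
#include <stdlib.h>
struct item { int data; };	// user payload (illustrative)
struct intr_item { int data; struct intr_item * next, * prev; };	// intrusive: links inside the item
struct wrap_node { struct item * payload; struct wrap_node * next, * prev; };	// wrapped: separate node
// intrusive insert-first: only the link-field writes
// wrapped insert-first: allocation + pointer copy + the same link-field writes
struct wrap_node * insert_wrapped( struct wrap_node * head, struct item * it ) {
	struct wrap_node * n = malloc( sizeof(struct wrap_node) );	// the dominant cost
	n->payload = it;	// copy the external-node pointer into the wrapped node
	n->next = head; n->prev = NULL;	// linking: the only cost an intrusive list pays
	if ( head ) head->prev = n;
	return n;
}
\end{cfa}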
1278
1279In detail, \VRef[Figure]{f:Linear-swift}--\subref*{f:Linear-java} shows linear insertion of all the nodes and then linear removal, both in the same direction.
1280For intrusive lists, the nodes are adjacent in memory from being preallocated in an array.
1281For wrapped lists, the wrapped nodes happen to be adjacent because the memory allocator uses bump allocation during the initial phase of allocation.
As a result, these memory layouts provide high spatial and temporal locality for both kinds of lists during the linear traversal.
1283With address look-ahead, the hardware does an excellent job of managing the multi-level cache.
1284Hence, performance is largely constant for both kinds of lists, until L3 cache and NUMA boundaries are crossed for longer lists and the costs increase consistently for both kinds of lists.
1285For example, on AMD (\VRef[Figure]{f:Linear-swift}), there is one NUMA node but many small L3 caches, so performance slows down quickly as multiple L3 caches come into play, and remains constant at that level, except for some anomalies for very large lists.
On Intel (\VRef[Figure]{f:Linear-java}), there are four NUMA nodes and four slowdown steps as list-length increases.
1287At each step, the difference between the kinds of lists decreases as the NUMA effect increases.
1288
1289In detail, \VRef[Figure]{f:Random-swift}--\subref*{f:Random-java} shows random insertion and removal of the nodes.
1290As for linear, there is the issue of memory allocation for the wrapped list.
1291As well, the consecutive storage-layout is the same (array and bump allocation).
Hence, the difference is the random linking among nodes: even though the list is traversed linearly, the accesses are random, resulting in similar cache events for both kinds of lists.
1293Both \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} show the slowdown of random access as the list-length grows resulting from stepping out of caches into main memory and crossing NUMA nodes.
1294% Insert and remove operations act on both sides of a link.
1295%Both a next unlisted item to insert (found in the items' array, seen through the shuffling array), and a next listed item to remove (found by traversing list links), introduce a new user-item location.
1296As for linear, the Intel (\VRef[Figure]{f:Random-java}) graph shows steps from the four NUMA nodes.
1297Interestingly, after $10^6$ nodes, intrusive lists are slower than wrapped.
1298I did not have time to track down this anomaly, but I speculate it results from the difference in touching the data in the accessed node, as the data and links are together for intrusive and separated for wrapped.
For the llheap memory-allocator and the two tested architectures, intrusive lists outperform wrapped lists up to size $10^3$ for both linear and random access, and performance begins to converge around $10^6$ nodes as architectural issues begin to dominate.
Clearly, the memory allocator and hardware architecture play a large role in the total cost and the crossover points as list size increases.
1301% In an odd scenario where this intuition is incorrect, and where furthermore the program's total use of the memory allocator is sufficiently limited to yield approximately adjacent allocations for successive list insertions, a non-intrusive list may be preferred for lists of approximately the cache's size.
1302
The takeaway from this experiment is that wrapped-list operations are expensive because memory allocation is expensive at this fine-grained level of execution.
1304Hence, when possible, using intrusive links can produce a significant performance gain, even if nodes must be dynamically allocated, because the wrapping allocations are eliminated.
1305Even when space is a consideration, intrusive links may not use more storage if a node is often linked.
1306Unfortunately, many programmers are unaware of intrusive lists for dynamically-sized data-structures or their tool-set does not provide them.
1307
1308% Note, linear access may not be realistic unless dynamic size changes may occur;
1309% if the nodes are known to be adjacent, use an array.
1310
1311% In a wrapped-reference list, list nodes are allocated separately from the items put into the list.
1312% Intrusive beats wrapped at the smaller lengths, and when shuffling is avoided, because intrusive avoids dynamic memory allocation for list nodes.
1313
1314% STL's performance is not affected by element order in memory.
1315%The field of intrusive lists begins with length-1 operations costing around 10 ns and enjoys a ``sweet spot'' in lengths 10--100 of 5--7-ns operations.
1316% This much is also unaffected by element order.
1317% Beyond this point, shuffled-element list performance worsens drastically, losing to STL beyond about half a million elements, and never particularly leveling off.
1318% In the same range, an unshuffled list sees some degradation, but holds onto a 1--2 $\times$ speedup over STL.
1319
1320% The apparent intrusive ``sweet spot,'' particularly its better-than-length-1 speed, is not because of list operations truly running faster.
1321% Rather, the worsening as length decreases reflects the per-operation share of harness overheads incurred at the outer-loop level.
1322% Disabling the harness's ability to drive interleaving, even though the current scenario is using a ``never work in middle'' interleave, made this rise disappear.
1323% Subsequent analyses use length-controlled relative performance when comparing intrusive implementations, making this curiosity disappear.
1324
1325% The remaining big-swing comparison points say more about a computer's memory hierarchy than about linked lists.
1326% The tests in this chapter are only inserting and removing.
1327% They are not operating on any user payload data that is being listed.
1328% The drastic differences at large list lengths reflect differences in link-field storage density and in correlation of link-field order to element order.
1329% These differences are inherent to the two list models.
1330
1331% A wrapped-reference list's separate nodes are allocated right beside each other in this experiment, because no other memory allocation action is happening.
1332% As a result, the interlinked nodes of the STL list are generally referencing their immediate neighbours.
1333% This pattern occurs regardless of user-item shuffling because this test's ``use'' of the user-items' array is limited to storing element addresses.
1334% This experiment, driving an STL list, is simply not touching the memory that holds the user data.
1335% Because the interlinked nodes, being the only touched memory, are generally adjacent, this case too has high memory locality and stays fast.
1336
1337% But the comparison of unshuffled intrusive with wrapped-reference gives the performance of these two styles, with their the common impediment of overfilling the cache removed.
1338% Intrusive consistently beats wrapped-reference by about 20 ns, at all sizes.
1339% This difference is appreciable below list length 0.5 M, and enormous below 10 K.
1340
1341
1342\subsection{Result: Intrusive Winners and Losers}
1343\label{s:ComparingIntrusiveImplementations}
1344
The preceding result shows the intrusive frameworks have better performance than the wrapped lists for small- to medium-sized lists.
1346This analysis covers the experiment position taken in \VRef{s:AddRemovePerformance} for movement, polarity, and accessor.
1347\VRef[Figure]{f:ExperimentOperations} shows the experiment use cases tested, which results in 12 experiments (I--XII) for comparing intrusive frameworks.
To preclude hardware interference, only list sizes below 150 are examined to differentiate among the intrusive frameworks.
1349The data is selected from the start of \VRef[Figures]{f:Linear-swift}--\subref*{f:Linear-java}, but the start of \VRef[Figures]{f:Random-swift}--\subref*{f:Random-java} is largely the same.
1350
1351\begin{figure}
1352 \centering
1353 \includegraphics{plot-list-1ord.pdf}
1354 \small{\textsuperscript{\textdagger} LQ-@list@ is (/48) by its incomplete (25\%) use case coverage. Its bars are scaled to match.}
1355 \caption[IR duration, decomposed by all first-order effects]{
1356 IR duration, decomposed by all first-order effects.
1357 Each of the three breakdowns divides the entire population of test results into its mutually disjoint constituents.
1358 Lower is better.
1359 }
1360 \label{fig:plot-list-1ord}
1361\end{figure}
1362
1363\VRef[Figure]{fig:plot-list-1ord} gives the first-order effects.
The first breakdown, architecture/size-zone (left), shows the overall performance of all 12 experiments on the two different hardware architectures for small and medium lists (624 / 4 = 156 experiments per column).
1365% The relative experiment duration for each experiment is shown as a bar in each column and the black bar in that column shows the average of all 12 experiments.
1366By inspection of the averages, Intel runs faster than AMD.
1367Within an architecture, the small zone (lists of 4--16 elements) runs faster than the medium zone (lists of 50--200 elements).
1368The overall slower execution on the AMD results from its smaller L3 cache \vs the larger cache on the Intel.
1369(No NUMA effects for these list sizes.)
1370Specifically, a 20\% standard deviation exists here, between the means of the four physical-effect categories.
1371These hardware effects are accounted for when interpreting the following framework comparisons.
1372% The key takeaway for this comparison is the context it establishes for interpreting the following framework comparisons.
1373% Both the particulars of a machine's cache design, and a list length's effect on the program's cache friendliness, affect IR speed in the manner illustrated in this breakdown.
1374% That is, if you are running on an unknown machine, at a scale above anomaly-prone individuals, and below where major LLC caching effects take over the general intrusive-list advantage, but with an unknown relationship to the sizing of your fickle low-level caches, you are likely to experience an unpredictable speed impact on the order of 20\%.
1375
1376The second breakdown, use case (middle), shows the overall performance for each of the 12 use cases from \VRef[Figure]{f:ExperimentOperations} (624 / 12 = 52 experiments per column).
1377% A similar situation comes from \VRef[Figure]{fig:plot-list-1ord}'s second comparison, by use case.
While specific differences do occur, like a given framework doing better on stacks than on queues, the standard deviation of the individual use cases' means is only 9\%, indicating no unusual cases.
1379A more detailed analysis occurs in the discussion of \VRef[Figure]{fig:plot-list-2ord}.
1380% But they are so irrelevant to the issue of picking a winning framework that it is sufficient here to number the use cases opaquely.
1381% Whether a given list framework is suitable for a language's general library succeeds or fails without knowledge of whether your use will have stack or queue movement.
1382% So you face another lottery, with a likely win-loss range of the standard deviation of the individual use cases' means: 9\%.
1383
The third breakdown, framework (right), shows the overall performance of the 4 list implementations (624 / 3.25 = 192 experiments per full-coverage column).
Here, \CFA runs similarly to \uCpp, and LQ-@list@ runs similarly to LQ-@tailq@.
1386The standard deviation of the frameworks' means is 8\%.
1387% Framework choice has, therefore, less impact on your speed than the lottery tickets you already hold.
1388Now, \CFA/\uCpp run slower than LQ-@list@/@tailq@ by 15\%, a fact explored further in \VRef{s:SweetSoreSpots}.
But use case VIII also typically beats use case IV, by 38\%, and a small size on the Intel typically beats a medium size on the AMD, by 66\%.
Hence, architecture and usage pattern have a more significant effect on performance than the choice of framework.
1392
1393
1394\subsection{Result: Sweet and Sore Spots}
1395\label{s:SweetSoreSpots}
1396
1397\begin{figure}
1398 \centering
1399 \includegraphics{plot-list-2ord.pdf}\\
1400 \small{
1401 \textsuperscript{\textdagger} LQ-@list@ is absent from Movement and Polarity comparisons because it does not support queue and insert-last, respectively.\\
1402 \textsuperscript{\textdaggerdbl} LQ-@list@ is (/24) by its incomplete (25\%) use case coverage. Its bars are scaled to match.\\
1403 \textsuperscript{*} The full population of 192 individual configurations applies (48 for LQ-@list@), but this analysis summarizes pairs of them, giving each histogram's 96 contributions (24 for LQ-@list@).
1404 }
1405 \caption[IR duration where framework selection interacts with other factors]{
1406 IR duration where framework selection interacts with other factors.
1407 Higher favours top option; lower favours bottom option.
1408 }
1409 \label{fig:plot-list-2ord}
1410\end{figure}
1411
1412% \VRef[Figure]{fig:plot-list-1ord} is focused on only first-order effects in order to contextualize a winner/loser framework observation.
1413% But this perspective cannot address questions like, ``Where are \CFA's sore spots?''
1414% Moreover, the shallow treatment of use cases by ordinals said nothing about how stack usage compares with queues.
1415
1416\VRef[Figure]{fig:plot-list-2ord} shows how frameworks react to a single other factor being varied across one pair of options.
Every (binned and mean-contributing) individual data point represents a pair of test setups: one with the criterion set to the option labelled at the top, the other with the bottom option.
The point's y-axis score is the ratio of the two setups' durations.
The point lands in a bin closer to the label of the option that performs better.
1420
The first breakdown, size zone (left), refines the notion that a small size runs faster than a big size;
the question is by how much.
1423Indeed, all means favour small and few tails favour medium.
But the various frameworks do not respond to the different sizes and machines uniformly.
On the AMD, \CFA and \uCpp have a modest size sensitivity, LQ-@tailq@'s is moderate, and LQ-@list@ seems unaffected.
On the Intel, \CFA's increases to moderate, while \uCpp is now unaffected, and both LQs show a large effect.
1427Hence, the Intel is more sensitive to size than the AMD.
1428
In the second breakdown, movement and polarity (middle), the responses are more subdued.
Note, LQ-@list@ has no representation in these comparisons because it supports only stacks that push and pop at the first element.
\CFA is completely stable under movement and polarity changes.
\uCpp and LQ show modest responses favouring queues and insert-last.
1433
In the third breakdown, accessor (right), the responses are close, except for \CFA.
Note the pair of two-way comparisons pulled from the three experiment setups used.
First, the all-head/insert-element comparison addresses which insertion style is better: by-head (top) or by-element (bottom).
Then, the all-head/remove-element comparison addresses which removal style is better: by-head (top) or by-element (bottom).
1438The LQs favour insertion by head and removal by element.
1439\CFA and \uCpp favour both operations by head.
1440The strongest effect is \CFA's aversion to removal by element---certainly an opportunity for improvement.
1441
1442
1443\begin{comment}
1444\subsection{\CFA Tiny-Size Anomaly}
1445
1446The \CFA list occasionally showed a concerning slowdown at length 1.
1447The issue, seen in \VRef[Figure]{fig:plot-list-short} (top-left corner), has \CFA taking above 10 ns per IR (top-left corner).
1448It occurs only for the queue movement, only on the AMD machine, and only for the \CFA framework.
1449
1450Length-1 performance is an important case.
1451Lists like those of waiting threads are frequently left empty, with the occasional thread (or few) momentarily joining.
1452These scenarios need to work.
1453
1454A cause of the slowdown was never determined.
1455Speculation is that \CFA's increased data dependency, a result of the tagging scheme, pairs poorly with the aliasing implied by queue movement.
1456The aliasing, at length 1, is: the head's first element is the head's last element.
1457With stack movement, one of these names for the first element is reaused for both insert and remove.
1458While with queue movement, both names are used in alternation.
1459
1460The breakdowns earlier in the performance assessment vary length only.
1461That is, they see the story down the leftmost column in a triangle.
1462The insight for contextualizing this issue was to inspect both length and width.
1463
1464The issue is seen as practically mitigated by noticing that the difficutly fades away as width increases.
1465This effect is seen both in \VRef[Figure]{fig:plot-list-short}'s easement across the top triangle rows, and, zoomed farther out, in \VRef[Figure]{fig:plot-list-wide}.
1466
1467Increasing the width matters to the aliasing hypothesis.
1468In a narrow experiment, one element's insert and remove happen in rapid succession.
1469So, the two aliases are exercied closer together, making a data hazard (that lacks ideal hardware treatment) stretch the instruction-pipeline schedule more significantly.
1470Increasing the width adds harness-induced gaps between the uses of each alias, behind which a potential hazard can hide.
1471
1472In the practical scenario that judges length-1 performance as relevant, width 1 is contrived.
1473A thread putting itself on an often-empty waiters' list is not doing so on one such list repeatedly, at least not without taking other situation-iduced pauses.
1474
1475Thus, the congestion at low width + length comes from the harness using repetition (in order to obtain a measurable time).
1476It does not reflect the situation that motivates the legitimate desire for good length-1 performance.
1477
1478There likely is a real hazard, unique to the \CFA framework, when a queue movement is repeated on a tiny list \emph{without other interventing action}.
1479Doing so is believed to occur only in contrived situations.
1480
1481
1482\begin{figure}
1483\centering
1484 \includegraphics[trim={00in, 5.5in, 0in, 0in}, clip, scale=0.8]{plot-list-short-temp.pdf}
1485 \caption{Behaviour at very short lengths.}
1486 \label{fig:plot-list-short}
1487\end{figure}
1488
1489\begin{figure}
1490\centering
1491 \includegraphics[trim={0.25in, 1in, 0.25in, 1in}, clip, scale=0.5]{plot-list-wide-temp.pdf}
1492 \caption{Length-1 anomaly resolving at modest width. Points are for varying widths, at fixed length 1.}
1493 \label{fig:plot-list-wide}
1494\end{figure}
1495\end{comment}
1496
1497\begin{comment}
1498 These remarks are mostly about 3ord over 2ord.
1499This analysis does not provide more detail about one framework beating another; it offers different benefits.
1500These interactions further illustrate the lottery-ticket unpredictability that a linked-list user inevitably faces, by revealing stronger-still performance swings hidden from the first-order view.
1501They illustrate the difficult signal-to-noise ratio that I had to overcome in preparing this data.
1502They may serve as a reference guiding future \CFA linked-list work by informing on where to target improvements.
1503Finally, the findings offer the conclusion that \CFA's list offers more consistent performance across usage scenarios, than the other lists.
1504\end{comment}
1505
1506\begin{comment}
1507Further to the interpretation guidance of \VRef[Figure]{fig:plot-list-2ord}'s caption, a comparison with the construction of \VRef[Figure]{fig:plot-list-1ord} may be helpful.
1508In the first-order graph, the factors being compared had many options: four, twelve and four.
1509# XXX Each option contributes its own, seemingly independent, mean and distribution.
1510# XXX But, in fact, they are not totally independent.
1511# WRONG: it's not just about binary, you also need a split on a conditioned factor
1512 I'm trying to get to:
1513Side by side in earlier style, but they're opposites, so they mirror each other, so you take option B, flip it over, and have option A---I'll just show A
1514\end{comment}
1515
1516\begin{comment}
1517\VRef[Figure]{fig:plot-list-zoomin} shows the sizes below 150 blown up.
1518% The same scenario as the coarse comparison is used: a stack, with insertions and removals happening at the end called ``first,'' ``head'' or ``front,'' and all changes occurring through a head-provided IR operation.
1519The error bars show fastest and slowest time seen on five trials, and the central point is the mean of the remaining three trials.
1520For readability, the points are slightly staggered at a given horizontal value, where the points might otherwise appear on top of each other.
1521The experiment runs twelve use cases;
1522the ones chosen for their variety are scenarios I and VIII from the listing of \VRef[Figure]{fig:plot-list-mchn-szz}, and their results appear in the rows.
1523As in the previous experiment, each hardware architecture appears in a column.
1524Note that LQ-list does not support queue operations, so this framework is absent from Operation VIII.
1525
1526At lengths 1 and 2, winner patterns seen at larger sizes generally do not apply.
1527Indeed, an issue with \CFA giving terrible on queues at length 1 is evident in \VRef[Figure]{f:AbsoluteTime-viii-swift}.
1528This phenomenon is elaborated in \MLB{TODO: xref}.
1529For the remainder of this section, these sizes are disregarded.
1530
1531Even after the very-small anomalies, the selections of operation and machine significantly affect how speed responds to size.
1532For example, Operation I on the AMD (\VRef[Figure]{f:AbsoluteTime-i-swift}) has \CFA generally winning over LQ, while the opposite is seen by switching either to Operation VIII (\VRef[Figure]{f:AbsoluteTime-viii-swift}) or to the Intel (\VRef[Figure]{f:AbsoluteTime-i-java}).
1533For another, Operation I has sore spots at lengths in the middle for \uCpp on AMD and LQ-list on Intel, which resolve at larger lengths; yet no such pattern presents with Operation VIII.
1534
1535In spite of these complex interactions, a couple spots of stability can be analyzed.
1536In these examples, the two defined Size Zones (4--16 being ``small,'' and above 50 being ``medium,'') covering four specific sizes apiece, each tends to show a simple winner/loser story.
1537Manual inspection of other such plots (not detailed) showed that this quality is generally upheld.
1538So these zones are used for basing comparison.
1539
1540\MLB{Peter, caution beyond here. I am reconsidering if this first-dismiss-physical approach to comparison is simplest.}
1541
1542\begin{figure}
1543 \centering
1544 \includegraphics{plot-list-mchn-szz.pdf}
1545 \caption{Histogram of operation durations, decomposed by physical factors.
1546 Measurements are included from only the sizes in the ``small'' and ``medium'' stable zones.
1547 This breakdown divides the entire population of test results into four mutually disjoint constituents.
1548 \MLB{I see that I broke it. But we might be getting rid of it.}
1549 }
1550 \label{fig:plot-list-mchn-szz}
1551\end{figure}
1552
1553\VRef[Figure]{fig:plot-list-mchn-szz} shows the effects of the physical factors of size zone and machine.
1554Each of these four histograms shows variation in duration coming from the four specific sizes in a size zone, from combining results of all twelve operations and all four frameworks.
1555Among the means of the four histograms, there is a standard deviation of 0.9 ns, which is 20\% of the global mean.
1556This variability is due solely to physical factors.
1557
1558From the perspective of assessing winning/losing frameworks, these physical effects are noise.
1559So, subsequent analysis conditions on the phisical effects.
1560That is, it supposes you are put into an unknown physical situation (that is one of the four being tested), then presents all the ways your outcome could change as a result of non-physical factors, assuming that the physical situation is kept constant.
1561It does do by presenting results relative to the mean of the physical quadrant (\VRef[fig]{fig:plot-list-mchn-szz} histogram) to which it belogs.
1562With this adjustment, absolute duration values (in nonsecods) are lost.
1563In return, the physical quadrants are re-combined, enabling assessment of the non-physical factors.
1564\end{comment}
1565
1566\begin{comment}
1567While preparing experiment results, I first tested on my old office PC, AMD FX-8370E Eight-Core, before switching to the large new server for final testing.
1568For this experiment, the results flipped in my favour when running on the server.
1569New CPU architectures are now amazingly good at branch prediction and micro-parallelism in the pipelines.
1570Specifically, on the PC, my \CFA and companion \uCpp lists are slower than lq-tail and lq-list by 10\% to 20\%.
1571On the server, \CFA and \uCpp lists are can be fast by up to 100\%.
1572Overall, LQ-tailq does the best at short lengths but loses out above a dozen elements.
1573\end{comment}
1574
1575% \begin{figure}
1576% \centering
1577% \begin{tabular}{c}
1578% \includegraphics{plot-list-cmp-intrl-shift.pdf} \\
1579% (a) \\
1580% \includegraphics{plot-list-cmp-intrl-outcome.pdf} \\
1581% (b) \\
1582% \end{tabular}
1583% \caption{Caption TODO}
1584% \label{fig:plot-list-cmp-intrl}
1585% \end{figure}
1586
1587\begin{comment}
1588\subsection{Result: \CFA cost attribution}
1589
1590This comparison loosely itemizes the reasons that the \CFA implementation runs 15--20\% slower than LQ.
1591Each reason provides for safer programming.
1592For each reason, a version of the \CFA list was measured that forgoes its safety and regains some performance.
1593These potential sacrifices are:
1594\newcommand{\mandhead}{\emph{mand-head}}
1595\newcommand{\nolisted}{\emph{no-listed}}
1596\newcommand{\noiter}{\emph{no-iter}}
1597\begin{description}
1598\item[mand(atory)-head] Removing support for headless lists.
1599 A specific explanation of why headless support causes a slowdown is not offered.
1600 But it is reasonable for a cost to result from making one piece of code handle multiple cases; the subset of the \CFA list API that applies to headless lists shares its implementation with headed lists.
1601 In the \mandhead case, disabling the feature in \CFA means using an older version of the implementation, from before headless support was added.
1602 In the pre-headless library, trying to form a headless list (instructing, ``Insert loose element B after loose element A,'') is a checked runtime error.
1603 LQ does not support headless lists\footnote{
1604 Though its documentation does not mention the headless use case, this fact is due to one of its insert-before or insert-after routines being unusable in every list model.
1605 For \lstinline{tailq}, the API requires a head.
1606 For \lstinline{list}, this usage causes an ``uncaught'' runtime crash.}.
1607\item[no-listed] Removing support for the @is_listed@ API query.
1608 Along with it goes error checking such as ``When inserting an element, it must not already be listed, \ie be referred to from somewhere else.''
1609 These abilities have a cost because, in order to support them, a listed element that is being removed must be written to, to record its change in state.
1610 In \CFA's representation, this cost is two pointer writes.
1611 To disable the feature, these writes, and the error checking that consumes their result, are put behind an @#ifdef@.
1612 The result is that a removed element sees itself as still having neighbours (though these quasi-neighbours see it differently).
1613 This state is how LQ leaves a removed element; LQ does not offer an is-listed query.
1614\item[no-iter(ation)] Removing support for well-terminating iteration.
1615 The \CFA list uses bit-manipulation tagging on link poiters (rather than \eg null links) to express, ``No more elements this way.''
1616 This tagging has the cost of submitting a retrieved value to the ALU, and awaiting this operation's completion, before dereferencing a link pointer.
1617 In some cases, the is-terminating bit is transferred from one link to another, or has a similar influence on a resulting link value; this logic adds register pressure and more data dependency.
1618 To disable the feature, the @#ifdef@-controlled tag manipulation logic compiles in answers like, ``No, that link is not a terminator,'' ``The dereferenceable pointer is the value you read from memory,'' and ``The terminator-marked value you need to write is the pointer you started with.''
1619 Without this termination marking, repeated requests for a next valid item will always provide a positive response; when it should be negative, the indicated next element is garbage data at an address unlikely to trigger a memory error.
1620 LQ has a well-terminating iteration for listed elements.
1621 In the \noiter case, the slowdown is not inherent; it represents a \CFA optimization opportunity.
1622\end{description}
1623\MLB{Ensure benefits are discussed earlier and cross-reference} % an LQ programmer must know not to ask, ``Who's next?'' about an unlisted element; an LQ programmer cannot write assertions about an item being listed; LQ requiring a head parameter is an opportunity for the user to provide inconsistent data
1624
1625\begin{figure}
1626\centering
1627 \begin{tabular}{c}
1628 \includegraphics{plot-list-cfa-attrib-swift.pdf} \\
1629 (a) \\
1630 \includegraphics{plot-list-cfa-attrib-remelem-swift.pdf} \\
1631 (b) \\
1632 \end{tabular}
1633 \caption{Operation duration ranges for functionality-reduced \CFA list implementations. (a) has the top level slices. (b) has the next level of slicing within the slower element-based removal operation.}
1634 \label{fig:plot-list-cfa-attrib}
1635\end{figure}
1636
1637\VRef[Figure]{fig:plot-list-cfa-attrib} shows the \CFA list performance with these features, and their combinations, turned on and off. When a series name is one of the three sacrifices above, the series is showing this sacrifice in isolation. These further series names give combinations:
1638\newcommand{\attribFull}{\emph{full}}
1639\newcommand{\attribParity}{\emph{parity}}
1640\newcommand{\attribStrip}{\emph{strip}}
1641\begin{description}
1642 \item[full] No sacrifices. Same as measurements presented earlier.
1643 \item[parity] \mandhead + \nolisted. Feature parity with LQ.
1644 \item[strip] \mandhead + \nolisted + \noiter. All options set to ``faster.''
1645\end{description}
1646All list implementations are \CFA, possibly stripped.
1647The plot uses the same LQ-relative basis as earlier.
1648So getting to zero means matching LQ's @tailq@.
1649
1650\VRef[Figure]{fig:plot-list-cfa-attrib}-(a) summarizes the time attribution across the main operating scenarios.
1651The \attribFull series is repeated from \VRef[Figure]{fig:plot-list-cmp-overall}, part (b), while the series showing feature sacrifices are new.
1652Going all the way to \attribStrip at least nearly matches LQ in all operating scenarios, beats LQ often, and slightly beats LQ overall.
1653Except within the accessor splits, both sacrifices contribute improvements individually, \noiter helps more than \attribParity, and the total \attribStrip benefit depends on both contributions.
1654When the accessor is not element removal, the \attribParity shift appears to be counterproductive, leaving \noiter to deliver most of the benefit.
1655For element removals, \attribParity is the heavy hitter, with \noiter contributing modestly.
1656
1657The counterproductive shift outside of element removals is likely due to optimization done in the \attribFull version after implementing headless support, \ie not present in the \mandhead version.
1658This work streamlined both head-based operations (head-based removal being half the work of the element-insertion test).
1659This improvement could be ported to a \mandhead-style implementation, which would bring down the \attribParity time in these cases.
1660
1661More significantly, missing this optimization affects every \attribParity result because they all use head-based inserts or removes for at least half their operations.
1662It is likely a reason that \attribParity is not delivering as well overall as \noiter.
1663It even represents plausible further improvements in \attribStrip.
1664
1665\VRef[Figure]{fig:plot-list-cfa-attrib}-(b) addresses element removal being the overall \CFA slow spot and element removal having a peculiar shape in the (a) analysis.
1666Here, the \attribParity sacrifice bundle is broken out into its two constituents.
1667The result is the same regardless of the operation.
1668All three individual sacrifices contribute noteworthy improvements (\nolisted slightly less).
1669The fullest improvement requires all of them.
1670
1671The \noiter feature sacrifice is unpalatable.
1672But because it is not an inherent slowdown, there may be room to pursue a \noiter-level speed improvement without the \noiter feature sacrifice.
1673The performance crux for \noiter is the pointer-bit tagging scheme.
1674Alternative designs that may offer speedup with acceptable consequences include keeping the tag information in a separate field, and (for 64-bit architectures) keeping it in the high-order byte \ie using byte- rather than bit-oriented instructions to access it.
1675The \noiter speed improvement would bring \CFA to +5\% of LQ overall, and from high twenties to high teens, in the worst case of element removal.
1676
1677Ultimately, this analysis provides options for a future effort that needs to get the most speed out of the \CFA list.
1678\end{comment}
1679
1680
1681\section{Future Work}
1682\label{toc:lst:futwork}
1683
Not discussed in the chapter is a \CFA type-system issue with Plan-9 @inline@ declarations, which affects implementing the trait @embedded@ used to access the contained @dlist@ link-fields.
This trait defines an implicit conversion from derived to base (the safe direction).
Such a conversion exists implicitly for concrete types using the Plan-9 inheritance feature.
However, this implicit conversion is only partially implemented for polymorphic types, such as @dlist@, which prevents the straightforward list interface shown throughout the chapter.
1688
1689My workaround is the macro @P9_EMBEDDED@ placed before an intrusive node is used to declare a list.
1690\begin{cfa}
1691struct node {
1692 int v;
1693 inline dlink(node);
1694};
1695@P9_EMBEDDED( node, dlink(node) );@
dlist( node ) list;
1697\end{cfa}
1698The macro creates specialized access functions to explicitly extract the required information for the polymorphic @dlist@ type.
1699These access functions could have been generated implicitly for each intrusive node by adding another compiler pass.
1700However, this would have been substantial temporary work, when the correct solution is the compiler fix.
1701Hence, the macro workaround is only a small temporary inconvenience;
1702otherwise, all the list API shown in this chapter works.
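For illustration, a plausible sketch of the kind of accessor the macro generates; the actual expansion differs in its details, but the essence is wrapping the concrete Plan-9 conversion in a postfix @`inner@ function that the @embedded@ trait can then demand from any user type:
\begin{cfa}
// hypothetical sketch of the effect of P9_EMBEDDED( node, dlink(node) ), not its exact expansion
static inline dlink(node) & ?`inner( node & this ) {
	return this;	// concrete Plan-9 implicit conversion: node to its inlined dlink(node)
}
\end{cfa}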
1703
1704
1705\begin{comment}
1706An API author has decided that the intended user experience is:
1707\begin{itemize}
1708\item
1709offer the user an opaque type that abstracts the API's internal state
1710\item
1711tell the user to extend this type
1712\item
1713provide functions with a ``user's type in, user's type out'' style
1714\end{itemize}
1715Fig XXX shows several attempts to provide this experience.
1716The types in (a) give the setup, achieving the first two points, while the pair of function declarations and calls of (b) are unsuccessful attempts at achieving the third point.
1717Both functions @f1@ and @f2@ allow calls of the form @f( d )@, but @f1@ has the wrong return type (@Base@) for initializing a @Derived@.
1718The @f1@ signature fails to meet ``user's type out;'' this signature does not give the \emph{user} a usable type.
1719On the other hand, the signature @f2@ offers the desired user experience, so the API author proceeds with trying to implement it.
1720
1721\begin{figure}
1722\begin{cfa}
1723#ifdef SHOW_ERRS
1724#define E(...) __VA_ARGS__
1725#else
1726#define E(...)
1727#endif
1728
1729// (a)
1730struct Base { /*...*/ }; // system offers
1731struct Derived { /*...*/ inline Base; }; // user writes
1732
1733// (b)
1734// system offers
1735Base & f1( Base & );
1736forall( T ) T & f2( T & );
1737// user writes
1738void to_elide1() {
1739 Derived & d /* = ... */;
1740 f1( d );
1741 E( Derived & d1 = f1( d ); ) // error: return is not Derived
1742 Derived & d2 = f2( d );
1743
1744 // (d), user could write
1745 Base & b = d;
1746}
1747
1748// (c), system has
1749static void helper( Base & );
1750forall( T ) T & f2( T & param ) {
1751 E( helper( param ); ) // error: param is not Base
1752 return param;
1753}
1754static void helper( Base & ) {}
1755
1756#include <list2.hfa>
1757// (e)
1758// system offers, has
1759forall( T | embedded(T, Base, Base) )
1760 T & f3( T & param ) {
1761 helper( param`inner ); // ok
1762 return param;
1763}
1764// user writes
1765struct DerivedRedux { /*...*/ inline Base; };
1766P9_EMBEDDED( DerivedRedux, Base )
1767void to_elide2() {
1768 DerivedRedux & dr /* = ... */;
1769 DerivedRedux & dr3 = f3( dr );
1770
1771 // (f)
1772 // user writes
1773 Derived & xxx /* = ... */;
1774 E( Derived & yyy = f3( xxx ); ) // error: xxx is not embedded
1775}
1776\end{cfa}
1777\end{figure}
1778
1779
1780The \CFA list examples elide the @P9_EMBEDDED@ annotations that (TODO: xref P9E future work) proposes to obviate.
1781Thus, these examples illustrate a to-be state, free of what is to be historic clutter.
1782The elided portions are immaterial to the discussion and the examples work with the annotations provided.
1783The \CFA test suite (TODO:cite?) includes equivalent demonstrations, with the annotations included.
1784\begin{cfa}
1785struct mary {
1786 float anotherdatum;
1787 inline dlink(mary);
1788};
1789struct fred {
1790 float adatum;
1791 inline struct mine { inline dlink(fred); };
1792 inline struct yours { inline dlink(fred); };
1793};
1794\end{cfa}
1795like in the thesis examples. You have to say
1796\begin{cfa}
1797struct mary {
1798 float anotherdatum;
1799 inline dlink(mary);
1800};
1801P9_EMBEDDED(mary, dlink(mary))
1802struct fred {
1803 float adatum;
1804 inline struct mine { inline dlink(fred); };
1805 inline struct yours { inline dlink(fred); };
1806};
1807P9_EMBEDDED(fred, fred.mine)
1808P9_EMBEDDED(fred, fred.yours)
1809P9_EMBEDDED(fred.mine, dlink(fred))
1810P9_EMBEDDED(fred.yours, dlink(fred))
1811\end{cfa}
1812
1813
1814The function definition in (c) gives this system-implementation attempt.
1815The system needs to operate on its own data, stored in the @Base@ part of the user's @d@, now called @param@.
1816Calling @helper@ represents this attempt to look inside.
1817It fails, because the @f2@ signature does not state that @param@ has any relationship to @Base@;
1818this signature does not give the \emph{system} a usable type.
1819The fact that the user's argument can be converted to @Base@ is lost when going through this signature.
1820
1821Moving forward needs an @f@ signature that conveys the relationship that the argument @d@ has to @Base@.
1822\CFA conveys type abilities, from caller to callee, by way of traits; so the challenge is to state the right trait member.
1823As initialization (d) illustrates, the @d@--@Base@ capability is an implicit conversion.
1824Unfortunately, in present state, \CFA does not have a first-class representation of an implicit conversion, the way operator @?{}@ (which is a value), done with the right overload, is arguably an explicit conversion.
1825So \CFA cannot directly convey ``@T@ compatible with @Base@'' in a trait.
1826
1827This work contributes a stand-in for such an ability, tunneled through the present-state trait system, shown in (e).
1828On the declaration/system side, the trait @embedded@, and its member, @`inner@, convey the ability to recover the @Base@, and thereby call @helper@.
1829On the user side, the @P9_EMBEDDED@ macro accompanies type definitions that work with @f3@-style declarations.
1830An option is presemt-state-available to compiler-emit @P9_EMBEDDED@ annotations automatically, upon each occurrence of an `inline` member.
1831The choice not to is based on conversions in \CFA being a moving target because of separate, ongoing work.
1832
1833An intended finished state for this area is achieved if \CFA's future efforts with conversions include:
1834\begin{itemize}
1835\item
1836treat conversion as operator(s), \ie values
1837\item
1838re-frame the compiler's current Plan-9 ``magic'' as seeking an implicit conversion operator, rather than seeking an @inline@ member
1839\item
1840make Plan-9 syntax cause an implementation of implicit conversion to exist (much as @struct@ syntax causes @forall(T)@ compliance to exist)
1841\end{itemize}
1842To this end, the contributed @P9_EMBEDDED@ expansion shows how to implement this conversion.
1843
1844
1845like in tests/list/dlist-insert-remove.
1846Future work should autogen those @P9_EMBEDDED@ declarations whenever it sees a plan-9 declaration.
1847The exact scheme chosen should harmonize with general user-defined conversions.
1848
1849Today's P9 scheme is: mary gets a function `inner returning this as dlink(mary).
1850Fred gets four of them in a diamond.
1851They're defined so that `inner is transitive; i.e. fred has two further ambiguous overloads mapping fred to dlink(fred).
1852The scheme allows the dlist functions to give the assertion, "we work on any T that gives a `inner to dlink(T)."
1853
1854
1855TODO: deal with: A doubly linked list is being designed.
1856
1857TODO: deal with: Link fields are system-managed.
1858Links in GDB.
1859\end{comment}