\chapter{Background}

Since this work builds on C, it is necessary to explain the C mechanisms and their shortcomings for arrays, linked lists, and strings.

\section{Array}

At the start, the C programming language made a significant design mistake.
\begin{quote}
In C, there is a strong relationship between pointers and arrays, strong enough that pointers and arrays really should be treated simultaneously. Any operation which can be achieved by array subscripting can also be done with pointers.~\cite[p.~93]{C:old}
\end{quote}
Accessing any storage requires pointer arithmetic, even if it is just base-displacement addressing in an instruction. The conjoining of pointers and arrays could also be applied to structures, where a pointer references a structure field like an array element. Finally, while subscripting involves pointer arithmetic (as do field references @x.y.z@), the computation is very complex for multi-dimensional arrays and requires array descriptors to know stride lengths along dimensions. Many C errors result from performing pointer arithmetic instead of using subscripting; some C textbooks teach pointer arithmetic, erroneously suggesting it is faster than subscripting. A sound and efficient C program does not require explicit pointer arithmetic.

C semantics want a programmer to \emph{believe} an array variable is a ``pointer to its first element.'' This desire becomes apparent from a detailed inspection of an array declaration.
\lstinput{34-34}{bkgd-carray-arrty.c}
The inspection begins by using @sizeof@ to provide definite program semantics for the intuition of an expression's type.
\lstinput{35-36}{bkgd-carray-arrty.c}
Now consider the sizes of expressions derived from @ar@, modified by adding ``pointer to'' and ``first element'' (and including unnecessary parentheses to avoid confusion about precedence).
\lstinput{37-40}{bkgd-carray-arrty.c}
Given that the size of @float@ is 4, reasoning that the size of @ar@, with 10 floats, is 40 bytes is common among C programmers. Equally, C programmers know the size of a \emph{pointer} to the first array element is 8 (or 4, depending on the addressing architecture).
% Now, set aside for a moment the claim that this first assertion is giving information about a type.
Clearly, an array and a pointer to its first element are different. In fact, the idea that there is such a thing as a pointer to an array may be surprising, and it is not the same thing as a pointer to the first element.
\lstinput{42-45}{bkgd-carray-arrty.c}
The first assignment gets
\begin{cfa}
warning: assignment to `float (*)[10]' from incompatible pointer type `float *'
\end{cfa}
and the second assignment gets the opposite. The inspection now refutes any suggestion that @sizeof@ is informing about allocation rather than type information. Note, @sizeof@ has two forms, one operating on an expression and the other on a type. Using the type form yields the same results as the prior expression form.
\lstinput{46-49}{bkgd-carray-arrty.c}
The results are also the same when there is \emph{no allocation} using a pointer-to-array type.
\lstinput{51-57}{bkgd-carray-arrty.c}
Hence, in all cases, @sizeof@ is informing about type information. So, thinking of an array as a pointer to its first element is too simplistic an analogy, and it is not backed up by the type system. This misguided analogy works for a single-dimension array, but it has no advantage other than possibly teaching beginning programmers about basic runtime array access.
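The inspected assertions live in an external listing; as a reference point, the following minimal, self-contained sketch (variable names are illustrative) makes the same observations with in-line code, assuming a 64-bit target:
\begin{cfa}
#include <assert.h>
#include <stdio.h>

int main( void ) {
	float ar[10];
	float * pe = ar;                          // decays: pointer to the first element
	float (* pa)[10] = &ar;                   // pointer to the whole array -- a different type

	printf( "%zu %zu %zu\n",                  // e.g., 40 8 8 on a 64-bit machine
	        sizeof( ar ), sizeof( pe ), sizeof( pa ) );
	assert( sizeof( *pa ) == sizeof( ar ) );  // dereferencing pa recovers the array type
	return 0;
}
\end{cfa}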
Continuing, a short form for declaring array variables exists, using length information provided implicitly by an initializer.
\lstinput{59-62}{bkgd-carray-arrty.c}
The compiler counts the number of initializer elements and uses this value as the first dimension. Unfortunately, the implicit element counting does not extend to dimensions beyond the first.
\lstinput{64-67}{bkgd-carray-arrty.c}

My contribution is recognizing:
\begin{itemize}
\item There is value in using a type that knows its size.
\item The type pointer to (first) element does not.
\item C \emph{has} a type that knows the whole picture: array, \eg @T[10]@.
\item This type has all the usual derived forms, which also know the whole picture. A noteworthy example is pointer to array, \eg @T (*)[10]@.\footnote{The parentheses are necessary because subscript has higher priority than pointer in C declarations. (Subscript also has higher priority than dereference in C expressions.)}
\end{itemize}

\section{Reading declarations}

A significant area of confusion for reading C declarations results from embedding a declared variable in a declaration, mimicking the way the variable is used in executable statements.
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{@{}c}{\textbf{Array}} & \multicolumn{1}{c@{}}{\textbf{Function Pointer}} \\
\begin{cfa}
int @(*@ar@)[@5@]@;  // definition
... @(*@ar@)[@3@]@ += 1;  // usage
\end{cfa}
&
\begin{cfa}
int @(*@f@())[@5@]@ { ... };  // definition
... @(*@f@())[@3@]@ += 1;  // usage
\end{cfa}
\end{tabular}
\end{cquote}
Essentially, the type is wrapped around the name in successive layers (like an \Index{onion}). While attempting to make the two contexts consistent is a laudable goal, it has not worked out in practice, even though Dennis Ritchie believed otherwise:
\begin{quote}
In spite of its difficulties, I believe that the C's approach to declarations remains plausible, and am comfortable with it; it is a useful unifying principle.~\cite[p.~12]{Ritchie93}
\end{quote}
After all, reading a C array type is easy: just read it from the inside out, and know when to look left and when to look right!

\CFA provides its own type, variable, and routine declarations, using a simpler syntax. The new declarations place qualifiers to the left of the base type, while C declarations place qualifiers to the right of the base type. The qualifiers have the same syntax and semantics in \CFA as in C. Then, a \CFA declaration is read left to right, where a function return type is enclosed in brackets @[@\,@]@.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{3em}}ll@{}}
\multicolumn{1}{c@{\hspace{3em}}}{\textbf{C}} & \multicolumn{1}{c}{\textbf{\CFA}} & \multicolumn{1}{c}{\textbf{read left to right}} \\
\begin{cfa}
int @*@ x1 @[5]@;
int @(*@x2@)[5]@;
int @(*@f( int p )@)[5]@;
\end{cfa}
&
\begin{cfa}
@[5] *@ int x1;
@* [5]@ int x2;
@[ * [5] int ]@ f( int p );
\end{cfa}
&
\begin{cfa}
// array of 5 pointers to int
// pointer to array of 5 int
// function returning pointer to array of 5 ints
\end{cfa}
\\
& & \LstCommentStyle{//\ \ \ and taking an int argument}
\end{tabular}
\end{cquote}
As declaration size increases, it becomes correspondingly difficult to read and understand the C declaration form, whereas reading and understanding a \CFA declaration remains of linear complexity as the declaration size increases. Note, writing declarations left to right is common in other programming languages, where the function return-type is often placed after the parameter declarations.
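For reference, a small sketch of how the two C array/pointer declarations from the table differ in use (variable names follow the table; the helper function and values are illustrative):
\begin{cfa}
int v = 7;
int a[5];
int * x1[5];           // array of 5 pointers to int
int (* x2)[5] = &a;    // pointer to an array of 5 ints

void demo( void ) {
	x1[3] = &v;        // x1's elements are pointers: subscript first, then use the pointer
	*x1[3] += 1;       // subscript binds tighter than unary *
	(*x2)[3] += 1;     // for x2, the parentheses force the dereference before the subscript
}
\end{cfa}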
\VRef[Table]{bkgd:ar:usr:avp} introduces the many layers of the C and \CFA array story, where the \CFA story is discussed in \VRef[Chapter]{c:Array}. The \CFA-thesis column shows the new array declaration form, which is my contribution for improved safety and ergonomics. The table shows there are multiple yet equivalent forms for the array types under discussion, and subsequent discussion shows interactions with orthogonal (but easily confused) language features. Each row of the table shows alternate syntactic forms. The simplest occurrences of the types distinguished in the preceding discussion are marked with $\triangleright$. Removing the declared variable @x@ gives the type used for variables, structure fields, casts, or error messages \PAB{(though note Section TODO points out that some types cannot be cast to)}. Unfortunately, parameter declarations \PAB{(section TODO)} have more syntactic forms and rules.
\begin{table}
\centering
\caption{Syntactic Reference for Array vs Pointer. Includes interaction with \lstinline{const}ness.}
\label{bkgd:ar:usr:avp}
\begin{tabular}{ll|l|l|l}
	& Description & \multicolumn{1}{c|}{C} & \multicolumn{1}{c|}{\CFA} & \multicolumn{1}{c}{\CFA-thesis} \\
	\hline
	$\triangleright$ & value & @T x;@ & @T x;@ & \\
	\hline
	& immutable value & @const T x;@ & @const T x;@ & \\
	& & @T const x;@ & @T const x;@ & \\
	\hline
	\hline
	$\triangleright$ & pointer to value & @T * x;@ & @* T x;@ & \\
	\hline
	& immutable ptr. to val. & @T * const x;@ & @const * T x;@ & \\
	\hline
	& ptr. to immutable val. & @const T * x;@ & @* const T x;@ & \\
	& & @T const * x;@ & @* T const x;@ & \\
	\hline
	\hline
	$\triangleright$ & array of value & @T x[10];@ & @[10] T x@ & @array(T, 10) x@ \\
	\hline
	& ar.\ of immutable val. & @const T x[10];@ & @[10] const T x@ & @const array(T, 10) x@ \\
	& & @T const x[10];@ & @[10] T const x@ & @array(T, 10) const x@ \\
	\hline
	& ar.\ of ptr.\ to value & @T * x[10];@ & @[10] * T x@ & @array(T *, 10) x@ \\
	& & & & @array(* T, 10) x@ \\
	\hline
	& ar.\ of imm. ptr.\ to val. & @T * const x[10];@ & @[10] const * T x@ & @array(* const T, 10) x@ \\
	& & & & @array(const * T, 10) x@ \\
	\hline
	& ar.\ of ptr.\ to imm. val. & @const T * x[10];@ & @[10] * const T x@ & @array(const T *, 10) x@ \\
	& & @T const * x[10];@ & @[10] * T const x@ & @array(* const T, 10) x@ \\
	\hline
	\hline
	$\triangleright$ & ptr.\ to ar.\ of value & @T (*x)[10];@ & @* [10] T x@ & @* array(T, 10) x@ \\
	\hline
	& imm. ptr.\ to ar.\ of val. & @T (* const x)[10];@ & @const * [10] T x@ & @const * array(T, 10) x@ \\
	\hline
	& ptr.\ to ar.\ of imm. val. & @const T (*x)[10];@ & @* [10] const T x@ & @* const array(T, 10) x@ \\
	& & @T const (*x)[10];@ & @* [10] T const x@ & @* array(T, 10) const x@ \\
	\hline
	& ptr.\ to ar.\ of ptr.\ to val. & @T *(*x)[10];@ & @* [10] * T x@ & @* array(T *, 10) x@ \\
	& & & & @* array(* T, 10) x@ \\
	\hline
\end{tabular}
\end{table}

TODO: Address these parked unfortunate syntaxes
\begin{itemize}
\item static
\item star as dimension
\item under pointer decay: @int p1[const 3]@ being @int const *p1@
\end{itemize}

\subsection{Arrays decay and pointers diffract}

The last section established the difference between these four types:
\lstinput{3-6}{bkgd-carray-decay.c}
But the expression used for obtaining the pointer to the first element is pedantic.
The root of all C programmers' experience with arrays is the shortcut
\lstinput{8-8}{bkgd-carray-decay.c}
which reproduces @pa0@, in type and value:
\lstinput{9-9}{bkgd-carray-decay.c}
The validity of this initialization is unsettling, in the context of the facts established in the last section. Notably, it initializes the name @pa0x@ from the expression @ar@, when they are not of the same type:
\lstinput{10-10}{bkgd-carray-decay.c}
So, C provides an implicit conversion from @float[10]@ to @float *@.
\begin{quote}
Except when it is the operand of the @sizeof@ operator, or the unary @&@ operator, or is a string literal used to initialize an array, an expression that has type ``array of \emph{type}'' is converted to an expression with type ``pointer to \emph{type}'' that points to the initial element of the array object~\cite[\S~6.3.2.1.3]{C11}
\end{quote}
This phenomenon is the famous \newterm{pointer decay}, which is a decay of an array-typed expression into a pointer-typed one. It is worth noting that the list of exception cases does not include the occurrence of @ar@ in @ar[i]@. Thus, subscripting happens on pointers, not arrays.

Subscripting proceeds first with pointer decay, if needed. Next, \cite[\S~6.5.2.1.2]{C11} explains that @ar[i]@ is treated as if it were @(*((ar)+(i)))@. \cite[\S~6.5.6.8]{C11} explains that the addition of a pointer with an integer type is defined only when the pointer refers to an element that is in an array, with a meaning of ``@i@ elements away from,'' which is valid if @ar@ is big enough and @i@ is small enough. Finally, \cite[\S~6.5.3.2.4]{C11} explains that the @*@ operator's result is the referenced element. Taken together, these rules illustrate that @ar[i]@ and @i[ar]@ mean the same thing!

Subscripting a pointer whose target the standard deems inappropriate is still practically well-defined. While the standard affords a C compiler freedom about the meaning of an out-of-bounds access, or of subscripting a pointer that does not refer to an array element at all, the fact that C is famously both generally high-performance and specifically not bounds-checked leads to an expectation that the runtime handling is uniform across legal and illegal accesses. Moreover, consider the common pattern of subscripting a @malloc@ result:
\begin{cfa}
float * fs = malloc( 10 * sizeof(float) );
fs[5] = 3.14;
\end{cfa}
The @malloc@ behaviour is specified as returning a pointer to ``space for an object whose size is'' as requested (\cite[\S~7.22.3.4.2]{C11}). But \emph{nothing} more is said about this pointer value, specifically that its referent might \emph{be} an array allowing subscripting. Under this expectation, a pointer @p@ being subscripted (or added to, then dereferenced) by any value (positive, zero, or negative) gives a view of the program's entire address space, centred around the @p@ address, divided into adjacent @sizeof(*p)@ chunks, each potentially (re)interpreted as @typeof(*p)@. I call this phenomenon \emph{array diffraction}, which is a diffraction of a single-element pointer into the assumption that its target is in the middle of an array whose size is unlimited in both directions. No pointer is exempt from array diffraction. No array shows its elements without pointer decay.

A further pointer--array confusion, closely related to decay, occurs in parameter declarations. \cite[\S~6.7.6.3.7]{C11} explains that when an array type is written for a parameter, the parameter's type becomes a type that can be summarized as the array-decayed type.
The respective handling of the following two parameter spellings shows that the array-spelled one is really, like the other, a pointer.
\lstinput{12-16}{bkgd-carray-decay.c}
Because the meaning of @sizeof(x)@ has changed, compared with its meaning on a similarly spelled local-variable declaration, @gcc@ also gives this code a warning for the first assertion:
\begin{cfa}
warning: 'sizeof' on array function parameter 'x' will return size of 'float *'
\end{cfa}
The caller of such a function is left with the reality that a pointer parameter is a pointer, no matter how it is spelled:
\lstinput{18-21}{bkgd-carray-decay.c}
This fragment gives a warning for the first argument of the second call.
\begin{cfa}
warning: 'f' accessing 40 bytes in a region of size 4
\end{cfa}

The shortened parameter syntax @T x[]@ is a further way to spell ``pointer.'' Note the opposite meaning of this spelling now, compared with its use in local variable declarations. This point of confusion is illustrated in:
\lstinput{23-30}{bkgd-carray-decay.c}
Note, \CC gives a warning for the initialization of @cp@.
\begin{cfa}
warning: ISO C++ forbids converting a string constant to 'char*'
\end{cfa}
and C gives a warning at the call of @edit@, if @const@ is added to the declaration of @cp@.
\begin{cfa}
warning: passing argument 1 of 'edit' discards 'const' qualifier from pointer target type
\end{cfa}
The two basic meanings, with a syntactic difference that helps to distinguish them, are illustrated in the declarations of @ca@ \vs @cp@, whose subsequent @edit@ calls behave differently. The syntax-caused confusion is in the comparison of the first and last lines, both of which use a literal to initialize an object declared with the spelling @T x[]@. But these initialized declarations get opposite meanings, depending on whether the object is a local variable or a parameter.

In summary, when a function is written with an array-typed parameter,
\begin{itemize}
\item an appearance of passing an array by value is always an incorrect understanding
\item a dimension value, if any is present, is ignored
\item pointer decay is forced at the call site, and the callee sees the parameter as having the decayed type
\end{itemize}

Pointer decay does not affect pointer-to-array types, because these are already pointers, not arrays. As a result, a function with a pointer-to-array parameter sees the parameter exactly as the caller does:
\lstinput{32-42}{bkgd-carray-decay.c}

\VRef[Table]{bkgd:ar:usr:decay-parm} gives the reference for the decay phenomenon seen in parameter declarations.
\begin{table}
\caption{Syntactic Reference for Decay during Parameter-Passing. Includes interaction with \lstinline{const}ness, where ``immutable'' refers to a restriction on the callee's ability.}
\label{bkgd:ar:usr:decay-parm}
\centering
\begin{tabular}{lllll}
	& Description & Type & Parameter Declaration & \CFA \\
	\hline
	& & & @T * x,@ & @* T x,@ \\
	$\triangleright$ & pointer to value & @T *@ & @T x[10],@ & @[10] T x,@ \\
	& & & @T x[],@ & @[] T x,@ \\
	\hline
	& & & @T * const x,@ & @const * T x,@ \\
	& immutable ptr.\ to val. & @T * const@ & @T x[const 10],@ & @[const 10] T x,@ \\
	& & & @T x[const],@ & @[const] T x,@ \\
	\hline
	& & & @const T * x,@ & @* const T x,@ \\
	& & & @T const * x,@ & @* T const x,@ \\
	& ptr.\ to immutable val. & @const T *@ & @const T x[10],@ & @[10] const T x,@ \\
	& & @T const *@ & @T const x[10],@ & @[10] T const x,@ \\
	& & & @const T x[],@ & @[] const T x,@ \\
	& & & @T const x[],@ & @[] T const x,@ \\
	\hline
	\hline
	& & & @T (*x)[10],@ & @* [10] T x,@ \\
	$\triangleright$ & ptr.\ to ar.\ of val. & @T(*)[10]@ & @T x[3][10],@ & @[3][10] T x,@ \\
	& & & @T x[][10],@ & @[][10] T x,@ \\
	\hline
	& & & @T ** x,@ & @** T x,@ \\
	& ptr.\ to ptr.\ to val. & @T **@ & @T * x[10],@ & @[10] * T x,@ \\
	& & & @T * x[],@ & @[] * T x,@ \\
	\hline
	& ptr.\ to ptr.\ to imm.\ val. & @const char **@ & @const char * argv[],@ & @[] * const char argv,@ \\
	& & & \emph{others elided} & \emph{others elided} \\
	\hline
\end{tabular}
\end{table}

\subsection{Multi-dimensional}

As in the last section, multi-dimensional array declarations are examined.
\lstinput{16-18}{bkgd-carray-mdim.c}
The significant axis for deriving expressions from @ar@ is now ``itself,'' ``first element,'' or ``first grand-element'' (meaning, first element of first element).
\lstinput{20-44}{bkgd-carray-mdim.c}

\subsection{Lengths may vary, checking does not}

When the desired number of elements is unknown at compile time, a variable-length array is a solution:
\begin{cfa}
int main( int argc, const char * argv[] ) {
	assert( argc == 2 );
	size_t n = atol( argv[1] );
	assert( 0 < n );
	float ar[n];
	float b[10];
	// ... discussion continues here
}
\end{cfa}
This arrangement allocates @n@ elements on the @main@ stack frame for @ar@, called a \newterm{variable length array} (VLA), as well as 10 elements in the same stack frame for @b@. The variable-sized allocation of @ar@ is provided by the @alloca@ routine, which bumps the stack pointer. Note, the C standard supports VLAs~\cite[\S~6.7.6.2.4]{C11} as a conditional feature, but the \CC standard does not; both @gcc@ and @g++@ support VLAs. As well, there is misinformation about VLAs, \eg that the stack size is limited (small), or that VLAs cause stack failures or are inefficient. VLAs exist as far back as Algol W~\cite[\S~5.2]{AlgolW} and are a sound and efficient data type. For high-performance applications, the stack size can be fixed and small (coroutines or user-level threads). Here, a VLA can overflow the stack unless the stack is sized appropriately, so a heap allocation is used instead.
\begin{cfa}
float * ax1 = malloc( sizeof( float[n] ) );
float * ax2 = malloc( n * sizeof( float ) );	$\C{// arrays}$
float * bx1 = malloc( sizeof( float[1000000] ) );
float * bx2 = malloc( 1000000 * sizeof( float ) );
\end{cfa}

\begin{itemize}
\item Parameter dependency
\item Checking is best-effort / unsound
\item Limited special handling to get the dimension value checked (static)
\end{itemize}

\subsection{Dynamically sized, multidimensional arrays}

In C and \CC, ``multidimensional array'' means ``array of arrays.'' Other meanings are discussed in TODO. Just as an array's element type can be @float@, so can it be @float[10]@. While any of @float*@, @float[10]@ and @float(*)[10]@ are easy to tell apart from @float@, telling them apart from each other may need occasional reference back to TODO intro section. The corresponding sentence, derived by wrapping each type in @-[3]@, follows. While any of @float*[3]@, @float[3][10]@ and @float(*)[3][10]@ are easy to tell apart from @float[3]@, telling them apart from each other is what it takes to know what ``array of arrays'' really means. Pointer decay affects the outermost array only.

TODO: unfortunate syntactic reference with these cases:
\begin{itemize}
\item ar.\ of ar.\ of val (be sure about ordering of dimensions when the declaration is dropped)
\item ptr.\ to ar.\ of ar.\ of val
\end{itemize}
\subsection{Arrays are (but) almost values}

\begin{itemize}
\item Has size; can point to
\item Can't cast to
\item Can't pass as value
\item Can initialize
\item Can wrap in aggregate
\item Can't assign
\end{itemize}

\subsection{Returning an array is (but) almost possible}

\subsection{The pointer-to-array type has been noticed before}

\section{Linked List}

Linked lists are blocks of storage connected using one or more pointers. The storage block is logically divided into data (user payload) and links (list pointers), where the links are the only component used by the list structure. Since the data is opaque, list structures are often polymorphic over the data, which is often homogeneous. Storage linking is used to build data structures, which are a group of nodes, containing data and links, organized in a particular format, with specific operations peculiar to that format, \eg queue, tree, hash table, \etc. Because a node's existence is independent of the data structure that organizes it, all nodes are manipulated by address, not value; hence, all data-structure routines take and return pointers to nodes and not the nodes themselves.

\subsection{Design issues}
\label{toc:lst:issue}

This section introduces the design space for linked lists that target \emph{system programmers}. Within this restricted target, all design-issue discussions assume the following invariants. Alternatives to the assumptions are discussed under Future Work (Section~\ref{toc:lst:futwork}).
\begin{itemize}
\item A doubly-linked list is being designed. Generally, the discussed issues apply similarly to singly-linked lists. Circular \vs ordered linking is discussed under List identity (Section~\ref{toc:lst:issue:ident}).
\item Link fields are system-managed. The user works with the system-provided API to query and modify list membership. The system has freedom over how to represent these links.
\item The user data must provide storage for the list link-fields. Hence, a list node is \emph{statically} defined as data and links, \vs a node that is \emph{dynamically} constructed from data and links \see{\VRef{toc:lst:issue:attach}}.
\end{itemize}

\subsection{Preexisting linked-list libraries}

Two preexisting linked-list libraries are used throughout, to show examples of the concepts being defined, and further libraries are introduced as needed.
\begin{enumerate}
\item Linux Queue library\cite{lst:linuxq} (LQ) of @<sys/queue.h>@.
\item \CC Standard Template Library's (STL)\footnote{The term STL is contentious as some people prefer the term standard library.} @std::list@\cite{lst:stl}
\end{enumerate}
%A general comparison of libraries' abilities is given under Related Work (Section~\ref{toc:lst:relwork}).
For the discussion, assume the fictional type @req@ (request) is the user's payload in examples. As well, the list library is helping the user manage (organize) requests, \eg a request can represent work at the level of handling a network-arrival event or scheduling a thread.

\subsection{Link attachment: intrusive vs.\ wrapped}
\label{toc:lst:issue:attach}

Link attachment deals with the question: Where are the library's inter-element link fields stored, in relation to the user's payload data fields? \VRef[Figure]{fig:lst-issues-attach} shows three basic styles. \VRef[Figure]{f:Intrusive} shows the \newterm{intrusive} style, placing the link fields inside the payload structure. \VRef[Figures]{f:WrappedRef} and \subref*{f:WrappedValue} show the two \newterm{wrapped} styles, which place the payload inside a generic library-provided structure that then defines the link fields.
The wrapped style distinguishes between wrapping a reference and wrapping a value, \eg @list<req &>@ or @list<req>@. (For this discussion, @list<req *>@ is similar to @list<req &>@.) This difference is one of user style, not framework capability. Library LQ is intrusive; STL is wrapped, with reference and value.
\begin{comment}
\begin{figure}
\begin{tabularx}{\textwidth}{Y|Y|Y}
\lstinput[language=C]{20-39}{lst-issues-intrusive.run.c}
&\lstinputlisting[language=C++]{20-39}{lst-issues-wrapped-byref.run.cpp}
&\lstinputlisting[language=C++]{20-39}{lst-issues-wrapped-emplaced.run.cpp}
\\ & & \\
\includegraphics[page=1]{lst-issues-attach.pdf}
&
\includegraphics[page=2]{lst-issues-attach.pdf}
&
\includegraphics[page=3]{lst-issues-attach.pdf}
\\ & & \\
(a) & (b) & (c)
\end{tabularx}
\caption{
	Three styles of link attachment: (a)~intrusive, (b)~wrapped reference, and (c)~wrapped value.
	The diagrams show the memory layouts that result after the code runs, eliding the head object \lstinline{reqs}; head objects are discussed in Section~\ref{toc:lst:issue:ident}.
	In (a), the field \lstinline{req.x} names a list direction; these are discussed in Section~\ref{toc:lst:issue:simultaneity}.
	In (b) and (c), the type \lstinline{node} represents a system-internal type, which is \lstinline{std::_List_node} in the GNU implementation.
	(TODO: cite? found in /usr/include/c++/7/bits/stl\_list.h )
}
\label{fig:lst-issues-attach}
\end{figure}
\end{comment}

\begin{figure}
\centering
\newsavebox{\myboxA}					% used with subfigure
\newsavebox{\myboxB}
\newsavebox{\myboxC}
\begin{lrbox}{\myboxA}
\begin{tabular}{@{}l@{}}
\lstinput[language=C]{20-35}{lst-issues-intrusive.run.c} \\
\includegraphics[page=1]{lst-issues-attach.pdf}
\end{tabular}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{tabular}{@{}l@{}}
\lstinput[language=C++]{20-35}{lst-issues-wrapped-byref.run.cpp} \\
\includegraphics[page=2]{lst-issues-attach.pdf}
\end{tabular}
\end{lrbox}
\begin{lrbox}{\myboxC}
\begin{tabular}{@{}l@{}}
\lstinput[language=C++]{20-35}{lst-issues-wrapped-emplaced.run.cpp} \\
\includegraphics[page=3]{lst-issues-attach.pdf}
\end{tabular}
\end{lrbox}
\subfloat[Intrusive]{\label{f:Intrusive}\usebox\myboxA}
\hspace{6pt}
\vrule
\hspace{6pt}
\subfloat[Wrapped reference]{\label{f:WrappedRef}\usebox\myboxB}
\hspace{6pt}
\vrule
\hspace{6pt}
\subfloat[Wrapped value]{\label{f:WrappedValue}\usebox\myboxC}
\caption{
	Three styles of link attachment: %
	\protect\subref*{f:Intrusive}~intrusive, \protect\subref*{f:WrappedRef}~wrapped reference, and \protect\subref*{f:WrappedValue}~wrapped value.
	The diagrams show the memory layouts that result after the code runs, eliding the head object \lstinline{reqs}; head objects are discussed in Section~\ref{toc:lst:issue:ident}.
	In \protect\subref*{f:Intrusive}, the field \lstinline{req.d} names a list direction; these are discussed in Section~\ref{toc:lst:issue:simultaneity}.
	In \protect\subref*{f:WrappedRef} and \protect\subref*{f:WrappedValue}, the type \lstinline{node} represents a library-internal type, which is \lstinline{std::_List_node} in the GNU implementation \see{\lstinline{/usr/include/c++/X/bits/stl_list.h}, where \lstinline{X} is the \lstinline{g++} version number}.
}
\label{fig:lst-issues-attach}
\end{figure}

Each diagrammed example uses the fewest dynamic allocations for its respective style: in intrusive, there is no dynamic allocation; in wrapped reference, only the link fields are dynamically allocated; and in wrapped value, the copied data and link fields are dynamically allocated.
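As a point of reference, a minimal sketch of the two wrapped styles in STL terms (type and variable names are illustrative; a pointer stands in for the reference case, since STL containers cannot hold raw references):
\begin{c++}
#include <list>

struct req { int pri; };                 // user payload, no link fields

int main() {
	req r{ 42 };
	std::list<req> byval;                // wrapped value: each insertion heap-allocates a node
	byval.push_back( r );                //   holding a copy of r
	std::list<req *> byref;              // wrapped reference: each insertion heap-allocates a node
	byref.push_back( &r );               //   holding only a pointer; r itself is shared
}
\end{c++}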
The advantage of intrusive is control over memory layout and storage placement. Both wrapped styles have an independent storage layout and imply library-induced heap allocations, with a lifetime that matches the item's membership in the list. In all three cases, a @req@ object can enter and leave a list many times. However, in intrusive, a @req@ can only be on one list at a time, unless there are separate link-fields for each simultaneous list. In wrapped reference, a @req@ can appear multiple times on the same or different lists simultaneously, but since the @req@ is shared via the pointer, care must be taken if updating data also occurs simultaneously, \eg with concurrency. In wrapped value, the @req@ is copied, which increases storage usage, but allows independent simultaneous changes; however, knowing which of the @req@ objects is the ``true'' object becomes complex. \see*{\VRef{toc:lst:issue:simultaneity} for further discussion.}

The implementation of @LIST_ENTRY@ uses a trick to find the links and the node containing the links. The macro @LIST_INSERT_HEAD(&reqs, &r2, d);@ takes the list header, a pointer to the node, and the offset of the link fields in the node. One of the fields generated by @LIST_ENTRY@ is a pointer to the node, which is set to the node address, \eg @r2@. Hence, the offset to the link fields provides access to the entire node, \ie the node points at itself. For list traversal, @LIST_FOREACH(cur, &reqs_pri, by_pri)@ takes the node cursor, the list, and the offset of the link fields within the node. The traversal actually moves from link fields to link fields within a node, and sets the node cursor from the pointer within the link fields back to the node.

A further aspect of layout control is allowing the user to explicitly specify link fields, controlling attributes and placement within the @req@ object. LQ allows this ability through the @LIST_ENTRY@ macro;\footnote{It is possible to have multiple named link fields, allowing a node to appear on multiple lists simultaneously.} supplying the link fields by inheritance makes them implicit and relies on compiler placement, such as the start or end of @req@. An example of an explicit attribute is cache alignment of the link fields in conjunction with other @req@ fields, improving locality and/or avoiding false sharing. Wrapped reference has no control over the link fields, but the separate data allows some control; wrapped value has no control over data or links.

Another subtle advantage of the intrusive arrangement is that a reference to a user-level item (@req@) is sufficient to navigate or manage the item's membership. In LQ, the intrusive @req@ pointer is the right argument type for operations @LIST_NEXT@ or @LIST_REMOVE@; there is no distinguishing a @req@ from ``a @req@ in a list.'' The same is not true of STL, wrapped reference or value. There, the analogous operations, @iterator::operator++()@, @iterator::operator*()@, and @list::erase(iterator)@, work on a parameter of type @list::iterator@; there is no mapping from @req &@ to @list::iterator@.
%, for linear search.

The advantage of wrapped is the abstraction of a data item from its list membership(s). In the wrapped style, the @req@ type can come from a library that serves many independent uses, which generally have no need for listing. Then, a novel use can put a @req@ in a list, without requiring any upstream change in the @req@ library. In intrusive, the ability to be listed must be planned during the definition of @req@.
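For concreteness, a minimal sketch of the intrusive pattern using the standard @<sys/queue.h>@ macros (field and variable names are illustrative, following the figures):
\begin{cfa}
#include <stdio.h>
#include <sys/queue.h>

struct req {                                  // user payload
	int pri;                                  // data field
	LIST_ENTRY(req) d;                        // intrusive link fields, named d
};
LIST_HEAD(reql, req) reqs = LIST_HEAD_INITIALIZER(reqs);  // head type and head object

int main() {
	struct req r1 = { .pri = 1 }, r2 = { .pri = 2 };
	LIST_INSERT_HEAD( &reqs, &r1, d );        // no dynamic allocation: r1 links itself in
	LIST_INSERT_HEAD( &reqs, &r2, d );
	struct req * cur;
	LIST_FOREACH( cur, &reqs, d )             // cursor moves node to node via the d links
		printf( "%d\n", cur->pri );
	LIST_REMOVE( &r1, d );                    // a req pointer suffices to manage membership
}
\end{cfa}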
\begin{figure}
\lstinput[language=C++]{100-117}{lst-issues-attach-reduction.hpp}
\lstinput[language=C++]{150-150}{lst-issues-attach-reduction.hpp}
\caption{
	Simulation of wrapped using intrusive, illustrated by a pseudocode implementation of an STL-compatible API fragment using LQ as the underlying implementation.
	The gap that makes it pseudocode is that the LQ C macros do not expand to valid C++ when instantiated with template parameters---there is no \lstinline{struct El}.
	When using a custom-patched version of LQ to work around this issue, the programs of Figures~\ref{f:WrappedRef} and \ref{f:WrappedValue} work with this shim in place of the real STL.
	Their executions lead to the same memory layouts.
}
\label{fig:lst-issues-attach-reduction}
\end{figure}

It is possible to simulate wrapped using intrusive, as illustrated in Figure~\ref{fig:lst-issues-attach-reduction}. This shim layer performs the implicit dynamic allocations that pure intrusion avoids. But there is no reduction going the other way: no shimming can cancel the allocations to which wrapped membership commits. Because intrusion is a lower-level listing primitive, the system design choice is not between forcing users to use intrusion or wrapping. The choice is whether or not to provide access to an allocation-free layer of functionality. An intrusive-primitive library like LQ lets users choose when to make this tradeoff. A wrapped-primitive library like STL forces users to incur the costs of wrapping, whether or not they access its benefits.

\subsection{Simultaneity: single vs.\ multi-static vs.\ dynamic}
\label{toc:lst:issue:simultaneity}

\begin{figure}
\parbox[t]{3.5in} {
\lstinput[language=C++]{20-60}{lst-issues-multi-static.run.c}
}\parbox[t]{20in} {
~\\
\includegraphics[page=1]{lst-issues-direct.pdf} \\
~\\
\hspace*{1.5in}\includegraphics[page=2]{lst-issues-direct.pdf}
}
\caption{
	Example of simultaneity using LQ lists.
	The zoomed-out diagram (right/top) shows the complete multi-linked data structure.
	This structure can navigate all requests in priority order ({\color{blue}blue}), and navigate among requests with a common request value ({\color{orange}orange}).
	The zoomed-in diagram (right/bottom) shows how the link fields connect the nodes on different lists.
}
\label{fig:lst-issues-multi-static}
\end{figure}

\newterm{Simultaneity} deals with the question: In how many different lists can a node be stored, at the same time? Figure~\ref{fig:lst-issues-multi-static} shows an example that can traverse all requests in priority order (field @pri@) or navigate among requests with the same request value (field @rqr@). Each of ``by priority'' and ``by common request value'' is a separate list. For example, there is a single priority-list linked in order [1, 2, 2, 3, 3, 4], where nodes may have the same priority, and there are three common request-value lists combining requests with the same values: [42, 42], [17, 17, 17], and [99], giving four head nodes, one for each list. The example shows a list can encompass all the nodes (by-priority) or only a subset of the nodes (three request-value lists).

As stated, the limitation of intrusive is knowing a priori how many groups of links are needed for the maximum number of simultaneous lists. Thus, the intrusive LQ example supports multiple, but statically many, linked lists. Note, it is possible to reuse links for different purposes, \eg if a list is linked one way at one time and another way at another time, and these times do not overlap, the two different linkings can use the same link fields.
This feature is used in the \CFA runtime, where a thread node may be on a blocked or running list, but never on both simultaneously.

Now consider the STL in the wrapped-reference arrangement of Figure~\ref{f:WrappedRef}. Here it is possible to construct the same simultaneity by creating multiple STL lists, each pointing at the appropriate nodes. Each group of intrusive links becomes the links of a separate STL list. The upside is the unlimited number of lists a node can be associated with simultaneously, because any number of STL lists can be created dynamically. The downside is the dynamic allocation of the link nodes and the management of multiple lists. Note, it might be possible to wrap the multiple lists in another type to hide this implementation issue.

Now consider the STL in the wrapped-value arrangement of Figure~\ref{f:WrappedValue}. Again, it is possible to construct the same simultaneity by creating multiple STL lists, each copying the appropriate nodes, where the intrusive links become the links for each separate STL list. The upside is the same as for the wrapped-reference arrangement, with an unlimited number of list bindings. The downside is the dynamic allocation and significant storage usage due to node copying. As well, it is unclear how node updates work in this scenario, without some notion of ultimately merging node information.

% https://www.geeksforgeeks.org/introduction-to-multi-linked-list/ -- example of building a bespoke multi-linked list out of STL primitives (providing indication that STL doesn't offer one); offers dynamic directionality by embedding `vector pointers;`
% When allowing multiple static directions, frameworks differ in their ergonomics for
% the typical case: when the user needs only one direction, vs.\ the atypical case, when the user needs several.
% LQ's ergonomics are well-suited to the uncommon case of multiple list directions.
% Its intrusion declaration and insertion operation both use a mandatory explicit parameter naming the direction.
% This decision works well in Figure~\ref{fig:lst-issues-multi-static}, where the names @by_pri@ and @by_rqr@ work well,
% but it clutters Figure~\ref{f:Intrusive}, where a contrived name must be invented and used.
% The example uses @x@; @reqs@ would be a more readily ignored choice.
\PAB{wording?}

\uCpp is a concurrent extension of \CC, which provides a basic set of intrusive lists~\cite[appx.~F]{uC++}, where the link fields are defined with the data fields using inheritance.
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{singly-linked list} & \multicolumn{1}{c}{doubly-linked list} \\
\begin{c++}
struct Node : public uColable {
	int i;					// data
	Node( int i ) : i{ i } {}
};
\end{c++}
&
\begin{c++}
struct Node : public uSeqable {
	int i;					// data
	Node( int i ) : i{ i } {}
};
\end{c++}
\end{tabular}
\end{cquote}
A node can be placed in the following data structures depending on its link fields: @uStack@ and @uQueue@ (singly linked), and @uSequence@ (doubly linked). A node inheriting from @uSeqable@ can appear in a singly or doubly linked structure. Structure operations implicitly know the link-field location through the inheritance.
\begin{c++}
uStack<Node> stack;
Node node;
stack.push( node );			// link fields at beginning of node
\end{c++}
Simultaneity cannot be done with multiple inheritance, because there is no mechanism to either know the order of inheritance fields or name each inheritance. Instead, a special type is required that contains the link fields and points at the node.
\begin{cquote}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\begin{c++}
struct NodeDL : public uSeqable {
	@Node & node;@			// node reference
	NodeDL( Node & node ) : node( node ) {}
	Node & get() const { return node; }
};
\end{c++}
&
\begin{c++}
struct Node : public uColable {
	int i;					// data
	@NodeDL nodeseq;@		// embedded intrusive links
	Node( int i ) : i{ i }, @nodeseq{ *this }@ {}
};
\end{c++}
\end{tabular}
\end{cquote}
This node can now be inserted into a doubly-linked list through the embedded intrusive links.
\begin{c++}
uSequence<NodeDL> sequence;
sequence.add_front( @node.nodeseq@ );		$\C{// link fields in embedded type}$
NodeDL nodedl = sequence.remove( @node.nodeseq@ );
int i = nodedl.@get()@.i;					$\C{// indirection to node}$
\end{c++}
Hence, the \uCpp approach optimizes one set of intrusive links through the \CC inheritance mechanism, and falls back onto the LQ approach of explicit declarations for additional intrusive links. However, \uCpp cannot apply the LQ trick for finding the links and node.

The major ergonomic difference among the approaches is naming and name usage. The intrusive model requires naming each set of intrusive links, \eg @by_pri@ and @by_rqr@ in \VRef[Figure]{fig:lst-issues-multi-static}. \uCpp cheats by using inheritance for the first set of intrusive links, after which explicit naming of intrusive links is required. Furthermore, these link names must be used in all list operations, including iterating, whereas wrapped reference and value hide the link names in the implicit dynamically-allocated node-structure. At issue is whether an API for simultaneity can allow one list (when several are not wanted) to be \emph{implicit}. \uCpp allows it, LQ does not, and the STL does not face this question.

\subsection{User integration: preprocessed vs.\ type-system mediated}

% example of poor error message due to LQ's preprocessed integration
% programs/lst-issues-multi-static.run.c:46:1: error: expected identifier or '(' before 'do'
%    46 | LIST_INSERT_HEAD(&reqs_rtr_42, &r42b, by_rqr);
%       | ^~~~~~~~~~~~~~~~
%
% ... not a wonderful example; it was a missing semicolon on the preceding line; but at least real

\subsection{List identity: headed vs.\ ad-hoc}
\label{toc:lst:issue:ident}

All examples so far have used distinct user-facing types: an item found in a list (type @req@, of variables like @r1@), and a list (type @reql@ or @list<req>@, of variables like @reqs@ or @reqs_rqr_42@). \see{Figure~\ref{fig:lst-issues-attach} and Figure~\ref{fig:lst-issues-multi-static}} The latter type is a head, and these examples are of headed lists. A bespoke ``pointer to next @req@'' implementation often omits the latter type. The resulting identity model is ad-hoc.

\begin{figure}
\centering
\includegraphics{lst-issues-ident.pdf}
\caption{
	Comparison of headed and ad-hoc list identities, for various list lengths.
	Pointers are logical, meaning that no claim is intended about which part of an object is being targeted.
}
\label{fig:lst-issues-ident}
\end{figure}

Figure~\ref{fig:lst-issues-ident} shows the identity model's impact on the existence of certain conceptual constructs, like zero-length lists and unlisted elements. In headed thinking, there are length-zero lists (heads with no elements), and an element can be listed or not listed. In ad-hoc thinking, there are no length-zero lists and every element belongs to a list of length at least one.
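A minimal sketch of the ad-hoc model (the type and helper function are illustrative, not taken from any of the libraries above):
\begin{cfa}
struct req {					// ad-hoc identity: no separate head type exists
	int pri;
	struct req * next;			// bespoke "pointer to next req"
};
// A lone element is already a list of length one, and there is
// no value that represents an empty list.
void insert_after( struct req * cur, struct req * n ) {
	n->next = cur->next;
	cur->next = n;
}
\end{cfa}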
By omitting the head, elements can enter into an adjacency relationship without requiring that someone allocate room for the head of the possibly-resulting list, or be able to find a correct existing head.

A head defines one or more element roles, among elements that share a transitive adjacency. ``First'' and ``last'' are element roles. One moment's ``first'' need not be the next moment's. There is a cost to maintaining these roles. A runtime component of this cost is evident in LQ's offering the choice of type generators @LIST@ vs.~@TAILQ@. Its @LIST@ maintains a ``first,'' but not a ``last;'' its @TAILQ@ maintains both roles. (Both types are doubly linked, and an analogous choice is available for singly linked.)

TODO: finish making this point. See WIP in lst-issues-adhoc-*.ignore.*. The code-complexity component of the cost ... Ability to offer heads is good. Point: Does maintaining a head mean that the user has to provide more state when manipulating the list? Requiring the user to do so is bad, because the user may have lots of ``list''-typed variables in scope, and detecting that the user passed the wrong one requires testing all the listing edge cases.

\subsection{End treatment: cased vs.\ uniform}

This issue is implementation-level, relevant to achieving high performance. A linear (non-circular), nonempty linked list has a first element and a last element, whether or not the list is headed. A first element has no predecessor and a last element has no successor.

\begin{figure}
\centering
\includegraphics{lst-issues-end.pdf}
\caption{
	LQ sub-object-level representation of links and ends.
	Each object's memory is pictured as a vertical strip. Pointers' target locations, within these strips, are significant.
	Uniform treatment of the first-end is evident from an assertion like \lstinline{(**this.pred == this)} holding for all nodes \lstinline{this}, including the first one.
	Cased treatment of the last-end is evident from the symmetric proposition, \lstinline{(this.succ.pred == &this.succ)}, failing when \lstinline{this} is the last node.
}
\label{fig:lst-issues-end}
\end{figure}

End treatment refers to how the list represents the lack of a predecessor/successor. The following elaboration refers to the LQ representations, detailed in Figure~\ref{fig:lst-issues-end}. The most obvious representation, a null pointer, mandates a cased end treatment. LQ uses this representation for its successor/last. Consider the operation of inserting after a given element. A doubly-linked list must update the given node's successor, to make its predecessor-pointer refer to the new node. This step must happen when the given node has a successor (when its successor pointer is non-null), and must be skipped when it does not (when its successor pointer cannot be navigated). So this operation contains a branch, to decide which case is happening. All branches have pathological cases where branch prediction's success rate is low and the execution pipeline stalls often.

This branch is sometimes avoidable; the result is a uniform end treatment. Here is one example of such an implementation; it works for a headed list. LQ uses this representation for its predecessor/first. (All LQ offerings are headed at the front.) For predecessor/first navigation, the relevant operation is inserting before a given element. LQ's predecessor representation is not a pointer to a node, but a pointer to a pseudo-successor pointer.
When there is a predecessor node, that node contains a real-successor pointer; it is the target of the reference node's predecessor pointer. When there is no predecessor node, the reference node (now known to be the first node) acts as the pseudo-successor of the list head. The list head contains a pointer to the first node. When inserting before the first node, the list head's first-pointer is the one that must change. So, the first node's ``predecessor'' pointer (a pointer to a pseudo-successor pointer) targets the list head's first-pointer. Now, inserting before a given element performs the same logic in both cases: follow the guaranteed-non-null predecessor pointer, and update what is found there to refer to the new node.

Applying this trick makes it possible to have list-management routines that are completely free of control flow. With a path length of only a few instructions (less than the processor's pipeline depth), such list-management operations are often practically free, with all the variance being due to the (inevitable) cache status of the nodes being managed.

\section{String}

A string is a sequence of symbols, where the form of a symbol can vary significantly: 7/8-bit characters (ASCII/Latin-1), 2/4-byte characters/symbols (Unicode), or variable-length encoded characters (UTF-8/16). A string can be read left-to-right, right-to-left, or top-to-bottom, and can have stacked elements (Arabic).

A C character constant is an ASCII/Latin-1 character enclosed in single quotes, \eg @'x'@, @'@\textsterling@'@. A wide C character constant is the same, except prefixed by the letter @L@, @u@, or @U@, \eg @u'\u25A0'@ (black square), where the @\u@ identifies a universal character name. A character can also be formed from an escape sequence, which expresses a non-typable character, \eg @'\f'@ form feed, a delimiter character, \eg @'\''@ embedded single quote, or a raw character, \eg @'\xa3'@ \textsterling.

A C character string is zero or more regular, wide, or escape characters enclosed in double quotes, \eg @"xyz\n"@. The kind of characters in the string is denoted by a prefix: UTF-8 characters are prefixed by @u8@; wide characters are prefixed by @L@, @u@, or @U@. For UTF-8 string literals, the array elements have type @char@ and are initialized with the characters of the multibyte character sequences, \eg @u8"\xe1\x90\x87"@ (Canadian syllabics Y-Cree OO). For wide string literals prefixed by the letter @L@, the array elements have type @wchar_t@ and are initialized with the wide characters corresponding to the multibyte character sequence, \eg @L"abc@$\mu$@"@, and are read/printed using @wscanf@/@wprintf@. The value of a wide character is implementation-defined, usually a UTF-16 character. For wide string literals prefixed by the letter @u@ or @U@, the array elements have type @char16_t@ or @char32_t@, respectively, and are initialized with the wide characters corresponding to the multibyte character sequence, \eg @u"abc@$\mu$@"@, @U"abc@$\mu$@"@. The value of a @"u"@ character is a UTF-16 character; the value of a @"U"@ character is a UTF-32 character. The value of a string literal containing a multibyte character or escape sequence not represented in the execution character set is implementation-defined.

C strings are null-terminated rather than maintaining a separate string length.
\begin{quote}
Technically, a string is an array whose elements are single characters. The compiler automatically places the null character @\0@ at the end of each such string, so programs can conveniently find the end.
This representation means that there is no real limit to how long a string can be, but programs have to scan one completely to determine its length.~\cite[p.~36]{C:old}
\end{quote}
Unfortunately, this design decision is both unsafe and inefficient. It is a common error in C to forget to reserve storage in a character array for the terminator, or to overwrite the terminator, resulting in array overruns in string operations. The need to repeatedly scan an entire string to determine its length can result in significant cost, as it is impossible to cache the length in many cases. C strings are fixed size because arrays are used for the implementation. However, string manipulation commonly results in dynamically-sized temporary and final string values, \eg @strcpy@, @strcat@, @strcmp@, @strlen@, @strstr@, \etc. As a result, storage management for C strings is a nightmare, quickly resulting in array overruns and incorrect results.

Collectively, these design decisions make working with strings in C awkward, time consuming, and unsafe. While there are companion string routines that take the maximum lengths of strings to prevent array overruns, \eg @strncpy@ and @strncat@, it means the semantics of the operation can fail because strings are truncated. Suffice it to say, C is not a go-to language for string applications, which is why \CC introduced the dynamically-sized @string@ type.
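A small sketch of the pitfalls just described (the buffer size and contents are illustrative):
\begin{cfa}
#include <stdio.h>
#include <string.h>

int main( void ) {
	char buf[8];
	// strcpy( buf, "0123456789" );               // overruns buf: 10 characters plus '\0' do not fit
	strncpy( buf, "0123456789", sizeof( buf ) );  // truncates AND leaves buf without a terminator
	buf[sizeof( buf ) - 1] = '\0';                // the caller must re-terminate by hand
	printf( "%zu\n", strlen( buf ) );             // 7: every length query rescans the string
	return 0;
}
\end{cfa}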