1\chapter{Background}
2
Since this work builds on C, it is necessary to explain the C mechanisms and their shortcomings for arrays, linked lists, and strings.
4
5
6\section{Ill-Typed Expressions}
7
8C reports many ill-typed expressions as warnings.
9For example, these attempts to assign @y@ to @x@ and vice-versa are obviously ill-typed.
10\lstinput{12-15}{bkgd-c-tyerr.c}
11with warnings:
12\begin{cfa}
13warning: assignment to 'float *' from incompatible pointer type 'void (*)(void)'
14warning: assignment to 'void (*)(void)' from incompatible pointer type 'float *'
15\end{cfa}
16Similarly,
17\lstinput{17-19}{bkgd-c-tyerr.c}
18with warning:
19\begin{cfa}
20warning: passing argument 1 of 'f' from incompatible pointer type
21note: expected 'void (*)(void)' but argument is of type 'float *'
22\end{cfa}
and a segmentation fault at runtime.
Clearly, @gcc@ understands these ill-typed cases, and yet allows the program to compile, which seems inappropriate.
25Compiling with flag @-Werror@, which turns warnings into errors, is often too pervasive, because some warnings are just warnings, \eg an unused variable.
26In the following discussion, \emph{ill-typed} means giving a nonzero @gcc@ exit condition with a message that discusses typing.
27Note, \CFA's type-system rejects all these ill-typed cases as type mismatch errors.
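A minimal sketch of such code, using illustrative names @x@, @y@, @f@, and @g@, is:
\begin{cfa}
void g( void ) {}
void f( void (* p)( void ) ) { p(); }	// calls its function-pointer parameter
float pi = 3.14;
float * x = &pi;						// pointer to data
void (* y)( void ) = g;					// pointer to code
x = y;									// warning: incompatible pointer type
y = x;									// warning: incompatible pointer type
f( x );									// warning: incompatible pointer type, then crash when p() jumps into data
\end{cfa}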
28
29% That @f@'s attempt to call @g@ fails is not due to 3.14 being a particularly unlucky choice of value to put in the variable @pi@.
30% Rather, it is because obtaining a program that includes this essential fragment, yet exhibits a behaviour other than "doomed to crash," is a matter for an obfuscated coding competition.
31
32% A "tractable syntactic method for proving the absence of certain program behaviours by classifying phrases according to the kinds of values they compute"*1 rejected the program.
33% The behaviour (whose absence is unprovable) is neither minor nor unlikely.
34% The rejection shows that the program is ill-typed.
35%
36% Yet, the rejection presents as a GCC warning.
37% *1 TAPL-pg1 definition of a type system
38
39% reading C declaration: https://c-faq.com/decl/spiral.anderson.html
40
41
42\section{Reading Declarations}
43
44A significant area of confusion is reading C declarations, which results from interesting design choices.
45\begin{itemize}[leftmargin=*]
46\item
47In C, it is possible to have a value and a pointer to it.
48\begin{cfa}
49int i = 3, * pi = &i;
50\end{cfa}
Extending this idea, it should be possible to have an array of values and a pointer to it.
52\begin{cfa}
53int a[5] = { 1, 2, 3, 4, 5 }, * pa[5] = &a;
54\end{cfa}
55However, the declaration of @pa@ is incorrect because dimension has higher priority than pointer, so the declaration means an array of 5 pointers to integers.
56The declarations for the two interpretations of @* [5]@ are:
57\begin{cquote}
58\begin{tabular}[t]{@{}ll@{\hspace{15pt}}|@{\hspace{15pt}}ll@{}}
59\begin{cfa}
60int (* pa)[5]
61\end{cfa}
62&
63\raisebox{-0.4\totalheight}{\includegraphics{PtrToArray.pdf}}
64&
65\begin{cfa}
66int * ap[5]
67\end{cfa}
68&
69\raisebox{-0.75\totalheight}{\includegraphics{ArrayOfPtr.pdf}}
70\end{tabular}
71\end{cquote}
72If the priorities of dimension and pointer were reversed, the declarations become more intuitive: @int * pa[5]@ and @int * (ap[5])@.
73\item
74This priority inversion extends into an expression between dereference and subscript, so usage syntax mimics declaration.
75\begin{cquote}
76\setlength{\tabcolsep}{20pt}
77\begin{tabular}{@{}ll@{}}
78\begin{cfa}
79int (* pa)[5]
80 (*pa)[i] += 1;
81\end{cfa}
82&
83\begin{cfa}
84int * ap[5]
85 *ap[i] += 1;
86\end{cfa}
87\end{tabular}
88\end{cquote}
89(\VRef{s:ArraysDecay} shows pointer decay allows the first form to be written @pa[i] += 1@, which is further syntax confusion.)
90Again, if the priorities were reversed, the expressions become more intuitive: @*pa[i] += 1@ and @*(ap[i]) += 1@.
Note, a similar priority inversion exists between dereference @*@ and field selection @.@ (period), so @*ps.f@ means @*(ps.f)@;
this anomaly is \emph{fixed} with operator @->@, which performs the two operations in the more intuitive order: @sp->f@ $\Rightarrow$ @(*sp).f@ \see{the sketch following this list}.
93\end{itemize}
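The following small sketch, with an illustrative structure @S@, shows the anomaly and its @->@ fix.
\begin{cfa}
int i;
struct S { int * f; } s = { &i }, * sp = &s;
*s.f = 3;			// parsed as *(s.f): dereference the field, not the structure
*(sp->f) = 4;		// sp->f is shorthand for (*sp).f
\end{cfa}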
While attempting to make the declaration and expression contexts consistent is a laudable goal, it has not worked out in practice, even though Dennis Ritchie believed otherwise:
95\begin{quote}
96In spite of its difficulties, I believe that the C's approach to declarations remains plausible, and am comfortable with it; it is a useful unifying principle.~\cite[p.~12]{Ritchie93}
97\end{quote}
98After all, reading a C array type is easy: just read it from the inside out, and know when to look left and when to look right!
99Unfortunately, \CFA cannot correct these operator priority inversions without breaking C compatibility.
100
101The alternative solution is for \CFA to provide its own type, variable and routine declarations, using a more intuitive syntax.
102The new declarations place qualifiers to the left of the base type, while C declarations place qualifiers to the right of the base type.
103The qualifiers have the same syntax and semantics in \CFA as in C, so there is nothing to learn.
104Then, a \CFA declaration is read left to right, where a function return-type is enclosed in brackets @[@\,@]@.
105\begin{cquote}
106\begin{tabular}{@{}l@{\hspace{3em}}ll@{}}
107\multicolumn{1}{c@{\hspace{3em}}}{\textbf{C}} & \multicolumn{1}{c}{\textbf{\CFA}} & \multicolumn{1}{c}{\textbf{read left to right}} \\
108\begin{cfa}
109int @*@ x1 @[5]@;
110int @(*@x2@)[5]@;
111int @(*@f( int p )@)[5]@;
112\end{cfa}
113&
114\begin{cfa}
115@[5] *@ int x1;
116@* [5]@ int x2;
117@[ * [5] int ]@ f( int p );
118\end{cfa}
119&
120\begin{cfa}
121// array of 5 pointers to int
122// pointer to array of 5 int
123// function returning pointer to array of 5 ints
124\end{cfa}
125\\
126& &
127\LstCommentStyle{//\ \ \ and taking an int argument}
128\end{tabular}
129\end{cquote}
As declaration size increases, it becomes correspondingly difficult to read and understand the C form, whereas reading and understanding a \CFA declaration has linear complexity.
131Note, writing declarations left to right is common in other programming languages, where the function return-type is often placed after the parameter declarations, \eg \CC \lstinline[language=C++]{auto f( int ) -> int}.
132(Note, putting the return type at the end deviates from where the return value logically appears in an expression, @x = f(...)@ versus @f(...) = x@.)
133Interestingly, programmers normally speak a declaration from left to right, regardless of how it is written.
(It is unclear if Hebrew or Arabic speakers say declarations right to left.)
135
\VRef[Table]{bkgd:ar:usr:avp} introduces the many layers of the C and \CFA array story, where the \CFA story is discussed in \VRef[Chapter]{c:Array}.
137The \CFA-thesis column shows the new array declaration form, which is my contribution to safety and ergonomics.
138The table shows there are multiple yet equivalent forms for the array types under discussion, and subsequent discussion shows interactions with orthogonal (but easily confused) language features.
139Each row of the table shows alternate syntactic forms.
140The simplest occurrences of types distinguished in the preceding discussion are marked with $\triangleright$.
Removing the declared variable @x@ gives the type used for a variable, structure field, cast, or error message.
142Unfortunately, parameter declarations have more syntactic forms and rules.
143
144\begin{table}
145\centering
146\caption{Syntactic Reference for Array vs Pointer. Includes interaction with \lstinline{const}ness.}
147\label{bkgd:ar:usr:avp}
148\begin{tabular}{ll|l|l|l}
149 & Description & \multicolumn{1}{c|}{C} & \multicolumn{1}{c|}{\CFA} & \multicolumn{1}{c}{\CFA-thesis} \\
150 \hline
151$\triangleright$ & value & @T x;@ & @T x;@ & \\
152 \hline
153 & immutable value & @const T x;@ & @const T x;@ & \\
154 & & @T const x;@ & @T const x;@ & \\
155 \hline \hline
156$\triangleright$ & pointer to value & @T * x;@ & @* T x;@ & \\
157 \hline
158 & immutable ptr. to val. & @T * const x;@ & @const * T x;@ & \\
159 \hline
160 & ptr. to immutable val. & @const T * x;@ & @* const T x;@ & \\
161 & & @T const * x;@ & @* T const x;@ & \\
162 \hline \hline
163$\triangleright$ & array of value & @T x[10];@ & @[10] T x@ & @array(T, 10) x@ \\
164 \hline
165 & ar.\ of immutable val. & @const T x[10];@ & @[10] const T x@ & @const array(T, 10) x@ \\
166 & & @T const x[10];@ & @[10] T const x@ & @array(T, 10) const x@ \\
167 \hline
168 & ar.\ of ptr.\ to value & @T * x[10];@ & @[10] * T x@ & @array(T *, 10) x@ \\
169 & & & & @array(* T, 10) x@ \\
170 \hline
	& ar.\ of imm. ptr.\ to val.	& @T * const x[10];@	& @[10] const * T x@	& @array(T * const, 10) x@ \\
172 & & & & @array(const * T, 10) x@ \\
173 \hline
174 & ar.\ of ptr.\ to imm. val. & @const T * x[10];@ & @[10] * const T x@ & @array(const T *, 10) x@ \\
175 & & @T const * x[10];@ & @[10] * T const x@ & @array(* const T, 10) x@ \\
176 \hline \hline
177$\triangleright$ & ptr.\ to ar.\ of value & @T (*x)[10];@ & @* [10] T x@ & @* array(T, 10) x@ \\
178 \hline
179 & imm. ptr.\ to ar.\ of val. & @T (* const x)[10];@ & @const * [10] T x@ & @const * array(T, 10) x@ \\
180 \hline
181 & ptr.\ to ar.\ of imm. val. & @const T (*x)[10];@ & @* [10] const T x@ & @* const array(T, 10) x@ \\
182 & & @T const (*x)[10];@ & @* [10] T const x@ & @* array(T, 10) const x@ \\
183 \hline
184 & ptr.\ to ar.\ of ptr.\ to val. & @T *(*x)[10];@ & @* [10] * T x@ & @* array(T *, 10) x@ \\
185 & & & & @* array(* T, 10) x@ \\
186 \hline
187\end{tabular}
188\end{table}
189
190
191\section{Array}
192\label{s:Array}
193
194At the start, the C language designers made a significant design mistake with respect to arrays.
195\begin{quote}
196In C, there is a strong relationship between pointers and arrays, strong enough that pointers and arrays really should be treated simultaneously.
197Any operation which can be achieved by array subscripting can also be done with pointers.~\cite[p.~93]{C:old}
198\end{quote}
199Accessing any storage requires pointer arithmetic, even if it is just base-displacement addressing in an instruction.
200The conjoining of pointers and arrays could also be applied to structures, where a pointer references a structure field like an array element.
201Finally, while subscripting involves pointer arithmetic (as does a field reference @x.y.z@), the computation is complex for multi-dimensional arrays and requires array descriptors to know stride lengths along dimensions.
202Many C errors result from manually performing pointer arithmetic instead of using language subscripting so the compiler performs the arithmetic.
203
204Some modern C textbooks and web sites erroneously suggest manual pointer arithmetic is faster than subscripting.
205When compiler technology was young, this statement might have been true.
206However, a sound and efficient C program coupled with a modern C compiler does not require explicit pointer arithmetic.
207For example, the @gcc@ compiler at @-O3@ generates identical code for the following two summation loops.
208\begin{cquote}
209\vspace*{-10pt}
210\begin{cfa}
211int a[1000], sum;
212\end{cfa}
213\setlength{\tabcolsep}{20pt}
214\begin{tabular}{@{}ll@{}}
215\begin{cfa}
216for ( int i = 0; i < 1000; i += 1 ) {
217 sum += a[i];
218}
219\end{cfa}
220&
221\begin{cfa}
222for ( int * ip = a ; ip < &a[1000]; ip += 1 ) {
223 sum += *ip;
224}
225\end{cfa}
226\end{tabular}
227\end{cquote}
I believe it is possible to refute any code example purporting to show pointer arithmetic is faster than subscripting.
This belief stems from the performance work I did on \CFA arrays, where \CFA subscripting generates code equivalent in performance to C subscripting.
230
231Unfortunately, C semantics want a programmer to \emph{believe} an array variable is a \emph{pointer to its first element}.
232This desire becomes apparent by a detailed inspection of an array declaration.
233\lstinput{34-34}{bkgd-carray-arrty.c}
234The inspection begins by using @sizeof@ to provide program semantics for the intuition of an expression's type.
235An architecture with 64-bit pointer size is used, to remove irrelevant details.
236\lstinput{35-36}{bkgd-carray-arrty.c}
237Now consider the @sizeof@ expressions derived from @ar@, modified by adding pointer-to and first-element (and including unnecessary parentheses to avoid any confusion about precedence).
238\lstinput{37-40}{bkgd-carray-arrty.c}
Given that arrays are contiguous and the size of @float@ is 4, C programmers commonly reason that the size of @ar@, with 10 floats, is 40 bytes.
240Equally, C programmers know the size of a pointer to the first array element is 8.
241% Now, set aside for a moment the claim that this first assertion is giving information about a type.
242Clearly, an array and a pointer to its first element are different.
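A minimal sketch of the inspection, assuming the 64-bit architecture stated above, is:
\begin{cfa}
float ar[10];
static_assert( sizeof( ar ) == 10 * sizeof( float ) );	// 40: the whole array
static_assert( sizeof( &(ar[0]) ) == 8 );				// pointer to the first element
static_assert( sizeof( ar ) != sizeof( &(ar[0]) ) );	// array and pointer differ
\end{cfa}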
243
In fact, the idea that there is such a thing as a pointer to an array may be surprising.
It is not the same thing as a pointer to the first element.
246\lstinput{42-45}{bkgd-carray-arrty.c}
247The first assignment generates:
248\begin{cfa}
249warning: assignment to `float (*)[10]' from incompatible pointer type `float *'
250\end{cfa}
251and the second assignment generates the opposite.
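A sketch of the two assignments, using illustrative names @pe@ and @pa@, is:
\begin{cfa}
float ar[10];
float * pe = &(ar[0]);			// pointer to the first element
float (* pa)[10] = &ar;			// pointer to the whole array: a different type
pe = pa;						// warning: incompatible pointer type
pa = pe;						// warning: incompatible pointer type
\end{cfa}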
252
253The inspection now refutes any suggestion that @sizeof@ is informing about allocation rather than type information.
254Note, @sizeof@ has two forms, one operating on an expression and the other on a type.
255Using the type form yields the same results as the prior expression form.
256\lstinput{46-49}{bkgd-carray-arrty.c}
257The results are also the same when there is no allocation at all.
258This time, starting from a pointer-to-array type:
259\lstinput{51-57}{bkgd-carray-arrty.c}
260Hence, in all cases, @sizeof@ is reporting on type information.
261
Therefore, thinking of an array as a pointer to its first element is too simplistic an analogy, and it is not backed up by the type system.
This misguided analogy works for a single-dimension array, but there is no advantage other than possibly teaching beginner programmers about basic runtime array-access.
264
265Continuing, there is a short form for declaring array variables using length information provided implicitly by an initializer.
266\lstinput{59-62}{bkgd-carray-arrty.c}
267The compiler counts the number of initializer elements and uses this value as the first dimension.
268Unfortunately, the implicit element counting does not extend to dimensions beyond the first.
269\lstinput{64-67}{bkgd-carray-arrty.c}
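A sketch of both behaviours, with illustrative names, is:
\begin{cfa}
float a[] = { 1.0, 2.0, 3.0 };				// first dimension inferred as 3
static_assert( sizeof( a ) == 3 * sizeof( float ) );
float m[][2] = { {1, 2}, {3, 4}, {5, 6} };	// only the first dimension may be omitted
// float bad[][] = { {1, 2}, {3, 4} };		// rejected: inner dimension must be specified
\end{cfa}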
270
My observation is based on recognizing:
272\begin{itemize}[leftmargin=*,itemsep=0pt]
273 \item There is value in using a type that knows its size.
274 \item The type pointer to the (first) element does not.
275 \item C \emph{has} a type that knows the whole picture: array, \eg @T[10]@.
276 \item This type has all the usual derived forms, which also know the whole picture.
277 A noteworthy example is pointer to array, \eg @T (*)[10]@.
278\end{itemize}
279
280
281\subsection{Arrays Decay and Pointers Diffract}
282\label{s:ArraysDecay}
283
284The last section established the difference among these four types:
285\lstinput{3-6}{bkgd-carray-decay.c}
286But the expression used for obtaining the pointer to the first element is pedantic.
287The root of all C programmer experience with arrays is the shortcut
288\lstinput{8-8}{bkgd-carray-decay.c}
289which reproduces @pa0@, in type and value:
290\lstinput{9-9}{bkgd-carray-decay.c}
291The validity of this initialization is unsettling, in the context of the facts established in \VRef{s:Array}.
292Notably, it initializes name @pa0x@ from expression @ar@, when they are not of the same type:
293\lstinput{10-10}{bkgd-carray-decay.c}
294
295So, C provides an implicit conversion from @float[10]@ to @float *@.
296\begin{quote}
297Except when it is the operand of the @sizeof@ operator, or the unary @&@ operator, or is a string literal used to
initialize an array, an expression that has type ``array of \emph{type}'' is converted to an expression with type
299``pointer to \emph{type}'' that points to the initial element of the array object~\cite[\S~6.3.2.1.3]{C11}
300\end{quote}
301This phenomenon is the famous \newterm{pointer decay}, which is a decay of an array-typed expression into a pointer-typed one.
It is worth noting that the list of exceptional cases does not feature the occurrence of @ar@ in @ar[i]@.
303Thus, subscripting happens on pointers not arrays.
304
305Subscripting proceeds first with pointer decay, if needed.
Next, \cite[\S~6.5.2.1.2]{C11} explains that @ar[i]@ is treated as if it were @(*((ar)+(i)))@.
307\cite[\S~6.5.6.8]{C11} explains that the addition, of a pointer with an integer type, is defined only when the pointer refers to an element that is in an array, with a meaning of @i@ elements away from, which is valid if @ar@ is big enough and @i@ is small enough.
308Finally, \cite[\S~6.5.3.2.4]{C11} explains that the @*@ operator's result is the referenced element.
Taken together, these rules illustrate that @ar[i]@ and @i[ar]@ mean the same thing!
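For example, the following (unidiomatic) subscripts denote the same element.
\begin{cfa}
float ar[10];
ar[3] = 3.14;		// usual spelling: *(ar + 3)
3[ar] = 3.14;		// legal: *(3 + ar) is the same element
\end{cfa}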
310
Subscripting a pointer whose target the standard does not sanction is still practically well-defined.
While the standard affords a C compiler freedom about the meaning of an out-of-bounds access, or of subscripting a pointer that does not refer to an array element at all,
the fact that C is famously both high-performance and not bounds-checked leads to an expectation that the runtime handling is uniform across legal and illegal accesses.
314Moreover, consider the common pattern of subscripting on a @malloc@ result:
315\begin{cfa}
316float * fs = malloc( 10 * sizeof(float) );
317fs[5] = 3.14;
318\end{cfa}
319The @malloc@ behaviour is specified as returning a pointer to ``space for an object whose size is'' as requested (\cite[\S~7.22.3.4.2]{C11}).
320But \emph{nothing} more is said about this pointer value, specifically that its referent might \emph{be} an array allowing subscripting.
321
Under this assumption, a pointer @p@ being subscripted (or added to, then dereferenced) by any value (positive, zero, or negative) gives a view of the program's entire address space, centred around the @p@ address, divided into adjacent @sizeof(*p)@ chunks, each potentially (re)interpreted as @typeof(*p)@.
323I call this phenomenon \emph{array diffraction}, which is a diffraction of a single-element pointer into the assumption that its target is in the middle of an array whose size is unlimited in both directions.
324No pointer is exempt from array diffraction.
325No array shows its elements without pointer decay.
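A sketch of diffraction on a single-element pointer, where the commented accesses compile like any subscript but do not touch valid storage, is:
\begin{cfa}
float x = 1.0;
float * p = &x;
p[0] = 2.0;			// fine: p behaves as a one-element array
// p[1] = 2.0;		// compiles identically, but the target is not part of x
// p[-1] = 2.0;		// likewise: the assumed array extends in both directions
\end{cfa}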
326
327A further pointer--array confusion, closely related to decay, occurs in parameter declarations.
328\cite[\S~6.7.6.3.7]{C11} explains that when an array type is written for a parameter,
329the parameter's type becomes a type that can be summarized as the array-decayed type.
330The respective handling of the following two parameter spellings shows that the array and pointer versions are identical.
331\lstinput{12-16}{bkgd-carray-decay.c}
Because the meaning of @sizeof(x)@ has changed, compared with a similarly spelled local-variable declaration,
@gcc@ also gives this code a warning for the first assertion:
334\begin{cfa}
335warning: 'sizeof' on array function parameter 'x' will return size of 'float *'
336\end{cfa}
337The caller of such a function is left with the reality that a pointer parameter is a pointer, no matter how it is spelled:
338\lstinput{18-21}{bkgd-carray-decay.c}
This fragment gives a warning for the first argument in the second call.
340\begin{cfa}
341warning: 'f' accessing 40 bytes in a region of size 4
342\end{cfa}
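Putting these behaviours together, a minimal sketch with illustrative names is:
\begin{cfa}
void f( float x[10] ) {				// parameter type is really float *
	// sizeof( x ) == sizeof( float * ), not 10 * sizeof( float )
}
float arg = 3.14;
f( &arg );							// accepted: any float * suffices, despite the [10]
\end{cfa}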
343
344The shortened parameter syntax @T x[]@ is a further way to spell \emph{pointer}.
345Note the opposite meaning of this spelling now, compared with its use in local variable declarations.
346This point of confusion is illustrated in:
347\lstinput{23-30}{bkgd-carray-decay.c}
348Note, \CC gives a warning for the initialization of @cp@.
349\begin{cfa}
350warning: ISO C++ forbids converting a string constant to 'char*'
351\end{cfa}
352and C gives a warning at the call of @edit@, if @const@ is added to the declaration of @cp@.
353\begin{cfa}
354warning: passing argument 1 of 'edit' discards 'const' qualifier from pointer target type
355\end{cfa}
356The basic two meanings, with a syntactic difference helping to distinguish, are illustrated in the declarations of @ca@ \vs @cp@, whose subsequent @edit@ calls behave differently.
357The syntax-caused confusion is in the comparison of the first and last lines, both of which use a literal to initialize an object declared with spelling @T x[]@.
358But these initialized declarations get opposite meanings, depending on whether the object is a local variable or a parameter!
359
360In summary, when a function is written with an array-typed parameter,
361\begin{itemize}[leftmargin=*]
362 \item an appearance of passing an array by value is always an incorrect understanding,
363 \item a dimension value, if any is present, is ignored,
364 \item pointer decay is forced at the call site and the callee sees the parameter having the decayed type.
365\end{itemize}
366
367Pointer decay does not affect pointer-to-array types, because these are already pointers, not arrays.
368As a result, a function with a pointer-to-array parameter sees the parameter exactly as the caller does:
369\par\noindent
370\begin{tabular}{@{\hspace*{-0.75\parindentlnth}}l@{}l@{}}
371\lstinput{32-36}{bkgd-carray-decay.c}
372&
373\lstinput{38-42}{bkgd-carray-decay.c}
374\end{tabular}
375\par\noindent
376\VRef[Table]{bkgd:ar:usr:decay-parm} gives the reference for the decay phenomenon seen in parameter declarations.
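A sketch of a pointer-to-array parameter, with illustrative names, is:
\begin{cfa}
void g( float (* pa)[10] ) {		// pointer-to-array parameter: no decay
	static_assert( sizeof( *pa ) == 10 * sizeof( float ) );	// callee sees the full array type
}
float ar[10];
g( &ar );							// caller must supply a pointer to a 10-element array
\end{cfa}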
377
378\begin{table}
379\caption{Syntactic Reference for Decay during Parameter-Passing.
380Includes interaction with \lstinline{const}ness, where \emph{immutable} refers to a restriction on the callee's ability.}
381\label{bkgd:ar:usr:decay-parm}
382\centering
383\begin{tabular}{llllll}
384 & Description & Type & Parameter Declaration & \CFA \\
385 \hline
386 & & & @T * x,@ & @* T x,@ \\
387$\triangleright$ & pointer to value & @T *@ & @T x[10],@ & @[10] T x,@ \\
388 & & & @T x[],@ & @[] T x,@ \\
389 \hline
390 & & & @T * const x,@ & @const * T x@ \\
391 & immutable ptr.\ to val. & @T * const@ & @T x[const 10],@ & @[const 10] T x,@ \\
392 & & & @T x[const],@ & @[const] T x,@\\
393 \hline
394 & & & @const T * x,@ & @ * const T x,@ \\
395 & & & @T const * x,@ & @ * T const x,@ \\
396 & ptr.\ to immutable val. & @const T *@ & @const T x[10],@ & @[10] const T x,@ \\
397 & & @T const *@ & @T const x[10],@ & @[10] T const x,@ \\
398 & & & @const T x[],@ & @[] const T x,@ \\
399 & & & @T const x[],@ & @[] T const x,@ \\
400 \hline \hline
401 & & & @T (*x)[10],@ & @* [10] T x,@ \\
402$\triangleright$ & ptr.\ to ar.\ of val. & @T(*)[10]@ & @T x[3][10],@ & @[3][10] T x,@ \\
403 & & & @T x[][10],@ & @[][10] T x,@ \\
404 \hline
405 & & & @T ** x,@ & @** T x,@ \\
406 & ptr.\ to ptr.\ to val. & @T **@ & @T * x[10],@ & @[10] * T x,@ \\
407 & & & @T * x[],@ & @[] * T x,@ \\
408 \hline
409 & ptr.\ to ptr.\ to imm.\ val. & @const char **@ & @const char * argv[],@ & @[] * const char argv,@ \\
410 & & & \emph{others elided} & \emph{others elided} \\
411 \hline
412\end{tabular}
413\end{table}
414
415
416\subsection{Variable-length Arrays}
417
418As of C99, the C standard supports a \newterm{variable length array} (VLA)~\cite[\S~6.7.5.2.5]{C99}, providing a dynamic-fixed array feature \see{\VRef{s:ArrayIntro}}.
419Note, the \CC standard does not support VLAs, but @g++@ provides them.
420A VLA is used when the desired number of array elements is \emph{unknown} at compile time.
421\begin{cfa}
422size_t cols;
423scanf( "%d", &cols );
424double ar[cols];
425\end{cfa}
426The array dimension is read from outside the program and used to create an array of size @cols@ on the stack.
427The VLA is implemented by the @alloca@ routine, which bumps the stack pointer.
428Unfortunately, there is significant misinformation about VLAs, \eg the stack size is limited (small), or VLAs cause stack failures or are inefficient.
429VLAs exist as far back as Algol W~\cite[\S~5.2]{AlgolW} and are a sound and efficient data type.
For types with a dynamic-fixed stack, \eg coroutines or user-level threads, large VLAs can overflow the stack unless the stack is appropriately sized, so heap allocation is used when the array size is unbounded.
431
432
433\subsection{Multidimensional Arrays}
434\label{toc:mdimpl}
435
436% TODO: introduce multidimensional array feature and approaches
437
438When working with arrays, \eg linear algebra, array dimensions are referred to as \emph{rows} and \emph{columns} for a matrix, adding \emph{planes} for a cube.
439(There is little terminology for higher dimensional arrays.)
440For example, an acrostic poem\footnote{A type of poetry where the first, last or other letters in a line spell out a particular word or phrase in a vertical column.}
441can be treated as a grid of characters, where the rows are the text and the columns are the embedded keyword(s).
442Within a poem, there is the concept of a \newterm{slice}, \eg a row is a slice for the poem text, a column is a slice for a keyword.
443In general, the dimensioning and subscripting for multidimensional arrays has two syntactic forms: @m[r,c]@ or @m[r][c]@.
444
445Commonly, an array, matrix, or cube, is visualized (especially in mathematics) as a contiguous row, rectangle, or block.
This conceptualization is reinforced by subscript ordering, \eg $m_{r,c}$ for a matrix and $c_{p,r,c}$ for a cube.
447Few programming languages differ from the mathematical subscript ordering.
448However, computer memory is flat, and hence, array forms are structured in memory as appropriate for the runtime system.
449The closest representation to the conceptual visualization is for an array object to be contiguous, and the language structures this memory using pointer arithmetic to access the values using various subscripts.
450This approach still has degrees of layout freedom, such as row or column major order, \ie juxtaposed rows or columns in memory, even when the subscript order remains fixed.
451For example, programming languages like MATLAB, Fortran, Julia and R store matrices in column-major order since they are commonly used for processing column-vectors in tabular data sets but retain row-major subscripting to match with mathematical notation.
452In general, storage layout is hidden by subscripting, and only appears when passing arrays among different programming languages or accessing specific hardware.
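For a matrix with @R@ rows and @C@ columns stored contiguously, a sketch of the two subscript computations is:
\begin{cfa}
// offset of element (r, c) in a matrix with R rows and C columns
size_t row_major( size_t r, size_t c, size_t C ) { return r * C + c; }		// juxtaposed rows
size_t col_major( size_t r, size_t c, size_t R ) { return c * R + r; }		// juxtaposed columns
\end{cfa}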
453
454\VRef[Figure]{f:FixedVariable} shows two C90 approaches for manipulating a contiguous matrix.
455Note, C90 does not support VLAs.
456The fixed-dimension approach (left) uses the type system;
457however, it requires all dimensions except the first to be specified at compile time, \eg @m[][6]@, allowing all subscripting stride calculations to be generated with constants.
458Hence, every matrix passed to @fp1@ must have exactly 6 columns but the row size can vary.
The variable-dimension approach (right) ignores (violates) the type system, \ie argument and parameter types do not match, and subscripting is performed manually using pointer arithmetic in the macro @sub@.
460
461\begin{figure}
462\begin{tabular}{@{}l@{\hspace{40pt}}l@{}}
463\multicolumn{1}{c}{\textbf{Fixed Dimension}} & \multicolumn{1}{c}{\textbf{Variable Dimension}} \\
464\begin{cfa}
465
466void fp1( int rows, int m[][@6@] ) {
467 ... printf( "%d ", @m[r][c]@ ); ...
468}
469int fm1[4][@6@], fm2[6][@6@]; // no VLA
470// initialize matrixes
471fp1( 4, fm1 ); // implicit 6 columns
472fp1( 6, fm2 );
473\end{cfa}
474&
475\begin{cfa}
#define sub( m, r, c ) *(m + r * cols + c)
477void fp2( int rows, int cols, int *m ) {
478 ... printf( "%d ", @sub( m, r, c )@ ); ...
479}
480int vm1[@4@][@4@], vm2[@6@][@8@]; // no VLA
481// initialize matrixes
482fp2( 4, 4, vm1 );
483fp2( 6, 8, vm2 );
484\end{cfa}
485\end{tabular}
486\caption{C90 Fixed \vs Variable Contiguous Matrix Styles}
487\label{f:FixedVariable}
488\end{figure}
489
490Many languages allow multidimensional arrays-of-arrays, \eg in Pascal or \CC.
491\begin{cquote}
492\setlength{\tabcolsep}{15pt}
493\begin{tabular}{@{}ll@{}}
494\begin{pascal}
495var m : array[0..4, 0..4] of Integer; (* matrix *)
496type AT = array[0..4] of Integer; (* array type *)
497type MT = array[0..4] of AT; (* array of array type *)
498var aa : MT; (* array of array variable *)
499m@[1][2]@ := 1; aa@[1][2]@ := 1 (* same subscripting *)
500\end{pascal}
501&
502\begin{c++}
503int m[5][5];
504
505typedef vector< vector<int> > MT;
506MT vm( 5, vector<int>( 5 ) );
m@[1][2]@ = 1; vm@[1][2]@ = 1;
508\end{c++}
509\end{tabular}
510\end{cquote}
511The language decides if the matrix and array-of-array are laid out the same or differently.
512For example, an array-of-array may be an array of row pointers to arrays of columns, so the rows may not be contiguous in memory nor even the same length (triangular matrix).
Regardless, there is usually a uniform subscripting syntax masking the memory layout, even though a language could differentiate between the two forms using subscript syntax, \eg @m[1,2]@ \vs @aa[1][2]@.
514Nevertheless, controlling memory layout can make a difference in what operations are allowed and in performance (caching/NUMA effects).
515
516C also provides non-contiguous arrays-of-arrays.
517\begin{cfa}
518int m[5][5]; $\C{// contiguous}$
519int * aa[5]; $\C{// non-contiguous}$
520\end{cfa}
521both with different memory layout using the same subscripting, and both with different degrees of issues.
522The focus of this work is on the contiguous multidimensional arrays in C.
523The reason is that programmers are often forced to use the more complex array-of-array form when a contiguous array would be simpler, faster, and safer.
524Nevertheless, the C array-of-array form is still important for special circumstances.
525
526\VRef[Figure]{f:ContiguousNon-contiguous} shows a powerful extension made in C99 for manipulating contiguous \vs non-contiguous arrays.\footnote{C90 also supported non-contiguous arrays.}
For contiguous-array arguments (including VLAs), C99 conjoins one or more of the parameters as downstream dimensions, \eg @cols@, implicitly using this parameter to compute the row stride of @m@.
528There is now sufficient information to support array copying and subscript checking along the columns to prevent changing the argument or buffer-overflow problems, but neither feature is provided.
529If the declaration of @fc@ is changed to:
530\begin{cfa}
531void fc( int rows, int cols, int m[@rows@][@cols@] ) ...
532\end{cfa}
533it is possible for C to perform bound checking across all subscripting.
534While this contiguous-array capability is a step forward, it is still the programmer's responsibility to manually manage the number of dimensions and their sizes, both at the function definition and call sites.
535That is, the array does not automatically carry its structure and sizes for use in computing subscripts.
536While the non-contiguous style in @faa@ looks very similar to @fc@, the compiler only understands the unknown-sized array of row pointers, and it relies on the programmer to traverse the columns in a row correctly with a correctly bounded loop index.
537Specifically, there is no requirement that the rows are the same length, like a poem with different length lines.
538
539\begin{figure}
540\begin{tabular}{@{}ll@{}}
541\multicolumn{1}{c}{\textbf{Contiguous}} & \multicolumn{1}{c}{\textbf{ Non-contiguous}} \\
542\begin{cfa}
543void fc( int rows, @int cols@, int m[ /* rows */ ][@cols@] ) {
	for ( size_t r = 0; r < rows; r += 1 )
545 for ( size_t c = 0; c < cols; c += 1 )
546 ... @m[r][c]@ ...
547}
548int m@[5][5]@;
549for ( int r = 0; r < 5; r += 1 ) {
550
551 for ( int c = 0; c < 5; c += 1 )
552 m[r][c] = r + c;
553}
554fc( 5, 5, m );
555\end{cfa}
556&
557\begin{cfa}
558void faa( int rows, int cols, int * m[ @/* cols */@ ] ) {
	for ( size_t r = 0; r < rows; r += 1 )
560 for ( size_t c = 0; c < cols; c += 1 )
561 ... @m[r][c]@ ...
562}
563int @* aa[5]@; // row pointers
564for ( int r = 0; r < 5; r += 1 ) {
565 @aa[r] = malloc( 5 * sizeof(int) );@ // create rows
566 for ( int c = 0; c < 5; c += 1 )
567 aa[r][c] = r + c;
568}
569faa( 5, 5, aa );
570\end{cfa}
571\end{tabular}
572\caption{C99 Contiguous \vs Non-contiguous Matrix Styles}
573\label{f:ContiguousNon-contiguous}
574\end{figure}
575
576
577\subsection{Multi-Dimensional Arrays Decay and Pointers Diffract}
578
As with single-dimension arrays, multi-dimensional arrays have similar issues \see{\VRef{s:Array}}.
580Again, the inspection begins by using @sizeof@ to provide program semantics for the intuition of an expression's type.
581\lstinput{16-18}{bkgd-carray-mdim.c}
There are now three axes for deriving expressions from @mx@: \emph{itself}, \emph{first element}, and \emph{first grand-element} (meaning, first element of first element).
583\lstinput{20-26}{bkgd-carray-mdim.c}
584Given that arrays are contiguous and the size of @float@ is 4, then the size of @mx@ with 3 $\times$ 10 floats is 120 bytes, the size of its first element (row) is 40 bytes, and the size of the first element of the first row is 4.
Again, an array and a pointer to each of its axes are different.
586\lstinput{28-36}{bkgd-carray-mdim.c}
587As well, there is pointer decay from each of the matrix axes to pointers, which all have the same address.
588\lstinput{38-44}{bkgd-carray-mdim.c}
Finally, subscripting can also occur on a @malloc@ result, where the referent may or may not allow subscripting or have the right number of subscripts.
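A sketch of such an allocation, assuming 3 rows of 10 columns, is:
\begin{cfa}
float (* mx)[10] = malloc( 3 * sizeof( *mx ) );	// 3 rows of 10 floats, contiguous
mx[2][9] = 3.14;								// subscripting assumes the referent is such an array
\end{cfa}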
590
591
592\subsection{Array Parameter Declaration}
593
594Passing an array as an argument to a function is necessary.
Assume a parameter @p@ is an array that the function intends to subscript.
This section asserts that a more satisfactory/formal characterization does not exist in C, then surveys the ways that C API authors communicate that @p@ has zero or more dimensions, and finally calls out the minority cases where the C type system is using or verifying such claims.
597
598A C parameter declaration looks different from the caller's and callee's perspectives.
599Both perspectives consist of the text read by a programmer and the semantics enforced by the type system.
600The caller's perspective is available from a function declaration, which allows definition-before-use and separate compilation, but can also be read from (the non-body part of) a function definition.
601The callee's perspective is what is available inside the function.
602\begin{cfa}
603int foo( int, float, char ); $\C{// declaration, parameter names optional}$
604int bar( int i, float f, char c ) { $\C{// definition, parameter names mandatory}$
605 // callee's perspective of foo and bar
606}
607// caller's perspectives of foo and bar
608\end{cfa}
609From the caller's perspective, the parameter names (by virtue of being optional) are (useful) comments;
from the callee's perspective, parameter names are semantically significant.
611Array parameters introduce a further, subtle, semantic difference and considerable freedom to comment.
612
613At the semantic level, there is no such thing as an array parameter, except for one case (@T [static 5]@) discussed shortly.
614Rather, there are only pointer parameters.
This fact probably shares considerable responsibility for the common belief that \emph{an array is just a pointer}, which has been refuted in non-parameter contexts.
616This fact holds in both the caller's and callee's perspectives.
617However, a parameter's type can include ``array of'', \eg the type ``pointer to array of 5 ints'' (@T (*)[5]@) is a pointer type.
618This type is fully meaningful in the sense that its description does not contain any information that the type system ignores, and the type appears the same in the caller's \vs callee's perspectives.
619In fact, the outermost type constructor (syntactically first dimension) is really the one that determines the parameter flavour.
620
621\begin{figure}
622\begin{tabular}{@{}llll@{}}
623\begin{cfa}
624float sum( float a[5] );
625float sum( float m[5][4] );
626float sum( float a[5][] );
627float sum( float a[5]* );
628float sum( float *a[5] );
629\end{cfa}
630&
631\begin{cfa}
632float sum( float a[] );
633float sum( float m[][4] );
634float sum( float a[][] );
635float sum( float a[]* );
636float sum( float *a[] );
637\end{cfa}
638&
639\begin{cfa}
640float sum( float *a );
641float sum( float (*m)[4] );
642float sum( float (*a)[] );
643float sum( float (*a)* );
644float sum( float **a );
645\end{cfa}
646&
647\begin{cfa}
648// array of float
649// matrix of float
650// invalid
651// invalid
652// array of ptr to float
653\end{cfa}
654\end{tabular}
655\caption{Multiple ways to declare an array parameter.
656Across a valid row, every declaration is equivalent.
657Each column gives a declaration style, where the style for that column is read from the first row.
658The second row begins the style for multiple dimensions, with the rows thereafter providing context for the choice of which second-row \lstinline{[]} receives the column-style variation.}
659\label{f:ArParmEquivDecl}
660\end{figure}
661
662Yet, C allows array syntax for the outermost type constructor, from which comes the freedom to comment.
663An array parameter declaration can specify the outermost dimension with a dimension value, @[10]@ (which is ignored), an empty dimension list, @[ ]@, or a pointer, @*@, as seen in \VRef[Figure]{f:ArParmEquivDecl}.
664The rationale for rejecting the first invalid row follows shortly, while the second invalid row is simple nonsense, included to complete the pattern; its syntax hints at what the final row actually achieves.
665Note, in the leftmost style, the typechecker ignores the actual value even in a dynamic expression.
666\begin{cfa}
667int N;
void foo( float @a[N]@ ); // N is ignored
669\end{cfa}
670
671
672% To help contextualize the matrix part of this example, the syntaxes @float [5][]@, @float [][]@ and @float (*)[]@ are all rejected, for reasons discussed shortly.
673% So are @float[5]*@, @float[]*@ and @float (*)*@. These latter ones are simply nonsense, though they hint at ``1d array of pointers'', whose equivalent syntax options are, @float *[5]@, @float *[]@, and @float **@.
674
675It is a matter of taste as to whether a programmer should use a form as far left as possible (getting the most out of possible subscripting and dimension sizes), sticking to the right (avoiding false comfort from suggesting the typechecker is checking more than it is), or compromising in the middle (reducing unchecked information, yet clearly stating, ``I will subscript'').
676
677Note that this equivalence of pointer and array declarations is special to parameters.
678It does not apply to local variables, where true array declarations are possible.
679\begin{cfa}
680void f( float * a ) {
681 float * b = a; // ok
682 float c[] = a; // reject
683 float d[] = { 1.0, 2.0, 3.0 }; // ok
684 static_assert( sizeof(b) == sizeof(float*) );
685 static_assert( sizeof(d) != sizeof(float*) );
686}
687\end{cfa}
688Unfortunately, this equivalence has the consequence that the type system does not help a caller get it right.
689\begin{cfa}
690float sum( float v[] );
691float arg = 3.14;
692sum( &arg ); $\C{// accepted, v = \&arg}$
693\end{cfa}
694
Given the syntactic dimension forms @[ ]@ or @[5]@, the question arises of how to qualify the implied array pointer rather than the array element type.
696For example, the qualifiers after the @*@ apply to the array pointer.
697\begin{cfa}
698void foo( const volatile int * @const volatile@ );
699void foo( const volatile int [ ] @const volatile@ ); // does not parse
700\end{cfa}
701C instead puts these pointer qualifiers syntactically into the first dimension.
702\begin{cquote}
703@[@ \textit{type-qualifier-list}$_{opt}$ \textit{assignment-expression}$_{opt}$ @]@
704\end{cquote}
705\begin{cfa}
706void foo( int [@const volatile@] );
707void foo( int [@const volatile@ 5] ); $\C{// 5 is ignored}$
708\end{cfa}
709To make the first dimension size meaningful, C adds this form.
710\begin{cquote}
711@[@ @static@ \textit{type-qualifier-list}$_{opt}$ \textit{assignment-expression} @]@
712\end{cquote}
713\begin{cfa}
714void foo( int [static @3@] );
715int ar[@10@];
716foo( ar ); // check argument dimension 10 > 3
717\end{cfa}
718Here, the @static@ storage qualifier defines the minimum array size for its argument.
719Earlier versions of @gcc@ ($<$ 11) and possibly @clang@ ignore this dimension qualifier, while later versions implement the check, in accordance with the standard.
720
721
722Note that there are now two different meanings for modifiers in the same position. In
723\begin{cfa}
724void foo( int x[static const volatile 3] );
725\end{cfa}
726the @static@ applies to the 3, while the @const volatile@ applies to the @x@.
727
With multidimensional arrays, a size is required for dimensions after the first, and it is not ignored.
729These sizes are required for the callee to be able to subscript.
730\begin{cfa}
731void f( float a[][10], float b[][100] ) {
	assert( (char *)&a[1] - (char *)&a[0] == 10 * sizeof(float) );
	assert( (char *)&b[1] - (char *)&b[0] == 100 * sizeof(float) );
734}
735\end{cfa}
736Here, the distance between the first and second elements of each array depends on the inner dimension size.
737
738The significance of an inner dimension's length is a fact of the callee's perspective.
739In the caller's perspective, the type system is quite lax.
Here, there is some, but little, checking that what is being passed matches.
741% void f( float [][10] );
742% int n = 100;
743% float a[100], b[n];
744% f(&a); // reject
745% f(&b); // accept
746\begin{cfa}
747void foo() {
748 void f( float [][10] );
749 int n = 100;
750 float a[100], b[3][12], c[n], d[n][n];
751 f( a );
752 f( b ); $\C{// reject: inner dimension 12 for 10}$
753 f( c );
754 f( @d@ ); $\C{// accept with inner dimension n for 10}$
755 f( &a ); $\C{// reject: inner dimension 100 for 10}$
756 f( &b );
757 f( @&c@ ); $\C{// accept with inner dimension n for 10}$
758 f( &d );
759}
760\end{cfa}
761The cases without comments are rejections, but simply because the array ranks do not match; in the commented cases, the ranks match and the rules being discussed apply.
762The cases @f( b )@ and @f( &a )@ show where some length checking occurs.
763But this checking misses the cases @f( d )@ and @f( &c )@, allowing the calls with mismatched lengths, actually 100 for 10.
764The C checking rule avoids false alarms, at the expense of safety, by allowing any combinations that involve dynamic values.
765Ultimately, an inner dimension's size is a callee's \emph{assumption} because the type system uses declaration details in the callee's perspective that it does not enforce in the caller's perspective.
766
Finally, to handle higher-dimensional VLAs, C repurposed the @*@ \emph{within} the dimension in a declaration to mean that the callee makes an assumption about the size, but no (checked, possibly wrong) information about this assumption is included for the caller-programmer's benefit/\-over-confidence.
768\begin{cquote}
769@[@ \textit{type-qualifier-list$_{opt}$} @* ]@
770\end{cquote}
771\begin{cfa}
772void foo( float [][@*@] ); $\C{// declaration}$
773void foo( float a[][10] ) { ... } $\C{// definition}$
774\end{cfa}
775Repeating it with the full context of a VLA is useful:
776\begin{cfa}
777void foo( int, float [][@*@] ); $\C{// declaration}$
778void foo( int n, float a[][n] ) { ... } $\C{// definition}$
779\end{cfa}
780Omitting the dimension from the declaration is consistent with omitting parameter names, for the declaration case has no name @n@ in scope.
The omission also redacts all information not needed to generate correct caller-side code.
782
783
784\subsection{Arrays Could be Values}
785
All arrays have a known runtime size at their point of declaration.
Furthermore, C provides an explicit mechanism to pass an array's dimensions to a function.
Nevertheless, an array cannot be copied, and hence, cannot be passed by value to a function, even when there is sufficient information to do so.
789
790However, if an array is a structure field (compile-time size), it can be copied and passed by value.
791For example, a C @jmp_buf@ is an array.
792\begin{cfa}
793typedef long int jmp_buf[8];
794\end{cfa}
An instance of this array can be declared as a structure field.
796\begin{cfa}
797struct Jmp_Buf {
798 @jmp_buf@ jb;
799};
800\end{cfa}
801Now the array can be copied (and passed by value) because structures can be copied.
802\begin{cfa}
803Jmp_Buf jb1, jb2;
804jb1 = jb2; // copy
805void foo( Jmp_Buf );
806foo( jb2 ); // copy
807\end{cfa}
808
809This same argument applies to returning arrays from functions.
810There can be sufficient information to return an array by value but it is unsupported.
811Again, array wrapping allows an array to be returned from a function and copied into a variable.
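A sketch of the wrapping approach for a return value, with an illustrative wrapper, is:
\begin{cfa}
struct Wrap { float a[10]; };		// array wrapped in a structure
struct Wrap make( void ) {
	struct Wrap w = { { 0 } };
	return w;						// the array is returned by value via the structure
}
struct Wrap x = make();				// and copied into a variable
\end{cfa}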
812
813
814\section{Linked List}
815
816Linked-lists are blocks of storage connected using one or more pointers.
817The storage block (node) is logically divided into data (user payload) and links (list pointers), where the links are the only component used by the list structure.
818Since the data is opaque, list structures are often polymorphic over the data, which is often homogeneous.
819
820The links organize nodes into a particular format, \eg queue, tree, hash table, \etc, with operations specific to that format.
821Because a node's existence is independent of the data structure that organizes it, all nodes are manipulated by address not value;
822hence, all data structure routines take and return pointers to nodes and not the nodes themselves.
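A minimal sketch of such a node and an operation signature, with illustrative names, is:
\begin{cfa}
struct node {
	int data;						// user payload (opaque to the list structure)
	struct node * next, * prev;		// link fields, the only part used by the list structure
};
struct node * insert_after( struct node * cursor, struct node * n );	// operations take and return node pointers
\end{cfa}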
823
824
825\subsection{Design Issues}
826\label{toc:lst:issue}
827
This thesis focuses on a reduced design space for linked lists, one that targets \emph{system programmers}.
829Within this restricted space, all design-issue discussions assume the following invariants;
830alternatives to the assumptions are discussed under Future Work (\VRef{toc:lst:futwork}).
831\begin{itemize}
832 \item A doubly-linked list is being designed.
833 Generally, the discussed issues apply similarly for singly-linked lists.
834 Circular \vs ordered linking is discussed under List identity (\VRef{toc:lst:issue:ident}).
835 \item Link fields are system-managed.
836 The user works with the system-provided API to query and modify list membership.
837 The system has freedom over how to represent these links.
838 \item The user data must provide storage for the list link-fields.
839 Hence, a list node is \emph{statically} defined as data and links \vs a node that is \emph{dynamically} constructed from data and links \see{\VRef{toc:lst:issue:attach}}.
840\end{itemize}
841
842
843\subsection{Preexisting Linked-List Libraries}
844\label{s:PreexistingLinked-ListLibraries}
845
846Two preexisting linked-list libraries are used throughout, to show examples of the concepts being defined,
847and further libraries are introduced as needed.
848\begin{enumerate}
849 \item Linux Queue library~\cite{lst:linuxq} (LQ) of @<sys/queue.h>@.
850 \item \CC Standard Template Library's (STL)\footnote{The term STL is contentious as some people prefer the term standard library.} @std::list@\cite{lst:stl}
851\end{enumerate}
852%A general comparison of libraries' abilities is given under Related Work (\VRef{toc:lst:relwork}).
For the discussion, assume the fictional type @req@ (request) is the user's payload in examples.
As well, the list library helps the user manage (organize) requests, \eg a request can be work at the level of handling a network arrival-event or scheduling a thread.
855
856
857\subsection{Link Attachment: Intrusive \vs Wrapped}
858\label{toc:lst:issue:attach}
859
860Link attachment deals with the question:
861Where are the libraries' inter-node link-fields stored, in relation to the user's payload data fields?
862\VRef[Figure]{fig:lst-issues-attach} shows three basic styles.
863\VRef[Figure]{f:Intrusive} shows the \newterm{intrusive} style, placing the link fields inside the payload structure.
864\VRef[Figures]{f:WrappedRef} and \subref*{f:WrappedValue} show the two \newterm{wrapped} styles, which place the payload inside a generic library-provided structure that then defines the link fields.
865The wrapped style distinguishes between wrapping a reference and wrapping a value, \eg @list<req *>@ or @list<req>@.
866(For this discussion, @list<req &>@ is similar to @list<req *>@.)
867This difference is one of user style, not framework capability.
868Library LQ is intrusive; STL is wrapped with reference and value.
869
870\begin{comment}
871\begin{figure}
872 \begin{tabularx}{\textwidth}{Y|Y|Y}
873 \lstinput[language=C]{20-39}{lst-issues-intrusive.run.c}
874 &\lstinputlisting[language=C++]{20-39}{lst-issues-wrapped-byref.run.cpp}
875 &\lstinputlisting[language=C++]{20-39}{lst-issues-wrapped-emplaced.run.cpp}
876 \\ & &
877 \\
878 \includegraphics[page=1]{lst-issues-attach.pdf}
879 &
880 \includegraphics[page=2]{lst-issues-attach.pdf}
881 &
882 \includegraphics[page=3]{lst-issues-attach.pdf}
883 \\ & &
884 \\
885 (a) & (b) & (c)
886 \end{tabularx}
887\caption{
888 Three styles of link attachment: (a)~intrusive, (b)~wrapped reference, and (c)~wrapped value.
889 The diagrams show the memory layouts that result after the code runs, eliding the head object \lstinline{reqs};
890 head objects are discussed in \VRef{toc:lst:issue:ident}.
891 In (a), the field \lstinline{req.x} names a list direction;
892 these are discussed in \VRef{toc:lst:issue:simultaneity}.
893 In (b) and (c), the type \lstinline{node} represents a system-internal type,
894 which is \lstinline{std::_List_node} in the GNU implementation.
895 (TODO: cite? found in /usr/include/c++/7/bits/stl\_list.h )
896 }
897 \label{fig:lst-issues-attach}
898\end{figure}
899\end{comment}
900
901\begin{figure}
902\centering
903\newsavebox{\myboxA} % used with subfigure
904\newsavebox{\myboxB}
905\newsavebox{\myboxC}
906
907\begin{lrbox}{\myboxA}
908\begin{tabular}{@{}l@{}}
909\lstinput[language=C]{20-35}{lst-issues-intrusive.run.c} \\
910\includegraphics[page=1]{lst-issues-attach.pdf}
911\end{tabular}
912\end{lrbox}
913
914\begin{lrbox}{\myboxB}
915\begin{tabular}{@{}l@{}}
916\lstinput[language=C++]{20-35}{lst-issues-wrapped-byref.run.cpp} \\
917\includegraphics[page=2]{lst-issues-attach.pdf}
918\end{tabular}
919\end{lrbox}
920
921\begin{lrbox}{\myboxC}
922\begin{tabular}{@{}l@{}}
923\lstinput[language=C++]{20-35}{lst-issues-wrapped-emplaced.run.cpp} \\
924\includegraphics[page=3]{lst-issues-attach.pdf}
925\end{tabular}
926\end{lrbox}
927
928\subfloat[Intrusive]{\label{f:Intrusive}\usebox\myboxA}
929\hspace{6pt}
930\vrule
931\hspace{6pt}
932\subfloat[Wrapped reference]{\label{f:WrappedRef}\usebox\myboxB}
933\hspace{6pt}
934\vrule
935\hspace{6pt}
936\subfloat[Wrapped value]{\label{f:WrappedValue}\usebox\myboxC}
937
938\caption{
939 Three styles of link attachment:
940 % \protect\subref*{f:Intrusive}~intrusive, \protect\subref*{f:WrappedRef}~wrapped reference, and \protect\subref*{f:WrappedValue}~wrapped value.
941 The diagrams show the memory layouts that result after the code runs, eliding the head object \lstinline{reqs};
942 head objects are discussed in \VRef{toc:lst:issue:ident}.
943 In \protect\subref*{f:Intrusive}, the field \lstinline{req.d} names a list direction;
944 these are discussed in \VRef{toc:lst:issue:simultaneity}.
945 In \protect\subref*{f:WrappedRef} and \protect\subref*{f:WrappedValue}, the type \lstinline{node} represents a
946 library-internal type, which is \lstinline{std::_List_node} in the GNU implementation
947 \see{\lstinline{/usr/include/c++/X/bits/stl_list.h}, where \lstinline{X} is the \lstinline{g++} version number}.
948 }
949 \label{fig:lst-issues-attach}
950\end{figure}
951
Each diagrammed example uses the fewest dynamic allocations for its respective style:
in intrusive, there is no dynamic allocation; in wrapped reference, only the link fields are dynamically allocated; and in wrapped value, the copied data and link fields are dynamically allocated.
954The advantage of intrusive is the control in memory layout and storage placement.
955Both wrapped styles have independent storage layout and imply library-induced heap allocations, with lifetime that matches the item's membership in the list.
956In all three cases, a @req@ object can enter and leave a list many times.
957However, in intrusive a @req@ can only be on one list at a time, unless there are separate link-fields for each simultaneous list.
958In wrapped reference, a @req@ can appear multiple times on the same or different lists, but since @req@ is shared via the pointer, care must be taken if updating data also occurs simultaneously, \eg concurrency.
959In wrapped value, the @req@ is copied, which increases storage usage, but allows independent simultaneous changes;
however, knowing which of the @req@ objects is the \emph{true} object becomes complex.
961\see*{\VRef{toc:lst:issue:simultaneity} for further discussion.}
962
963The implementation of @LIST_ENTRY@ uses a trick to find the links and the node containing the links.
964The macro @LIST_INSERT_HEAD( &reqs, &r2, d )@ takes the list header, a pointer to the node, and the offset of the link fields in the node.
965One of the fields generated by @LIST_ENTRY@ is a pointer to the node, which is set to the node address, \eg @r2@.
966Hence, the offset to the link fields provides an access to the entire node, \ie the node points at itself.
967For list traversal, @LIST_FOREACH( cur, &reqs_pri, by_pri )@, there is the node cursor, the list, and the offset of the link fields within the node.
968The traversal actually moves from link fields to link fields within a node and sets the node cursor from the pointer within the link fields back to the node.
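A minimal usage sketch of these LQ macros, with illustrative declarations matching the names above, is:
\begin{cfa}
#include <sys/queue.h>
struct req {
	int pri;
	LIST_ENTRY(req) by_pri;					// intrusive link fields, named by_pri
};
LIST_HEAD(reqlist, req) reqs_pri = LIST_HEAD_INITIALIZER( reqs_pri );	// list header
struct req r1 = { 1 }, r2 = { 2 }, * cur;
LIST_INSERT_HEAD( &reqs_pri, &r2, by_pri );	// header, node address, link-field name
LIST_INSERT_HEAD( &reqs_pri, &r1, by_pri );
LIST_FOREACH( cur, &reqs_pri, by_pri )		// cursor, header, link-field name
	... cur->pri ...;
\end{cfa}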
969
970A further aspect of layout control is allowing the user to explicitly specify link fields controlling placement and attributes within the @req@ object.
LQ allows this ability through the @LIST_ENTRY@ macro\footnote{It is possible to have multiple named link fields, allowing a node to appear on multiple lists simultaneously.}, which can be placed anywhere in the node.
972An example of an attribute on the link fields is cache alignment, possibly in conjunction with other @req@ fields, improving locality and/or avoiding false sharing.
973Supplying the link fields by inheritance makes them implicit and relies on compiler placement, such as the start or end of @req@, and no explicit attributes.
974Wrapped reference has no control over the link fields, but the separate data allows some control;
975wrapped value has no control over data or links.
976
977Another subtle advantage of intrusive arrangement is that a reference to a user-level item (@req@) is sufficient to navigate or manage the item's membership.
978In LQ, the intrusive @req@ pointer is the right argument type for operations @LIST_NEXT@ or @LIST_REMOVE@;
979there is no distinguishing a @req@ from a @req@ in a list.
980The same is not true of STL, wrapped reference or value.
981There, the analogous operations, @iterator::operator++()@, @iterator::operator*()@, and @list::erase(iterator)@, work on a parameter of type @list<T>::iterator@;
982there is no mapping from @req &@ to @list<req>::iterator@. %, for linear search.
983
984The advantage of wrapped is the abstraction of a data item from its list membership(s).
985In the wrapped style, the @req@ type can come from a library that serves many independent uses,
986which generally have no need for listing.
987Then, a novel use can put a @req@ in a list, without requiring any upstream change in the @req@ library.
988In intrusive, the ability to be listed must be planned during the definition of @req@.
989
990\begin{figure}
991 \lstinput[language=C++]{100-117}{lst-issues-attach-reduction.hpp}
992 \lstinput[language=C++]{150-150}{lst-issues-attach-reduction.hpp}
993 \caption{
994 Simulation of wrapped using intrusive.
995 Illustrated by pseudocode implementation of an STL-compatible API fragment using LQ as the underlying implementation.
996 The gap that makes it pseudocode is that
997 the LQ C macros do not expand to valid \CC when instantiated with template parameters---there is no \lstinline{struct El}.
998 When using a custom-patched version of LQ to work around this issue,
999 the programs of \VRef[Figure]{f:WrappedRef} and wrapped value work with this shim in place of real STL.
1000 Their executions lead to the same memory layouts.
1001 }
1002 \label{fig:lst-issues-attach-reduction}
1003\end{figure}
1004
1005It is possible to simulate wrapped using intrusive, illustrated in \VRef[Figure]{fig:lst-issues-attach-reduction}.
1006This shim layer performs the implicit dynamic allocations that pure intrusion avoids.
1007But there is no reduction going the other way.
1008No shimming can cancel the allocations to which wrapped membership commits.
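A minimal sketch of the reduction (not the figure's actual shim, and with illustrative names): a separately allocated carrier node holds the intrusive links and a pointer to the user's @req@, so the allocation happens inside the list operation.
\begin{cfa}
#include <sys/queue.h>
#include <stdlib.h>

struct req;									// user type, defined elsewhere
struct req_ref {							// carrier: one allocation per membership
	struct req * item;						// reference to the user's data
	LIST_ENTRY(req_ref) link;				// intrusive links live in the carrier
};
LIST_HEAD(req_ref_list, req_ref);

int push_front( struct req_ref_list * l, struct req * r ) {
	struct req_ref * n = malloc( sizeof( struct req_ref ) );	// implicit allocation
	if ( ! n ) return -1;
	n->item = r;
	LIST_INSERT_HEAD( l, n, link );
	return 0;
}
\end{cfa}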
1009
1010Because intrusion is a lower-level listing primitive, the system design choice is not between forcing users to use intrusion or wrapping.
1011The choice is whether or not to provide access to an allocation-free layer of functionality.
1012An intrusive-primitive library like LQ lets users choose when to make this tradeoff.
1013A wrapped-primitive library like STL forces users to incur the costs of wrapping, whether or not they access its benefits.
\CFA is capable of supporting a wrapped library, if the need arose.
1015
1016
1017\subsection{Simultaneity: Single \vs Multi-Static \vs Dynamic}
1018\label{toc:lst:issue:simultaneity}
1019
1020\begin{figure}
1021 \parbox[t]{3.5in} {
1022 \lstinput[language=C++]{20-60}{lst-issues-multi-static.run.c}
1023 }\parbox[t]{20in} {
1024 ~\\
1025 \includegraphics[page=1]{lst-issues-direct.pdf} \\
1026 ~\\
1027 \hspace*{1.5in}\includegraphics[page=2]{lst-issues-direct.pdf}
1028 }
1029 \caption{
1030 Example of simultaneity using LQ lists.
1031 The zoomed-out diagram (right/top) shows the complete multi-linked data structure.
1032 This structure can navigate all requests in priority order ({\color{blue}blue}), and navigate among requests with a common request value ({\color{orange}orange}).
1033 The zoomed-in diagram (right/bottom) shows how the link fields connect the nodes on different lists.
1034 }
1035 \label{fig:lst-issues-multi-static}
1036\end{figure}
1037
1038\newterm{Simultaneity} deals with the question:
1039In how many different lists can a node be stored, at the same time?
1040\VRef[Figure]{fig:lst-issues-multi-static} shows an example that can traverse all requests in priority order (field @pri@) or navigate among requests with the same request value (field @rqr@).
1041Each of ``by priority'' and ``by common request value'' is a separate list.
For example, there is a single priority list linked in order [1, 2, 2, 3, 3, 4], where nodes may have the same priority, and there are three common request-value lists combining requests with the same values: [42, 42], [17, 17, 17], and [99], giving four head nodes, one for each list.
1043The example shows a list can encompass all the nodes (by-priority) or only a subset of the nodes (three request-value lists).
1044
As stated, the limitation of the intrusive arrangement is knowing \emph{a priori} how many groups of links are needed for the maximum number of simultaneous lists.
Thus, the intrusive LQ example supports multiple, but statically many, linked lists.
Note, it is possible to reuse links for different purposes, \eg if a list is linked one way at one time and another way at another time, and these times do not overlap, the two different linkings can use the same link fields.
1048This feature is used in the \CFA runtime, where a thread node may be on a blocked or running list, but never on both simultaneously.
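The statically fixed nature of intrusive simultaneity is visible in the declarations: each possible membership needs its own named set of link fields. The following declarations are illustrative, in the spirit of the figure.
\begin{cfa}
struct req {
	int pri, rqr;							// data
	LIST_ENTRY(req) by_pri;					// links for the single priority list
	LIST_ENTRY(req) by_rqr;					// links for one common request-value list
};
LIST_HEAD(pri_list, req) reqs_pri;			// head for the priority order
LIST_HEAD(rqr_list, req) reqs_rqr_42;		// one head per request value, e.g., 42
\end{cfa}
Adding a third simultaneous membership requires editing @req@ to add a third @LIST_ENTRY@, which is exactly the \emph{a priori} planning referred to above.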
1049
1050Now consider the STL in the wrapped-reference arrangement of \VRef[Figure]{f:WrappedRef}.
1051Here it is possible to construct the same simultaneity by creating multiple STL lists, each pointing at the appropriate nodes.
Each group of intrusive links becomes the links for a separate STL list.
1053The upside is the unlimited number of lists a node can be associated with simultaneously, as any number of STL lists can be created dynamically.
1054The downside is the dynamic allocation of the link nodes and managing multiple lists.
1055Note, it might be possible to wrap the multiple lists in another type to hide this implementation issue.
1056
1057Now consider the STL in the wrapped-value arrangement of \VRef[Figure]{f:WrappedValue}.
1058Again, it is possible to construct the same simultaneity by creating multiple STL lists, each copying the appropriate nodes, where the intrusive links become the links for each separate STL list.
1059The upside is the same as for wrapped-reference arrangement with an unlimited number of list bindings.
1060The downside is the dynamic allocation, significant storage usage, and cost of copying nodes.
As well, it is unclear how node updates work in this scenario, without some notion of ultimately merging node information.
1062
1063% https://www.geeksforgeeks.org/introduction-to-multi-linked-list/ -- example of building a bespoke multi-linked list out of STL primitives (providing indication that STL doesn't offer one); offers dynamic directionality by embedding `vector<struct node*> pointers;`
1064
1065% When allowing multiple static directions, frameworks differ in their ergonomics for
1066% the typical case: when the user needs only one direction, vs.\ the atypical case, when the user needs several.
1067% LQ's ergonomics are well-suited to the uncommon case of multiple list directions.
1068% Its intrusion declaration and insertion operation both use a mandatory explicit parameter naming the direction.
1069% This decision works well in \VRef[Figure]{fig:lst-issues-multi-static}, where the names @by_pri@ and @by_rqr@ work well,
1070% but it clutters \VRef[Figure]{f:Intrusive}, where a contrived name must be invented and used.
1071% The example uses @x@; @reqs@ would be a more readily ignored choice. \PAB{wording?}
1072
1073An alternative system providing intrusive data-structures is \uCpp, a concurrent extension of \CC.
1074It provides a basic set of intrusive lists~\cite[appx.~F]{uC++}, where the link fields are defined with the data fields using inheritance.
1075\begin{cquote}
1076\setlength{\tabcolsep}{15pt}
1077\begin{tabular}{@{}ll@{}}
1078\multicolumn{1}{c}{singly-linked list} & \multicolumn{1}{c}{doubly-linked list} \\
1079\begin{c++}
1080struct Node : public uColable {
1081 int i; // data
1082 Node( int i ) : i{ i } {}
1083};
1084\end{c++}
1085&
1086\begin{c++}
1087struct Node : public uSeqable {
1088 int i; // data
1089 Node( int i ) : i{ i } {}
1090};
1091\end{c++}
1092\end{tabular}
1093\end{cquote}
1094A node can be placed in the following data structures depending on its link fields: @uStack@ and @uQueue@ (singly linked), and @uSequence@ (doubly linked).
1095A node inheriting from @uSeqable@ can appear in a singly or doubly linked structure.
1096Structure operations implicitly know the link-field location through the inheritance.
1097\begin{c++}
1098uStack<Node> stack;
1099Node node;
1100stack.push( node ); // link fields at beginning of node
1101\end{c++}
1102
Simultaneity cannot be achieved with multiple inheritance, because there is no mechanism to know the order of the inherited link fields or to name each inheritance.
Instead, a special type is required that contains the link fields and points at the node.
1105\begin{cquote}
1106\setlength{\tabcolsep}{10pt}
1107\begin{tabular}{@{}ll@{}}
1108\begin{c++}
1109struct NodeDL : public uSeqable {
1110 @Node & node;@ // node pointer
1111 NodeDL( Node & node ) : node( node ) {}
1112 Node & get() const { return node; }
1113};
1114\end{c++}
1115&
1116\begin{c++}
1117struct Node : public uColable {
1118 int i; // data
1119 @NodeDL nodeseq;@ // embedded intrusive links
	Node( int i ) : i{ i }, @nodeseq{ *this }@ {}
1121};
1122\end{c++}
1123\end{tabular}
1124\end{cquote}
1125This node can now be inserted into a doubly-linked list through the embedded intrusive links.
1126\begin{c++}
1127uSequence<NodeDL> sequence;
1128sequence.add_front( @node.nodeseq@ ); $\C{// link fields in embedded type}$
1129NodeDL nodedl = sequence.remove( @node.nodeseq@ );
1130int i = nodedl.@get()@.i; $\C{// indirection to node}$
1131\end{c++}
1132Hence, the \uCpp approach optimizes one set of intrusive links through the \CC inheritance mechanism, and falls back onto the LQ approach of explicit declarations for additional intrusive links.
1133However, \uCpp cannot apply the LQ trick for finding the links and node.
1134
1135The major ergonomic difference among the approaches is naming and name usage.
1136The intrusive model requires naming each set of intrusive links, \eg @by_pri@ and @by_rqr@ in \VRef[Figure]{fig:lst-issues-multi-static}.
1137\uCpp cheats by using inheritance for the first intrusive links, after which explicit naming of intrusive links is required.
Furthermore, these link names must be used in all list operations, including iteration, whereas wrapped reference and value hide the link names in the implicit dynamically-allocated node structure.
At issue is whether an API for simultaneity can support one \emph{implicit} list, when several are not wanted.
\uCpp allows it, LQ does not, and the STL does not face this question.
1141
1142
1143\subsection{User Integration: Preprocessed \vs Type-System Mediated}
1144
While the syntax for LQ is reasonably succinct, it comes at the cost of using C preprocessor macros for generics, which, unlike \CC templates, are not part of the language's type system.
Hence, small errors in macro arguments can lead to large substitution mistakes, as the arguments may be textually substituted in many places and/or concatenated with other arguments/text to create new names and expressions.
This can lead to a cascade of error messages that are confusing and difficult to debug.
For example, argument errors like @a.b,c@, a comma instead of a period, or @by-pri@, a minus instead of an underscore, can produce many error messages.

In contrast, language-level function calls (even with inlining) handle argument mistakes locally at the call, giving very specific error messages.
\CC @concepts@ were introduced for @templates@ to deal with just this problem.
1152
1153% example of poor error message due to LQ's preprocessed integration
1154% programs/lst-issues-multi-static.run.c:46:1: error: expected identifier or '(' before 'do'
1155% 46 | LIST_INSERT_HEAD(&reqs_rtr_42, &r42b, by_rqr);
1156% | ^~~~~~~~~~~~~~~~
1157%
1158% ... not a wonderful example; it was a missing semicolon on the preceding line; but at least real
1159
1160
1161\subsection{List Identity: Headed \vs Ad-Hoc}
1162\label{toc:lst:issue:ident}
1163
All examples so far use two distinct types:
one for an item found in a list (type @req@ of variable @r1@, see \VRef[Figure]{fig:lst-issues-attach}), and one for the list (type @reql@ of variable @reqs_pri@, see \VRef[Figure]{fig:lst-issues-ident}).
This kind of list is ``headed'', where the empty list is just a head.
An alternate ``ad-hoc'' approach omits the header, where the empty list is no nodes at all.
Here, traversal starts from a pointer to any node and follows its link fields: right, left, or around, depending on the data structure.
Note, a headed list is a superset of an ad-hoc list, and can normally perform all of the ad-hoc operations.
1170\VRef[Figure]{fig:lst-issues-ident} shows both approaches for different list lengths and unlisted elements.
1171For headed, there are length-zero lists (heads with no elements), and an element can be listed or not listed.
1172For ad-hoc, there are no length-zero lists and every element belongs to a list of length at least one.
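The distinction is visible in the declarations.
A headed LQ list names a header object, while an ad-hoc (circular) list needs only the nodes; the following ad-hoc sketch is illustrative and not part of LQ.
\begin{cfa}
// headed: identity is the header object, which may denote an empty list
struct req { int pri; LIST_ENTRY(req) by_pri; };
LIST_HEAD(reql, req) reqs_pri;

// ad-hoc: a circular doubly-linked ring where any node stands for the list
struct ring { struct ring * next, * prev; };
void ring_init( struct ring * n ) { n->next = n->prev = n; }	// smallest list: one node
void ring_insert_after( struct ring * at, struct ring * n ) {
	n->next = at->next;  n->prev = at;
	at->next->prev = n;  at->next = n;
}
\end{cfa}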
1173
1174\begin{figure}
1175 \centering
1176 \includegraphics{lst-issues-ident.pdf}
1177 \caption{
1178 Comparison of headed and ad-hoc list identities, for various list lengths.
1179 Pointers are logical, meaning that no claim is intended about which part of an object is being targeted.
1180 }
1181 \label{fig:lst-issues-ident}
1182\end{figure}
1183
The purpose of a header is to provide specialized but implicit node access, such as the first/last nodes in the list, where accessing these nodes is a commonly occurring operation that should be $O(1)$.
For example, without a last pointer in a singly-linked list, adding to the end of the list is an $O(N)$ operation, requiring a traversal of the list to find the last node.
Without the header, this specialized information must be managed explicitly, where the programmer builds their own external, equivalent header information.
However, external management of particular nodes might not be beneficial because the list does not provide operations that can take advantage of them, such as using an external pointer to update an internal link.
Clearly, there is a cost to maintaining this specialized information, which needs to be amortized across the list operations that use it, \eg rarely adding to the end of a list.
1189A runtime component of this cost is evident in LQ's offering the choice of type generators @LIST@ \vs @TAILQ@.
1190Its @LIST@ maintains a \emph{first}, but not a \emph{last};
1191its @TAILQ@ maintains both roles.
(Both types are doubly linked; LQ offers the analogous choice of @SLIST@ \vs @STAILQ@ for singly-linked lists.)
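For illustration, the runtime difference appears when appending: with @TAILQ@, the header also tracks the last element, so appending needs no traversal (sketch with illustrative names).
\begin{cfa}
#include <sys/queue.h>

struct req { int pri; TAILQ_ENTRY(req) by_pri; };
TAILQ_HEAD(reql, req);						// header records both first and last

int main() {
	struct reql reqs;
	TAILQ_INIT( &reqs );
	struct req r = { 3 };
	TAILQ_INSERT_TAIL( &reqs, &r, by_pri );	// O(1): no O(N) walk to find the end
}
\end{cfa}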
1193
1194
1195\subsection{End Treatment: Cased \vs Uniform }
1196
1197All lists must have a logical \emph{beginning/ending}, otherwise list traversal is infinite.
1198\emph{End treatment} refers to how the list represents the lack of a predecessor/successor to demarcate end point(s).
1199For example, in a doubly-linked list containing a single node, the next/prev links have no successor/predecessor nodes.
Note, a list does not need to use the links to denote its end points;
it can use a node counter in the header, where $N$ node traversals indicate complete navigation of the list.
However, maintaining the node count is an additional cost, as the links must still be managed regardless.
1203
1204The following discussion refers to the LQ representations, detailed in \VRef[Figure]{fig:lst-issues-end}, using a null pointer to mark end points.
1205LQ uses this representation for its successor/last.
1206For example, consider the operation of inserting after a given element.
1207A doubly-linked list must update the given node's successor, to make its predecessor-pointer refer to the new node.
1208This step must happen when the given node has a successor (when its successor pointer is non-null),
1209and must be skipped when it does not (when its successor pointer cannot be navigated).
1210So this operation contains a branch, to decide which case is happening.
1211All branches have pathological cases where branch prediction's success rate is low and the execution pipeline stalls often.
1212Hence, this issue is relevant to achieving high performance.
1213
1214\begin{figure}
1215 \centering
1216 \includegraphics{lst-issues-end.pdf}
1217 \caption{
1218 LQ sub-object-level representation of links and ends.
1219 Each object's memory is pictured as a vertical strip.
1220 Pointers' target locations, within these strips, are significant.
1221 Uniform treatment of the first-end is evident from an assertion like \lstinline{(**this.pred == this)} holding for all nodes \lstinline{this}, including the first one.
1222 Cased treatment of the last-end is evident from the symmetric proposition, \lstinline{(this.succ.pred == &this.succ)}, failing when \lstinline{this} is the last node.
1223 }
1224 \label{fig:lst-issues-end}
1225\end{figure}
1226
1227Interestingly, this branch is sometimes avoidable, giving a uniform end-treatment in the code.
1228For example, LQ is headed at the front.
1229For predecessor/first navigation, the relevant operation is inserting before a given element.
1230LQ's predecessor representation is not a pointer to a node, but a pointer to a pseudo-successor pointer.
1231When there is a predecessor node, that node contains a real-successor pointer; it is the target of the reference node's predecessor pointer.
When there is no predecessor node, the reference node (now known to be the first node) acts as the pseudo-successor of the list head.
1233Now, the list head contains a pointer to the first node.
1234When inserting before the first node, the list head's first-pointer is the one that must change.
1235So, the first node's \emph{predecessor} pointer (to a pseudo-successor pointer) is set as the list head's first-pointer.
1236Now, inserting before a given element does the same logic in both cases:
1237follow the guaranteed-non-null predecessor pointer, and update that location to refer to the new node.
1238Applying this trick makes it possible to have list management routines that are completely free of conditional control-flow.
1239Considering a path length of only a few instructions (less than the processor's pipeline length),
1240such list management operations are often practically free,
1241with all the variance being due to the (inevitable) cache status of the nodes being managed.
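The following sketch captures the idea with illustrative field names; it is in the spirit of LQ's representation, not its exact macros.
A node's @prev@ stores the address of whichever pointer currently targets the node, either a predecessor's @next@ or the head's @first@, so insert-before is branch-free.
\begin{cfa}
struct node {
	struct node * next;						// successor, or NULL at the last node
	struct node ** prev;					// address of the pointer that targets this node
};
struct head { struct node * first; };

void insert_before( struct node * at, struct node * n ) {
	n->prev = at->prev;						// inherit whatever targeted "at"
	n->next = at;
	*at->prev = n;							// predecessor's next or head's first, uniformly
	at->prev = &n->next;					// "at" is now targeted by n's next
}
\end{cfa}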
1242
1243
1244\section{String}
1245\label{s:String}
1246
A string is a sequence of symbols, where the form of a symbol can vary significantly: 7/8-bit characters (ASCII/Latin-1), 2/4-byte characters (Unicode UCS-2/UCS-4), or variable-length characters (UTF-8/16/32).
A string can be read left-to-right, right-to-left, top-to-bottom, and have stacked elements (Arabic).
1249
A C character constant is an ASCII/Latin-1 character enclosed in single quotes, \eg @'x'@, @'@\textsterling@'@.
A wide C character constant is the same, except prefixed by the letter @L@, @u@, or @U@, \eg @u'\u25A0'@ (black square), where @\u@ identifies a universal character name.
A character can also be formed from an escape sequence, which expresses a non-typable character, \eg @'\f'@ (form feed), a delimiter character, \eg @'\''@ (embedded single quote), or a raw character, \eg @'\xa3'@ (\textsterling).
1253
1254A C character string is zero or more regular, wide, or escape characters enclosed in double-quotes @"xyz\n"@.
1255The kind of characters in the string is denoted by a prefix: UTF-8 characters are prefixed by @u8@, wide characters are prefixed by @L@, @u@, or @U@.
1256
1257For UTF-8 string literals, the array elements have type @char@ and are initialized with the characters of the multi-byte character sequences, \eg @u8"\xe1\x90\x87"@ (Canadian syllabics Y-Cree OO).
For wide string literals prefixed by the letter @L@, the array elements have type @wchar_t@ and are initialized with the wide characters corresponding to the multi-byte character sequence, \eg @L"abc@$\mu$@"@, and are read/printed using @wscanf@/@wprintf@.
The value of a wide character is implementation-defined, \eg UTF-16 on Windows or UTF-32 on Linux.
For wide string literals prefixed by the letter @u@ or @U@, the array elements have type @char16_t@ or @char32_t@, respectively, and are initialized with the wide characters corresponding to the multi-byte character sequence, \eg @u"abc@$\mu$@"@, @U"abc@$\mu$@"@.
The value of a @u@ string element is a UTF-16 code unit;
the value of a @U@ string element is a UTF-32 character.
1263The value of a string literal containing a multi-byte character or escape sequence not represented in the execution character set is implementation-defined.
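For illustration, the following sketch declares each kind of literal and checks the element counts; it is a C11 program, so the UTF-8 elements are plain @char@ (@char8_t@ only arrives in C23).
\begin{cfa}
#include <uchar.h>							// char16_t, char32_t
#include <wchar.h>
#include <stdio.h>

int main() {
	char u8s[] = u8"\xe1\x90\x87";			// UTF-8 bytes, elements of type char
	wchar_t ws[] = L"abc\u00b5";			// wide characters, implementation-defined encoding
	char16_t s16[] = u"abc\u00b5";			// UTF-16 code units
	char32_t s32[] = U"abc\u00b5";			// UTF-32 code points
	printf( "%zu %zu %zu %zu\n",			// element counts include the terminator
		sizeof( u8s ), sizeof( ws ) / sizeof( wchar_t ),
		sizeof( s16 ) / sizeof( char16_t ), sizeof( s32 ) / sizeof( char32_t ) );
}
\end{cfa}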
1264
1265C strings are null-terminated rather than maintaining a separate string length.
1266\begin{quote}
1267Technically, a string is an array whose elements are single characters.
1268The compiler automatically places the null character @\0@ at the end of each such string, so programs can conveniently find the end.
1269This representation means that there is no real limit to how long a string can be, but programs have to scan one completely to determine its length.~\cite[p.~36]{C:old}
1270\end{quote}
This property is only preserved by the compiler with respect to string constants, \eg @"abc"@ is actually @"abc\0"@, \ie 4 characters rather than 3.
Otherwise, the compiler does not participate, making string operations both unsafe and inefficient.
For example, it is common in C to forget that a string constant is larger than it appears during manipulation, that extra storage is needed in a character array for the terminator, or that the terminator must be preserved during string operations, otherwise there are array overruns.
Finally, the need to repeatedly scan an entire string to determine its length can result in significant cost, as it is impossible to cache the length in many cases, \eg when a string is passed into another function.
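The following short example illustrates these points; the buffer is sized correctly here, and shrinking it by one byte is exactly the common overrun error.
\begin{cfa}
#include <string.h>
#include <stdio.h>

int main() {
	printf( "%zu %zu\n", sizeof( "abc" ), strlen( "abc" ) );	// 4 3: literal includes the terminator
	char buf[4];							// 3 characters + terminator; buf[3] would overrun
	strcpy( buf, "abc" );
	for ( size_t i = 0; i < strlen( buf ); i += 1 ) {}	// strlen rescans buf on every iteration
}
\end{cfa}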
1275
1276C strings are fixed size because arrays are used for the implementation.
However, string manipulation, \eg with @strcpy@, @strcat@, @strcmp@, @strlen@, @strstr@, \etc, commonly results in dynamically-sized temporary and final string values.
1278As a result, storage management for C strings is a nightmare, quickly resulting in array overruns and incorrect results.
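A representative sketch of this storage-management burden: even a simple concatenation forces the caller to measure, allocate, and later free the result (an illustrative helper, not a standard routine).
\begin{cfa}
#include <stdlib.h>
#include <string.h>

char * concat( const char * s1, const char * s2 ) {
	char * r = malloc( strlen( s1 ) + strlen( s2 ) + 1 );	// + 1 for the terminator
	if ( ! r ) return NULL;
	strcpy( r, s1 );
	strcat( r, s2 );						// rescans r to find its end
	return r;								// caller owns the storage and must free it
}
\end{cfa}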
1279
Collectively, these design decisions make working with strings in C awkward, time consuming, and unsafe.
While there are companion string routines that take a maximum string length to prevent array overruns, \eg @strncpy@ and @strncat@, the semantics of an operation can still fail because the result is silently truncated.
Suffice it to say, C is not a go-to language for string applications, which is why \CC introduced the dynamically-sized @string@ type.