% source: doc/theses/mike_brooks_MMath/array.tex @ 9a35b43
% Last changed by Michael Brooks: add more array future work.
\chapter{Array}
\label{c:Array}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language \see{\VRef{s:Array}}, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array type, @array@, to the \CFA standard library~\cite{Cforall}.

Offering the @array@ type as a distinct alternative to the C array is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and present a cohesive selection of features, without having to deal with every inherited complexity of the C array.
Future work is to sugar the @array@ syntax to look like C arrays \see{\VRef{f:FutureWork}}.


\section{Introduction}
\label{s:ArrayIntro}

The new \CFA array is declared by instantiating the generic @array@ type, much like instantiating any other standard-library generic type (such as \CC @vector@), using a new style of generic parameter.
\begin{cfa}
@array( float, 99 )@ x;					$\C[2.5in]{// x contains 99 floats}$
\end{cfa}
Here, the arguments to the @array@ type are @float@ (element type) and @99@ (dimension).
When this type is used as a function parameter, the type system requires the argument to be a perfect match.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {}	$\C{// p accepts 42 floats}$
f( x );									$\C{// statically rejected: type lengths are different, 99 != 42}$

test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
Function @f@'s parameter expects an array with dimension 42, but the argument dimension 99 does not match.

A function can be polymorphic over @array@ arguments using the \CFA @forall@ declaration prefix.
\begin{minipage}{\linewidth} % forbid page splitting, force
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
	T elem = p[i];						$\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 );								$\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 );							$\C{// T is float, N is 99, dynamic subscript check fails}$

Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
\end{minipage}
Function @g@ takes an arbitrary type parameter @T@ and an unsigned integer \emph{dimension} @N@.
The dimension represents a to-be-determined number of elements, managed by the type system, where 0 represents an empty array.
For the call @g( x, 0 )@, the type system implicitly infers @float@ for @T@ and @99@ for @N@.
Furthermore, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ of argument @x@.
The call @g( x, 1000 )@ is also accepted at compile time.
However, the subscript @x[1000]@ generates a runtime error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
In general, the @forall( ..., [N] )@ clause declares the name @N@, which is then usable in parameter/return declarations and within the function body.
The syntactic form is chosen to parallel the other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ...						$\C{// dimension}$
forall( T ) ...							$\C{// value datatype}$
forall( T & ) ...						$\C{// opaque datatype}$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.

The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller has an argument that instantiates @N@ with 42.
\begin{cfa}
forall( [N] )
void f( ... ) {
	float x1[@N@];						$\C{// C array, no subscript checking}$
	array(float, N) x2;					$\C{// \CFA array, subscript checking}\CRT$
}
\end{cfa}
Both of the variables, @x1@ and @x2@, name 42 contiguous, stack-allocated @float@ values.
The two variables have identical size and layout, with no additional ``bookkeeping'' allocations or headers.
The C array, @x1@, has no subscript checking, while the \CFA array, @x2@, does.
This explicit, generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, space- and time-efficient array type, which forms the foundation for more complex array forms in \CFA.
In all following discussion, ``C array'' means types like @x1@ and ``\CFA array'' means types like @x2@.

A future goal is to provide the new @array@ features with syntax approaching C's (the declaration style of @x1@).
Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
At present, new-array features are only partially supported through C-style syntax, so the generic @array@ is used exclusively in this thesis.

My contributions in this chapter are:
\begin{enumerate}[leftmargin=*]
\item Enhance the type system to let polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
\item Provide a dimension/subscript-checked array type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Provide argument/parameter-passing safety for arrays and subscript safety.
\item Identify the interesting abilities made available by the new @array@ type.
\end{enumerate}


\begin{comment}
\section{Dependent Typing}

General dependent typing allows a type system to encode arbitrary predicates, \eg behavioural specifications for functions, which is an anti-goal for my work.
Firstly, this application is strongly associated with pure functional languages,
where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
is a sufficient postcondition.
In an imperative language like C and \CFA, it is also necessary to discuss side effects, for which an even heavier formalism, like separation logic, is required.
Secondly: ... bash Rust.
... cite the crap out of these claims.
\end{comment}


\section{Features Added}

This section demonstrates the syntax and semantics of the \CFA array and dimension parameters by way of motivating examples.
As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.

By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return type, meaning the input arrays and the returned array have the same length.
\lstinput{10-17}{hello-array.cfa}
Function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array uses the library @alloc@ function,
\begin{cfa}
forall( T & | sized(T) )
T * alloc() {
	return @(T *)@malloc( @sizeof(T)@ );  // C malloc
}
\end{cfa}
which captures the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note, @alloc@ only sees the whole type for its @T@, here @array(bool, N)@, where this type's size is a computation based on @N@.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by satisfying the \emph{sized} assertions needed by other types.
(\emph{sized} implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, significant programming safety is gained by making the size implicit and accessible, compared with C's @calloc@ and non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.

\begin{figure}
\begin{cfa}
forall( [N] ) array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{Example of calling a dimension-generic function}
\label{f:fExample}
\end{figure}

\VRef[Figure]{f:fExample} shows an example using function @f@, illustrating how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by the constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@; @N@ adapting in the same way at @f@'s loop bound; and a pass-through use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie the explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
The result is a significant improvement in safety and usability.
In summary:
\begin{itemize}[leftmargin=*]
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
@N@ can be used as an expression of type @size_t@ within the function body.
\item
The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
\begin{comment} % was not discussed yet
\item
@array( T, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ contiguous @T@-typed objects.
\end{comment}
\end{itemize}

\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}. It illustrates these differences:
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
\item
\CC does not allow a template function to be nested, while \CFA allows polymorphic functions to be nested.
Hence, \CC precludes a simple form of information hiding.
\item
\label{p:DimensionPassing}
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and is passed implicitly at the call.
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
\begin{comment} % does not appear in the example; I think we've found CFA suffering a similar "belief"
\item
\CC cannot have an array of references, but can have an array of @const@ pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
\end{comment}
% not really about forall-N vs template-N
% any better CFA support is how Rob left references, not what Mike did to arrays
% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
\item
\label{p:ArrayCopy}
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
\end{enumerate}
The \CC template @std::array@ provides a mechanism similar to the \CFA @array@.
It is an aggregate type with the same semantics as a @struct@ holding a C-style array \see{\VRef{s:ArraysCouldbeValues}}, which mitigates points \VRef[]{p:DimensionPassing} and \VRef[]{p:ArrayCopy}.

\begin{figure}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{@{}ll@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
	for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
void demo() {
	const size_t n = 10;
	int ret[n], x[n];			$\C[1.9in]{// static size}$
	for ( int i = 0; i < n; i += 1 ) x[i] = i;
	@copy<int, n >( ret, x );@
	for ( int i = 0; i < n; i += 1 ) cout << ret[i] << ' ';
	cout << endl;
}
int main() {
	demo();
}
\end{c++}
&
\begin{cfa}
int main() {
	@forall( T, [N] )@			$\C{// nested function}$
	void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
		for ( i; N ) ret[i] = x[i];
	}
	void demo( const size_t n ) {

		array( int, n ) ret, x;	$\C{// dynamic-fixed size}$
		for ( i; n ) x[i] = i;	$\C{// initialize}$
		@copy( ret, x );@		$\C{// alternative ret = x}$
		for ( i; n ) sout | ret[i] | nonl;
		sout | nl;
	}
	size_t n;  sin | n;			$\C{// obtain size}\CRT$
	demo( n );
}
\end{cfa}
\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type }
\label{f:TemplateVsGenericType}
\end{figure}

Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so too are length mismatches rejected when they involve dimension parameters.
While \VRef[Figure]{f:fExample} shows a successful call of a function @f@ expecting two arrays of the same length,
\begin{cfa}
array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
a static rejection occurs when attempting to call @f@ with arrays of differing lengths.
\lstinput[tabsize=1]{70-74}{hello-array.cfa}
When the argument lengths themselves are statically unknown,
the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
\lstinput{90-96}{hello-array.cfa}
The rules for this static check are presented in \VRef[Section]{s:ArrayTypingC}.

Orthogonally, the \CFA array type also works inside generic \emph{types}.
That is, an @array@ can be a generic @struct@ member, and an @[N]@ can parameterize a @struct@.
The same argument safety, and the associated implicit length communication, occur.
Preexisting \CFA allows aggregate types to be generalized with type parameters, enabling parameterization of element types.
As polymorphism on functions is extended to allow parameterizing over dimensions, so is polymorphism on types.
The result has profound consequences for safe memory management.

C's Flexible Array Member (FAM) pattern~\cite[\S~6.7.2.1.18]{C11} gives an unsafe way to put an array of dynamic length inside a structure.
\begin{cfa}
struct S {
	...
	double d[];  // incomplete array type => flexible array member
};
struct S * s = malloc( sizeof( struct S ) + sizeof( double [10] ) );
\end{cfa}
This type-@S@ declaration defines a quasi-VLA named @d@ as the last structure member.
The variable-@s@ initialization sizes this particular array at 10 elements.
The choice of @malloc@ here is the common-case FAM idiom, but @alloca@ can be substituted, resulting in stack allocation.
A \CFA @[N]@-parameterized structure improves on this pattern.

\begin{figure}
	\lstinput{10-15}{hello-accordion.cfa}
	\caption{\CFA Dynamic Array Member (DAM) declaration}
	\label{DAM-declaration}
\end{figure}

\CFA allows length-parameterized array members to be nested at arbitrary locations, with further member declarations interleaved.
The result is a \newterm{Dynamic Array Member (DAM)}
% \footnote{
%	To keep the names straight, it may help to think of C's flexible array member as ``too flexible,'' where the discussion that follows makes the case that a FAM is weakly typed.
% },
as illustrated in \VRef[Figure]{DAM-declaration}. The example's @School@ structure has course- and student-level metadata (in the respectively named fields) and a position-indexed preference matrix.
The structure's layout is dynamic; the starting offset of @student_ids@ varies according to the generic parameter @C@, which affects the size of the @course_codes@ field.
Similarly, the offset of @preferences@ varies with both parameters.

\VRef[Figure]{f:DAM-example} shows a program main using the @School@ type, along with results at different array sizes.
The @school@ variable holds many students' course-preference forms.
It is on the stack and its initialization does not use any casting or size arithmetic.
When heap allocation is preferred, the original pattern still applies, with the same qualities.
\begin{cfa}
School( classes, students ) * sp = alloc();
\end{cfa}
The example program prints the courses in each student's preferred order, all using the looked-up display names.
In the example, function @getPref@ returns, for the student at position @is@, the position of their @pref@\textsuperscript{th}-favoured class.
This function helps the program output each student's ordered preferences.
The program's inputs and outputs are given on the right, for two different school sizes.

\begin{figure}
\begin{lrbox}{\myboxA}
\begin{tabular}{@{}l@{}}
\lstinput{30-38}{hello-accordion.cfa} \\
\lstinput{50-55}{hello-accordion.cfa} \\
\lstinput{90-98}{hello-accordion.cfa}
\end{tabular}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{tabular}{@{}l@{}}
@$ cat school1@ \\
\lstinputlisting{school1} \\
@$ ./a.out < school1@ \\
\lstinputlisting{school1.out} \\
@$ cat school2@ \\
\lstinputlisting{school2} \\
@$ ./a.out < school2@ \\
\lstinputlisting{school2.out}
\end{tabular}
\end{lrbox}

\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\usebox\myboxA
&
\usebox\myboxB
\end{tabular}

\caption[\CFA Dynamic Array Member (DAM) usage]{
	\CFA Dynamic Array Member (DAM) usage.
	Example program (left) with input and output (right).}
\label{f:DAM-example}
\end{figure}

Many of these \CFA DAM features are impossible with a C FAM.
In C, a FAM can only occur at the end of a structure, so there can be at most one;
\CFA allows DAMs anywhere in a structure, and several of them.
The \CFA ability to avoid casting and size arithmetic improves safety and usability over C.
While C offers the VLA language feature to eliminate casting and size arithmetic for stack-allocated \emph{simple arrays}, this feature does not extend to stack-allocating a FAM structure, forcing the user to fall back on explicit @alloca@.
The \CFA DAM, by contrast, treats every stack allocation as a simple variable declaration.
When a function operates on a \CFA DAM structure, the type system handles its memory layout transparently.



\section{Single Dimension Array Implementation}
\label{s:1d-impl}

The core of the preexisting \CFA compiler already has the ``heavy equipment'' needed to provide the feature set just reviewed.
To apply this equipment to tracking array lengths, I encode a dimension (an array's length) as a type.
The type in question does not describe any data that the program actually uses at runtime.
It simply carries information through intermediate stages of \CFA-to-C lowering.
Moreover, this type takes a form such that, once \emph{it} is lowered, the result does the right thing.
This new dimension type, encoding the array dimension within it, is the first limited \newterm{dependent type}~\cite{DependentType} added to the \CFA type system and is used at appropriate points during type resolution.

% Furthermore, the @array@ type itself is really ``icing on the cake.''
% Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl} (where added complexity needed for multiple dimensions is considered).
Discussion of the new array implementation is broken into single- and multi-dimensional features.
The multidimensional feature set of \VRef[Section]{s:ArrayMdImpl} is a toolkit for managing repeated application of the single-dimensional features.
Details of the @array@ type are introduced as needed.
The single-dimension discussion begins with a simple example stripped of the @array@ type.
\begin{cfa}
forall( [N] )
struct array_1d_float {
	float items[N];						$\C[2in]{// float elements }$
};
forall( T, [N] )
struct array_1d {
	T items[N];							$\C{// any-typed elements}\CRT$
};
\end{cfa}
These two structure patterns, plus a subscript operator, are all that @array@ provides.
My main work is letting a programmer define such a structure (one whose type is parameterized by @[N]@) and define functions that operate on it (these being parameterized similarly).

The repurposed heavy equipment is:
\begin{itemize}[leftmargin=*]
\item
	The ``resolver'' compiler pass, which provides argument values for a declaration's type-system parameters, gathered from type information in scope at the usage site.
\item
	The ``box'' compiler pass, which encodes information about type parameters into ``extra'' regular parameters on declarations, and corresponding arguments on calls.
	Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter, and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, \ie a use of the parameter.
\end{itemize}

The rules for resolution are slightly restricted to achieve important rejection cases.
This work is detailed in \VRef[Section]{s:ArrayTypingC}.
However, the resolution--boxing scheme, in its preexisting state, is equipped to work on (desugared) dimension parameters.
The following discussion explains the desugaring and why its lowered code is correct.

A simpler structure, and a toy function on it, demonstrate what is needed for the encoding.
\begin{cfa}
forall( [@N@] ) {						$\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @N@; }	$\C{// [2], [3]}$
}
int main() {
	thing( @10@ ) x;  f( x );			$\C{// prints 10, [4]}$
	thing( @100@ ) y;  f( y );			$\C{// prints 100}$
}
\end{cfa}
This example has:
\begin{enumerate}[leftmargin=*]
\item
	The symbol @N@ being declared as a type variable (a variable of the type system).
\item
	The symbol @N@ being used to parameterize a type.
\item
	The symbol @N@ being used as an expression (value).
\item
	A value like 10 being used as an argument to the parameter @N@.
\end{enumerate}
The chosen solution is to encode the value @N@ \emph{as a type}, so items 1 and 2 are immediately available for free.
Item 3 needs a way to recover the encoded value from a (valid) type and to reject invalid types.
Item 4 needs a way to produce a type that encodes the given value.

Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
The production and recovery are then straightforward.
\begin{itemize}[leftmargin=*]
\item
	The value $n$ is encoded as a type whose size is $n$.
\item
	Given a dimension expression $e$, produce the internal type @char[@$e$@]@ to represent it.
	If $e$ evaluates to $n$, then the encoded type has size $n$.
\item
	Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
	If $T$ has size $n$, then the recovery expression evaluates to $n$.
\end{itemize}

This desugaring is applied in a translation step before the resolver.
The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
The running example is lowered to:
\begin{cfa}
forall( @N *@ ) {						$\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @sizeof(N)@; }	$\C{// [2], [3]}$
}
int main() {
	thing( @char[10]@ ) x;  f( x );		$\C{// prints 10, [4]}$
	thing( @char[100]@ ) y;  f( y );	$\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
	@N@ is now declared to be a type.
	It is declared to be \emph{sized} (by the @*@), meaning the box pass performs its @sizeof(N)@$\rightarrow$@__sizeof_N@ extra-parameter and expression translation.
\item
	@thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
\item
	The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
\item
	The type of variable @x@ is another @thing(-)@ type; the argument to the generic @thing@ is a type (array type of bytes, @char[@$e$@]@).
\end{enumerate}

From this point, preexisting \CFA compilation takes over lowering the example the rest of the way to C.
The result below shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated:
\begin{cfa}
// [1]
void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; }	$\C{// [2], [3]}$
int main() {
	struct __conc_thing_10 {} x;  f( @10@, &x );	$\C{// prints 10, [4]}$
	struct __conc_thing_100 {} y;  f( @100@, &y );	$\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
	The type parameter @N@ is gone.
\item
	The type @thing(N)@ is (replaced by @void *@, and thereby effectively) gone.
\item
	The @sout...@ expression has a regular variable (parameter) usage for its second argument.
\item
	Information about the particular @thing@ instantiation (value 10) is moved from the type to a regular function-call argument.
\end{enumerate}
At the end of the desugaring and downstream processing, the original C idiom of ``pass both a length parameter and a pointer'' has been reconstructed.
In the programmer-written form, only the @thing@ is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.

At the \CFA compiler's front end, the changes to the parser are AST schema extensions and validation rules for enabling the sugared user input.
\begin{itemize}[leftmargin=*]
\item
	Recognize the syntax @forall( ... [N] ... )@ as a type-variable declaration.
\item
	Where the AST's representation of a type variable encodes its \newterm{brand}, previously including full-object types @T@ and opaque datatypes @T &@, define a new brand, \newterm{Dimension}, and apply this brand to the new syntax.
\item
	Allow a type variable to occur in an expression.  Validate (after parsing) that only dimension-branded type variables are used here.
\item
	Allow an expression to occur in type-argument position.  Brand the resulting type argument as a dimension.
\item
	Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
	\begin{cfa}
	array_1d( foo, bar ) x;
	\end{cfa}
	\vspace*{-10pt}
	that the brands on the generic arguments match the brands of the declared type variables.
	Here, that @foo@ is a type and @bar@ is a dimension.
\end{itemize}


\section{Typing of C Arrays}
\label{s:ArrayTypingC}

Essential to guaranteeing accurate lengths is the compiler's ability to reject a program that mishandles a length.
By contrast, most discussion so far deals with communicating a length from one party who knows it to another willing to work with any given length.
For scenarios where the concern is a mishandled length, the interaction is between two parties who both claim to know something about it.

C and \CFA can check when working with two static values.
\begin{cfa}
enum { n = 42 };
float x[@n@];							$\C{// or just 42}$
float (*xp1)[@42@] = &x;				$\C{// accept}$
float (*xp2)[@999@] = &x;				$\C{// reject}$
warning: initialization of 'float (*)[999]' from incompatible pointer type 'float (*)[42]'
\end{cfa}
When a variable is involved, C and \CFA take two different approaches.
Today's C compilers accept the following without a warning.
\begin{cfa}
static const int n = 42;
float x[@n@];
float (* xp)[@999@] = &x;				$\C{// should be static rejection here}$
(*xp)[@500@];							$\C{// in ``bounds''?}$
\end{cfa}
Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bounds with respect to its referent @x@,
the access appears in bounds of the type information available on @xp@.
In fact, the length is mishandled in the previous step, where the type-carried length information on @x@ is not compatible with that of @xp@.

In \CFA, I choose to reject this C example at the point where the type-carried length information on @x@ is not compatible with that of @xp@, and correspondingly, its @array@ counterpart at the same location:
\begin{cfa}
static const int n = 42;
array( float, @n@ ) x;
array( float, @999@ ) * xp = &x;		$\C{// static rejection here}$
(*xp)[@500@];							$\C{// runtime check would pass if program compiled}$
\end{cfa}
Given the way the \CFA array is implemented, the type analysis for this case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[leftmargin=*,parsep=0pt]
\item
	Is @array( float, 999 )@ type-compatible with @array( float, n )@?
\item
	Is desugared @array( float, char[999] )@ type-compatible with desugared @array( float, char[n] )@?
% \footnote{
%	Here, \lstinline{arrayX} represents the type that results from desugaring the \lstinline{array} type into a type whose generic parameters are all types.
%	This presentation elides the noisy fact that \lstinline{array} is actually a macro for something bigger;
%	the reduction to \lstinline{char [-]} still proceeds as sketched.}
\item
	Is internal type @char[999]@ type-compatible with internal type @char[n]@?
\end{itemize}
The answer is no because, in general, the value of @n@ is unknown at compile time, and hence an error is raised.
For safety, it makes sense to reject the corresponding C case as well, which is a non-backwards-compatible change.
There are two mitigations for this incompatibility.

First, a simple recourse is available in a situation where @n@ is \emph{known} to be 999: use a cast.
\begin{cfa}
float (* xp)[999] = @(float (*)[999])@&x;
\end{cfa}
The cast means the programmer has accepted blame.
Moreover, the cast is a visible marker of where the unchecked length knowledge is used.
Therefore, a program being onboarded to \CFA requires some upgrading to satisfy the \CFA rules (and arguably becomes clearer), without giving up its validity to a plain C compiler.
Second, the incompatibility only affects types like pointer-to-array, which are infrequently used in C.
The more common C idiom for aliasing an array is to use a pointer-to-first-element type, which does not participate in the \CFA array's length checking.\footnote{
	Notably, the desugaring of the \lstinline{array} type avoids letting any \lstinline{-[-]} type decay,
	in order to preserve the length information that powers runtime bound checking.}
Therefore, the need to upgrade legacy C code is low.
Finally, if this incompatibility is a problem when onboarding C programs to \CFA, it should be possible to change the C type check to a warning rather than an error, acting as a \emph{lint} of the original code for a missing type annotation.
561
Handling two occurrences of the same variable requires more information, \eg this is fine,
563\begin{cfa}
564int n = 42;
565float x[@n@];
566float (*xp)[@n@] = x; $\C{// accept}$
567\end{cfa}
568where @n@ remains fixed across a contiguous declaration context.
569However, intervening dynamic statements can cause failures.
570\begin{cfa}
571int n = 42;
572float x[@n@];
573@n@ = 999; $\C{// dynamic change}$
574float (*xp)[@n@] = x; $\C{// reject}$
575\end{cfa}
576As well, side-effects can even occur in a contiguous declaration context.
577\begin{cquote}
578\setlength{\tabcolsep}{20pt}
579\begin{tabular}{@{}ll@{}}
580\begin{cfa}
581// compile unit 1
582extern int @n@;
583extern float g();
584void f() {
585 float x[@n@] = { @g()@ }; // side-effect
586 float (*xp)[@n@] = x; // reject
587}
588\end{cfa}
589&
590\begin{cfa}
591// compile unit 2
592int @n@ = 42;
593void g() {
594 @n@ = 999; // accept
595}
596
597
598\end{cfa}
599\end{tabular}
600\end{cquote}
The issue here is that the knowledge needed to make a correct decision is hidden by separate compilation.
602Even within a translation unit, static analysis might not be able to provide all the information.
603However, if the example uses @const@, the check is possible even though the value is unknown.
604\begin{cquote}
605\setlength{\tabcolsep}{20pt}
606\begin{tabular}{@{}ll@{}}
607\begin{cfa}
608// compile unit 1
609extern @const@ int n;
610extern float g();
611void f() {
612 float x[n] = { g() };
613 float (*xp)[n] = x; // accept
614}
615\end{cfa}
616&
617\begin{cfa}
618// compile unit 2
619@const@ int n = 42;
620void g() {
621 @n = 999@; // reject
622}
623
624
625\end{cfa}
626\end{tabular}
627\end{cquote}
628
629In summary, the new rules classify expressions into three groups:
630\begin{description}
631\item[Statically Evaluable]
632 Expressions for which a specific value can be calculated (conservatively) at compile-time.
633 A preexisting \CFA compiler module defines which literals, enumerators, and expressions qualify and evaluates them.
634\item[Dynamic but Stable]
635 The value of a variable declared as @const@, including a @const@ parameter.
636\item[Potentially Unstable]
	The catch-all category. Notable examples include
	any function-call result, \eg @float x[foo()]@; a pointer dereference, which in \CFA is itself a function-call result, \eg @void f( const int * n ) { float x[*n]; }@; and any use of a reference-type variable.
639\end{description}
640Within these groups, my \CFA rules are:
641\begin{itemize}[leftmargin=*]
642\item
643 Accept a Statically Evaluable pair, if both expressions have the same value.
644 Notably, this rule allows a literal to match with an enumeration value, based on the value.
645\item
	Accept a Dynamic but Stable pair, if both expressions are written out the same, \ie both refer to the same variable declaration.
647\item
648 Otherwise, reject.
649 Notably, reject all pairs from the Potentially Unstable group and all pairs that cross groups.
650\end{itemize}
651The traditional C rules are:
652\begin{itemize}[leftmargin=*]
653\item
654 Reject a Statically Evaluable pair, if the expressions have two different values.
655\item
656 Otherwise, accept.
657\end{itemize}
\VRef[Figure]{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that the \CFA false alarms occur only in cases that C treats unsafely.
It also shows that the C incompatibilities are confined to those same unsafe cases.
661
662\begin{figure}
663 \newcommand{\falsealarm}{{\color{blue}\small{*}}}
664 \newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
665
666 \begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
667 & \multicolumn{2}{c}{\underline{Values Equal}}
668 & \multicolumn{2}{c}{\underline{Values Unequal}}
669 & \\
670 \textbf{Case} & C & \CFA & C & \CFA & Compat. \\
671 Both Statically Evaluable, Same Symbol & Accept & Accept & & & \cmark \\
672 Both Statically Evaluable, Different Symbols & Accept & Accept & Reject & Reject & \cmark \\
673 Both Dynamic but Stable, Same Symbol & Accept & Accept & & & \cmark \\
674 Both Dynamic but Stable, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
675 Both Potentially Unstable, Same Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
	Any other grouping, Different Symbols	& Accept	& Reject\,\falsealarm	& Accept\,\allowmisuse	& Reject	& \xmark
677 \end{tabular}
678
679 \medskip
680 \noindent\textbf{Legend}
681 \begin{itemize}[leftmargin=*]
682 \item
683 Each row gives the treatment of a test harness of the form
684 \begin{cfa}
685 float x[ expr1 ];
686 float (*xp)[ expr2 ] = &x;
687 \end{cfa}
688 \vspace*{-10pt}
689 where \lstinline{expr1} and \lstinline{expr2} are meta-variables varying according to the row's Case.
690 Each row's claim applies to other harnesses too, including,
691 \begin{itemize}[leftmargin=*]
692 \item
693 calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
694 \item
695 assignment in place of initialization,
696 \item
697 using references in place of pointers, and
698 \item
699 for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
700 \end{itemize}
701 \item
702 Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
703 even though most test harnesses are asymmetric.
704 \item
705 The table treats symbolic identity (Same/Different on rows)
706 apart from value equality (Equal/Unequal on columns).
707 \begin{itemize}[leftmargin=*]
708 \item
709 The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
710 (where \lstinline{n} is a variable with value 1),
711 are all different symbols with the value 1.
712 \item
713 The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
714 \item
715 The row distinction expresses the simple static factors used by today's analyses.
716 \end{itemize}
717 \item
718 Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
719 while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
720 \end{itemize}
721
722 \caption[Case comparison for array type compatibility]{Case comparison for array type compatibility, given pairs of dimension expressions.}
723 \label{f:DimexprRuleCompare}
724\end{figure}
725
726\begin{comment}
727Given that the above check
728\begin{itemize}
729 \item
730 Is internal type @char[999]@ type-compatible with internal type @char[n]@?
731\end{itemize}
732answers false, discussion turns to how I got the \CFA compiler to produce this answer.
733In its preexisting form, the type system had a buggy approximation of the C rules.
734To implement the new \CFA rules, I added one further step.
735\begin{itemize}
736 \item
737 Is @999@ compatible with @n@?
738\end{itemize}
739This question applies to a pair of expressions, where the earlier question applies to types.
740An expression-compatibility question is a new addition to the \CFA compiler, and occurs in the context of dimension expressions, and possibly enumerations assigns, which must be unique.
741
742% ... ensure these compiler implementation matters are treated under \CFA compiler background: type unification, cost calculation, GenPoly.
743
744The relevant technical component of the \CFA compiler is the standard type-unification within the type resolver.
745\begin{cfa}
746example
747\end{cfa}
748I added rules for continuing this unification into expressions that occur within types.
749It is still fundamentally doing \emph{type} unification because it is participating in binding type variables, and not participating in binding any variables that stand in for expression fragments (for there is no such sort of variable in \CFA's analysis.)
750An unfortunate fact about the \CFA compiler's preexisting implementation is that type unification suffers from two forms of duplication.
751
752In detail, the first duplication has (many of) the unification rules stated twice.
753As a result, my additions for dimension expressions are stated twice.
754The extra statement of the rules occurs in the @GenPoly@ module, where concrete types like @array( int, 5 )@\footnote{
755 Again, the presentation is simplified
756 by leaving the \lstinline{array} macro unexpanded.}
757are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
758In this case, the struct's definition contains fields that hardcode the argument values of @float@ and @5@.
759The next time an @array( -, - )@ concrete instance is encountered, it checks if the previous @struct __conc_array_1234@ is suitable for it.
760Yes, for another occurrence of @array( int, 5 )@;
761no, for examples like @array( int, 42 )@ or @array( rational(int), 5 )@.
762In the first example, it must reject the reuse hypothesis for a dimension-@5@ and a dimension-@42@ instance.
763
764The second duplication has unification (proper) being invoked at two stages of expression resolution.
765As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
766In the program
767\begin{cfa}
768void @f@( double ); // overload
769forall( T & ) void @f@( T & ); // overload
770void g( int n ) {
771 array( float, n + 1 ) x;
772 f(x); // overloaded
773}
774\end{cfa}
775when resolving a function call to @g@, the first unification stage compares the type @T@ of the parameter with @array( float, n + 1 )@, of the argument.
776
777The actual rules for comparing two dimension expressions are conservative.
778To answer, ``yes, consider this pair of expressions to be matching,''
779is to imply, ``all else being equal, allow an array with length calculated by $e_1$
780to be passed to a function expecting a length-$e_2$ array.''\footnote{
781 ... Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
782 Should it be an earlier scoping principle? Feels like it should matter in more places than here.}
783So, a ``yes'' answer must represent a guarantee that both expressions evaluate the
784same result, while a ``no'' can tolerate ``they might, but we're not sure'',
785provided that practical recourses are available
786to let programmers express better knowledge.
787The new rule-set in the current release is, in fact, extremely conservative.
788I chose to keep things simple,
789and allow future needs to drive adding additional complexity, within the new framework.
790
791For starters, the original motivating example's rejection is not based on knowledge that the @xp@ length of (the literal) 999 is value-unequal to the (obvious) runtime value of the variable @n@, which is the @x@ length.
792Rather, the analysis assumes a variable's value can be anything, and so there can be no guarantee that its value is 999.
793So, a variable and a literal can never match.
794
795... Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation
796\end{comment}
797
The conservatism of these rules is the reason that \VRef{s:1d-impl} calls the \CFA array type only \emph{limited} dependent.
799In programming languages featuring general dependent typing, considerably more effort is invested in being able to conclude that two type-parameterizing expressions always compute the same value.
800While conservatism is inescapable, these languages use good approximations of eventual runtime behaviour to enable conclusions about more dynamic behaviours, such as a program never popping from an empty stack.
801
The conservatism of the new rule set can leave a programmer needing a recourse, when using a dimension expression whose stability argument is more subtle than the current analysis can follow.
803This recourse is to declare an explicit constant for the dimension value.
804Consider these two dimension expressions, rejected by the blunt current-state rules:
805\begin{cfa}
806void f( int @&@ nr, @const@ int nv ) {
807 float x[@nr@];
808 float (*xp)[@nr@] = &x; $\C[2.5in]{// reject: nr varying (no references)}$
809 float y[@nv + 1@];
810 float (*yp)[@nv + 1@] = &y; $\C{// reject: ?+? unpredictable (no functions)}\CRT$
811}
812\end{cfa}
Yet, both dimension expressions are reused safely:
the @nr@ reference is never written, no implicit code runs between the declarations, and control does not leave the function between the uses.
As well, the built-in @?+?@ function is predictable.
816
817To make these cases work, the programmer adds these constant declarations:
818\begin{cfa}
819void f( int & nr, const int nv ) {
820 @const int nx@ = nr;
821 float x[nx];
822 float (*xp)[nx] = & x; $\C[2.5in]{// accept}$
823 @const int ny@ = nv + 1;
824 float y[ny];
825 float (*yp)[ny] = & y; $\C{// accept}\CRT$
826}
827\end{cfa}
828The result is the originally intended semantics,
829achieved by adding a superfluous ``snapshot it as of now'' directive.
830Note that casting does not help with this issue, because it does not express information about values at different points in time.
831
832The snapshot trick is also used by the \CFA translation, though to achieve a different outcome.
833Rather obviously, every array must be subscriptable, even a bizarre one:
834\begin{cfa}
835array( float, @rand(10)@ ) x; $\C[2.5in]{// x[0] => no elements}$
836x[@0@]; $\C{// 10\% chance of bound-check failure}\CRT$
837\end{cfa}
838Less obvious is that the mechanism of subscripting is a function call, which must communicate length accurately.
839The bound-check above (callee logic) must use the actual allocated length of @x@, without mistakenly reevaluating the dimension expression, @rand(10)@.
840Adjusting the example to make the function's use of length more explicit:
841\begin{cfa}
842forall( T * )
843void f( T * x ) { sout | sizeof( *x ); }
844float x[ rand(10) ];
845f( x );
846\end{cfa}
847Considering that the partly translated function declaration is, loosely,
848\begin{cfa}
849void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
850\end{cfa}
a naive translation evaluates the dimension expression twice:
852\begin{cfa}
853float x[ rand(10) ];
854f( rand(10), &x );
855\end{cfa}
856The correct form is:
857\begin{cfa}
858size_t __dim_x = rand(10);
859float x[ __dim_x ];
860f( __dim_x, &x );
861\end{cfa}
Dimension hoisting already existed in the \CFA compiler.
However, it was buggy, particularly in determining when hoisting an expression can be skipped;
skipping is clearly desirable in some cases, \eg when a programmer has already hoisted manually.
In the new implementation, these cases are handled correctly, harmonized with the accept/reject criteria.
866
867
868\section{Multidimensional Array Implementation}
869\label{s:ArrayMdImpl}
870
871A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
872\begin{enumerate}[leftmargin=*]
873\item
874Flexible-stride memory:
this model has complete independence between subscript ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a row, column, or plane: @c[3][4][*]@, @c[3][*][5]@, @c[3][*][*]@.
876\item
877Fixed-stride memory:
878this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
879\item
880Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element after the start of @x@.
A programmer must manually build any notion of dimensions using other tools;
hence, this style does not offer multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
884\end{enumerate}
885
886There is some debate as to whether the abstraction point ordering above goes $\{1, 2\} < 3$, rather than my numeric ordering.
887That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
888I chose to build the \CFA style-1 array upon a style-2 abstraction.
889Justification of the decision follows, after the description of the design.
890
891Style 3 is the inevitable target of any array implementation.
892The hardware offers this model to the C compiler, with bytes as the unit of displacement.
893C offers this model to programmers as pointer arithmetic, with arbitrary sizes as the unit.
894Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
895
To step into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that the interface is low-level, \ie a C/\CFA array interface includes memory-layout detail.
Specifically, the defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
898Hence, the required memory shape of such a slice is fixed, before any discussion of implementation.
899The implementation presented here is how the \CFA array-library wrangles the C type system to make it do memory steps that are consistent with this layout while not affecting legacy C programs.
900% TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
901
902The new \CFA standard-library @array@-datatype supports richer multidimensional features than C.
903The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
904Beyond what C's array type offers, the new array brings direct support for working with a \emph{noncontiguous} array slice, allowing a program to work with dimension subscripts given in a non-physical order.
905
906The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$, when stepping across the length-7 finely-strided column dimension, and incremented by $1$, when stepping across the length-5 coarsely-strided row dimension.
907\par
908\mbox{\lstinput{121-126}{hello-md.cfa}}
909\par\noindent
910The memory layout is 35 contiguous elements with strictly increasing addresses.
911
912A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
913As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
914This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
915The following is an example slicing a row.
916\lstinput{60-64}{hello-md.cfa}
917\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
918
919However, function @print1d_cstyle@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
920Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
921That is, @r@ need only be of a collection type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
922The new-array library provides the trait @ar@, so-defined.
923With it, the original declaration can be generalized with the same body.
924\lstinput{43-44}{hello-md.cfa}
925\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
926The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
927In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value'', waiting for a value implementing the @ar@ trait.
928\lstinput{150-151}{hello-md.cfa}
929
The example shows @x[2]@ and @x[2, all]@ both refer to the same ``2.*'' slice.
931Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[2,all][3]@, and @x[all,3][2]@.
932This design preserves (and extends) C array semantics by defining @x[i,j]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
933That is:
934\begin{cquote}
935\begin{tabular}{@{}cccccl@{}}
936@x[2,all][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
937@x[all,3][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
938\end{tabular}
939\end{cquote}
940
941Narrating progress through each of the @-[-][-][-]@\footnote{
942The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
943expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
944Where @all@ is redundant:
945\begin{cquote}
946\begin{tabular}{@{}ll@{}}
947@x@ & 2-dimensional, want subscripts for coarse then fine \\
948@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
949@x[2][all]@ & 1-dimensional, want subscript for fine \\
950@x[2][all][3]@ & 0-dimensional; lock fine == 3
951\end{tabular}
952\end{cquote}
953Where @all@ is effective:
954\begin{cquote}
955\begin{tabular}{@{}ll@{}}
956@x@ & 2-dimensional, want subscripts for coarse then fine \\
957@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
958@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
959@x[all][3][2]@ & 0-dimensional; lock coarse == 2
960\end{tabular}
961\end{cquote}
962The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
963For example, in a two dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
964The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
965
966Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
967The \lstinline{-[all]} operation takes subscripts, \eg @x[2][all]@, @x[all][3]@, \etc, and converts (transposes) from the base reference @x[all]@ to a specific reference of the appropriate form.
968The running example's @all@-effective step, stated more concretely, is:
969\begin{cquote}
970\begin{tabular}{@{}ll@{}}
971@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @floats@ apart \\
972@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
973\end{tabular}
974\end{cquote}
975
976\begin{figure}
977\includegraphics{measuring-like-layout}
978\caption[Visualization of subscripting by value and by `all']{
979 Visualization of subscripting by value and by `\lstinline[language=CFA]{all}.'
980 Uses example variable \lstinline{x} of type \lstinline{array( float, 5, 7 )}, understood as 5 rows by 7 columns.
981 The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
982 The vertical shaded band highlights the location of the targeted element, 2.3.
983 Any such vertical slice contains various interpretations of a single address.
984}
985\label{fig:subscr-all}
986\end{figure}
987
988\VRef[Figure]{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
989In both cases, subscript 2 selects from the coarser dimension (rows of @x@),
990while subscript 3 selects from the finer dimension (columns of @x@).
991The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
992Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
993These two starting expressions, which are the example's only multidimensional subexpressions
994(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index selects.
995
996The figure's presentation offers an intuitive answer to: What is an atomic element of @x[all]@?
997\begin{enumerate}[leftmargin=*]
998\item
999@x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
1000\item
1001An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
1002call such an array a column.
1003A column is almost ready to be arranged into a matrix;
1004it is the \emph{contained value} of the next-level building block, but another lie about size is required.
1005\item
1006An atom (like the bottommost value, @x[all][3][2]@), is the contained value (in the square box) and a lie about its size (the left diagonal above it, growing upward).
1007\end{enumerate}
1008At first, an atom needs to be arranged as if it is bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
1009These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
1010Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
1011Yet every time the programmer presents an index, a C-array subscript is achieving the offset calculation.
1012
1013In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
1014the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
1015compared with where analogous rows appear when the row-level option is presented for @x@.
For example, in \lstinline{x[all]}, the shaded band and the values immediately to its left touch atoms 2.3, 2.0, 2.1, 2.2, 1.4, 1.5 and 1.6.
1017But only the atom 2.3 is storing its value there.
1018The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
1019
1020Lying is implemented as casting.
1021The arrangement just described is implemented in the structure @arpk@.
1022This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
1023The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
1024The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
1025
1026% Does x[all] have to lie too? The picture currently glosses over how it it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
1027
1028Casting, overlapping, and lying are unsafe.
1029The mission is to implement a style-1 feature in the type system for safe use by a programmer.
1030The offered style-1 system is allowed to be internally unsafe,
1031just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
1032Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
1033
1034% PAB: repeat from previous paragraph.
1035% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
1036% My casting is unsafe, but I do not do any pointer arithmetic.
1037% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of \VRef[Figure]{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
1038% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
1039
1040% \noindent END: Paste looking for a home
1041
1042\begin{figure}
1043\begin{cfa}
1044forall(
1045 [N], $\C{// length of current dimension}$
1046 S & | sized(S), $\C{// masquerading-as}$
1047 Timmed &, $\C{// immediate element, often another array}$
1048 Tbase & $\C{// base element, \eg float, never array}$
1049) { // distribute forall to each element
1050 struct arpk {
1051 S strides[N]; $\C{// so that sizeof(this) is N of S}$
1052 };
1053 // expose Timmed, stride by S
1054 static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
1055 subcheck( a, i, 0, N );
1056 return (Timmed &)a.strides[i];
1057 }
1058}
1059\end{cfa}
1060\caption{Definition of structure `\lstinline{arpk},' which underlies the \CFA array}
1061\label{f:arpk}
1062\end{figure}
1063
1064The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
1065The @arpk@ structure and its @-[i]@ operator are defined in \VRef[Figure]{f:arpk}.
The private @arpk@ structure (array with explicit packing) is generic over four parameters: dimension length, masquerading-as type, immediate element type, and base element type.
1067This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
1068Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
1069Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
1070Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
1071
1072An instantiation of the @arpk@ generic is given by the @array( E_base, N0, N1, ... )@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array( E_base, N1, ... )@.
1073In the base case, @array( E_base )@ is just @E_base@.
Because this construction uses the same value for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
1075
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
1077Expressed as an operation on types, this rotation is:
1078\begin{eqnarray*}
1079suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
1080enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
1081enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
1082\end{eqnarray*}
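Applying these rules to the running example, with $S_o$ and $S_i$ naming the outer and inner stride instantiations (spacings of 7 floats and 1 float respectively), the rotation unfolds as:
\begin{eqnarray*}
suball( arpk( 5, S_o, arpk( 7, S_i, float, float ), float ) )
	& = & enq( 5, S_o, arpk( 7, S_i, float, float ), float ) \\
	& = & arpk( 7, S_i, enq( 5, S_o, float, float ), float ) \\
	& = & arpk( 7, S_i, arpk( 5, S_o, float, float ), float )
\end{eqnarray*}
matching the earlier concrete description of @x[all]@: 7 of ( 5 of @float@ each spaced 7 floats apart ) each spaced 1 float apart.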
1083
1084
1085\section{Zero Overhead}
1086\label{s:zero-overhead}
1087
1088At runtime, the \CFA array is exactly a C array.
1089\CFA array subscripting is protected with runtime bound checks.
1090The array dependent-typing provides information to the C optimizer, allowing it to remove many of the bound checks.
1091This section provides a demonstration of these effects.
1092
The experiment compares the \CFA array system with the simple-safety system most typically exemplified by Java arrays (\VRef[Section]{JavaCompare}), and reflected in the \CC @vector@ using member @.at@ for subscripting (\VRef[Section]{CppCompare}).
The essential feature is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
Hence, it is the implicit \CFA checked subscripting and the explicit \CC @vector@ @.at@ mechanisms being compared, not limitations of \CC optimization.
1096
1097As a control case, a simple loop receives the same optimization treatment in both the \CFA and \CC versions.
1098When the surrounding code makes it clear subscripting is always in bound, no dynamic bound check is observed in the program's optimized assembly code.
1099But when the environment is adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch a mistake.
1100
1101\VRef[Figure]{f:ovhd-ctl} gives a control-case example of summing values in an array, where each column shows the program in languages C (a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
1102The C code has no bounds checking, while the \CFA and \CC can have bounds checking.
1103The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
1104When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
1105When compiling for possible incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
1106The assembly code excerpts show optimized translations for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
1107Because incorrect-bound mode hardcodes 100 iterations, the loop always executes a first time, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
1108For C, this difference is the only (d--g) difference.
1109For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference, while \CC has an additional indirection to dereference the vector's auxiliary allocation.
1110
\begin{figure}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{llll@{}}
\multicolumn{1}{c}{C} &
\multicolumn{1}{c}{\CFA} &
\multicolumn{1}{c}{\CC (nested vector)}
\\[1em]
\lstinput{20-28}{ar-bchk/control.c} &
\lstinput{20-28}{ar-bchk/control.cfa} &
\lstinput{20-28}{ar-bchk/control.cc}
\\
\multicolumn{1}{c}{(a)} &
\multicolumn{1}{c}{(b)} &
\multicolumn{1}{c}{(c)}
\\[1em]
\multicolumn{4}{l}{
	\includegraphics[page=1,trim=0 9.125in 2in 0,clip]{ar-bchk}
}
\\
\multicolumn{1}{c}{(d)} &
\multicolumn{1}{c}{(e)} &
\multicolumn{1}{c}{(f)}
\\[1em]
\multicolumn{4}{l}{
	\includegraphics[page=2,trim=0 8.75in 2in 0,clip]{ar-bchk}
}
\\
\multicolumn{1}{c}{(g)} &
\multicolumn{1}{c}{(h)} &
\multicolumn{1}{c}{(i)}
\end{tabular}
\caption[Bound-check overhead comparison, control case]{
	Bound-check overhead comparison, control case.
	All three programs/compilers generate equally performant code when the programmer ``self-checks'' bounds via a loop's condition.
	Yet both \CFA and \CC versions generate code that is slower and safer than C's when the programmer might overrun the bounds.
}
\label{f:ovhd-ctl}
\end{figure}

Differences arise when the bound-incorrect programs are passed an array shorter than 100.
The C version, (g), proceeds with undefined behaviour, reading past the end of the array.
The \CFA and \CC versions, (h, i), flag the mistake with the runtime check, the @i >= n@ branch.
This check is provided implicitly by the library types @array@ and @vector@, respectively.
The significant result is the \emph{absence} of runtime checks from the bound-correct translations, (e, f).
The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \ie @i < n@ either way.
So any repeated checks, \ie the branch and its consequent code (raise error), are removable.

% Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
% Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.

When dimension sizes get reused, \CFA has an advantage over the \CC vector at getting simply written programs well optimized.
\VRef[Figure]{f:ovhd-treat-src} shows a simple matrix multiplication over a row-major encoding.
Simple means coding directly to the intuition of the mathematical definition, without trying to optimize for memory layout.
In the assembly code of \VRef[Figure]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip ahead on zero; loop back for next,'' occurs with three nesting levels.
Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@ (bottom right).
Here, \VRef[Figure]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
As in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code is working through the \CFA @array@ library.
The optimizer removed the library-imposed checks because the data structure @array@-of-@array@ is constrained by its type to be shaped correctly for the intuitive looping.
In \CC, the same constraint does not apply to @vector@-of-@vector@.
Because every individual @vector@ carries its own size, two types of bound mistakes are possible.

\begin{figure}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{llll@{}}
\multicolumn{1}{c}{C} &
\multicolumn{1}{c}{\CFA} &
\multicolumn{1}{c}{\CC (nested vector)}
\\
\lstinput{20-37}{ar-bchk/treatment.c} &
\lstinput{20-37}{ar-bchk/treatment.cfa} &
\lstinput{20-37}{ar-bchk/treatment.cc}
\end{tabular}
\caption{Overhead comparison, differentiation case, source.}
\label{f:ovhd-treat-src}
\end{figure}

\begin{figure}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\begin{tabular}{C{1.5in}C{1.5in}C{1.5in}p{1in}}
C &
\CFA &
\CC (nested vector)
\\[1em]
\multicolumn{4}{l}{
	\includegraphics[page=3,trim=0 4in 2in 0,clip]{ar-bchk}
}
\end{tabular}
\caption{Overhead comparison, differentiation case, assembly.}
\label{f:ovhd-treat-asm}
\end{figure}

In detail, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
The \CFA solution guards against this possibility by encoding the minor length (number of columns) in the major element (row) type.
In @res@, for example, each of the @M@ rows is @array(float, N)@, guaranteeing @N@ cells within it.
Or more technically, guaranteeing @N@ as the basis for the imposed bound check \emph{of every row}.
As well, the \CC type does not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
Even assuming away the first issue, \eg that in @lhs@ all minor/cell counts agree, the data format allows the @rhs@ major/row count to disagree.
Or, the possibility that the row count @res.size()@ disagrees with the row count @lhs.size()@ illustrates this bound-mistake type in isolation.
The \CFA solution guards against this possibility by keeping length information separate from the array data, and therefore eligible for sharing.
This capability lets \CFA escape the one-to-one correspondence between array instances and symbolic bounds, where this correspondence leaves a \CC-vector programmer stuck with a matrix representation that repeats itself.

It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it becomes fast (like C) in losing them.
The dynamic checks are dismissed as unnecessary \emph{because} the program is safe to begin with.
To achieve the same performance, a \CC programmer must state appropriate assertions or assumptions, to allow the optimizer to dismiss the runtime checks.
% Especially considering that two of them are in the inner-most loop.
The solution requires doing the work of the inner-loop checks as a \emph{preflight step}.
But this step requires looping, and doing it upfront gives too much separation for the optimizer to see, in the deep loop, that the check has already been done.
So, the programmer must restate the preflight observation within the deep loop, but this time as an unchecked assumption.
Such assumptions are risky because they introduce further undefined behaviour when misused.
Only the programmer's discipline remains to ensure this work is done without error.

In summary, the \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.

\begin{comment}
The ragged-right issue brings with it a source-of-truth difficulty: Where, in the \CC version, is one to find the value of $N$? $M$ can come from either @res@'s or @lhs@'s major/row count, and checking these for equality is straightforward. $P$ can come from @rhs@'s major/row count. But $N$ is only available from columns, \ie minor/cell counts, which are ragged. So any choice of initial source of truth, \eg
\end{comment}


\section{Array Lifecycle}

An array, like a structure, wraps subordinate objects.
Any object type, like @string@, can be an array element or structure member.
A consequence is that the lifetime of the container must match that of its subordinate objects: all elements and members must be initialized/uninitialized implicitly as part of the container's allocation/deallocation.
Modern programming languages implicitly perform these operations via a type's constructor and destructor.
Therefore, \CFA must ensure that an array's subordinate objects' lifecycle operations are called.
Preexisting \CFA mechanisms achieve this requirement, but with poor performance.
Furthermore, advanced array users need an exception to the basic mechanism, which does not occur with other aggregates.
Hence, arrays introduce subtleties in supporting an element's lifecycle.

The preexisting \CFA support for contained-element lifecycle is based on recursive occurrences of the object-type (otype) pseudo-trait.
A type is an otype if it provides a default (parameterless) constructor, copy constructor, assignment operator, and destructor (like \CC).
For a structure with otype members, the compiler implicitly generates implementations of the four otype functions for the outer structure.
Then the generated default constructor for the outer structure calls the default constructor for each member, and the other otype functions work similarly.
For a member that is a C array, these calls occur in a loop for each array element, which even works for VLAs.
This logic works the same whether the member is a concrete type (that happens to be an otype) or a polymorphic type asserted to be an otype (which is implicit in the syntax, @forall(T)@).
The \CFA array has the simplified form (similar to one seen before):
\begin{cfa}
forall( T * ) // non-otype element, no lifecycle functions
// forall( T ) // otype element, lifecycle functions asserted
struct array5 {
	T __items[ 5 ];
};
\end{cfa}
Being a structure with a C-array member, the otype-form declaration @T@ causes @array5( float )@ to be an otype too.
But this otype-recursion pattern has a performance issue.
\VRef[Figure]{f:OtypeRecursionBlowup} shows the steps to find lifecycle functions, under the otype-recursion pattern, for a cube of @float@:
\begin{cfa}
array5( array5( array5( float ) ) )
\end{cfa}
The work needed for the full @float@-cube is 256 leaves.
Thus, the otype-recursion pattern generates helper functions and thunks that are exponential in the number of dimensions.
Anecdotal experience is annoyingly slow compile time at three dimensions and unusable at four.

%array5(T) offers
%1 parameterless ctor, which asks for T to have
% 1 parameterless ctor
% 2 copy ctor
% 3 asgt
% 4 dtor
%2 copy ctor, which asks for T to have
% 1 parameterless ctor
% 2 copy ctor
% 3 asgt
% 4 dtor
%3 asgt, which asks for T to have
% 1 parameterless ctor
% 2 copy ctor
% 3 asgt
% 4 dtor
%4 dtor, which asks for T to have
% 1 parameterless ctor
% 2 copy ctor
% 3 asgt
% 4 dtor

\begin{figure}
\centering
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}lll@{}}
\begin{cfa}[deletekeywords={default}]
float offers
1 default ctor
2 copy ctor
3 assignment
4 dtor
























\end{cfa}
&
\begin{cfa}[deletekeywords={default}]
array5( float ) has
1 default ctor, using float$'$s
 1 default ctor
 2 copy ctor
 3 assignment
 4 dtor
2 copy ctor, using float$'$s
 1 default ctor
 2 copy ctor
 3 assignment
 4 dtor
3 assignment, using float$'$s
 1 default ctor
 2 copy ctor
 3 assignment
 4 dtor
4 dtor, using float$'$s
 1 default ctor
 2 copy ctor
 3 assignment
 4 dtor








\end{cfa}
&
\begin{cfa}[deletekeywords={default}]
array5( array5( float ) ) has
1 default ctor, using array5( float )
 1 default ctor, using float$'$s
  1 default ctor
  2 copy ctor
  3 assignment
  4 dtor
 2 copy ctor, using float$'$s
  1 default ctor
  2 copy ctor
  3 assignment
  4 dtor
 3 assignment, using float$'$s
  1 default ctor
  2 copy ctor
  3 assignment
  4 dtor
 4 dtor, using float$'$s
  1 default ctor
  2 copy ctor
  3 assignment
  4 dtor
2 copy ctor, using array5( float )
 ... 4 children, 16 leaves
3 assignment, using array5( float )
 ... 4 children, 16 leaves
4 dtor, using array5( float )
 ... 4 children, 16 leaves
(64 leaves)
\end{cfa}
\end{tabular}
\caption[Exponential thunk generation under the otype-recursion pattern]{
	Exponential thunk generation under the otype-recursion pattern.
	Each time one type's function (\eg ctor) uses another type's function, the \CFA compiler generates a thunk, to capture the used function's dependencies, presented according to the using function's need.
	So, each non-leaf line represents a generated thunk, and every line represents a search request for the resolver to find a satisfying function.}
\label{f:OtypeRecursionBlowup}
\end{figure}

The issue is that the otype-recursion pattern uses more assertions than needed.
For example, @array5( float )@'s default constructor has all four lifecycle assertions about the element type, @float@.
However, it only needs @float@'s default constructor, as the other operations are never used.
Current work by the \CFA team aims to improve this situation.
My workaround moves from otype (value) to dtype (pointer) with a default-constructor assertion, where dtype does not generate any constructors, but the assertion claws back the default otype constructor.
\begin{cquote}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// autogenerated for otype-recursion:
forall( T )
void ?{}( array5( T ) & this ) {
	for ( i; 5 ) { ( this[i] ){}; }
}
forall( T )
void ?{}( array5( T ) & this, array5( T ) & src ) {
	for ( i; 5 ) { ( this[i] ){ src[i] }; }
}
forall( T )
void ^?{}( array5( T ) & this ) {
	for ( i; 5 ) { ^( this[i] ){}; }
}
\end{cfa}
&
\begin{cfa}
// lean, bespoke:
forall( T* | { void @?{}( T & )@; } )
void ?{}( array5( T ) & this ) {
	for ( i; 5 ) { ( this[i] ){}; }
}
forall( T* | { void @?{}( T &, T )@; } )
void ?{}( array5( T ) & this, array5( T ) & src ) {
	for ( i; 5 ) { ( this[i] ){ src[i] }; }
}
forall( T* | { void @^?{}( T & )@; } )
void ^?{}( array5( T ) & this ) {
	for ( i; 5 ) { ^( this[i] ){}; }
}
\end{cfa}
\end{tabular}
\end{cquote}
Moreover, the assignment operator is skipped, to avoid hitting a lingering growth case.
Temporarily skipping assignment is tolerable because array assignment is not a common operation.
With this solution, the critical lifecycle functions are available, with linear growth in thunk creation for the default constructor.

Finally, the intuition that a programmer using an array always wants the elements' default constructor called \emph{automatically} is simplistic.
Arrays exist to store different values at each position.
So, array initialization needs to let the programmer provide different constructor arguments to each element.
\begin{cfa}
thread worker { int id; };
void ?{}( worker & ) = void; $\C[2.5in]{// remove default constructor}$
void ?{}( worker &, int id );
array( worker, 5 ) ws = @{}@; $\C{// rejected; but desire is for no initialization yet}$
for ( i; 5 ) (ws[i]){ @i@ }; $\C{// explicit (placement) initialization for each thread, giving id}\CRT$
\end{cfa}
Note the use of the \CFA explicit constructor call, analogous to \CC's placement-@new@.
This call is where initialization is desired, and not at the declaration of @ws@.
The attempt to initialize from nothing (equivalent to dropping @= {}@ altogether) is invalid because the @worker@ type removes the default constructor.
The @worker@ type is designed this way to work with the threading system.
A thread type forks a thread at the end of each constructor and joins with it at the start of each destructor.
But a @worker@ cannot begin its forked-thread work without knowing its @id@.
Therefore, there is a conflict between the implicit actions of the builtin @thread@ type and a user's desire to defer these actions.

Another \CFA feature, provided for C backwards compatibility, at first seems viable for initializing the array @ws@, though on closer inspection it is not.
C initialization without a constructor is specified with \lstinline|@=|, \eg \lstinline|array(worker, 5) ws @= {}| ignores all \CFA lifecycle management and uses C empty initialization.
This option does achieve the desired semantics on the construction side.
But on the destruction side, the desired semantics is for implicit destructor calls to continue, to keep the join operation tied to lexical scope.
C initialization disables \emph{all} implicit lifecycle management, but the goal is to disable only the implicit construction.

To fix this problem, I enhanced the \CFA standard library to provide the missing semantics, available in either form:
\begin{cfa}
array( @uninit@(worker), 5 ) ws1;
array( worker, 5 ) ws2 = { @delay_init@ };
\end{cfa}
Both forms cause the @ws@-construction-time implicit call chain to stop before reaching a @worker@ constructor, while leaving the implicit destruction calls intact.
Two forms are available, to parallel the development of this feature in \uCpp~\cite{uC++}.
Originally, \uCpp offered only the @ws1@ form, using the class-template @uNoCtor@, equivalent to \CFA's @uninit@.
More recently, \uCpp was extended with the declaration macro @uArray@, with usage similar to the @ws2@ case.
Based on experience piloting @uArray@ as a replacement of @uNoCtor@, it should be possible to remove the first option.

% note to Mike, I have more fragments on some details available in push24\fragments\uNoCtor.txt

\section{Array Comparison}


\subsection{Rust}

A Rust array is a set of mutable or immutable (@const@) contiguous objects of the same type @T@, where the compile-time dimension(s) is part of the type signature @[T; dimension]@.
\begin{rust}
let val = 42;
let mut ia: [usize; 5] = [val; 5]; $\C{// mutable, repeated initialization [42, 42, 42, 42, 42]}$
let fval = 42.2;
let fa: [f64; 5] = [1.2, fval, 5.6, 7.3, 9.1]; $\C{// immutable, individual initialization}$
\end{rust}
Initialization is required at the declaration, even if the array is subsequently assigned.
Rust arrays are not VLAs, but the compile-time length can be queried using member @len()@.
Arrays can be assigned and passed to parameters by value or reference if and only if the type and dimension match, meaning a different function is needed for each array size.

Array iteration is by a range or array variable.
\begin{rust}
for i in @0..ia.len()@ { print!("{:?} ", ia[i] ); } $\C{// 42 42 42 42 42}$
for fv in @fa@ { print!("{:?} ", fv ); } $\C{// 1.2 42.2 5.6 7.3 9.1}$
for i in 0..=1 { ia[i] = i; } $\C{// mutable changes}$
for iv in ia { print!("{:?} ", iv ); } $\C{// 0 1 42 42 42}$
\end{rust}
An array can be assigned and printed as a whole.
\begin{rust}
ia = @[5; 5]@; println!( "{:?} {:?}", @ia@, ia.len() ); $\C{// [5, 5, 5, 5, 5] 5}$
ia = @[1, 2, 3, 4, 5]@; println!( "{:?} {:?}", @ia@, ia.len() ); $\C{// [1, 2, 3, 4, 5] 5}$
\end{rust}

Multi-dimensional arrays use nested dimensions, resulting in columns before rows.
Here the outer array is a length-@ROWS@ array whose elements are @f64@ arrays of length @COLS@ each.
\begin{cquote}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\begin{rust}
const ROWS: usize = 4; const COLS: usize = 6;
let mut fmatrix: [[f64; @COLS@]; @ROWS@] = [[0.; @COLS@]; @ROWS@];
for r in 0 .. ROWS {
	for c in 0 .. COLS { fmatrix[r][c] = r as f64 + c as f64; } }
\end{rust}
&
\begin{rust}
[ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 ]
[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 ]
[ 2.0, 3.0, 4.0, 5.0, 6.0, 7.0 ]
[ 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 ]
\end{rust}
\end{tabular}
\end{cquote}

Slices borrow a section of an array (subarray) and have type @&[T]@.
A slice has a dynamic size and does not implicitly coerce to an array.
\begin{rust}
println!( "{:?}", @&ia[0 .. 3]@ ); $\C{// fixed bounds, [1, 2, 3]}$
println!( "{:?}", @&ia[ia.len() - 2 .. ia.len()]@ ); $\C{// variable bounds, [4, 5]}$
println!( "{:?}", @&ia[ia.len() - 2 ..]@ ); $\C{// drop upper bound, [4, 5]}$
println!( "{:?}", @&ia[.. ia.len() - 2]@ ); $\C{// drop lower bound, [1, 2, 3]}$
println!( "{:?}", @&ia[..]@ ); $\C{// drop both bounds, [1, 2, 3, 4, 5]}$
\end{rust}
A multi-dimensional slice can only borrow subrows because a slice requires contiguous memory.
Here columns 2--4 are sliced from row 3.
\begin{rust}
let mut slice2: &[f64] = &fmatrix[3][@2 ..= 4@];
println!( "{:?}", slice2 ); $\C{// [5.0, 6.0, 7.0]}$
\end{rust}
Here row 2 is sliced from the sub-matrix formed from rows 1--3.
\begin{rust}
slice2 = &fmatrix[@1 ..= 3@][1];
println!( "{:?}", slice2 ); $\C{// [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]}$
\end{rust}
A slice can be explicitly converted into an array or reference to an array.
\begin{rust}
let slice: &[f64];
slice = &fa[0 ..= 1]; $\C{// create slice, [1.2, 42.2]}$
let mut fa2: [f64; 2] = @<[f64; 2]>::try_from( slice ).unwrap()@; $\C{// convert slice to array, [1.2, 42.2]}$
fa2 = @(& fa[2 ..= 3]).try_into().unwrap()@; $\C{// convert slice to array, [5.6, 7.3]}$
\end{rust}

The @vec@ macro (creating a @Vec@) provides dynamically-sized, dynamically-growable arrays, allocated only on the heap (like the \CC @vector@ type).
\begin{cquote}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\multicolumn{1}{c}{Rust} &\multicolumn{1}{c}{\CC} \\
\begin{rust}
let (rows, cols) = (4, 6);
let mut matrix = vec![vec![0; cols]; rows];
... matrix[r][c] = r + c; // fill matrix
\end{rust}
&
\begin{c++}
int rows = 4, cols = 6;
vector<vector<int>> matrix( rows, vector<int>(cols) );
... matrix[r][c] = r + c; // fill matrix
\end{c++}
\end{tabular}
\end{cquote}
A dynamically-sized array is sliced the same as a fixed-size one.
\begin{rust}
let mut slice3: &[usize] = &matrix[3][2 ..= 4]; $\C{// [5, 6, 7]}$
slice3 = &matrix[1 ..= 3][1]; $\C{// [2, 3, 4, 5, 6, 7]}$
\end{rust}
Finally, to mitigate the restriction of matching array dimensions between argument and parameter types, it is possible for a parameter to be a slice.
\begin{rust}
fn zero( arr: @& mut [usize]@ ) {
	for i in 0 .. arr.len() { arr[i] = 0; }
}
zero( & mut ia[0..5] ); // arbitrary sized slice
zero( & mut ia[0..3] );
\end{rust}
Or write a \emph{generic} function taking an array of fixed element type and any size.
\begin{rust}
fn aprint<@const N: usize@>( arg: [usize; N] ) {
	for r in 0 .. arg.len() { print!( "{} ", arg[r] ); }
}
aprint( ia );
\end{rust}



\subsection{Java}
\label{JavaCompare}

Java arrays are references, so multi-dimensional arrays are arrays-of-arrays \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
\begin{cquote}
\begin{tabular}{rl}
C & @void f( size_t n, size_t m, float x[n][m] );@ \\
Java & @void f( float x[][] );@
\end{tabular}
\end{cquote}
However, in the C prototype, the parameters @n@ and @m@ are manually managed, \ie there is no guarantee they match with the argument matrix for parameter @x@.
\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java has bounds checking, while C has none.
Both Java and C require explicit null checking, otherwise there is a runtime failure.

\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{java}
int m[][] = { // triangular matrix
	new int [4],
	new int [3],
	new int [2],
	new int [1],
	null
};

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) continue;
	for ( int c = 0; c < m[r].length; c += 1 ) {
		m[r][c] = c + r; // subscript checking
	}

}

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) {
		System.out.println( "null row" );
		continue;
	}
	for ( int c = 0; c < m[r].length; c += 1 ) {
		System.out.print( m[r][c] + " " );
	}
	System.out.println();

}
\end{java}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}
int * m[5] = { // triangular matrix
	calloc( 4, sizeof(int) ),
	calloc( 3, sizeof(int) ),
	calloc( 2, sizeof(int) ),
	calloc( 1, sizeof(int) ),
	NULL
};
int rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) continue;
	for ( int c = 0; c < rlnth; c += 1 ) {
		m[r][c] = c + r; // no subscript checking
	}
	rlnth -= 1;
}
rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) {
		printf( "null row\n" );
		continue;
	}
	for ( int c = 0; c < rlnth; c += 1 ) {
		printf( "%d ", m[r][c] );
	}
	printf( "\n" );
	rlnth -= 1;
}
\end{cfa}
\end{lrbox}

\subfloat[Java]{\label{f:JavaTriangularMatrix}\usebox\myboxA}
\hspace{6pt}
\vrule
\hspace{6pt}
\subfloat[C]{\label{f:CTriangularMatrix}\usebox\myboxB}
\hspace{6pt}

\caption{Triangular matrix in Java and C}
\label{f:JavaVsCTriangularMatrix}
\end{figure}

The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
Furthermore, there is the cost of managing the implicit array descriptor.
It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.


\subsection{\CC}
\label{CppCompare}

Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
While the vector size can grow and shrink dynamically, \vs an unchanging dynamic size with VLAs, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration.
So, it costs one upfront dynamic allocation and avoids growing the array through pushing.
\begin{c++}
vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
\end{c++}
Multidimensional arrays are arrays-of-arrays with associated costs.
Each @vector@ array has an array descriptor containing the dimension, which allows bounds checking using @x.at(i)@ in place of @x[i]@.
Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, preventing costly reallocate-and-copy and never invalidating references to contained values.

This scheme matches a Java array's safety and expressiveness exactly, with the same inherent costs.
Notably, a \CC vector of vectors does not provide contiguous element storage (even when upfront allocation is done carefully) because a vector puts its elements in an auxiliary allocation.
So, in spite of @vector< vector< int > >@ appearing to be ``everything by value,'' it is still an outer array whose elements include pointers to further arrays.

\CC has options for working with memory-adjacent data in desirable ways, particularly in recent revisions.
But none makes an allocation with a dynamically given fixed size less awkward than the vector arrangement just described.

\CC~26 adds @std::inplace_vector@, providing an interesting vector--array hybrid.
Like a vector, it lets a user construct elements in a loop, rather than imposing uniform construction.
Yet it preserves \lstinline{std::array}'s ability to be entirely stack-allocated, by avoiding an auxiliary allocation for the elements.
However, it also preserves the fundamental limit of \lstinline{std::array}: the length, being a template parameter, must be a static value.

\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bounds checking.
When wrapping a vector, bounds checking occurs on regular subscripting, \ie one need not remember to use @.at@.
When wrapping a locally declared fixed-size array, bound communication is implicit.
But it has a soundness gap in offering construction from a pointer and a user-given length.

\CC~23's @std::mdspan@ adds multidimensional indexing and reshaping capabilities analogous to those built into \CFA's @array@.
Like @span@, it works over multiple underlying collection types.
But neither @span@ nor @mdspan@ augments the available allocation options.

Thus, these options do not offer an allocation with a dynamically given fixed size.
Furthermore, they provide no improvement to the C flexible-array-member pattern, for making a dynamic amount of storage contiguous with its header, as do \CFA's new dynamic array members.


\begin{comment}
\subsection{Levels of Dependently Typed Arrays}

TODO: fix the position; checked c does not do linear types
\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.

\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.

The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
\begin{itemize}[leftmargin=*]
\item a \emph{zip}-style operation that consumes two arrays of equal length
\item a \emph{map}-style operation whose produced length matches the consumed length
\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions, but these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and ability to opt out.
1751Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.

Dependent type systems, considered for the purpose of bound tracking, can be full-strength or restricted.
In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for its nontermination.
In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
[TODO: clarify how even Idris type checking terminates]

Idris is a current, general-purpose dependently typed programming language.
Length checking is a common benchmark for full dependent type systems.
Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
[TODO: finish explaining what Data.Vect is and then the essence of the comparison]

POINTS:
here is how our basic checks look (on a system that does not have to compromise);
it can also do these other cool checks, but watch how I can mess with its conservativeness and termination

Two contemporary array-centric languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, contribute similar, restricted dependent types for tracking array length.
Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis, which \CFA aims to avoid, is unnecessary.
So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.

The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
There is a particular emphasis on an existential type, enabling callee-determined return shapes.

Dex uses an Ada-style conception of size, embedding quantitative information completely into an ordinary type.

Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.

\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet no objects of type @[N]@ occur.
Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.

If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
The Ada--Dex generality has aesthetic benefits for certain programmers, \eg those working on scheduling resources to weekdays, or those who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bound checks.
[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.
To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.

[TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
get_size n : Unit -> Int
ordinal : n -> Int
unsafe_from_ordinal n : Int -> n
\end{cfa}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
get_size = \(). size a * size b
ordinal = \(i, j). (ordinal i * size b) + ordinal j
unsafe_from_ordinal = \o.
bs = size b
(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
% fix mike's syntax highlighter
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]

In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Static Safety in C Extensions}
\end{comment}


\section{Future Work}
\label{f:FutureWork}

The ultimate goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs.
To that end, this section surveys possible next steps that could support this shift.

\subsection{Array Syntax}

An important goal is to recast the \CFA-style @array(...)@ syntax into C-style array syntax.
The proposal (which is partially implemented) is an alternate dimension and subscript syntax for multi-dimension arrays.
C multi-dimension and subscripting syntax uses multiple bracketed constants/expressions.
\begin{cfa}
int m@[2][3]@; // dimension
m@[0][1]@ = 3; // subscript
\end{cfa}
The alternative \CFA syntax is a comma-separated list:
\begin{cfa}
int m@[2, 3]@; // dimension
m@[0, 1]@ = 3; // subscript
\end{cfa}
which should be intuitive to C programmers, and is used in mathematics, $M_{i,j}$, and in other programming languages, \eg PL/I and Fortran.
With respect to the dimension expressions, C only allows an assignment expression, not a comma expression.
\begin{cfa}
 a[i, j];
test.c:3:16: error: expected ']' before ',' token
\end{cfa}
However, there is an ambiguity for a single-dimension array, where the syntax for old and new arrays is the same, @int ar[10]@.
A solution is to use a terminating comma to denote a \CFA-style single-dimension array.
\begin{cfa}
int ar[2$\Huge\color{red},$]; // single dimension new array
\end{cfa}
This syntactic form is also used for the (rare) singleton tuple @[y@{\Large\color{red},}@]@.
The extra comma in the dimension is only mildly annoying, and acts as eye-candy differentiating old and new arrays.
Hence, \CFA can repurpose the comma expression in this context for a list of dimensions.
Should the C/\CFA array distinction disappear in the future, the need for the terminating comma would disappear too.
With respect to the subscript expression, C allows a comma expression here.
However, a comma expression in this context is rare, and is most commonly a (silent) mistake: subscripting a C matrix with @m[i, j]@ instead of @m[i][j]@ selects the @j@th row, not the @i, j@ element.
It is still possible to write @m[(i, j)]@ in the new syntax to achieve the equivalent of the old @m[i, j]@.
Internally, the compiler must desugar @[i, j, k]@ into @[i][j][k]@ to match with three calls to subscript operators.
Note, there is no ambiguity for subscripting a single-dimension array, as the subscript operator selects the correct form from the array type.
Currently, @array@ supports the old and new subscript syntax \see{\VRef[Figure]{f:ovhd-treat-src}}, including combinations of new and old, @arr[1, 2][3]@.
The new subscript syntax can be extended to C arrays for uniformity, but requires the non-compatible removal of the (rare) comma expression as a subscript.
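
The silent mistake is easy to demonstrate in plain C:
\begin{cfa}
int m[2][3];
m[1][2] = 99;
int * row = m[0, 1]; $\C{// comma expression: discards 0, yields row m[1]}$
assert( row[2] == 99 ); $\C{// m[0, 1][2] is m[1][2], not element m[0][1]}$
\end{cfa}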


\subsection{Error Messages}

Length was a first foray into non-type polymorphic parameters.
In one respect, this work's solution is less general than \CC's non-type template parameters\footnote{
	In another respect, the use of dynamic values, the \CFA solution can do more.}, which can accept strings, functions and values of many more types.
A @forall( [N] )@ parameter is @size_t@, only.

To generalize from \CFA's present solution to \CC-equivalent type coverage would be a significant change.
The \CFA compiler's resolver, the most complex piece, would need to be upgraded to understand these non-type parameters directly.

Today's solution leverages a special relationship between types and @size_t@: the two-way mapping afforded by @char[n]@ and @sizeof(T)@.
This mapping allows for desugaring user input into the language that the pre-existing resolver understands, as detailed in \VRef[Section]{s:1d-impl}.
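
The mapping itself is expressible in plain C (illustrative names):
\begin{cfa}
enum { n = 42 };
typedef char enc[n]; $\C{// encode: the value n becomes the type char[n]}$
static_assert( sizeof(enc) == n, "sizeof recovers n" ); $\C{// decode}$
\end{cfa}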

The downside of this desugaring, as \VRef[Section]{s:ll-user-integration} admonishes, is that it does not provide a means for reporting a user-caused error, from its discovery in the resolver, to the user's attention, without including clutter from the desugaring.
The \CFA team generally recognizes substantial room for improvement even in general, resolver-alone errors.
I fully acknowledge that the current state of errors, arising from new-array usage mistakes, makes the new-array system hard to learn.

\subsection{Retire Pointer Arithmetic}

\CFA has a tremendously flexible core machinery, in which facts about C are expressed.
Even the fact that there is a @?+?@ operator for a pair of @int@s is not baked into the compiler proper; it is expressed, as a \CFA source-code declaration, by an implicit pseudo-include, the \newterm{prelude}.
More controversial facts are stated there too, notably, that every pointer type has arithmetic and subscript operators.
As \VRef[Section]{s:ArraysDecay} explains, this fact is a necessary component in how a \emph{C array} is indexed.
But this fact is unused when dereferencing a \CFA @array@, as the subscript operator, in this case, is defined directly on @array@, and not via an intermediate pointer type.
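
A sketch of the prelude-style declarations in question, with simplified, assumed signatures (the actual prelude entries differ in detail):
\begin{cfa}
forall( T & ) T * ?+?( T *, ptrdiff_t ); $\C{// pointer arithmetic}$
forall( T & ) T & ?[?]( T *, ptrdiff_t ); $\C{// pointer subscript}$
\end{cfa}
Moving such declarations from the permanent prelude into an includable header is what makes an opt-out possible.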

I conducted a successful proof of concept for the technical elements that underpin this vision.
\begin{itemize}
	\item
	Demote the pointer-arithmetic operators from permanent prelude members to header-includable declarations.
	\item
	Include them in a C-compatibility mode, but not in an enhanced-safety mode.
	\item
	Let a development team, which maintains a compilation unit believed to be onboarded to use only new \CFA arrays, set that unit to build in enhanced-safety mode.
	\item
	When a developer falls back on habit and forgetfully uses a C array in the onboarded module, the compiler's error serves as a reminder and a protector of the onboarded code quality.
\end{itemize}

This notion should be explored beyond the toy examples it has seen to date.


\subsection{Defer Size Tracking}

The current length checking and communication works well in the style of example illustrated in this thesis.
The characteristic of this style is:
\begin{itemize}
	\item
	Alice holds an array and knows its size.
	\item
	Bob offers a service that can apply to an array of any size.
	\item
	Alice calls Bob, passing her array; Bob (implicitly) learns the array's (correct) size.
\end{itemize}
This interaction is straightforward universal polymorphism, where Bob is parameterized by size.
And compared with the following alternative, it is much more common.
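
In \CFA terms, this pattern is a size-polymorphic function (a sketch using the chapter's @array@ and @forall( [N] )@ features; names illustrative):
\begin{cfa}
forall( [N] ) void bob( array( float, N ) & a ); $\C{// Bob: any size}$
array( float, 99 ) x; $\C{// Alice: holds the size}$
bob( x ); $\C{// N = 99 inferred at the call}$
\end{cfa}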

But size knowledge could just as well flow the other way:
\begin{itemize}
	\item
	Bob offers a service that provides access to an array of a specific size known to Bob.
	\item
	Alice is content to work on an array of any size.
	\item
	Alice calls Bob, and receives his array as a return; Alice (implicitly) learns the array's (correct) size.
\end{itemize}
This interaction is the dual notion of existential polymorphism, working upon size information.
Work\cite{arr:futhark:tytheory} accompanying the array-centric language Futhark has detailed its theory.

An existential size helps cope with unnecessary repetition that occurs as program size increases and more intermediaries separate an array producer and consumer.
Consider the @School@ structure of \VRef[Figure]{DAM-declaration}, and imagine passing one through a chain of intermediaries.
While the producer and consumer have every reason to know the school's sizes, the intermediaries do not.
Yet, they too would have to be parameterized by the numbers of classes and students, so that they could have them in scope at the point where they name the school's type.

The desired solution is to write the size down, somewhere in relation to the school on which producer and consumer agree, but where it need not bother the intermediaries.
\begin{cfa}
struct SchoolRef {
	size_t classes;
	size_t students;
	School(classes, students) * school;
};
\end{cfa}
This declaration is wishful; at the @school@ member declaration site, the names @classes@ and @students@ are not actually in scope.
But if it worked, @SchoolRef@ would be doing the intermediaries a tremendous service.
It is not a polymorphic type.
The intermediaries no longer need size information in scope to pass one along.
@SchoolRef@ stops the polymorphism of @School@ from flowing farther outward.
The final consumer need only open the box, discover what is inside, and proceed.
\begin{cfa}
forall( [C], [S] ) static void process( School(C, S) * );
void receive( SchoolRef r ) {
	with( r ) {
		classes; students; // are in scope
		School( classes, students ) * s2 = school; // we can state the type
		process( s2 ); // we can communicate the type
	}
}
\end{cfa}


\subsection{Guarantee Zero Overhead}

\VRef[Section]{s:zero-overhead} demonstrates how \CFA does well at skipping unnecessary runtime bound checks.
Nevertheless, the scheme presented puts a programmer's hopes in the hands of the optimizer.
The programmer writes code in which it appears that a bound check ought to be unnecessary, but feedback about success or failure there is indirect.

Dex\cite{arr:dex:long} is a contemporary array-centric language that follows in the Ada array tradition\cite{arr:ada:learn} of designating a type to be used for subscripting an array.
While \CFA has its new @[N]@ living in the type system, this @N@ is used to represent an array's size.
An indexing type would be the type of @i@ in @ar[i]@.
\begin{cfa}
const size_t n = ask_the_user();
typedef range(0, n) till_n_t; // does not exist in CFA yet
till_n_t x = 42; // static reject
till_n_t y = check(42); // static accept, dynamic check
void f( array(int, n) & ar, till_n_t i ) {
	sout | ar[ i ]; // bound check unnecessary
}
\end{cfa}
In today's \CFA solution, any such @i@ is @size_t@.
\begin{cfa}
void f2( array(int, n) & ar, size_t i ) {
	sout | ar[ i ]; // bound check necessary
}
\end{cfa}

With a constrained index type, bound checks shift from being implicit at the time of an array access, to explicit at the time of a conversion.
A user then has the option to import only the index-typed subscript operator.
In such a setup, no implicit bound check can occur.
Bound checks occur exactly where the user places them.
If the user succeeds at needing no explicit bound checks, then the user is assured that there are no bound checks.

Prior \CFA work\cite{Liang24} has touched upon this topic.
Therein, a user's enumeration type is adapted to serve as the domain of an array.
However, this work progressed only as far as inferring an array size from the enumeration type, \eg that @int happy_hours[ weekday_t ]@ should be a declaration with length seven.
It did not address forbidding @happy_hours[ 42 ]@.

Both enumerations and the above hand-waved @range(0, n)@ are valid indexing types.
Supporting indexing types would put the user more in control of a program's performance and failure modes.
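
Sketching the desired rules (hypothetical; none of this checking exists yet):
\begin{cfa}
enum weekday_t { Mon, Tue, Wed, Thu, Fri, Sat, Sun };
int happy_hours[ weekday_t ]; $\C{// length 7, inferred (prior work)}$
happy_hours[ Wed ]; $\C{// accept: index type matches the domain}$
happy_hours[ 42 ]; $\C{// desired: static reject}$
\end{cfa}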

\begin{comment}
\subsection{Range Slicing}

\subsection{With a Module System}




\section{\texorpdfstring{\CFA}{Cforall}}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\texorpdfstring{\CFA}{Cforall} Features Interacting with Arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild, work, if this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function, in which type information associated with a polymorphic return type replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
This compiler argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.

...someday... fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@.
They are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' works at resolving this issue. ...someday... discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
...someday... xref C standard does not claim that @ar1@ may be subscripted, because no stage of interpreting the construction of @ar1@ has it be that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls, where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
...someday... see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

...someday... I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?
\end{comment}