% source: doc/theses/mike_brooks_MMath/array.tex
% Last change 77328d0, by Michael Brooks <mlbrooks@…>
\chapter{Array}
\label{c:Array}

\section{Introduction}

This chapter describes my contribution of language and library features that provide a length-checked array type, as in:

\begin{lstlisting}
array(float, 99) x;    // x contains 99 floats

void f( array(float, 42) & a ) {}
f(x);                  // statically rejected: types are different

forall( T, [N] )
void g( array(T, N) & a, int i ) {
	T elem = a[i];     // dynamically checked: requires 0 <= i < N
}
g(x, 0);               // T is float, N is 99, succeeds
g(x, 1000);            // T is float, N is 99, dynamic check fails
\end{lstlisting}

This example first declares the variable @x@, whose type is an instantiation of the generic type named @array@, with arguments @float@ and @99@.
Next, it declares @f@ as a function that expects a length-42 array; the type system rejects the call's attempt to pass @x@ to @f@, because the lengths do not match.
Next, the @forall@ annotation on function @g@ introduces @T@ as a familiar type parameter and @N@ as a \emph{dimension} parameter, a new feature that represents a count of elements, as managed by the type system.
Because @g@ accepts an array of any length, the type system accepts the calls that pass @x@ to @g@, inferring that this length is 99.
Just as the caller's code does not need to explain that @T@ is @float@, the safe capture and communication of the value @99@ occurs without programmer involvement.
In the case of the second call (which passes the value 1000 for @i@), within the body of @g@, the attempt to subscript @a@ by @i@ fails with a runtime error, since $@i@ \nless @N@$.

The type @array@, as seen above, comes from my additions to the \CFA standard library.
It is very similar to the built-in array type, which \CFA inherits from C.
Its runtime characteristics are often identical, and some features are available in both.

\begin{lstlisting}
forall( [N] )
void declDemo() {
	float a1[N];         // built-in type ("C array")
	array(float, N) a2;  // type from library
}
\end{lstlisting}

If a caller instantiates @N@ with 42, then both locally declared array variables, @a1@ and @a2@, become arrays of 42 elements, each element being a @float@.
The two variables have identical size and layout; they both encapsulate 42-float stack allocations, with no heap allocations and no further ``bookkeeping'' allocations or headers.
Having the @array@ library type (that of @a2@) is a tactical measure, an early implementation that offers full feature support.
A future goal (TODO xref) is to port all of its features into the built-in array type (that of @a1@); then, the library type could be removed, and \CFA would have only one array type.
In its present state, the built-in array has partial support for the new features.
The fully featured library type is used exclusively in introductory examples; feature support and C compatibility are revisited in sec TODO.

Offering the @array@ type, as a distinct alternative to the C array, is consistent with \CFA's extension philosophy (TODO xref background) to date.
A few compatibility-breaking changes to the behaviour of the C array were also made, both as an implementation convenience and as justified fixes to C's lax treatment.

The @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features.
A clean slate was an important starting point because it meant not having to deal with every inherited complexity introduced in TODO xref background-array.


My contributions are
\begin{itemize}
\item a type system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@
\item TODO: general parking...
\item identification of the specific abilities brought by @array@
\item identification of specific, workable improvements that are likely to close any gaps in this feature's readiness for prime time
\end{itemize}


\section{Definitions and design considerations}


\subsection{Dependent typing}



The present work adds a type @array@ to the \CFA standard library~\cite{Cforall}.

This array's length is statically managed and dynamically valued.
This static management achieves argument safety and suggests a path to subscript safety as future work (TODO: cross reference).

This section presents motivating examples of the new array type's usage and follows up with definitions of the notations that appear.

The core of the new array management is tracking all array lengths in the type system.
Dynamically valued lengths are represented using type variables.
The stratification of type variables preceding object declarations makes a length referenceable everywhere that it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return.
\lstinput{10-17}{hello-array.cfa}
Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.

The array type uses the parameterized length information in its @sizeof@ determination, illustrated in the example's call to @alloc@.
That call requests an allocation of type @array(bool, N)@, which the type system deduces from the left-hand side of the initialization, into the return type of the @alloc@ call.
Preexisting \CFA behaviour is leveraged here, both in the return-type-only polymorphism and in the @sized(T)@-aware standard-library @alloc@ routine.
The new @array@ type plugs into this behaviour by implementing the @sized@/@sizeof@ assertion to have the intuitive meaning.
As a result, this design avoids an opportunity for programmer error by making the size/length communication to a called routine implicit, in contrast with C's @calloc@ (or the low-level \CFA analog @aalloc@), which takes an explicit length parameter not managed by the type system.

\VRef[Figure]{f:fHarness} shows the harness for the @f@ function, illustrating how dynamic values are fed into the system.
Here, the @a@ array is loaded with decreasing values, and the @b@ array with amounts off by a constant, giving relative differences within tolerance at first and out of tolerance later.
The program main is run with two different inputs of sequence length.

\begin{figure}
\lstinput{30-49}{hello-array.cfa}
\caption{\lstinline{f} Harness}
\label{f:fHarness}
\end{figure}

The loops in the program main follow the more familiar pattern of using the ordinary variable @n@ to convey the length.
The type system implicitly captures this value at the call site (@main@ calling @f@) and makes it available within the callee (@f@'s loop bound).

The two parts of the example show @n@ adapting a variable into a type-system-managed length (at @main@'s declarations of @a@, @b@, and @result@), @N@ adapting in the opposite direction (at @f@'s loop bound), and a pass-through use of a managed length (at @f@'s declaration of @ret@).

The @forall( ...[N] )@ participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and in the function body.
The present form is chosen to parallel the existing @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ...  // array kind
forall( & T ) ...    // reference kind (dtype)
forall( T ) ...      // value kind (otype)
\end{cfa}

The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.
In summary:
\begin{itemize}
\item
@[N]@ -- within a forall, declares the type variable @N@ to be a managed length
\item
$e$ -- a type representing the value of $e$ as a managed length, where $e$ is a @size_t@-typed expression
\item
@N@ -- an expression of type @size_t@, whose value is the managed length @N@
\item
@array( thing, N0, N1, ... )@ -- a type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects
\end{itemize}
Unsigned integers have a special status in this type system.
Unlike C++, which allows
\begin{c++}
template< size_t N, char * msg, typename T > ... // declarations
\end{c++}
\CFA does not accommodate values of any user-provided type.
TODO: discuss connection with dependent types.
An example of a type error demonstrates argument safety.
The running example has @f@ expecting two arrays of the same length.
A compile-time error occurs when attempting to call @f@ with arrays whose lengths may differ.
\begin{cfa}
forall( [M], [N] )
void bad( array(float, M) & a, array(float, N) & b ) {
	f( a, a ); // ok
	f( b, b ); // ok
	f( a, b ); // error
}
\end{cfa}
%\lstinput{60-65}{hello-array.cfa}
As is common practice in C, the programmer is free to cast, to assert knowledge not shared with the type system.
\begin{cfa}
forall( [M], [N] )
void bad_fixed( array(float, M) & a, array(float, N) & b ) {
	if ( M == N ) {
		f( a, (array(float, M) &)b ); // cast b to matching type
	}
}
\end{cfa}
%\lstinput{70-75}{hello-array.cfa}

Argument safety and the associated implicit communication of array length work with \CFA's generic types too.
\CFA allows aggregate types to be generalized with multiple type parameters, including a parameterized element type, so such a type can also be defined over a parameterized length.
Doing so gives a refinement of C's ``flexible array member'' pattern, which allows nesting structures with array members anywhere within other structures.
\lstinput{10-16}{hello-accordion.cfa}
This structure's layout has the starting offset of @cost_contribs@ varying in @Nclients@, and the offset of @total_cost@ varying in both generic parameters.
For a function that operates on a @request@ structure, the type system handles this variation transparently.
\lstinput{40-47}{hello-accordion.cfa}
In the example, different runs of the program result in different offset values being used.
\lstinput{60-76}{hello-accordion.cfa}
The output values show that @summarize@ and its caller agree on both offsets (where the callee starts reading @cost_contribs@ and where the callee writes @total_cost@).
Yet the call site still says just, ``pass the request.''


\section{Multidimensional implementation}
\label{toc:mdimpl}

TODO: introduce multidimensional array feature and approaches

The new \CFA standard library @array@ datatype supports multidimensional uses more richly than the C array.
The new array's multidimensional interface and implementation follow an array-of-arrays setup, meaning, like C's @float[n][m]@ type, one contiguous object, with coarsely strided dimensions directly wrapping finely strided dimensions.
This setup is in contrast with the pattern of an array of pointers to other allocations, each representing a sub-array.
Beyond what C's type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
C and C++ require a programmer with such a need to manage pointer/offset arithmetic manually.

Examples are shown using a $5 \times 7$ float array, @a@, loaded with increments of $0.1$ when stepping across the length-7 finely strided dimension shown on columns, and with increments of $1.0$ when stepping across the length-5 coarsely strided dimension shown on rows.
%\lstinput{120-126}{hello-md.cfa}
The memory layout of @a@ has strictly increasing numbers along its 35 contiguous positions.

A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
As with the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions.
This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions.
\lstinput{60-66}{hello-md.cfa}
\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}

This function declaration asserts too much knowledge about its parameter @c@ for it to be usable for printing either a row slice or a column slice.
Specifically, declaring the parameter @c@ with type @array@ means that @c@ is contiguous.
However, the function does not use this fact.
For the function to do its job, @c@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@), with managed length @N@.
The new-array library provides the trait @ix@, so defined.
With it, the original declaration can be generalized, while still implemented with the same body, to the latter declaration:
\lstinput{40-44}{hello-md.cfa}
\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}

Nontrivial slicing, in this example, means passing a noncontiguous slice to @print1d@.
The new-array library provides a ``subscript by all'' operation for this purpose.
In a multi-dimensional subscript operation, any dimension given as @all@ is left ``not yet subscripted by a value,'' implementing the @ix@ trait, waiting for such a value.
\lstinput{150-151}{hello-md.cfa}

The example has shown that @a[2]@ and @a[[2, all]]@ both refer to the same, ``2.*'' slice.
Indeed, the various @print1d@ calls under discussion access the entry with value 2.3 as @a[2][3]@, @a[[2,all]][3]@, and @a[[all,3]][2]@.
This design preserves (and extends) C array semantics by defining @a[[i,j]]@ to be @a[i][j]@ for numeric subscripts, but also for subscripting by ``all''.
That is:

\begin{tabular}{cccccl}
@a[[2,all]][3]@  &  $=$  &  @a[2][all][3]@  & $=$  &  @a[2][3]@  & (here, @all@ is redundant)  \\
@a[[all,3]][2]@  &  $=$  &  @a[all][3][2]@  & $=$  &  @a[2][3]@  & (here, @all@ is effective)
\end{tabular}

Narrating progress through each of the @-[-][-][-]@ expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.

\noindent Where @all@ is redundant:

\begin{tabular}{ll}
@a@  & 2-dimensional, want subscripts for coarse then fine \\
@a[2]@  & 1-dimensional, want subscript for fine; lock coarse = 2 \\
@a[2][all]@  & 1-dimensional, want subscript for fine \\
@a[2][all][3]@  & 0-dimensional; lock fine = 3
\end{tabular}

\noindent Where @all@ is effective:

\begin{tabular}{ll}
@a@  & 2-dimensional, want subscripts for coarse then fine \\
@a[all]@  & 2-dimensional, want subscripts for fine then coarse \\
@a[all][3]@  & 1-dimensional, want subscript for coarse; lock fine = 3 \\
@a[all][3][2]@  & 0-dimensional; lock coarse = 2
\end{tabular}

The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.

Contiguous arrays, and slices of them, are all realized by the same underlying parameterized type.
It includes stride information in its metadata.
The @-[all]@ operation is a conversion from a reference to one instantiation to a reference to another instantiation.
The running example's @all@-effective step, stated more concretely, is:

\begin{tabular}{ll}
@a@       & : 5 of ( 7 of float each spaced 1 float apart ) each spaced 7 floats apart \\
@a[all]@  & : 7 of ( 5 of float each spaced 7 floats apart ) each spaced 1 float apart
\end{tabular}

\begin{figure}
\includegraphics{measuring-like-layout}
\caption{Visualization of subscripting, by numeric value, and by \lstinline[language=CFA]{all}.
	Here \lstinline[language=CFA]{x} has type \lstinline[language=CFA]{array( float, 5, 7 )}, understood as 5 rows by 7 columns.
	The horizontal layout represents contiguous memory addresses while the vertical layout uses artistic license.
	The vertical shaded band highlights the location of the targeted element, 2.3.
	Any such vertical contains various interpretations of a single address.}
\label{fig:subscr-all}
\end{figure}

\noindent BEGIN: Paste looking for a home

The world of multidimensional array implementation has, or abuts, four relevant levels of abstraction, highest to lowest:

\begin{enumerate}
\item Purpose:
if you are doing linear algebra, you might call its dimensions ``column'' and ``row.''
If you are treating an acrostic poem as a grid of characters, you might speak of
the direction of reading the phrases vs the direction of reading the keyword.
\item Flexible-stride memory:
assuming, from here on, a need to see/use contiguous memory,
this model offers the ability to slice by (provide an index for) any dimension.
\item Fixed-stride memory:
this model offers the ability to slice by (provide an index for) only the coarsest dimension.
\item Explicit-displacement memory:
no awareness of dimensions, so no distinguishing them; just the ability to access memory at a distance from a reference point.
\end{enumerate}

C offers style-3 arrays.  Fortran, Matlab and APL offer style-2 arrays.
Offering style-2 implies offering style-3 as a sub-case.
My CFA arrays are style-2.

Some debate is reasonable as to whether the abstraction levels actually go $1 < \{2, 3\} < 4$,
rather than forming my numerically ordered chain.
According to the diamond view, styles 2 and 3 are at the same abstraction level, with 3 merely offering a more limited set of functionality.
The chain view reflects the design decision I made in my mission to offer a style-2 abstraction:
I chose to build it upon a style-3 abstraction.
(Justification of the decision follows, after the description of the design.)

The following discussion first dispenses with API styles 1 and 4, then elaborates on my work with styles 2 and 3.

Style 1 is not a concern of array implementations.
It concerns documentation and identifier choices of the purpose-specific API.
If one is offering a matrix-multiply function, one must specify which dimension(s) is/are being summed over
(or rely on the familiar convention of these being the first argument's rows and the second argument's columns).
Some libraries offer a style-1 abstraction that is not directly backed by a single array
(e.g.\ making quadrants contiguous, as may help cache coherence during a parallel matrix multiply),
but such designs are out of scope for a discussion on arrays; they are applications of several arrays.
I typically include style-1 language with examples to help guide intuition.

It is often said that C has row-major arrays while Fortran has column-major arrays.
This comparison brings an unhelpful pollution of style-1 thinking into issues of array implementation.
Unfortunately, ``-major'' has two senses: the program's order of presenting indices and the array's layout in memory.
(The program's order could be either lexical, as in @x[1,2,3]@ subscripting, or runtime, as in the @x[1][2][3]@ version.)
Style 2 is concerned with introducing a nontrivial relationship between program order and memory order,
while style 3 sees program order as identical to memory order.
Both C and (the style-3 subset of) Fortran actually use the same relationship here:
an earlier subscript in program order controls coarser steps in memory.
The job of a layer-2/3 system is to implement program-ordered subscripting according to a defined memory layout.
C and Fortran do not use opposite orders in doing this job.
Fortran is only ``backward'' in its layer-1 conventions for reading/writing and linear algebra.
Fortran subscripts as $m(c,r)$.  When I use style-1 language, I am following the C/mathematical convention of $m(r,c)$.

Style 4 is the inevitable target of any array implementation.
The hardware offers this model to the C compiler, with bytes as the unit of displacement.
C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
I consider casting a multidimensional array as a single-dimensional array/pointer,
then using @x[i]@ syntax to access its elements, to be a form of pointer arithmetic.
But style 4 is not offering arrays.

Now stepping into the implementation
of CFA's new style-2 multidimensional arrays in terms of C's existing style-3 multidimensional arrays,
it helps to clarify that even the interface is quite low-level.
A C/CFA array interface includes the resulting memory layout.
The defining requirement of a style-2 system is the ability to slice a column from a column-finest matrix.
The required memory shape of such a slice is set, before any discussion of implementation.
The implementation presented here is how the CFA array library wrangles the C type system,
to make it do memory steps that are consistent with this layout.
TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?

Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
In both cases, the value 2 selects from the coarser dimension (rows of @x@),
while the value 3 selects from the finer dimension (columns of @x@).
The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, vs from @x[all]@.
Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
These two starting expressions, which are the example's only multidimensional subexpressions
(those that have received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.

The figure's presentation offers an intuition answering, ``What is an atomic element of @x[all]@?''
From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these strange building blocks.
An atom (like the bottommost value, @x[all][3][2]@) is the contained value (in the square box)
and a lie about its size (the wedge above it, growing upward).
An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them,
done according to their size, as announced.  Call such an array a column.
A column is almost ready to be arranged into a matrix; it is the \emph{contained value} of the next-level building block,
but another lie about size is required.
At first, an atom needed to be arranged as if it were bigger,
but now a column needs to be arranged as if it is smaller (the wedge above it, shrinking upward).
These lying columns, arranged contiguously according to their size (as announced), form the matrix @x[all]@.
Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride,
it achieves the requirement of representing the transpose of @x@.
Yet every time the programmer presents an index, a mere C-array subscript is achieving the offset calculation.

In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
compared with where analogous rows appear when the row-level option is presented for @x@.

These size lies create an appearance of overlap.
For example, in @x[all]@, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6.
But only the atom 2.3 is storing its value there.
The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.

Lying is implemented as casting.
The arrangement just described is implemented in the structure @arpk@.
This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
The subscript operator presents what is really inside, by casting to the type below the wedge of lies.

%  Does x[all] have to lie too?  The picture currently glosses over how it advertises a size of 7 floats.  I'm leaving that as an edge case benignly misrepresented in the picture.  Edge cases only have to be handled right in the code.

Casting, overlapping and lying are unsafe.
The mission here is to implement a style-2 feature that the type system helps the programmer use safely.
The offered style-2 system is allowed to be internally unsafe,
just as C's implementation of a style-3 system (upon a style-4 system) is unsafe within,
even when the programmer is using it without casts or pointer arithmetic.
Having a style-2 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.

The choice to implement this style-2 system upon C's style-3 arrays, rather than its style-4 pointer arithmetic,
reduces the attack surface of unsafe code.
My casting is unsafe, but I do not do any pointer arithmetic.
When a programmer works in the common-case style-3 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}),
my casts are identities, and the C compiler is doing its usual displacement calculations.
If I had implemented my system upon style-4 pointer arithmetic,
then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.

\noindent END: Paste looking for a home

The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
The private @arpk@ structure (array with explicit packing) is generic over these two types (and more): the contained element, and what it is masquerading as.
This structure's public interface is the @array(...)@ construction macro and the two subscript operators.
Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.

The @arpk@ structure and its @-[i]@ operator are thus defined as:
\begin{lstlisting}
forall( ztype(N),               // length of current dimension
        dtype(S) | sized(S),    // masquerading-as
        dtype E_im,             // immediate element, often another array
        dtype E_base            // base element, e.g. float, never array
 ) {
struct arpk {
	S strides[N];           // so that sizeof(this) is N of S
};

// expose E_im, stride by S
E_im & ?[?]( arpk(N, S, E_im, E_base) & a, ptrdiff_t i ) {
	return (E_im &) a.strides[i];
}
}
\end{lstlisting}

An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
In the base case, @array(E_base)@ is just @E_base@.
Because this construction uses the same value for the generic parameters @S@ and @E_im@, the resulting layout has trivial strides.

Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @E_im@ chain, which carries @S@ instantiations, intact, to new positions.
Expressed as an operation on types, this rotation is:
\begin{eqnarray*}
suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
\end{eqnarray*}



\CFA array subscripting is protected with runtime bound checks.
Having dependent typing causes the optimizer to remove more of these bound checks than it would without that information.
This section provides a demonstration of the effect.

The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.
The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
The experiment compares with the C++ version to keep access to generated assembly code simple.

As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.
When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.

TODO: paste source and assembly codes

Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized.
The case is naive matrix multiplication over a row-major encoding.

TODO: paste source codes




\section{Comparison with other arrays}

\CFA's array is the first lightweight application of dependently typed bound tracking to an extension of C.
Other extensions of C that apply dependently typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference.
These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially declared bounds are respected thereafter.

\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.


Java's array [TODO:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.
Consider the array parameter declarations in:

\begin{tabular}{rl}
C      &  @void f( size_t n, size_t m, float a[n][m] );@ \\
Java   &  @void f( float[][] a );@
\end{tabular}

Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and, if @i@ is in $[0, \ell)$, then @a[i]@ is a valid access.
If a value of @i@ outside this range is used, a runtime error is guaranteed.
In these respects, C offers no guarantees at all.
Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only.
Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@, as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to state a programmer intention.
Moreover, even if @a[0][0]@ is valid for the purpose intended, C's infamous basic feature is the possibility of an @i@ such that @a[i][0]@ is not valid for the same purpose, and yet its evaluation does not produce an error.
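To make the ``documentation only'' point concrete, here is a plain-C sketch (function names hypothetical): C adjusts the parameter declaration @float a[n][m]@ to @float (*a)[m]@, so the declared @n@ never participates in the type or the generated subscript arithmetic.

```c
#include <stddef.h>

/* C adjusts the parameter 'float a[n][m]' to 'float (*a)[m]':
   the first bound n is documentation only. */
static float first_elem( size_t n, size_t m, float a[n][m] ) {
    (void)n;                       /* n is never consulted */
    (void)m;
    return a[0][0];
}

/* The caller claims n is 99 while passing a 2x3 array: no diagnostic
   is required, because only m participates in the parameter's type. */
float bound_demo( void ) {
    float a[2][3] = { {1, 2, 3}, {4, 5, 6} };
    return first_elem( 99, 3, a );   /* accepted; returns a[0][0] */
}
```

The call compiles cleanly despite the mismatched first dimension, which is precisely the gap between documentation and guarantee discussed above.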

Java's lack of expressiveness for more applied properties means these outcomes are possible:
\begin{itemize}
\item @a[0][17]@ and @a[2][17]@ are valid accesses, yet @a[1][17]@ is a runtime error, because @a[1]@ is a null pointer
\item the same observation, now because @a[1]@ refers to an array of length 5
\item execution times vary, because the @float@ values within @a@ are sometimes stored nearly contiguously, and other times, not at all
\end{itemize}
C's array has none of these limitations, nor do any of the ``array language'' comparators discussed in this section.

This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [TODO:cite example] that C++ programmers use @std::vector@ in place of the C++ language's array, which is essentially the C array.
The advice is that, while a vector is also more powerful (and quirky) than an array, its capabilities include options to preallocate with an upfront size, to use the available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors.
Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow; that is, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.
Granting this scheme the same referential-integrity assumption that \CFA enjoys [TODO:xref], it matches Java's safety and expressiveness exactly.
[TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.]

\subsection{Levels of dependently typed arrays}

The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve expressiveness over C and Java, accommodating examples such as:
\begin{itemize}
\item a \emph{zip}-style operation that consumes two arrays of equal length
\item a \emph{map}-style operation whose produced length matches the consumed length
\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying degrees of static and dynamic checking, and varying ability to opt out.
Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
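For contrast with these comparators, today's C can already \emph{state} the matrix-multiplication shapes with variable-length-array parameters; what it cannot do is enforce the agreement on the middle dimension. A hypothetical sketch:

```c
#include <stddef.h>

/* VLA parameters let the prototype document the shape relationships of
   matrix multiplication: a is n x k, b is k x m, c is n x m.  Nothing
   enforces that the caller's arrays actually agree on k. */
static void matmul( size_t n, size_t k, size_t m,
                    const float a[n][k], const float b[k][m], float c[n][m] ) {
    for ( size_t i = 0; i < n; i++ )
        for ( size_t j = 0; j < m; j++ ) {
            c[i][j] = 0.0f;
            for ( size_t x = 0; x < k; x++ )
                c[i][j] += a[i][x] * b[x][j];
        }
}

/* 1x2 times 2x1 agrees on the middle dimension k = 2; the single
   result entry is the dot product 1*3 + 2*4. */
float matmul_demo( void ) {
    const float a[1][2] = { {1, 2} };
    const float b[2][1] = { {3}, {4} };
    float c[1][1];
    matmul( 1, 2, 1, a, b, c );
    return c[0][0];
}
```

A dependently typed formulation makes the three occurrences of @k@ in the prototype a checked constraint rather than a convention.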

Dependent type systems, considered for the purpose of bound tracking, can be full-strength or restricted.
In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
[TODO: clarify how even Idris type checking terminates]

Idris is a current, general-purpose, dependently typed programming language.
Length checking is a common benchmark for full dependent type systems.
Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
[TODO: finish explaining what Data.Vect is and then the essence of the comparison]

POINTS:
here is how our basic checks look (on a system that does not have to compromise);
it can also do these other cool checks, but watch how I can mess with its conservativeness and termination

Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.
So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.

The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
There is a particular emphasis on an existential type, enabling callee-determined return shapes.

Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.

Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.

\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet having no instances.
Belonging to the type system means @N@ is inferred at a call site and communicated implicitly, as in Dex and unlike in Futhark.
Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.

\subsection{Static safety in C extensions}


\section{Future Work}

\subsection{Declaration syntax}

\subsection{Range slicing}

\subsection{With a module system}

\subsection{With described enumerations}

A project in \CFA's current portfolio will improve enumerations.
In its current state, \CFA has C's enumerations, unmodified.
I will not discuss the core of this project, which already has a tall mission: to improve type safety, maintain appropriate C compatibility, and offer more flexibility about storage use.
It also has a candidate stretch goal: to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
\begin{lstlisting}
forall( T | is_enum(T) )
void show_in_context( T val ) {
	for( T i ) {
		string decorator = "";
		if ( i == val-1 ) decorator = "< ready";
		if ( i == val   ) decorator = "< go"   ;
		sout | i | decorator;
	}
}
enum weekday { mon, tue, wed = 500, thu, fri };
show_in_context( wed );
\end{lstlisting}
with output
\begin{lstlisting}
mon
tue < ready
wed < go
thu
fri
\end{lstlisting}
The details of this presentation are not meant to be taken too precisely as suggestions for how it should look in \CFA.
But the example shows these abilities:
\begin{itemize}
\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
\item a provision for looping (the @for@ form used) over the values of the type.
\end{itemize}

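In today's C, these abilities can only be approximated with hand-maintained tables; the following sketch (helper names hypothetical) shows the bookkeeping that an @is_enum@ implementation would have to generate, including the positional predecessor test that works across the gap between @tue@ (1) and @wed@ (500):

```c
#include <string.h>

enum weekday { mon, tue, wed = 500, thu, fri };

/* Hand-maintained recovery of declaration order; the proposed trait
   would generate this table and its count automatically. */
static const enum weekday weekday_values[] = { mon, tue, wed, thu, fri };
static const int weekday_count = 5;

/* Position of a constant in declaration order, or -1 if invalid. */
int weekday_ordinal( enum weekday d ) {
    for ( int i = 0; i < weekday_count; i++ )
        if ( weekday_values[i] == d ) return i;
    return -1;
}

/* The decoration that show_in_context would print beside value i,
   given the highlighted value val: positional, not numeric, order. */
const char * decorator( enum weekday i, enum weekday val ) {
    int oi = weekday_ordinal( i ), ov = weekday_ordinal( val );
    if ( oi == ov - 1 ) return "< ready";
    if ( oi == ov )     return "< go";
    return "";
}
```

The point of the stretch goal is that none of this scaffolding would be written, or kept in sync, by the programmer.
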
If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
This generality has obvious aesthetic benefits for programmers working on scheduling resources for weekdays, and for programmers who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bound checks.
[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary and now seeks confirmation.
To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is valid only when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.

[TODO, fix confusion:  Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{lstlisting}
interface Ix n
 get_size n : Unit -> Int
 ordinal : n -> Int
 unsafe_from_ordinal n : Int -> n
\end{lstlisting}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many (and in which order) dimensions a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it's the tuple-type of $\langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle$.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements:
\begin{lstlisting}
instance {a b} [Ix a, Ix b] Ix (a & b)
 get_size = \(). size a * size b
 ordinal = \(i, j). (ordinal i * size b) + ordinal j
 unsafe_from_ordinal = \o.
  bs = size b
  (unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{lstlisting}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time:
\begin{lstlisting}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{lstlisting}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]
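
The row-major arithmetic of this pair rule can be sketched in C (names hypothetical), with each index domain reduced to its size and a two-way mapping with $[0, \mathit{size})$, mirroring Dex's @ordinal i * size b + ordinal j@:

```c
#include <stddef.h>

/* A pair of index domains, each reduced to its size.  The pair is
   flattened row-major, as in Dex's built-in pair instance of Ix. */
typedef struct { size_t size_a, size_b; } pair_ix;

/* get_size: the flattened domain has size_a * size_b positions. */
size_t pair_size( pair_ix p ) { return p.size_a * p.size_b; }

/* ordinal: (i, j) maps to i * size b + j. */
size_t pair_ordinal( pair_ix p, size_t i, size_t j ) {
    return i * p.size_b + j;
}

/* unsafe_from_ordinal: recover (i, j) by division and remainder. */
void pair_from_ordinal( pair_ix p, size_t o, size_t * i, size_t * j ) {
    *i = o / p.size_b;                 /* idiv o bs */
    *j = o % p.size_b;                 /* rem o bs  */
}
```

The two mappings are inverses over the flattened range, which is what lets a one-dimensional routine serve a two-dimensional caller.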

In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Retire pointer arithmetic}


\section{\CFA}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: that is what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\CFA features interacting with arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild, work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function, in which type information associated with a polymorphic return type replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc(      sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
This compiler argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.
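
Plain C offers a partial analogue of this benefit in the well-known idiom of sizing from the target of the assignment, which states the element type only once, though it still relies on the programmer remembering the @sizeof@ expression at all. A sketch (function names hypothetical):

```c
#include <stdlib.h>
#include <time.h>

/* Sizing from the assignment's target: the element type appears once,
   in the declaration, so changing it cannot desynchronize the size. */
struct tm * make_one( void ) {
    struct tm * el = malloc( sizeof *el );        /* one element  */
    return el;
}

struct tm * make_ten( void ) {
    struct tm * ar = malloc( 10 * sizeof *ar );   /* ten elements */
    return ar;
}
```

The \CFA @alloc@ goes further by removing the size expression from the user's code entirely.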

TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' for @ar2@ breaks a parallel with @ar1@.
They are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
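The same asymmetry exists in plain C, independent of @alloc@; a minimal sketch:

```c
#include <stdlib.h>

/* Two allocation styles: an element pointer and a pointer to a whole
   array.  Both reach element 5 of the same layout, but the pointer to
   array needs an extra dereference before the subscript. */
int subscript_demo( void ) {
    int * ar1 = malloc( 10 * sizeof(int) );   /* element pointer   */
    int (*ar2)[10] = malloc( sizeof *ar2 );   /* pointer to array  */
    ar1[5] = 7;
    (*ar2)[5] = 7;                            /* extra dereference */
    int same = ar1[5] == (*ar2)[5];
    free( ar1 );
    free( ar2 );
    return same;
}
```
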
Using ``reference to array'' works to resolve this issue.  TODO: discuss connection with Doug Lea's \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
TODO xref: the C standard does not claim that @ar1@ may be subscripted, because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls, where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.