1\chapter{Array}
2\label{c:Array}
3
4
5\section{Introduction}
6
Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
8This chapter describes the new \CFA language and library features that introduce a length-checked array-type to the \CFA standard library~\cite{Cforall}, \eg:
9\begin{cfa}
10@array( float, 99 )@ x;                                 $\C{// x contains 99 floats}$
11void f( @array( float, 42 )@ & p ) {}   $\C{// p accepts 42 floats}$
12f( x );                                                                 $\C{// statically rejected: types are different, 99 != 42}$
13
14forall( T, [N] )
15void g( @array( T, N )@ & p, int i ) {
16        T elem = p[i];                                          $\C{// dynamically checked: requires 0 <= i < N}$
17}
18g( x, 0 );                                                              $\C{// T is float, N is 99, dynamic subscript check succeeds}$
19g( x, 1000 );                                                   $\C{// T is float, N is 99, dynamic subscript check fails}$
20\end{cfa}
21This example declares variable @x@, with generic type @array@ using arguments @float@ and @99@.
22Function @f@ is declared with an @array@ parameter of length @42@.
23The call @f( x )@ is invalid because the @array@ lengths @99@ and @42@ do not match.
24Next, function @g@ introduces a @forall@ prefix on type parameter @T@ and arbitrary \emph{dimension parameter} @N@, the new feature that represents a count of elements managed by the type system.
25The call @g( x, 0 )@ is valid because @g@ accepts any length of array, where the type system infers @float@ for @T@ and length @99@ for @N@.
26Inferring values for @T@ and @N@ is implicit without programmer involvement.
27Furthermore, the runtime subscript @x[0]@ (parameter @i@ is @0@) in @g@ is valid because @0@ is in the dimension range $[0,99)$ of argument @x@.
28The call @g( x, 1000 )@ is also valid;
29however, the runtime subscript @x[1000]@ is invalid (generates a subscript error) because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
30
31The generic @array@ type is similar to the C array type, which \CFA inherits from C.
32Its runtime characteristics are often identical, and some features are available in both.
For example, assume a caller instantiates @N@ with 42 in the following (details to follow).
34\begin{cfa}
35forall( [N] )
36void declDemo() {
37        float x1[N];                            $\C{// built-in type ("C array")}$
38        array(float, N) x2;                     $\C{// type from library}$
39}
40\end{cfa}
41Both of the locally-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout; each is a single block of 42 @float@s, whether allocated on the stack or the heap, with no additional ``bookkeeping'' allocations or headers.
43Providing this explicit generic approach required a significant extension to the \CFA type system to support a full-feature, safe, efficient (space and time) array-type, which forms the foundation for more complex array forms in \CFA.
44
45Admittedly, the @array@ library type (type for @x2@) is syntactically different from its C counterpart.
46A future goal (TODO xref) is to provide a built-in array type with syntax approaching C's (type for @x1@);
47then, the library @array@ type can be removed giving \CFA a largely uniform array type.
48At present, the built-in array is only partially supported, so the generic @array@ is used exclusively in the discussion;
49feature support and C compatibility are revisited in Section ? TODO.
50
51Offering an @array@ type, as a distinct alternative to the C array, is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (gcc) programs can be compiled by \CFA with only a small number of changes, similar to \CC (g++).
52However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
53Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity introduced by the C array TODO xref.
54
55My contributions are:
56\begin{enumerate}
57\item A type system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
58\item Provide a length-checked array-type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
59\item Provide argument/parameter passing safety for arrays and subscript safety.
60\item TODO: general parking...
61\item Identify the interesting specific abilities available by the new @array@ type.
62\item Where there is a gap concerning this feature's readiness for prime-time, identification of specific workable improvements that are likely to close the gap.
63\end{enumerate}
64
65
66\section{Definitions and design considerations}
67
68
69\subsection{Dependent typing}
70
71
72\section{Features added}
73
74This section presents motivating examples for the new array type, demonstrating the syntax and semantics of the generic @array@.
75As stated, the core capability of the new array is tracking all dimensions in the type system, where dynamic dimensions are represented using type variables.
76
77The definition of type variables preceding object declarations makes the array dimension lexically referenceable where it is needed.
78For example, a declaration can share one length, @N@, among a pair of parameters and the return.
79\lstinput{10-17}{hello-array.cfa}
80Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array by @alloc@ uses the parameterized dimension information in its implicit @_Alignof@ and @sizeof@ determinations, and in casting the return type.
82\begin{cfa}
83static inline forall( T & | sized(T) )
84T * alloc( size_t dim ) {
85        if ( _Alignof(T) <= libAlign() ) return (T *)aalloc( dim, sizeof(T) ); // calloc without zero fill
86        else return (T *)amemalign( _Alignof(T), dim, sizeof(T) ); // array memalign
87}
88\end{cfa}
89Here, the type system deduces from the left-hand side of the assignment the type @array(bool, N)@, and forwards it as the type variable @T@ for the generic @alloc@ function, making it available in its body.
90Hence, preexisting \CFA behaviour is leveraged here, both in the return-type polymorphism, and the @sized(T)@-aware standard-library @alloc@ routine.
91This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing necessary @sized@ assertions needed by other types.
92(@sized@ implies a concrete \vs abstract type with a compile-time size.)
As a result, there is a significant gain in programming safety by making the size accessible and implicit, compared with C's @calloc@ and the non-array-supporting @memalign@, which take explicit length parameters not managed by the type system.
94
95\begin{figure}
96\lstinput{30-43}{hello-array.cfa}
97\lstinput{45-48}{hello-array.cfa}
98\caption{\lstinline{f} Harness}
99\label{f:fHarness}
100\end{figure}
101
102\VRef[Figure]{f:fHarness} shows a harness that uses the @f@ function illustrating how dynamic values are fed into the @array@ type.
103Here, the dimension of the @x@, @y@, and @result@ arrays is specified from a command-line value and these arrays are allocated on the stack.
104Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
105The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
106The loops follow the familiar pattern of using the variable @n@ to iterate through the arrays.
107Most importantly, the type system implicitly captures @n@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @n@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-thru use of @N@ at @f@'s declaration of @ret@.
109Except for the lifetime-management issue of @result@, \ie explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
These benefits should not be underestimated.
111
112In general, the @forall( ..., [N] )@ participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and within a function.
113The syntactic form is chosen to parallel other @forall@ forms:
114\begin{cfa}
115forall( @[N]@ ) ...     $\C[1.5in]{// array kind}$
116forall( T & ) ...       $\C{// reference kind (dtype)}$
117forall( T ) ...         $\C{// value kind (otype)}\CRT$
118\end{cfa}
119% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.
120In summary:
121\begin{itemize}
122\item
123@[N]@ within a forall declares the type variable @N@ to be a managed length.
124\item
125The type of @N@ within code is @size_t@.
126\item
127The value of @N@ within code is the acquired length derived from the usage site, \ie generic declaration or function call.
128\item
129@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects.
130\end{itemize}
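For example, a small sketch combining these points (assuming several dimension parameters can be declared in one @forall@, just as several type parameters can):
\begin{cfa}
forall( [N], [M] )
void fill( array( float, N, M ) & m, float v ) {
	size_t rows = N, cols = M;      // N and M are ordinary size_t values within the body
	for ( r; rows )
		for ( c; cols )
			m[r][c] = v;
}
array( float, 5, 7 ) m;             // wraps 5 * 7 = 35 contiguous floats
fill( m, 1.0f );                    // N == 5 and M == 7 inferred from m's type
\end{cfa}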
131
132\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
133\begin{enumerate}[leftmargin=*]
134\item
135The \CC template @N@ is a compile-time value, while the \CFA @N@ is a runtime value.
136\item
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
138The \CFA @N@ is part of the array type and passed implicitly at the call.
139\item
140\CC cannot have an array of references, but can have an array of pointers.
141\CC has a (mistaken) belief that references are not objects, but pointers are objects.
142In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
144\item
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
146\end{enumerate}
147
148\begin{figure}
149\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
150\begin{c++}
151
152@template< typename T, size_t N >@
153void copy( T ret[N], T x[N] ) {
154        for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
155}
156int main() {
157        int ret[10], x[10];
158        for ( int i = 0; i < 10; i += 1 ) x[i] = i;
159        @copy<int, 10 >( ret, x );@
160        for ( int i = 0; i < 10; i += 1 )
161                cout << ret[i] << ' ';
162        cout << endl;
163}
164\end{c++}
165&
166\begin{cfa}
167int main() {
168        @forall( T, [N] )@   // nested function
169        void copy( array( T, N ) & ret, array( T, N ) & x ) {
		for ( i; N ) ret[i] = x[i];
171        }
172
173        array( int, 10 ) ret, x;
174        for ( i; 10 ) x[i] = i;
175        @copy( ret,  x );@
176        for ( i; 10 )
177                sout | ret[i] | nonl;
178        sout | nl;
179}
180\end{cfa}
181\end{tabular}
182\caption{\CC \lstinline[language=C++]{template} \vs \CFA generic type }
183\label{f:TemplateVsGenericType}
184\end{figure}
185
186Continuing the discussion of \VRef[Figure]{f:fHarness}, the example has @f@ expecting two arrays of the same length.
187A compile-time error occurs when attempting to call @f@ with arrays of differing lengths.
188% removing leading whitespace
189\lstinput[tabsize=1]{52-53}{hello-array.cfa}
190\lstinput[tabsize=1,aboveskip=0pt]{62-64}{hello-array.cfa}
191As is common practice in C, the programmer is free to cast, \ie to assert knowledge not shared with the type system.
192\lstinput{70-74}{hello-array.cfa}
193
194Orthogonally, the new @array@ type works with \CFA's generic types, providing argument safety and the associated implicit communication of array length.
195Specifically, \CFA allows aggregate types to be generalized with multiple type parameters, including parameterized element types and lengths.
Doing so gives a refinement of C's ``flexible array member'' pattern, allowing structures with array members to be nested anywhere within other structures.
197\lstinput{10-15}{hello-accordion.cfa}
198This structure's layout has the starting offset of @municipalities@ varying in @NprovTerty@, and the offset of @total_pt@ and @total_mun@ varying in both generic parameters.
199For a function that operates on a @CanadaPop@ structure, the type system handles this variation transparently.
200\lstinput{40-45}{hello-accordion.cfa}
\VRef[Figure]{f:checkHarness} shows program results where different offset values are used.
The output values show that @summarize@ and its caller agree on both offsets (where the callee starts reading @cost_contribs@ and where the callee writes @total_cost@).
203Yet the call site just says, ``pass the request.''
204
205\begin{figure}
206\lstinput{60-68}{hello-accordion.cfa}
207\lstinput{70-72}{hello-accordion.cfa}
208\caption{\lstinline{check} Harness}
209\label{f:checkHarness}
210\end{figure}
211
212
213\section{Multidimensional Arrays}
214\label{toc:mdimpl}
215
216% TODO: introduce multidimensional array feature and approaches
217
218When working with arrays, \eg linear algebra, array dimensions are referred to as ``rows'' and ``columns'' for a matrix, adding ``planes'' for a cube.
219(There is little terminology for higher dimensional arrays.)
220For example, an acrostic poem\footnote{A type of poetry where the first, last or other letters in a line spell out a particular word or phrase in a vertical column.}
221can be treated as a grid of characters, where the rows are the text and the columns are the embedded keyword(s).
222Within a poem, there is the concept of a \newterm{slice}, \eg a row is a slice for the poem text, a column is a slice for a keyword.
223In general, the dimensioning and subscripting for multidimensional arrays has two syntactic forms: @m[r,c]@ or @m[r][c]@.
224
225Commonly, an array, matrix, or cube, is visualized (especially in mathematics) as a contiguous row, rectangle, or block.
This conceptualization is reinforced by subscript ordering, \eg $m_{r,c}$ for a matrix and $c_{p,r,c}$ for a cube.
227Few programming languages differ from the mathematical subscript ordering.
228However, computer memory is flat, and hence, array forms are structured in memory as appropriate for the runtime system.
229The closest representation to the conceptual visualization is for an array object to be contiguous, and the language structures this memory using pointer arithmetic to access the values using various subscripts.
230This approach still has degrees of layout freedom, such as row or column major order, \ie juxtaposed rows or columns in memory, even when the subscript order remains fixed.
231For example, programming languages like MATLAB, Fortran, Julia and R store matrices in column-major order since they are commonly used for processing column-vectors in tabular data sets but retain row-major subscripting.
232In general, storage layout is hidden by subscripting, and only appears when passing arrays among different programming languages or accessing specific hardware.
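For example, a small C sketch of the address computation that row-major subscripting performs:
\begin{cfa}
float m[5][7];          // C: row-major, each row of 7 floats is contiguous
// &m[r][c]  ==  (float *)m + r * 7 + c        (row stride is 7 floats)
// a column-major layout would instead use  base + c * 5 + r
\end{cfa}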
233
234\VRef[Figure]{f:FixedVariable} shows two C90 approaches for manipulating contiguous arrays.
235Note, C90 does not support VLAs.
236The fixed-dimension approach uses the type system;
237however, it requires all dimensions except the first to be specified at compile time, \eg @m[][6]@, allowing all subscripting stride calculations to be generated with constants.
238Hence, every matrix passed to @fp1@ must have exactly 6 columns but the row size can vary.
The variable-dimension approach ignores (violates) the type system, \ie argument and parameter types do not match, and manually performs pointer arithmetic for subscripting in the macro @sub@.
240
241\begin{figure}
242\begin{tabular}{@{}l@{\hspace{40pt}}l@{}}
243\multicolumn{1}{c}{\textbf{Fixed Dimension}} & \multicolumn{1}{c}{\textbf{Variable Dimension}} \\
244\begin{cfa}
245
246void fp1( int rows, int m[][@6@] ) {
247        ...  printf( "%d ", @m[r][c]@ );  ...
248}
249int fm1[4][@6@], fm2[6][@6@]; // no VLA
250// initialize matrixes
251fp1( 4, fm1 ); // implicit 6 columns
252fp1( 6, fm2 );
253\end{cfa}
254&
255\begin{cfa}
#define sub( m, r, c ) *(m + r * cols + c)
257void fp2( int rows, int cols, int *m ) {
258        ...  printf( "%d ", @sub( m, r, c )@ );  ...
259}
260int vm1[4][4], vm2[6][8]; // no VLA
261// initialize matrixes
262fp2( 4, 4, vm1 );
263fp2( 6, 8, vm2 );
264\end{cfa}
265\end{tabular}
266\caption{C90 Fixed \vs Variable Contiguous Matrix Styles}
267\label{f:FixedVariable}
268\end{figure}
269
270Many languages allow multidimensional arrays-of-arrays, \eg in Pascal or \CC.
271\begin{cquote}
272\begin{tabular}{@{}ll@{}}
273\begin{pascal}
274var m : array[0..4, 0..4] of Integer;  (* matrix *)
275type AT = array[0..4] of Integer;  (* array type *)
276type MT = array[0..4] of AT;  (* array of array type *)
277var aa : MT;  (* array of array variable *)
278m@[1][2]@ := 1;   aa@[1][2]@ := 1 (* same subscripting *)
279\end{pascal}
280&
281\begin{c++}
282int m[5][5];
283
284typedef vector< vector<int> > MT;
285MT vm( 5, vector<int>( 5 ) );
286m@[1][2]@ = 1;  aa@[1][2]@ = 1;
287\end{c++}
288\end{tabular}
289\end{cquote}
290The language decides if the matrix and array-of-array are laid out the same or differently.
291For example, an array-of-array may be an array of row pointers to arrays of columns, so the rows may not be contiguous in memory nor even the same length (triangular matrix).
292Regardless, there is usually a uniform subscripting syntax masking the memory layout, even though the two array forms could be differentiated at the subscript level, \eg @m[1,2]@ \vs @aa[1][2]@.
293Nevertheless, controlling memory layout can make a difference in what operations are allowed and in performance (caching/NUMA effects).
294
295C also provides non-contiguous arrays-of-arrays.
296\begin{cfa}
297int m[5][5];                                                    $\C{// contiguous}$
298int * aa[5];                                                    $\C{// non-contiguous}$
299\end{cfa}
both with different memory layouts using the same subscripting syntax, and each with its own set of issues.
301The focus of this work is on the contiguous multidimensional arrays in C.
302The reason is that programmers are often forced to use the more complex array-of-array form when a contiguous array would be simpler, faster, and safer.
303Nevertheless, the C array-of-array form continues to be useful for special circumstances.
304
305\VRef[Figure]{f:ContiguousNon-contiguous} shows the extensions made in C99 for manipulating contiguous \vs non-contiguous arrays.\footnote{C90 also supported non-contiguous arrays.}
306First, VLAs are supported.
Second, for contiguous arrays, C99 allows one or more prior parameters to be used as downstream dimensions, \eg @cols@, implicitly using this parameter to compute the row stride of @m@.
308If the declaration of @fc@ is changed to:
309\begin{cfa}
310void fc( int rows, int cols, int m[@rows@][@cols@] ) ...
311\end{cfa}
312it is possible for C to perform bound checking across all subscripting, but it does not.
313While this contiguous-array capability is a step forward, it is still the programmer's responsibility to manually manage the number of dimensions and their sizes, both at the function definition and call sites.
314That is, the array does not automatically carry its structure and sizes for use in computing subscripts.
315While the non-contiguous style in @faa@ looks very similar to @fc@, the compiler only understands the unknown-sized array of row pointers, and it relies on the programmer to traverse the columns in a row correctly.
316Specifically, there is no requirement that the rows are the same length, like a poem with different length lines.
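Returning to @fc@, nothing stops a call site from describing the dimensions inconsistently with the argument's actual layout:
\begin{cfa}
int m[5][5];
fc( 5, 4, m );  // compiles; the row stride is now taken as 4, silently reading the wrong elements
\end{cfa}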
317
318\begin{figure}
319\begin{tabular}{@{}ll@{}}
320\multicolumn{1}{c}{\textbf{Contiguous}} & \multicolumn{1}{c}{\textbf{ Non-contiguous}} \\
321\begin{cfa}
322void fc( int rows, @int cols@, int m[ /* rows */ ][@cols@] ) {
323        ...  printf( "%d ", @m[r][c]@ );  ...
324}
325int m@[5][5]@;
326for ( int r = 0; r < 5; r += 1 ) {
327
328        for ( int c = 0; c < 5; c += 1 )
329                m[r][c] = r + c;
330}
331fc( 5, 5, m );
332\end{cfa}
333&
334\begin{cfa}
335void faa( int rows, int cols, int * m[ @/* cols */@ ] ) {
336        ...  printf( "%d ", @m[r][c]@ );  ...
337}
338int @* aa[5]@;  // row pointers
339for ( int r = 0; r < 5; r += 1 ) {
340        @aa[r] = malloc( 5 * sizeof(int) );@ // create rows
341        for ( int c = 0; c < 5; c += 1 )
342                aa[r][c] = r + c;
343}
344faa( 5, 5, aa );
345\end{cfa}
346\end{tabular}
347\caption{C99 Contiguous \vs Non-contiguous Matrix Styles}
348\label{f:ContiguousNon-contiguous}
349\end{figure}
350
351
352\subsection{Multidimensional array implementation}
353
354A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
355\begin{enumerate}
356\item
357Flexible-stride memory:
358this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane, row, or column, \eg @c[3][*][*]@, @c[3][4][*]@, @c[3][*][5]@.
359\item
360Fixed-stride memory:
361this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
After this first subscript, the remaining subscripting and memory layout are independent.
363\item
364Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
366A programmer must manually build any notion of dimensions using other tools;
367hence, this style is not offering multidimensional arrays \see{\VRef[Figure]{f:FixedVariable}}.
368\end{enumerate}
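For concreteness, a small C sketch of levels 2 and 3 (level 1 has no direct C analogue):
\begin{cfa}
int c[4][5][7];                     // fixed-stride: subscript order is bound to layout
int (* plane)[5][7] = &c[3];        // slice only by the coarsest dimension
int * flat = &c[0][0][0];           // explicit displacement
int v = flat[3*5*7 + 4*7 + 5];      // same element as c[3][4][5], offset computed manually
\end{cfa}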
369
There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than the strict numeric ordering given above.
371That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
372I chose to build the \CFA style-1 array upon a style-2 abstraction.
373(Justification of the decision follows, after the description of the design.)
374
375Style 3 is the inevitable target of any array implementation.
376The hardware offers this model to the C compiler, with bytes as the unit of displacement.
377C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
378Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
379
Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
383The required memory shape of such a slice is set, before any discussion of implementation.
384The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
385TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
386
387The new \CFA standard library @array@ datatype supports richer multidimensional features than C.
388The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
389Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
390
The following examples use an @array( float, 5, 7 ) m@, loaded with values that increment by $0.1$ when stepping across the length-7 finely-strided column dimension, and by $1.0$ when stepping across the length-5 coarsely-strided row dimension.
392\par\noindent
393\mbox{\lstinput{121-126}{hello-md.cfa}}
394\par\noindent
395The memory layout is 35 contiguous elements with strictly increasing addresses.
396
397A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
398As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
399This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
400The following is an example slicing a row.
401\lstinput{60-64}{hello-md.cfa}
402\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
403
404However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
405Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
406That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
407The new-array library provides the trait @ix@, so-defined.
408With it, the original declaration can be generalized with the same body.
409\lstinput{43-44}{hello-md.cfa}
410\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
411The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
412In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value'', waiting for such a value, implementing the @ix@ trait.
413\lstinput{150-151}{hello-md.cfa}
414
415The example shows @x[2]@ and @x[[2, all]]@ both refer to the same, ``2.*'' slice.
416Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
417This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
418That is:
419\begin{cquote}
420\begin{tabular}{@{}cccccl@{}}
421@x[[2,all]][3]@ & $\equiv$      & @x[2][all][3]@  & $\equiv$    & @x[2][3]@  & (here, @all@ is redundant)  \\
422@x[[all,3]][2]@ & $\equiv$      & @x[all][3][2]@  & $\equiv$    & @x[2][3]@  & (here, @all@ is effective)
423\end{tabular}
424\end{cquote}
425
426Narrating progress through each of the @-[-][-][-]@\footnote{
427The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
428expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
429Where @all@ is redundant:
430\begin{cquote}
431\begin{tabular}{@{}ll@{}}
432@x@  & 2-dimensional, want subscripts for coarse then fine \\
433@x[2]@  & 1-dimensional, want subscript for fine; lock coarse == 2 \\
434@x[2][all]@  & 1-dimensional, want subscript for fine \\
435@x[2][all][3]@  & 0-dimensional; lock fine == 3
436\end{tabular}
437\end{cquote}
438Where @all@ is effective:
439\begin{cquote}
440\begin{tabular}{@{}ll@{}}
441@x@  & 2-dimensional, want subscripts for coarse then fine \\
442@x[all]@  & 2-dimensional, want subscripts for fine then coarse \\
443@x[all][3]@  & 1-dimensional, want subscript for coarse; lock fine == 3 \\
444@x[all][3][2]@  & 0-dimensional; lock coarse == 2
445\end{tabular}
446\end{cquote}
447The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
448For example, in a two dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
449The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
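For a three-dimensional example, with @c@ of (hypothetical) type @array( float, N0, N1, N2 )@, successive @-[all]@ operations rotate the ``want subscripts'' list:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
@c@  & want subscripts for coarse, middle, then fine \\
@c[all]@  & want subscripts for middle, fine, then coarse \\
@c[all][all]@  & want subscripts for fine, coarse, then middle \\
@c[all][all][all]@  & want subscripts for coarse, middle, then fine (back to the original order)
\end{tabular}
\end{cquote}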
450
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
The @-[all]@ operation is a conversion from a reference to one instantiation of this parameterized type to a reference to another instantiation of it, with the stride information rearranged to reflect the new subscript order.
453The running example's @all@-effective step, stated more concretely, is:
454\begin{cquote}
455\begin{tabular}{@{}ll@{}}
456@x@       & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @floats@ apart \\
457@x[all]@  & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
458\end{tabular}
459\end{cquote}
460
461\begin{figure}
462\includegraphics{measuring-like-layout}
463\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
464The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
465The vertical shaded band highlights the location of the targeted element, 2.3.
466Any such vertical slice contains various interpretations of a single address.}
467\label{fig:subscr-all}
468\end{figure}
469
470Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
471In both cases, value 2 selects from the coarser dimension (rows of @x@),
472while the value 3 selects from the finer dimension (columns of @x@).
473The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, vs from @x[all]@.
474Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
475These two starting expressions, which are the example's only multidimensional subexpressions
476(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
477
The figure's presentation offers an intuition that answers the question: what is an atomic element of @x[all]@?
479From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these strange building blocks.
480An atom (like the bottommost value, @x[all][3][2]@), is the contained value (in the square box)
481and a lie about its size (the wedge above it, growing upward).
482An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them,
483done according to their size, as announced.  Call such an array a column.
484A column is almost ready to be arranged into a matrix; it is the \emph{contained value} of the next-level building block,
485but another lie about size is required.
486At first, an atom needed to be arranged as if it were bigger,
487but now a column needs to be arranged as if it is smaller (the wedge above it, shrinking upward).
488These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
489Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride,
490it achieves the requirement of representing the transpose of @x@.
491Yet every time the programmer presents an index, a mere C-array subscript is achieving the offset calculation.
492
493In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
494the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
495compared with where analogous rows appear when the row-level option is presented for @x@.
496
497These size lies create an appearance of overlap.
498For example, in @x[all]@, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6.
499But only the atom 2.3 is storing its value there.
500The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
501
502Lying is implemented as casting.
503The arrangement just described is implemented in the structure @arpk@.
504This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
505The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
506The subscript operator presents what's really inside, by casting to the type below the wedge of lie.
507
508%  Does x[all] have to lie too?  The picture currently glosses over how it it advertises a size of 7 floats.  I'm leaving that as an edge case benignly misrepresented in the picture.  Edge cases only have to be handled right in the code.
509
510Casting, overlapping and lying are unsafe.
The mission here is to implement a style-1 feature that the type system helps the programmer use safely.
The offered style-1 system is allowed to be internally unsafe,
just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within,
even when the programmer is using it without casts or pointer arithmetic.
Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
516
The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic,
reduces the attack surface of unsafe code.
My casting is unsafe, but I do not do any pointer arithmetic.
When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}),
521my casts are identities, and the C compiler is doing its usual displacement calculations.
522If I had implemented my system upon style-4 pointer arithmetic,
523then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
524
525\noindent END: Paste looking for a home
526
527The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
The private @arpk@ structure (array with explicit packing) is generic over these two types (and more): the contained element and the type it is masquerading as.
529This structure's public interface is the @array(...)@ construction macro and the two subscript operators.
530Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
531Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
533
534The @arpk@ structure and its @-[i]@ operator are thus defined as:
535\begin{cfa}
536forall( ztype(N),                       $\C{// length of current dimension}$
537        dtype(S) | sized(S),    $\C{// masquerading-as}$
538        dtype E_im,                             $\C{// immediate element, often another array}$
539        dtype E_base                    $\C{// base element, e.g. float, never array}$
540) { // distribute forall to each element
541        struct arpk {
542                S strides[N];           $\C{// so that sizeof(this) is N of S}$
543        };
544        // expose E_im, stride by S
545        E_im & ?[?]( arpk(N, S, E_im, E_base) & a, ptrdiff_t i ) {
546                return (E_im &) a.strides[i];
547        }
548}
549\end{cfa}
550
551An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
552In the base case, @array(E_base)@ is just @E_base@.
553Because this construction uses the same value for the generic parameters @S@ and @E_im@, the resulting layout has trivial strides.
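For example, the running example's type @array( float, 5, 7 )@ expands as:
\begin{cfa}
array( float, 5, 7 )
  == arpk( 5, array( float, 7 ), array( float, 7 ), float )
  == arpk( 5, arpk( 7, float, float, float ), arpk( 7, float, float, float ), float )
\end{cfa}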
554
555Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @E_im@ chain, which carries @S@ instantiations, intact, to new positions.
556Expressed as an operation on types, this rotation is:
557\begin{eqnarray*}
558suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
559enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
560enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
561\end{eqnarray*}
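For example, with $R$ abbreviating @arpk( 7, float, float, float )@, the rotation producing the running example's @x[all]@ is:
\begin{eqnarray*}
suball( arpk(5, R, R, float) ) & = & enq( 5, R, R, float ) \\
 & = & arpk( 7, float, enq( 5, R, float, float ), float ) \\
 & = & arpk( 7, float, arpk( 5, R, float, float ), float )
\end{eqnarray*}
that is, 7 elements spaced 1 @float@ apart, each containing 5 elements spaced 7 @float@s apart, matching the layout stated earlier.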
562
563
564\section{Bound checks, added and removed}
565
566\CFA array subscripting is protected with runtime bound checks.
567Having dependent typing causes the optimizer to remove more of these bound checks than it would without them.
568This section provides a demonstration of the effect.
569
570The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.
571The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
572The experiment compares with the C++ version to keep access to generated assembly code simple.
573
574As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.
575When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
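For illustration, a control-case loop of this shape might be used (a sketch, not the measured benchmark source):
\begin{cfa}
forall( [N] )
void clear( array( float, N ) & a ) {
	for ( i; N ) a[i] = 0.0f;   // subscript is obviously within [0, N), so the check can be elided
}
\end{cfa}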
577
578TODO: paste source and assembly codes
579
Incorporating reuse among dimension sizes is seen to give \CFA an optimization advantage.
581The case is naive matrix multiplication over a row-major encoding.
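A sketch of such a kernel, with illustrative dimension names (again, not the measured source):
\begin{cfa}
forall( [N], [M], [P] )
void mult( array( float, N, P ) & c, array( float, N, M ) & a, array( float, M, P ) & b ) {
	for ( i; N )
		for ( j; P ) {
			c[i][j] = 0.0f;
			for ( k; M ) c[i][j] += a[i][k] * b[k][j];
		}
}
\end{cfa}
Because the shared dimension @M@ appears in both operand types, the optimizer can see that the @k@ subscripts into @a@ and @b@ are bounded by the same value.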
582
583TODO: paste source codes
584
585
586
587
588
589\section{Comparison with other arrays}
590
591\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference.
593These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
594\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
595
596\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
597Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, which is generated automatically and does not appear at all in a user's program.
599
600\subsection{Safety in a padded room}
601
602Java's array [TODO:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.
603Consider the array parameter declarations in:
604
605\begin{tabular}{rl}
606C      &  @void f( size_t n, size_t m, float x[n][m] );@ \\
607Java   &  @void f( float[][] a );@
608\end{tabular}
609
Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access.
611If a value of @i@ outside this range is used, a runtime error is guaranteed.
612In these respects, C offers no guarantees at all.
613Notably, the suggestion that @n@ is the intended size of the first dimension of @x@ is documentation only.
Indeed, many might prefer the technically equivalent declarations @float x[][m]@ or @float (*x)[m]@ as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention.
615Moreover, even if @x[0][0]@ is valid for the purpose intended, C's basic infamous feature is the possibility of an @i@, such that @x[i][0]@ is not valid for the same purpose, and yet, its evaluation does not produce an error.
616
617Java's lack of expressiveness for more applied properties means these outcomes are possible:
618\begin{itemize}
\item @a[0][17]@ and @a[2][17]@ are valid accesses, yet @a[1][17]@ is a runtime error, because @a[1]@ is a null pointer
\item the same observation, now because @a[1]@ refers to an array of length 5
\item execution times vary, because the @float@ values within @a@ are sometimes stored nearly contiguously, and other times, not at all
622\end{itemize}
623C's array has none of these limitations, nor do any of the ``array language'' comparators discussed in this section.
624
625This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [TODO:cite example], for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array.
626The advice is that, while a vector is also more powerful (and quirky) than an array, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @x[i]@), to avoid using @push_back@, and to use a vector of vectors.
627Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.
Granting this scheme the same referential-integrity assumption that \CFA enjoys [TODO:xref], it matches Java's safety and expressiveness exactly.
629[TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.]
630
631\subsection{Levels of dependently typed arrays}
632
633The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
634\begin{itemize}
635\item a \emph{zip}-style operation that consumes two arrays of equal length
636\item a \emph{map}-style operation whose produced length matches the consumed length
637\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
638\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; rather, these requirements are strongly guaranteed by default, with varying degrees of static/dynamic checking and varying ability to opt out.
640Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
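For \CFA, the first two of these requirements can be stated directly in signatures, \eg the sketches:
\begin{cfa}
forall( [N] )
float dot( array( float, N ) & a, array( float, N ) & b );   // zip-style: lengths must match
forall( T, [N] )
array( T, N ) & copy_of( array( T, N ) & a );                // map-style: result length matches input
\end{cfa}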
641
642Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.
643In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
645In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
646[TODO: clarify how even Idris type checking terminates]
647
648Idris is a current, general-purpose dependently typed programming language.
649Length checking is a common benchmark for full dependent type systems.
650Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
651[TODO: finish explaining what Data.Vect is and then the essence of the comparison]
652
653POINTS:
654here is how our basic checks look (on a system that does not have to compromise);
655it can also do these other cool checks, but watch how I can mess with its conservativeness and termination
656
Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
658Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.
660So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
662
663
664
665The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
666There is a particular emphasis on an existential type, enabling callee-determined return shapes.
667
668
669Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.
670
Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
672Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
673
674\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.
675Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
676Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark.
677
678\subsection{Static safety in C extensions}
679
680
681\section{Future work}
682
683\subsection{Declaration syntax}
684
685\subsection{Range slicing}
686
687\subsection{With a module system}
688
689\subsection{With described enumerations}
690
691A project in \CFA's current portfolio will improve enumerations.
692In the incumbent state, \CFA has C's enumerations, unmodified.
I will not discuss the core of this project, which already has a tall mission: to improve type safety, maintain appropriate C compatibility, and offer more flexibility about storage use.
694It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
695\begin{cfa}
696forall( T | is_enum(T) )
697void show_in_context( T val ) {
698        for( T i ) {
699                string decorator = "";
700                if ( i == val-1 ) decorator = "< ready";
701                if ( i == val   ) decorator = "< go"   ;
702                sout | i | decorator;
703        }
704}
705enum weekday { mon, tue, wed = 500, thu, fri };
706show_in_context( wed );
707\end{cfa}
708with output
709\begin{cfa}
710mon
711tue < ready
712wed < go
713thu
714fri
715\end{cfa}
716The details in this presentation aren't meant to be taken too precisely as suggestions for how it should look in \CFA.
717But the example shows these abilities:
718\begin{itemize}
719\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
720\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
721\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
722\item a provision for looping (the @for@ form used) over the values of the type.
723\end{itemize}
724
725If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.
726
727[TODO: introduce Ada in the comparators]
728
729In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
730The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.
731
732This change of perspective also lets us remove ubiquitous dynamic bound checks.
733[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
734But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.
735To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
736
737[TODO, fix confusion:  Idris has this arrangement of checks, but still the natural numbers as the domain.]
738
739The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
741\begin{cfa}
742interface Ix n
743get_size n : Unit -> Int
744ordinal : n -> Int
745unsafe_from_ordinal n : Int -> n
746\end{cfa}
747
748Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
749This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
750Dex's example is a routine that calculates pointwise differences between two samples.
751Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.
753
754The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
755In the audio instantiation, the duration-of-clip type argument is used for the domain.
756In the photograph instantiation, it's the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
757This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
758\begin{cfa}
759instance {a b} [Ix a, Ix b] Ix (a & b)
760get_size = \(). size a * size b
761ordinal = \(i, j). (ordinal i * size b) + ordinal j
762unsafe_from_ordinal = \o.
763bs = size b
764(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
765\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
767\begin{cfa}
768img_trans :: (img_wd,img_ht)=>Real
769img_trans.(i,j) = img.i.j
770result = pairwise img_trans
771\end{cfa}
772[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]
773
774In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.
775
776\subsection{Retire pointer arithmetic}
777
778
779\section{\CFA}
780
781XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
782moved from background chapter \\
783XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
784
785Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
786(For later:  That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)
787
788\subsection{\CFA features interacting with arrays}
789
Prior work on \CFA included making C arrays, as used in C code from the wild,
work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.
793
794More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.
795
A notable success was with the \CFA @alloc@ function,
in which type information associated with a polymorphic return type
replaces @malloc@'s use of programmer-supplied size information.
799\begin{cfa}
800// C, library
801void * malloc( size_t );
802// C, user
803struct tm * el1 = malloc( sizeof(struct tm) );
804struct tm * ar1 = malloc( 10 * sizeof(struct tm) );
805
806// CFA, library
807forall( T * ) T * alloc();
808// CFA, user
809tm * el2 = alloc();
810tm (*ar2)[10] = alloc();
811\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
This argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
814Using a compiler-produced value eliminates an opportunity for user error.
815
816TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa
817
818Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example, and gives a first example of an array-interaction bug.
819In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@.
820They are not subscripted in the same way.
821\begin{cfa}
822ar1[5];
823(*ar2)[5];
824\end{cfa}
Using ``reference to array'' resolves this issue.  TODO: discuss connection with Doug-Lea \CC proposal.
826\begin{cfa}
827tm (&ar3)[10] = *alloc();
828ar3[5];
829\end{cfa}
830The implicit size communication to @alloc@ still works in the same ways as for @ar2@.
831
832Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
The C standard (TODO xref) does not claim that @ar1@ may be subscripted,
because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
835But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls,
836where the type requested is an array, making the result, much more obviously, an array object.
837
838The ``reference to array'' type has its sore spots too.
839TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)
840
841TODO: I fixed a bug associated with using an array as a T.  I think.  Did I really?  What was the bug?