1\chapter{Array}
2\label{c:Array}
3
4
5\section{Introduction}
6\label{s:ArrayIntro}
7
8Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
9This chapter describes the new \CFA language and library features that introduce a length-checked array type to the \CFA standard library~\cite{Cforall}.
10
11Specifically, a new \CFA array is declared by instantiating the generic @array@ type,
12much like instantiating any other standard-library generic type (such as @dlist@),
13though using a new style of generic parameter.
14\begin{cfa}
15@array( float, 99 )@ x;                                 $\C[2.75in]{// x contains 99 floats}$
16\end{cfa}
17Here, the arguments to the @array@ type are @float@ (element type) and @99@ (length).
18When this type is used as a function parameter, the type-system requires that a call's argument matches, down to the length.
19\begin{cfa}
20void f( @array( float, 42 )@ & p ) {}   $\C{// p accepts 42 floats}$
21f( x );                                                                 $\C{// statically rejected: types are different, 99 != 42}$
22
23test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
24Applying untyped:  Name: f ... to:  Name: x
25\end{cfa}
26Here, the function @f@'s parameter @p@ is declared with length 42.
27The call @f( x )@, with the argument being the previously-declared object, is invalid, because the @array@ lengths @99@ and @42@ do not match.
28
29A function declaration can be polymorphic over these @array@ arguments by using the @forall@ declaration prefix.
This function @g@ takes an arbitrary type parameter @T@ (familiar) and a \emph{dimension parameter} @N@ (new).
A dimension parameter represents a to-be-determined count of elements, managed by the type system.
32\begin{cfa}
33forall( T, @[N]@ )
34void g( array( T, @N@ ) & p, int i ) {
35        T elem = p[i];                                          $\C{// dynamically checked: requires 0 <= i < N}$
36}
37g( x, 0 );                                                              $\C{// T is float, N is 99, dynamic subscript check succeeds}$
38g( x, 1000 );                                                   $\C{// T is float, N is 99, dynamic subscript check fails}\CRT$
39
40Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
41\end{cfa}
42The call @g( x, 0 )@ is valid because @g@ accepts any length of array, where the type system infers @float@ for @T@ and length @99@ for @N@.
43Inferring values for @T@ and @N@ is implicit, without programmer involvement.
44Furthermore, in this case, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ of argument @x@.
The call @g( x, 1000 )@ is also accepted at compile time;
46however, this case's subscript, @x[1000]@, generates an error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
47
48The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
49Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller instantiates @N@ with 42 (how this is done is discussed shortly) in:
51\begin{cfa}
52forall( [N] )
53void declDemo() {
54        float x1[N];                                            $\C{// built-in type ("C array")}$
55        array(float, N) x2;                                     $\C{// type from library}$
56}
57\end{cfa}
58Both of the locally-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
59The two variables have identical size and layout; they both encapsulate 42-float stack allocations, with no additional ``bookkeeping'' allocations or headers.
60Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-feature, safe, efficient (space and time) array-type, which forms the foundation for more complex array forms in \CFA.
61
62Admittedly, the @array@ library type (type for @x2@) is syntactically different from its C counterpart.
63A future goal (TODO xref) is to provide the new features upon a built-in type whose syntax approaches C's (declaration style of @x1@).
64Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
65At present, the C-syntax array gets partial support for the new features, so the generic @array@ is used exclusively when introducing features;
66feature support and C compatibility are revisited in Section ? TODO.
67
68Offering the @array@ type, as a distinct alternative to the C array, is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
69However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
70Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity of the C array.
71
72In all discussion following, ``C array'' means the types like that of @x@ and ``\CFA array'' means the standard-library @array@ type (instantiations), like the type of @x2@.
73
74My contributions in this chapter are:
75\begin{enumerate}
76\item A type system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
77\item Provide a length-checked array-type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
78\item Provide argument/parameter passing safety for arrays and subscript safety.
79\item TODO: general parking...
\item Identify the interesting specific abilities made available by the new @array@ type.
\item Where gaps remain in this feature's readiness for production use, identify specific, workable improvements that are likely to close them.
82\end{enumerate}
83
84
85\section{Definitions and design considerations}
86
87
88\subsection{Dependent typing}
89
90
91
92General dependent typing allows the type system to encode arbitrary predicates (e.g. behavioural specifications for functions),
93which is an anti-goal for my work.
94Firstly, this application is strongly associated with pure functional languages,
95where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
96is a sufficient postcondition.
97In an imperative language like C and \CFA, it is also necessary to discuss side effects,
98for which an even heavier formalism, like separation logic, is required.
99Secondly, TODO: bash Rust.
100TODO: cite the crap out of these claims.
101
102
103
104\section{Features added}
105
106This section shows more about using the \CFA array and dimension parameters, demonstrating their syntax and semantics by way of motivating examples.
107As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.
108
109By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
110For example, a declaration can share one length, @N@, among a pair of parameters and the return,
meaning that it requires both input arrays to be of the same length, and guarantees that the result will be of that length as well.
112\lstinput{10-17}{hello-array.cfa}
113This function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array by the preexisting @alloc@ function uses the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note that @alloc@ only sees one whole type for its @T@ (which is @f@'s @array(bool, N)@); this type's size is a computation based on @N@.
116\begin{cfa}
117// simplification
118static inline forall( T & | sized(T) )
119T * alloc() {
120        return (T *)malloc( sizeof(T) );
121}
122\end{cfa}
123This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing necessary @sized@ assertions needed by other types.
124(@sized@ implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, making the size accessible and implicit provides significant programming safety, compared with C's @calloc@ and the non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.
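For contrast, the following sketch sets the type-managed allocation beside a hypothetical C-style equivalent inside @f@, where the length must be repeated manually:
\begin{cfa}
array( bool, N ) * ret = alloc();                       // length N carried by the type
bool * cret = calloc( N, sizeof(bool) );                // C style: length passed explicitly, unchecked
\end{cfa}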
126
127\begin{figure}
128\lstinput{30-43}{hello-array.cfa}
129\lstinput{45-48}{hello-array.cfa}
130\caption{\lstinline{f} Harness}
131\label{f:fHarness}
132\end{figure}
133
134\VRef[Figure]{f:fHarness} shows a harness that uses function @f@, illustrating how dynamic values are fed into the @array@ type.
135Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
136Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
137The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
138The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
139Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
140The example shows @dim@ adapting into a type-system managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-thru use of @dim@ at @f@'s declaration of @ret@.
141Except for the lifetime-management issue of @result@, \ie explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
142The result is a significant improvement in safety and usability.
143
144In general, the @forall( ..., [N] )@ participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and within a function.
145The syntactic form is chosen to parallel other @forall@ forms:
146\begin{cfa}
147forall( @[N]@ ) ...     $\C[1.5in]{// dimension}$
148forall( T & ) ...       $\C{// opaque datatype (formerly, "dtype")}$
149forall( T ) ...         $\C{// value datatype (formerly, "otype")}\CRT$
150\end{cfa}
151% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.
152In summary:
153\begin{itemize}
154\item
155@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
156\item
@N@ can be used as an expression of type @size_t@ within the declared function body.
158\item
159The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
160\item
161@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@-typed objects.
162\end{itemize}
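For example, a small sketch (with a hypothetical function name) shows @N@ serving both as a loop bound and as an ordinary value:
\begin{cfa}
forall( [N] )
size_t countNonzero( array( int, N ) & a ) {
	size_t cnt = 0;
	for ( i; N ) if ( a[i] != 0 ) cnt += 1;     // N as the loop bound
	return cnt;                                 // cnt is at most the value of N
}
\end{cfa}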
163
164\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
165\begin{enumerate}[leftmargin=*]
166\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
168% agreed, though already said
169\item
\CC does not allow a template function to be nested, while \CFA lets its polymorphic functions be nested.
171% why is this important?
172\item
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
174The \CFA @N@ is part of the array type and passed implicitly at the call.
175% fixed by comparing to std::array
176% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
177\item
178\CC cannot have an array of references, but can have an array of pointers.
179\CC has a (mistaken) belief that references are not objects, but pointers are objects.
180In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
181The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
182% not really about forall-N vs template-N
183% any better CFA support is how Rob left references, not what Mike did to arrays
184% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
185% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
186\item
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency); see the sketch following this list.
188% fixed by comparing to std::array
189% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
190\end{enumerate}
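A minimal sketch of the copying claim in the last item, assuming the assignment generated for the @array@ type:
\begin{cfa}
array( int, 10 ) a, b;
for ( i; 10 ) a[i] = i;
b = a;                                          // whole-array copy; C arrays reject such an assignment
\end{cfa}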
191TODO: settle Mike's concerns with this comparison (perhaps, remove)
192
193\begin{figure}
194\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
195\begin{c++}
196
197@template< typename T, size_t N >@
198void copy( T ret[@N@], T x[@N@] ) {
199        for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
200}
201int main() {
202
203        int ret[10], x[10];
204        for ( int i = 0; i < 10; i += 1 ) x[i] = i;
205        @copy<int, 10 >( ret, x );@
206        for ( int i = 0; i < 10; i += 1 )
207                cout << ret[i] << ' ';
208        cout << endl;
209}
210\end{c++}
211&
212\begin{cfa}
213int main() {
214        @forall( T, [N] )@              // nested function
215        void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
216                for ( i; N ) ret[i] = x[i];
217        }
218
219        const int n = promptForLength();
220        array( int, n ) ret, x;
221        for ( i; n ) x[i] = i;
222        @copy( ret,  x );@
223        for ( i; n )
224                sout | ret[i] | nonl;
225        sout | nl;
226}
227\end{cfa}
228\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type}
230\label{f:TemplateVsGenericType}
231\end{figure}
232
233Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so are length mismatches stopped when they involve dimension parameters.
235While \VRef[Figure]{f:fHarness} shows successfully calling a function @f@ expecting two arrays of the same length,
236\begin{cfa}
237array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
238\end{cfa}
239a static rejection occurs when attempting to call @f@ with arrays of potentially differing lengths.
240\lstinput[tabsize=1]{70-74}{hello-array.cfa}
241When the argument lengths themselves are statically unknown,
242the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
243\begin{tabular}{@{\hspace{0.5in}}l@{\hspace{1in}}l@{}}
244\lstinput{90-97}{hello-array.cfa}
245&
246\lstinput{110-117}{hello-array.cfa}
247\end{tabular}
248
249\noindent
250This static check's full rules are presented in \VRef[Section]{s:ArrayTypingC}.
251
252Orthogonally, the \CFA array type works within generic \emph{types}, \ie @forall@-on-@struct@.
253The same argument safety and the associated implicit communication of array length occurs.
Preexisting \CFA allowed aggregate types to be generalized with type parameters, enabling parameterization over element types.
255Now, \CFA also allows parameterizing them by length.
256Doing so gives a refinement of C's ``flexible array member'' pattern[TODO: cite ARM 6.7.2.1 pp18]\cite{arr:gnu-flex-mbr}.
257While a C flexible array member can only occur at the end of the enclosing structure,
258\CFA allows length-parameterized array members to be nested at arbitrary locations.
259This flexibility, in turn, allows for multiple array members.
260\lstinput{10-15}{hello-accordion.cfa}
261This structure's layout has the starting offset of @studentIds@ varying according to the generic parameter @C@, and the offset of @preferences@ varying according to both generic parameters.
262
263The school example has the data structure capturing many students' course-preference forms.
It has course- and student-level metadata (their respective display names) and a position-based preferences matrix.
265The input files in \VRef[Figure]{f:checkHarness} give example data.
266
267When a function operates on a @School@ structure, the type system handles its memory layout transparently.
268\lstinput{30-37}{hello-accordion.cfa}
In the running example, this @getPref@ function answers,
for the student at position @sIx@, the position of that student's @pref@\textsuperscript{th}-favoured class.
271
272\VRef[Figure]{f:checkHarness} shows the @School@ harness and results with different array sizes.
273This example program prints the courses in each student's preferred order, all using the looked-up display names.
274Note the declaration of the @school@ variable.
275It is on the stack and its initialization does not use any casting or size arithmetic.
276Both of these points are impossible with a C flexible array member.
277When heap allocation is preferred, the original pattern still applies.
278\begin{cfa}
279School( classes, students ) * sp = alloc();
280\end{cfa}
281This ability to avoid casting and size arithmetic improves safety and usability over C flexible array members.
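For comparison, a rough sketch of the C flexible-array-member idiom being replaced, showing its characteristic size arithmetic and cast (the structure is illustrative, not the actual @School@ layout):
\begin{cfa}
struct CSchool { int classes, students; int preferences[]; };  // C99 flexible array member
struct CSchool * sp = (struct CSchool *)
	malloc( sizeof(struct CSchool) + classes * students * sizeof(int) );
\end{cfa}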
282
283
284\begin{figure}
285% super hack to get this to line up
286\begin{tabular}{@{}ll@{\hspace{25pt}}l@{}}
287\begin{tabular}{@{}p{3.25in}@{}}
288\lstinput{50-55}{hello-accordion.cfa}
289\vspace*{-3pt}
290\lstinput{90-98}{hello-accordion.cfa}
291\end{tabular}
292&
293\raisebox{0.32\totalheight}{%
294}%
295&
296\end{tabular}
297
298TODO: Get Peter's layout help
299
300\$ cat school1
301
302\lstinput{}{school1}
303
304\$ ./a.out < school1
305
306\lstinput{}{school1.out}
307
308\$ cat school2
309
310\lstinput{}{school2}
311
312\$ ./a.out < school2
313
314\lstinput{}{school2.out}
315
316\caption{\lstinline{School} harness, input and output}
317\label{f:checkHarness}
318\end{figure}
319
320
321\section{Dimension Parameter Implementation}
322
323The core of the preexisting \CFA compiler already had the ``heavy equipment'' needed
324to provide the feature set just reviewed (up to bugs in cases not yet exercised).
325To apply this equipment in tracking array lengths, I encoded a dimension (array's length) as a type.
326The type in question does not describe any data that the program actually uses at runtime.
327It simply carries information through intermediate stages of \CFA-to-C lowering.
328And this type takes a form such that, once \emph{it} gets lowered, the result does the right thing.
329
330Furthermore, the @array@ type itself is really ``icing on the cake.''
331Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl}
332(where added complexity needed for multiple dimensions is considered).
333But simplifications close enough for the present discussion are:
334\begin{cfa}
335        forall( [N] )
336        struct array_1d_float {
337                float items[N];
338        };
339        forall( T, [N] )
340        struct array_1d {
341                T items[N];
342        };
343\end{cfa}
344This structure pattern, plus a subscript operator, is all that @array@ provides.
345
346My main work is letting a programmer define
such a structure (one whose type is parameterized by @[N]@)
348and functions that operate on it (these being similarly parameterized).
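For example, a simplified sketch of the accompanying bound-checked subscript operator (not the library's actual definition, which also handles multiple dimensions):
\begin{cfa}
forall( T, [N] )
static inline T & ?[?]( array_1d( T, N ) & a, size_t i ) {
	assert( i < N );                            // dynamic check against the managed length
	return a.items[i];
}
\end{cfa}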
349
350The repurposed heavy equipment is
351\begin{itemize}
352\item
353        The resolver, providing values for a used declaration's type-system variables,
354        gathered from type information in scope at the usage site.
355
356\item
357        The box pass, encoding information about type parameters
358        into ``extra'' regular parameters/arguments on declarations and calls.
359        Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter,
360        and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, i.e. a use of the parameter.
361\end{itemize}
362
363The rules for resolution had to be restricted slightly, in order to achieve important refusal cases.
364This work is detailed in \VRef[Section]{s:ArrayTypingC}.
365However, the resolution--boxing scheme, in its preexisting state, was already equipped to work on (desugared) dimension parameters.
366The discussion following explains the desugaring and how correct lowered code results.
367
368An even simpler structure, and a toy function on it, demonstrate what's needed for the encoding.
369\begin{cfa}
370        forall( [@N@] ) { // [1]
371                struct thing {};
372                void f( thing(@N@) ) { sout | @N@; } // [2], [3]
373        }
374        int main() {
375                thing( @10@ ) x;  f(x);  // prints 10, [4]
376                thing( 100 ) y;  f(y);  // prints 100
377                return 0;
378        }
379\end{cfa}
380This example has:
381\begin{enumerate}
382\item
383        The symbol @N@ being declared as a type variable (a variable of the type system).
384\item
385        The symbol @N@ being used to parameterize a type.
386\item
387        The symbol @N@ being used as an expression (value).
388\item
389        A value like 10 being used as an argument to the parameter @N@.
390\end{enumerate}
Because the chosen solution encodes the value @N@ \emph{as a type}, items 1 and 2 are immediately available for free.
392Item 3 needs a way to recover the encoded value from a (valid) type (and to reject invalid types occurring here).
393Item 4 needs a way to produce a type that encodes the given value.
394
395Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
396The production and recovery are then straightforward.
397\begin{itemize}
398\item
399        The value $n$ is encoded as a type whose size is $n$.
400\item
401        Given a dimension expression $e$, produce type @char[@$e$@]@ to represent it.
	If $e$ evaluates to $n$ then the encoded type has size $n$.
403\item
404        Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
405        If $T$ has size $n$ then the recovery expression evaluates to $n$.
406\end{itemize}
407
408This desugaring is applied in a translation step before the resolver.
409The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
410The running example is lowered to:
411\begin{cfa}
412        forall( @N*@ ) { // [1]
413                struct thing {};
414                void f( thing(@N@) ) { sout | @sizeof(N)@; } // [2], [3]
415        }
416        int main() {
417                thing( char[@10@] ) x;  f(x);  // prints 10, [4]
418                thing( char[100] ) y;  f(y);  // prints 100
419                return 0;
420        }
421\end{cfa}
422Observe:
423\begin{enumerate}
424\item
425        @N@ is now declared to be a type.
426        It is declared to be \emph{sized} (by the @*@), meaning that the box pass shall do its @sizeof(N)@--@__sizeof_N@ extra parameter and expression translation.
427\item
428        @thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
429\item
430        The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
431\item
432        The type of variable @x@ is another @thing(-)@ type;  the argument to the generic @thing@ is a type (array type).
433\end{enumerate}
434
435From this point, preexisting \CFA compilation takes over lowering it the rest of the way to C.
436Inspecting the result shows what the above translation achieves.
437A form that shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated, is:
438\begin{cfa}
439        // [1]
440        void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; } // [2], [3]
441        int main() {
442                struct __conc_thing_10 {} x;  f(@10@, &x);  // prints 10, [4]
443                struct __conc_thing_100 {} y;  f(@100@, &y);  // prints 100
444                return 0;
445        }
446\end{cfa}
447Observe:
448\begin{enumerate}
449\item
450        The type parameter @N@ is gone.
451\item
452        The type @thing(N)@ is (replaced by @void *@, but thereby effectively) gone.
453\item
454        The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is a regular variable (parameter) usage.
455\item
456        Information about the particular @thing@ instantiation (value 10) has moved, from the type, to a regular function-call argument.
457\end{enumerate}
458At the end of the desugaring and downstream processing, the original C idiom of ``pass both a pointer and a length parameter'' has been reconstructed.
In the programmer-written form, only the @thing@ object is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.
461
462Back at the very front end, the parsing changes, AST schema extensions, and validation rules for enabling the sugared user input are:
463\begin{itemize}
464\item
465        Recognize the form @[N]@ as a type-variable declaration within a @forall@.
466\item
	Add a new brand of type variable, \emph{Dimension}, to the AST form of a type variable, to represent one parsed from @[-]@.
468\item
469        Allow a type variable to occur in expression position.  Validate (after parsing) that only dimension-branded type variables are used here.
470\item
471        Allow an expression to occur in type-argument position.  Brand the resulting type argument as a dimension.
472\item
473        Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
474        \begin{cfa}
475                @array_1d( foo, bar ) x;@
476        \end{cfa}
477        that the brands on the generic arguments match the brands of the declared type variables.
478        Here, that @foo@ is a type and @bar@ is a dimension.
479\end{itemize}
480
481
482\section{Typing of C Arrays}
483\label{s:ArrayTypingC}
484
485Essential in giving a guarantee of accurate length is the compiler's ability
486to reject a program that presumes to mishandle length.
487By contrast, most discussion so far dealt with communicating length,
488from one party who knows it, to another who is willing to work with any given length.
489For scenarios where the concern is a mishandled length,
490the interaction is between two parties who both claim to know (something about) it.
Such a scenario occurs in this pure C fragment, which today's C compilers accept:
492\begin{cfa}
493        int n = @42@;
494        float x[n];
495        float (*xp)[@999@] = &x;
496        (*xp)[@500@];  // in "bound"?
497\end{cfa}
498
499Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bounds for its referent @x@,
the access appears in bounds according to the type information available on @xp@.
502Truly, length is being mishandled in the previous step,
503where the type-carried length information on @x@ is not compatible with that of @xp@.
504
505The \CFA new-array rejects the analogous case:
506\begin{cfa}
507        int n = @42@;
508        array(float, n) x;
509        array(float, 999) * xp = x; // static rejection here
510        (*xp)[@500@]; // runtime check vs len 999
511\end{cfa}
512
513% TODO: kill the vertical whitespace around these lists
514% nothing from https://stackoverflow.com/questions/1061112/eliminate-space-before-beginitemize is working
515
516The way the \CFA array is implemented,
517the type analysis of this \CFA case reduces to a case similar to the earlier C version.
518The \CFA compiler's compatibility analysis proceeds as:
519\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
520\item
521        Is @array(float, 999)@ type-compatible with @array(float, n)@?
522\item
523        Is @arrayX(float, char[999])@ type-compatible with @arrayX(float, char[n])@?
524        \footnote{Here, \lstinline{arrayX} represents the type that results
525                from desugaring the \lstinline{array} type
526                into a type whose generic parameters are all types.
527                This presentation elides the noisy fact that
528                \lstinline{array} is actually a macro for something bigger;
529                the reduction to \lstinline{char[-]} still proceeds as sketched.}
530\item
531        Is @char[999]@ type-compatible with @char[n]@?
532\end{itemize}
533
534I chose to achieve the necessary rejection of the \CFA case
535by adding a rejection of the corresponding C case.
536
537This decision is not backward compatible.
538There are two complementary mitigations for this incompatibility.
539
540First, a simple recourse is available to a programmer who intends to proceed
541with the statically unsound assignment.
542This situation might arise if @n@ were known to be 999,
543rather than 42, as in the introductory examples.
544The programmer can add a cast, as in:
545\begin{cfa}
546        xp = ( float (*)[999] ) & x;
547\end{cfa}
This addition causes \CFA to accept, because now the programmer has accepted blame.
549This addition is benign in plain C, because the cast is valid, just unnecessary there.
550Moreover, the addition can even be seen as appropriate ``eye candy,''
551marking where the unchecked length knowledge is used.
552Therefore, a program being onboarded to \CFA can receive a simple upgrade,
553to satisfy the \CFA rules (and arguably become clearer),
554without giving up its validity to a plain C compiler.
555
556Second, the incompatibility only affects types like pointer-to-array,
which are infrequently used in C.
558The more common C idiom for aliasing an array is to use the pointer-to-first-element type,
559which does not participate in the \CFA array's length checking.
\footnote{Notably, the desugaring of the \lstinline{array} type
561        avoids letting any \lstinline{-[-]} type decay,
562        in order to preserve the length information that powers runtime bound checking.}
563Therefore, the frequency of needing to upgrade wild C code (as discussed in the first mitigation)
564is anticipated to be low.
565
566Because the incompatibility represents a low cost to a \CFA onboarding effort
567(with a plausible side benefit of linting the original code for a missing annotation),
568I elected not to add special measures to retain the compatibility.
569It would be possible to flag occurrences of @-[-]@ types that come from @array@ desugaring,
570treating those with stricter \CFA rules, while treating others with classic C rules.
571If future lessons from C project onboarding warrant it,
572this special compatibility measure can be added.
573
574Having allowed that both the initial C example's check
575\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
576        \item
577                Is @float[999]@ type-compatible with @float[n]@?
578\end{itemize}
and the second \CFA example's induced check
580\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
581        \item
582                Is @char[999]@ type-compatible with @char[n]@?
583\end{itemize}
shall have the same answer (``no''),
585discussion turns to how I got the \CFA compiler to produce this answer.
586In its preexisting form, it produced a (buggy) approximation of the C rules.
587To implement the new \CFA rules, I took the syntactic recursion a step further, obtaining,
588in both cases:
589\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
590        \item
591                Is @999@ TBD-compatible with @n@?
592\end{itemize}
This compatibility question applies to a pair of expressions, whereas the earlier ones applied to types.
594Such an expression-compatibility question is a new addition to the \CFA compiler.
595These questions only arise in the context of dimension expressions on (C) array types.
596
597TODO: ensure these compiler implementation matters are treated under \CFA compiler background:
598type unification,
599cost calculation,
600GenPoly.
601
602The relevant technical component of the \CFA compiler is,
603within the type resolver, the type unification procedure.
604I added rules for continuing this unification into expressions that occur within types.
605It is still fundamentally doing \emph{type} unification
606because it is participating in binding type variables,
607and not participating in binding any variables that stand in for expression fragments
608(for there is no such sort of variable in \CFA's analysis.)
609
610An unfortunate fact about the \CFA compiler's preexisting implementation is that
611type unification suffers from two forms of duplication.
612
613The first duplication has (many of) the unification rules stated twice.
614As a result, my additions for dimension expressions are stated twice.
615The extra statement of the rules occurs in the GenPoly module,
616where concrete types like @array(int, 5)@\footnote{
617        Again, the presentation is simplified
618        by leaving the \lstinline{array} macro unexpanded}
619are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
In this case, the struct's definition gives fields that hardcode the argument values of @int@ and @5@.
621The next time an @array(-,-)@ concrete instance is encountered,
622is the previous @struct __conc_array_1234@ suitable for it?
Yes, for another occurrence of @array(int, 5)@;
624no, for either @array(rational(int), 5)@ or @array(int, 42)@.
625By the last example, this phase must ``reject''
626the hypothesis that it should reuse the dimension-5 instance's C-lowering for a dimension-42 instance.
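Roughly, the lowered instance for @array(int, 5)@ takes a form like the following (a sketch; the generated name and field details vary):
\begin{cfa}
struct __conc_array_1234 {
	int items[5];                               // the arguments int and 5, baked into this instance
};
\end{cfa}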
627
628The second duplication has unification (proper) being invoked at two stages of expression resolution.
629As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
630In the program
631\begin{cfa}
632        void f( double );
633        forall( T & ) void f( T & );
634        void g( int n ) {
635                array( float, n + 1 ) x;
636                f(x);
637        }
638\end{cfa}
639when resolving the function call, the first unification stage
640compares the types @T@, of the parameter, with @array( float, n + 1 )@, of the argument.
641TODO: finish.
642
643The actual rules for comparing two dimension expressions are conservative.
644To answer, ``yes, consider this pair of expressions to be matching,''
645is to imply, ``all else being equal, allow an array with length calculated by $e_1$
646to be passed to a function expecting a length-$e_2$ array.''\footnote{
647        TODO: Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
648        Should it be an earlier scoping principle?  Feels like it should matter in more places than here.}
So, a ``yes'' answer must represent a guarantee that both expressions will evaluate to the
same result, while a ``no'' can tolerate ``they might, but we're not sure,''
651provided that practical recourses are available
652to let programmers express their better knowledge.
653The specific rule-set that I offer with the current release is, in fact, extremely conservative.
654I chose to keep things simple,
and let real future needs drive the addition of complexity,
656within the framework that I laid out.
657
658For starters, the original motivating example's rejection
659is not based on knowledge that
660the @xp@ length of (the literal) 999 is value-unequal to
661the (obvious) runtime value of the variable @n@, which is the @x@ length.
662Rather, the analysis assumes a variable's value can be anything,
663and so there can be no guarantee that its value is 999.
664So, a variable use and a literal can never match.
665
666Two occurrences of the same literal value are obviously a fine match.
For two occurrences of the same variable, more information is needed.
668For example, this one is fine
669\begin{cfa}
670        void f( const int n ) {
671                float x[n];
672                float (*xp)[n] = x; // accept
673        }
674\end{cfa}
675while this one is not:
676\begin{cfa}
677        void f() {
678                int n = 42;
679                float x[n];
680                n = 999;
681                float (*xp)[n] = x; // reject
682        }
683\end{cfa}
684Furthermore, the fact that the first example sees @n@ as @const@
is not actually a sufficient basis.
686In this example, @f@'s length expression's declaration is as @const@ as it can be,
687yet its value still changes between the two invocations:
688\begin{cfa}
689        // compile unit 1
690        void g();
691        void f( const int & const nr ) {
692                float x[nr];
693                g();
694                float (*xp)[nr] = x; // reject
695        }
696        // compile unit 2
697        static int n = 42;
698        void g() {
699                n = 99;
700        }
701        void f( const int & );
702        int main () {
703                f(n);
704                return 0;
705        }
706\end{cfa}
The issue in this last case is that
the inability of one party to change a value does not prevent another party from changing it.
709
710My rule set also respects a feature of the C tradition.
In spite of the C rules' leniency in accepting cases that produce different values,
there are a few mismatches that C does stop.
713C is quite precise when working with two static values:
714\begin{cfa}
715        enum { fortytwo = 42 };
716        float x[fortytwo];
717        float (*xp1)[42] = &x; // accept
718        float (*xp2)[999] = &x; // reject
719\end{cfa}
720My \CFA rules agree with C's on these cases.
721
722My rules classify expressions into three groups:
723\begin{description}
724\item[Statically Evaluable]
725        Expressions for which a specific value can be calculated (conservatively)
726        at compile-time.
727        A preexisting \CFA compiler module defines which expressions qualify,
728        and evaluates them.
729        Includes literals and enumeration values.
730\item[Dynamic but Stable]
731        The value of a variable declared as @const@.
732        Includes a @const@ parameter.
733\item[Potentially Unstable]
734        The catch-all category.  Notable examples include:
735        any function-call result (@float x[foo()];@),
736        the particular function-call result that is a pointer dereference (@void f(const int * n) { float x[*n]; }@), and
737        any use of a reference-typed variable.
738\end{description}
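For illustration, a sketch with one dimension expression from each group (the declarations are hypothetical):
\begin{cfa}
enum { Rows = 42 };
void demo( const int n, const int * p ) {
	float a[Rows];                              // Statically Evaluable: enumeration value
	float b[n];                                 // Dynamic but Stable: const parameter
	float c[*p];                                // Potentially Unstable: pointer dereference
}
\end{cfa}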
739
740My \CFA rules are:
741\begin{itemize}
742\item
743        Accept a Statically Evaluable pair, if both expressions have the same value.
744        Notably, this rule allows a literal to match with an enumeration value, based on the value.
745\item
	Accept a Dynamic but Stable pair, if both expressions are written out the same, \eg both refer to the same variable declaration.
747\item
748        Otherwise, reject.
749        Notably, reject all pairs from the Potentially Unstable group.
750        Notably, reject all pairs that cross groups.
751\end{itemize}
752
753The traditional C rules are:
754\begin{itemize}
755\item
756        Reject a Statically Evaluable pair, if the expressions have two different values.
757\item
758        Otherwise, accept.
759\end{itemize}
760
761
762\newcommand{\falsealarm}{{\color{orange}\small{*}}}
763\newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
764\newcommand{\cmark}{\ding{51}} % from pifont
765\newcommand{\xmark}{\ding{55}}
766\begin{figure}
767        \begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
768         & \multicolumn{2}{c}{\underline{Values Equal}}
769         & \multicolumn{2}{c}{\underline{Values Unequal}} 
770         & \\
771        \textbf{Case}                                & C      & \CFA                & C                      & \CFA    & Compat. \\
772        Both Statically Evaluable, Same Symbol       & Accept & Accept              &                        &         & \cmark \\
773        Both Statically Evaluable, Different Symbols & Accept & Accept              & Reject                 & Reject  & \cmark \\
774        Both Dynamic but Stable, Same Symbol         & Accept & Accept              &                        &         & \cmark \\
775        Both Dynamic but Stable, Different Symbols   & Accept & Reject\,\falsealarm & Accept\,\allowmisuse   & Reject  & \xmark \\
776        Both Potentially Unstable, Same Symbol       & Accept & Reject\,\falsealarm & Accept\,\allowmisuse   & Reject  & \xmark \\
777        Any other grouping, Different Symbol         & Accept & Reject\,\falsealarm & Accept\,\allowmisuse   & Reject  & \xmark
778        \end{tabular}
779
780        \vspace{12pt}
781        \noindent\textbf{Legend:}
782        \begin{itemize}
783        \item
784                Each row gives the treatment of a test harness of the form
785                \begin{cfa}
786                        float x[ expr1 ];
787                        float (*xp)[ expr2 ] = &x;
788                \end{cfa}
789                where \lstinline{expr1} and \lstinline{expr2} are metavariables varying according to the row's Case.
790                Each row's claim applies to other harnesses too, including,
791                \begin{itemize}
792                \item
			calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
794                \item
795                        assignment in place of initialization,
796                \item
797                        using references in place of pointers, and
798                \item
799                        for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
800                \end{itemize}
801        \item
802                Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
		even though most test harnesses are asymmetric.
804        \item
805                The table treats symbolic identity (Same/Different on rows)
		apart from value equality (Equal/Unequal on columns).
807                \begin{itemize}
808                \item
809                        The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
810                        (where \lstinline{n} is a variable with value 1),
811                        are all different symbols with the value 1.
812                \item
813                        The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
814                \item
815                        The row distinction expresses the simple static factors used by today's analyses.
816                \end{itemize}
817        \item
818                Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
819                while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
820        \end{itemize}
821        \caption{Case comparison for array type compatibility, given pairs of dimension expressions.
822                TODO: get Peter's LaTeX help on overall appearance, probably including column spacing/centering and bullet indentation.}
823        \label{f:DimexprRuleCompare}
824\end{figure}
825
826
827Figure~\ref{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that the \CFA false alarms occur in the same cases that C treats unsafely.
829It also shows that C-incompatibilities only occur in cases that C treats unsafely.
830
831
The conservatism of the new rule set can leave a programmer needing a recourse
when using a dimension expression whose stability argument
is more subtle than the current analysis can capture.
835This recourse is to declare an explicit constant for the dimension value.
836Consider these two dimension expressions,
837whose reuses are rejected by the blunt current-state rules:
838\begin{cfa}
839        void f( int & nr, const int nv ) {
840                float x[nr];
841                float (*xp)[nr] = & x; // reject: nr varying (no references)
842                float y[nv + 1];
		float (*yp)[nv + 1] = & y; // reject: ?+? unpredictable (no functions)
844        }
845\end{cfa}
846Yet, both dimension expressions are reused safely.
(The @nr@ reference is never written to and is not volatile,
and control does not leave the function between the uses.
849The name @?+?@ resolves to a function that is quite predictable.)
850The programmer here can add the constant declarations:
851\begin{cfa}
852        void f( int & nr, const int nv ) {
853                @const int nx@ = nr;
854                float x[nx];
		float (*xp)[nx] = & x; // accept
856                @const int ny@ = nv + 1;
857                float y[ny];
858                float (*yp)[ny] = & y; // accept
859        }
860\end{cfa}
861The result is the originally intended semantics,
862achieved by adding a superfluous ``snapshot it as of now'' directive.
863
864The snapshotting trick is also used by the translation, though to achieve a different outcome.
Rather obviously, every array must be subscriptable, even a bizarre one:
866\begin{cfa}
867        array( float, rand(10) ) x;
868        x[0];  // 10% chance of bound-check failure
869\end{cfa}
870Less obvious is that the mechanism of subscripting is a function call,
871which must communicate length accurately.
872The bound-check above (callee logic) must use the actual allocated length of @x@,
873without mistakenly reevaluating the dimension expression, @rand(10)@.
874Adjusting the example to make the function's use of length more explicit:
875\begin{cfa}
876        forall ( T * )
877        void f( T * x ) { sout | sizeof(*x); }
878        float x[ rand(10) ];
879        f( x );
880\end{cfa}
881Considering that the partly translated function declaration is, loosely,
882\begin{cfa}
883        void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
884\end{cfa}
885the translated call must not go like:
886\begin{cfa}
887        float x[ rand(10) ];
888        f( rand(10), &x );
889\end{cfa}
890Rather, its actual translation is like:
891\begin{cfa}
892        size_t __dim_x = rand(10);
893        float x[ __dim_x ];
894        f( __dim_x, &x );
895\end{cfa}
This dimension hoisting during translation was already present in the preexisting \CFA compiler.
But its handling of cases was buggy, particularly in determining, ``Can hoisting be skipped here?''
Skipping the hoisting is clearly desirable in some cases,
899not the least of which is when the programmer has already done so manually.
900My work includes getting these cases right, harmonized with the accept/reject criteria, and tested.
901
902
903
904TODO: Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation
905
906
907\section{Multidimensional Arrays}
908\label{toc:mdimpl}
909
910% TODO: introduce multidimensional array feature and approaches
911
912When working with arrays, \eg linear algebra, array dimensions are referred to as ``rows'' and ``columns'' for a matrix, adding ``planes'' for a cube.
913(There is little terminology for higher dimensional arrays.)
914For example, an acrostic poem\footnote{A type of poetry where the first, last or other letters in a line spell out a particular word or phrase in a vertical column.}
915can be treated as a grid of characters, where the rows are the text and the columns are the embedded keyword(s).
916Within a poem, there is the concept of a \newterm{slice}, \eg a row is a slice for the poem text, a column is a slice for a keyword.
917In general, the dimensioning and subscripting for multidimensional arrays has two syntactic forms: @m[r,c]@ or @m[r][c]@.
918
919Commonly, an array, matrix, or cube, is visualized (especially in mathematics) as a contiguous row, rectangle, or block.
This conceptualization is reinforced by subscript ordering, \eg $m_{r,c}$ for a matrix and $c_{p,r,c}$ for a cube.
921Few programming languages differ from the mathematical subscript ordering.
922However, computer memory is flat, and hence, array forms are structured in memory as appropriate for the runtime system.
923The closest representation to the conceptual visualization is for an array object to be contiguous, and the language structures this memory using pointer arithmetic to access the values using various subscripts.
924This approach still has degrees of layout freedom, such as row or column major order, \ie juxtaposed rows or columns in memory, even when the subscript order remains fixed.
925For example, programming languages like MATLAB, Fortran, Julia and R store matrices in column-major order since they are commonly used for processing column-vectors in tabular data sets but retain row-major subscripting.
926In general, storage layout is hidden by subscripting, and only appears when passing arrays among different programming languages or accessing specific hardware.
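As a concrete sketch, element @m[r][c]@ of an @R@-row, @C@-column matrix flattens to one of these offsets, depending on the layout (the helper names are illustrative):
\begin{cfa}
size_t row_major( size_t r, size_t c, size_t C ) { return r * C + c; } // juxtaposed rows
size_t col_major( size_t r, size_t c, size_t R ) { return c * R + r; } // juxtaposed columns
\end{cfa}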
927
928\VRef[Figure]{f:FixedVariable} shows two C90 approaches for manipulating a contiguous matrix.
929Note, C90 does not support VLAs.
930The fixed-dimension approach (left) uses the type system;
931however, it requires all dimensions except the first to be specified at compile time, \eg @m[][6]@, allowing all subscripting stride calculations to be generated with constants.
932Hence, every matrix passed to @fp1@ must have exactly 6 columns but the row size can vary.
933The variable-dimension approach (right) ignores (violates) the type system, \ie argument and parameters types do not match, and subscripting is performed manually using pointer arithmetic in the macro @sub@.
934
935\begin{figure}
936\begin{tabular}{@{}l@{\hspace{40pt}}l@{}}
937\multicolumn{1}{c}{\textbf{Fixed Dimension}} & \multicolumn{1}{c}{\textbf{Variable Dimension}} \\
938\begin{cfa}
939
940void fp1( int rows, int m[][@6@] ) {
941        ...  printf( "%d ", @m[r][c]@ );  ...
942}
943int fm1[4][@6@], fm2[6][@6@]; // no VLA
944// initialize matrixes
945fp1( 4, fm1 ); // implicit 6 columns
946fp1( 6, fm2 );
947\end{cfa}
948&
949\begin{cfa}
#define sub( m, r, c ) *(m + r * cols + c)
951void fp2( int rows, int cols, int *m ) {
952        ...  printf( "%d ", @sub( m, r, c )@ );  ...
953}
954int vm1[@4@][@4@], vm2[@6@][@8@]; // no VLA
955// initialize matrixes
956fp2( 4, 4, vm1 );
957fp2( 6, 8, vm2 );
958\end{cfa}
959\end{tabular}
960\caption{C90 Fixed \vs Variable Contiguous Matrix Styles}
961\label{f:FixedVariable}
962\end{figure}
963
964Many languages allow multidimensional arrays-of-arrays, \eg in Pascal or \CC.
965\begin{cquote}
966\begin{tabular}{@{}ll@{}}
967\begin{pascal}
968var m : array[0..4, 0..4] of Integer;  (* matrix *)
969type AT = array[0..4] of Integer;  (* array type *)
970type MT = array[0..4] of AT;  (* array of array type *)
971var aa : MT;  (* array of array variable *)
972m@[1][2]@ := 1;   aa@[1][2]@ := 1 (* same subscripting *)
973\end{pascal}
974&
975\begin{c++}
976int m[5][5];
977
978typedef vector< vector<int> > MT;
MT aa( 5, vector<int>( 5 ) );
980m@[1][2]@ = 1;  aa@[1][2]@ = 1;
981\end{c++}
982\end{tabular}
983\end{cquote}
984The language decides if the matrix and array-of-array are laid out the same or differently.
985For example, an array-of-array may be an array of row pointers to arrays of columns, so the rows may not be contiguous in memory nor even the same length (triangular matrix).
Regardless, there is usually a uniform subscripting syntax masking the memory layout, even though a language could differentiate between the two forms using subscript syntax, \eg @m[1,2]@ \vs @aa[1][2]@.
987Nevertheless, controlling memory layout can make a difference in what operations are allowed and in performance (caching/NUMA effects).
988
989C also provides non-contiguous arrays-of-arrays.
990\begin{cfa}
991int m[5][5];                                                    $\C{// contiguous}$
992int * aa[5];                                                    $\C{// non-contiguous}$
993\end{cfa}
both with different memory layouts under the same subscripting syntax, and each with its own issues.
995The focus of this work is on the contiguous multidimensional arrays in C.
996The reason is that programmers are often forced to use the more complex array-of-array form when a contiguous array would be simpler, faster, and safer.
997Nevertheless, the C array-of-array form is still important for special circumstances.
998
999\VRef[Figure]{f:ContiguousNon-contiguous} shows the extensions made in C99 for manipulating contiguous \vs non-contiguous arrays.\footnote{C90 also supported non-contiguous arrays.}
1000First, VLAs are supported.
Second, for contiguous arrays, C99 allows one or more earlier parameters to serve as downstream dimension(s), \eg @cols@, implicitly using this parameter to compute the row stride of @m@.
1002If the declaration of @fc@ is changed to:
1003\begin{cfa}
1004void fc( int rows, int cols, int m[@rows@][@cols@] ) ...
1005\end{cfa}
1006it is possible for C to perform bound checking across all subscripting, but it does not.
1007While this contiguous-array capability is a step forward, it is still the programmer's responsibility to manually manage the number of dimensions and their sizes, both at the function definition and call sites.
1008That is, the array does not automatically carry its structure and sizes for use in computing subscripts.
While the non-contiguous style in @faa@ looks very similar to @fc@, the compiler only understands the unknown-sized array of row pointers, and it relies on the programmer to traverse the columns in a row with a correctly bounded loop index.
1010Specifically, there is no requirement that the rows are the same length, like a poem with different length lines.
1011
1012\begin{figure}
1013\begin{tabular}{@{}ll@{}}
1014\multicolumn{1}{c}{\textbf{Contiguous}} & \multicolumn{1}{c}{\textbf{ Non-contiguous}} \\
1015\begin{cfa}
1016void fc( int rows, @int cols@, int m[ /* rows */ ][@cols@] ) {
1017        ...  printf( "%d ", @m[r][c]@ );  ...
1018}
1019int m@[5][5]@;
1020for ( int r = 0; r < 5; r += 1 ) {
1021
1022        for ( int c = 0; c < 5; c += 1 )
1023                m[r][c] = r + c;
1024}
1025fc( 5, 5, m );
1026\end{cfa}
1027&
1028\begin{cfa}
1029void faa( int rows, int cols, int * m[ @/* cols */@ ] ) {
1030        ...  printf( "%d ", @m[r][c]@ );  ...
1031}
1032int @* aa[5]@;  // row pointers
1033for ( int r = 0; r < 5; r += 1 ) {
1034        @aa[r] = malloc( 5 * sizeof(int) );@ // create rows
1035        for ( int c = 0; c < 5; c += 1 )
1036                aa[r][c] = r + c;
1037}
1038faa( 5, 5, aa );
1039\end{cfa}
1040\end{tabular}
1041\caption{C99 Contiguous \vs Non-contiguous Matrix Styles}
1042\label{f:ContiguousNon-contiguous}
1043\end{figure}
1044
1045
1046\subsection{Multidimensional array implementation}
1047\label{s:ArrayMdImpl}
1048
1049A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
1050\begin{enumerate}
1051\item
1052Flexible-stride memory:
1053this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane, row, or column, \eg @c[3][*][*]@, @c[3][4][*]@, @c[3][*][5]@.
1054\item
1055Fixed-stride memory:
1056this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
1057After which, subscripting and memory layout are independent.
1058\item
1059Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
A programmer must manually build any notion of dimensions using other tools;
hence, this style does not offer multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}; a brief sketch follows this list.
1063\end{enumerate}
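For concreteness, the following sketch (illustrative only; not drawn from the running examples) shows style 3 at the C level, where the programmer builds the two-dimensional structure manually over a flat allocation:
\begin{cfa}
enum { rows = 5, cols = 7 };
float * x = malloc( rows * cols * sizeof(float) );  $\C{// one flat allocation, no dimension information}$
int r = 2, c = 3;
x[ r * cols + c ] = 2.3;  $\C{// manual row-stride calculation}$
free( x );
\end{cfa}
Nothing in the type of @x@ records that @cols@ is the row stride;
the correctness of the displacement @r * cols + c@ rests entirely with the programmer.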
1064
There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than the numeric ordering $1 < 2 < 3$ used here.
1066That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
1067I chose to build the \CFA style-1 array upon a style-2 abstraction.
1068(Justification of the decision follows, after the description of the design.)
1069
1070Style 3 is the inevitable target of any array implementation.
1071The hardware offers this model to the C compiler, with bytes as the unit of displacement.
1072C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
1073Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
1074
Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
1078The required memory shape of such a slice is fixed, before any discussion of implementation.
1079The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
1080TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
1081
1082The new \CFA standard library @array@ datatype supports richer multidimensional features than C.
1083The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
1084Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
1085
The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values that increase by $0.1$ when stepping across the length-7 finely-strided column dimension, and by $1$ when stepping across the length-5 coarsely-strided row dimension.
1087\par
1088\mbox{\lstinput{121-126}{hello-md.cfa}}
1089\par\noindent
1090The memory layout is 35 contiguous elements with strictly increasing addresses.
1091
1092A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
1093As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
1094This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
1095The following is an example slicing a row.
1096\lstinput{60-64}{hello-md.cfa}
1097\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
1098
1099However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
1100Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
1101That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
1102The new-array library provides the trait @ar@, so-defined.
1103With it, the original declaration can be generalized with the same body.
1104\lstinput{43-44}{hello-md.cfa}
1105\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
1106The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
In a multidimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value''; the resulting slice awaits such a value and implements the @ar@ trait.
1108\lstinput{150-151}{hello-md.cfa}
1109
The example shows that @x[2]@ and @x[[2, all]]@ both refer to the same ``2.*'' slice.
1111Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
1112This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
1113That is:
1114\begin{cquote}
1115\begin{tabular}{@{}cccccl@{}}
1116@x[[2,all]][3]@ & $\equiv$      & @x[2][all][3]@  & $\equiv$    & @x[2][3]@  & (here, @all@ is redundant)  \\
1117@x[[all,3]][2]@ & $\equiv$      & @x[all][3][2]@  & $\equiv$    & @x[2][3]@  & (here, @all@ is effective)
1118\end{tabular}
1119\end{cquote}
1120
1121Narrating progress through each of the @-[-][-][-]@\footnote{
1122The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
1123expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
1124Where @all@ is redundant:
1125\begin{cquote}
1126\begin{tabular}{@{}ll@{}}
1127@x@  & 2-dimensional, want subscripts for coarse then fine \\
1128@x[2]@  & 1-dimensional, want subscript for fine; lock coarse == 2 \\
1129@x[2][all]@  & 1-dimensional, want subscript for fine \\
1130@x[2][all][3]@  & 0-dimensional; lock fine == 3
1131\end{tabular}
1132\end{cquote}
1133Where @all@ is effective:
1134\begin{cquote}
1135\begin{tabular}{@{}ll@{}}
1136@x@  & 2-dimensional, want subscripts for coarse then fine \\
1137@x[all]@  & 2-dimensional, want subscripts for fine then coarse \\
1138@x[all][3]@  & 1-dimensional, want subscript for coarse; lock fine == 3 \\
1139@x[all][3][2]@  & 0-dimensional; lock coarse == 2
1140\end{tabular}
1141\end{cquote}
1142The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
For example, in a two-dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
1144The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
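A short sketch (separate from the \lstinline{hello-md.cfa} listings) demonstrates this transposed reading on the running matrix type:
\begin{cfa}
array( float, 5, 7 ) x;
x[2][3] = 2.3;  $\C{// subscripts given coarse then fine}$
sout | x[2][3];  $\C{// prints 2.3}$
sout | x[all][3][2];  $\C{// prints 2.3, subscripts given fine then coarse}$
\end{cfa}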
1145
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
The @-[all]@ operation is a conversion from a reference to one instantiation of this parameterized type to a reference to another instantiation of it, differing only in how the stride metadata orders the dimensions.
1148The running example's @all@-effective step, stated more concretely, is:
1149\begin{cquote}
1150\begin{tabular}{@{}ll@{}}
@x@       & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @float@s apart \\
1152@x[all]@  & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
1153\end{tabular}
1154\end{cquote}
1155
1156\begin{figure}
1157\includegraphics{measuring-like-layout}
1158\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
1159The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
1160The vertical shaded band highlights the location of the targeted element, 2.3.
1161Any such vertical slice contains various interpretations of a single address.}
1162\label{fig:subscr-all}
1163\end{figure}
1164
1165Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
1166In both cases, value 2 selects from the coarser dimension (rows of @x@),
1167while the value 3 selects from the finer dimension (columns of @x@).
1168The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
1169Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
1170These two starting expressions, which are the example's only multidimensional subexpressions
1171(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
1172
The figure's presentation offers an intuition by answering the question: what is an atomic element of @x[all]@?
1174From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
An atom (like the bottommost value, @x[all][3][2]@) is the contained value (in the square box)
1176and a lie about its size (the left diagonal above it, growing upward).
1177An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
1178call such an array a column.
1179A column is almost ready to be arranged into a matrix;
1180it is the \emph{contained value} of the next-level building block, but another lie about size is required.
At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it were smaller (the left diagonal above it, shrinking upward).
These lying columns, arranged contiguously according to their size (as announced), form the matrix @x[all]@.
1183Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
Yet every time the programmer presents an index, a C-array subscript performs the offset calculation.
1185
1186In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
1187the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
1188compared with where analogous rows appear when the row-level option is presented for @x@.
1189
1190For example, in \lstinline{x[all]}, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6 (left diagonal).
1191But only the atom 2.3 is storing its value there.
1192The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
1193
1194Lying is implemented as casting.
1195The arrangement just described is implemented in the structure @arpk@.
1196This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
1197The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
1198The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
1199
1200%  Does x[all] have to lie too?  The picture currently glosses over how it it advertises a size of 7 floats.  I'm leaving that as an edge case benignly misrepresented in the picture.  Edge cases only have to be handled right in the code.
1201
1202Casting, overlapping, and lying are unsafe.
1203The mission is to implement a style-1 feature in the type system for safe use by a programmer.
1204The offered style-1 system is allowed to be internally unsafe,
1205just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
1206Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
1207
1208% PAB: repeat from previous paragraph.
1209% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
1210% My casting is unsafe, but I do not do any pointer arithmetic.
1211% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
1212% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
1213
1214% \noindent END: Paste looking for a home
1215
1216The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
1217The @arpk@ structure and its @-[i]@ operator are defined as:
1218\begin{cfa}
1219forall(
1220        [N],                                    $\C{// length of current dimension}$
1221        S & | sized(S),                 $\C{// masquerading-as}$
1222        Timmed &,                               $\C{// immediate element, often another array}$
1223        Tbase &                                 $\C{// base element, e.g. float, never array}$
1224) { // distribute forall to each element
1225        struct arpk {
1226                S strides[N];           $\C{// so that sizeof(this) is N of S}$
1227        };
1228        // expose Timmed, stride by S
1229        static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
1230                subcheck( a, i, 0, N );
1231                return (Timmed &)a.strides[i];
1232        }
1233}
1234\end{cfa}
The private @arpk@ structure (array with explicit packing) is generic over four types: dimension length, masquerading-as, immediate element, and base element.
1236This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
1237Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
1238Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
1240
1241An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
1242In the base case, @array(E_base)@ is just @E_base@.
Because this construction uses the same type for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
1244
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
1246Expressed as an operation on types, this rotation is:
1247\begin{eqnarray*}
1248suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
1249enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
1250enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
1251\end{eqnarray*}
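As a worked instance of this rotation (a sketch; writing $A_7$ for $array( float, 7 )$, \ie $arpk( 7, float, float, float )$), the running example's @all@-effective step expands to:
\begin{eqnarray*}
suball( arpk( 5, A_7, A_7, float ) ) & = & enq( 5, A_7, A_7, float ) \\
 & = & arpk( 7, float, enq( 5, A_7, float, float ), float ) \\
 & = & arpk( 7, float, arpk( 5, A_7, float, float ), float )
\end{eqnarray*}
That is, 7 elements spaced 1 @float@ apart, each presenting 5 elements spaced 7 @float@s apart, matching the concrete statement of the @all@-effective step given earlier.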
1252
1253
1254\section{Bound checks, added and removed}
1255
1256\CFA array subscripting is protected with runtime bound checks.
Dependent typing lets the optimizer remove more of these bound checks than it could without the dependent-type information.
1258This section provides a demonstration of the effect.
1259
1260The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the \CC pattern where restricted vector usage models a checked array.
1261The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
1262The experiment compares with the \CC version to keep access to generated assembly code simple.
1263
1264As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
1265When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
1267
1268TODO: paste source and assembly codes
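A minimal sketch of the \CFA side of such a control case (assumed form; not the measured benchmark code):
\begin{cfa}
forall( [N] )
float total( array( float, N ) & a ) {
	float sum = 0.0;
	for ( int i = 0; i < N; i += 1 )  $\C{// i provably within [0, N)}$
		sum += a[i];  $\C{// so the dynamic check can be elided}$
	return sum;
}
\end{cfa}
Adjusting the loop bound so that @i@ can reach @N@ is the kind of change that makes the check re-appear.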
1269
Incorporating reuse among dimension sizes is seen to give \CFA an optimization advantage.
1271The case is naive matrix multiplication over a row-major encoding.
1272
1273TODO: paste source codes
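A sketch of the \CFA version (assumed form; not the measured sources) shows the dimension reuse: the shared middle dimension @K@ appears in the types of both operands, connecting the inner-loop bound with the arrays' declared sizes:
\begin{cfa}
forall( [M], [K], [N] )
void mat_mult( array( float, M, K ) & a, array( float, K, N ) & b, array( float, M, N ) & c ) {
	for ( int i = 0; i < M; i += 1 )
		for ( int j = 0; j < N; j += 1 ) {
			float sum = 0.0;
			for ( int k = 0; k < K; k += 1 )
				sum += a[i][k] * b[k][j];  $\C{// all three subscripts provably in bound}$
			c[i][j] = sum;
		}
}
\end{cfa}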
1274
1275
1276
1277
1278
1279\section{Comparison with other arrays}
1280
1281
1282\subsection{Rust}
1283
1284\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership-system, which further helps guarantee statically the validity of every pointer dereference.
1286These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
1287\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
1288
1289\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
1290Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, which is generated automatically and does not appear at all in a user's program.
1292
1293
1294\subsection{Java}
1295
1296Java arrays are arrays-of-arrays because all objects are references \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and dynamically-sized array-parameter declarations.
1298\begin{cquote}
1299\begin{tabular}{rl}
1300C      &  @void f( size_t n, size_t m, float x[n][m] );@ \\
1301Java   &  @void f( float x[][] );@
1302\end{tabular}
1303\end{cquote}
However, in the C prototype, the parameters @n@ and @m@ are only documentation of the intended sizes of the first and second dimensions of @x@.
1305\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
1306Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java has bounds checking, while C has none.
1308Both Java and C require explicit null checking, otherwise there is a runtime failure.
1309
1310\begin{figure}
1311\setlength{\tabcolsep}{15pt}
1312\begin{tabular}{ll@{}}
1313\begin{java}
1314int m[][] = {  // triangular matrix
1315        new int [4],
1316        new int [3],
1317        new int [2],
1318        new int [1],
1319        null
1320};
1321
1322for ( int r = 0; r < m.length; r += 1 ) {
1323        if ( m[r] == null ) continue;
1324        for ( int c = 0; c < m[r].length; c += 1 ) {
1325                m[r][c] = c + r; // subscript checking
1326        }
1327
1328}
1329
1330for ( int r = 0; r < m.length; r += 1 ) {
1331        if ( m[r] == null ) {
1332                System.out.println( "null row" );
1333                continue;
1334        }
1335        for ( int c = 0; c < m[r].length; c += 1 ) {
1336                System.out.print( m[r][c] + " " );
1337        }
1338        System.out.println();
1339
1340}
1341\end{java}
1342&
1343\begin{cfa}
1344int * m[5] = {  // triangular matrix
1345        calloc( 4, sizeof(int) ),
1346        calloc( 3, sizeof(int) ),
1347        calloc( 2, sizeof(int) ),
1348        calloc( 1, sizeof(int) ),
1349        NULL
1350};
1351int rlnth = 4;
1352for ( int r = 0; r < 5; r += 1 ) {
1353        if ( m[r] == NULL ) continue;
1354        for ( int c = 0; c < rlnth; c += 1 ) {
1355                m[r][c] = c + r; // no subscript checking
1356        }
1357        rlnth -= 1;
1358}
1359rlnth = 4;
1360for ( int r = 0; r < 5; r += 1 ) {
1361        if ( m[r] == NULL ) {
1362                printf( "null row\n" );
1363                continue;
1364        }
1365        for ( int c = 0; c < rlnth; c += 1 ) {
1366                printf( "%d ", m[r][c] );
1367        }
1368        printf( "\n" );
1369        rlnth -= 1;
1370}
1371\end{cfa}
1372\end{tabular}
1373\caption{Java (left) \vs C (right) Triangular Matrix}
1374\label{f:JavaVsCTriangularMatrix}
1375\end{figure}
1376
The downside of the arrays-of-arrays approach is performance: pointer chasing versus pointer arithmetic for a contiguous array.
1378Furthermore, there is the cost of managing the implicit array descriptor.
1379It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.
1380
1381
1382\subsection{\CC}
1383
1384Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
While the vector size can grow and shrink dynamically, \vs the fixed dynamic size of a VLA, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration (one dynamic call) to avoid using @push_back@.
1386\begin{c++}
1387vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
1388\end{c++}
1389Multidimensional arrays are arrays-of-arrays with associated costs.
Each @vector@ array has an array descriptor containing the dimension, which allows bounds checking using @x.at(i)@ in place of @x[i]@.
Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, which prevents costly reallocation and copying, and never invalidates references to contained values.
1392This scheme matches Java's safety and expressiveness exactly, but with the inherent costs.
1393
1394
1395\subsection{Levels of dependently typed arrays}
1396
1397The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
1398\begin{itemize}
1399\item a \emph{zip}-style operation that consumes two arrays of equal length
1400\item a \emph{map}-style operation whose produced length matches the consumed length
1401\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
1402\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and ability to opt out.
1404Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
1405
1406Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.
1407In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
1409In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
1410[TODO: clarify how even Idris type checking terminates]
1411
1412Idris is a current, general-purpose dependently typed programming language.
1413Length checking is a common benchmark for full dependent type systems.
1414Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
1415[TODO: finish explaining what Data.Vect is and then the essence of the comparison]
1416
1417POINTS:
1418here is how our basic checks look (on a system that does not have to compromise);
1419it can also do these other cool checks, but watch how I can mess with its conservativeness and termination
1420
1421Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
1422Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.
1424So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
1426
1427
1428
1429The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
1430There is a particular emphasis on an existential type, enabling callee-determined return shapes.
1431
1432
1433Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.
1434
Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
1436Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
1437
1438\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.
1439Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
1440Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark.
1441
1442\subsection{Static safety in C extensions}
1443
1444
1445\section{Future work}
1446
1447\subsection{Declaration syntax}
1448
1449\subsection{Range slicing}
1450
1451\subsection{With a module system}
1452
1453\subsection{With described enumerations}
1454
1455A project in \CFA's current portfolio will improve enumerations.
1456In the incumbent state, \CFA has C's enumerations, unmodified.
I will not discuss the core of this project, which already has a tall mission: to improve type safety, maintain appropriate C compatibility, and offer more flexibility about storage use.
1458It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
1459\begin{cfa}
1460forall( T | is_enum(T) )
1461void show_in_context( T val ) {
1462        for( T i ) {
1463                string decorator = "";
1464                if ( i == val-1 ) decorator = "< ready";
1465                if ( i == val   ) decorator = "< go"   ;
1466                sout | i | decorator;
1467        }
1468}
1469enum weekday { mon, tue, wed = 500, thu, fri };
1470show_in_context( wed );
1471\end{cfa}
1472with output
1473\begin{cfa}
1474mon
1475tue < ready
1476wed < go
1477thu
1478fri
1479\end{cfa}
The details in this presentation are not meant to be taken too precisely as suggestions for how it should look in \CFA.
1481But the example shows these abilities:
1482\begin{itemize}
1483\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
1484\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
1485\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
1486\item a provision for looping (the @for@ form used) over the values of the type.
1487\end{itemize}
1488
1489If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.
1490
1491[TODO: introduce Ada in the comparators]
1492
1493In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
1494The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.
1495
1496This change of perspective also lets us remove ubiquitous dynamic bound checks.
1497[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
1498But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.
1499To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
1500
1501[TODO, fix confusion:  Idris has this arrangement of checks, but still the natural numbers as the domain.]
1502
1503The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
1505\begin{cfa}
1506interface Ix n
1507get_size n : Unit -> Int
1508ordinal : n -> Int
1509unsafe_from_ordinal n : Int -> n
1510\end{cfa}
1511
1512Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
1513This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
1514Dex's example is a routine that calculates pointwise differences between two samples.
1515Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.
1517
1518The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
1519In the audio instantiation, the duration-of-clip type argument is used for the domain.
1520In the photograph instantiation, it's the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements:
1522\begin{cfa}
1523instance {a b} [Ix a, Ix b] Ix (a & b)
1524get_size = \(). size a * size b
1525ordinal = \(i, j). (ordinal i * size b) + ordinal j
1526unsafe_from_ordinal = \o.
1527bs = size b
1528(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
1529\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time:
1531\begin{cfa}
1532img_trans :: (img_wd,img_ht)=>Real
1533img_trans.(i,j) = img.i.j
1534result = pairwise img_trans
1535\end{cfa}
1536[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]
1537
1538In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.
1539
1540\subsection{Retire pointer arithmetic}
1541
1542
1543\section{\CFA}
1544
1545XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
1546moved from background chapter \\
1547XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
1548
1549Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
1550(For later:  That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)
1551
1552\subsection{\CFA features interacting with arrays}
1553
Prior work on \CFA included making C arrays, as used in C code from the wild, work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.
1557
1558More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.
1559
A notable success was with the \CFA @alloc@ function,
in which type information associated with a polymorphic return type
replaces @malloc@'s use of programmer-supplied size information.
1563\begin{cfa}
1564// C, library
1565void * malloc( size_t );
1566// C, user
1567struct tm * el1 = malloc( sizeof(struct tm) );
1568struct tm * ar1 = malloc( 10 * sizeof(struct tm) );
1569
1570// CFA, library
1571forall( T * ) T * alloc();
1572// CFA, user
1573tm * el2 = alloc();
1574tm (*ar2)[10] = alloc();
1575\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
The compiler generates this argument using type information from the left-hand side of the initialization to obtain the intended type.
1578Using a compiler-produced value eliminates an opportunity for user error.
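Conceptually (a sketch of the idea only; the names below are illustrative and @cfacc@'s actual lowering differs in detail), the hidden parameter carries the requested type's size, filled in from the initialization target:
\begin{cfa}
// conceptual lowering, not cfacc output
void * alloc_lowered( size_t _sizeof_T );  $\C{// hidden size parameter}$
struct tm * el2 = alloc_lowered( sizeof(struct tm) );  $\C{// T inferred as tm}$
struct tm (*ar2)[10] = alloc_lowered( sizeof(struct tm[10]) );  $\C{// T inferred as tm[10]}$
\end{cfa}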
1579
1580TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa
1581
1582Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example, and gives a first example of an array-interaction bug.
1583In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@.
1584They are not subscripted in the same way.
1585\begin{cfa}
1586ar1[5];
1587(*ar2)[5];
1588\end{cfa}
Using ``reference to array'' resolves this issue.  TODO: discuss connection with Doug-Lea \CC proposal.
1590\begin{cfa}
1591tm (&ar3)[10] = *alloc();
1592ar3[5];
1593\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.
1595
1596Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
The C standard does not claim that @ar1@ may be subscripted [TODO: xref],
because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
1599But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls,
1600where the type requested is an array, making the result, much more obviously, an array object.
1601
1602The ``reference to array'' type has its sore spots too.
1603TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)
1604
1605TODO: I fixed a bug associated with using an array as a T.  I think.  Did I really?  What was the bug?