source: doc/theses/mike_brooks_MMath/array.tex@ 97ac01d3

Last change on this file since 97ac01d3 was 97ac01d3, checked in by Michael Brooks <mlbrooks@…>, 11 months ago

Thesis, array: add section on lifecycle

\chapter{Array}
\label{c:Array}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array type, @array@, to the \CFA standard library~\cite{Cforall}.

Offering the @array@ type, as a distinct alternative to the C array, is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity of the C array.

\section{Introduction}
\label{s:ArrayIntro}

The new \CFA array is declared by instantiating the generic @array@ type,
much like instantiating any other standard-library generic type (such as \CC @vector@),
though using a new style of generic parameter.
\begin{cfa}
@array( float, 99 )@ x; $\C[2.75in]{// x contains 99 floats}$
\end{cfa}
Here, the arguments to the @array@ type are @float@ (element type) and @99@ (length).
When this type is used as a function parameter, the type system requires that a call's argument is a perfect match.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {} $\C{// p accepts 42 floats}$
f( x ); $\C{// statically rejected: type lengths are different, 99 != 42}$

test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
Here, the function @f@'s parameter @p@ is declared with length 42.
However, the call @f( x )@ is invalid, because @x@'s length is @99@, which does not match @42@.

A function declaration can be polymorphic over these @array@ arguments by using the \CFA @forall@ declaration prefix.
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
	T elem = p[i]; $\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 ); $\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 ); $\C{// T is float, N is 99, dynamic subscript check fails}\CRT$

Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
Function @g@ takes an arbitrary type parameter @T@ and a \emph{dimension parameter} @N@.
A dimension parameter represents a to-be-determined count of elements, managed by the type system.
The call @g( x, 0 )@ is valid because @g@ accepts any length of array, where the type system implicitly infers @float@ for @T@ and length @99@ for @N@.
Furthermore, in this case, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ of argument @x@.
The call @g( x, 1000 )@ also passes compile-time checking;
however, its subscript, @x[1000]@, generates a runtime error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
In general, @forall( ..., [N] )@ declares the name @N@, which becomes usable in parameter/return declarations and within the function body.
The syntactic form is chosen to parallel other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ... $\C[1.5in]{// dimension}$
forall( T ) ... $\C{// value datatype (formerly, "otype")}$
forall( T & ) ... $\C{// opaque datatype (formerly, "dtype")}\CRT$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.

The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller has an argument that instantiates @N@ with 42.
\begin{cfa}
forall( [N] )
void declDemo( ... ) {
	float x1[N]; $\C{// built-in type ("C array")}$
	array(float, N) x2; $\C{// type from library}$
}
\end{cfa}
Both of the locally-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout; they both encapsulate 42-float stack allocations, with no additional ``bookkeeping'' allocations or headers.
Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, efficient (space and time) array type, which forms the foundation for more complex array forms in \CFA.
In all following discussion, ``C array'' means types like that of @x1@ and ``\CFA array'' means the standard-library @array@ type (instantiations), like the type of @x2@.

Admittedly, the @array@ library type for @x2@ is syntactically different from its C counterpart.
A future goal (TODO xref) is to provide the new @array@ features with syntax approaching C's (declaration style of @x1@).
Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
At present, the C-syntax @array@ is only partially supported, so the generic @array@ is used exclusively in the thesis;
feature support and C compatibility are revisited in Section ? TODO.

My contributions in this chapter are:
\begin{enumerate}
\item A type-system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
\item A length-checked array type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Argument/parameter passing safety for arrays, and subscript safety.
\item TODO: general parking...
\item Identification of the interesting specific abilities made available by the new @array@ type.
\item Where there is a gap concerning this feature's readiness for prime time, identification of specific workable improvements that are likely to close the gap.
\end{enumerate}


\section{Dependent typing}

General dependent typing allows the type system to encode arbitrary predicates (\eg behavioural specifications for functions),
which is an anti-goal for my work.
Firstly, this application is strongly associated with pure functional languages,
where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
is a sufficient postcondition.
In an imperative language like C and \CFA, it is also necessary to discuss side effects,
for which an even heavier formalism, like separation logic, is required.
Secondly, TODO: bash Rust.
TODO: cite the crap out of these claims.


\section{Features added}

This section shows more about using the \CFA array and dimension parameters, demonstrating their syntax and semantics by way of motivating examples.
As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.

By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return,
meaning that it requires both input arrays to be of the same length, and guarantees that the result is of that length as well.
\lstinput{10-17}{hello-array.cfa}
Function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array, by the library @alloc@ function,
\begin{cfa}
forall( T & | sized(T) )
T * alloc() {
	return @(T *)@malloc( @sizeof(T)@ );
}
\end{cfa}
uses the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note that @alloc@ only sees one whole type for its @T@ (which is @f@'s @array(bool, N)@); this type's size is a computation based on @N@.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing the \emph{sized} assertions needed by other types.
(\emph{sized} implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, there is significant programming safety in making the size accessible and implicit, compared with C's @calloc@ and non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.

\begin{figure}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{\lstinline{f} Harness}
\label{f:fHarness}
\end{figure}

\VRef[Figure]{f:fHarness} shows a harness that uses function @f@, illustrating how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by the constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-through use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie the explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
The result is a significant improvement in safety and usability.

In summary:
\begin{itemize}
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
@N@ can be used as an expression of type @size_t@ within the declared function body.
\item
The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
\item
@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@-typed objects.
\end{itemize}

\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
% agreed, though already said
\item
\CC does not allow a template function to be nested, while \CFA allows its polymorphic functions to be nested.
% why is this important?
\item
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and is passed implicitly at the call.
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
\item
\CC cannot have an array of references, but can have an array of pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
% not really about forall-N vs template-N
% any better CFA support is how Rob left references, not what Mike did to arrays
% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
\item
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
\end{enumerate}
TODO: settle Mike's concerns with this comparison (perhaps, remove)

\begin{figure}
\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
	for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
int main() {

	int ret[10], x[10];
	for ( int i = 0; i < 10; i += 1 ) x[i] = i;
	@copy<int, 10 >( ret, x );@
	for ( int i = 0; i < 10; i += 1 )
		cout << ret[i] << ' ';
	cout << endl;
}
\end{c++}
&
\begin{cfa}
int main() {
	@forall( T, [N] )@ // nested function
	void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
		for ( i; N ) ret[i] = x[i];
	}

	const int n = promptForLength();
	array( int, n ) ret, x;
	for ( i; n ) x[i] = i;
	@copy( ret, x );@
	for ( i; n )
		sout | ret[i] | nonl;
	sout | nl;
}
\end{cfa}
\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type }
\label{f:TemplateVsGenericType}
\end{figure}

Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so are length mismatches stopped when they involve dimension parameters.
While \VRef[Figure]{f:fHarness} shows successfully calling a function @f@ expecting two arrays of the same length,
\begin{cfa}
array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
a static rejection occurs when attempting to call @f@ with arrays of differing lengths.
\lstinput[tabsize=1]{70-74}{hello-array.cfa}
When the argument lengths themselves are statically unknown,
the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
\begin{tabular}{@{\hspace{0.5in}}l@{\hspace{1in}}l@{}}
\lstinput{90-97}{hello-array.cfa}
&
\lstinput{110-117}{hello-array.cfa}
\end{tabular}

\noindent
This static check's full rules are presented in \VRef[Section]{s:ArrayTypingC}.

Orthogonally, the \CFA array type works within generic \emph{types}, \ie @forall@-on-@struct@.
The same argument safety and the associated implicit communication of array length occur.
Preexisting \CFA allowed aggregate types to be generalized with type parameters, enabling parameterization of element types.
Now, \CFA also allows parameterizing them by length.
Doing so gives a refinement of C's ``flexible array member'' pattern[TODO: cite ARM 6.7.2.1 pp18]\cite{arr:gnu-flex-mbr}.
While a C flexible array member can only occur at the end of the enclosing structure,
\CFA allows length-parameterized array members to be nested at arbitrary locations.
This flexibility, in turn, allows for multiple array members.
\lstinput{10-15}{hello-accordion.cfa}
The structure has course- and student-level metadata (their respective field names) and a position-based preferences matrix.
Its layout has the starting offset of @studentIds@ varying according to the generic parameter @C@, and the offset of @preferences@ varying according to both generic parameters.

\VRef[Figure]{f:checkHarness} shows a program main using @School@ and results with different array sizes.
The @school@ variable holds many students' course-preference forms.
It is on the stack and its initialization does not use any casting or size arithmetic.
Both of these points are impossible with a C flexible array member.
When heap allocation is preferred, the original pattern still applies.
\begin{cfa}
School( classes, students ) * sp = alloc();
\end{cfa}
This ability to avoid casting and size arithmetic improves safety and usability over C flexible array members.
Finally, inputs and outputs are given at the bottom for different-sized schools.
The example program prints the courses in each student's preferred order, all using the looked-up display names.

\begin{figure}
\begin{cquote}
\lstinput{50-55}{hello-accordion.cfa}
\lstinput{90-98}{hello-accordion.cfa}
\ \\
@$ cat school1@
\lstinput{}{school1}

@$ ./a.out < school1@
\lstinput{}{school1.out}

@$ cat school2@
\lstinput{}{school2}

@$ ./a.out < school2@
\lstinput{}{school2.out}
\end{cquote}

\caption{\lstinline{School} harness, input and output}
\label{f:checkHarness}
\end{figure}

When a function operates on a @School@ structure, the type system handles its memory layout transparently.
\lstinput{30-37}{hello-accordion.cfa}
In the example, the @getPref@ function answers, for the student at position @is@, the position of that student's @pref@\textsuperscript{th}-favoured class.


\section{Dimension Parameter Implementation}

The core of the preexisting \CFA compiler already had the ``heavy equipment'' needed
to provide the feature set just reviewed (up to bugs in cases not yet exercised).
To apply this equipment to tracking array lengths, I encoded a dimension (an array's length) as a type.
The type in question does not describe any data that the program actually uses at runtime.
It simply carries information through intermediate stages of \CFA-to-C lowering.
And this type takes a form such that, once \emph{it} gets lowered, the result does the right thing.

Furthermore, the @array@ type itself is really ``icing on the cake.''
Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl}
(where the added complexity needed for multiple dimensions is considered).
But simplifications close enough for the present discussion are:
\begin{cfa}
forall( [N] )
struct array_1d_float {
	float items[N];
};
forall( T, [N] )
struct array_1d_T {
	T items[N];
};
\end{cfa}
These two structure patterns, plus a subscript operator, are all that @array@ provides.

My main work is letting a programmer define
such a structure (one whose type is parameterized by @[N]@)
and functions that operate on it (these being similarly parameterized).

The repurposed heavy equipment is:
\begin{itemize}
\item
	The resolver, which provides values for a used declaration's type-system variables,
	gathered from type information in scope at the usage site.
\item
	The box pass, which encodes information about type parameters
	into ``extra'' regular parameters/arguments on declarations and calls.
	Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter,
	and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, \ie a use of the parameter.
\end{itemize}

The rules for resolution had to be restricted slightly, in order to achieve important refusal cases.
This work is detailed in \VRef[Section]{s:ArrayTypingC}.
However, the resolution--boxing scheme, in its preexisting state, was already equipped to work on (desugared) dimension parameters.
The following discussion explains the desugaring and how correctly lowered code results.

A simpler structure, and a toy function on it, demonstrate what is needed for the encoding.
\begin{cfa}
forall( [@N@] ) { $\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @N@; } $\C{// [2], [3]}$
}
int main() {
	thing( @10@ ) x; f( x ); $\C{// prints 10, [4]}$
	thing( 100 ) y; f( y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
This example has:
\begin{enumerate}
\item
	The symbol @N@ being declared as a type variable (a variable of the type system).
\item
	The symbol @N@ being used to parameterize a type.
\item
	The symbol @N@ being used as an expression (value).
\item
	A value like 10 being used as an argument to the parameter @N@.
\end{enumerate}
The chosen solution is to encode the value @N@ \emph{as a type}, so items 1 and 2 are immediately available for free.
Item 3 needs a way to recover the encoded value from a (valid) type (and to reject invalid types occurring here).
Item 4 needs a way to produce a type that encodes a given value.

Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
The production and recovery are then straightforward.
\begin{itemize}
\item
	The value $n$ is encoded as a type whose size is $n$.
\item
	Given a dimension expression $e$, produce the type @char[@$e$@]@ to represent it.
	If $e$ evaluates to $n$ then the encoded type has size $n$.
\item
	Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
	If $T$ has size $n$ then the recovery expression evaluates to $n$.
\end{itemize}

This desugaring is applied in a translation step before the resolver.
The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
The running example is lowered to:
\begin{cfa}
forall( @N *@ ) { $\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @sizeof(N)@; } $\C{// [2], [3]}$
}
int main() {
	thing( char[@10@] ) x; f( x ); $\C{// prints 10, [4]}$
	thing( char[100] ) y; f( y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
Observe:
\begin{enumerate}
\item
	@N@ is now declared to be a type.
	It is declared to be \emph{sized} (by the @*@), meaning that the box pass shall do its @sizeof(N)@--@__sizeof_N@ extra-parameter and expression translation.
\item
	@thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
\item
	The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
\item
	The type of variable @x@ is another @thing(-)@ type; the argument to the generic @thing@ is a type (array type of bytes, @char@).
\end{enumerate}

From this point, preexisting \CFA compilation takes over lowering it the rest of the way to C.
Here the result shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated:
\begin{cfa}
// [1]
void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; } $\C{// [2], [3]}$
int main() {
	struct __conc_thing_10 {} x; f( @10@, &x ); $\C{// prints 10, [4]}$
	struct __conc_thing_100 {} y; f( @100@, &y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
Observe:
\begin{enumerate}
\item
	The type parameter @N@ is gone.
\item
	The type @thing(N)@ is (replaced by @void *@, but thereby effectively) gone.
\item
	The @sout...@ expression (being an application of the @?|?@ operator) has a regular variable (parameter) usage for its second argument.
\item
	Information about the particular @thing@ instantiation (value 10) has moved, from the type, to a regular function-call argument.
\end{enumerate}
At the end of the desugaring and downstream processing, the original C idiom of ``pass both a length parameter and a pointer'' has been reconstructed.
In the programmer-written form, only the @thing@ is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.

Back at the compiler front end, parsing changes, AST-schema extensions, and validation rules enable the sugared user input.
\begin{itemize}
\item
	Recognize the form @[N]@ as a type-variable declaration within a @forall@.
\item
	Have the new brand of type variable, \emph{Dimension}, in the AST form of a type variable, to represent one parsed from @[-]@.
\item
	Allow a type variable to occur in an expression. Validate (after parsing) that only dimension-branded type variables are used here.
\item
	Allow an expression to occur in type-argument position. Brand the resulting type argument as a dimension.
\item
	Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
	\begin{cfa}
	array_1d( foo, bar ) x;
	\end{cfa}
	\vspace*{-10pt}
	that the brands on the generic arguments match the brands of the declared type variables.
	Here, that @foo@ is a type and @bar@ is a dimension.
\end{itemize}


\section{Typing of C Arrays}
\label{s:ArrayTypingC}

Essential to giving a guarantee of accurate length is the compiler's ability
to reject a program that presumes to mishandle length.
By contrast, most discussion so far dealt with communicating length,
from one party, who knows it, to another, who is willing to work with any given length.
For scenarios where the concern is a mishandled length,
the interaction is between two parties who both claim to know something about it.
Such a scenario occurs in this pure C fragment, which today's C compilers accept:
\begin{cfa}
int n = @42@;
float x[n];
float (*xp)[@999@] = &x;
(*xp)[@500@]; $\C{// in "bound"?}$
\end{cfa}
Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bounds of its referent @x@,
the access appears in-bounds according to the type information available on @xp@.
Truly, length is being mishandled in the previous step,
where the type-carried length information on @x@ is not compatible with that of @xp@.

The \CFA new array rejects the analogous case:
\begin{cfa}
int n = @42@;
array(float, n) x;
array(float, 999) * xp = x; $\C{// static rejection here}$
(*xp)[@500@]; $\C{// runtime check vs len 999}$
\end{cfa}
The way the \CFA array is implemented, the type analysis of this case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[parsep=0pt]
\item
	Is @array(float, 999)@ type-compatible with @array(float, n)@?
\item
	Is @arrayX(float, char[999])@ type-compatible with @arrayX(float, char[n])@?\footnote{
	Here, \lstinline{arrayX} represents the type that results
	from desugaring the \lstinline{array} type
	into a type whose generic parameters are all types.
	This presentation elides the noisy fact that
	\lstinline{array} is actually a macro for something bigger;
	the reduction to \lstinline{char[-]} still proceeds as sketched.}
\item
	Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}
Achieving the necessary \CFA rejections meant rejecting the corresponding C case, which is not backward compatible.
There are two complementary mitigations for this incompatibility.

First, a simple recourse is available to a programmer who intends to proceed
with the statically unsound assignment.
This situation might arise if @n@ were known to be 999,
rather than 42, as in the introductory examples.
The programmer can add a cast in the \CFA code.
\begin{cfa}
xp = @(float (*)[999])@ &x;
\end{cfa}
This addition causes \CFA to accept, because now the programmer has accepted blame.
This addition is benign in plain C, because the cast is valid, just unnecessary there.
Moreover, the addition can even be seen as appropriate ``eye candy,''
marking where unchecked length knowledge is used.
Therefore, a program being onboarded to \CFA can receive a simple upgrade,
to satisfy the \CFA rules (and arguably become clearer),
without giving up its validity to a plain C compiler.

Second, the incompatibility only affects types like pointer-to-array,
which are infrequently used in C.
The more common C idiom for aliasing an array is to use a pointer-to-first-element type,
which does not participate in the \CFA array's length checking.\footnote{
	Notably, the desugaring of the \lstinline{array} type avoids letting any \lstinline{-[-]} type decay,
	in order to preserve the length information that powers runtime bounds checking.}
Therefore, the frequency of needing to upgrade legacy C code (as discussed in the first mitigation)
is anticipated to be low.

Because the incompatibility represents a low cost to a \CFA onboarding effort
(with a plausible side benefit of linting the original code for a missing annotation),
no special measures were added to retain the compatibility.
It would be possible to flag occurrences of @-[-]@ types that come from @array@ desugaring,
treating those with stricter \CFA rules, while treating others with classic C rules.
If future lessons from C-project onboarding warrant it,
this special compatibility measure can be added.

Having allowed that both the initial C example's check
\begin{itemize}
	\item
	Is @float[999]@ type-compatible with @float[n]@?
\end{itemize}
and the second \CFA example's induced check
\begin{itemize}
	\item
	Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}
shall have the same answer (``no''),
discussion turns to how I got the \CFA compiler to produce this answer.
In its preexisting form, it produced a (buggy) approximation of the C rules.
To implement the new \CFA rules, I took the syntactic recursion a step further, obtaining,
in both cases:
\begin{itemize}
	\item
	Is @999@ compatible with @n@?
\end{itemize}
This compatibility question applies to a pair of expressions, whereas the earlier questions applied to types.
Such an expression-compatibility question is a new addition to the \CFA compiler.
Note, these questions only arise in the context of dimension expressions on (C) array types.

TODO: ensure these compiler implementation matters are treated under \CFA compiler background:
type unification,
cost calculation,
GenPoly.

The relevant technical component of the \CFA compiler is the type-unification procedure within the type resolver.
I added rules for continuing this unification into expressions that occur within types.
It is still fundamentally doing \emph{type} unification
because it is participating in binding type variables,
and not participating in binding any variables that stand in for expression fragments
(for there is no such sort of variable in \CFA's analysis).
An unfortunate fact about the \CFA compiler's preexisting implementation is that
type unification suffers from two forms of duplication.

The first duplication has (many of) the unification rules stated twice.
As a result, my additions for dimension expressions are stated twice.
The extra statement of the rules occurs in the @GenPoly@ module,
where concrete types like @array(int, 5)@\footnote{
	Again, the presentation is simplified
	by leaving the \lstinline{array} macro unexpanded.}
are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
In this case, the struct's definition contains fields that hardcode the argument values of @int@ and @5@.
The next time an @array(-,-)@ concrete instance is encountered, it checks if the previous @struct __conc_array_1234@ is suitable for it.
Yes, for another occurrence of @array(int, 5)@;
no, for either @array(rational(int), 5)@ or @array(int, 42)@.
By the last example, this phase must ``reject''
the hypothesis that it should reuse the dimension-5 instance's C lowering for a dimension-42 instance.

588The second duplication has unification (proper) being invoked at two stages of expression resolution.
589As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
590In the program
591\begin{cfa}
592void @f@( double );
593forall( T & ) void @f@( T & );
594void g( int n ) {
595 array( float, n + 1 ) x;
596 f(x); // overloaded
597}
598\end{cfa}
when resolving the function call @f(x)@, the first unification stage
compares the parameter type @T@ with the argument type @array( float, n + 1 )@.
601TODO: finish.
602
603The actual rules for comparing two dimension expressions are conservative.
604To answer, ``yes, consider this pair of expressions to be matching,''
605is to imply, ``all else being equal, allow an array with length calculated by $e_1$
606to be passed to a function expecting a length-$e_2$ array.''\footnote{
607 TODO: Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
608 Should it be an earlier scoping principle? Feels like it should matter in more places than here.}
So, a ``yes'' answer must represent a guarantee that both expressions evaluate to the
same result, while a ``no'' can tolerate ``they might, but we're not sure,''
provided that practical recourses are available
to let programmers express better knowledge.
613The new rule-set in the current release is, in fact, extremely conservative.
614I chose to keep things simple,
615and allow future needs to drive adding additional complexity, within the new framework.
616
617For starters, the original motivating example's rejection
618is not based on knowledge that
619the @xp@ length of (the literal) 999 is value-unequal to
620the (obvious) runtime value of the variable @n@, which is the @x@ length.
621Rather, the analysis assumes a variable's value can be anything,
622and so there can be no guarantee that its value is 999.
623So, a variable and a literal can never match.
624
625Two occurrences of the same literal value are obviously a fine match.
626For two occurrences of the same variable, more information is needed.
627For example, this one is fine
628\begin{cfa}
629void f( const int n ) {
630 float x[n];
631 float (*xp)[n] = x; // accept
632}
633\end{cfa}
634while this one is not:
635\begin{cfa}
636void f() {
637 int n = 42;
638 float x[n];
639 n = 999;
640 float (*xp)[n] = x; // reject
641}
642\end{cfa}
Furthermore, the fact that the first example sees @n@ as @const@
is not actually sufficient.
In the following example, @f@'s length expression is declared as @const@ as it can be,
yet its value still changes between its two uses:
647\begin{cquote}
648\setlength{\tabcolsep}{15pt}
649\begin{tabular}{@{}ll@{}}
650\begin{cfa}
651// compile unit 1
652void g();
653void f( const int & const nr ) {
654 float x[nr];
655 g(); // change n
656 @float (*xp)[nr] = x;@ // reject
657}
658\end{cfa}
659&
660\begin{cfa}
661// compile unit 2
662static int n = 42;
663void g() {
664 n = 99;
665}
666
667f( n );
668\end{cfa}
669\end{tabular}
670\end{cquote}
671The issue here is that knowledge needed to make a correct decision is hidden by separate compilation.
672Even within a translation unit, static analysis might not be able to provide all the information.
673
My rule set also respects a traditional C feature: in spite of the C rules' lax acceptance of cases
that produce different values, there are a few mismatches that C stops.
676C is quite precise when working with two static values.
677\begin{cfa}
678enum { fortytwo = 42 };
679float x[fortytwo];
680float (*xp1)[42] = &x; // accept
681float (*xp2)[999] = &x; // reject
682\end{cfa}
683My \CFA rules agree with C's on these cases.
684
685In summary, the new rules classify expressions into three groups:
686\begin{description}
687\item[Statically Evaluable]
688 Expressions for which a specific value can be calculated (conservatively)
689 at compile-time.
690 A preexisting \CFA compiler module defines which literals, enumerators, and expressions qualify,
691 and evaluates them.
692\item[Dynamic but Stable]
693 The value of a variable declared as @const@, including a @const@ parameter.
694\item[Potentially Unstable]
695 The catch-all category. Notable examples include:
696 any function-call result, @float x[foo()];@,
697 the particular function-call result that is a pointer dereference, @void f(const int * n)@ @{ float x[*n]; }@, and
698 any use of a reference-typed variable.
699\end{description}
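For concreteness, the three groups can be illustrated in a single sketch; the declarations here are hypothetical, with @g@ standing in for some function returning @int@:
\begin{cfa}
enum { K = 8 };
int g( void );
void h( const int cn, const int * p, int & r ) {
	float a[K + 1]; $\C{// Statically Evaluable: compile-time value 9}$
	float b[cn]; $\C{// Dynamic but Stable: const parameter}$
	float c[g()]; $\C{// Potentially Unstable: function-call result}$
	float d[*p]; $\C{// Potentially Unstable: pointer dereference}$
	float e[r]; $\C{// Potentially Unstable: reference-typed variable}$
}
\end{cfa}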
700Within these groups, my \CFA rules are:
701\begin{itemize}
702\item
703 Accept a Statically Evaluable pair, if both expressions have the same value.
704 Notably, this rule allows a literal to match with an enumeration value, based on the value.
705\item
	Accept a Dynamic but Stable pair, if both expressions are written out the same, \eg both refer to the same variable declaration.
707\item
708 Otherwise, reject.
709 Notably, reject all pairs from the Potentially Unstable group and all pairs that cross groups.
710\end{itemize}
711The traditional C rules are:
712\begin{itemize}
713\item
714 Reject a Statically Evaluable pair, if the expressions have two different values.
715\item
716 Otherwise, accept.
717\end{itemize}
718
719\begin{figure}
720 \newcommand{\falsealarm}{{\color{blue}\small{*}}}
721 \newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
722 \newcommand{\cmark}{\ding{51}} % from pifont
723 \newcommand{\xmark}{\ding{55}}
724
725 \begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
726 & \multicolumn{2}{c}{\underline{Values Equal}}
727 & \multicolumn{2}{c}{\underline{Values Unequal}}
728 & \\
729 \textbf{Case} & C & \CFA & C & \CFA & Compat. \\
730 Both Statically Evaluable, Same Symbol & Accept & Accept & & & \cmark \\
731 Both Statically Evaluable, Different Symbols & Accept & Accept & Reject & Reject & \cmark \\
732 Both Dynamic but Stable, Same Symbol & Accept & Accept & & & \cmark \\
733 Both Dynamic but Stable, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
734 Both Potentially Unstable, Same Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
	Any other grouping, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark
736 \end{tabular}
737
738 \medskip
739 \noindent\textbf{Legend}
740 \begin{itemize}[leftmargin=*]
741 \item
742 Each row gives the treatment of a test harness of the form
743 \begin{cfa}
744 float x[ expr1 ];
745 float (*xp)[ expr2 ] = &x;
746 \end{cfa}
747 \vspace*{-10pt}
748 where \lstinline{expr1} and \lstinline{expr2} are meta-variables varying according to the row's Case.
749 Each row's claim applies to other harnesses too, including,
750 \begin{itemize}
751 \item
752 calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
753 \item
754 assignment in place of initialization,
755 \item
756 using references in place of pointers, and
757 \item
758 for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
759 \end{itemize}
760 \item
761 Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
762 even though most test harnesses are asymmetric.
763 \item
764 The table treats symbolic identity (Same/Different on rows)
765 apart from value equality (Equal/Unequal on columns).
766 \begin{itemize}
767 \item
768 The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
769 (where \lstinline{n} is a variable with value 1),
770 are all different symbols with the value 1.
771 \item
772 The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
773 \item
774 The row distinction expresses the simple static factors used by today's analyses.
775 \end{itemize}
776 \item
777 Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
778 while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
779 \end{itemize}
780
781 \caption{Case comparison for array type compatibility, given pairs of dimension expressions.}
782 \label{f:DimexprRuleCompare}
783\end{figure}
784
785
786Figure~\ref{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that the \CFA false alarms occur only in cases that C handles unsafely.
It also shows that incompatibilities with C occur only in those same unsafe cases.
789
790
The conservatism of the new rule set can leave a programmer needing a recourse,
when using a dimension expression whose stability argument
is more subtle than the current analysis can follow.
794This recourse is to declare an explicit constant for the dimension value.
795Consider these two dimension expressions,
796whose reuses are rejected by the blunt current-state rules:
797\begin{cfa}
798void f( int & nr, const int nv ) {
799 float x[nr];
800 float (*xp)[nr] = &x; // reject: nr varying (no references)
801 float y[nv + 1];
802 float (*yp)[nv + 1] = &y; // reject: ?+? unpredictable (no functions)
803}
804\end{cfa}
Yet, both dimension expressions are reused safely:
the @nr@ reference is never written to, is not volatile,
and control does not leave the function between its two uses;
and the name @?+?@ resolves to a function that is quite predictable.
Here, the programmer can add constant declarations (a cast does not work):
810\begin{cfa}
811void f( int & nr, const int nv ) {
812 @const int nx@ = nr;
813 float x[nx];
814 float (*xp)[nx] = & x; // accept
815 @const int ny@ = nv + 1;
816 float y[ny];
817 float (*yp)[ny] = & y; // accept
818}
819\end{cfa}
820The result is the originally intended semantics,
821achieved by adding a superfluous ``snapshot it as of now'' directive.
822
823The snapshotting trick is also used by the translation, though to achieve a different outcome.
824Rather obviously, every array must be subscriptable, even a bizarre one:
825\begin{cfa}
826array( float, rand(10) ) x;
827x[0]; // 10% chance of bound-check failure
828\end{cfa}
829Less obvious is that the mechanism of subscripting is a function call,
830which must communicate length accurately.
831The bound-check above (callee logic) must use the actual allocated length of @x@,
832without mistakenly reevaluating the dimension expression, @rand(10)@.
833Adjusting the example to make the function's use of length more explicit:
834\begin{cfa}
835forall ( T * )
836void f( T * x ) { sout | sizeof(*x); }
837float x[ rand(10) ];
838f( x );
839\end{cfa}
840Considering that the partly translated function declaration is, loosely,
841\begin{cfa}
842void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
843\end{cfa}
a naive translation would evaluate the dimension expression twice:
845\begin{cfa}
846float x[ rand(10) ];
847f( rand(10), &x );
848\end{cfa}
849Rather, the translation is:
850\begin{cfa}
851size_t __dim_x = rand(10);
852float x[ __dim_x ];
853f( __dim_x, &x );
854\end{cfa}
This dimension hoisting during translation already occurred in the preexisting \CFA compiler.
But its cases were buggy, particularly in determining, ``Can hoisting the expression be skipped here?'', for skipping this hoisting is clearly desirable in some cases,
\eg when the programmer has already hoisted the dimension into a constant manually, as in the @nx@ and @ny@ declarations above.
In the new implementation, these cases are correct, harmonized with the accept/reject criteria.
859
860TODO: Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation
861
862
863\section{Multidimensional array implementation}
864\label{s:ArrayMdImpl}
865
866A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
867\begin{enumerate}
868\item
869Flexible-stride memory:
870this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane, row, or column, \eg @c[3][*][*]@, @c[3][4][*]@, @c[3][*][5]@.
871\item
872Fixed-stride memory:
873this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
874After which, subscripting and memory layout are independent.
875\item
876Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
A programmer must manually build any notion of dimensions using other tools;
hence, this style does not offer multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
880\end{enumerate}
881
There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than my numerical ordering.
That is, under that view, styles 1 and 2 are at the same abstraction level, with 3 offering a limited subset of their functionality.
884I chose to build the \CFA style-1 array upon a style-2 abstraction.
885(Justification of the decision follows, after the description of the design.)
886
887Style 3 is the inevitable target of any array implementation.
888The hardware offers this model to the C compiler, with bytes as the unit of displacement.
889C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
890Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
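For example, a programmer using the explicit-displacement model to fake a $5 \times 7$ matrix must hand-code the row-major displacement; this fragment is an illustrative sketch, not library code:
\begin{cfa}
float * m = malloc( 5 * 7 * sizeof(float) ); $\C{// one flat allocation}$
m[ r * 7 + c ] = 3.14; $\C{// manual displacement to row r, column c}$
\end{cfa}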
891
Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
895The required memory shape of such a slice is fixed, before any discussion of implementation.
896The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
897TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
898
899The new \CFA standard library @array@ datatype supports richer multidimensional features than C.
900The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
901Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
902
903The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$, when stepping across the length-7 finely-strided column dimension, and stepping across the length-5 coarsely-strided row dimension.
904\par
905\mbox{\lstinput{121-126}{hello-md.cfa}}
906\par\noindent
907The memory layout is 35 contiguous elements with strictly increasing addresses.
908
909A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
910As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
911This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
912The following is an example slicing a row.
913\lstinput{60-64}{hello-md.cfa}
914\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
915
916However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
917Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
918That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
919The new-array library provides the trait @ar@, so-defined.
920With it, the original declaration can be generalized with the same body.
921\lstinput{43-44}{hello-md.cfa}
922\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
923The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value''; the resulting slice awaits such a value and implements the @ar@ trait.
925\lstinput{150-151}{hello-md.cfa}
926
927The example shows @x[2]@ and @x[[2, all]]@ both refer to the same, ``2.*'' slice.
928Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
929This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
930That is:
931\begin{cquote}
932\begin{tabular}{@{}cccccl@{}}
933@x[[2,all]][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
934@x[[all,3]][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
935\end{tabular}
936\end{cquote}
937
938Narrating progress through each of the @-[-][-][-]@\footnote{
939The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
940expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
941Where @all@ is redundant:
942\begin{cquote}
943\begin{tabular}{@{}ll@{}}
944@x@ & 2-dimensional, want subscripts for coarse then fine \\
945@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
946@x[2][all]@ & 1-dimensional, want subscript for fine \\
947@x[2][all][3]@ & 0-dimensional; lock fine == 3
948\end{tabular}
949\end{cquote}
950Where @all@ is effective:
951\begin{cquote}
952\begin{tabular}{@{}ll@{}}
953@x@ & 2-dimensional, want subscripts for coarse then fine \\
954@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
955@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
956@x[all][3][2]@ & 0-dimensional; lock coarse == 2
957\end{tabular}
958\end{cquote}
959The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
For example, in a two-dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
961The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
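Extrapolating this queue reading to three dimensions (an inference from the stated semantics, not an example taken from the library), one application of @-[all]@ rotates the expected subscript order:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
@x@ & want subscripts for $d_0$, then $d_1$, then $d_2$ \\
@x[all]@ & want subscripts for $d_1$, then $d_2$, then $d_0$
\end{tabular}
\end{cquote}
so @x[all][j][k][i]@ $\equiv$ @x[i][j][k]@.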
962
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
The @-[all]@ operation converts a reference to one instantiation of this type into a reference to another instantiation of it, differing only in the order of its stride information.
965The running example's @all@-effective step, stated more concretely, is:
966\begin{cquote}
967\begin{tabular}{@{}ll@{}}
@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @float@s apart \\
@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
970\end{tabular}
971\end{cquote}
972
973\begin{figure}
974\includegraphics{measuring-like-layout}
975\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
976The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
977The vertical shaded band highlights the location of the targeted element, 2.3.
978Any such vertical slice contains various interpretations of a single address.}
979\label{fig:subscr-all}
980\end{figure}
981
982Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
983In both cases, value 2 selects from the coarser dimension (rows of @x@),
984while the value 3 selects from the finer dimension (columns of @x@).
985The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
986Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
987These two starting expressions, which are the example's only multidimensional subexpressions
988(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
989
990The figure's presentation offers an intuition answering to: What is an atomic element of @x[all]@?
991From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
992An atom (like the bottommost value, @x[all][3][2]@), is the contained value (in the square box)
993and a lie about its size (the left diagonal above it, growing upward).
994An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
995call such an array a column.
996A column is almost ready to be arranged into a matrix;
997it is the \emph{contained value} of the next-level building block, but another lie about size is required.
At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it were smaller (the left diagonal above it, shrinking upward).
999These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
1000Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
Yet every time the programmer presents an index, a C-array subscript performs the offset calculation.
1002
1003In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
1004the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
1005compared with where analogous rows appear when the row-level option is presented for @x@.
1006
1007For example, in \lstinline{x[all]}, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6 (left diagonal).
1008But only the atom 2.3 is storing its value there.
1009The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
1010
1011Lying is implemented as casting.
1012The arrangement just described is implemented in the structure @arpk@.
1013This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
1014The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
1015The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
1016
1017% Does x[all] have to lie too? The picture currently glosses over how it it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
1018
1019Casting, overlapping, and lying are unsafe.
1020The mission is to implement a style-1 feature in the type system for safe use by a programmer.
1021The offered style-1 system is allowed to be internally unsafe,
1022just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
1023Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
1024
1025% PAB: repeat from previous paragraph.
1026% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
1027% My casting is unsafe, but I do not do any pointer arithmetic.
1028% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
1029% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
1030
1031% \noindent END: Paste looking for a home
1032
1033The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
1034The @arpk@ structure and its @-[i]@ operator are defined as:
1035\begin{cfa}
1036forall(
1037 [N], $\C{// length of current dimension}$
1038 S & | sized(S), $\C{// masquerading-as}$
1039 Timmed &, $\C{// immediate element, often another array}$
1040 Tbase & $\C{// base element, \eg float, never array}$
1041) { // distribute forall to each element
1042 struct arpk {
1043 S strides[N]; $\C{// so that sizeof(this) is N of S}$
1044 };
1045 // expose Timmed, stride by S
1046 static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
1047 subcheck( a, i, 0, N );
1048 return (Timmed &)a.strides[i];
1049 }
1050}
1051\end{cfa}
The private @arpk@ structure (array with explicit packing) is generic over four types: dimension length, masquerading-as, immediate element, and base element.
1053This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
1054Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
1055Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
1057
1058An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
1059In the base case, @array(E_base)@ is just @E_base@.
Because this construction uses the same value for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
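For example, under this expansion (with the base case applied), the running $5 \times 7$ matrix type unfolds as:
\begin{cfa}
array( float, 5, 7 )
// == arpk( 5, array( float, 7 ), array( float, 7 ), float )
// == arpk( 5, arpk( 7, float, float, float ),
//             arpk( 7, float, float, float ), float )
\end{cfa}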
1061
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
1063Expressed as an operation on types, this rotation is:
1064\begin{eqnarray*}
1065suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
1066enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
1067enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
1068\end{eqnarray*}
1069
1070
1071\section{Bound checks, added and removed}
1072
1073\CFA array subscripting is protected with runtime bound checks.
With dependent typing, the optimizer can remove more of these bound checks than it could without.
1075This section provides a demonstration of the effect.
1076
1077The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the \CC pattern where restricted vector usage models a checked array.
1078The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
1079The experiment compares with the \CC version to keep access to generated assembly code simple.
1080
1081As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
1082When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
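A minimal sketch of such a control case (illustrative only, not the measured harness):
\begin{cfa}
array( float, 10 ) a;
for ( i; 10 ) a[i] = 0.0f; $\C{// subscript provably in bound: check can be elided}$
for ( i; 11 ) a[i] = 0.0f; $\C{// subscript possibly out of bound: check retained}$
\end{cfa}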
1084
1085TODO: paste source and assembly codes
1086
1087Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized.
1088The case is naive matrix multiplication over a row-major encoding.
1089
1090TODO: paste source codes
1091
1092
1093
1094\section{Array lifecycle}
1095
1096An array is an aggregate, like a struct.
1097Both wrap subordinate objects.
1098An arbitrary other type, like @string@, can be used as a struct member or an array element.
1099When it is, that type's lifecycle assumptions must be respected.
In the case of @string@, it would be disastrous to start calling string-management functions on uninitialized storage.
1101The @string@ type provides its constructors and destructor;
1102\CFA is supposed to provide a reasonable assurance that they will be called, even if the string is wrapped in something else, even if the user-programmer isn't thinking about it.
1103
1104Preexisting \CFA mechanisms achieve this requirement in correctness, but not with reasonable performance.
1105Furthermore, advanced users of arrays need an exception to the bluntly-stated requirement, which does not come up when working with other aggregates.
1106So the \CFA array has subtleties in its support for element lifecycle.
1107
1108The preexisting \CFA support for contained-element lifecycle is based on a recursive implementation of the object-type (otype) pseudo-trait.
1109A type is an otype if it provides a parameterless constructor, copy constructor, assignment operator and destructor.
1110When declaring a struct whose members are all otypes, the compiler generates implementations of the four otype functions for the outer struct.
The generated parameterless constructor for the outer struct calls the parameterless constructor for each member, and the other otype functions work similarly.
1112For a member that is a C array, these calls occur in loops, which work correctly for dynamic bounds.
1113This logic works the same, whether the member is a concrete type (that happens to be an otype) or if the member is a polymorphic type variable asserted to be an otype (which is implicit in the default syntax, @forall(T)@).
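As a sketch of this generation (the struct and the emitted code are illustrative, not the compiler's literal output), a struct with an otype member and a C-array member gets, roughly:
\begin{cfa}
struct pair {
	string name;
	float vals[10];
};
// generated parameterless constructor, roughly:
void ?{}( pair & this ) {
	?{}( this.name ); $\C{// member's parameterless ctor}$
	for ( i; 10 ) ?{}( this.vals[i] ); $\C{// loop over C-array member}$
}
\end{cfa}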
1114The \CFA array has the simplified form (similar to one seen before):
1115\begin{cfa}
1116forall( T* ) // non-otype element, no lifecycle functions
1117// forall( T ) // otype element, lifecycle functions asserted
1118struct array5 {
1119 T __items[ 5 ];
1120};
1121\end{cfa}
1122Being a struct with a C-array member, using the otype-form declaration for @T@ causes @array5(float)@ to implement otype too.
1123
1124But this otype-recursion pattern has a performance issue.
1125A cube of @float@ is, in the running simplification:
1126\begin{cfa}
1127array5( array5( array5( float ) ) )
1128\end{cfa}
Its lifecycle functions, under the otype-recursion pattern, assert the following.
\begin{itemize}
\item
@float@ offers four lifecycle functions: parameterless constructor, copy constructor, assignment operator, and destructor.
\item
@array5(T)@ offers the same four functions, where each one asserts all four of @T@'s lifecycle functions.
\item
Hence, each of @array5(float)@'s four functions binds all four of @float@'s functions, giving $4 \times 4 = 16$ leaf assertions.
\item
Each of @array5(array5(float))@'s four functions binds @array5(float)@'s four functions, each of which in turn binds @float@'s four, giving $4 \times 16 = 64$ leaf assertions.
\item
@array5(array5(array5(float)))@ similarly reaches $4 \times 64 = 256$ leaf assertions.
\end{itemize}
1216
1217Each xxx corresponds to a generated thunk.
1218So the otype-recursion pattern generates a quantity of thunks that is exponential in the number of dimensions.
1219Anecdotal experience with this solutuion found the resulting compile times annoying at three dimensions, and unusable at four.

But the otype-recursion pattern uses more assertions than are really needed.
Consider how @array5(float)@'s parameterless constructor receives all four lifecycle assertions about the element type, @float@.
It needs only @float@'s parameterless constructor; it receives a destructor (among others) and never uses it.
The same idea applies to the copy constructor and the destructor.
For the assignment operator, it turns out that today's RAII pattern has it using a copy constructor, assignment operator and destructor.
Concurrent work on the \CFA team aims to improve this whole situation.
A workaround is needed for now.

The workaround is to provide the parameterless constructor, copy constructor and destructor, defined with lean, bespoke assertions:

\noindent
\begin{tabular}{@{}l@{\hspace{0.5in}}l@{}}
\begin{cfa}
// autogenerated for otype-recursion:
forall( T )
void ?{}( array5(T) & this ) {
	for (i; 5) { ( this[i] ){}; }
}
forall( T )
void ?{}( array5(T) & this, array5(T) & src ) {
	for (i; 5) { ( this[i] ){ src[i] }; }
}
forall( T )
void ^?{}( array5(T) & this ) {
	for (i; 5) { ^( this[i] ){}; }
}
\end{cfa}
&
\begin{cfa}
// lean, bespoke:
forall( T* | { void @?{}( T & )@; } )
void ?{}( array5(T) & this ) {
	for (i; 5) { ( this[i] ){}; }
}
forall( T* | { void @?{}( T &, T )@; } )
void ?{}( array5(T) & this, array5(T) & src ) {
	for (i; 5) { ( this[i] ){ src[i] }; }
}
forall( T* | { void @^?{}( T & )@; } )
void ^?{}( array5(T) & this ) {
	for (i; 5) { ^( this[i] ){}; }
}
\end{cfa}
\end{tabular}
The assignment operator is omitted, because array assignment is not a common operation.
This way, the critical lifecycle functions are available, with no growth in thunk creation.

Finally, the intuition that a programmer using an array always wants element constructors called \emph{automatically} is simplistic.
Arrays exist to store different values at each position.
So an array initialization, in general, needs to let the programmer provide different constructor arguments to each element.
\begin{cfa}
thread worker;
void ?{}( worker & ) = void;
void ?{}( worker &, int worker_num );
array(worker, 5) ws @= ???@;
for (i; 5) ( ws[i] ){ @i@ };
\end{cfa}

Two \CFA features may, at first, seem viable for initializing the array @ws@, but on closer inspection, they are not.
\begin{itemize}
\item
	Empty initializer / default constructor: \lstinline{array(worker, 5) ws;}.
	This option does not work with the semantics of a thread: a thread has forked as soon as its constructor returns.
	The type does not provide a parameterless constructor because it is not possible to fork the thread until the @worker_num@ parameter is received.
\item
	C initialization: \lstinline{array(worker, 5) ws} \texttt{\symbol{64}= \{\};}.
	This option does achieve the desired semantics on the construction side.
	But on the destruction side, the desired semantics is for implicit destructor calls to continue, because with threads, the destructor is a join operation.
	C initialization disables \emph{all} implicit lifecycle management; here, the goal is to disable only the implicit construction.
\end{itemize}

So, I enhanced the \CFA standard library to provide the missing semantics, available in either form:
\begin{cfa}
array(@uninit@(worker), 5) ws1;
array(worker, 5) ws2 = { @delay_init@ };
\end{cfa}
Both cause the @ws@-construction-time implicit call chain to stop before reaching a @worker@ constructor, while leaving the implicit destructor calls intact.
Two forms are available, to parallel the development of this feature in \uCpp.
Originally, \uCpp offered only the @ws1@ form, there with the class template @uNoCtor@ replacing \CFA's @uninit@.
More recently, \uCpp got a declaration-helping macro, @uArray@, whose usage is similar to the @ws2@ case.
Based on the results of piloting @uArray@ (as a replacement for @uNoCtor@), future work may choose to remove one of the options.

% note to Mike, I have more fragments on some details available in push24\fragments\uNoCtor.txt

\section{Comparison with other arrays}


\subsection{Rust}

\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.

\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, which is generated automatically and does not appear at all in a user's program.


\subsection{Java}

Java arrays are arrays-of-arrays because all objects are references \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
\begin{cquote}
\begin{tabular}{rl}
C & @void f( size_t n, size_t m, float x[n][m] );@ \\
Java & @void f( float x[][] );@
\end{tabular}
\end{cquote}
However, in the C prototype, the parameters @n@ and @m@ are documentation only, indicating the intended size of the first and second dimension of @x@.
\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java is bounds checked, while C has no checking.
Both Java and C require explicit null checking, otherwise there is a runtime failure.

\begin{figure}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{ll@{}}
\begin{java}
int m[][] = { // triangular matrix
	new int [4],
	new int [3],
	new int [2],
	new int [1],
	null
};

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) continue;
	for ( int c = 0; c < m[r].length; c += 1 ) {
		m[r][c] = c + r; // subscript checking
	}

}

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) {
		System.out.println( "null row" );
		continue;
	}
	for ( int c = 0; c < m[r].length; c += 1 ) {
		System.out.print( m[r][c] + " " );
	}
	System.out.println();

}
\end{java}
&
\begin{cfa}
int * m[5] = { // triangular matrix
	calloc( 4, sizeof(int) ),
	calloc( 3, sizeof(int) ),
	calloc( 2, sizeof(int) ),
	calloc( 1, sizeof(int) ),
	NULL
};
int rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) continue;
	for ( int c = 0; c < rlnth; c += 1 ) {
		m[r][c] = c + r; // no subscript checking
	}
	rlnth -= 1;
}
rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) {
		printf( "null row\n" );
		continue;
	}
	for ( int c = 0; c < rlnth; c += 1 ) {
		printf( "%d ", m[r][c] );
	}
	printf( "\n" );
	rlnth -= 1;
}
\end{cfa}
\end{tabular}
\caption{Java (left) \vs C (right) Triangular Matrix}
\label{f:JavaVsCTriangularMatrix}
\end{figure}

The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
Furthermore, there is the cost of managing the implicit array descriptor.
It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.


\subsection{\CC}

Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
While the vector size can grow and shrink dynamically, unlike the fixed dynamic size of a VLA, the cost of this extra feature can be mitigated by preallocating the maximum size (like the VLA) at the declaration (one dynamic call), avoiding the use of @push_back@.
\begin{c++}
vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
\end{c++}
Multidimensional arrays are arrays-of-arrays with the associated costs.
Each @vector@ array has an array descriptor containing the dimension, which allows bounds-checked access using @x.at(i)@ in place of @x[i]@.
Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, preventing a costly reallocation and copy, and never invalidating references to contained values.
This scheme matches Java's safety and expressiveness exactly, but with the inherent costs.


\subsection{Levels of dependently typed arrays}

The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
\begin{itemize}
\item a \emph{zip}-style operation that consumes two arrays of equal length
\item a \emph{map}-style operation whose produced length matches the consumed length
\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and the ability to opt out.
Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.

Dependent type systems, considered for the purpose of bound tracking, can be full-strength or restricted.
In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, even, typically, a potential for its nontermination.
In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
[TODO: clarify how even Idris type checking terminates]

Idris is a current, general-purpose dependently typed programming language.
Length checking is a common benchmark for full dependent type systems.
Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
[TODO: finish explaining what Data.Vect is and then the essence of the comparison]
POINTS:
here is how our basic checks look (on a system that does not have to compromise);
it can also do these other cool checks, but watch how I can mess with its conservativeness and termination

Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.
So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.



The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
There is a particular emphasis on an existential type, enabling callee-determined return shapes.


Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.

Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.

\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet having no instances.
Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.

\subsection{Static safety in C extensions}


\section{Future work}

\subsection{Declaration syntax}

\subsection{Range slicing}

\subsection{With a module system}

\subsection{With described enumerations}

A project in \CFA's current portfolio will improve enumerations.
In the incumbent state, \CFA has C's enumerations, unmodified.
I will not discuss the core of this project, which already has a tall mission: to improve type safety, maintain appropriate C compatibility, and offer more flexibility about storage use.
It also has a candidate stretch goal: to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
\begin{cfa}
forall( T | is_enum(T) )
void show_in_context( T val ) {
	for( T i ) {
		string decorator = "";
		if ( i == val-1 ) decorator = "< ready";
		if ( i == val ) decorator = "< go" ;
		sout | i | decorator;
	}
}
enum weekday { mon, tue, wed = 500, thu, fri };
show_in_context( wed );
\end{cfa}
with output
\begin{cfa}
mon
tue < ready
wed < go
thu
fri
\end{cfa}
The details in this presentation are not meant to be taken too precisely as suggestions for how it should look in \CFA.
But the example shows these abilities:
\begin{itemize}
\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
\item a provision for looping (the @for@ form used) over the values of the type.
\end{itemize}

If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bound checks.
[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, and now seeks confirmation.
To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.

[TODO, fix confusion: Idris has this arrangement of checks, but still has the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
get_size n : Unit -> Int
ordinal : n -> Int
unsafe_from_ordinal n : Int -> n
\end{cfa}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many dimensions (and in what order) a caller uses when interacting with the arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements:
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
get_size = \(). size a * size b
ordinal = \(i, j). (ordinal i * size b) + ordinal j
unsafe_from_ordinal = \o.
bs = size b
(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time:
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]

In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Retire pointer arithmetic}


\section{\CFA}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\CFA features interacting with arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild, work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function, in which type information associated with a polymorphic return type replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
This argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.

TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@:
they are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' resolves this issue. TODO: discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
TODO xref: the C standard does not claim that @ar1@ may be subscripted, because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls, where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

TODO: I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?