\chapter{Array}
\label{c:Array}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array type, @array@, to the \CFA standard library~\cite{Cforall}.

Offering the @array@ type as a distinct alternative to the C array is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and present a cohesive selection of features, without having to deal with every inherited complexity of the C array.


\section{Introduction}
\label{s:ArrayIntro}

The new \CFA array is declared by instantiating the generic @array@ type,
much like instantiating any other standard-library generic type (such as \CC @vector@),
though using a new style of generic parameter.
\begin{cfa}
@array( float, 99 )@ x;	$\C[2.5in]{// x contains 99 floats}$
\end{cfa}
Here, the arguments to the @array@ type are @float@ (element type) and @99@ (dimension).
When this type is used as a function parameter, the type system requires the argument to be an exact match.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {}	$\C{// p accepts 42 floats}$
f( x );	$\C{// statically rejected: type lengths are different, 99 != 42}$
test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
Function @f@'s parameter expects an array with dimension 42, but the argument dimension 99 does not match.

A function can be polymorphic over @array@ arguments using the \CFA @forall@ declaration prefix.
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
	T elem = p[i];	$\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 );	$\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 );	$\C{// T is float, N is 99, dynamic subscript check fails}$
Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
Function @g@ takes an arbitrary type parameter @T@ and an unsigned integer \emph{dimension} @N@.
The dimension represents a to-be-determined number of elements, managed by the type system, where 0 represents an empty array.
The type system implicitly infers @float@ for @T@ and @99@ for @N@.
Furthermore, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ for argument @x@.
The call @g( x, 1000 )@ is also accepted at compile time.
However, the subscript, @x[1000]@, generates a runtime error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
In general, @forall( ..., [N] )@ provides the user-visible declaration of the name @N@, which becomes usable in parameter/return declarations and within the function body.
The syntactic form is chosen to parallel other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ...	$\C{// dimension}$
forall( T ) ...	$\C{// value datatype}$
forall( T & ) ...	$\C{// opaque datatype}$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.

The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller has an argument that instantiates @N@ with 42.
\begin{cfa}
forall( [N] )
void f( ... ) {
	float x1[@N@];	$\C{// C array, no subscript checking}$
	array(float, N) x2;	$\C{// \CFA array, subscript checking}\CRT$
}
\end{cfa}
Both of the stack-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout, with no additional ``bookkeeping'' allocations or headers.
The C array, @x1@, has no subscript checking, while the \CFA array, @x2@, does.
Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, efficient (in space and time) array type, which forms the foundation for more complex array forms in \CFA.
In all following discussion, ``C array'' means types like @x1@ and ``\CFA array'' means types like @x2@.

A future goal is to provide the new @array@ features with syntax approaching C's (declaration style of @x1@).
Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
At present, the C-syntax @array@ is only partially supported, so the generic @array@ is used exclusively in this thesis.

My contributions in this chapter are:
\begin{enumerate}[leftmargin=*]
\item A type-system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
\item A dimension/subscript-checked array type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Argument/parameter passing safety for arrays, along with subscript safety.
\item Identification of the interesting specific abilities made available by the new @array@ type.
\item Where there is a gap concerning this feature's readiness for prime time, identification of specific workable improvements that are likely to close the gap.
\end{enumerate}


\begin{comment}
\section{Dependent Typing}

General dependent typing allows a type system to encode arbitrary predicates, \eg behavioural specifications for functions, which is an anti-goal for my work.
Firstly, this application is strongly associated with pure functional languages,
where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
is a sufficient postcondition.
In an imperative language like C and \CFA, it is also necessary to discuss side effects,
for which an even heavier formalism, like separation logic, is required.
Secondly, TODO: bash Rust.
TODO: cite the crap out of these claims.
\end{comment}


\section{Features Added}

This section shows more about using the \CFA array and dimension parameters, demonstrating syntax and semantics by way of motivating examples.
As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.

By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return type, meaning the input arrays and the return array have the same length.
\lstinput{10-17}{hello-array.cfa}
Function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array uses the library @alloc@ function,
\begin{cfa}
forall( T & | sized(T) )
T * alloc() {
	return @(T *)@malloc( @sizeof(T)@ );
}
\end{cfa}
which captures the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note, @alloc@ only sees the whole type for its @T@, here @array(bool, N)@, where this type's size is a computation based on @N@.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing the necessary \emph{sized} assertions needed by other types.
(\emph{sized} implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, there is significant programming safety by making the size accessible and implicit, compared with C's @calloc@ and non-array supporting @memalign@, which take an explicit length parameter not managed by the type system.

\begin{figure}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{\lstinline{f} Example}
\label{f:fExample}
\end{figure}

\VRef[Figure]{f:fExample} shows an example using function @f@, illustrating how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by the constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-through use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie the explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
The result is a significant improvement in safety and usability.

In summary:
\begin{itemize}[leftmargin=*]
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
@N@ can be used as an expression of type @size_t@ within the function body.
\item
The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
\item
@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@-typed objects.
\end{itemize}

\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
\item
\CC does not allow a template function to be nested, while \CFA allows its polymorphic functions to be nested.
Hence, \CC precludes a simple form of information hiding.
\item
\label{p:DimensionPassing}
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and passed implicitly at the call.
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
\item
\CC cannot have an array of references, but can have an array of @const@ pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
% not really about forall-N vs template-N
% any better CFA support is how Rob left references, not what Mike did to arrays
% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
\item
\label{p:ArrayCopy}
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
\end{enumerate}
The \CC template @array@ type mitigates points \VRef[]{p:DimensionPassing} and \VRef[]{p:ArrayCopy}, but only by pursuing a mechanism similar to the \CFA @array@.

\begin{figure}
\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
	for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
int main() {

	int ret[10], x[10];
	for ( int i = 0; i < 10; i += 1 ) x[i] = i;
	@copy<int, 10 >( ret, x );@
	for ( int i = 0; i < 10; i += 1 )
		cout << ret[i] << ' ';
	cout << endl;
}
\end{c++}
&
\begin{cfa}
int main() {
	@forall( T, [N] )@ // nested function
	void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
		for ( i; N ) ret[i] = x[i];
	}

	const int n = promptForLength();
	array( int, n ) ret, x;
	for ( i; n ) x[i] = i;
	@copy( ret, x );@
	for ( i; n )
		sout | ret[i] | nonl;
	sout | nl;
}
\end{cfa}
\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type }
\label{f:TemplateVsGenericType}
\end{figure}

Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so are length mismatches stopped when they involve dimension parameters.
While \VRef[Figure]{f:fExample} shows successfully calling a function @f@ expecting two arrays of the same length,
\begin{cfa}
array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
a static rejection occurs when attempting to call @f@ with arrays of differing lengths.
\lstinput[tabsize=1]{70-74}{hello-array.cfa}
When the argument lengths themselves are statically unknown,
the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
\begin{tabular}{@{\hspace{0.5in}}l@{\hspace{1in}}l@{}}
\lstinput{90-97}{hello-array.cfa}
&
\lstinput{110-117}{hello-array.cfa}
\end{tabular}

\noindent
This static check's full rules are presented in \VRef[Section]{s:ArrayTypingC}.

Orthogonally, the \CFA array type works within generic \emph{types}, \ie @forall@-on-@struct@.
The same argument safety and the associated implicit communication of array length occur.
Preexisting \CFA allowed aggregate types to be generalized with type parameters, enabling parameterization of element types.
This has been extended to allow parameterizing by dimension.
Doing so gives a refinement of C's ``flexible array member''~\cite[\S~6.7.2.1.18]{C11},
\begin{cfa}
struct S {
	...
	double d [];	// incomplete array type => flexible array member
} * s = malloc( sizeof( struct S ) + sizeof( double [10] ) );
\end{cfa}
which creates a VLA of size 10 @double@s at the end of the structure.
A C flexible array member can only occur at the end of a structure;
\CFA allows length-parameterized array members to be nested at arbitrary locations, with intervening member declarations.
\lstinput{10-15}{hello-accordion.cfa}
The structure has course- and student-level metadata (their respective field names) and a position-based preferences matrix.
Its layout has the starting offset of @studentIds@ varying according to the generic parameter @C@, and the offset of @preferences@ varying according to both generic parameters.

\VRef[Figure]{f:checkExample} shows a program main using @School@ and results with different array sizes.
The @school@ variable holds many students' course-preference forms.
It is on the stack and its initialization does not use any casting or size arithmetic.
Both of these points are impossible with a C flexible array member.
When heap allocation is preferred, the original pattern still applies.
\begin{cfa}
School( classes, students ) * sp = alloc();
\end{cfa}
This ability to avoid casting and size arithmetic improves safety and usability over C flexible array members.
Finally, inputs and outputs are given at the bottom of the figure for different sized schools.
The example program prints the courses in each student's preferred order, all using the looked-up display names.

\begin{figure}
\begin{cquote}
\lstinput{50-55}{hello-accordion.cfa}
\lstinput{90-98}{hello-accordion.cfa}
\ \\
@$ cat school1@
\lstinput{}{school1}

@$ ./a.out < school1@
\lstinput{}{school1.out}

@$ cat school2@
\lstinput{}{school2}

@$ ./a.out < school2@
\lstinput{}{school2.out}
\end{cquote}

\caption{\lstinline{School} Example, Input and Output}
\label{f:checkExample}
\end{figure}

When a function operates on a @School@ structure, the type system handles its memory layout transparently.
\lstinput{30-37}{hello-accordion.cfa}
In the example, function @getPref@ returns, for the student at position @is@, the position of their @pref@\textsuperscript{th}-favoured class.



\section{Dimension Parameter Implementation}

The core of the preexisting \CFA compiler already had the ``heavy equipment'' needed
to provide the feature set just reviewed (up to bugs in cases not yet exercised).
To apply this equipment to tracking array lengths, I encoded a dimension (an array's length) as a type.
The type in question does not describe any data that the program actually uses at runtime.
It simply carries information through intermediate stages of \CFA-to-C lowering.
And this type takes a form such that, once \emph{it} gets lowered, the result does the right thing.

Furthermore, the @array@ type itself is really ``icing on the cake.''
Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl}
(where the added complexity needed for multiple dimensions is considered).
But simplifications close enough for the present discussion are:
\begin{cfa}
forall( [N] )
struct array_1d_float {
	float items[N];
};
forall( T, [N] )
struct array_1d_T {
	T items[N];
};
\end{cfa}
These two structure patterns, plus a subscript operator, are all that @array@ provides.

My main work is letting a programmer define
such a structure (one whose type is parameterized by @[N]@)
and functions that operate on it (these being similarly parameterized).

The repurposed heavy equipment is:
\begin{itemize}[leftmargin=*]
\item
	The resolver, providing values for a used declaration's type-system variables,
	gathered from type information in scope at the usage site.
\item
	The box pass, encoding information about type parameters
	into ``extra'' regular parameters/arguments on declarations and calls.
	Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter,
	and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, \ie a use of the parameter.
\end{itemize}

The rules for resolution had to be restricted slightly, in order to achieve important refusal cases.
This work is detailed in \VRef[Section]{s:ArrayTypingC}.
However, the resolution--boxing scheme, in its preexisting state, was already equipped to work on (desugared) dimension parameters.
The following discussion explains the desugaring and how correctly lowered code results.

A simpler structure, and a toy function on it, demonstrate what is needed for the encoding.
\begin{cfa}
forall( [@N@] ) {	$\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @N@; }	$\C{// [2], [3]}$
}
int main() {
	thing( @10@ ) x;  f( x );	$\C{// prints 10, [4]}$
	thing( @100@ ) y;  f( y );	$\C{// prints 100}$
}
\end{cfa}
This example has:
\begin{enumerate}[leftmargin=*]
\item
	The symbol @N@ being declared as a type variable (a variable of the type system).
\item
	The symbol @N@ being used to parameterize a type.
\item
	The symbol @N@ being used as an expression (value).
\item
	A value like 10 being used as an argument to the parameter @N@.
\end{enumerate}
The chosen solution is to encode the value @N@ \emph{as a type}, so items 1 and 2 are immediately available for free.
Item 3 needs a way to recover the encoded value from a (valid) type (and to reject invalid types occurring here).
Item 4 needs a way to produce a type that encodes the given value.

Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
The production and recovery are then straightforward.
\begin{itemize}[leftmargin=*]
\item
	The value $n$ is encoded as a type whose size is $n$.
\item
	Given a dimension expression $e$, produce an internal type @char[@$e$@]@ to represent it.
	If $e$ evaluates to $n$ then the encoded type has size $n$.
\item
	Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
	If $T$ has size $n$ then the recovery expression evaluates to $n$.
\end{itemize}

This desugaring is applied in a translation step before the resolver.
The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
The running example is lowered to:
\begin{cfa}
forall( @N *@ ) {	$\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @sizeof(N)@; }	$\C{// [2], [3]}$
}
int main() {
	thing( @char[10]@ ) x;  f( x );	$\C{// prints 10, [4]}$
	thing( @char[100]@ ) y;  f( y );	$\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
	@N@ is now declared to be a type.
	It is declared to be \emph{sized} (by the @*@), meaning that the box pass shall do its @sizeof(N)@$\rightarrow$@__sizeof_N@ extra-parameter and expression translation.
\item
	@thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
\item
	The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
\item
	The type of variable @x@ is another @thing(-)@ type; the argument to the generic @thing@ is a type (array type of bytes, @char[@$e$@]@).
\end{enumerate}

From this point, preexisting \CFA compilation takes over lowering it the rest of the way to C.
Here the result shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated:
\begin{cfa}
// [1]
void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; }	$\C{// [2], [3]}$
int main() {
	struct __conc_thing_10 {} x;  f( @10@, &x );	$\C{// prints 10, [4]}$
	struct __conc_thing_100 {} y;  f( @100@, &y );	$\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
	The type parameter @N@ is gone.
\item
	The type @thing(N)@ is (replaced by @void *@, but thereby effectively) gone.
\item
	The @sout...@ expression (being an application of the @?|?@ operator) has a regular variable (parameter) usage for its second argument.
\item
	Information about the particular @thing@ instantiation (value 10) is moved from the type to a regular function-call argument.
\end{enumerate}
At the end of the desugaring and downstream processing, the original C idiom of ``pass both a length parameter and a pointer'' has been reconstructed.
In the programmer-written form, only the @thing@ is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.

At the compiler front end, parsing changes, AST schema extensions, and validation rules enable the sugared user input.
\begin{itemize}[leftmargin=*]
\item
	Recognize the form @[N]@ as a type-variable declaration within a @forall@.
\item
	Add a new brand of type variable, \emph{Dimension}, to the AST form of a type variable, to represent one parsed from @[-]@.
\item
	Allow a type variable to occur in an expression.  Validate (after parsing) that only dimension-branded type variables are used here.
\item
	Allow an expression to occur in type-argument position.  Brand the resulting type argument as a dimension.
\item
	Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
	\begin{cfa}
	array_1d( foo, bar ) x;
	\end{cfa}
	\vspace*{-10pt}
	that the brands on the generic arguments match the brands of the declared type variables.
	Here, that @foo@ is a type and @bar@ is a dimension.
\end{itemize}


\section{Typing of C Arrays}
\label{s:ArrayTypingC}

Essential to giving a guarantee of accurate length is the compiler's ability to reject a program that mishandles length.
By contrast, most discussion so far deals with communicating length from one party, who knows it, to another, willing to work with any given length.
For scenarios where the concern is a mishandled length, the interaction is between two parties who both claim to know something about it.

C and \CFA can check the compatibility of two static values.
\begin{cfa}
enum { n = 42 };
float x[@n@];	// or just 42
float (*xp1)[@42@] = &x;	// accept
float (*xp2)[@999@] = &x;	// reject
warning: initialization of 'float (*)[999]' from incompatible pointer type 'float (*)[42]'
\end{cfa}
When a variable is involved, C and \CFA take two different approaches.
Today's C compilers accept the following without warning.
\begin{cfa}
static const int n = 42;
float x[@n@];
float (* xp)[@999@] = &x;	$\C{// should be static rejection here}$
(*xp)[@500@];	$\C{// in ``bound''?}$
\end{cfa}
Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bounds with respect to its referent @x@,
the access appears in-bounds according to the type information available on @xp@.
In fact, length is being mishandled in the previous step, where the type-carried length information on @x@ is not compatible with that of @xp@.

In \CFA, I choose to reject this C example at the point where the type-carried length information on @x@ is not compatible with that of @xp@, and correspondingly, its @array@ counterpart at the same location:
\begin{cfa}
static const int n = 42;
array( float, @n@ ) x;
array( float, @999@ ) * xp = &x;	$\C{// static rejection here}$
(*xp)[@500@];	$\C{// runtime check passes}$
\end{cfa}
The way the \CFA array is implemented, the type analysis for this case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[parsep=0pt]
\item
	Is @array( float, 999 )@ type-compatible with @array( float, n )@?
\item
	Is the desugared @array( float, char[999] )@ type-compatible with the desugared @array( float, char[n] )@?
% \footnote{
% Here, \lstinline{arrayX} represents the type that results from desugaring the \lstinline{array} type into a type whose generic parameters are all types.
% This presentation elides the noisy fact that \lstinline{array} is actually a macro for something bigger;
% the reduction to \lstinline{char [-]} still proceeds as sketched.}
\item
	Is the internal type @char[999]@ type-compatible with the internal type @char[n]@?
\end{itemize}
The answer is no because, in general, the value of @n@ is unknown at compile time, and hence, an error is raised.
For safety, it makes sense to reject the corresponding C case as well, which is a non-backwards-compatible change.
There are two mitigations for this incompatibility.

First, a simple recourse is available in a situation where @n@ is \emph{known} to be 999: use a cast.
\begin{cfa}
float (* xp)[999] = @(float (*)[999])@&x;
\end{cfa}
The cast means the programmer has accepted blame.
Moreover, the cast is ``eye candy'' marking where the unchecked length knowledge is used.
Therefore, a program being onboarded to \CFA requires some upgrading to satisfy the \CFA rules (and arguably becomes clearer), without giving up its validity to a plain C compiler.
Second, the incompatibility only affects types like pointer-to-array, which are infrequently used in C.
The more common C idiom for aliasing an array is to use a pointer-to-first-element type, which does not participate in the \CFA array's length checking.\footnote{
	Notably, the desugaring of the \lstinline{array} type avoids letting any \lstinline{-[-]} type decay,
	in order to preserve the length information that powers runtime bound-checking.}
Therefore, the need to upgrade legacy C code is low.
Finally, if this incompatibility is a problem when onboarding C programs to \CFA, it should be possible to change the C type check to a warning rather than an error, acting as a \emph{lint} of the original code for a missing type annotation.

To handle two occurrences of the same variable, more information is needed; \eg, this case is fine,
\begin{cfa}
int n = 42;
float x[@n@];
float (*xp)[@n@] = &x;	// accept
\end{cfa}
where @n@ remains fixed across a contiguous declaration context.
However, an intervening dynamic statement causes a failure.
\begin{cfa}
int n = 42;
float x[@n@];
@n@ = 999;	// dynamic change
float (*xp)[@n@] = &x;	// reject
\end{cfa}
However, side effects can also occur within a contiguous declaration context.
\begin{cquote}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// compile unit 1
extern int @n@;
extern float g();
void f() {
	float x[@n@] = { g() };
	float (*xp)[@n@] = &x;	// reject
}
\end{cfa}
&
\begin{cfa}
// compile unit 2
int @n@ = 42;
float g() {
	@n@ = 99;
	return 0.;
}

\end{cfa}
\end{tabular}
\end{cquote}
The issue here is that the knowledge needed to make a correct decision is hidden by separate compilation.
Even within a translation unit, static analysis might not be able to provide all the information.
However, if the example uses @const@, the check becomes possible.
\begin{cquote}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// compile unit 1
extern @const@ int n;
extern float g();
void f() {
	float x[n] = { g() };
	float (*xp)[n] = &x;	// accept
}
\end{cfa}
&
\begin{cfa}
// compile unit 2
@const@ int n = 42;
float g() {
	@n = 99@;	// static rejection
	return 0.;
}

\end{cfa}
\end{tabular}
\end{cquote}
587
588In summary, the new rules classify expressions into three groups:
589\begin{description}
590\item[Statically Evaluable]
591 Expressions for which a specific value can be calculated (conservatively) at compile-time.
592 A preexisting \CFA compiler module defines which literals, enumerators, and expressions qualify and evaluates them.
593\item[Dynamic but Stable]
594 The value of a variable declared as @const@, including a @const@ parameter.
595\item[Potentially Unstable]
596 The catch-all category. Notable examples include:
597 any function-call result, @float x[foo()]@, the particular function-call result that is a pointer dereference, @void f(const int * n)@ @{ float x[*n]; }@, and any use of a reference-typed variable.
598\end{description}
599Within these groups, my \CFA rules are:
600\begin{itemize}[leftmargin=*]
601\item
602 Accept a Statically Evaluable pair, if both expressions have the same value.
603 Notably, this rule allows a literal to match with an enumeration value, based on the value.
604\item
605 Accept a Dynamic but Stable pair, if both expressions are written out the same, \eg refers to the same variable declaration.
606\item
607 Otherwise, reject.
608 Notably, reject all pairs from the Potentially Unstable group and all pairs that cross groups.
609\end{itemize}
610The traditional C rules are:
611\begin{itemize}[leftmargin=*]
612\item
613 Reject a Statically Evaluable pair, if the expressions have two different values.
614\item
615 Otherwise, accept.
616\end{itemize}
617\VRef[Figure]{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that the \CFA false alarms occur only in cases that C handles unsafely.
It also shows that C-incompatibilities occur only in those same unsafe cases.
620
621\begin{figure}
622 \newcommand{\falsealarm}{{\color{blue}\small{*}}}
623 \newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
624
625 \begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
626 & \multicolumn{2}{c}{\underline{Values Equal}}
627 & \multicolumn{2}{c}{\underline{Values Unequal}}
628 & \\
629 \textbf{Case} & C & \CFA & C & \CFA & Compat. \\
630 Both Statically Evaluable, Same Symbol & Accept & Accept & & & \cmark \\
631 Both Statically Evaluable, Different Symbols & Accept & Accept & Reject & Reject & \cmark \\
632 Both Dynamic but Stable, Same Symbol & Accept & Accept & & & \cmark \\
633 Both Dynamic but Stable, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
634 Both Potentially Unstable, Same Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
635 Any other grouping, Different Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark
636 \end{tabular}
637
638 \medskip
639 \noindent\textbf{Legend}
640 \begin{itemize}[leftmargin=*]
641 \item
642 Each row gives the treatment of a test harness of the form
643 \begin{cfa}
644 float x[ expr1 ];
645 float (*xp)[ expr2 ] = &x;
646 \end{cfa}
647 \vspace*{-10pt}
648 where \lstinline{expr1} and \lstinline{expr2} are meta-variables varying according to the row's Case.
649 Each row's claim applies to other harnesses too, including,
650 \begin{itemize}[leftmargin=*]
651 \item
652 calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
653 \item
654 assignment in place of initialization,
655 \item
656 using references in place of pointers, and
657 \item
658 for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
659 \end{itemize}
660 \item
661 Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
662 even though most test harnesses are asymmetric.
663 \item
664 The table treats symbolic identity (Same/Different on rows)
665 apart from value equality (Equal/Unequal on columns).
666 \begin{itemize}[leftmargin=*]
667 \item
668 The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
669 (where \lstinline{n} is a variable with value 1),
670 are all different symbols with the value 1.
671 \item
672 The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
673 \item
674 The row distinction expresses the simple static factors used by today's analyses.
675 \end{itemize}
676 \item
677 Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
678 while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
679 \end{itemize}
680
681 \caption{Case comparison for array type compatibility, given pairs of dimension expressions.}
682 \label{f:DimexprRuleCompare}
683\end{figure}
684
685\begin{comment}
686Given that the above check
687\begin{itemize}
688 \item
689 Is internal type @char[999]@ type-compatible with internal type @char[n]@?
690\end{itemize}
691answers false, discussion turns to how I got the \CFA compiler to produce this answer.
692In its preexisting form, the type system had a buggy approximation of the C rules.
693To implement the new \CFA rules, I added one further step.
694\begin{itemize}
695 \item
696 Is @999@ compatible with @n@?
697\end{itemize}
698This question applies to a pair of expressions, where the earlier question applies to types.
699An expression-compatibility question is a new addition to the \CFA compiler, and occurs in the context of dimension expressions, and possibly enumerations assigns, which must be unique.
700
701% TODO: ensure these compiler implementation matters are treated under \CFA compiler background: type unification, cost calculation, GenPoly.
702
703The relevant technical component of the \CFA compiler is the standard type-unification within the type resolver.
704\begin{cfa}
705example
706\end{cfa}
707I added rules for continuing this unification into expressions that occur within types.
708It is still fundamentally doing \emph{type} unification because it is participating in binding type variables, and not participating in binding any variables that stand in for expression fragments (for there is no such sort of variable in \CFA's analysis.)
709An unfortunate fact about the \CFA compiler's preexisting implementation is that type unification suffers from two forms of duplication.
710
711In detail, the first duplication has (many of) the unification rules stated twice.
712As a result, my additions for dimension expressions are stated twice.
713The extra statement of the rules occurs in the @GenPoly@ module, where concrete types like @array( int, 5 )@\footnote{
714 Again, the presentation is simplified
715 by leaving the \lstinline{array} macro unexpanded.}
716are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
717In this case, the struct's definition contains fields that hardcode the argument values of @float@ and @5@.
718The next time an @array( -, - )@ concrete instance is encountered, it checks if the previous @struct __conc_array_1234@ is suitable for it.
719Yes, for another occurrence of @array( int, 5 )@;
720no, for examples like @array( int, 42 )@ or @array( rational(int), 5 )@.
721In the first example, it must reject the reuse hypothesis for a dimension-@5@ and a dimension-@42@ instance.
722
723The second duplication has unification (proper) being invoked at two stages of expression resolution.
724As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
725In the program
726\begin{cfa}
727void @f@( double ); // overload
728forall( T & ) void @f@( T & ); // overload
729void g( int n ) {
730 array( float, n + 1 ) x;
731 f(x); // overloaded
732}
733\end{cfa}
734when resolving a function call to @g@, the first unification stage compares the type @T@ of the parameter with @array( float, n + 1 )@, of the argument.
735\PAB{TODO: finish.}
736
737The actual rules for comparing two dimension expressions are conservative.
738To answer, ``yes, consider this pair of expressions to be matching,''
739is to imply, ``all else being equal, allow an array with length calculated by $e_1$
740to be passed to a function expecting a length-$e_2$ array.''\footnote{
741 TODO: Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
742 Should it be an earlier scoping principle? Feels like it should matter in more places than here.}
743So, a ``yes'' answer must represent a guarantee that both expressions evaluate the
744same result, while a ``no'' can tolerate ``they might, but we're not sure'',
745provided that practical recourses are available
746to let programmers express better knowledge.
747The new rule-set in the current release is, in fact, extremely conservative.
748I chose to keep things simple,
749and allow future needs to drive adding additional complexity, within the new framework.
750
751For starters, the original motivating example's rejection is not based on knowledge that the @xp@ length of (the literal) 999 is value-unequal to the (obvious) runtime value of the variable @n@, which is the @x@ length.
752Rather, the analysis assumes a variable's value can be anything, and so there can be no guarantee that its value is 999.
753So, a variable and a literal can never match.
754
755TODO: Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation
756\end{comment}
757
The conservatism of the new rule set can leave a programmer needing a recourse, when using a dimension expression whose stability argument is more subtle than the current analysis can follow.
This recourse is to declare an explicit constant for the dimension value.
Consider these two dimension expressions, whose uses are rejected by the blunt current rules:
761\begin{cfa}
762void f( int @&@ nr, @const@ int nv ) {
763 float x[@nr@];
764 float (*xp)[@nr@] = &x; // reject: nr varying (no references)
765 float y[@nv + 1@];
766 float (*yp)[@nv + 1@] = &y; // reject: ?+? unpredictable (no functions)
767}
768\end{cfa}
Yet, both dimension expressions are safe to reuse.
The @nr@ reference is never written and is not volatile, meaning there is no implicit code (load) between the declarations, and control does not leave the function between the uses.
As well, the built-in @?+?@ function is predictable.
To make these cases work, the programmer must add the following constant declarations (a cast does not work):
773\begin{cfa}
774void f( int & nr, const int nv ) {
775 @const int nx@ = nr;
776 float x[nx];
777 float (*xp)[nx] = & x; // accept
778 @const int ny@ = nv + 1;
779 float y[ny];
780 float (*yp)[ny] = & y; // accept
781}
782\end{cfa}
783The result is the originally intended semantics,
784achieved by adding a superfluous ``snapshot it as of now'' directive.
785
786The snapshot trick is also used by the \CFA translation, though to achieve a different outcome.
787Rather obviously, every array must be subscriptable, even a bizarre one:
788\begin{cfa}
789array( float, @rand(10)@ ) x;
790x[@0@]; // 10% chance of bound-check failure
791\end{cfa}
792Less obvious is that the mechanism of subscripting is a function call, which must communicate length accurately.
793The bound-check above (callee logic) must use the actual allocated length of @x@, without mistakenly reevaluating the dimension expression, @rand(10)@.
794Adjusting the example to make the function's use of length more explicit:
795\begin{cfa}
796forall( T * )
797void f( T * x ) { sout | sizeof( *x ); }
798float x[ rand(10) ];
799f( x );
800\end{cfa}
801Considering that the partly translated function declaration is, loosely,
802\begin{cfa}
803void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
804\end{cfa}
a naive translation evaluates the dimension expression twice:
806\begin{cfa}
807float x[ rand(10) ];
808f( rand(10), &x );
809\end{cfa}
810The correct form is:
811\begin{cfa}
812size_t __dim_x = rand(10);
813float x[ __dim_x ];
814f( __dim_x, &x );
815\end{cfa}
Dimension hoisting already existed in the \CFA compiler, but it was buggy, particularly in determining, ``Can hoisting the expression be skipped here?'', for skipping this hoisting is clearly desirable in some cases.
For example, a programmer may have already hoisted the expression to perform an optimization that precludes duplicate code and/or duplicate expression evaluation.
In the new implementation, these cases are handled correctly, harmonized with the accept/reject criteria.
820
821
822\section{Multidimensional Array Implementation}
823\label{s:ArrayMdImpl}
824
825A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
826\begin{enumerate}[leftmargin=*]
827\item
828Flexible-stride memory:
829this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane, row, or column, \eg @c[3][*][*]@, @c[3][4][*]@, @c[3][*][5]@.
830\item
831Fixed-stride memory:
832this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
833After which, subscripting and memory layout are independent.
834\item
Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
A programmer must manually build any notion of dimensions using other tools;
hence, this style does not offer multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
839\end{enumerate}
840
There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than my numerical ordering.
842That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
843I chose to build the \CFA style-1 array upon a style-2 abstraction.
844(Justification of the decision follows, after the description of the design.)
845
846Style 3 is the inevitable target of any array implementation.
847The hardware offers this model to the C compiler, with bytes as the unit of displacement.
848C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
849Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
850
Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-2 system is the ability to slice a column from a column-finest matrix.
854The required memory shape of such a slice is fixed, before any discussion of implementation.
855The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
856TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
857
858The new \CFA standard library @array@ datatype supports richer multidimensional features than C.
859The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
860Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
861
The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$ when stepping across the length-7 finely-strided column dimension, and by $1$ when stepping across the length-5 coarsely-strided row dimension.
863\par
864\mbox{\lstinput{121-126}{hello-md.cfa}}
865\par\noindent
866The memory layout is 35 contiguous elements with strictly increasing addresses.
867
868A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
869As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
870This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
871The following is an example slicing a row.
872\lstinput{60-64}{hello-md.cfa}
873\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
874
875However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
876Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
877That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
878The new-array library provides the trait @ar@, so-defined.
879With it, the original declaration can be generalized with the same body.
880\lstinput{43-44}{hello-md.cfa}
881\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
882The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value''; the result awaits such a value and implements the @ar@ trait.
884\lstinput{150-151}{hello-md.cfa}
885
886The example shows @x[2]@ and @x[[2, all]]@ both refer to the same, ``2.*'' slice.
887Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
888This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
889That is:
890\begin{cquote}
891\begin{tabular}{@{}cccccl@{}}
892@x[[2,all]][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
893@x[[all,3]][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
894\end{tabular}
895\end{cquote}
896
897Narrating progress through each of the @-[-][-][-]@\footnote{
898The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
899expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
900Where @all@ is redundant:
901\begin{cquote}
902\begin{tabular}{@{}ll@{}}
903@x@ & 2-dimensional, want subscripts for coarse then fine \\
904@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
905@x[2][all]@ & 1-dimensional, want subscript for fine \\
906@x[2][all][3]@ & 0-dimensional; lock fine == 3
907\end{tabular}
908\end{cquote}
909Where @all@ is effective:
910\begin{cquote}
911\begin{tabular}{@{}ll@{}}
912@x@ & 2-dimensional, want subscripts for coarse then fine \\
913@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
914@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
915@x[all][3][2]@ & 0-dimensional; lock coarse == 2
916\end{tabular}
917\end{cquote}
918The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
919For example, in a two dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
920The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
921
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
The @-[all]@ operation converts a reference to one instantiation of this type into a reference to another instantiation, reinterpreting the same storage under different stride information.
924The running example's @all@-effective step, stated more concretely, is:
925\begin{cquote}
926\begin{tabular}{@{}ll@{}}
927@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @floats@ apart \\
928@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
929\end{tabular}
930\end{cquote}
931
932\begin{figure}
933\includegraphics{measuring-like-layout}
934\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
935The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
936The vertical shaded band highlights the location of the targeted element, 2.3.
937Any such vertical slice contains various interpretations of a single address.}
938\label{fig:subscr-all}
939\end{figure}
940
941\VRef[Figure]{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
942In both cases, value 2 selects from the coarser dimension (rows of @x@),
943while the value 3 selects from the finer dimension (columns of @x@).
944The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
945Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
946These two starting expressions, which are the example's only multidimensional subexpressions
947(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
948
The figure's presentation offers an intuition for the question: What is an atomic element of @x[all]@?
950From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
An atom (like the bottommost value, @x[all][3][2]@) is the contained value (in the square box)
and a lie about its size (the left diagonal above it, growing upward).
953An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
954call such an array a column.
955A column is almost ready to be arranged into a matrix;
956it is the \emph{contained value} of the next-level building block, but another lie about size is required.
At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it were smaller (the left diagonal above it, shrinking upward).
958These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
959Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
960Yet every time the programmer presents an index, a C-array subscript is achieving the offset calculation.
961
962In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
963the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
964compared with where analogous rows appear when the row-level option is presented for @x@.
965
966For example, in \lstinline{x[all]}, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6 (left diagonal).
967But only the atom 2.3 is storing its value there.
968The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
969
970Lying is implemented as casting.
971The arrangement just described is implemented in the structure @arpk@.
972This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
973The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
974The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
975
976% Does x[all] have to lie too? The picture currently glosses over how it it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
977
978Casting, overlapping, and lying are unsafe.
979The mission is to implement a style-1 feature in the type system for safe use by a programmer.
980The offered style-1 system is allowed to be internally unsafe,
981just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
982Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
983
984% PAB: repeat from previous paragraph.
985% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
986% My casting is unsafe, but I do not do any pointer arithmetic.
987% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of \VRef[Figure]{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
988% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
989
990% \noindent END: Paste looking for a home
991
992The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
993The @arpk@ structure and its @-[i]@ operator are defined as:
994\begin{cfa}
995forall(
996 [N], $\C{// length of current dimension}$
997 S & | sized(S), $\C{// masquerading-as}$
998 Timmed &, $\C{// immediate element, often another array}$
999 Tbase & $\C{// base element, \eg float, never array}$
1000) { // distribute forall to each element
1001 struct arpk {
1002 S strides[N]; $\C{// so that sizeof(this) is N of S}$
1003 };
1004 // expose Timmed, stride by S
1005 static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
1006 subcheck( a, i, 0, N );
1007 return (Timmed &)a.strides[i];
1008 }
1009}
1010\end{cfa}
The private @arpk@ structure (array with explicit packing) is generic over four parameters: dimension length, masquerading-as type, immediate element type, and base element type.
1012This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
1013Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
1014Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
1016
1017An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
1018In the base case, @array(E_base)@ is just @E_base@.
Because this construction uses the same type for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
1020
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
1022Expressed as an operation on types, this rotation is:
1023\begin{eqnarray*}
1024suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
1025enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
1026enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
1027\end{eqnarray*}
1028
1029
1030\section{Bound Checks, Added and Removed}
1031
1032\CFA array subscripting is protected with runtime bound checks.
Dependent typing lets the optimizer remove more of these bound checks than it could without the dependent type information.
1034This section provides a demonstration of the effect.
1035
1036The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the \CC pattern where restricted vector usage models a checked array.
1037The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
1038The experiment compares with the \CC version to keep access to generated assembly code simple.
1039
1040As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
1041When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
1043
1044TODO: paste source and assembly codes
1045
1046Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized.
1047The case is naive matrix multiplication over a row-major encoding.
1048
1049TODO: paste source codes
1050
1051
1052\section{Array Lifecycle}
1053
1054An array is an aggregate, like a structure;
1055both are containers wrapping subordinate objects.
1056Any arbitrary object type, like @string@, can be an array element or structure member.
A consequence is that the lifetime of the container must match that of its subordinate objects: all elements and members must be implicitly initialized/de-initialized as part of the container's allocation/deallocation.
1058Modern programming languages implicitly perform these operations via a type's constructor and destructor.
1059Therefore, \CFA must assure that an array's subordinate objects' lifetime operations are called.
1060
1061Preexisting \CFA mechanisms achieve this requirement, but with poor performance.
1062Furthermore, advanced array users need an exception to the basic mechanism, which does not occur with other aggregates.
1063Hence, arrays introduce subtleties in supporting an element's lifecycle.
1064
1065The preexisting \CFA support for contained-element lifecycle is based on recursive occurrences of the object-type (@otype@) pseudo-trait.
1066A type is an @otype@, if it provides a default (parameterless) constructor, copy constructor, assignment operator, and destructor (like \CC).
1067When declaring a structure with @otype@ members, the compiler implicitly generates implementations of the four @otype@ functions for the outer structure.
1068Then the generated default constructor for the outer structure calls the default constructor for each member, and the other @otype@ functions work similarly.
1069For a member that is a C array, these calls occur in a loop for each array element, which even works for VLAs.
1070This logic works the same, whether the member is a concrete type (that happens to be an @otype@) or if the member is a polymorphic type asserted to be an @otype@ (which is implicit in the syntax, @forall(T)@).
1071The \CFA array has the simplified form (similar to one seen before):
1072\begin{cfa}
1073forall( T * ) // non-otype element, no lifecycle functions
1074// forall( T ) // otype element, lifecycle functions asserted
1075struct array5 {
1076 T __items[ 5 ];
1077};
1078\end{cfa}
Because @array5@ is a structure with a C-array member, using the otype-form declaration for @T@ causes @array5(float)@ to implement @otype@ too.
1080
1081But this @otype@-recursion pattern has a performance issue.
1082For example, in a cube of @float@:
1083\begin{cfa}
1084array5( array5( array5( float ) ) )
1085\end{cfa}
1086the first few steps of the compiler's work to find the lifecycle functions, under the @otype@-recursion pattern, are shown in \VRef[Figure]{f:OtypeRecursionBlowup}.
1087All the work needed for the full @float@-cube would have 256 leaves.
1088
1089%array5(T) offers
1090%1 parameterless ctor, which asks for T to have
1091% 1 parameterless ctor
1092% 2 copy ctor
1093% 3 asgt
1094% 4 dtor
1095%2 copy ctor, which asks for T to have
1096% 1 parameterless ctor
1097% 2 copy ctor
1098% 3 asgt
1099% 4 dtor
1100%3 asgt, which asks for T to have
1101% 1 parameterless ctor
1102% 2 copy ctor
1103% 3 asgt
1104% 4 dtor
1105%4 dtor, which asks for T to have
1106% 1 parameterless ctor
1107% 2 copy ctor
1108% 3 asgt
1109% 4 dtor
1110
\begin{figure}
\centering
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}lll@{}}
\begin{cfa}[deletekeywords={default}]
float offers
1 default ctor
2 copy ctor
3 asgt
4 dtor
























\end{cfa}
&
\begin{cfa}[deletekeywords={default}]
array5(float) has
1 default ctor, using float's
 1 default ctor
 2 copy ctor
 3 asgt
 4 dtor
2 copy ctor, using float's
 1 default ctor
 2 copy ctor
 3 asgt
 4 dtor
3 asgt, using float's
 1 default ctor
 2 copy ctor
 3 asgt
 4 dtor
4 dtor, using float's
 1 default ctor
 2 copy ctor
 3 asgt
 4 dtor








\end{cfa}
&
\begin{cfa}[deletekeywords={default}]
array5(array5(float)) has
1 default ctor, using array5(float)'s
 1 default ctor, using float's
  1 default ctor
  2 copy ctor
  3 asgt
  4 dtor
 2 copy ctor, using float's
  1 default ctor
  2 copy ctor
  3 asgt
  4 dtor
 3 asgt, using float's
  1 default ctor
  2 copy ctor
  3 asgt
  4 dtor
 4 dtor, using float's
  1 default ctor
  2 copy ctor
  3 asgt
  4 dtor
2 copy ctor, using array5(float)'s
 ... 4 children, 16 leaves
3 asgt, using array5(float)'s
 ... 4 children, 16 leaves
4 dtor, using array5(float)'s
 ... 4 children, 16 leaves
(64 leaves)
\end{cfa}
\end{tabular}
\caption{Exponential thunk generation under the otype-recursion pattern.
	Each time one type's function (\eg ctor) uses another type's function, the \CFA compiler generates a thunk, to capture the used function's dependencies, presented according to the using function's need.
	So, each non-leaf line represents a generated thunk and every line represents a search request for the resolver to find a satisfying function.}
\label{f:OtypeRecursionBlowup}
\end{figure}


So the @otype@-recursion pattern requires a number of helper functions, and generates a number of thunks, exponential in the number of dimensions.
Anecdotal experience with this solution found the resulting compile times annoyingly slow at three dimensions, and unusable at four.

The issue is that the otype-recursion pattern uses more assertions than needed.
Consider how @array5(float)@'s default constructor gets all four lifecycle assertions about the element type, @float@.
It only needs @float@'s default constructor;
the full set of operations is never used.
Current work by the \CFA team aims to improve this situation.
Therefore, a workaround is needed for now.

The workaround is to provide a default constructor, copy constructor and destructor, defined with lean, bespoke assertions:
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{0.5in}}l@{}}
\begin{cfa}
// autogenerated for otype-recursion:
forall( T )
void ?{}( array5(T) & this ) {
	for (i; 5) { ( this[i] ){}; }
}
forall( T )
void ?{}( array5(T) & this, array5(T) & src ) {
	for (i; 5) { ( this[i] ){ src[i] }; }
}
forall( T )
void ^?{}( array5(T) & this ) {
	for (i; 5) { ^( this[i] ){}; }
}
\end{cfa}
&
\begin{cfa}
// lean, bespoke:
forall( T* | { void @?{}( T & )@; } )
void ?{}( array5(T) & this ) {
	for (i; 5) { ( this[i] ){}; }
}
forall( T* | { void @?{}( T &, T )@; } )
void ?{}( array5(T) & this, array5(T) & src ) {
	for (i; 5) { ( this[i] ){ src[i] }; }
}
forall( T* | { void @?{}( T & )@; } )
void ^?{}( array5(T) & this ) {
	for (i; 5) { ^( this[i] ){}; }
}
\end{cfa}
\end{tabular}
\end{cquote}
Moreover, the assignment operator is skipped, to avoid hitting a lingering growth case.
Skipping assignment is tolerable because array assignment is not a common operation.
With this solution, the critical lifecycle functions are available, with no growth in thunk creation.

Finally, the intuition that a programmer using an array always wants the elements' default constructor called \emph{automatically} is simplistic.
Arrays exist to store different values at each position.
So, array initialization needs to let the programmer provide different constructor arguments to each element.
\begin{cfa}
thread worker { int id; };
void ?{}( worker & ) = void; // remove default constructor
void ?{}( worker &, int id );
array( worker, 5 ) ws = @{}@; // rejected; but desire is for no initialization yet
for (i; 5) (ws[i]){ @i@ }; // explicitly initialize each thread, giving id
\end{cfa}
Note the use of the \CFA explicit constructor call, analogous to \CC's placement-@new@.
This call is where initialization is desired, and not at the declaration of @ws@.
The attempt to initialize from nothing (equivalent to dropping @= {}@ altogether) is invalid because the @worker@ type removes the default constructor.
The @worker@ type is designed this way to work with the threading system.
A thread type forks a thread at the end of each constructor and joins with it at the start of each destructor.
But a @worker@ cannot begin its forked-thread work without knowing its @id@.
Therefore, there is a conflict between the implicit actions of the builtin @thread@ type and a user's desire to defer these actions.

Another \CFA feature may, at first, seem viable for initializing the array @ws@, though on closer inspection, it is not.
C initialization, \lstinline|array(worker, 5) ws @= {};|, ignores all \CFA lifecycle management and uses C empty initialization.
This option does achieve the desired semantics on the construction side.
But on the destruction side, the desired semantics is for implicit destructor calls to continue, to keep the join operation tied to lexical scope.
C initialization disables \emph{all} implicit lifecycle management, but the goal is to disable only the implicit construction.

To fix this problem, I enhanced the \CFA standard library to provide the missing semantics, available in either form:
\begin{cfa}
array( @uninit@(worker), 5 ) ws1;
array( worker, 5 ) ws2 = { @delay_init@ };
\end{cfa}
Both cause the @ws@-construction-time implicit call chain to stop before reaching a @worker@ constructor, while leaving the implicit destruction calls intact.
Two forms are available, to parallel the development of this feature in \uCpp.
Originally \uCpp offered only the @ws1@ form, using the class-template @uNoCtor@, equivalent to \CFA's @uninit@.
More recently, \uCpp was extended with the declaration macro @uArray@, with usage similar to the @ws2@ case.
Based on experience piloting @uArray@ as a replacement for @uNoCtor@, it might be possible to remove the first option.

% note to Mike, I have more fragments on some details available in push24\fragments\uNoCtor.txt

\section{Comparison with Other Arrays}


\subsection{Rust}

\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.

\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.


\subsection{Java}

Java arrays are arrays-of-arrays because all objects are references \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
\begin{cquote}
\begin{tabular}{rl}
C & @void f( size_t n, size_t m, float x[n][m] );@ \\
Java & @void f( float x[][] );@
\end{tabular}
\end{cquote}
However, in the C prototype, the parameters @n@ and @m@ are documentation only, indicating the intended sizes of the first and second dimensions of @x@.
\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java is bounds checked, while C has no checking.
Both Java and C require explicit null checking, otherwise there is a runtime failure.

\begin{figure}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{ll@{}}
\begin{java}
int m[][] = {         // triangular matrix
	new int [4],
	new int [3],
	new int [2],
	new int [1],
	null
};

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) continue;
	for ( int c = 0; c < m[r].length; c += 1 ) {
		m[r][c] = c + r; // subscript checking
	}

}

for ( int r = 0; r < m.length; r += 1 ) {
	if ( m[r] == null ) {
		System.out.println( "null row" );
		continue;
	}
	for ( int c = 0; c < m[r].length; c += 1 ) {
		System.out.print( m[r][c] + " " );
	}
	System.out.println();

}
\end{java}
&
\begin{cfa}
int * m[5] = {        // triangular matrix
	calloc( 4, sizeof(int) ),
	calloc( 3, sizeof(int) ),
	calloc( 2, sizeof(int) ),
	calloc( 1, sizeof(int) ),
	NULL
};
int rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) continue;
	for ( int c = 0; c < rlnth; c += 1 ) {
		m[r][c] = c + r; // no subscript checking
	}
	rlnth -= 1;
}
rlnth = 4;
for ( int r = 0; r < 5; r += 1 ) {
	if ( m[r] == NULL ) {
		printf( "null row\n" );
		continue;
	}
	for ( int c = 0; c < rlnth; c += 1 ) {
		printf( "%d ", m[r][c] );
	}
	printf( "\n" );
	rlnth -= 1;
}
\end{cfa}
\end{tabular}
\caption{Java (left) \vs C (right) Triangular Matrix}
\label{f:JavaVsCTriangularMatrix}
\end{figure}

The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
Furthermore, there is the cost of managing the implicit array descriptor.
It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.



\subsection{\CC}

Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
While the vector size can grow and shrink dynamically, \vs a fixed-size dynamic size with VLAs, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration (one dynamic call) to avoid using @push_back@.
\begin{c++}
vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
\end{c++}
Multidimensional arrays are arrays-of-arrays with the associated costs.
Each @vector@ array has an array descriptor containing the dimension, which allows bounds checking using @x.at(i)@ in place of @x[i]@.
Used with these restrictions, out-of-bound accesses are caught, in-bound accesses never exercise the vector's ability to grow (preventing costly reallocation and copying), and references to contained values are never invalidated.
This scheme matches Java's safety and expressiveness exactly, but with the inherent costs.

\subsection{Levels of Dependently Typed Arrays}

The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness of C and Java, accommodating examples such as:
\begin{itemize}[leftmargin=*]
\item a \emph{zip}-style operation that consumes two arrays of equal length
\item a \emph{map}-style operation whose produced length matches the consumed length
\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; rather, these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and the ability to opt out.
Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.

Dependent type systems, considered for the purpose of bound tracking, can be full-strength or restricted.
In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for its nontermination.
In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
[TODO: clarify how even Idris type checking terminates]

Idris is a current, general-purpose dependently typed programming language.
Length checking is a common benchmark for full dependent type systems.
Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
[TODO: finish explaining what Data.Vect is and then the essence of the comparison]

POINTS:
here is how our basic checks look (on a system that does not have to compromise);
it can also do these other cool checks, but watch how I can mess with its conservativeness and termination

Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis, which \CFA aims to avoid, is unnecessary.
So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA's, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.

The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
There is a particular emphasis on an existential type, enabling callee-determined return shapes.

Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.

Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.

\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.
Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark.

\subsection{Static Safety in C Extensions}


\section{Future Work}

\subsection{Declaration Syntax}

\subsection{Range Slicing}

\subsection{With a Module System}

\subsection{With Described Enumerations}

A project in \CFA's current portfolio will improve enumerations.
In the incumbent state, \CFA has C's enumerations, unmodified.
I will not discuss the core of this project, which has a tall mission already: to improve type safety, maintain appropriate C compatibility and offer more flexibility about storage use.
It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
\begin{cfa}
forall( T | is_enum(T) )
void show_in_context( T val ) {
	for( T i ) {
		string decorator = "";
		if ( i == val-1 ) decorator = "< ready";
		if ( i == val ) decorator = "< go" ;
		sout | i | decorator;
	}
}
enum weekday { mon, tue, wed = 500, thu, fri };
show_in_context( wed );
\end{cfa}
with output
\begin{cfa}
mon
tue < ready
wed < go
thu
fri
\end{cfa}
The details in this presentation are not meant to be taken too precisely as suggestions for how it should look in \CFA.
But the example shows these abilities:
\begin{itemize}[leftmargin=*]
\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
\item a provision for looping (the @for@ form used) over the values of the type.
\end{itemize}

If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bounds checks.
[TODO: xref] discusses how automatically inserted bounds checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.
To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.

[TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
	get_size n : Unit -> Int
	ordinal : n -> Int
	unsafe_from_ordinal n : Int -> n
\end{cfa}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
	get_size = \(). size a * size b
	ordinal = \(i, j). (ordinal i * size b) + ordinal j
	unsafe_from_ordinal = \o.
		bs = size b
		(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]

In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Retire Pointer Arithmetic}


\section{\CFA}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\CFA Features Interacting with Arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild, work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function, in which type information associated with a polymorphic return type replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
This argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.

TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example, and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@.
They are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' resolves this issue. TODO: discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
TODO xref: the C standard does not claim that @ar1@ may be subscripted, because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls, where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

TODO: I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?