% source: doc/theses/mike_brooks_MMath/array.tex @ eb0d9b7, Michael Brooks
\chapter{Array}
\label{c:Array}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language \see{\VRef{s:Array}}, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array type, @array@, to the \CFA standard library~\cite{Cforall}.

Offering the @array@ type, as a distinct alternative to the C array, is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity of the C array.


\section{Introduction}
\label{s:ArrayIntro}

The new \CFA array is declared by instantiating the generic @array@ type, much like instantiating any other standard-library generic type (such as \CC @vector@), using a new style of generic parameter.
\begin{cfa}
@array( float, 99 )@ x; $\C[2.5in]{// x contains 99 floats}$
\end{cfa}
Here, the arguments to the @array@ type are @float@ (element type) and @99@ (dimension).
When this type is used as a function parameter, the type system requires the argument to be an exact match.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {} $\C{// p accepts 42 floats}$
f( x ); $\C{// statically rejected: type lengths are different, 99 != 42}$

test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
Function @f@'s parameter expects an array with dimension 42, but the argument dimension 99 does not match.

A function can be polymorphic over @array@ arguments using the \CFA @forall@ declaration prefix.
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
    T elem = p[i]; $\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 ); $\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 ); $\C{// T is float, N is 99, dynamic subscript check fails}$

Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
Function @g@ takes an arbitrary type parameter @T@ and an unsigned integer \emph{dimension} @N@.
The dimension represents a to-be-determined number of elements, managed by the type system, where 0 represents an empty array.
The type system implicitly infers @float@ for @T@ and @99@ for @N@.
Furthermore, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ for argument @x@.
The call @g( x, 1000 )@ is also accepted at compile time.
However, the subscript, @x[1000]@, generates a runtime error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
In general, @forall( ..., [N] )@ declares the name @N@, which becomes usable in parameter/return declarations and within the function body.
The syntactic form is chosen to parallel other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ... $\C{// dimension}$
forall( T ) ... $\C{// value datatype}$
forall( T & ) ... $\C{// opaque datatype}$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.

The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller has an argument that instantiates @N@ with 42.
\begin{cfa}
forall( [N] )
void f( ... ) {
    float x1[@N@]; $\C{// C array, no subscript checking}$
    array(float, N) x2; $\C{// \CFA array, subscript checking}\CRT$
}
\end{cfa}
Both of the stack-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout, with no additional ``bookkeeping'' allocations or headers.
The C array, @x1@, has no subscript checking, while the \CFA array, @x2@, does.
Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, space- and time-efficient array type, which forms the foundation for more complex array forms in \CFA.
In all following discussion, ``C array'' means types like @x1@ and ``\CFA array'' means types like @x2@.

A future goal is to provide the new @array@ features with syntax approaching C's (declaration style of @x1@).
Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
At present, the C-syntax @array@ is only partially supported, so the generic @array@ is used exclusively in the thesis.

My contributions in this chapter are:
\begin{enumerate}[leftmargin=*]
\item Extend the type system so polymorphic functions and generic types can be parameterized by a numeric value: @forall( [N] )@.
\item Provide a dimension/subscript-checked array type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Provide argument/parameter-passing safety for arrays and subscript safety.
\item Identify the interesting specific abilities made available by the new @array@ type.
\item Identify specific workable improvements that are likely to close the remaining gaps in this feature's readiness for widespread use.
\end{enumerate}


\begin{comment}
\section{Dependent Typing}

General dependent typing allows a type system to encode arbitrary predicates, \eg behavioural specifications for functions, which is an anti-goal for my work.
Firstly, this application is strongly associated with pure functional languages,
where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
is a sufficient postcondition.
In an imperative language like C and \CFA, it is also necessary to discuss side effects, for which an even heavier formalism, like separation logic, is required.
Secondly: ... bash Rust.
... cite the crap out of these claims.
\end{comment}


\section{Features Added}

This section shows more about using the \CFA array and dimension parameters, demonstrating syntax and semantics by way of motivating examples.
As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.

By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and return type, meaning the input arrays and return array are the same length.
\lstinput{10-17}{hello-array.cfa}
Function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array uses the library @alloc@ function,
\begin{cfa}
forall( T & | sized(T) )
T * alloc() {
    return @(T *)@malloc( @sizeof(T)@ ); // C malloc
}
\end{cfa}
which captures the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note, @alloc@ only sees the whole type for its @T@, @array(bool, N)@, where this type's size is a computation based on @N@.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing necessary \emph{sized} assertions needed by other types.
(\emph{sized} implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, programming safety improves significantly because the size is accessible and implicit, compared with C's @calloc@ and non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.
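For contrast, a minimal sketch of the two allocation styles for @f@'s result array (the @calloc@ call is standard C; the @alloc@ line follows the library function above):
\begin{cfa}
array( bool, N ) * ret = alloc(); $\C{// length implicit in the type, managed}$
bool * cret = (bool *)calloc( N, sizeof( bool ) ); $\C{// length explicit, unmanaged}$
\end{cfa}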

\begin{figure}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{\lstinline{f} Example}
\label{f:fExample}
\end{figure}

\VRef[Figure]{f:fExample} shows an example using function @f@, illustrating how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system managed length at the declarations of @x@, @y@, and @result@; @N@ adapting in the same way at @f@'s loop bound; and a pass-thru use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
The result is a significant improvement in safety and usability.

In summary:
\begin{itemize}[leftmargin=*]
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
@N@ can be used in an expression with type @size_t@ within the function body.
\item
The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
\item
@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@-typed objects.
\end{itemize}
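These points combine in a minimal sketch, using only features shown above (the function name @len@ is illustrative):
\begin{cfa}
forall( [N] )
size_t len( array( float, N ) & ) {
    return N; $\C{// N used as an expression}$
}
array( float, 10 ) x;
sout | len( x ); $\C{// prints 10, N acquired from x's type}$
\end{cfa}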

\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.\footnote{
The \CFA program requires a snapshot declaration for \lstinline{n} to compile, as described at the end of \Vref{s:ArrayTypingC}.}
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
\item
\CC does not allow a template function to be nested, while \CFA allows polymorphic functions to be nested.
Hence, \CC precludes a simple form of information hiding.
\item
\label{p:DimensionPassing}
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and passed implicitly at the call.
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
\item
\CC cannot have an array of references, but can have an array of @const@ pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
% not really about forall-N vs template-N
% any better CFA support is how Rob left references, not what Mike did to arrays
% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
\item
\label{p:ArrayCopy}
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them a first-class object (although array copy is often avoided for efficiency).
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
\end{enumerate}
The \CC template @std::array@ tries to accomplish a similar mechanism to \CFA @array@.
It is an aggregate type with the same semantics as a @struct@ holding a C-style array \see{\VRef{s:ArraysCouldbeValues}}, which mitigates points \VRef[]{p:DimensionPassing} and \VRef[]{p:ArrayCopy}.

\begin{figure}
\begin{tabular}{@{}ll@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
    for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
int main() {
    const size_t n = 10; // must be constant
    int ret[n], x[n];
    for ( int i = 0; i < n; i += 1 ) x[i] = i;
    @copy<int, n >( ret, x );@
    for ( int i = 0; i < n; i += 1 )
        cout << ret[i] << ' ';
    cout << endl;
}
\end{c++}
&
\begin{cfa}
int main() {
    @forall( T, [N] )@ // nested function
    void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
        for ( i; N ) ret[i] = x[i];
    }
    size_t n;
    sin | n;
    array( int, n ) ret, x;
    for ( i; n ) x[i] = i;
    @copy( ret, x );@
    for ( i; n )
        sout | ret[i] | nonl;
    sout | nl;
}
\end{cfa}
\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type }
\label{f:TemplateVsGenericType}
\end{figure}

Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so are length mismatches stopped when they involve dimension parameters.
While \VRef[Figure]{f:fExample} shows successfully calling a function @f@ expecting two arrays of the same length,
\begin{cfa}
array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
a static rejection occurs when attempting to call @f@ with arrays of differing lengths.
\lstinput[tabsize=1]{70-74}{hello-array.cfa}
When the argument lengths themselves are statically unknown,
the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
\lstinput{90-96}{hello-array.cfa}
This static check's rules are presented in \VRef[Section]{s:ArrayTypingC}.

Orthogonally, the \CFA array type works within generic \emph{types}, \ie @forall@-on-@struct@.
The same argument safety and the associated implicit communication of array length occurs.
Preexisting \CFA allowed aggregate types to be generalized with type parameters, enabling parameterizing of element types.
This feature is extended to allow parameterizing by dimension.
Doing so gives a refinement of C's ``flexible array member''~\cite[\S~6.7.2.1.18]{C11}:
\begin{cfa}
struct S {
    ...
    double d []; // incomplete array type => flexible array member
} * s = malloc( sizeof( struct S ) + sizeof( double [10] ) );
\end{cfa}
which creates a VLA of size 10 @double@s at the end of the structure.
A C flexible array member can only occur at the end of a structure;
\CFA allows length-parameterized array members to be nested at arbitrary locations, with intervening member declarations.
\lstinput{10-15}{hello-accordion.cfa}
The structure has course- and student-level metadata (their respective field names) and a position-based preferences matrix.
Its layout has the starting offset of @studentIds@ varying according to the generic parameter @C@, and the offset of @preferences@ varying according to both generic parameters.

\VRef[Figure]{f:checkExample} shows a program main using @School@ and results with different array sizes.
The @school@ variable holds many students' course-preference forms.
It is on the stack and its initialization does not use any casting or size arithmetic.
Both of these points are impossible with a C flexible array member.
When heap allocation is preferred, the original pattern still applies.
\begin{cfa}
School( classes, students ) * sp = alloc();
\end{cfa}
This ability to avoid casting and size arithmetic improves safety and usability over C flexible array members.
Finally, inputs and outputs are given on the right for different-sized schools.
The example program prints the courses in each student's preferred order, all using the looked-up display names.

\begin{figure}
\begin{lrbox}{\myboxA}
\begin{tabular}{@{}l@{}}
\lstinput{50-55}{hello-accordion.cfa} \\
\lstinput{90-98}{hello-accordion.cfa}
\end{tabular}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{tabular}{@{}l@{}}
@$ cat school1@ \\
\lstinputlisting{school1} \\
@$ ./a.out < school1@ \\
\lstinputlisting{school1.out} \\
@$ cat school2@ \\
\lstinputlisting{school2} \\
@$ ./a.out < school2@ \\
\lstinputlisting{school2.out}
\end{tabular}
\end{lrbox}

\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}ll@{}}
\usebox\myboxA
&
\usebox\myboxB
\end{tabular}

\caption{\lstinline{School} Example, Input and Output}
\label{f:checkExample}
\end{figure}

When a function operates on a @School@ structure, the type system handles its memory layout transparently.
\lstinput{30-36}{hello-accordion.cfa}
In the example, function @getPref@ returns, for the student at position @is@, the position of that student's @pref@\textsuperscript{th}-favoured class.


\section{Dimension Parameter Implementation}

The core of the preexisting \CFA compiler already has the ``heavy equipment'' needed to provide the feature set just reviewed (up to bugs in cases not yet exercised).
To apply this equipment to tracking array lengths, I encode a dimension (an array's length) as a type.
The type in question does not describe any data that the program uses at runtime;
it simply carries information through the intermediate stages of \CFA-to-C lowering,
in a form such that, once \emph{it} is lowered, the result behaves correctly.
This new dimension type, encoding the array dimension within it, is the first limited \newterm{dependent type}~\cite{DependentType} added to the \CFA type system and is used at appropriate points during type resolution.

Furthermore, the @array@ type itself is really ``icing on the cake.''
Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl}
(where added complexity needed for multiple dimensions is considered).
But simplifications close enough for the present discussion are:
\begin{cfa}
forall( [N] )
struct array_1d_float {
    float items[N];
};
forall( T, [N] )
struct array_1d_T {
    T items[N];
};
\end{cfa}
These two structure patterns, plus a subscript operator, are all that @array@ provides.
My main work is letting a programmer define such a structure (one whose type is parameterized by @[N]@) and functions that operate on it (these being similarly parameterized).

The repurposed heavy equipment is:
\begin{itemize}[leftmargin=*]
\item
    The resolver, which provides values for a declaration's type-system variables, gathered from type information in scope at the usage site.
\item
    The box pass, which encodes information about type parameters into ``extra'' regular parameters and arguments on declarations and calls.
    Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter, and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, \ie a use of the parameter.
\end{itemize}
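As a sketch of the box pass's rewriting, a sized-polymorphic function and an assumed boxed counterpart, following the @__sizeof_foo@ convention above (the function name @report@ is illustrative):
\begin{cfa}
forall( T & | sized(T) )
void report( T & ) { sout | sizeof(T); } $\C{// programmer-written}$

void report( size_t __sizeof_T, void * ) { sout | __sizeof_T; } $\C{// after box pass (sketch)}$
\end{cfa}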

The rules for resolution had to be restricted slightly, so that important erroneous cases are rejected.
This work is detailed in \VRef[Section]{s:ArrayTypingC}.
However, the resolution--boxing scheme, in its preexisting state, is equipped to work on (desugared) dimension parameters.
The following discussion explains the desugaring and how correctly lowered code results.

A simpler structure, and a toy function on it, demonstrate what is needed for the encoding.
\begin{cfa}
forall( [@N@] ) { $\C{// [1]}$
    struct thing {};
    void f( thing(@N@) ) { sout | @N@; } $\C{// [2], [3]}$
}
int main() {
    thing( @10@ ) x; f( x ); $\C{// prints 10, [4]}$
    thing( @100@ ) y; f( y ); $\C{// prints 100}$
}
\end{cfa}
This example has:
\begin{enumerate}[leftmargin=*]
\item
    The symbol @N@ being declared as a type variable (a variable of the type system).
\item
    The symbol @N@ being used to parameterize a type.
\item
    The symbol @N@ being used as an expression (value).
\item
    A value like 10 being used as an argument to the parameter @N@.
\end{enumerate}
The chosen solution is to encode the value @N@ \emph{as a type}, so items 1 and 2 are immediately available for free.
Item 3 needs a way to recover the encoded value from a (valid) type and to reject invalid types.
Item 4 needs a way to produce a type that encodes the given value.

Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
The production and recovery are then straightforward.
\begin{itemize}[leftmargin=*]
\item
    The value $n$ is encoded as a type whose size is $n$.
\item
    Given a dimension expression $e$, produce an internal type @char[@$e$@]@ to represent it.
    If $e$ evaluates to $n$ then the encoded type has size $n$.
\item
    Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
    If $T$ has size $n$ then the recovery expression evaluates to $n$.
\end{itemize}
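The production/recovery round-trip is already visible in plain C, where @sizeof@ applied to a variable-length array type is evaluated at runtime:
\begin{cfa}
int n = 42; $\C{// dimension expression e}$
char enc[n]; $\C{// production: type char[e]}$
sout | sizeof(enc); $\C{// recovery: prints 42}$
\end{cfa}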

This desugaring is applied in a translation step before the resolver.
The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
The running example is lowered to:
\begin{cfa}
forall( @N *@ ) { $\C{// [1]}$
    struct thing {};
    void f( thing(@N@) ) { sout | @sizeof(N)@; } $\C{// [2], [3]}$
}
int main() {
    thing( @char[10]@ ) x; f( x ); $\C{// prints 10, [4]}$
    thing( @char[100]@ ) y; f( y ); $\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
    @N@ is now declared to be a type.
    It is declared to be \emph{sized} (by the @*@), meaning that the box pass shall do its @sizeof(N)@$\rightarrow$@__sizeof_N@ extra parameter and expression translation.
\item
    @thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
\item
    The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
\item
    The type of variable @x@ is another @thing(-)@ type; the argument to the generic @thing@ is a type (array type of bytes, @char[@$e$@]@).
\end{enumerate}

From this point, preexisting \CFA compilation takes over lowering it the rest of the way to C.
Here the result shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated:
\begin{cfa}
// [1]
void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; } $\C{// [2], [3]}$
int main() {
    struct __conc_thing_10 {} x; f( @10@, &x ); $\C{// prints 10, [4]}$
    struct __conc_thing_100 {} y; f( @100@, &y ); $\C{// prints 100}$
}
\end{cfa}
Observe:
\begin{enumerate}[leftmargin=*]
\item
    The type parameter @N@ is gone.
\item
    The type @thing(N)@ is (replaced by @void *@, but thereby effectively) gone.
\item
    The @sout...@ expression has a regular variable (parameter) usage for its second argument.
\item
    Information about the particular @thing@ instantiation (value 10) is moved, from the type, to a regular function-call argument.
\end{enumerate}
At the end of the desugaring and downstream processing, the original C idiom of ``pass both a length parameter and a pointer'' has been reconstructed.
In the programmer-written form, only the @thing@ is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.

At the compiler front end, parsing changes, AST-schema extensions, and validation rules enable the sugared user input.
\begin{itemize}[leftmargin=*]
\item
    Recognize the form @[N]@ as a type-variable declaration within a @forall@.
\item
    Add a new brand of type variable, \emph{Dimension}, to the AST form of a type variable, to represent one parsed from @[-]@.
\item
    Allow a type variable to occur in an expression. Validate (after parsing) that only dimension-branded type variables are used here.
\item
    Allow an expression to occur in type-argument position. Brand the resulting type argument as a dimension.
\item
    Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
    \begin{cfa}
    array_1d( foo, bar ) x;
    \end{cfa}
    \vspace*{-10pt}
    that the brands on the generic arguments match the brands of the declared type variables.
    Here, that @foo@ is a type and @bar@ is a dimension.
\end{itemize}


\section{Typing of C Arrays}
\label{s:ArrayTypingC}

Essential to guaranteeing accurate length is the compiler's ability to reject a program that mishandles length.
By contrast, most discussion so far deals with communicating length, from one party who knows it, to another willing to work with any given length.
For scenarios where the concern is a mishandled length, the interaction is between two parties who both claim to know something about it.

C and \CFA can check when working with two static values.
\begin{cfa}
enum { n = 42 };
float x[@n@]; $\C{// or just 42}$
float (*xp1)[@42@] = &x; $\C{// accept}$
float (*xp2)[@999@] = &x; $\C{// reject}$
warning: initialization of 'float (*)[999]' from incompatible pointer type 'float (*)[42]'
\end{cfa}
When a variable is involved, C and \CFA take two different approaches.
Today's C compilers accept the following without a warning.
\begin{cfa}
static const int n = 42;
float x[@n@];
float (* xp)[@999@] = &x; $\C{// should be static rejection here}$
(*xp)[@500@]; $\C{// in "bound"?}$
\end{cfa}
Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bound with its referent @x@,
the access appears in-bound of the type information available on @xp@.
In fact, length is being mishandled in the previous step, where the type-carried length information on @x@ is not compatible with that of @xp@.

In \CFA, I choose to reject this C example at the point where the type-carried length information on @x@ is not compatible with that of @xp@, and correspondingly, its array counterpart at the same location:
\begin{cfa}
static const int n = 42;
array( float, @n@ ) x;
array( float, @999@ ) * xp = &x; $\C{// static rejection here}$
(*xp)[@500@]; $\C{// runtime check passes}$
\end{cfa}
The way the \CFA array is implemented, the type analysis for this case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[leftmargin=*,parsep=0pt]
\item
    Is @array( float, 999 )@ type-compatible with @array( float, n )@?
\item
    Is desugared @array( float, char[999] )@ type-compatible with desugared @array( float, char[n] )@?
% \footnote{
% Here, \lstinline{arrayX} represents the type that results from desugaring the \lstinline{array} type into a type whose generic parameters are all types.
% This presentation elides the noisy fact that \lstinline{array} is actually a macro for something bigger;
% the reduction to \lstinline{char [-]} still proceeds as sketched.}
\item
    Is internal type @char[999]@ type-compatible with internal type @char[n]@?
\end{itemize}
The answer is false because, in general, the value of @n@ is unknown at compile time, and hence, an error is raised.
For safety, it makes sense to reject the corresponding C case, which is a non-backwards-compatible change.
There are two mitigations for this incompatibility.

First, a simple recourse is available in a situation where @n@ is \emph{known} to be 999 by using a cast.
\begin{cfa}
float (* xp)[999] = @(float (*)[999])@&x;
\end{cfa}
The cast means the programmer has accepted blame.
Moreover, the cast is ``eye candy'' marking where the unchecked length knowledge is used.
Therefore, a program being onboarded to \CFA requires some upgrading to satisfy the \CFA rules (and arguably become clearer), without giving up its validity to a plain C compiler.
Second, the incompatibility only affects types like pointer-to-array, which are infrequently used in C.
The more common C idiom for aliasing an array is to use a pointer-to-first-element type, which does not participate in the \CFA array's length checking.\footnote{
    Notably, the desugaring of the \lstinline{array} type avoids letting any \lstinline{-[-]} type decay,
    in order to preserve the length information that powers runtime bound-checking.}
Therefore, the need to upgrade legacy C code is low.
Finally, if this incompatibility is a problem onboarding C programs to \CFA, it should be possible to change the C type check to a warning rather than an error, acting as a \emph{lint} of the original code for a missing type annotation.

To handle two occurrences of the same variable, more information is needed, \eg, this is fine,
\begin{cfa}
int n = 42;
float x[@n@];
float (*xp)[@n@] = &x; $\C{// accept}$
\end{cfa}
where @n@ remains fixed across a contiguous declaration context.
However, intervening dynamic statements can cause failures.
\begin{cfa}
int n = 42;
float x[@n@];
@n@ = 999; $\C{// dynamic change}$
float (*xp)[@n@] = &x; $\C{// reject}$
\end{cfa}
As well, side-effects can even occur in a contiguous declaration context.
\begin{cquote}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// compile unit 1
extern int @n@;
extern float g();
void f() {
    float x[@n@] = { g() };
    float (*xp)[@n@] = &x; // reject
}
\end{cfa}
&
\begin{cfa}
// compile unit 2
int @n@ = 42;
float g() {
    @n@ = 999; // accept
    return 0.;
}

\end{cfa}
\end{tabular}
\end{cquote}
The issue here is that knowledge needed to make a correct decision is hidden by separate compilation.
Even within a translation unit, static analysis might not be able to provide all the information.
However, if the example uses @const@, the check is possible even though the value is unknown.
\begin{cquote}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// compile unit 1
extern @const@ int n;
extern float g();
void f() {
    float x[n] = { g() };
    float (*xp)[n] = &x; // accept
}
\end{cfa}
&
\begin{cfa}
// compile unit 2
@const@ int n = 42;
float g() {
    @n = 999@; // reject
    return 0.;
}

\end{cfa}
\end{tabular}
\end{cquote}
586
587In summary, the new rules classify expressions into three groups:
588\begin{description}
589\item[Statically Evaluable]
590 Expressions for which a specific value can be calculated (conservatively) at compile-time.
591 A preexisting \CFA compiler module defines which literals, enumerators, and expressions qualify and evaluates them.
592\item[Dynamic but Stable]
593 The value of a variable declared as @const@, including a @const@ parameter.
594\item[Potentially Unstable]
595 The catch-all category. Notable examples include:
	any function-call result, \eg @float x[foo()]@; the particular function-call result that is a pointer dereference, \eg @void f(const int * n)@ @{ float x[*n]; }@; and any use of a reference-typed variable.
597\end{description}
598Within these groups, my \CFA rules are:
599\begin{itemize}[leftmargin=*]
600\item
601 Accept a Statically Evaluable pair, if both expressions have the same value.
602 Notably, this rule allows a literal to match with an enumeration value, based on the value.
603\item
	Accept a Dynamic but Stable pair, if both expressions are written out the same, \ie both refer to the same variable declaration.
605\item
606 Otherwise, reject.
607 Notably, reject all pairs from the Potentially Unstable group and all pairs that cross groups.
608\end{itemize}
609The traditional C rules are:
610\begin{itemize}[leftmargin=*]
611\item
612 Reject a Statically Evaluable pair, if the expressions have two different values.
613\item
614 Otherwise, accept.
615\end{itemize}
616\VRef[Figure]{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that the \CFA false alarms occur only in cases where C allows misuse.
It also shows that C-incompatibilities occur only in those same unsafe cases.
619
620\begin{figure}
621 \newcommand{\falsealarm}{{\color{blue}\small{*}}}
622 \newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
623
624 \begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
625 & \multicolumn{2}{c}{\underline{Values Equal}}
626 & \multicolumn{2}{c}{\underline{Values Unequal}}
627 & \\
628 \textbf{Case} & C & \CFA & C & \CFA & Compat. \\
629 Both Statically Evaluable, Same Symbol & Accept & Accept & & & \cmark \\
630 Both Statically Evaluable, Different Symbols & Accept & Accept & Reject & Reject & \cmark \\
631 Both Dynamic but Stable, Same Symbol & Accept & Accept & & & \cmark \\
632 Both Dynamic but Stable, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
633 Both Potentially Unstable, Same Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
634 Any other grouping, Different Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark
635 \end{tabular}
636
637 \medskip
638 \noindent\textbf{Legend}
639 \begin{itemize}[leftmargin=*]
640 \item
641 Each row gives the treatment of a test harness of the form
642 \begin{cfa}
643 float x[ expr1 ];
644 float (*xp)[ expr2 ] = &x;
645 \end{cfa}
646 \vspace*{-10pt}
647 where \lstinline{expr1} and \lstinline{expr2} are meta-variables varying according to the row's Case.
648 Each row's claim applies to other harnesses too, including,
649 \begin{itemize}[leftmargin=*]
650 \item
651 calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
652 \item
653 assignment in place of initialization,
654 \item
655 using references in place of pointers, and
656 \item
657 for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
658 \end{itemize}
659 \item
660 Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
661 even though most test harnesses are asymmetric.
662 \item
663 The table treats symbolic identity (Same/Different on rows)
664 apart from value equality (Equal/Unequal on columns).
665 \begin{itemize}[leftmargin=*]
666 \item
667 The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
668 (where \lstinline{n} is a variable with value 1),
669 are all different symbols with the value 1.
670 \item
671 The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
672 \item
673 The row distinction expresses the simple static factors used by today's analyses.
674 \end{itemize}
675 \item
676 Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
677 while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
678 \end{itemize}
679
680 \caption{Case comparison for array type compatibility, given pairs of dimension expressions.}
681 \label{f:DimexprRuleCompare}
682\end{figure}
683
684\begin{comment}
685Given that the above check
686\begin{itemize}
687 \item
688 Is internal type @char[999]@ type-compatible with internal type @char[n]@?
689\end{itemize}
690answers false, discussion turns to how I got the \CFA compiler to produce this answer.
691In its preexisting form, the type system had a buggy approximation of the C rules.
692To implement the new \CFA rules, I added one further step.
693\begin{itemize}
694 \item
695 Is @999@ compatible with @n@?
696\end{itemize}
697This question applies to a pair of expressions, where the earlier question applies to types.
698An expression-compatibility question is a new addition to the \CFA compiler, and occurs in the context of dimension expressions, and possibly enumerations assigns, which must be unique.
699
700% ... ensure these compiler implementation matters are treated under \CFA compiler background: type unification, cost calculation, GenPoly.
701
702The relevant technical component of the \CFA compiler is the standard type-unification within the type resolver.
703\begin{cfa}
704example
705\end{cfa}
706I added rules for continuing this unification into expressions that occur within types.
707It is still fundamentally doing \emph{type} unification because it is participating in binding type variables, and not participating in binding any variables that stand in for expression fragments (for there is no such sort of variable in \CFA's analysis.)
708An unfortunate fact about the \CFA compiler's preexisting implementation is that type unification suffers from two forms of duplication.
709
710In detail, the first duplication has (many of) the unification rules stated twice.
711As a result, my additions for dimension expressions are stated twice.
712The extra statement of the rules occurs in the @GenPoly@ module, where concrete types like @array( int, 5 )@\footnote{
713 Again, the presentation is simplified
714 by leaving the \lstinline{array} macro unexpanded.}
715are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
716In this case, the struct's definition contains fields that hardcode the argument values of @float@ and @5@.
717The next time an @array( -, - )@ concrete instance is encountered, it checks if the previous @struct __conc_array_1234@ is suitable for it.
718Yes, for another occurrence of @array( int, 5 )@;
719no, for examples like @array( int, 42 )@ or @array( rational(int), 5 )@.
720In the first example, it must reject the reuse hypothesis for a dimension-@5@ and a dimension-@42@ instance.
721
722The second duplication has unification (proper) being invoked at two stages of expression resolution.
723As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
724In the program
725\begin{cfa}
726void @f@( double ); // overload
727forall( T & ) void @f@( T & ); // overload
728void g( int n ) {
729 array( float, n + 1 ) x;
730 f(x); // overloaded
731}
732\end{cfa}
733when resolving a function call to @g@, the first unification stage compares the type @T@ of the parameter with @array( float, n + 1 )@, of the argument.
734
735The actual rules for comparing two dimension expressions are conservative.
736To answer, ``yes, consider this pair of expressions to be matching,''
737is to imply, ``all else being equal, allow an array with length calculated by $e_1$
738to be passed to a function expecting a length-$e_2$ array.''\footnote{
739 ... Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
740 Should it be an earlier scoping principle? Feels like it should matter in more places than here.}
741So, a ``yes'' answer must represent a guarantee that both expressions evaluate the
742same result, while a ``no'' can tolerate ``they might, but we're not sure'',
743provided that practical recourses are available
744to let programmers express better knowledge.
745The new rule-set in the current release is, in fact, extremely conservative.
746I chose to keep things simple,
747and allow future needs to drive adding additional complexity, within the new framework.
748
749For starters, the original motivating example's rejection is not based on knowledge that the @xp@ length of (the literal) 999 is value-unequal to the (obvious) runtime value of the variable @n@, which is the @x@ length.
750Rather, the analysis assumes a variable's value can be anything, and so there can be no guarantee that its value is 999.
751So, a variable and a literal can never match.
752
753... Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation
754\end{comment}
755
The conservatism of the new rule set can leave a programmer needing a recourse, when using a dimension expression whose stability argument is more subtle than the current analysis can follow.
757This recourse is to declare an explicit constant for the dimension value.
758Consider these two dimension expressions, whose uses are rejected by the blunt current-state rules:
759\begin{cfa}
760void f( int @&@ nr, @const@ int nv ) {
761 float x[@nr@];
762 float (*xp)[@nr@] = &x; // reject: nr varying (no references)
763 float y[@nv + 1@];
764 float (*yp)[@nv + 1@] = &y; // reject: ?+? unpredictable (no functions)
765}
766\end{cfa}
Yet, both dimension expressions are reused safely.
The @nr@ reference is never written through, no implicit code (loads) runs between the declarations, and control does not leave the function between the uses.
As well, the built-in @?+?@ function is predictable.
To make these cases work, the programmer must add the following constant declarations (a cast does not work):
771\begin{cfa}
772void f( int & nr, const int nv ) {
773 @const int nx@ = nr;
774 float x[nx];
775 float (*xp)[nx] = & x; // accept
776 @const int ny@ = nv + 1;
777 float y[ny];
778 float (*yp)[ny] = & y; // accept
779}
780\end{cfa}
781The result is the originally intended semantics,
782achieved by adding a superfluous ``snapshot it as of now'' directive.
783
784The snapshot trick is also used by the \CFA translation, though to achieve a different outcome.
785Rather obviously, every array must be subscriptable, even a bizarre one:
786\begin{cfa}
787array( float, @rand(10)@ ) x;
788x[@0@]; // 10% chance of bound-check failure
789\end{cfa}
790Less obvious is that the mechanism of subscripting is a function call, which must communicate length accurately.
791The bound-check above (callee logic) must use the actual allocated length of @x@, without mistakenly reevaluating the dimension expression, @rand(10)@.
792Adjusting the example to make the function's use of length more explicit:
793\begin{cfa}
794forall( T * )
795void f( T * x ) { sout | sizeof( *x ); }
796float x[ rand(10) ];
797f( x );
798\end{cfa}
799Considering that the partly translated function declaration is, loosely,
800\begin{cfa}
801void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
802\end{cfa}
a naive translation evaluates the dimension expression twice:
804\begin{cfa}
805float x[ rand(10) ];
806f( rand(10), &x );
807\end{cfa}
808The correct form is:
809\begin{cfa}
810size_t __dim_x = rand(10);
811float x[ __dim_x ];
812f( __dim_x, &x );
813\end{cfa}
814Dimension hoisting already existed in the \CFA compiler.
However, it was buggy, particularly in determining, ``Can hoisting the expression be skipped here?'', for skipping this hoisting is clearly desirable in some cases.
For example, a programmer may have already hoisted the expression to preclude duplicate code and/or duplicate evaluation.
817In the new implementation, these cases are correct, harmonized with the accept/reject criteria.
818
819
820\section{Multidimensional Array Implementation}
821\label{s:ArrayMdImpl}
822
823A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
824\begin{enumerate}[leftmargin=*]
825\item
826Flexible-stride memory:
827this model has complete independence between subscript ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a row, column, or plane, \eg @c[3][4][*]@, @c[3][*][5]@, @c[3][*][*]@.
828\item
829Fixed-stride memory:
830this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
831After which, subscripting and memory layout are independent.
832\item
833Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
835A programmer must manually build any notion of dimensions using other tools;
836hence, this style is not offering multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
837\end{enumerate}
838
There is some debate as to whether the abstraction ordering above is $\{1, 2\} < 3$, rather than the numeric ordering $1 < 2 < 3$.
That is, styles 1 and 2 are at the same abstraction level, with 3 offering a more limited set of functionality.
841I chose to build the \CFA style-1 array upon a style-2 abstraction.
842(Justification of the decision follows, after the description of the design.)
843
844Style 3 is the inevitable target of any array implementation.
845The hardware offers this model to the C compiler, with bytes as the unit of displacement.
846C offers this model to programmers as pointer arithmetic, with arbitrary sizes as the unit.
847Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
848
To step into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that the interface is low-level, \ie a C/\CFA array interface includes the resulting memory layout.
Specifically, the defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
Hence, the required memory shape of such a slice is fixed, before any discussion of implementation.
852The implementation presented here is how the \CFA array-library wrangles the C type system to make it do memory steps that are consistent with this layout while not affecting legacy C programs.
853% TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
854
855The new \CFA standard-library @array@-datatype supports richer multidimensional features than C.
856The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
857Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
858
The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$ when stepping across the length-7 finely-strided column dimension, and incremented by $1$ when stepping across the length-5 coarsely-strided row dimension.
860\par
861\mbox{\lstinput{121-126}{hello-md.cfa}}
862\par\noindent
863The memory layout is 35 contiguous elements with strictly increasing addresses.
864
865A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
866As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
867This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
868The following is an example slicing a row.
869\lstinput{60-64}{hello-md.cfa}
870\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
871
872However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
873Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
874That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
875The new-array library provides the trait @ar@, so-defined.
876With it, the original declaration can be generalized with the same body.
877\lstinput{43-44}{hello-md.cfa}
878\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
879The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
880In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value'', waiting for a value implementing the @ar@ trait.
881\lstinput{150-151}{hello-md.cfa}
882
883The example shows @x[2]@ and @x[[2, all]]@ both refer to the same, ``2.*'' slice.
884Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
885This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
886That is:
887\begin{cquote}
888\begin{tabular}{@{}cccccl@{}}
889@x[[2,all]][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
890@x[[all,3]][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
891\end{tabular}
892\end{cquote}
893
894Narrating progress through each of the @-[-][-][-]@\footnote{
895The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
896expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
897Where @all@ is redundant:
898\begin{cquote}
899\begin{tabular}{@{}ll@{}}
900@x@ & 2-dimensional, want subscripts for coarse then fine \\
901@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
902@x[2][all]@ & 1-dimensional, want subscript for fine \\
903@x[2][all][3]@ & 0-dimensional; lock fine == 3
904\end{tabular}
905\end{cquote}
906Where @all@ is effective:
907\begin{cquote}
908\begin{tabular}{@{}ll@{}}
909@x@ & 2-dimensional, want subscripts for coarse then fine \\
910@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
911@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
912@x[all][3][2]@ & 0-dimensional; lock coarse == 2
913\end{tabular}
914\end{cquote}
915The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
For example, in a two-dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
917The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
918
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
920The \lstinline{-[all]} operation takes subscripts, \eg @x[2][all]@, @x[all][3]@, \etc, and converts (transposes) from the base reference @x[all]@ to a specific reference of the appropriate form.
921The running example's @all@-effective step, stated more concretely, is:
922\begin{cquote}
923\begin{tabular}{@{}ll@{}}
924@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @floats@ apart \\
925@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
926\end{tabular}
927\end{cquote}
928
929\begin{figure}
930\includegraphics{measuring-like-layout}
931\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
932The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
933The vertical shaded band highlights the location of the targeted element, 2.3.
934Any such vertical slice contains various interpretations of a single address.}
935\label{fig:subscr-all}
936\end{figure}
937
938\VRef[Figure]{fig:subscr-all} shows one element (in the white band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
939In both cases, subscript 2 selects from the coarser dimension (rows of @x@),
940while subscript 3 selects from the finer dimension (columns of @x@).
941The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
942Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
943These two starting expressions, which are the example's only multidimensional subexpressions
944(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
945
The figure's presentation offers an intuitive answer to the question: What is an atomic element of @x[all]@?
947From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
948An atom (like the bottommost value, @x[all][3][2]@), is the contained value (in the square box)
949and a lie about its size (the left diagonal above it, growing upward).
950An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
951call such an array a column.
952A column is almost ready to be arranged into a matrix;
953it is the \emph{contained value} of the next-level building block, but another lie about size is required.
954At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
955These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
956Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
Yet every time the programmer presents an index, a C-array subscript performs the offset calculation.
958
959In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
960the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
961compared with where analogous rows appear when the row-level option is presented for @x@.
For example, in \lstinline{x[all]}, the shaded band and its immediate values to the left touch atoms 2.3, 2.0, 2.1, 2.2, 1.4, 1.5 and 1.6.
963But only the atom 2.3 is storing its value there.
964The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
965
966Lying is implemented as casting.
967The arrangement just described is implemented in the structure @arpk@.
968This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
969The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
970The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
971
972% Does x[all] have to lie too? The picture currently glosses over how it it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
973
974Casting, overlapping, and lying are unsafe.
975The mission is to implement a style-1 feature in the type system for safe use by a programmer.
976The offered style-1 system is allowed to be internally unsafe,
977just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
978Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
979
980% PAB: repeat from previous paragraph.
981% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
982% My casting is unsafe, but I do not do any pointer arithmetic.
983% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of \VRef[Figure]{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
984% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
985
986% \noindent END: Paste looking for a home
987
988The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
989The @arpk@ structure and its @-[i]@ operator are defined as:
990\begin{cfa}
991forall(
992 [N], $\C{// length of current dimension}$
993 S & | sized(S), $\C{// masquerading-as}$
994 Timmed &, $\C{// immediate element, often another array}$
995 Tbase & $\C{// base element, \eg float, never array}$
996) { // distribute forall to each element
997 struct arpk {
998 S strides[N]; $\C{// so that sizeof(this) is N of S}$
999 };
1000 // expose Timmed, stride by S
1001 static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
1002 subcheck( a, i, 0, N );
1003 return (Timmed &)a.strides[i];
1004 }
1005}
1006\end{cfa}
The private @arpk@ structure (array with explicit packing) is generic over four types: dimension length, masquerading-as, immediate element, and base element.
1008This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
1009Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
1010Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
1011Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
1012
1013An instantiation of the @arpk@ generic is given by the @array( E_base, N0, N1, ... )@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array( E_base, N1, ... )@.
1014In the base case, @array( E_base )@ is just @E_base@.
Because this construction uses the same value for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
1016
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
1018Expressed as an operation on types, this rotation is:
1019\begin{eqnarray*}
1020suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
1021enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
1022enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
1023\end{eqnarray*}
1024
1025
1026\section{Zero Overhead}
1027
1028At runtime, the \CFA array is exactly a C array.
1029\CFA array subscripting is protected with runtime bound checks.
The array's dependent typing provides information to the C optimizer, allowing it to remove many of the bound checks.
1031This section provides a demonstration of these effects.
1032
1033The experiment compares the \CFA array system with the simple-safety system most typically exemplified by Java arrays (\VRef[Section]{JavaCompare}), but also reflected in the \CC pattern where restricted vector usage models a checked array (\VRef[Section]{CppCompare}).
1034The essential feature of this simple system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
1035The experiment uses the \CC version to simplify access to generated assembly code.
While ``\CC'' labels a participant, it is really the simple-safety system (of @vector@ with @.at@) whose limitations are being explained, and not limitations of \CC optimization.
1037
1038As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
1039Firstly, when the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
1041
1042\begin{figure}
1043\setlength{\tabcolsep}{10pt}
1044\begin{tabular}{llll@{}}
1045\multicolumn{1}{c}{C} &
1046\multicolumn{1}{c}{\CFA} &
1047\multicolumn{1}{c}{\CC (nested vector)}
1048\\[1em]
1049\lstinput{20-28}{ar-bchk/control.c} &
1050\lstinput{20-28}{ar-bchk/control.cfa} &
1051\lstinput{20-28}{ar-bchk/control.cc}
1052\\
1053\multicolumn{1}{c}{(a)} &
1054\multicolumn{1}{c}{(b)} &
1055\multicolumn{1}{c}{(c)}
1056\\[1em]
1057\multicolumn{4}{l}{
1058 \includegraphics[page=1,trim=0 9.125in 2in 0,clip]{ar-bchk}
1059}
1060\\
1061\multicolumn{1}{c}{(d)} &
1062\multicolumn{1}{c}{(e)} &
1063\multicolumn{1}{c}{(f)}
1064\\[1em]
1065\multicolumn{4}{l}{
1066 \includegraphics[page=2,trim=0 8.75in 2in 0,clip]{ar-bchk}
1067}
1068\\
1069\multicolumn{1}{c}{(g)} &
1070\multicolumn{1}{c}{(h)} &
1071\multicolumn{1}{c}{(i)}
1072\end{tabular}
1073\caption{Overhead comparison, control case.
1074All three programs/compilers generate equally performant code when the programmer ``self-checks'' bounds via a loop's condition.
1075Yet both \CFA and \CC versions generate code that is slower and safer than C's when the programmer might overrun the bounds.}
1076\label{f:ovhd-ctl}
1077\end{figure}
1078
\VRef[Figure]{f:ovhd-ctl} gives a control-case example of summing values in an array.
Each column shows this program in a different language: C (with no bound checking ever, a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
1081The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
1082When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
1083When compiling for incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
1084The assembly code excerpts show optimized translations, for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
1085Because incorrect-bound mode hardcodes 100 iterations, the loop always executes a first time, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
For C, this is the only (d--g) difference.
1087For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference.
1088It is practically so for \CC too, modulo the additional indirection of dereferencing into the vector's auxiliary allocation.
1089
1090Differences arise when the bound-incorrect programs are passed an array shorter than 100.
1091The C version, (g), proceeds with undefined behaviour, reading past the end of the passed array.
1092The \CFA and \CC versions, (h, i), flag the mistake with the runtime check, the @i >= n@ branch.
1093This check is provided implicitly by their library types, @array@ and @vector@ respectively.
1094The significant result here is these runtime checks being \emph{absent} from the bound-correct translations, (e, f).
The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \ie @i < n@ either way.
Hence, a repeat of this check, its branch, and the consequent error-raising code are all removable.
1097
1098Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
1099Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.
1100
When dimension sizes are reused, \CFA has an advantage over the \CC vector at getting simply written programs well optimized.
1102
1103\begin{figure}
1104\setlength{\tabcolsep}{10pt}
1105\begin{tabular}{llll@{}}
1106\multicolumn{1}{c}{C} &
1107\multicolumn{1}{c}{\CFA} &
1108\multicolumn{1}{c}{\CC (nested vector)}
1109\\[1em]
1110\lstinput{20-37}{ar-bchk/treatment.c} &
1111\lstinput{20-37}{ar-bchk/treatment.cfa} &
1112\lstinput{20-37}{ar-bchk/treatment.cc}
1113\end{tabular}
1114\caption{Overhead comparison, differentiation case, source.
1115}
1116\label{f:ovhd-treat-src}
1117\end{figure}
1118
1119\begin{figure}
1120\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
1121\begin{tabular}{C{1.5in}C{1.5in}C{1.5in}p{1in}}
1122C &
1123\CFA &
1124\CC (nested vector)
1125\\[1em]
1126\multicolumn{4}{l}{
1127 \includegraphics[page=3,trim=0 4in 2in 0,clip]{ar-bchk}
1128}
1129\end{tabular}
1130\caption{Overhead comparison, differentiation case, assembly.
1131}
1132\label{f:ovhd-treat-asm}
1133\end{figure}
1134
1135The example case of \VRef[Figures]{f:ovhd-treat-src} and \VRef{f:ovhd-treat-asm} is simple matrix multiplication over a row-major encoding.
1136Simple means coding directly to the intuition of the mathematical definition, without trying to optimize for memory layout.
In the assembly code of \VRef[Figure]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip ahead on zero; loop back for next,'' occurs with three nesting levels.
1138Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@.
1139
Here, \VRef[Figure]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
As in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code works through the \CFA @array@ library.
The optimizer removed the library-imposed checks because the data structure @array@-of-@array@ is constrained by its type to be shaped correctly for the intuitive looping.
1143In \CC, the same constraint does not apply to @vector@-of-@vector@.
1144Because every individual @vector@ carries its own size, two types of bound mistakes are possible.
1145
1146Firstly, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
1147The \CFA solution guards against this possibility by encoding the minor length (number of columns) in the major element (row) type.
1148In @res@, for example, each of the @M@ rows is @array(float, N)@, guaranteeing @N@ cells within it.
1149Or more technically, guaranteeing @N@ as the basis for the imposed bound check \emph{of every row.}
1150
1151The second type of \CC bound mistake is that its types do not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
1152Even assuming away the first issue, \eg that in @lhs@, all minor/cell counts agree, the data format allows the @rhs@ major/row count to disagree.
1153Or, the possibility that the row count @res.size()@ disagrees with the row count @lhs.size()@ illustrates this bound-mistake type in isolation.
1154The \CFA solution guards against this possibility by keeping length information separate from the array data, and therefore eligible for sharing.
This capability lets \CFA escape the one-to-one correspondence between array instances and symbolic bounds, a correspondence that leaves a \CC-vector programmer stuck with a matrix representation that repeats itself.
1156
1157It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it does become fast (as C) in losing them.
The dynamic checks were dismissed as unnecessary \emph{because} the program was safe to begin with.
1159
To regain performance, a \CC programmer must state appropriate assertions or assumptions to convince the optimizer to dismiss the runtime checks, two of which sit in the inner-most loop.
1162The solution is nontrivial.
1163It requires doing the work of the inner-loop checks as a preflight step.
1164But this work still takes looping; doing it upfront gives too much separation for the optimizer to see ``has been checked already'' in the deep loop.
1165So, the programmer must restate the preflight observation within the deep loop, but this time as an unchecked assumption.
1166Such assumptions are risky because they introduce further undefined behaviour when misused.
1167Only the programmer's discipline remains to ensure this work is done without error.
1168
1169The \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.
1170
1171\begin{comment}
1172The ragged-right issue brings with it a source-of-truth difficulty: Where, in the \CC version, is one to find the value of $N$? $M$ can come from either @res@'s or @lhs@'s major/row count, and checking these for equality is straightforward. $P$ can come from @rhs@'s major/row count. But $N$ is only available from columns, \ie minor/cell counts, which are ragged. So any choice of initial source of truth, \eg
1173\end{comment}
1174
1175\section{Array Lifecycle}
1176
1177An array, like a structure, wraps subordinate objects.
1178Any object type, like @string@, can be an array element or structure member.
A consequence is that the lifetime of the container must match that of its subordinate objects: all elements and members must be initialized/uninitialized implicitly as part of the container's allocation/deallocation.
1180Modern programming languages implicitly perform these operations via a type's constructor and destructor.
1181Therefore, \CFA must assure that an array's subordinate objects' lifetime operations are called.
1182Preexisting \CFA mechanisms achieve this requirement, but with poor performance.
1183Furthermore, advanced array users need an exception to the basic mechanism, which does not occur with other aggregates.
1184Hence, arrays introduce subtleties in supporting an element's lifecycle.
1185
1186The preexisting \CFA support for contained-element lifecycle is based on recursive occurrences of the object-type (otype) pseudo-trait.
1187A type is an otype, if it provides a default (parameterless) constructor, copy constructor, assignment operator, and destructor (like \CC).
1188For a structure with otype members, the compiler implicitly generates implementations of the four otype functions for the outer structure.
1189Then the generated default constructor for the outer structure calls the default constructor for each member, and the other otype functions work similarly.
1190For a member that is a C array, these calls occur in a loop for each array element, which even works for VLAs.
1191This logic works the same, whether the member is a concrete type (that happens to be an otype) or if the member is a polymorphic type asserted to be an otype (which is implicit in the syntax, @forall(T)@).
1192The \CFA array has the simplified form (similar to one seen before):
1193\begin{cfa}
1194forall( T * ) // non-otype element, no lifecycle functions
1195// forall( T ) // otype element, lifecycle functions asserted
1196struct array5 {
1197 T __items[ 5 ];
1198};
1199\end{cfa}
Because @array5@ is a structure with a C-array member, the otype form of the declaration, @forall( T )@, causes @array5( float )@ to be an otype too.
1201But this otype-recursion pattern has a performance issue.
1202\VRef[Figure]{f:OtypeRecursionBlowup} shows the steps to find lifecycle functions, under the otype-recursion pattern, for a cube of @float@:
1203\begin{cfa}
1204array5( array5( array5( float ) ) )
1205\end{cfa}
The work needed for the full @float@-cube is 256 leaves.
Thus, the otype-recursion pattern generates helper functions and thunks whose number is exponential in the number of dimensions.
Anecdotally, compile time is annoyingly slow at three dimensions and unusable at four.
1209
1210%array5(T) offers
1211%1 parameterless ctor, which asks for T to have
1212% 1 parameterless ctor
1213% 2 copy ctor
1214% 3 asgt
1215% 4 dtor
1216%2 copy ctor, which asks for T to have
1217% 1 parameterless ctor
1218% 2 copy ctor
1219% 3 asgt
1220% 4 dtor
1221%3 asgt, which asks for T to have
1222% 1 parameterless ctor
1223% 2 copy ctor
1224% 3 asgt
1225% 4 dtor
1226%4 dtor, which asks for T to have
1227% 1 parameterless ctor
1228% 2 copy ctor
1229% 3 asgt
1230% 4 dtor
1231
1232\begin{figure}
1233\centering
1234\setlength{\tabcolsep}{20pt}
1235\begin{tabular}{@{}lll@{}}
1236\begin{cfa}[deletekeywords={default}]
1237float offers
12381 default ctor
12392 copy ctor
12403 assignment
12414 dtor
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266\end{cfa}
1267&
1268\begin{cfa}[deletekeywords={default}]
1269array5( float ) has
12701 default ctor, using float$'$s
1271 1 default ctor
1272 2 copy ctor
1273 3 assignment
1274 4 dtor
12752 copy ctor, using float$'$s
1276 1 default ctor
1277 2 copy ctor
1278 3 assignment
1279 4 dtor
12803 assignment, using float$'$s
1281 1 default ctor
1282 2 copy ctor
1283 3 assignment
1284 4 dtor
12854 dtor, using float$'$s
1286 1 default ctor
1287 2 copy ctor
1288 3 assignment
1289 4 dtor
1290
1291
1292
1293
1294
1295
1296
1297
1298\end{cfa}
1299&
1300\begin{cfa}[deletekeywords={default}]
1301array5( array5( float ) ) has
13021 default ctor, using array5( float )
1303 1 default ctor, using float$'$s
1304 1 default ctor
1305 2 copy ctor
1306 3 assignment
1307 4 dtor
1308 2 copy ctor, using float$'$s
1309 1 default ctor
1310 2 copy ctor
1311 3 assignment
1312 4 dtor
1313 3 assignment, using float$'$s
1314 1 default ctor
1315 2 copy ctor
1316 3 assignment
1317 4 dtor
1318 4 dtor, using float$'$s
1319 1 default ctor
1320 2 copy ctor
1321 3 assignment
1322 4 dtor
13232 copy ctor, using array5( float )
1324 ... 4 children, 16 leaves
13253 assignment, using array5( float )
1326 ... 4 children, 16 leaves
13274 dtor, using array5( float )
1328 ... 4 children, 16 leaves
1329(64 leaves)
1330\end{cfa}
1331\end{tabular}
1332\caption{Exponential thunk generation under the otype-recursion pattern.
1333 Each time one type's function (\eg ctor) uses another type's function, the \CFA compiler generates a thunk, to capture the used function's dependencies, presented according to the using function's need.
1334 So, each non-leaf line represents a generated thunk and every line represents a search request for the resolver to find a satisfying function.}
1335\label{f:OtypeRecursionBlowup}
1336\end{figure}
1337
1338The issue is that the otype-recursion pattern uses more assertions than needed.
1339For example, @array5( float )@'s default constructor has all four lifecycle assertions about the element type, @float@.
1340However, it only needs @float@'s default constructor, as the other operations are never used.
1341Current work by the \CFA team aims to improve this situation.
1342Therefore, I had to construct a workaround.
1343
1344My workaround moves from otype (value) to dtype (pointer) with a default-constructor assertion, where dtype does not generate any constructors but the assertion claws back the default otype constructor.
1345\begin{cquote}
1346\setlength{\tabcolsep}{10pt}
1347\begin{tabular}{@{}ll@{}}
1348\begin{cfa}
1349// autogenerated for otype-recursion:
1350forall( T )
1351void ?{}( array5( T ) & this ) {
1352 for ( i; 5 ) { ( this[i] ){}; }
1353}
1354forall( T )
1355void ?{}( array5( T ) & this, array5( T ) & src ) {
1356 for ( i; 5 ) { ( this[i] ){ src[i] }; }
1357}
1358forall( T )
1359void ^?{}( array5( T ) & this ) {
1360 for ( i; 5 ) { ^( this[i] ){}; }
1361}
1362\end{cfa}
1363&
1364\begin{cfa}
1365// lean, bespoke:
1366forall( T* | { void @?{}( T & )@; } )
1367void ?{}( array5( T ) & this ) {
1368 for ( i; 5 ) { ( this[i] ){}; }
1369}
1370forall( T* | { void @?{}( T &, T )@; } )
1371void ?{}( array5( T ) & this, array5( T ) & src ) {
1372 for ( i; 5 ) { ( this[i] ){ src[i] }; }
1373}
1374forall( T* | { void @?{}( T & )@; } )
1375void ^?{}( array5(T) & this ) {
1376 for (i; 5) { ^( this[i] ){}; }
1377}
1378\end{cfa}
1379\end{tabular}
1380\end{cquote}
1381Moreover, the assignment operator is skipped, to avoid hitting a lingering growth case.
1382Temporarily skipping assignment is tolerable because array assignment is not a common operation.
1383With this solution, the critical lifecycle functions are available, with linear growth in thunk creation for the default constructor.
1384
1385Finally, the intuition that a programmer using an array always wants the elements' default constructor called \emph{automatically} is simplistic.
1386Arrays exist to store different values at each position.
1387So, array initialization needs to let the programmer provide different constructor arguments to each element.
1388\begin{cfa}
1389thread worker { int id; };
1390void ?{}( worker & ) = void; // remove default constructor
1391void ?{}( worker &, int id );
1392array( worker, 5 ) ws = @{}@; // rejected; but desire is for no initialization yet
1393for ( i; 5 ) (ws[i]){ @i@ }; // explicitly initialize each thread, giving id
1394\end{cfa}
1395Note the use of the \CFA explicit constructor call, analogous to \CC's placement-@new@.
1396This call is where initialization is desired, and not at the declaration of @ws@.
1397The attempt to initialize from nothing (equivalent to dropping @= {}@ altogether) is invalid because the @worker@ type removes the default constructor.
1398The @worker@ type is designed this way to work with the threading system.
1399A thread type forks a thread at the end of each constructor and joins with it at the start of each destructor.
1400But a @worker@ cannot begin its forked-thread work without knowing its @id@.
1401Therefore, there is a conflict between the implicit actions of the builtin @thread@ type and a user's desire to defer these actions.
1402
Another \CFA feature for providing C backwards compatibility at first seems viable for initializing the array @ws@, though on closer inspection it is not.
1404C initialization without a constructor is specified with \lstinline|@=|, \eg \lstinline|array(worker, 5) ws @= {}| ignores all \CFA lifecycle management and uses C empty initialization.
1405This option does achieve the desired semantics on the construction side.
1406But on destruction side, the desired semantics is for implicit destructor calls to continue, to keep the join operation tied to lexical scope.
1407C initialization disables \emph{all} implicit lifecycle management, but the goal is to disable only the implicit construction.
1408
1409To fix this problem, I enhanced the \CFA standard library to provide the missing semantics, available in either form:
1410\begin{cfa}
1411array( @uninit@(worker), 5 ) ws1;
1412array( worker, 5) ws2 = { @delay_init@ };
1413\end{cfa}
1414Both cause the @ws@-construction-time implicit call chain to stop before reaching a @worker@ constructor, while leaving the implicit destruction calls intact.
1415Two forms are available, to parallel the development of this feature in \uCpp~\cite{uC++}.
1416Originally \uCpp offered only the @ws1@ form, using the class-template @uNoCtor@ equivalent to \CFA's @uninit@.
1417More recently, \uCpp was extended with the declaration macro, @uArray@, with usage similar to the @ws2@ case.
1418Based on experience piloting @uArray@ as a replacement of @uNoCtor@, it should be possible to remove the first option.
1419
1420% note to Mike, I have more fragments on some details available in push24\fragments\uNoCtor.txt
1421
1422\section{Array Comparison}
1423
1424\subsection{Rust}
1425
1426A Rust array is a set of mutable or immutable (@const@) contiguous objects of the same type @T@, where the compile-time dimension(s) is part of the type signature @[T; dimension]@.
1427\begin{rust}
1428let val = 42;
1429let mut ia: [usize; 5] = [val; 5]; $\C{// mutable, repeated initialization [42, 42, 42, 42, 42]}$
1430let fval = 42.2;
1431let fa: [f64; 5] = [1.2, fval, 5.6, 7.3, 9.1]; $\C{// immutable, individual initialization}$
1432\end{rust}
Initialization is required at the declaration, even if the array is subsequently assigned.
1434Rust arrays are not VLAs, but the compile-time length can be queried using member @len()@.
Arrays can be assigned and passed to parameters by value or reference if and only if the type and dimension match, meaning a different function is needed for each array size.
1436
1437Array iteration is by a range or array variable.
1438\begin{rust}
1439for i in @0..ia.len()@ { print!("{:?} ", ia[i] ); } $\C{// 42 42 42 42 42}$
1440for fv in @fa@ { print!("{:?} ", fv ); } $\C{// 1.2 42.2 5.6 7.3 9.1}$
1441for i in 0..=1 { ia[i] = i; } $\C{// mutable changes}$
1442for iv in ia { print!("{:?} ", iv ); } $\C{// 0 1 42 42 42}$
1443\end{rust}
An array can be assigned and printed as a whole.
1445\begin{rust}
1446ia = @[5; 5]@; println!( "{:?} {:?}", @ia@, ia.len() ); $\C{// [5, 5, 5, 5, 5] 5}$
1447ia = @[1, 2, 3, 4, 5]@; println!( "{:?} {:?}", @ia@, ia.len() ); $\C{// [1, 2, 3, 4, 5] 5}$
1448\end{rust}
1449
1450Multi-dimensional arrays use nested dimensions, resulting in columns before rows.
1451Here the outer array is a length @ROWS@ array whose elements are @f64@ arrays of length @COLS@ each.
1452\begin{cquote}
1453\setlength{\tabcolsep}{10pt}
1454\begin{tabular}{@{}ll@{}}
1455\begin{rust}
1456const ROWS: usize = 4; const COLS: usize = 6;
1457let mut fmatrix: [[f64; @COLS@]; @ROWS@] = [[0.; @COLS@]; @ROWS@];
1458for r in 0 .. ROWS {
1459 for c in 0 .. COLS { fmatrix[r][c] = r as f64 + c as f64; } }
1460\end{rust}
1461&
1462\begin{rust}
1463[ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 ]
1464[ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 ]
1465[ 2.0, 3.0, 4.0, 5.0, 6.0, 7.0 ]
1466[ 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 ]
1467\end{rust}
1468\end{tabular}
1469\end{cquote}
1470
1471Slices borrow a section of an array (subarray) and have type @&[T]@.
1472A slice has a dynamic size and does not implicitly coerce to an array.
1473\begin{rust}
1474println!( "{:?}", @&ia[0 .. 3]@ ); $\C{// fixed bounds, [1, 2, 3]}$
1475println!( "{:?}", @&ia[ia.len() - 2 .. ia.len()@] ); $\C{// variable bounds, [4, 5]}$
1476println!( "{:?}", @&ia[ia.len() - 2 .. ]@ ); $\C{// drop upper bound, [4, 5]}$
1477println!( "{:?}", @&ia[.. ia.len() - 2 ]@ ); $\C{// drop lower bound, [1, 2, 3]}$
println!( "{:?}", @&ia[..]@ ); $\C{// drop both bounds, [1, 2, 3, 4, 5]}$
1479\end{rust}
1480A multi-dimensional slice can only borrow subrows because a slice requires contiguous memory.
1481Here columns 2--4 are sliced from row 3.
1482\begin{rust}
1483let mut slice2: &[f64] = &fmatrix[3][@2 ..= 4@];
1484println!( "{:?}", slice2 ); $\C{// [5.0, 6.0, 7.0]}$
1485\end{rust}
1486Here row 2 is sliced from the sub-matrix formed from rows 1--3.
1487\begin{rust}
1488slice2 = &fmatrix[@1 ..= 3@][1];
1489println!( "{:?}", slice2 ); $\C{// [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]}$
1490\end{rust}
1491A slice can be explicitly converted into an array or reference to an array.
1492\begin{rust}
1493let slice: &[f64];
1494slice = &fa[0 ..= 1]; $\C{// create slice, [1.2, 42.2]}$
1495let mut fa2: [f64; 2] = @<[f64; 2]>::try_from( slice ).unwrap()@; $\C{// convert slice to array, [1.2, 42.2]}$
1496fa2 = @(& fa[2 ..= 3]).try_into().unwrap()@; $\C{// convert slice to array, [5.6, 7.3]}$
1497\end{rust}
1498
The @vec@ macro (using the @Vec@ type) provides dynamically-sized, dynamically-growable arrays, including arrays of arrays, allocated only on the heap (like the \CC @vector@ type).
1500\begin{cquote}
1501\setlength{\tabcolsep}{10pt}
1502\begin{tabular}{@{}ll@{}}
1503\multicolumn{1}{c}{Rust} &\multicolumn{1}{c}{\CC} \\
1504\begin{rust}
1505let (rows, cols) = (4, 6);
1506let mut matrix = vec![vec![0; cols]; rows];
1507... matrix[r][c] = r + c; // fill matrix
1508\end{rust}
1509&
1510\begin{c++}
1511int rows = 4, cols = 6;
1512vector<vector<int>> matrix( rows, vector<int>(cols) );
... matrix[r][c] = r + c; // fill matrix
1514\end{c++}
1515\end{tabular}
1516\end{cquote}
A dynamically-sized array is sliced the same as a fixed-size one.
1518\begin{rust}
1519let mut slice3: &[usize] = &matrix[3][2 ..= 4]; $\C{// [5, 6, 7]}$
1520slice3 = &matrix[1 ..= 3][1]; $\C{// [2, 3, 4, 5, 6, 7]}$
1521\end{rust}
1522Finally, to mitigate the restriction of matching array dimensions between argument and parameter types, it is possible for a parameter to be a slice.
1523\begin{rust}
1524fn zero( arr: @& mut [usize]@ ){
1525 for i in 0 .. arr.len() { arr[i] = 0; }
1526}
1527zero( & mut ia[0..5] ); // arbitrary sized slice
1528zero( & mut ia[0..3] );
1529\end{rust}
1530Or write a \emph{generic} function taking an array of fixed-element type and any size.
1531\begin{rust}
1532fn aprint<@const N: usize@>( arg: [usize; N] ) {
1533 for r in 0 .. arg.len() { print!( "{} ", arg[r] ); }
1534}
1535aprint( ia );
1536\end{rust}
1537
1538
1539\subsection{Java}
1540\label{JavaCompare}
1541
1542
Java arrays are references, so multi-dimensional arrays are arrays-of-arrays \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and dynamically sized array-parameter declarations.
1545\begin{cquote}
1546\begin{tabular}{rl}
1547C & @void f( size_t n, size_t m, float x[n][m] );@ \\
1548Java & @void f( float x[][] );@
1549\end{tabular}
1550\end{cquote}
1551However, in the C prototype, the parameters @n@ and @m@ are manually managed, \ie there is no guarantee they match with the argument matrix for parameter @x@.
1552\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
1553Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java is bounds checked, while C has none.
1555Both Java and C require explicit null checking, otherwise there is a runtime failure.
1556
1557\begin{figure}
1558\setlength{\tabcolsep}{15pt}
1559\begin{tabular}{ll@{}}
1560\begin{java}
1561int m[][] = { // triangular matrix
1562 new int [4],
1563 new int [3],
1564 new int [2],
1565 new int [1],
1566 null
1567};
1568
1569for ( int r = 0; r < m.length; r += 1 ) {
1570 if ( m[r] == null ) continue;
1571 for ( int c = 0; c < m[r].length; c += 1 ) {
1572 m[r][c] = c + r; // subscript checking
1573 }
1574
1575}
1576
1577for ( int r = 0; r < m.length; r += 1 ) {
1578 if ( m[r] == null ) {
1579 System.out.println( "null row" );
1580 continue;
1581 }
1582 for ( int c = 0; c < m[r].length; c += 1 ) {
1583 System.out.print( m[r][c] + " " );
1584 }
1585 System.out.println();
1586
1587}
1588\end{java}
1589&
1590\begin{cfa}
1591int * m[5] = { // triangular matrix
1592 calloc( 4, sizeof(int) ),
1593 calloc( 3, sizeof(int) ),
1594 calloc( 2, sizeof(int) ),
1595 calloc( 1, sizeof(int) ),
1596 NULL
1597};
1598int rlnth = 4;
1599for ( int r = 0; r < 5; r += 1 ) {
1600 if ( m[r] == NULL ) continue;
1601 for ( int c = 0; c < rlnth; c += 1 ) {
1602 m[r][c] = c + r; // no subscript checking
1603 }
1604 rlnth -= 1;
1605}
1606rlnth = 4;
1607for ( int r = 0; r < 5; r += 1 ) {
1608 if ( m[r] == NULL ) {
1609 printf( "null row\n" );
1610 continue;
1611 }
1612 for ( int c = 0; c < rlnth; c += 1 ) {
1613 printf( "%d ", m[r][c] );
1614 }
1615 printf( "\n" );
1616 rlnth -= 1;
1617}
1618\end{cfa}
1619\end{tabular}
1620\caption{Java (left) \vs C (right) Triangular Matrix}
1621\label{f:JavaVsCTriangularMatrix}
1622\end{figure}
1623
The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
1625Furthermore, there is the cost of managing the implicit array descriptor.
1626It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.
1627
1628
1629\subsection{\CC}
1630\label{CppCompare}
1631
1632Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
1633While the vector size can grow and shrink dynamically, \vs an unchanging dynamic size with VLAs, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration.
So, it costs one upfront dynamic allocation and avoids growing the array through pushing.
1635\begin{c++}
1636vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
1637\end{c++}
1638Multidimensional arrays are arrays-of-arrays with associated costs.
Each @vector@ array has an array descriptor containing the dimension, which allows bound-checked access using @x.at(i)@ in place of @x[i]@.
1640Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, preventing costly reallocate-and-copy, and never invalidating references to contained values.
1641
1642This scheme matches a Java array's safety and expressiveness exactly, with the same inherent costs.
1643Notably, a \CC vector of vectors does not provide contiguous element storage (even when upfront allocation is done carefully) because a vector puts its elements in an auxiliary allocation.
1644So, in spite of @vector< vector< int > >@ appearing to be ``everything by value,'' it is still a first array whose elements include pointers to further arrays.
1645
1646\CC has options for working with memory-adjacent data in desirable ways, particularly in recent revisions.
1647But none makes an allocation with a dynamically given fixed size less awkward than the vector arrangement just described.
1648
1649\CC~26 adds @std::inplace_vector@, which provides an interesting vector--array hybrid,\footnote{
1650 Like a vector, it lets a user construct elements in a loop, rather than imposing uniform construction.
	Yet it preserves \lstinline{std::array}'s ability to be entirely stack-allocated, by avoiding an auxiliary element allocation.
1652} but does not change the fundamental limit of \lstinline{std::array}, that the length, being a template parameter, must be a static value.
1653
1654\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bound checking.
1655When wrapping a vector, bound checking occurs on regular subscripting; one needn't remember to use @.at@.
1656When wrapping a locally declared fixed-size array, bound communication is implicit.
1657But it has a soundness gap by offering construction from pointer and user-given length.
1658
1659\CC~23's @std::mdspan@ adds multidimensional indexing and reshaping capabilities analogous to those built into \CFA's @array@.
1660Like @span@, it works over multiple underlying container types.
1661But neither @span@ nor @mdspan@ augments the available allocation options.
1662
1663Thus, these options do not offer an allocation with a dynamically given fixed size.
1664And furthermore, they do not provide any improvement to the C flexible array member pattern, for making a dynamic amount of storage contiguous with its header, as do \CFA's accordions.
1665
1666\subsection{Levels of Dependently Typed Arrays}
1667
1668TODO: fix the position; checked c does not do linear types
1669\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
1671These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
1672\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
1673
1674\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
1675Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.
1677
The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve expressiveness over C and Java, accommodating examples such as:
1679\begin{itemize}[leftmargin=*]
1680\item a \emph{zip}-style operation that consumes two arrays of equal length
1681\item a \emph{map}-style operation whose produced length matches the consumed length
1682\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
1683\end{itemize}
Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and ability to opt out.
1685Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
1686
1687Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.
1688In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
1690In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
1691[TODO: clarify how even Idris type checking terminates]
1692
Idris is a current, general-purpose, dependently typed programming language.
Length checking is a common benchmark for full dependent type systems.
Here, the capability being considered is tracking lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
[TODO: finish explaining what Data.Vect is and then the essence of the comparison]

POINTS:
here is how our basic checks look (on a system that does not have to compromise);
it can also do these other cool checks, but watch how I can mess with its conservativeness and termination

Two contemporary array-centric languages, Dex~\cite{arr:dex:long} and Futhark~\cite{arr:futhark:tytheory}, contribute similar, restricted dependent types for tracking array length.
Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.
So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.

The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven with reference to an operational semantics.
There is a particular emphasis on an existential type, enabling callee-determined return shapes.

Dex uses an Ada-style conception of size, embedding quantitative information completely into an ordinary type.

Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.

\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet no objects of type @[N]@ occur.
Belonging to the type system means @N@ is inferred at a call site and communicated implicitly, as in Dex and unlike in Futhark.
Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.

If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain need only satisfy certain structural assumptions, while in C, \CC, Java, Futhark, and \CFA today, the domain is a prefix of the natural numbers.
The Ada--Dex generality has aesthetic benefits for certain programmers, such as those scheduling resources to weekdays, or those who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bound checks.
[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, and now seeks confirmation.
To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is valid only when it can be statically verified to be in bounds (and so the ordinary subscript is not dynamically checked), while an explicit dynamic check remains available when the static criterion is impractical to meet.

[TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
get_size n : Unit -> Int
ordinal : n -> Int
unsafe_from_ordinal n : Int -> n
\end{cfa}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
get_size = \(). size a * size b
ordinal = \(i, j). (ordinal i * size b) + ordinal j
unsafe_from_ordinal = \o.
bs = size b
(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
% fix mike's syntax highlighter
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]

In the case of adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing this adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Static Safety in C Extensions}


\section{Future Work}

\subsection{Array Syntax}

An important goal is to recast the @array(...)@ syntax into C-style @[]@.
The proposal (which is partially implemented) is an alternate dimension and subscript syntax.
C's multi-dimension declaration and subscripting syntax uses multiple brackets.
\begin{cfa}
int m@[2][3]@; // dimension
m@[0][1]@ = 3; // subscript
\end{cfa}
Leveraging this syntax, the following (simpler) syntax should be intuitive to C programmers, with only a small backwards-compatibility issue.
\begin{cfa}
int m@[2, 3]@; // dimension
m@[0, 1]@ = 3; // subscript
\end{cfa}
However, the new subscript syntax is not backwards compatible, as @0, 1@ is a comma expression.
Interestingly, disallowing the comma expression in this context eliminates an unreported error: subscripting a matrix with @m[i, j]@ instead of @m[i][j]@, which selects the @j@th row, not the @i, j@ element.
Hence, a comma expression in this context is rare.
Finally, it is possible to write @m[(i, j)]@ in the new syntax to achieve the equivalent of the old @m[i, j]@.
Note, the new subscript syntax can easily be lowered internally to @[-][-]...@ and handled as regular subscripting.
The only ambiguity with C syntax is for a single-dimension array, where the old and new syntax are the same.
\begin{cfa}
int m[2@,@]; // single dimension
m[0] = 3; // subscript
\end{cfa}
The solution for the dimension is to use a terminating comma to denote a new single-dimension array.
This syntactic form is also used for the (rare) singleton tuple @[y@{\color{red},}@]@.
The extra comma in the dimension is only mildly annoying, and acts as eye candy differentiating old and new arrays.
The subscript operator is not an issue, as overloading selects the correct single-dimension operation for old/new array types.
The ultimate goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs and eliminating the need for the terminating comma.

\subsection{Range Slicing}

\subsection{With a Module System}


\subsection{Retire Pointer Arithmetic}


\begin{comment}
\section{\texorpdfstring{\CFA}{Cforall}}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: That's what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\texorpdfstring{\CFA}{Cforall} Features Interacting with Arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild,
work, if this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function,
where type information associated with a polymorphic return type
replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return compiles into a hidden parameter, which receives a compiler-generated argument.
The compiler's argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.

...someday... fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example, and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@.
They are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' works at resolving this issue. ...someday... discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same ways as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
...someday... xref C standard does not claim that @ar1@ may be subscripted,
because no stage of interpreting the construction of @ar1@ has it be that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls,
where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
...someday... see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

...someday... I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?
\end{comment}