\chapter{Array}
\label{c:Array}


\section{Introduction}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array-type to the \CFA standard library~\cite{Cforall}.

Specifically, a new \CFA array is declared:
\begin{cfa}
@array( float, 99 )@ x;	$\C[2.75in]{// x contains 99 floats}$
\end{cfa}
using generic type @array@ with arguments @float@ and @99@.
A function @f@ is declared with an @array@ parameter of length @42@.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {}	$\C{// p accepts 42 floats}$
f( x );	$\C{// statically rejected: types are different, 99 != 42}$

test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
The call @f( x )@ is invalid because the @array@ lengths @99@ and @42@ do not match.

Next, function @g@ introduces a @forall@ prefix on type parameter @T@ and an arbitrary \emph{dimension parameter} @N@, the new feature that represents a count of elements managed by the type system.
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
	T elem = p[i];	$\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 );	$\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 );	$\C{// T is float, N is 99, dynamic subscript check fails}\CRT$

Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
The call @g( x, 0 )@ is valid because @g@ accepts any length of array, where the type system infers @float@ for @T@ and length @99@ for @N@.
Inferring values for @T@ and @N@ is implicit, without programmer involvement.
Furthermore, the runtime subscript @x[0]@ (parameter @i@ is @0@) in @g@ is valid because @0@ is in the dimension range $[0,99)$ of argument @x@.
The call @g( x, 1000 )@ is also valid;
however, the runtime subscript @x[1000]@ is invalid (generates a subscript error) because @1000@ is outside the dimension range $[0,99)$ of argument @x@.

The generic @array@ type is similar to the C array type, which \CFA inherits from C.
Its runtime characteristics are often identical, and some features are available in both.
For example, assume a caller instantiates @N@ with 42 in the following (details to follow).
\begin{cfa}
forall( [N] )
void declDemo() {
	float x1[N];	$\C{// built-in type ("C array")}$
	array(float, N) x2;	$\C{// type from library}$
}
\end{cfa}
Both of the locally-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout; each encapsulates a bare 42-@float@ allocation, stack or heap, with no additional ``bookkeeping'' allocations or headers.
Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, efficient (space and time) array-type, which forms the foundation for more complex array forms in \CFA.

Admittedly, the @array@ library type (the type of @x2@) is syntactically different from its C counterpart.
A future goal (TODO xref) is to provide a built-in array type with syntax approaching C's (the type of @x1@);
then, the library @array@ type can be removed, giving \CFA a largely uniform array type.
At present, the C syntax @array@ is only partially supported, so the generic @array@ is used exclusively in the discussion;
feature support and C compatibility are revisited in Section ? TODO.

Offering the @array@ type as a distinct alternative to the C array is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity introduced by the C array TODO xref.

My contributions in this chapter are:
\begin{enumerate}
\item A type-system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
\item A length-checked array-type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Argument/parameter passing safety for arrays and subscript safety.
\item TODO: general parking...
\item Identification of the interesting specific abilities available from the new @array@ type.
\item Identification of specific workable improvements that are likely to close any gaps in this feature's readiness for general use.
\end{enumerate}


\section{Definitions and design considerations}


\subsection{Dependent typing}


\section{Features added}

This section presents motivating examples for the new array type, demonstrating the syntax and semantics of the generic @array@.
As stated, the core capability of the new array is tracking all dimensions in the type system, where dynamic dimensions are represented using type variables.

The definition of type variables preceding object declarations makes the array dimension lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return.
\lstinput{10-17}{hello-array.cfa}
Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array by @alloc@ uses the parameterized dimension information in its implicit @_Alignof@ and @sizeof@ determinations, and in casting the return type.
\begin{cfa}
static inline forall( T & | sized(T) )
T * alloc( size_t dim ) {
	if ( _Alignof(T) <= libAlign() ) return (T *)aalloc( dim, sizeof(T) ); // calloc without zero fill
	else return (T *)amemalign( _Alignof(T), dim, sizeof(T) ); // array memalign
}
\end{cfa}
Here, the type system deduces from the left-hand side of the assignment the type @array(bool, N)@, and forwards it as the type variable @T@ for the generic @alloc@ function, making it available in its body.
Hence, preexisting \CFA behaviour is leveraged here, both in the return-type polymorphism and in the @sized(T)@-aware standard-library @alloc@ routine.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing the necessary @sized@ assertions needed by other types.
(@sized@ implies a concrete \vs abstract type with a compile-time size.)
As a result, there is significant programming safety by making the size accessible and implicit, compared with C's @calloc@ and the non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.

\begin{figure}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{\lstinline{f} Harness}
\label{f:fHarness}
\end{figure}

\VRef[Figure]{f:fHarness} shows a harness that uses function @f@ to illustrate how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by the constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-thru use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie the explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
These benefits cannot be overstated.

In general, the @forall( ..., [N] )@ participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and within a function.
The syntactic form is chosen to parallel other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ...	$\C[1.5in]{// array kind}$
forall( T & ) ...	$\C{// reference kind (dtype)}$
forall( T ) ...	$\C{// value kind (otype)}\CRT$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.
In summary:
\begin{itemize}
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
The type of @N@ within code is @size_t@.
\item
The value of @N@ within code is the acquired length derived from the usage site, \ie generic declaration or function call.
\item
@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects.
\end{itemize}

\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ is a compile-time value, while the \CFA @N@ is a runtime value.
\item
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and passed implicitly at the call.
\item
\CC cannot have an array of references, but can have an array of pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
\item
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
\end{enumerate}

\begin{figure}
\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
	for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
int main() {
	int ret[10], x[10];
	for ( int i = 0; i < 10; i += 1 ) x[i] = i;
	@copy<int, 10 >( ret, x );@
	for ( int i = 0; i < 10; i += 1 )
		cout << ret[i] << ' ';
	cout << endl;
}
\end{c++}
&
\begin{cfa}
int main() {
	@forall( T, [N] )@ // nested function
	void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
		for ( i; N ) ret[i] = x[i];
	}

	array( int, 10 ) ret, x;
	for ( i; 10 ) x[i] = i;
	@copy( ret, x );@
	for ( i; 10 )
		sout | ret[i] | nonl;
	sout | nl;
}
\end{cfa}
\end{tabular}
\caption{\CC \lstinline[language=C++]{template} \vs \CFA generic type }
\label{f:TemplateVsGenericType}
\end{figure}

Continuing the discussion of \VRef[Figure]{f:fHarness}, the example has @f@ expecting two arrays of the same length.
As stated previously, a compile-time error occurs when attempting to call @f@ with arrays of differing lengths.
% removing leading whitespace
\lstinput[tabsize=1]{52-53}{hello-array.cfa}
\lstinput[tabsize=1,aboveskip=0pt]{62-64}{hello-array.cfa}
C allows casting to assert knowledge not shared with the type system.
\lstinput{70-74}{hello-array.cfa}

Orthogonally, the new @array@ type works with \CFA's generic types, providing argument safety and the associated implicit communication of array length.
Specifically, \CFA allows aggregate types to be generalized with multiple type parameters, including parameterized element types and lengths.
Doing so gives a refinement of C's ``flexible array member'' pattern, allowing structures with array members to be nested anywhere within other structures.
\lstinput{10-15}{hello-accordion.cfa}
This structure's layout has the starting offset of @studentIds@ varying with generic parameter @C@, and the offset of @preferences@ varying with both generic parameters.
For a function that operates on a @School@ structure, the type system handles this memory layout transparently.
\lstinput{40-45}{hello-accordion.cfa}
\VRef[Figure]{f:checkHarness} shows the @School@ harness and results with different array sizes, where multidimensional arrays are discussed next.

\begin{figure}
% super hack to get this to line up
\begin{tabular}{@{}ll@{\hspace{25pt}}l@{}}
\begin{tabular}{@{}p{3.25in}@{}}
\lstinput{60-66}{hello-accordion.cfa}
\vspace*{-3pt}
\lstinput{73-80}{hello-accordion.cfa}
\end{tabular}
&
\raisebox{0.32\totalheight}{%
\lstinput{85-93}{hello-accordion.cfa}
}%
&
\lstinput{95-109}{hello-accordion.cfa}
\end{tabular}
\caption{\lstinline{school} Harness and Output}
\label{f:checkHarness}
\end{figure}


\section{Typing of C Arrays}

Essential to the guarantee of accurate length is the compiler's ability to reject a program that mishandles length.
By contrast, most discussion so far dealt with communicating length from one party who knows it to another who is willing to work with any given length.
For scenarios where the concern is a mishandled length, the interaction is between two parties who both claim to know (something about) it.
Such a scenario occurs in this pure C fragment, which today's C compilers accept:
\begin{cfa}
	int n = @42@;
	float x[n];
	float (*xp)[@999@] = &x;
	(*xp)[@500@]; // in "bounds"?
\end{cfa}

Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bounds of its referent @x@, the access appears in bounds according to the type information available on @xp@.
Truly, length is being mishandled in the previous step, where the type-carried length information on @x@ is not compatible with that of @xp@.

The \CFA new-array rejects the analogous case:
\begin{cfa}
	int n = @42@;
	array(float, n) x;
	array(float, 999) * xp = &x; // static rejection here
	(*xp)[@500@]; // runtime check vs len 999
\end{cfa}

% TODO: kill the vertical whitespace around these lists
% nothing from https://stackoverflow.com/questions/1061112/eliminate-space-before-beginitemize is working

The way the \CFA array is implemented, the type analysis of this \CFA case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
\item
	Is @array(float, 999)@ type-compatible with @array(float, n)@?
\item
	Is @arrayX(float, char[999])@ type-compatible with @arrayX(float, char[n])@?
	\footnote{Here, \lstinline{arrayX} represents the type that results
	from desugaring the \lstinline{array} type
	into a type whose generic parameters are all types.
	This presentation elides the noisy fact that
	\lstinline{array} is actually a macro for something bigger;
	the reduction to \lstinline{char[-]} still proceeds as sketched.}
\item
	Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}

I chose to achieve the necessary rejection of the \CFA case by adding a rejection of the corresponding C case.

This decision is not backward compatible.
There are two complementary mitigations for this incompatibility.

First, a simple recourse is available to a programmer who intends to proceed with the statically unsound assignment.
This situation might arise if @n@ were known to be 999, rather than 42, as in the introductory examples.
The programmer can add a cast, as in:
\begin{cfa}
	xp = ( float (*)[999] ) & x;
\end{cfa}
This addition causes \CFA to accept, because now the programmer has accepted blame.
The addition is benign in plain C, because the cast is valid, just unnecessary, there.
Moreover, the addition can even be seen as appropriate ``eye candy,'' marking where unchecked length knowledge is used.
Therefore, a program being onboarded to \CFA can receive a simple upgrade to satisfy the \CFA rules (and arguably become clearer), without giving up its validity to a plain C compiler.

Second, the incompatibility only affects types like pointer-to-array, which are infrequently used in C.
The more common C idiom for aliasing an array is to use the pointer-to-first-element type, which does not participate in the \CFA array's length checking.%
\footnote{Notably, the desugaring of the \lstinline{array} type
	avoids letting any \lstinline{-[-]} type decay,
	in order to preserve the length information that powers runtime bounds checking.}
Therefore, the frequency of needing to upgrade wild C code (as discussed in the first mitigation) is anticipated to be low.

Because the incompatibility represents a low cost to a \CFA onboarding effort (with a plausible side benefit of linting the original code for a missing annotation), I elected not to add special measures to retain compatibility.
It would be possible to flag occurrences of @-[-]@ types that come from @array@ desugaring, treating those with the stricter \CFA rules, while treating others with the classic C rules.
If future lessons from C project onboarding warrant it, this special compatibility measure can be added.

Having established that both the initial C example's check
\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
	\item
	Is @float[999]@ type-compatible with @float[n]@?
\end{itemize}
and the second \CFA example's induced check
\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
	\item
	Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}
have the same answer (``no''), the discussion turns to how I got the \CFA compiler to produce this answer.
In its preexisting form, it produced a (buggy) approximation of the C rules.
To implement the new \CFA rules, I took the syntactic recursion a step further, obtaining, in both cases:
\begin{itemize}[noitemsep,partopsep=-\parskip,parsep=0pt,leftmargin=4em]
	\item
	Is @999@ TBD-compatible with @n@?
\end{itemize}
This compatibility question applies to a pair of expressions, where the earlier questions applied to types.
Such an expression-compatibility question is a new addition to the \CFA compiler.
These questions only arise in the context of dimension expressions on (C) array types.

TODO: ensure these compiler implementation matters are treated under \CFA compiler background:
type unification,
cost calculation,
GenPoly.

The relevant technical component of the \CFA compiler is, within the type resolver, the type-unification procedure.
I added rules for continuing this unification into expressions that occur within types.
It is still fundamentally doing \emph{type} unification because it is participating in binding type variables, and not participating in binding any variables that stand in for expression fragments (for there is no such sort of variable in \CFA's analysis).

An unfortunate fact about the \CFA compiler's preexisting implementation is that type unification suffers from two forms of duplication.

The first duplication has (many of) the unification rules stated twice.
As a result, my additions for dimension expressions are stated twice.
The extra statement of the rules occurs in the GenPoly module, where concrete types like @array(int, 5)@\footnote{
	Again, the presentation is simplified by leaving the \lstinline{array} macro unexpanded.}
are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
In this case, the struct's definition gives fields that hardcode the argument values of @int@ and @5@.
The next time an @array(-,-)@ concrete instance is encountered, is the previous @struct __conc_array_1234@ suitable for it?
Yes, for another occurrence of @array(int, 5)@;
no, for either @array(rational(int), 5)@ or @array(int, 42)@.
By the last example, this phase must ``reject'' the hypothesis that it should reuse the dimension-5 instance's C-lowering for a dimension-42 instance.

The second duplication has unification (proper) being invoked at two stages of expression resolution.
As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
In the program
\begin{cfa}
	void f( double );
	forall( T & ) void f( T & );
	void g( int n ) {
		array( float, n + 1 ) x;
		f(x);
	}
\end{cfa}
when resolving the function call, the first unification stage compares the type @T@, of the parameter, with @array( float, n + 1 )@, of the argument.
TODO: finish.

The actual rules for comparing two dimension expressions are conservative.
To answer, ``yes, consider this pair of expressions to be matching,'' is to imply, ``all else being equal, allow an array with length calculated by $e_1$ to be passed to a function expecting a length-$e_2$ array.''\footnote{
	TODO: Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
	Should it be an earlier scoping principle?  Feels like it should matter in more places than here.}
So, a ``yes'' answer must represent a guarantee that both expressions will evaluate to the same result, while a ``no'' can tolerate ``they might, but we're not sure,'' provided that practical recourses are available to let programmers express their better knowledge.
The specific rule-set that I offer with the current release is, in fact, extremely conservative.
I chose to keep things simple, and allow real future needs to drive the addition of complexity, within the framework that I laid out.

For starters, the original motivating example's rejection is not based on knowledge that the @xp@ length of (the literal) 999 is value-unequal to the (obvious) runtime value of the variable @n@, which is the @x@ length.
Rather, the analysis assumes a variable's value can be anything, and so there can be no guarantee that its value is 999.
So, a variable use and a literal can never match.

Two occurrences of the same literal value are obviously a fine match.
For two occurrences of the same variable, more information is needed.
For example, this one is fine
\begin{cfa}
	void f( const int n ) {
		float x[n];
		float (*xp)[n] = &x; // accept
	}
\end{cfa}
while this one is not:
\begin{cfa}
	void f() {
		int n = 42;
		float x[n];
		n = 999;
		float (*xp)[n] = &x; // reject
	}
\end{cfa}
Furthermore, the fact that the first example sees @n@ as @const@ is not actually a sufficient basis.
In this example, @f@'s length expression's declaration is as @const@ as it can be, yet its value still changes between the two invocations:
\begin{cfa}
	// compile unit 1
	void g();
	void f( const int & const nr ) {
		float x[nr];
		g();
		float (*xp)[nr] = &x; // reject
	}
	// compile unit 2
	static int n = 42;
	void g() {
		n = 99;
	}
	void f( const int & );
	int main () {
		f(n);
		return 0;
	}
\end{cfa}
The issue in this last case is that just because one party cannot change a value does not mean another party cannot.

My rule set also respects a feature of the C tradition.
In spite of the several limitations of the C rules, which accept cases that produce different values, there are a few mismatches that C stops.
C is quite precise when working with two static values:
\begin{cfa}
	enum { fortytwo = 42 };
	float x[fortytwo];
	float (*xp1)[42] = &x; // accept
	float (*xp2)[999] = &x; // reject
\end{cfa}
My \CFA rules agree with C's on these cases.

My rules classify expressions into three groups:
\begin{description}
\item[Statically Evaluable]
	Expressions for which a specific value can be calculated (conservatively) at compile-time.
	A preexisting \CFA compiler module defines which expressions qualify, and evaluates them.
	Includes literals and enumeration values.
\item[Dynamic but Stable]
	The value of a variable declared as @const@.
	Includes a @const@ parameter.
\item[Potentially Unstable]
	The catch-all category.  Notable examples include:
	any function-call result (@float x[foo()];@),
	the particular function-call result that is a pointer dereference (@void f(const int * n) { float x[*n]; }@), and
	any use of a reference-typed variable.
\end{description}

My \CFA rules are:
\begin{itemize}
\item
	Accept a Statically Evaluable pair, if both expressions have the same value.
	Notably, this rule allows a literal to match with an enumeration value, based on the value.
\item
	Accept a Dynamic but Stable pair, if both expressions are written out the same, \eg both refer to the same variable declaration.
\item
	Otherwise, reject.
	Notably, reject all pairs from the Potentially Unstable group.
	Notably, reject all pairs that cross groups.
\end{itemize}

TODO: summarize the C rules and add the case-comparison table

TODO: Discuss Recourse

TODO: Discuss dimension hoisting, which addresses the challenge of extra unification for cost calculation

\section{Multidimensional Arrays}
\label{toc:mdimpl}

% TODO: introduce multidimensional array feature and approaches

When working with arrays, \eg in linear algebra, array dimensions are referred to as ``rows'' and ``columns'' for a matrix, adding ``planes'' for a cube.
(There is little terminology for higher-dimensional arrays.)
For example, an acrostic poem\footnote{A type of poetry where the first, last or other letters in a line spell out a particular word or phrase in a vertical column.} can be treated as a grid of characters, where the rows are the text and the columns are the embedded keyword(s).
Within a poem, there is the concept of a \newterm{slice}, \eg a row is a slice of the poem text, a column is a slice of a keyword.
In general, the dimensioning and subscripting for multidimensional arrays has two syntactic forms: @m[r,c]@ or @m[r][c]@.

Commonly, an array, matrix, or cube is visualized (especially in mathematics) as a contiguous row, rectangle, or block.
This conceptualization is reinforced by subscript ordering, \eg $m_{r,c}$ for a matrix and $c_{p,r,c}$ for a cube.
Few programming languages differ from this mathematical subscript ordering.
However, computer memory is flat, and hence, array forms are structured in memory as appropriate for the runtime system.
The closest representation to the conceptual visualization is for an array object to be contiguous, where the language structures this memory using pointer arithmetic to access the values under the various subscripts.
This approach still has degrees of layout freedom, such as row- or column-major order, \ie juxtaposed rows or columns in memory, even when the subscript order remains fixed.
For example, programming languages like MATLAB, Fortran, Julia and R store matrices in column-major order, since they are commonly used for processing column vectors in tabular data sets, but retain row-major subscripting.
In general, storage layout is hidden by subscripting, and only appears when passing arrays among different programming languages or accessing specific hardware.

\VRef[Figure]{f:FixedVariable} shows two C90 approaches for manipulating a contiguous matrix.
Note, C90 does not support VLAs.
The fixed-dimension approach (left) uses the type system;
however, it requires all dimensions except the first to be specified at compile time, \eg @m[][6]@, allowing all subscripting stride calculations to be generated with constants.
Hence, every matrix passed to @fp1@ must have exactly 6 columns, but the row size can vary.
The variable-dimension approach (right) ignores (violates) the type system, \ie argument and parameter types do not match, and subscripting is performed manually using pointer arithmetic in the macro @sub@, using the @cols@ parameter as the row stride.

\begin{figure}
\begin{tabular}{@{}l@{\hspace{40pt}}l@{}}
\multicolumn{1}{c}{\textbf{Fixed Dimension}} & \multicolumn{1}{c}{\textbf{Variable Dimension}} \\
\begin{cfa}

void fp1( int rows, int m[][@6@] ) {
	... printf( "%d ", @m[r][c]@ ); ...
}
int fm1[4][@6@], fm2[6][@6@]; // no VLA
// initialize matrixes
fp1( 4, fm1 ); // implicit 6 columns
fp1( 6, fm2 );
\end{cfa}
&
\begin{cfa}
#define sub( m, r, c ) *(m + r * cols + c)
void fp2( int rows, int cols, int *m ) {
	... printf( "%d ", @sub( m, r, c )@ ); ...
}
int vm1[@4@][@4@], vm2[@6@][@8@]; // no VLA
// initialize matrixes
fp2( 4, 4, vm1 );
fp2( 6, 8, vm2 );
\end{cfa}
\end{tabular}
\caption{C90 Fixed \vs Variable Contiguous Matrix Styles}
\label{f:FixedVariable}
\end{figure}

Many languages allow multidimensional arrays-of-arrays, \eg in Pascal or \CC.
\begin{cquote}
\begin{tabular}{@{}ll@{}}
\begin{pascal}
var m : array[0..4, 0..4] of Integer;  (* matrix *)
type AT = array[0..4] of Integer;  (* array type *)
type MT = array[0..4] of AT;  (* array of array type *)
var aa : MT;  (* array of array variable *)
m@[1][2]@ := 1;  aa@[1][2]@ := 1  (* same subscripting *)
\end{pascal}
&
\begin{c++}
int m[5][5];

typedef vector< vector<int> > MT;
MT vm( 5, vector<int>( 5 ) );
m@[1][2]@ = 1;  vm@[1][2]@ = 1;
\end{c++}
\end{tabular}
\end{cquote}
The language decides if the matrix and array-of-array are laid out the same or differently.
For example, an array-of-array may be an array of row pointers to arrays of columns, so the rows may not be contiguous in memory nor even the same length (triangular matrix).
Regardless, there is usually a uniform subscripting syntax masking the memory layout, even though a language could differentiate between the two forms using subscript syntax, \eg @m[1,2]@ \vs @aa[1][2]@.
Nevertheless, controlling memory layout can make a difference in what operations are allowed and in performance (caching/NUMA effects).
591
592C also provides non-contiguous arrays-of-arrays.
593\begin{cfa}
594int m[5][5]; $\C{// contiguous}$
595int * aa[5]; $\C{// non-contiguous}$
596\end{cfa}
597both with different memory layout using the same subscripting, and both with different degrees of issues.
598The focus of this work is on the contiguous multidimensional arrays in C.
599The reason is that programmers are often forced to use the more complex array-of-array form when a contiguous array would be simpler, faster, and safer.
600Nevertheless, the C array-of-array form is still important for special circumstances.
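The difference between the two forms can be made concrete with a short C sketch (the helper names @rm_offset@ and @rows_abut@ are mine, for illustration only):

```c
#include <stddef.h>

// Offset (in elements) of element (r, c) in a contiguous row-major
// matrix with the given number of columns: one multiply-add, no loads.
size_t rm_offset( size_t cols, size_t r, size_t c ) {
	return r * cols + c;
}

// Returns 1 when the second row of a contiguous C matrix starts exactly
// one row's worth of elements after the first, i.e. the rows abut in
// memory; an array-of-arrays makes no such guarantee.
int rows_abut( void ) {
	int m[2][3];
	return &m[1][0] == &m[0][0] + 3;
}
```

For the non-contiguous @int * aa[5]@ form, each @aa[r]@ is a separate allocation reached by a pointer chase, so no analogous offset formula exists.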
601
602\VRef[Figure]{f:ContiguousNon-contiguous} shows the extensions made in C99 for manipulating contiguous \vs non-contiguous arrays.\footnote{C90 also supported non-contiguous arrays.}
603First, VLAs are supported.
Second, for contiguous arrays, C99 allows one or more parameters to serve as downstream dimensions, \eg @cols@, implicitly using these parameters to compute the row stride of @m@.
605If the declaration of @fc@ is changed to:
606\begin{cfa}
607void fc( int rows, int cols, int m[@rows@][@cols@] ) ...
608\end{cfa}
609it is possible for C to perform bound checking across all subscripting, but it does not.
610While this contiguous-array capability is a step forward, it is still the programmer's responsibility to manually manage the number of dimensions and their sizes, both at the function definition and call sites.
611That is, the array does not automatically carry its structure and sizes for use in computing subscripts.
612While the non-contiguous style in @faa@ looks very similar to @fc@, the compiler only understands the unknown-sized array of row pointers, and it relies on the programmer to traverse the columns in a row correctly with a correctly bounded loop index.
613Specifically, there is no requirement that the rows are the same length, like a poem with different length lines.
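The contiguous C99 capability can be sketched as follows (the function names @fill@ and @total@ are hypothetical): the @cols@ parameter gives the compiler the row stride of @m@, so a subscript compiles to a displacement of @r * cols + c@ elements, but the programmer must still pass the correct sizes at every call.

```c
// The variably modified parameter carries the row stride for m[r][c].
void fill( int rows, int cols, int m[rows][cols] ) {
	for ( int r = 0; r < rows; r += 1 )
		for ( int c = 0; c < cols; c += 1 )
			m[r][c] = r * cols + c;    // each value records its own flat offset
}

// Summing relies on the same carried stride; no descriptor exists,
// so mismatched rows/cols arguments would silently misindex.
int total( int rows, int cols, int m[rows][cols] ) {
	int s = 0;
	for ( int r = 0; r < rows; r += 1 )
		for ( int c = 0; c < cols; c += 1 )
			s += m[r][c];
	return s;
}
```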
614
615\begin{figure}
616\begin{tabular}{@{}ll@{}}
617\multicolumn{1}{c}{\textbf{Contiguous}} & \multicolumn{1}{c}{\textbf{ Non-contiguous}} \\
618\begin{cfa}
619void fc( int rows, @int cols@, int m[ /* rows */ ][@cols@] ) {
620 ... printf( "%d ", @m[r][c]@ ); ...
621}
622int m@[5][5]@;
623for ( int r = 0; r < 5; r += 1 ) {
624
625 for ( int c = 0; c < 5; c += 1 )
626 m[r][c] = r + c;
627}
628fc( 5, 5, m );
629\end{cfa}
630&
631\begin{cfa}
632void faa( int rows, int cols, int * m[ @/* cols */@ ] ) {
633 ... printf( "%d ", @m[r][c]@ ); ...
634}
635int @* aa[5]@; // row pointers
636for ( int r = 0; r < 5; r += 1 ) {
637 @aa[r] = malloc( 5 * sizeof(int) );@ // create rows
638 for ( int c = 0; c < 5; c += 1 )
639 aa[r][c] = r + c;
640}
641faa( 5, 5, aa );
642\end{cfa}
643\end{tabular}
644\caption{C99 Contiguous \vs Non-contiguous Matrix Styles}
645\label{f:ContiguousNon-contiguous}
646\end{figure}
647
648
649\subsection{Multidimensional array implementation}
650
651A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
652\begin{enumerate}
653\item
654Flexible-stride memory:
this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane @c[3][*][*]@, a row @c[3][4][*]@, or a column @c[3][*][5]@.
656\item
657Fixed-stride memory:
658this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
659After which, subscripting and memory layout are independent.
660\item
661Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
663A programmer must manually build any notion of dimensions using other tools;
664hence, this style is not offering multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
665\end{enumerate}
666
There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than the numerical ordering given here.
668That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
669I chose to build the \CFA style-1 array upon a style-2 abstraction.
670(Justification of the decision follows, after the description of the design.)
671
672Style 3 is the inevitable target of any array implementation.
673The hardware offers this model to the C compiler, with bytes as the unit of displacement.
674C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
675Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.
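This flattening can be sketched in plain C (the name @flat_get@ is hypothetical): the cast to a flat view forgets the dimension structure, leaving the programmer to supply the displacement and any bound check manually.

```c
#include <assert.h>

// Style-3 access: view the matrix as a flat sequence and compute the
// row-major displacement by hand.
int flat_get( int rows, int cols, int m[rows][cols], int r, int c ) {
	assert( 0 <= r && r < rows && 0 <= c && c < cols ); // programmer's manual check
	int * flat = &m[0][0];         // same memory, dimensions forgotten
	return flat[ r * cols + c ];   // manual base-displacement addressing
}
```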
676
Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-2 system is the ability to slice a column from a column-finest matrix.
680The required memory shape of such a slice is fixed, before any discussion of implementation.
681The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
682TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?
683
684The new \CFA standard library @array@ datatype supports richer multidimensional features than C.
685The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
686Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.
687
688The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$, when stepping across the length-7 finely-strided column dimension, and stepping across the length-5 coarsely-strided row dimension.
689\par
690\mbox{\lstinput{121-126}{hello-md.cfa}}
691\par\noindent
692The memory layout is 35 contiguous elements with strictly increasing addresses.
693
694A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
695As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
696This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
697The following is an example slicing a row.
698\lstinput{60-64}{hello-md.cfa}
699\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}
700
701However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
702Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
703That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
704The new-array library provides the trait @ar@, so-defined.
705With it, the original declaration can be generalized with the same body.
706\lstinput{43-44}{hello-md.cfa}
707\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
708The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
In a multi-dimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value''; the result awaits such a value and implements the @ar@ trait.
710\lstinput{150-151}{hello-md.cfa}
711
The example shows @x[2]@ and @x[[2, all]]@ both refer to the same ``2.*'' slice.
713Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
714This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all''.
715That is:
716\begin{cquote}
717\begin{tabular}{@{}cccccl@{}}
718@x[[2,all]][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
719@x[[all,3]][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
720\end{tabular}
721\end{cquote}
722
723Narrating progress through each of the @-[-][-][-]@\footnote{
724The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
725expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
726Where @all@ is redundant:
727\begin{cquote}
728\begin{tabular}{@{}ll@{}}
729@x@ & 2-dimensional, want subscripts for coarse then fine \\
730@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
731@x[2][all]@ & 1-dimensional, want subscript for fine \\
732@x[2][all][3]@ & 0-dimensional; lock fine == 3
733\end{tabular}
734\end{cquote}
735Where @all@ is effective:
736\begin{cquote}
737\begin{tabular}{@{}ll@{}}
738@x@ & 2-dimensional, want subscripts for coarse then fine \\
739@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
740@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
741@x[all][3][2]@ & 0-dimensional; lock coarse == 2
742\end{tabular}
743\end{cquote}
744The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
745For example, in a two dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
746The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.
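This dequeue-enqueue semantics can be modeled concretely in C (the names @view@, @sub_all@, and @sub_i@ are mine, a toy model rather than the library implementation): a view is a base offset plus a queue of (length, stride) dimension descriptors, coarsest first.

```c
#include <assert.h>

typedef struct { int len, stride; } dim;
typedef struct { int base; int ndims; dim d[4]; } view;

// -[all]: dequeue the front dimension and re-enqueue it at the back,
// leaving the base offset unchanged (a conceptual transpose).
view sub_all( view v ) {
	dim front = v.d[0];
	for ( int k = 0; k < v.ndims - 1; k += 1 ) v.d[k] = v.d[k + 1];
	v.d[v.ndims - 1] = front;
	return v;
}

// -[i]: dequeue the front dimension and lock its value to i,
// folding i * stride into the base offset.
view sub_i( view v, int i ) {
	assert( 0 <= i && i < v.d[0].len );    // dynamic bound check
	v.base += i * v.d[0].stride;
	for ( int k = 0; k < v.ndims - 1; k += 1 ) v.d[k] = v.d[k + 1];
	v.ndims -= 1;
	return v;
}
```

For a 5-by-7 matrix (strides 7 and 1), both @x[2][3]@ and @x[all][3][2]@ reduce to the same offset, $2 \times 7 + 3$.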
747
Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
The @-[all]@ operation is a conversion from a reference to one instantiation of this type to a reference to another instantiation, at the same address but with different stride metadata.
750The running example's @all@-effective step, stated more concretely, is:
751\begin{cquote}
752\begin{tabular}{@{}ll@{}}
@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @float@s apart \\
@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
755\end{tabular}
756\end{cquote}
757
758\begin{figure}
759\includegraphics{measuring-like-layout}
760\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
761The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
762The vertical shaded band highlights the location of the targeted element, 2.3.
763Any such vertical slice contains various interpretations of a single address.}
764\label{fig:subscr-all}
765\end{figure}
766
767Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@.
768In both cases, value 2 selects from the coarser dimension (rows of @x@),
769while the value 3 selects from the finer dimension (columns of @x@).
770The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@.
771Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse.
772These two starting expressions, which are the example's only multidimensional subexpressions
773(those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select.
774
The figure's presentation offers an intuition for the question: what is an atomic element of @x[all]@?
776From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks.
An atom (like the bottommost value, @x[all][3][2]@) is the contained value (in the square box)
and a lie about its size (the left diagonal above it, growing upward).
779An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size;
780call such an array a column.
781A column is almost ready to be arranged into a matrix;
782it is the \emph{contained value} of the next-level building block, but another lie about size is required.
At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it were smaller (the left diagonal above it, shrinking upward).
These lying columns, arranged contiguously according to their size (as announced), form the matrix @x[all]@.
785Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
786Yet every time the programmer presents an index, a C-array subscript is achieving the offset calculation.
787
788In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected),
789the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats,
790compared with where analogous rows appear when the row-level option is presented for @x@.
791
792For example, in \lstinline{x[all]}, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6 (left diagonal).
793But only the atom 2.3 is storing its value there.
794The rest are lying about (conflicting) claims on this location, but never exercising these alleged claims.
795
796Lying is implemented as casting.
797The arrangement just described is implemented in the structure @arpk@.
798This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator.
799The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@.
800The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie.
801
802% Does x[all] have to lie too? The picture currently glosses over how it it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
803
804Casting, overlapping, and lying are unsafe.
805The mission is to implement a style-1 feature in the type system for safe use by a programmer.
806The offered style-1 system is allowed to be internally unsafe,
807just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic.
808Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices.
809
810% PAB: repeat from previous paragraph.
811% The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code.
812% My casting is unsafe, but I do not do any pointer arithmetic.
813% When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations.
814% If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own.
815
816% \noindent END: Paste looking for a home
817
818The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.
819The @arpk@ structure and its @-[i]@ operator are defined as:
820\begin{cfa}
821forall(
822 [N], $\C{// length of current dimension}$
823 S & | sized(S), $\C{// masquerading-as}$
824 Timmed &, $\C{// immediate element, often another array}$
825 Tbase & $\C{// base element, e.g. float, never array}$
826) { // distribute forall to each element
827 struct arpk {
828 S strides[N]; $\C{// so that sizeof(this) is N of S}$
829 };
830 // expose Timmed, stride by S
831 static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) {
832 subcheck( a, i, 0, N );
833 return (Timmed &)a.strides[i];
834 }
835}
836\end{cfa}
The private @arpk@ structure (array with explicit packing) is generic over four parameters: the current dimension length @N@, the masquerading-as stride type @S@, the immediate element type @Timmed@, and the base element type @Tbase@.
838This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator.
839Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.
840Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
842
843An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.
844In the base case, @array(E_base)@ is just @E_base@.
Because this construction uses the same value for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
846
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
Expressed as an operation on types, with @Timmed@ abbreviated as $E_i$ and @Tbase@ as $E_b$, this rotation is:
849\begin{eqnarray*}
850suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
851enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\
852enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b )
853\end{eqnarray*}
854
855
856\section{Bound checks, added and removed}
857
858\CFA array subscripting is protected with runtime bound checks.
With dependent typing, the optimizer can remove more of these bound checks than it otherwise could.
This section demonstrates the effect.
861
862The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the \CC pattern where restricted vector usage models a checked array.
863The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
864The experiment compares with the \CC version to keep access to generated assembly code simple.
865
866As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
867When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
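A minimal reconstruction of this control case can be sketched in C (the name @sum_checked@ and the explicit test are mine; the actual experiment uses the \CFA and \CC checked subscripts): a padded-room style check guards every subscript, and with the loop bound equal to @n@ an optimizer can prove $0 \le i < n$ and delete the test, while loosening the loop bound makes the test live again.

```c
#include <stdlib.h>

// Every subscript is guarded; under this exact loop bound the guard is
// provably dead and an optimizer can remove it.
int sum_checked( int n, const int a[n] ) {
	int s = 0;
	for ( int i = 0; i < n; i += 1 ) {
		if ( i < 0 || i >= n ) abort();  // bound check, dead here
		s += a[i];
	}
	return s;
}
```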
869
870TODO: paste source and assembly codes
871
872Incorporating reuse among dimension sizes is seen to give \CFA an advantage at being optimized.
873The case is naive matrix multiplication over a row-major encoding.
874
875TODO: paste source codes
876
877
878
879
880
881\section{Comparison with other arrays}
882
883
884\subsection{Rust}
885
886\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
888These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.
889\CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
890
891\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.
892Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.
894
895
896\subsection{Java}
897
898Java arrays are arrays-of-arrays because all objects are references \see{\VRef{toc:mdimpl}}.
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
900\begin{cquote}
901\begin{tabular}{rl}
902C & @void f( size_t n, size_t m, float x[n][m] );@ \\
903Java & @void f( float x[][] );@
904\end{tabular}
905\end{cquote}
However, in the C prototype, the parameters @n@ and @m@ are documentation only, indicating the intended sizes of the first and second dimensions of @x@.
907\VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C.
908Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@).
All subscripting in Java is bounds checked, while C has none.
910Both Java and C require explicit null checking, otherwise there is a runtime failure.
911
912\begin{figure}
913\setlength{\tabcolsep}{15pt}
914\begin{tabular}{ll@{}}
915\begin{java}
916int m[][] = { // triangular matrix
917 new int [4],
918 new int [3],
919 new int [2],
920 new int [1],
921 null
922};
923
924for ( int r = 0; r < m.length; r += 1 ) {
925 if ( m[r] == null ) continue;
926 for ( int c = 0; c < m[r].length; c += 1 ) {
927 m[r][c] = c + r; // subscript checking
928 }
929
930}
931
932for ( int r = 0; r < m.length; r += 1 ) {
933 if ( m[r] == null ) {
934 System.out.println( "null row" );
935 continue;
936 }
937 for ( int c = 0; c < m[r].length; c += 1 ) {
938 System.out.print( m[r][c] + " " );
939 }
940 System.out.println();
941
942}
943\end{java}
944&
945\begin{cfa}
946int * m[5] = { // triangular matrix
947 calloc( 4, sizeof(int) ),
948 calloc( 3, sizeof(int) ),
949 calloc( 2, sizeof(int) ),
950 calloc( 1, sizeof(int) ),
951 NULL
952};
953int rlnth = 4;
954for ( int r = 0; r < 5; r += 1 ) {
955 if ( m[r] == NULL ) continue;
956 for ( int c = 0; c < rlnth; c += 1 ) {
957 m[r][c] = c + r; // no subscript checking
958 }
959 rlnth -= 1;
960}
961rlnth = 4;
962for ( int r = 0; r < 5; r += 1 ) {
963 if ( m[r] == NULL ) {
964 printf( "null row\n" );
965 continue;
966 }
967 for ( int c = 0; c < rlnth; c += 1 ) {
968 printf( "%d ", m[r][c] );
969 }
970 printf( "\n" );
971 rlnth -= 1;
972}
973\end{cfa}
974\end{tabular}
975\caption{Java (left) \vs C (right) Triangular Matrix}
976\label{f:JavaVsCTriangularMatrix}
977\end{figure}
978
The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
980Furthermore, there is the cost of managing the implicit array descriptor.
981It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form.
982
983
984\subsection{\CC}
985
986Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
While a vector can grow and shrink dynamically, \vs the fixed dynamic size of a VLA, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration (one dynamic allocation per vector) to avoid using @push_back@.
988\begin{c++}
989vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
990\end{c++}
991Multidimensional arrays are arrays-of-arrays with associated costs.
Each @vector@ has an array descriptor containing its dimension, which allows bounds-checked access using @x.at(i)@ in place of @x[i]@.
Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, preventing costly reallocation and copying, and never invalidating references to contained values.
994This scheme matches Java's safety and expressiveness exactly, but with the inherent costs.
995
996
997\subsection{Levels of dependently typed arrays}
998
999The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as:
1000\begin{itemize}
1001\item a \emph{zip}-style operation that consumes two arrays of equal length
1002\item a \emph{map}-style operation whose produced length matches the consumed length
1003\item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
1004\end{itemize}
Across this field, such expressiveness is not just an available place to document these assumptions; the requirements are strongly guaranteed by default, with varying levels of static/dynamic checking and varying ability to opt out.
1006Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
1007
1008Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.
1009In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
1011In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.
1012[TODO: clarify how even Idris type checking terminates]
1013
1014Idris is a current, general-purpose dependently typed programming language.
1015Length checking is a common benchmark for full dependent type systems.
1016Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.
1017[TODO: finish explaining what Data.Vect is and then the essence of the comparison]
1018
1019POINTS:
1020here is how our basic checks look (on a system that does not have to compromise);
1021it can also do these other cool checks, but watch how I can mess with its conservativeness and termination
1022
1023Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.
1024Unlike \CFA, both are garbage-collected functional languages.
Because they are garbage collected, referential integrity is built-in, meaning that the heavyweight analysis, which \CFA aims to avoid, is unnecessary.
1026So, like \CFA, the checking in question is a lightweight bounds-only analysis.
Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
1028
1029
1030
1031The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics.
1032There is a particular emphasis on an existential type, enabling callee-determined return shapes.
1033
1034
1035Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.
1036
Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
1038Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
1039
1040\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.
1041Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.
1042Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark.
1043
1044\subsection{Static safety in C extensions}
1045
1046
1047\section{Future work}
1048
1049\subsection{Declaration syntax}
1050
1051\subsection{Range slicing}
1052
1053\subsection{With a module system}
1054
1055\subsection{With described enumerations}
1056
1057A project in \CFA's current portfolio will improve enumerations.
1058In the incumbent state, \CFA has C's enumerations, unmodified.
1059I will not discuss the core of this project, which has a tall mission already, to improve type safety, maintain appropriate C compatibility and offer more flexibility about storage use.
1060It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations:
1061\begin{cfa}
1062forall( T | is_enum(T) )
1063void show_in_context( T val ) {
1064 for( T i ) {
1065 string decorator = "";
1066 if ( i == val-1 ) decorator = "< ready";
1067 if ( i == val ) decorator = "< go" ;
1068 sout | i | decorator;
1069 }
1070}
1071enum weekday { mon, tue, wed = 500, thu, fri };
1072show_in_context( wed );
1073\end{cfa}
1074with output
1075\begin{cfa}
1076mon
1077tue < ready
1078wed < go
1079thu
1080fri
1081\end{cfa}
The details of this presentation are not meant to be taken precisely as suggestions for how the feature should look in \CFA.
1083But the example shows these abilities:
1084\begin{itemize}
1085\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
1086\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
1087\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
1088\item a provision for looping (the @for@ form used) over the values of the type.
1089\end{itemize}
1090
1091If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.
1092
1093[TODO: introduce Ada in the comparators]
1094
1095In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
1096The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.
1097
1098This change of perspective also lets us remove ubiquitous dynamic bound checks.
1099[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes dynamic checks are unnecessary in her code and seeks confirmation of that belief.
1101To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
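The proposed discipline can be sketched in C (the names @checked_get@ and @sum@ are hypothetical): ordinary subscripts are permitted only where statically in-bound, and an explicit checked accessor covers the remaining cases.

```c
#include <assert.h>

// Explicit, opt-in dynamic check for subscripts the compiler cannot
// verify statically.
int checked_get( int n, const int a[n], int i ) {
	assert( 0 <= i && i < n );
	return a[i];
}

// The loop bound keeps i statically within [0, n), so the ordinary
// (unchecked) subscript is valid without any dynamic test.
int sum( int n, const int a[n] ) {
	int s = 0;
	for ( int i = 0; i < n; i += 1 )
		s += a[i];
	return s;
}
```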
1102
1103[TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could be instantiated with an argument like @weekday@) providing two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
	get_size n : Unit -> Int
	ordinal : n -> Int
	unsafe_from_ordinal n : Int -> n
\end{cfa}
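
To make these conversions concrete, the following is a hypothetical C rendering of the same three operations for a @weekday@ enumeration; the names are illustrative only, and for this gap-free enumeration the ordinal coincides with the constant's value (with gaps, the conversions would need a lookup table).
\begin{cfa}
#include <assert.h>

// Hypothetical C rendering of Dex's Ix operations for a weekday
// enumeration: a size report plus two-way conversion with integers.
enum weekday { mon, tue, wed, thu, fri, sat, sun };

int weekday_size( void ) { return 7; }
int weekday_ordinal( enum weekday d ) { return (int)d; }
enum weekday weekday_from_ordinal( int o ) {
	assert( 0 <= o && o < weekday_size() );
	return (enum weekday)o;
}

int main( void ) {
	assert( weekday_ordinal( wed ) == 2 );
	assert( weekday_from_ordinal( 5 ) == sat );
	// Round trip: from_ordinal is the inverse of ordinal.
	for ( int o = 0; o < weekday_size(); o += 1 )
		assert( weekday_ordinal( weekday_from_ordinal( o ) ) == o );
	return 0;
}
\end{cfa}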

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over the number (and order) of dimensions a caller uses in the arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple type $\langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle$.
This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
	get_size = \(). size a * size b
	ordinal = \(i, j). (ordinal i * size b) + ordinal j
	unsafe_from_ordinal = \o.
		bs = size b
		(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]
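
The pair instance above linearizes a two-dimensional index in row-major order, $\mathit{ordinal}(i, j) = \mathit{ordinal}(i) \times \mathit{size}(b) + \mathit{ordinal}(j)$, with @unsafe_from_ordinal@ inverting that by division and remainder. A minimal C sketch of the same arithmetic (function names are illustrative, not from Dex or \CFA):
\begin{cfa}
#include <assert.h>

// Row-major pairing of two index spaces, mirroring Dex's built-in
// Ix instance for pairs: flatten (i, j) and invert the flattening.
int pair_ordinal( int i, int j, int size_b ) {
	return i * size_b + j;
}
void pair_from_ordinal( int o, int size_b, int * i, int * j ) {
	*i = o / size_b;  // idiv
	*j = o % size_b;  // rem
}

int main( void ) {
	int wd = 4, ht = 3;  // e.g. img_wd x img_ht
	for ( int i = 0; i < wd; i += 1 )
		for ( int j = 0; j < ht; j += 1 ) {
			int o = pair_ordinal( i, j, ht ), i2, j2;
			pair_from_ordinal( o, ht, &i2, &j2 );
			assert( 0 <= o && o < wd * ht );     // in range
			assert( i2 == i && j2 == j );        // round trip
		}
	return 0;
}
\end{cfa}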

In adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so it is likely that generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is sufficient to make a user-provided adapter unnecessary.

\subsection{Retire pointer arithmetic}


\section{\CFA}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: that is what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\CFA features interacting with arrays}

Prior work on \CFA made C arrays, as used in C code from the wild, work when fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose from feeding these ``C'' arrays into preexisting \CFA features.

A notable success was the \CFA @alloc@ function, in which type information associated with a polymorphic return type replaces @malloc@'s reliance on programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ polymorphic return type compiles into a hidden parameter, which receives a compiler-generated argument.
The compiler's argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.
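
The user-error opportunity that @alloc@ eliminates can be narrowed, though not removed, in plain C by the well-known @sizeof *p@ idiom, which derives the element size from the target pointer's declaration instead of restating the type; this is a sketch of the C workaround, not the \CFA mechanism.
\begin{cfa}
#include <assert.h>
#include <stdlib.h>
#include <time.h>

int main( void ) {
	// Restating the type invites error if el1's type later changes:
	struct tm * el1 = malloc( sizeof(struct tm) );
	// The sizeof-*p idiom states the type once, in the declaration:
	struct tm * el2 = malloc( sizeof *el2 );
	struct tm * ar1 = malloc( 10 * sizeof *ar1 );
	assert( el1 && el2 && ar1 );
	free( el1 ); free( el2 ); free( ar1 );
	return 0;
}
\end{cfa}
Unlike @alloc@, the idiom still relies on the programmer to write @*p@ against the correct pointer, so the error is narrowed rather than eliminated.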

TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' @ar2@ breaks a parallel with @ar1@: they are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' resolves this issue. TODO: discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.
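
The @ar3@ pattern has a plain-C analogue using a pointer to array, where @sizeof *ar@ already yields the whole array's size; what the \CFA reference adds is the direct @ar3[5]@ subscripting syntax. A sketch under C99:
\begin{cfa}
#include <assert.h>
#include <stdlib.h>

int main( void ) {
	// Pointer to array of 10 ints: sizeof *ar covers all 10 elements,
	// so the allocation size is implied by the declaration, as with alloc.
	int (*ar)[10] = malloc( sizeof *ar );
	assert( sizeof *ar == 10 * sizeof(int) );
	(*ar)[5] = 42;  // the extra dereference a reference type would hide
	assert( (*ar)[5] == 42 );
	free( ar );
	return 0;
}
\end{cfa}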

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
TODO xref: the C standard does not claim that @ar1@ may be subscripted, because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls, where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

TODO: I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?