\chapter{Array}
\label{c:Array}

Arrays in C are possibly the single most misunderstood and incorrectly used feature in the language, resulting in the largest proportion of runtime errors and security violations.
This chapter describes the new \CFA language and library features that introduce a length-checked array type, @array@, to the \CFA standard library~\cite{Cforall}.

Offering the @array@ type, as a distinct alternative to the C array, is consistent with \CFA's goal of backwards compatibility, \ie virtually all existing C (@gcc@) programs can be compiled by \CFA with only a small number of changes, similar to \CC (@g++@).
However, a few compatibility-breaking changes to the behaviour of the C array are necessary, both as an implementation convenience and to fix C's lax treatment of arrays.
Hence, the @array@ type is an opportunity to start from a clean slate and show a cohesive selection of features, making it unnecessary to deal with every inherited complexity of the C array.


\section{Introduction}
\label{s:ArrayIntro}

The new \CFA array is declared by instantiating the generic @array@ type,
much like instantiating any other standard-library generic type (such as \CC @vector@),
though using a new style of generic parameter.
\begin{cfa}
@array( float, 99 )@ x; $\C[2.75in]{// x contains 99 floats}$
\end{cfa}
Here, the arguments to the @array@ type are @float@ (element type) and @99@ (length).
When this type is used as a function parameter, the type system requires that a call's argument is an exact match.
\begin{cfa}
void f( @array( float, 42 )@ & p ) {} $\C{// p accepts 42 floats}$
f( x ); $\C{// statically rejected: type lengths are different, 99 != 42}$

test2.cfa:3:1 error: Invalid application of existing declaration(s) in expression.
Applying untyped: Name: f ... to: Name: x
\end{cfa}
Here, the function @f@'s parameter @p@ is declared with length 42.
However, the call @f( x )@ is invalid, because @x@'s length is @99@, which does not match @42@.

A function declaration can be polymorphic over these @array@ arguments by using the \CFA @forall@ declaration prefix.
\begin{cfa}
forall( T, @[N]@ )
void g( array( T, @N@ ) & p, int i ) {
	T elem = p[i]; $\C{// dynamically checked: requires 0 <= i < N}$
}
g( x, 0 ); $\C{// T is float, N is 99, dynamic subscript check succeeds}$
g( x, 1000 ); $\C{// T is float, N is 99, dynamic subscript check fails}\CRT$

Cforall Runtime error: subscript 1000 exceeds dimension range [0,99) $for$ array 0x555555558020.
\end{cfa}
Function @g@ takes an arbitrary type parameter @T@ and a \emph{dimension parameter} @N@.
A dimension parameter represents a to-be-determined count of elements, managed by the type system.
The call @g( x, 0 )@ is valid because @g@ accepts any length of array, where the type system implicitly infers @float@ for @T@ and length @99@ for @N@.
Furthermore, in this case, the runtime subscript @x[0]@ (parameter @i@ being @0@) in @g@ is valid because 0 is in the dimension range $[0,99)$ of argument @x@.
However, the call @g( x, 1000 )@ is also accepted at compile time;
this case's subscript, @x[1000]@, then generates a runtime error, because @1000@ is outside the dimension range $[0,99)$ of argument @x@.
In general, the @forall( ..., [N] )@ participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and within a function.
The syntactic form is chosen to parallel other @forall@ forms:
\begin{cfa}
forall( @[N]@ ) ... $\C[1.5in]{// dimension}$
forall( T ) ... $\C{// value datatype (formerly, "otype")}$
forall( T & ) ... $\C{// opaque datatype (formerly, "dtype")}\CRT$
\end{cfa}
% The notation @array(thing, N)@ is a single-dimensional case, giving a generic type instance.

The generic @array@ type is comparable to the C array type, which \CFA inherits from C.
Their runtime characteristics are often identical, and some features are available in both.
For example, assume a caller has an argument that instantiates @N@ with 42.
\begin{cfa}
forall( [N] )
void declDemo( ... ) {
	float x1[N]; $\C{// built-in type ("C array")}$
	array(float, N) x2; $\C{// type from library}$
}
\end{cfa}
Both of the locally-declared array variables, @x1@ and @x2@, have 42 elements, each element being a @float@.
The two variables have identical size and layout; they both encapsulate 42-float stack allocations, with no additional ``bookkeeping'' allocations or headers.
Providing this explicit generic approach requires a significant extension to the \CFA type system to support a full-featured, safe, efficient (space and time) array type, which forms the foundation for more complex array forms in \CFA.
In all following discussion, ``C array'' means types like that of @x1@ and ``\CFA array'' means the standard-library @array@ type (instantiations), like the type of @x2@.

Admittedly, the @array@ library type for @x2@ is syntactically different from its C counterpart.
A future goal (TODO xref) is to provide the new @array@ features with syntax approaching C's (declaration style of @x1@).
Then, the library @array@ type could be removed, giving \CFA a largely uniform array type.
At present, the C-syntax @array@ is only partially supported, so the generic @array@ is used exclusively in the thesis;
feature support and C compatibility are revisited in Section ? TODO.

My contributions in this chapter are:
\begin{enumerate}
\item A type-system enhancement that lets polymorphic functions and generic types be parameterized by a numeric value: @forall( [N] )@.
\item A length-checked array type in the \CFA standard library, where the array's length is statically managed and dynamically valued.
\item Argument/parameter passing safety for arrays and subscript safety.
\item TODO: general parking...
\item Identification of the interesting specific abilities available through the new @array@ type.
\item Where there is a gap concerning this feature's readiness for prime time, identification of specific workable improvements that are likely to close the gap.
\end{enumerate}


\section{Dependent typing}

General dependent typing allows the type system to encode arbitrary predicates (\eg behavioural specifications for functions),
which is an anti-goal for my work.
Firstly, this application is strongly associated with pure functional languages,
where a characterization of the return value (giving it a precise type, generally dependent upon the parameters)
is a sufficient postcondition.
In an imperative language like C and \CFA, it is also necessary to discuss side effects,
for which an even heavier formalism, like separation logic, is required.
Secondly, TODO: bash Rust.
TODO: cite the crap out of these claims.


\section{Features added}

This section shows more about using the \CFA array and dimension parameters, demonstrating their syntax and semantics by way of motivating examples.
As stated, the core capability of the new array is tracking all dimensions within the type system, where dynamic dimensions are represented using type variables.

By declaring type variables at the front of object declarations, an array dimension is lexically referenceable where it is needed.
For example, a declaration can share one length, @N@, among a pair of parameters and the return,
meaning that it requires both input arrays to be of the same length, and guarantees that the result is of that length as well.
\lstinput{10-17}{hello-array.cfa}
Function @f@ does a pointwise comparison of its two input arrays, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
The dynamic allocation of the @ret@ array, by the library @alloc@ function,
\begin{cfa}
forall( T & | sized(T) )
T * alloc() {
	return @(T *)@malloc( @sizeof(T)@ );
}
\end{cfa}
uses the parameterized dimension information implicitly within its @sizeof@ determination, and casts the return type.
Note that @alloc@ only sees one whole type for its @T@ (which is @f@'s @array(bool, N)@); this type's size is a computation based on @N@.
This example illustrates how the new @array@ type plugs into existing \CFA behaviour by implementing the necessary \emph{sized} assertions needed by other types.
(\emph{sized} implies a concrete \vs abstract type with a runtime-available size, exposed as @sizeof@.)
As a result, making the size accessible and implicit significantly improves programming safety, compared with C's @calloc@ and non-array-supporting @memalign@, which take an explicit length parameter not managed by the type system.

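For illustration, @f@'s use of @alloc@ can be sketched as follows; this is a sketch assuming the variable spelling, while the actual call appears in the listing above.
\begin{cfa}
array( bool, N ) & ret = *alloc(); $\C{// T inferred as array( bool, N ), so the malloc size covers N booleans}$
\end{cfa}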
\begin{figure}
\lstinput{30-43}{hello-array.cfa}
\lstinput{45-48}{hello-array.cfa}
\caption{\lstinline{f} Harness}
\label{f:fHarness}
\end{figure}

\VRef[Figure]{f:fHarness} shows a harness that uses function @f@, illustrating how dynamic values are fed into the @array@ type.
Here, the dimension of arrays @x@, @y@, and @result@ is specified from a command-line value, @dim@, and these arrays are allocated on the stack.
Then the @x@ array is initialized with decreasing values, and the @y@ array with amounts offset by constant @0.005@, giving relative differences within tolerance initially and diverging for later values.
The program main is run (see figure bottom) with inputs @5@ and @7@ for sequence lengths.
The loops follow the familiar pattern of using the variable @dim@ to iterate through the arrays.
Most importantly, the type system implicitly captures @dim@ at the call of @f@ and makes it available throughout @f@ as @N@.
The example shows @dim@ adapting into a type-system-managed length at the declarations of @x@, @y@, and @result@, @N@ adapting in the same way at @f@'s loop bound, and a pass-thru use of @dim@ at @f@'s declaration of @ret@.
Except for the lifetime-management issue of @result@, \ie explicit @free@, this program has eliminated both the syntactic and semantic problems associated with C arrays and their usage.
The result is a significant improvement in safety and usability.

In summary:
\begin{itemize}
\item
@[N]@ within a @forall@ declares the type variable @N@ to be a managed length.
\item
@N@ can be used as an expression of type @size_t@ within the declared function body.
\item
The value of an @N@-expression is the acquired length, derived from the usage site, \ie generic declaration or function call.
\item
@array( thing, N0, N1, ... )@ is a multi-dimensional type wrapping $\prod_i N_i$ adjacent occurrences of @thing@-typed objects.
\end{itemize}

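A small sketch combines these points (illustrative only; it assumes the usual nested subscripting for a two-dimensional @array@):
\begin{cfa}
forall( [R], [C] )
void zero( array( float, R, C ) & m ) { $\C{// R * C adjacent floats}$
	for ( r; R )
		for ( c; C )
			m[r][c] = 0.0f; $\C{// R and C usable as size\_t values}$
}
\end{cfa}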
\VRef[Figure]{f:TemplateVsGenericType} shows @N@ is not the same as a @size_t@ declaration in a \CC \lstinline[language=C++]{template}.
\begin{enumerate}[leftmargin=*]
\item
The \CC template @N@ can only be a compile-time value, while the \CFA @N@ may be a runtime value.
% agreed, though already said
\item
\CC does not allow a template function to be nested, while \CFA allows its polymorphic functions to be nested.
% why is this important?
\item
The \CC template @N@ must be passed explicitly at the call, unless @N@ has a default value, even when \CC can deduce the type of @T@.
The \CFA @N@ is part of the array type and passed implicitly at the call.
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v2
\item
\CC cannot have an array of references, but can have an array of pointers.
\CC has a (mistaken) belief that references are not objects, but pointers are objects.
In the \CC example, the arrays fall back on C arrays, which have a duality with references with respect to automatic dereferencing.
The \CFA array is a contiguous object with an address, which can be stored as a reference or pointer.
% not really about forall-N vs template-N
% any better CFA support is how Rob left references, not what Mike did to arrays
% https://stackoverflow.com/questions/1164266/why-are-arrays-of-references-illegal
% https://stackoverflow.com/questions/922360/why-cant-i-make-a-vector-of-references
\item
C/\CC arrays cannot be copied, while \CFA arrays can be copied, making them first-class objects (although array copy is often avoided for efficiency).
% fixed by comparing to std::array
% mycode/arrr/thesis-examples/check-peter/cs-cpp.cpp, v10
\end{enumerate}
TODO: settle Mike's concerns with this comparison (perhaps, remove)

\begin{figure}
\begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
\begin{c++}

@template< typename T, size_t N >@
void copy( T ret[@N@], T x[@N@] ) {
	for ( int i = 0; i < N; i += 1 ) ret[i] = x[i];
}
int main() {

	int ret[10], x[10];
	for ( int i = 0; i < 10; i += 1 ) x[i] = i;
	@copy<int, 10 >( ret, x );@
	for ( int i = 0; i < 10; i += 1 )
		cout << ret[i] << ' ';
	cout << endl;
}
\end{c++}
&
\begin{cfa}
int main() {
	@forall( T, [N] )@ // nested function
	void copy( array( T, @N@ ) & ret, array( T, @N@ ) & x ) {
		for ( i; N ) ret[i] = x[i];
	}

	const int n = promptForLength();
	array( int, n ) ret, x;
	for ( i; n ) x[i] = i;
	@copy( ret, x );@
	for ( i; n )
		sout | ret[i] | nonl;
	sout | nl;
}
\end{cfa}
\end{tabular}
\caption{\lstinline{N}-style parameters, for \CC template \vs \CFA generic type}
\label{f:TemplateVsGenericType}
\end{figure}

Just as the first example in \VRef[Section]{s:ArrayIntro} shows a compile-time rejection of a length mismatch,
so are length mismatches stopped when they involve dimension parameters.
While \VRef[Figure]{f:fHarness} shows successfully calling a function @f@ expecting two arrays of the same length,
\begin{cfa}
array( bool, N ) & f( array( float, N ) &, array( float, N ) & );
\end{cfa}
a static rejection occurs when attempting to call @f@ with arrays of differing lengths.
\lstinput[tabsize=1]{70-74}{hello-array.cfa}
When the argument lengths themselves are statically unknown,
the static check is conservative and, as always, \CFA's casting lets the programmer use knowledge not shared with the type system.
\begin{tabular}{@{\hspace{0.5in}}l@{\hspace{1in}}l@{}}
\lstinput{90-97}{hello-array.cfa}
&
\lstinput{110-117}{hello-array.cfa}
\end{tabular}

\noindent
This static check's full rules are presented in \VRef[Section]{s:ArrayTypingC}.

Orthogonally, the \CFA array type works within generic \emph{types}, \ie @forall@-on-@struct@.
The same argument safety and the associated implicit communication of array length occurs.
Preexisting \CFA allowed aggregate types to be generalized with type parameters, enabling parameterizing of element types.
Now, \CFA also allows parameterizing them by length.
Doing so gives a refinement of C's ``flexible array member'' pattern[TODO: cite ARM 6.7.2.1 pp18]\cite{arr:gnu-flex-mbr}.
While a C flexible array member can only occur at the end of the enclosing structure,
\CFA allows length-parameterized array members to be nested at arbitrary locations.
This flexibility, in turn, allows for multiple array members.
\lstinput{10-15}{hello-accordion.cfa}
The structure has course- and student-level metadata (their respective field names) and a position-based preference matrix.
Its layout has the starting offset of @studentIds@ varying according to the generic parameter @C@, and the offset of @preferences@ varying according to both generic parameters.
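For contrast, a sketch of the C pattern being refined (the field names are illustrative, not those of the example program):
\begin{cfa}
struct School {
	int numClasses;
	int numStudents;
	int studentIds[]; $\C{// flexible array member: must be last, so at most one}$
};
struct School * sp = malloc( sizeof(struct School) + students * sizeof(int) ); $\C{// manual size arithmetic}$
\end{cfa}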

\VRef[Figure]{f:checkHarness} shows a program main using @School@ and results with different array sizes.
The @school@ variable holds many students' course-preference forms.
It is on the stack and its initialization does not use any casting or size arithmetic.
Both of these points are impossible with a C flexible array member.
When heap allocation is preferred, the original pattern still applies.
\begin{cfa}
School( classes, students ) * sp = alloc();
\end{cfa}
This ability to avoid casting and size arithmetic improves safety and usability over C flexible array members.
Finally, inputs and outputs are given at the bottom for different sized schools.
The example program prints the courses in each student's preferred order, all using the looked-up display names.

\begin{figure}
\begin{cquote}
\lstinput{50-55}{hello-accordion.cfa}
\lstinput{90-98}{hello-accordion.cfa}
\ \\
@$ cat school1@
\lstinput{}{school1}

@$ ./a.out < school1@
\lstinput{}{school1.out}

@$ cat school2@
\lstinput{}{school2}

@$ ./a.out < school2@
\lstinput{}{school2.out}
\end{cquote}

\caption{\lstinline{School} harness, input and output}
\label{f:checkHarness}
\end{figure}

When a function operates on a @School@ structure, the type system handles its memory layout transparently.
\lstinput{30-37}{hello-accordion.cfa}
In the example, this @getPref@ function answers the question: for the student at position @is@, what is the position of their @pref@\textsuperscript{th}-favoured class?


\section{Dimension Parameter Implementation}

The core of the preexisting \CFA compiler already had the ``heavy equipment'' needed
to provide the feature set just reviewed (up to bugs in cases not yet exercised).
To apply this equipment in tracking array lengths, I encoded a dimension (an array's length) as a type.
The type in question does not describe any data that the program actually uses at runtime.
It simply carries information through intermediate stages of \CFA-to-C lowering.
And this type takes a form such that, once \emph{it} gets lowered, the result does the right thing.

Furthermore, the @array@ type itself is really ``icing on the cake.''
Presenting its full version is deferred until \VRef[Section]{s:ArrayMdImpl}
(where added complexity needed for multiple dimensions is considered).
But simplifications close enough for the present discussion are:
\begin{cfa}
forall( [N] )
struct array_1d_float {
	float items[N];
};
forall( T, [N] )
struct array_1d_T {
	T items[N];
};
\end{cfa}
These two structure patterns, plus a subscript operator, are all that @array@ provides.

My main work is letting a programmer define
such a structure (one whose type is parameterized by @[N]@)
and functions that operate on it (these being similarly parameterized).

The repurposed heavy equipment is:
\begin{itemize}
\item
The resolver, which provides values for a used declaration's type-system variables,
gathered from type information in scope at the usage site.
\item
The box pass, which encodes information about type parameters
into ``extra'' regular parameters/arguments on declarations and calls.
Notably, it conveys the size of a type @foo@ as a @__sizeof_foo@ parameter,
and rewrites the @sizeof(foo)@ expression as @__sizeof_foo@, \ie a use of the parameter.
\end{itemize}

The rules for resolution had to be restricted slightly, in order to achieve important refusal cases.
This work is detailed in \VRef[Section]{s:ArrayTypingC}.
However, the resolution--boxing scheme, in its preexisting state, was already equipped to work on (desugared) dimension parameters.
The following discussion explains the desugaring and how correctly lowered code results.

A simpler structure, and a toy function on it, demonstrate what is needed for the encoding.
\begin{cfa}
forall( [@N@] ) { $\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @N@; } $\C{// [2], [3]}$
}
int main() {
	thing( @10@ ) x; f( x ); $\C{// prints 10, [4]}$
	thing( 100 ) y; f( y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
This example has:
\begin{enumerate}
\item
The symbol @N@ being declared as a type variable (a variable of the type system).
\item
The symbol @N@ being used to parameterize a type.
\item
The symbol @N@ being used as an expression (value).
\item
A value like 10 being used as an argument to the parameter @N@.
\end{enumerate}
The chosen solution is to encode the value @N@ \emph{as a type}, so items 1 and 2 are immediately available for free.
Item 3 needs a way to recover the encoded value from a (valid) type (and to reject invalid types occurring here).
Item 4 needs a way to produce a type that encodes the given value.

Because the box pass handles a type's size as its main datum, the encoding is chosen to use it.
The production and recovery are then straightforward.
\begin{itemize}
\item
The value $n$ is encoded as a type whose size is $n$.
\item
Given a dimension expression $e$, produce type @char[@$e$@]@ to represent it.
If $e$ evaluates to $n$ then the encoded type has size $n$.
\item
Given a type $T$ (produced by these rules), recover the value that it represents with the expression @sizeof(@$T$@)@.
If $T$ has size $n$ then the recovery expression evaluates to $n$.
\end{itemize}
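The essence of this encoding is already visible in plain C (a sketch, with an illustrative constant standing in for a dimension expression):
\begin{cfa}
enum { e = 42 }; $\C{// a dimension expression}$
typedef char enc[e]; $\C{// production: a type whose size is 42}$
_Static_assert( sizeof(enc) == 42, "recovery" ); $\C{// recovery: sizeof yields the encoded value}$
\end{cfa}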

This desugaring is applied in a translation step before the resolver.
The ``validate'' pass hosts it, along with several other canonicalizing and desugaring transformations (the pass's name notwithstanding).
The running example is lowered to:
\begin{cfa}
forall( @N *@ ) { $\C{// [1]}$
	struct thing {};
	void f( thing(@N@) ) { sout | @sizeof(N)@; } $\C{// [2], [3]}$
}
int main() {
	thing( char[@10@] ) x; f( x ); $\C{// prints 10, [4]}$
	thing( char[100] ) y; f( y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
Observe:
\begin{enumerate}
\item
@N@ is now declared to be a type.
It is declared to be \emph{sized} (by the @*@), meaning that the box pass shall do its @sizeof(N)@--@__sizeof_N@ extra parameter and expression translation.
\item
@thing(N)@ is a type; the argument to the generic @thing@ is a type (type variable).
\item
The @sout...@ expression (being an application of the @?|?@ operator) has a second argument that is an ordinary expression.
\item
The type of variable @x@ is another @thing(-)@ type; the argument to the generic @thing@ is a type (array type of bytes, @char@).
\end{enumerate}

From this point, preexisting \CFA compilation takes over lowering it the rest of the way to C.
Here the result shows only the relevant changes of the box pass (as informed by the resolver), leaving the rest unadulterated:
\begin{cfa}
// [1]
void f( size_t __sizeof_N, @void *@ ) { sout | @__sizeof_N@; } $\C{// [2], [3]}$
int main() {
	struct __conc_thing_10 {} x; f( @10@, &x ); $\C{// prints 10, [4]}$
	struct __conc_thing_100 {} y; f( @100@, &y ); $\C{// prints 100}$
	return 0;
}
\end{cfa}
Observe:
\begin{enumerate}
\item
The type parameter @N@ is gone.
\item
The type @thing(N)@ is (replaced by @void *@, but thereby effectively) gone.
\item
The @sout...@ expression (being an application of the @?|?@ operator) has a regular variable (parameter) usage for its second argument.
\item
Information about the particular @thing@ instantiation (value 10) has moved, from the type, to a regular function-call argument.
\end{enumerate}
At the end of the desugaring and downstream processing, the original C idiom of ``pass both a length parameter and a pointer'' has been reconstructed.
In the programmer-written form, only the @thing@ is passed.
The compiler's action produces the more complex form, which, if handwritten, would be error-prone.

Back at the compiler front end, the parsing changes, AST schema extensions, and validation rules that enable the sugared user input are:
\begin{itemize}
\item
Recognize the form @[N]@ as a type-variable declaration within a @forall@.
\item
Have the new brand of type-variable, \emph{Dimension}, in the AST form of a type-variable, to represent one parsed from @[-]@.
\item
Allow a type variable to occur in an expression. Validate (after parsing) that only dimension-branded type variables are used here.
\item
Allow an expression to occur in type-argument position. Brand the resulting type argument as a dimension.
\item
Validate (after parsing), on a generic-type usage, \eg the type part of the declaration
\begin{cfa}
array_1d( foo, bar ) x;
\end{cfa}
\vspace*{-10pt}
that the brands on the generic arguments match the brands of the declared type variables.
Here, that @foo@ is a type and @bar@ is a dimension.
\end{itemize}


\section{Typing of C Arrays}
\label{s:ArrayTypingC}

Essential in giving a guarantee of accurate length is the compiler's ability
to reject a program that presumes to mishandle length.
By contrast, most discussion so far dealt with communicating length,
from one party who knows it, to another who is willing to work with any given length.
For scenarios where the concern is a mishandled length,
the interaction is between two parties who both claim to know something about it.
Such a scenario occurs in this pure C fragment, which today's C compilers accept:
\begin{cfa}
int n = @42@;
float x[n];
float (*xp)[@999@] = &x;
(*xp)[@500@]; $\C{// in "bound"?}$
\end{cfa}
Here, the array @x@ has length 42, while a pointer to it (@xp@) claims length 999.
So, while the subscript of @xp@ at position 500 is out of bound of its referent @x@,
the access appears in-bound of the type information available on @xp@.
Truly, length is being mishandled in the previous step,
where the type-carried length information on @x@ is not compatible with that of @xp@.

The \CFA new-array rejects the analogous case:
\begin{cfa}
int n = @42@;
array(float, n) x;
array(float, 999) * xp = &x; $\C{// static rejection here}$
(*xp)[@500@]; $\C{// runtime check vs len 999}$
\end{cfa}
The way the \CFA array is implemented, the type analysis of this case reduces to a case similar to the earlier C version.
The \CFA compiler's compatibility analysis proceeds as:
\begin{itemize}[parsep=0pt]
\item
Is @array(float, 999)@ type-compatible with @array(float, n)@?
\item
Is @arrayX(float, char[999])@ type-compatible with @arrayX(float, char[n])@?\footnote{
Here, \lstinline{arrayX} represents the type that results
from desugaring the \lstinline{array} type
into a type whose generic parameters are all types.
This presentation elides the noisy fact that
\lstinline{array} is actually a macro for something bigger;
the reduction to \lstinline{char[-]} still proceeds as sketched.}
\item
Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}
Achieving the necessary \CFA rejections meant rejecting the corresponding C case, which is not backward compatible.
There are two complementary mitigations for this incompatibility.

504 | First, a simple recourse is available to a programmer who intends to proceed |
---|
505 | with the statically unsound assignment. |
---|
506 | This situation might arise if @n@ were known to be 999, |
---|
507 | rather than 42, as in the introductory examples. |
---|
508 | The programmer can add a cast in the \CFA code. |
---|
509 | \begin{cfa} |
---|
510 | xp = @(float (*)[999])@ &x; |
---|
511 | \end{cfa} |
---|
512 | This addition causes \CFA to accept, because now, the programmer has accepted blame. |
---|
513 | This addition is benign in plain C, because the cast is valid, just unnecessary there. |
---|
514 | Moreover, the addition can even be seen as appropriate ``eye candy,'' |
---|
515 | marking where the unchecked length knowledge is used. |
---|
516 | Therefore, a program being onboarded to \CFA can receive a simple upgrade, |
---|
517 | to satisfy the \CFA rules (and arguably become clearer), |
---|
518 | without giving up its validity to a plain C compiler. |

Second, the incompatibility only affects types like pointer-to-array,
which are infrequently used in C.
The more common C idiom for aliasing an array is to use a pointer-to-first-element type,
which does not participate in the \CFA array's length checking.\footnote{
Notably, the desugaring of the \lstinline{array} type avoids letting any \lstinline{-[-]} type decay,
in order to preserve the length information that powers runtime bound checking.}
Therefore, the frequency of needing to upgrade legacy C code (as discussed in the first mitigation)
is anticipated to be low.

Because the incompatibility represents a low cost to a \CFA onboarding effort
(with a plausible side benefit of linting the original code for a missing annotation),
no special measures were added to retain compatibility.
It would be possible to flag occurrences of @-[-]@ types that come from @array@ desugaring,
treating those with the stricter \CFA rules, while treating others with the classic C rules.
If future lessons from C-project onboarding warrant it,
this special compatibility measure can be added.

Having established that both the initial C example's check
\begin{itemize}
\item
Is @float[999]@ type-compatible with @float[n]@?
\end{itemize}
and the second \CFA example's induced check
\begin{itemize}
\item
Is @char[999]@ type-compatible with @char[n]@?
\end{itemize}
must have the same answer (``no''),
discussion turns to how I got the \CFA compiler to produce this answer.
In its preexisting form, it produced a (buggy) approximation of the C rules.
To implement the new \CFA rules, I took the syntactic recursion a step further, obtaining,
in both cases:
\begin{itemize}
\item
Is @999@ compatible with @n@?
\end{itemize}
This compatibility question applies to a pair of expressions, where the earlier questions applied to types.
Such an expression-compatibility question is a new addition to the \CFA compiler.
Note, these questions only arise in the context of dimension expressions on (C) array types.

TODO: ensure these compiler implementation matters are treated under \CFA compiler background:
type unification,
cost calculation,
GenPoly.

The relevant technical component of the \CFA compiler is the type-unification procedure within the type resolver.
I added rules for continuing this unification into expressions that occur within types.
It is still fundamentally doing \emph{type} unification
because it is participating in binding type variables,
and not binding any variables that stand in for expression fragments
(there is no such sort of variable in \CFA's analysis).
An unfortunate fact about the \CFA compiler's preexisting implementation is that
type unification suffers from two forms of duplication.

The first duplication has (many of) the unification rules stated twice.
As a result, my additions for dimension expressions are stated twice.
The extra statement of the rules occurs in the @GenPoly@ module,
where concrete types like @array(int, 5)@\footnote{
Again, the presentation is simplified
by leaving the \lstinline{array} macro unexpanded.}
are lowered into corresponding C types @struct __conc_array_1234@ (the suffix being a generated index).
In this case, the struct's definition contains fields that hardcode the argument values @int@ and @5@.
The next time an @array(-,-)@ concrete instance is encountered, this module checks whether the previous @struct __conc_array_1234@ is suitable for it:
yes, for another occurrence of @array(int, 5)@;
no, for either @array(rational(int), 5)@ or @array(int, 42)@.
For the last example, this phase must ``reject''
the hypothesis that it should reuse the dimension-5 instance's C lowering for a dimension-42 instance.

The second duplication has unification (proper) being invoked at two stages of expression resolution.
As a result, my added rule set needs to handle more cases than the preceding discussion motivates.
In the program
\begin{cfa}
void @f@( double );
forall( T & ) void @f@( T & );
void g( int n ) {
	array( float, n + 1 ) x;
	f(x); // overloaded
}
\end{cfa}
when resolving the overloaded function call @f(x)@, the first unification stage
compares the type @T@ of the parameter with @array( float, n + 1 )@, the type of the argument.
TODO: finish.

The actual rules for comparing two dimension expressions are conservative.
To answer ``yes, consider this pair of expressions to be matching''
is to imply ``all else being equal, allow an array with length calculated by $e_1$
to be passed to a function expecting a length-$e_2$ array.''\footnote{
TODO: Deal with directionality, that I'm doing exact-match, no ``at least as long as,'' no subtyping.
Should it be an earlier scoping principle? Feels like it should matter in more places than here.}
So, a ``yes'' answer must represent a guarantee that both expressions evaluate to the
same result, while a ``no'' can tolerate ``they might, but we're not sure,''
provided that practical recourses are available
to let programmers express better knowledge.
The new rule set in the current release is, in fact, extremely conservative.
I chose to keep things simple,
and allow future needs to drive the addition of complexity, within the new framework.

For starters, the original motivating example's rejection
is not based on knowledge that
the @xp@ length of (the literal) 999 is value-unequal to
the (obvious) runtime value of the variable @n@, which is the @x@ length.
Rather, the analysis assumes a variable's value can be anything,
and so there can be no guarantee that its value is 999.
So, a variable and a literal can never match.

Two occurrences of the same literal value are obviously a fine match.
For two occurrences of the same variable, more information is needed.
For example, this case is fine
\begin{cfa}
void f( const int n ) {
	float x[n];
	float (*xp)[n] = &x; // accept
}
\end{cfa}
while this one is not:
\begin{cfa}
void f() {
	int n = 42;
	float x[n];
	n = 999;
	float (*xp)[n] = &x; // reject
}
\end{cfa}

Furthermore, the fact that the first example sees @n@ as @const@
is not actually sufficient.
In the following example, @f@'s length expression is declared as @const@ as it can be,
yet its value still changes between its two uses:
\begin{cquote}
\setlength{\tabcolsep}{15pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}
// compile unit 1
void g();
void f( const int & const nr ) {
	float x[nr];
	g(); // change n
	@float (*xp)[nr] = &x;@ // reject
}
\end{cfa}
&
\begin{cfa}
// compile unit 2
static int n = 42;
void g() {
	n = 99;
}

f( n );
\end{cfa}
\end{tabular}
\end{cquote}
The issue here is that the knowledge needed to make a correct decision is hidden by separate compilation.
Even within a translation unit, static analysis might not be able to provide all the information.

My rule set also respects a traditional C feature: in spite of the C rules' several loose acceptances
of cases that produce different values, there are a few mismatches that C stops.
C is quite precise when working with two static values:
\begin{cfa}
enum { fortytwo = 42 };
float x[fortytwo];
float (*xp1)[42] = &x; // accept
float (*xp2)[999] = &x; // reject
\end{cfa}
My \CFA rules agree with C's on these cases.

In summary, the new rules classify expressions into three groups:
\begin{description}
\item[Statically Evaluable]
Expressions for which a specific value can be calculated (conservatively)
at compile time.
A preexisting \CFA compiler module defines which literals, enumerators, and expressions qualify,
and evaluates them.
\item[Dynamic but Stable]
The value of a variable declared as @const@, including a @const@ parameter.
\item[Potentially Unstable]
The catch-all category. Notable examples include:
any function-call result, @float x[foo()];@,
the particular function-call result that is a pointer dereference, @void f(const int * n)@ @{ float x[*n]; }@, and
any use of a reference-typed variable.
\end{description}
Within these groups, my \CFA rules are:
\begin{itemize}
\item
Accept a Statically Evaluable pair, if both expressions have the same value.
Notably, this rule allows a literal to match with an enumeration value, based on the value.
\item
Accept a Dynamic but Stable pair, if both expressions are written out the same, \eg refer to the same variable declaration.
\item
Otherwise, reject.
Notably, reject all pairs from the Potentially Unstable group and all pairs that cross groups.
\end{itemize}
The traditional C rules are:
\begin{itemize}
\item
Reject a Statically Evaluable pair, if the expressions have two different values.
\item
Otherwise, accept.
\end{itemize}

\begin{figure}
\newcommand{\falsealarm}{{\color{blue}\small{*}}}
\newcommand{\allowmisuse}{{\color{red}\textbf{!}}}
\newcommand{\cmark}{\ding{51}} % from pifont
\newcommand{\xmark}{\ding{55}}

\begin{tabular}{@{}l@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c@{\hspace{8pt}}c@{\hspace{16pt}}c}
	& \multicolumn{2}{c}{\underline{Values Equal}}
	& \multicolumn{2}{c}{\underline{Values Unequal}}
	& \\
	\textbf{Case} & C & \CFA & C & \CFA & Compat. \\
	Both Statically Evaluable, Same Symbol & Accept & Accept & & & \cmark \\
	Both Statically Evaluable, Different Symbols & Accept & Accept & Reject & Reject & \cmark \\
	Both Dynamic but Stable, Same Symbol & Accept & Accept & & & \cmark \\
	Both Dynamic but Stable, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
	Both Potentially Unstable, Same Symbol & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark \\
	Any other grouping, Different Symbols & Accept & Reject\,\falsealarm & Accept\,\allowmisuse & Reject & \xmark
\end{tabular}

\medskip
\noindent\textbf{Legend}
\begin{itemize}[leftmargin=*]
\item
Each row gives the treatment of a test harness of the form
\begin{cfa}
float x[ expr1 ];
float (*xp)[ expr2 ] = &x;
\end{cfa}
\vspace*{-10pt}
where \lstinline{expr1} and \lstinline{expr2} are meta-variables varying according to the row's Case.
Each row's claim applies to other harnesses too, including
\begin{itemize}
\item
calling a function with a parameter like \lstinline{x} and an argument of the \lstinline{xp} type,
\item
assignment in place of initialization,
\item
using references in place of pointers, and
\item
for the \CFA array, calling a polymorphic function on two \lstinline{T}-typed parameters with \lstinline{&x}- and \lstinline{xp}-typed arguments.
\end{itemize}
\item
Each case's claim is symmetric (swapping \lstinline{expr1} with \lstinline{expr2} has no effect),
even though most test harnesses are asymmetric.
\item
The table treats symbolic identity (Same/Different on rows)
apart from value equality (Equal/Unequal on columns).
\begin{itemize}
\item
The expressions \lstinline{1}, \lstinline{0+1} and \lstinline{n}
(where \lstinline{n} is a variable with value 1)
are all different symbols with the value 1.
\item
The column distinction expresses ground truth about whether an omniscient analysis should accept or reject.
\item
The row distinction expresses the simple static factors used by today's analyses.
\end{itemize}
\item
Accordingly, every Reject under Values Equal is a false alarm (\falsealarm),
while every Accept under Values Unequal is an allowed misuse (\allowmisuse).
\end{itemize}

\caption{Case comparison for array type compatibility, given pairs of dimension expressions.}
\label{f:DimexprRuleCompare}
\end{figure}


Figure~\ref{f:DimexprRuleCompare} gives a case-by-case comparison of the consequences of these rule sets.
It demonstrates that \CFA's false alarms occur only in cases that C handles unsafely.
It also shows that the C incompatibilities occur only in cases that C handles unsafely.


The conservatism of the new rule set can leave a programmer needing a recourse,
when using a dimension expression whose stability argument
is more subtle than the current analysis can follow.
This recourse is to declare an explicit constant for the dimension value.
Consider these two dimension expressions,
whose reuses are rejected by the blunt current-state rules:
\begin{cfa}
void f( int & nr, const int nv ) {
	float x[nr];
	float (*xp)[nr] = &x; // reject: nr varying (no references)
	float y[nv + 1];
	float (*yp)[nv + 1] = &y; // reject: ?+? unpredictable (no functions)
}
\end{cfa}
Yet, both dimension expressions are reused safely:
the @nr@ reference is never written to, is not volatile,
and control does not leave the function between the uses;
the name @?+?@ resolves to a function that is quite predictable.
Here, the programmer can add the constant declarations (a cast does not work):
\begin{cfa}
void f( int & nr, const int nv ) {
	@const int nx@ = nr;
	float x[nx];
	float (*xp)[nx] = & x; // accept
	@const int ny@ = nv + 1;
	float y[ny];
	float (*yp)[ny] = & y; // accept
}
\end{cfa}
The result is the originally intended semantics,
achieved by adding a superfluous ``snapshot it as of now'' directive.

The snapshotting trick is also used by the translation, though to achieve a different outcome.
Rather obviously, every array must be subscriptable, even a bizarre one:
\begin{cfa}
array( float, rand(10) ) x;
x[0]; // 10% chance of bound-check failure
\end{cfa}
Less obvious is that the mechanism of subscripting is a function call,
which must communicate the length accurately.
The bound check above (callee logic) must use the actual allocated length of @x@,
without mistakenly reevaluating the dimension expression, @rand(10)@.
Adjusting the example to make the function's use of the length more explicit:
\begin{cfa}
forall ( T * )
void f( T * x ) { sout | sizeof(*x); }
float x[ rand(10) ];
f( x );
\end{cfa}
Considering that the partly translated function declaration is, loosely,
\begin{cfa}
void f( size_t __sizeof_T, void * x ) { sout | __sizeof_T; }
\end{cfa}
a naive translation would evaluate the dimension expression twice:
\begin{cfa}
float x[ rand(10) ];
f( rand(10), &x );
\end{cfa}
Rather, the translation is:
\begin{cfa}
size_t __dim_x = rand(10);
float x[ __dim_x ];
f( __dim_x, &x );
\end{cfa}
Hoisting of dimension expressions during translation already occurred in the preexisting \CFA compiler.
But its cases were buggy, particularly in determining, ``Can hoisting the expression be skipped here?'', for skipping this hoisting is clearly desirable in some cases,
for example, when the programmer has already done so manually. \PAB{I don't know what this means.}
In the new implementation, these cases are correct, harmonized with the accept/reject criteria.

TODO: Discuss the interaction of this dimension hoisting with the challenge of extra unification for cost calculation


\section{Multidimensional array implementation}
\label{s:ArrayMdImpl}

A multidimensional array implementation has three relevant levels of abstraction, from highest to lowest, where the array occupies \emph{contiguous memory}.
\begin{enumerate}
\item
Flexible-stride memory:
this model has complete independence between subscripting ordering and memory layout, offering the ability to slice by (provide an index for) any dimension, \eg slice a plane, row, or column, \eg @c[3][*][*]@, @c[3][4][*]@, @c[3][*][5]@.
\item
Fixed-stride memory:
this model binds the first subscript and the first memory layout dimension, offering the ability to slice by (provide an index for) only the coarsest dimension, @m[row][*]@ or @c[plane][*][*]@, \eg slice only by row (2D) or plane (3D).
After this slice, subscripting and memory layout are independent.
\item
Explicit-displacement memory:
this model has no awareness of dimensions, just the ability to access memory at a distance from a reference point (base-displacement addressing), \eg @x + 23@ or @x[23]@ $\Rightarrow$ 23rd element from the start of @x@.
A programmer must manually build any notion of dimensions using other tools;
hence, this style does not offer multidimensional arrays \see{\VRef[Figure]{f:FixedVariable} right example}.
\end{enumerate}

There is some debate as to whether the abstraction ordering goes $\{1, 2\} < 3$, rather than my numerical ordering.
That is, styles 1 and 2 are at the same abstraction level, with 3 offering a limited set of functionality.
I chose to build the \CFA style-1 array upon a style-2 abstraction.
(Justification of the decision follows, after the description of the design.)

Style 3 is the inevitable target of any array implementation.
The hardware offers this model to the C compiler, with bytes as the unit of displacement.
C offers this model to its programmer as pointer arithmetic, with arbitrary sizes as the unit.
Casting a multidimensional array as a single-dimensional array/pointer, then using @x[i]@ syntax to access its elements, is still a form of pointer arithmetic.

Now stepping into the implementation of \CFA's new style-1 multidimensional arrays in terms of C's existing style-2 multidimensional arrays, it helps to clarify that even the interface is quite low-level.
A C/\CFA array interface includes the resulting memory layout.
The defining requirement of a style-1 system is the ability to slice a column from a column-finest matrix.
The required memory shape of such a slice is fixed, before any discussion of implementation.
The implementation presented here is how the \CFA array library wrangles the C type system, to make it do memory steps that are consistent with this layout.
TODO: do I have/need a presentation of just this layout, just the semantics of -[all]?

The new \CFA standard-library @array@ datatype supports richer multidimensional features than C.
The new array implementation follows C's contiguous approach, \ie @float [r][c]@, with one contiguous object subscripted by coarsely-strided dimensions directly wrapping finely-strided dimensions.
Beyond what C's array type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.

The following examples use the matrix declaration @array( float, 5, 7 ) m@, loaded with values incremented by $0.1$ when stepping across the length-7 finely-strided column dimension, and by $1$ when stepping across the length-5 coarsely-strided row dimension.
\par
\mbox{\lstinput{121-126}{hello-md.cfa}}
\par\noindent
The memory layout is 35 contiguous elements with strictly increasing addresses.

A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.
As for the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array by a prefix of its dimensions, \eg @m[2]@, giving the third row.
This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions, \eg @m[2][3]@, giving the value @2.3@.
The following is an example slicing a row.
\lstinput{60-64}{hello-md.cfa}
\lstinput[aboveskip=0pt]{140-140}{hello-md.cfa}

However, function @print1d@ is asserting too much knowledge about its parameter @r@ for printing either a row slice or a column slice.
Specifically, declaring the parameter @r@ with type @array@ means that @r@ is contiguous, which is unnecessarily restrictive.
That is, @r@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@) with managed length @N@.
The new-array library provides the trait @ar@, so defined.
With it, the original declaration can be generalized with the same body.
\lstinput{43-44}{hello-md.cfa}
\lstinput[aboveskip=0pt]{145-145}{hello-md.cfa}
The nontrivial slicing in this example now allows passing a \emph{noncontiguous} slice to @print1d@, where the new-array library provides a ``subscript by all'' operation for this purpose.
In a multidimensional subscript operation, any dimension given as @all@ is a placeholder, \ie ``not yet subscripted by a value,'' waiting for such a value, implementing the @ar@ trait.
\lstinput{150-151}{hello-md.cfa}

The example shows @x[2]@ and @x[[2, all]]@ both refer to the same, ``2.*'' slice.
Indeed, the various @print1d@ calls under discussion access the entry with value @2.3@ as @x[2][3]@, @x[[2,all]][3]@, and @x[[all,3]][2]@.
This design preserves (and extends) C array semantics by defining @x[[i,j]]@ to be @x[i][j]@ for numeric subscripts, but also for ``subscripting by all.''
That is:
\begin{cquote}
\begin{tabular}{@{}cccccl@{}}
@x[[2,all]][3]@ & $\equiv$ & @x[2][all][3]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is redundant) \\
@x[[all,3]][2]@ & $\equiv$ & @x[all][3][2]@ & $\equiv$ & @x[2][3]@ & (here, @all@ is effective)
\end{tabular}
\end{cquote}

Narrating progress through each of the @-[-][-][-]@\footnote{
The first ``\lstinline{-}'' is a variable expression and the remaining ``\lstinline{-}'' are subscript expressions.}
expressions gives, firstly, a definition of @-[all]@, and secondly, a generalization of C's @-[i]@.
Where @all@ is redundant:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
@x@ & 2-dimensional, want subscripts for coarse then fine \\
@x[2]@ & 1-dimensional, want subscript for fine; lock coarse == 2 \\
@x[2][all]@ & 1-dimensional, want subscript for fine \\
@x[2][all][3]@ & 0-dimensional; lock fine == 3
\end{tabular}
\end{cquote}
Where @all@ is effective:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
@x@ & 2-dimensional, want subscripts for coarse then fine \\
@x[all]@ & 2-dimensional, want subscripts for fine then coarse \\
@x[all][3]@ & 1-dimensional, want subscript for coarse; lock fine == 3 \\
@x[all][3][2]@ & 0-dimensional; lock coarse == 2
\end{tabular}
\end{cquote}
The semantics of @-[all]@ is to dequeue from the front of the ``want subscripts'' list and re-enqueue at its back.
For example, in a two-dimensional matrix, this semantics conceptually transposes the matrix by reversing the subscripts.
The semantics of @-[i]@ is to dequeue from the front of the ``want subscripts'' list and lock its value to be @i@.

Contiguous arrays, and slices of them, are all represented by the same underlying parameterized type, which includes stride information in its metadata.
\PAB{Do not understand this sentence: The \lstinline{-[all]} operation is a conversion from a reference to one instantiation to a reference to another instantiation.}
The running example's @all@-effective step, stated more concretely, is:
\begin{cquote}
\begin{tabular}{@{}ll@{}}
@x@ & : 5 of ( 7 of @float@ each spaced 1 @float@ apart ) each spaced 7 @float@s apart \\
@x[all]@ & : 7 of ( 5 of @float@ each spaced 7 @float@s apart ) each spaced 1 @float@ apart
\end{tabular}
\end{cquote}

\begin{figure}
\includegraphics{measuring-like-layout}
\caption{Visualization of subscripting by value and by \lstinline[language=CFA]{all}, for \lstinline{x} of type \lstinline{array( float, 5, 7 )} understood as 5 rows by 7 columns.
The horizontal layout represents contiguous memory addresses while the vertical layout is conceptual.
The vertical shaded band highlights the location of the targeted element, 2.3.
Any such vertical slice contains various interpretations of a single address.}
\label{fig:subscr-all}
\end{figure}
---|
981 | |
---|
982 | Figure~\ref{fig:subscr-all} shows one element (in the shaded band) accessed two different ways: as @x[2][3]@ and as @x[all][3][2]@. |
---|
983 | In both cases, value 2 selects from the coarser dimension (rows of @x@), |
---|
984 | while the value 3 selects from the finer dimension (columns of @x@). |
---|
985 | The figure illustrates the value of each subexpression, comparing how numeric subscripting proceeds from @x@, \vs from @x[all]@. |
---|
986 | Proceeding from @x@ gives the numeric indices as coarse then fine, while proceeding from @x[all]@ gives them fine then coarse. |
---|
987 | These two starting expressions, which are the example's only multidimensional subexpressions |
---|
988 | (those that received zero numeric indices so far), are illustrated with vertical steps where a \emph{first} numeric index would select. |
---|
989 | |
---|
990 | The figure's presentation offers an intuition answering to: What is an atomic element of @x[all]@? |
---|
991 | From there, @x[all]@ itself is simply a two-dimensional array, in the strict C sense, of these building blocks. |
---|
992 | An atom (like the bottommost value, @x[all][3][2]@), is the contained value (in the square box) |
---|
993 | and a lie about its size (the left diagonal above it, growing upward). |
---|
994 | An array of these atoms (like the intermediate @x[all][3]@) is just a contiguous arrangement of them, done according to their size; |
---|
995 | call such an array a column. |
---|
996 | A column is almost ready to be arranged into a matrix; |
---|
997 | it is the \emph{contained value} of the next-level building block, but another lie about size is required. |
---|
998 | At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward). |
---|
999 | These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@. |
---|
1000 | Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@. |
---|
Yet every time the programmer presents an index, a C-array subscript performs the offset calculation.
1002 | |
---|
1003 | In the @x[all]@ case, after the finely strided subscript is done (column 3 is selected), |
---|
1004 | the locations referenced by the coarse subscript options (rows 0..4) are offset by 3 floats, |
---|
1005 | compared with where analogous rows appear when the row-level option is presented for @x@. |
---|
1006 | |
---|
1007 | For example, in \lstinline{x[all]}, the shaded band touches atoms 2.0, 2.1, 2.2, 2.3, 1.4, 1.5 and 1.6 (left diagonal). |
---|
1008 | But only the atom 2.3 is storing its value there. |
---|
The rest are making (conflicting) claims on this location, but never exercising these alleged claims.
1010 | |
---|
1011 | Lying is implemented as casting. |
---|
1012 | The arrangement just described is implemented in the structure @arpk@. |
---|
1013 | This structure uses one type in its internal field declaration and offers a different type as the return of its subscript operator. |
---|
1014 | The field within is a plain-C array of the fictional type, which is 7 floats long for @x[all][3][2]@ and 1 float long for @x[all][3]@. |
---|
1015 | The subscript operator presents what is really inside, by casting to the type below the left diagonal of the lie. |
---|
1016 | |
---|
% Does x[all] have to lie too? The picture currently glosses over how it advertises a size of 7 floats. I'm leaving that as an edge case benignly misrepresented in the picture. Edge cases only have to be handled right in the code.
1018 | |
---|
1019 | Casting, overlapping, and lying are unsafe. |
---|
1020 | The mission is to implement a style-1 feature in the type system for safe use by a programmer. |
---|
1021 | The offered style-1 system is allowed to be internally unsafe, |
---|
1022 | just as C's implementation of a style-2 system (upon a style-3 system) is unsafe within, even when the programmer is using it without casts or pointer arithmetic. |
---|
1023 | Having a style-1 system relieves the programmer from resorting to unsafe pointer arithmetic when working with noncontiguous slices. |
---|
1024 | |
---|
1025 | % PAB: repeat from previous paragraph. |
---|
1026 | % The choice to implement this style-1 system upon C's style-2 arrays, rather than its style-3 pointer arithmetic, reduces the attack surface of unsafe code. |
---|
1027 | % My casting is unsafe, but I do not do any pointer arithmetic. |
---|
1028 | % When a programmer works in the common-case style-2 subset (in the no-@[all]@ top of Figure~\ref{fig:subscr-all}), my casts are identities, and the C compiler is doing its usual displacement calculations. |
---|
1029 | % If I had implemented my system upon style-3 pointer arithmetic, then this common case would be circumventing C's battle-hardened displacement calculations in favour of my own. |
---|
1030 | |
---|
1031 | % \noindent END: Paste looking for a home |
---|
1032 | |
---|
1033 | The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping. |
---|
1034 | The @arpk@ structure and its @-[i]@ operator are defined as: |
---|
1035 | \begin{cfa} |
---|
1036 | forall( |
---|
1037 | [N], $\C{// length of current dimension}$ |
---|
1038 | S & | sized(S), $\C{// masquerading-as}$ |
---|
1039 | Timmed &, $\C{// immediate element, often another array}$ |
---|
1040 | Tbase & $\C{// base element, \eg float, never array}$ |
---|
1041 | ) { // distribute forall to each element |
---|
1042 | struct arpk { |
---|
1043 | S strides[N]; $\C{// so that sizeof(this) is N of S}$ |
---|
1044 | }; |
---|
1045 | // expose Timmed, stride by S |
---|
1046 | static inline Timmed & ?[?]( arpk( N, S, Timmed, Tbase ) & a, long int i ) { |
---|
1047 | subcheck( a, i, 0, N ); |
---|
1048 | return (Timmed &)a.strides[i]; |
---|
1049 | } |
---|
1050 | } |
---|
1051 | \end{cfa} |
---|
The private @arpk@ structure (array with explicit packing) is generic over four parameters: the dimension length, the masquerading-as type, the immediate element type, and the base element type.
1053 | This structure's public interface is hidden behind the @array(...)@ macro and the subscript operator. |
---|
1054 | Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information. |
---|
1055 | Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding. |
---|
Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
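For example, a sketch (the stride comments are illustrative, not part of the library's public interface) of how the earlier two-dimensional @x@ is viewed through these operations:
\begin{cfa}
array( float, 5, 7 ) x;  $\C{// arpk( 5, Row, Row, float ), where Row = array( float, 7 )}$
x[2];           $\C{// array( float, 7 ): steps by sizeof(Row), yielding a row}$
x[2][3];        $\C{// float \&: steps by sizeof(float) within the row}$
x[all][3];      $\C{// column: 5 floats, each masquerading as a whole row}$
x[all][3][2];   $\C{// float \&: same location as x[2][3]}$
\end{cfa}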
1057 | |
---|
1058 | An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@. |
---|
1059 | In the base case, @array(E_base)@ is just @E_base@. |
---|
Because this construction uses the same type for the generic parameters @S@ and @Timmed@, the resulting layout has trivial strides.
1061 | |
---|
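Concretely, for a two-dimensional instance, this expansion unfolds as:
\begin{cfa}
array( float, 3, 4 )
   // = arpk( 3, array( float, 4 ), array( float, 4 ), float )
   // = arpk( 3, arpk( 4, float, float, float ),
   //            arpk( 4, float, float, float ), float )
\end{cfa}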
Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @Timmed@ chain, which carries @S@ instantiations, intact, to new positions.
Expressed as an operation on types (abbreviating @Timmed@ as $E_i$ and @Tbase@ as $E_b$), this rotation is:
1064 | \begin{eqnarray*} |
---|
1065 | suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\ |
---|
1066 | enq( N, S, E_b, E_b ) & = & arpk( N, S, E_b, E_b ) \\ |
---|
1067 | enq( N, S, arpk(N', S', E_i', E_b), E_b ) & = & arpk( N', S', enq(N, S, E_i', E_b), E_b ) |
---|
1068 | \end{eqnarray*} |
---|
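For example, applying this rotation to a two-dimensional instance exchanges the dimensions, while each level carries its stride type intact:
\begin{eqnarray*}
suball( arpk(N, S, arpk(N', S', E_b, E_b), E_b) ) & = & enq( N, S, arpk(N', S', E_b, E_b), E_b ) \\
 & = & arpk( N', S', enq(N, S, E_b, E_b), E_b ) \\
 & = & arpk( N', S', arpk(N, S, E_b, E_b), E_b )
\end{eqnarray*}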
1069 | |
---|
1070 | |
---|
1071 | \section{Bound checks, added and removed} |
---|
1072 | |
---|
\CFA array subscripting is protected with runtime bound checks.
Having dependent typing lets the optimizer remove more of these bound checks than it could without the type information.
This section demonstrates the effect.
1076 | |
---|
1077 | The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the \CC pattern where restricted vector usage models a checked array. |
---|
1078 | The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based. |
---|
1079 | The experiment compares with the \CC version to keep access to generated assembly code simple. |
---|
1080 | |
---|
1081 | As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions. |
---|
1082 | When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code. |
---|
But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
1084 | |
---|
1085 | TODO: paste source and assembly codes |
---|
1086 | |
---|
Incorporating reuse among dimension sizes gives \CFA an advantage in optimization.
1088 | The case is naive matrix multiplication over a row-major encoding. |
---|
1089 | |
---|
1090 | TODO: paste source codes |
---|
1091 | |
---|
1092 | |
---|
1093 | \section{Array lifecycle} |
---|
1094 | |
---|
1095 | An array is an aggregate, like a structure; |
---|
1096 | both are containers wrapping subordinate objects. |
---|
1097 | Any arbitrary object type, like @string@, can be an array element or structure member. |
---|
A consequence is that the lifetime of the container must match that of its subordinate objects: all elements and members must be initialized/uninitialized implicitly as part of the container's allocation/deallocation.
1099 | Modern programming languages implicitly perform these operations via a type's constructor and destructor. |
---|
1100 | Therefore, \CFA must assure that an array's subordinate objects' lifetime operations are called. |
---|
1101 | |
---|
1102 | Preexisting \CFA mechanisms achieve this requirement, but with poor performance. |
---|
1103 | Furthermore, advanced array users need an exception to the basic mechanism, which does not occur with other aggregates. |
---|
Hence, arrays introduce subtleties in supporting an element's lifecycle.
1105 | |
---|
1106 | The preexisting \CFA support for contained-element lifecycle is based on recursive occurrences of the object-type (@otype@) pseudo-trait. |
---|
1107 | A type is an @otype@, if it provides a default (parameterless) constructor, copy constructor, assignment operator, and destructor (like \CC). |
---|
1108 | When declaring a structure with @otype@ members, the compiler implicitly generates implementations of the four @otype@ functions for the outer structure. |
---|
1109 | Then the generated default constructor for the outer structure calls the default constructor for each member, and the other @otype@ functions work similarly. |
---|
1110 | For a member that is a C array, these calls occur in a loop for each array element, which even works for VLAs. |
---|
1111 | This logic works the same, whether the member is a concrete type (that happens to be an @otype@) or if the member is a polymorphic type asserted to be an @otype@ (which is implicit in the syntax, @forall(T)@). |
---|
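As a sketch of the mechanism (the structure and element type here are illustrative):
\begin{cfa}
forall( T ) struct pair { T x; T y; };  $\C{// T asserted to be an otype}$
pair( string ) p;   $\C{// implicit: string's default ctor runs for p.x and p.y}$
// at scope exit, implicit: string's dtor runs for p.y and p.x
\end{cfa}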
1112 | The \CFA array has the simplified form (similar to one seen before): |
---|
1113 | \begin{cfa} |
---|
1114 | forall( T * ) // non-otype element, no lifecycle functions |
---|
1115 | // forall( T ) // otype element, lifecycle functions asserted |
---|
1116 | struct array5 { |
---|
1117 | T __items[ 5 ]; |
---|
1118 | }; |
---|
1119 | \end{cfa} |
---|
Because @array5@ is a structure with a C-array member, using the @otype@ form of the declaration for @T@ causes @array5(float)@ to implement @otype@ too.
1121 | |
---|
1122 | But this @otype@-recursion pattern has a performance issue. |
---|
1123 | For example, in a cube of @float@: |
---|
1124 | \begin{cfa} |
---|
1125 | array5( array5( array5( float ) ) ) |
---|
1126 | \end{cfa} |
---|
1127 | the first few steps of the compiler's work to find the lifecycle functions, under the @otype@-recursion pattern, are shown in \VRef[Figure]{f:OtypeRecursionBlowup}. |
---|
1128 | All the work needed for the full @float@-cube would have 256 leaves. |
---|
1129 | |
---|
1130 | %array5(T) offers |
---|
1131 | %1 parameterless ctor, which asks for T to have |
---|
1132 | % 1 parameterless ctor |
---|
1133 | % 2 copy ctor |
---|
1134 | % 3 asgt |
---|
1135 | % 4 dtor |
---|
1136 | %2 copy ctor, which asks for T to have |
---|
1137 | % 1 parameterless ctor |
---|
1138 | % 2 copy ctor |
---|
1139 | % 3 asgt |
---|
1140 | % 4 dtor |
---|
1141 | %3 asgt, which asks for T to have |
---|
1142 | % 1 parameterless ctor |
---|
1143 | % 2 copy ctor |
---|
1144 | % 3 asgt |
---|
1145 | % 4 dtor |
---|
1146 | %4 dtor, which asks for T to have |
---|
1147 | % 1 parameterless ctor |
---|
1148 | % 2 copy ctor |
---|
1149 | % 3 asgt |
---|
1150 | % 4 dtor |
---|
1151 | |
---|
1152 | \begin{figure} |
---|
1153 | \centering |
---|
1154 | \setlength{\tabcolsep}{15pt} |
---|
1155 | \begin{tabular}{@{}lll@{}} |
---|
1156 | \begin{cfa}[deletekeywords={default}] |
---|
1157 | float offers |
---|
1158 | 1 default ctor |
---|
1159 | 2 copy ctor |
---|
1160 | 3 asgt |
---|
1161 | 4 dtor |
---|
1162 | |
---|
1163 | |
---|
1164 | |
---|
1165 | |
---|
1166 | |
---|
1167 | |
---|
1168 | |
---|
1169 | |
---|
1170 | |
---|
1171 | |
---|
1172 | |
---|
1173 | |
---|
1174 | |
---|
1175 | |
---|
1176 | |
---|
1177 | |
---|
1178 | |
---|
1179 | |
---|
1180 | |
---|
1181 | |
---|
1182 | |
---|
1183 | |
---|
1184 | |
---|
1185 | |
---|
1186 | \end{cfa} |
---|
1187 | & |
---|
1188 | \begin{cfa}[deletekeywords={default}] |
---|
1189 | array5(float) has |
---|
1190 | 1 default ctor, using float's |
---|
1191 | 1 default ctor |
---|
1192 | 2 copy ctor |
---|
1193 | 3 asgt |
---|
1194 | 4 dtor |
---|
1195 | 2 copy ctor, using float's |
---|
1196 | 1 default ctor |
---|
1197 | 2 copy ctor |
---|
1198 | 3 asgt |
---|
1199 | 4 dtor |
---|
1200 | 3 asgt, using float's |
---|
1201 | 1 default ctor |
---|
1202 | 2 copy ctor |
---|
1203 | 3 asgt |
---|
1204 | 4 dtor |
---|
1205 | 4 dtor, using float's |
---|
1206 | 1 default ctor |
---|
1207 | 2 copy ctor |
---|
1208 | 3 asgt |
---|
1209 | 4 dtor |
---|
1210 | |
---|
1211 | |
---|
1212 | |
---|
1213 | |
---|
1214 | |
---|
1215 | |
---|
1216 | |
---|
1217 | |
---|
1218 | \end{cfa} |
---|
1219 | & |
---|
1220 | \begin{cfa}[deletekeywords={default}] |
---|
1221 | array5(array5(float)) has |
---|
1222 | 1 default ctor, using array5(float)'s |
---|
1223 | 1 default ctor, using float's |
---|
1224 | 1 default ctor |
---|
1225 | 2 copy ctor |
---|
1226 | 3 asgt |
---|
1227 | 4 dtor |
---|
1228 | 2 copy ctor, using float's |
---|
1229 | 1 default ctor |
---|
1230 | 2 copy ctor |
---|
1231 | 3 asgt |
---|
1232 | 4 dtor |
---|
1233 | 3 asgt, using float's |
---|
1234 | 1 default ctor |
---|
1235 | 2 copy ctor |
---|
1236 | 3 asgt |
---|
1237 | 4 dtor |
---|
1238 | 4 dtor, using float's |
---|
1239 | 1 default ctor |
---|
1240 | 2 copy ctor |
---|
1241 | 3 asgt |
---|
1242 | 4 dtor |
---|
1243 | 2 copy ctor, using array5(float)'s |
---|
1244 | ... 4 children, 16 leaves |
---|
1245 | 3 asgt, using array5(float)'s |
---|
1246 | ... 4 children, 16 leaves |
---|
1247 | 4 dtor, using array5(float)'s |
---|
1248 | ... 4 children, 16 leaves |
---|
1249 | (64 leaves) |
---|
1250 | \end{cfa} |
---|
1251 | \end{tabular} |
---|
1252 | \caption{Exponential thunk generation under the otype-recursion pattern. |
---|
1253 | Each time that one type's function (\eg ctor) uses another type's function, the \CFA compiler generates a thunk, to capture the used function's dependencies, presented according to the using function's need. |
---|
1254 | So, each non-leaf line represents a generated thunk and every line represents a search request for the resolver to find a satisfying function.} |
---|
1255 | \label{f:OtypeRecursionBlowup} |
---|
1256 | \end{figure} |
---|
1257 | |
---|
So the @otype@-recursion pattern requests helper functions, and generates thunks, in numbers exponential in the number of dimensions.
1259 | Anecdotal experience with this solution found the resulting compile times annoyingly slow at three dimensions, and unusable at four. |
---|
1260 | |
---|
1261 | The issue is that the otype-recursion pattern uses more assertions than needed. |
---|
Consider how @array5(float)@'s default constructor receives all four lifecycle assertions about the element type, @float@.
1263 | It only needs @float@'s default constructor; |
---|
1264 | the full set of operations is never used. |
---|
1265 | Current work by the \CFA team aims to improve this situation. |
---|
1266 | Therefore, a workaround is needed for now. |
---|
1267 | |
---|
1268 | The workaround is to provide a default constructor, copy constructor and destructor, defined with lean, bespoke assertions: |
---|
1269 | \begin{cquote} |
---|
1270 | \begin{tabular}{@{}l@{\hspace{0.5in}}l@{}} |
---|
1271 | \begin{cfa} |
---|
1272 | // autogenerated for otype-recursion: |
---|
1273 | forall( T ) |
---|
1274 | void ?{}( array5(T) & this ) { |
---|
1275 | for (i; 5) { ( this[i] ){}; } |
---|
1276 | } |
---|
1277 | forall( T ) |
---|
1278 | void ?{}( array5(T) & this, array5(T) & src ) { |
---|
1279 | for (i; 5) { ( this[i] ){ src[i] }; } |
---|
1280 | } |
---|
1281 | forall( T ) |
---|
1282 | void ^?{}( array5(T) & this ) { |
---|
1283 | for (i; 5) { ^( this[i] ){}; } |
---|
1284 | } |
---|
1285 | \end{cfa} |
---|
1286 | & |
---|
1287 | \begin{cfa} |
---|
1288 | // lean, bespoke: |
---|
1289 | forall( T* | { void @?{}( T & )@; } ) |
---|
1290 | void ?{}( array5(T) & this ) { |
---|
1291 | for (i; 5) { ( this[i] ){}; } |
---|
1292 | } |
---|
1293 | forall( T* | { void @?{}( T &, T )@; } ) |
---|
1294 | void ?{}( array5(T) & this, array5(T) & src ) { |
---|
1295 | for (i; 5) { ( this[i] ){ src[i] }; } |
---|
1296 | } |
---|
1297 | forall( T* | { void @?{}( T & )@; } ) |
---|
1298 | void ^?{}( array5(T) & this ) { |
---|
1299 | for (i; 5) { ^( this[i] ){}; } |
---|
1300 | } |
---|
1301 | \end{cfa} |
---|
1302 | \end{tabular} |
---|
1303 | \end{cquote} |
---|
1304 | Moreover, the assignment operator is skipped, to avoid hitting a lingering growth case. |
---|
1305 | Skipping assignment is tolerable because array assignment is not a common operation. |
---|
1306 | With this solution, the critical lifecycle functions are available, with no growth in thunk creation. |
---|
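For example, under the bespoke assertions, the hypothetical usage below type-checks with one element-type assertion per operation, while array assignment must be written element-wise:
\begin{cfa}
array5( string ) a, b;   $\C{// default ctor: only string's ?\{\}( string \& ) asserted}$
// a = b;                $\C{// rejected: no array assignment provided}$
for (i; 5) a[i] = b[i];  $\C{// element-wise assignment instead}$
\end{cfa}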
1307 | |
---|
1308 | Finally, the intuition that a programmer using an array always wants the elements' default constructor called \emph{automatically} is simplistic. |
---|
1309 | Arrays exist to store different values at each position. |
---|
1310 | So, array initialization needs to let the programmer provide different constructor arguments to each element. |
---|
1311 | \begin{cfa} |
---|
1312 | thread worker { int id; }; |
---|
1313 | void ?{}( worker & ) = void; // remove default constructor |
---|
1314 | void ?{}( worker &, int id ); |
---|
1315 | array( worker, 5 ) ws = @{}@; // rejected; but desire is for no initialization yet |
---|
1316 | for (i; 5) (ws[i]){ @i@ }; // explicitly initialize each thread, giving id |
---|
1317 | \end{cfa} |
---|
1318 | Note the use of the \CFA explicit constructor call, analogous to \CC's placement-@new@. |
---|
1319 | This call is where initialization is desired, and not at the declaration of @ws@. |
---|
1320 | The attempt to initialize from nothing (equivalent to dropping @= {}@ altogether) is invalid because the @worker@ type removes the default constructor. |
---|
1321 | The @worker@ type is designed this way to work with the threading system. |
---|
1322 | A thread type forks a thread at the end of each constructor and joins with it at the start of each destructor. |
---|
But a @worker@ cannot begin its forked-thread work without knowing its @id@.
1324 | Therefore, there is a conflict between the implicit actions of the builtin @thread@ type and a user's desire to defer these actions. |
---|
1325 | |
---|
1326 | Another \CFA feature may, at first, seem viable for initializing the array @ws@, though on closer inspection, it is not. |
---|
1327 | C initialization, \lstinline|array(worker, 5) ws @= {};|, ignores all \CFA lifecycle management and uses C empty initialization. |
---|
1328 | This option does achieve the desired semantics on the construction side. |
---|
But on the destruction side, the desired semantics is for implicit destructor calls to continue, to keep the join operation tied to lexical scope.
1330 | C initialization disables \emph{all} implicit lifecycle management, but the goal is to disable only the implicit construction. |
---|
1331 | |
---|
1332 | To fix this problem, I enhanced the \CFA standard library to provide the missing semantics, available in either form: |
---|
1333 | \begin{cfa} |
---|
1334 | array( @uninit@(worker), 5 ) ws1; |
---|
1335 | array( worker, 5) ws2 = { @delay_init@ }; |
---|
1336 | \end{cfa} |
---|
1337 | Both cause the @ws@-construction-time implicit call chain to stop before reaching a @worker@ constructor, while leaving the implicit destruction calls intact. |
---|
1338 | Two forms are available, to parallel the development of this feature in \uCpp. |
---|
1339 | Originally \uCpp offered only the @ws1@ form, using the class-template @uNoCtor@ equivalent to \CFA's @uninit@. |
---|
1340 | More recently, \uCpp was extended with the declaration macro, @uArray@, with usage similar to the @ws2@ case. |
---|
1341 | Based on experience piloting @uArray@ as a replacement of @uNoCtor@, it might be possible to remove the first option. |
---|
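Under either form, the deferred-construction pattern of the earlier @worker@ example becomes (a sketch):
\begin{cfa}
array( @uninit@(worker), 5 ) ws;  $\C{// no worker constructor runs here}$
for (i; 5) (ws[i]){ i };          $\C{// explicit construction forks each thread}$
// scope exit: implicit destructor calls still run, joining each thread
\end{cfa}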
1342 | |
---|
1343 | % note to Mike, I have more fragments on some details available in push24\fragments\uNoCtor.txt |
---|
1344 | |
---|
1345 | \section{Comparison with other arrays} |
---|
1346 | |
---|
1347 | |
---|
1348 | \subsection{Rust} |
---|
1349 | |
---|
1350 | \CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C. |
---|
Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly-typed ownership system, which further helps guarantee statically the validity of every pointer dereference.
1352 | These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid. |
---|
1353 | \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter. |
---|
1354 | |
---|
1355 | \CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns. |
---|
1356 | Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent. |
---|
The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, which is generated automatically and never appears in a user's program.
1358 | |
---|
1359 | |
---|
1360 | \subsection{Java} |
---|
1361 | |
---|
1362 | Java arrays are arrays-of-arrays because all objects are references \see{\VRef{toc:mdimpl}}. |
---|
For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
1364 | \begin{cquote} |
---|
1365 | \begin{tabular}{rl} |
---|
1366 | C & @void f( size_t n, size_t m, float x[n][m] );@ \\ |
---|
1367 | Java & @void f( float x[][] );@ |
---|
1368 | \end{tabular} |
---|
1369 | \end{cquote} |
---|
However, in the C prototype, the parameters @n@ and @m@ are only documentation of the intended sizes of the first and second dimensions of @x@.
1371 | \VRef[Figure]{f:JavaVsCTriangularMatrix} compares a triangular matrix (array-of-arrays) in dynamically safe Java to unsafe C. |
---|
1372 | Each dynamically sized row in Java stores its dimension, while C requires the programmer to manage these sizes explicitly (@rlnth@). |
---|
All subscripting in Java has bounds checking, while C has none.
1374 | Both Java and C require explicit null checking, otherwise there is a runtime failure. |
---|
1375 | |
---|
1376 | \begin{figure} |
---|
1377 | \setlength{\tabcolsep}{15pt} |
---|
1378 | \begin{tabular}{ll@{}} |
---|
1379 | \begin{java} |
---|
1380 | int m[][] = { // triangular matrix |
---|
1381 | new int [4], |
---|
1382 | new int [3], |
---|
1383 | new int [2], |
---|
1384 | new int [1], |
---|
1385 | null |
---|
1386 | }; |
---|
1387 | |
---|
1388 | for ( int r = 0; r < m.length; r += 1 ) { |
---|
1389 | if ( m[r] == null ) continue; |
---|
1390 | for ( int c = 0; c < m[r].length; c += 1 ) { |
---|
1391 | m[r][c] = c + r; // subscript checking |
---|
1392 | } |
---|
1393 | |
---|
1394 | } |
---|
1395 | |
---|
1396 | for ( int r = 0; r < m.length; r += 1 ) { |
---|
1397 | if ( m[r] == null ) { |
---|
1398 | System.out.println( "null row" ); |
---|
1399 | continue; |
---|
1400 | } |
---|
1401 | for ( int c = 0; c < m[r].length; c += 1 ) { |
---|
1402 | System.out.print( m[r][c] + " " ); |
---|
1403 | } |
---|
1404 | System.out.println(); |
---|
1405 | |
---|
1406 | } |
---|
1407 | \end{java} |
---|
1408 | & |
---|
1409 | \begin{cfa} |
---|
1410 | int * m[5] = { // triangular matrix |
---|
1411 | calloc( 4, sizeof(int) ), |
---|
1412 | calloc( 3, sizeof(int) ), |
---|
1413 | calloc( 2, sizeof(int) ), |
---|
1414 | calloc( 1, sizeof(int) ), |
---|
1415 | NULL |
---|
1416 | }; |
---|
1417 | int rlnth = 4; |
---|
1418 | for ( int r = 0; r < 5; r += 1 ) { |
---|
1419 | if ( m[r] == NULL ) continue; |
---|
1420 | for ( int c = 0; c < rlnth; c += 1 ) { |
---|
1421 | m[r][c] = c + r; // no subscript checking |
---|
1422 | } |
---|
1423 | rlnth -= 1; |
---|
1424 | } |
---|
1425 | rlnth = 4; |
---|
1426 | for ( int r = 0; r < 5; r += 1 ) { |
---|
1427 | if ( m[r] == NULL ) { |
---|
1428 | printf( "null row\n" ); |
---|
1429 | continue; |
---|
1430 | } |
---|
1431 | for ( int c = 0; c < rlnth; c += 1 ) { |
---|
1432 | printf( "%d ", m[r][c] ); |
---|
1433 | } |
---|
1434 | printf( "\n" ); |
---|
1435 | rlnth -= 1; |
---|
1436 | } |
---|
1437 | \end{cfa} |
---|
1438 | \end{tabular} |
---|
1439 | \caption{Java (left) \vs C (right) Triangular Matrix} |
---|
1440 | \label{f:JavaVsCTriangularMatrix} |
---|
1441 | \end{figure} |
---|
1442 | |
---|
The downside of the arrays-of-arrays approach is performance, due to pointer chasing versus pointer arithmetic for a contiguous array.
1444 | Furthermore, there is the cost of managing the implicit array descriptor. |
---|
1445 | It is unlikely that a JIT can dynamically rewrite an arrays-of-arrays form into a contiguous form. |
---|
1446 | |
---|
1447 | |
---|
1448 | \subsection{\CC} |
---|
1449 | |
---|
1450 | Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array. |
---|
While the vector size can grow and shrink dynamically, \vs the fixed dynamic size of a VLA, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration (one dynamic allocation) to avoid using @push_back@.
1452 | \begin{c++} |
---|
1453 | vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations |
---|
1454 | \end{c++} |
---|
1455 | Multidimensional arrays are arrays-of-arrays with associated costs. |
---|
Each @vector@ has an array descriptor containing its dimension, which allows bound-checked access using @x.at(i)@ in place of @x[i]@.
1457 | Used with these restrictions, out-of-bound accesses are caught, and in-bound accesses never exercise the vector's ability to grow, preventing costly reallocate and copy, and never invalidate references to contained values. |
---|
1458 | This scheme matches Java's safety and expressiveness exactly, but with the inherent costs. |
---|
1459 | |
---|
1460 | |
---|
1461 | \subsection{Levels of dependently typed arrays} |
---|
1462 | |
---|
1463 | The \CFA array and the field of ``array language'' comparators all leverage dependent types to improve on the expressiveness over C and Java, accommodating examples such as: |
---|
1464 | \begin{itemize} |
---|
1465 | \item a \emph{zip}-style operation that consumes two arrays of equal length |
---|
1466 | \item a \emph{map}-style operation whose produced length matches the consumed length |
---|
1467 | \item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions |
---|
1468 | \end{itemize} |
---|
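In \CFA's notation, such signatures can be sketched as (the function names are illustrative):
\begin{cfa}
forall( [N] )   $\C{// zip-style: both inputs must have equal length N}$
void zip( array( float, N ) & a, array( float, N ) & b, array( float, N ) & out );

forall( [N], [M], [K] )   $\C{// multiply: operands agree on middle dimension M}$
void mult( array( float, N, M ) & a, array( float, M, K ) & b, array( float, N, K ) & c );
\end{cfa}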
Across this field, such expressiveness is not just an available place to document these assumptions; the requirements are strongly guaranteed by default, with varying degrees of static/dynamic checking and of the ability to opt out.
1470 | Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C. |
---|
1471 | |
---|
1472 | Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted. |
---|
1473 | In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example. |
---|
The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.
1475 | In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate. |
---|
1476 | [TODO: clarify how even Idris type checking terminates] |
---|
1477 | |
---|
1478 | Idris is a current, general-purpose dependently typed programming language. |
---|
1479 | Length checking is a common benchmark for full dependent type systems. |
---|
1480 | Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started. |
---|
1481 | [TODO: finish explaining what Data.Vect is and then the essence of the comparison] |
---|
1482 | |
---|
1483 | POINTS: |
---|
1484 | here is how our basic checks look (on a system that does not have to compromise); |
---|
1485 | it can also do these other cool checks, but watch how I can mess with its conservativeness and termination |
---|
1486 | |
---|
1487 | Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length. |
---|
1488 | Unlike \CFA, both are garbage-collected functional languages. |
---|
1489 | Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis, that \CFA aims to avoid, is unnecessary. |
---|
1490 | So, like \CFA, the checking in question is a lightweight bounds-only analysis. |
---|
Like \CFA's, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
1492 | |
---|
1493 | |
---|
1494 | |
---|
1495 | The Futhark work discusses the working language's connection to a lambda calculus, with typing rules and a safety theorem proven in reference to an operational semantics. |
---|
1496 | There is a particular emphasis on an existential type, enabling callee-determined return shapes. |
---|
1497 | |
---|
1498 | |
---|
1499 | Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type. |
---|
1500 | |
---|
Futhark and full-strength dependently typed languages treat array sizes as ordinary values.
1502 | Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not. |
---|
1503 | |
---|
1504 | \CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances. |
---|
1505 | Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark. |
---|
1506 | Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex, [TODO: verify], but like Futhark. |
---|
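For instance, a sketch of this inference at work:
\begin{cfa}
forall( [N] )
float total( array( float, N ) & a ) {
	float sum = 0;
	for ( i; N ) sum += a[i];   $\C{// N available inside the function}$
	return sum;
}
array( float, 42 ) x;
total( x );   $\C{// N = 42 inferred from x's type, communicated implicitly}$
\end{cfa}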
1507 | |
---|
1508 | \subsection{Static safety in C extensions} |
---|
1509 | |
---|
1510 | |
---|
1511 | \section{Future work} |
---|
1512 | |
---|
1513 | \subsection{Declaration syntax} |
---|
1514 | |
---|
1515 | \subsection{Range slicing} |
---|
1516 | |
---|
1517 | \subsection{With a module system} |
---|
1518 | |
---|
1519 | \subsection{With described enumerations} |
---|
1520 | |
---|
1521 | A project in \CFA's current portfolio will improve enumerations. |
---|
1522 | In the incumbent state, \CFA has C's enumerations, unmodified. |
---|
1523 | I will not discuss the core of this project, which has a tall mission already, to improve type safety, maintain appropriate C compatibility and offer more flexibility about storage use. |
---|
1524 | It also has a candidate stretch goal, to adapt \CFA's @forall@ generic system to communicate generalized enumerations: |
---|
\begin{cfa}
forall( T | is_enum(T) )
void show_in_context( T val ) {
	for( T i ) {
		string decorator = "";
		if ( i == val-1 ) decorator = "< ready";
		if ( i == val ) decorator = "< go";
		sout | i | decorator;
	}
}
enum weekday { mon, tue, wed = 500, thu, fri };
show_in_context( wed );
\end{cfa}
with output
\begin{cfa}
mon
tue < ready
wed < go
thu
fri
\end{cfa}
The details of this presentation are not meant as precise suggestions for how the feature should look in \CFA.
But the example shows these abilities:
\begin{itemize}
\item a built-in way (the @is_enum@ trait) for a generic routine to require enumeration-like information about its instantiating type
\item an implicit implementation of the trait whenever a user-written enum occurs (@weekday@'s declaration implies @is_enum@)
\item a total order over the enumeration constants, with predecessor/successor (@val-1@) available, and valid across gaps in values (@tue == 1 && wed == 500 && tue == wed - 1@)
\item a provision for looping (the @for@ form used) over the values of the type.
\end{itemize}

If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness up to the Dex level.

[TODO: introduce Ada in the comparators]

In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, \CC, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.
This generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.

This change of perspective also lets us remove ubiquitous dynamic bound checks.
[TODO: xref] discusses how automatically inserted bound checks can often be optimized away.
But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, and now seeks confirmation.
Removing the ubiquitous dynamic checking means that an ordinary subscript operation is valid only when it can be statically verified to be in bound (and so the ordinary subscript is not dynamically checked), and that an explicit dynamic check is available when the static criterion is impractical to meet.

[TODO, fix confusion: Idris has this arrangement of checks, but still the natural numbers as the domain.]

The structural assumptions required for the domain of an array in Dex are given by the trait (there, ``interface'') @Ix@, which says that the parameter @n@ is a type (which could take an argument like @weekday@) that provides two-way conversion with the integers and a report on the number of values.
Dex's @Ix@ is analogous to the @is_enum@ proposed for \CFA above.
\begin{cfa}
interface Ix n
	get_size n : Unit -> Int
	ordinal : n -> Int
	unsafe_from_ordinal n : Int -> n
\end{cfa}

Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.
This flavour of polymorphism lets a function be generic over how many dimensions (and in what order) a caller uses when interacting with arrays communicated with this function.
Dex's example is a routine that calculates pointwise differences between two samples.
Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).
In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.

The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.
In the audio instantiation, the duration-of-clip type argument is used for the domain.
In the photograph instantiation, it is the tuple type $\langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle$.
This use of a tuple as index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements:
\begin{cfa}
instance {a b} [Ix a, Ix b] Ix (a & b)
	get_size = \(). size a * size b
	ordinal = \(i, j). (ordinal i * size b) + ordinal j
	unsafe_from_ordinal = \o.
		bs = size b
		(unsafe_from_ordinal a (idiv o bs), unsafe_from_ordinal b (rem o bs))
\end{cfa}
and by a user-provided adapter expression at the call site that shows how indexing with a tuple is backed by indexing one dimension at a time:
\begin{cfa}
img_trans :: (img_wd,img_ht)=>Real
img_trans.(i,j) = img.i.j
result = pairwise img_trans
\end{cfa}
[TODO: cite as simplification of example from https://openreview.net/pdf?id=rJxd7vsWPS section 4]

In adapting this pattern to \CFA, my current work provides an adapter from ``successively subscripted'' to ``subscripted by tuple,'' so generalizing my adapter beyond ``subscripted by @ptrdiff_t@'' is likely sufficient to make a user-provided adapter unnecessary.

\subsection{Retire pointer arithmetic}


\section{\CFA}

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\
moved from background chapter \\
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \\

Traditionally, fixing C meant leaving the C-ism alone, while providing a better alternative beside it.
(For later: that is what I offer with array.hfa, but in the future-work vision for arrays, the fix includes helping programmers stop accidentally using a broken C-ism.)

\subsection{\CFA features interacting with arrays}

Prior work on \CFA included making C arrays, as used in C code from the wild,
work when this code is fed into @cfacc@.
The quality of this treatment was fine, with no more or fewer bugs than is typical.

More mixed results arose with feeding these ``C'' arrays into preexisting \CFA features.

A notable success was with the \CFA @alloc@ function,
in which type information associated with a polymorphic return type
replaces @malloc@'s use of programmer-supplied size information.
\begin{cfa}
// C, library
void * malloc( size_t );
// C, user
struct tm * el1 = malloc( sizeof(struct tm) );
struct tm * ar1 = malloc( 10 * sizeof(struct tm) );

// CFA, library
forall( T * ) T * alloc();
// CFA, user
tm * el2 = alloc();
tm (*ar2)[10] = alloc();
\end{cfa}
The @alloc@ call's polymorphic return type compiles into a hidden parameter, which receives a compiler-generated argument.
The compiler's argument generation uses type information from the left-hand side of the initialization to obtain the intended type.
Using a compiler-produced value eliminates an opportunity for user error.

TODO: fix in following: even the alloc call gives bad code gen: verify it was always this way; walk back the wording about things just working here; assignment (rebind) seems to offer workaround, as in bkgd-cfa-arrayinteract.cfa

Bringing in another \CFA feature, reference types, both resolves a sore spot of the last example and gives a first example of an array-interaction bug.
In the last example, the choice of ``pointer to array'' for @ar2@ breaks the parallel with @ar1@:
they are not subscripted in the same way.
\begin{cfa}
ar1[5];
(*ar2)[5];
\end{cfa}
Using ``reference to array'' resolves this issue. TODO: discuss connection with Doug-Lea \CC proposal.
\begin{cfa}
tm (&ar3)[10] = *alloc();
ar3[5];
\end{cfa}
The implicit size communication to @alloc@ still works in the same way as for @ar2@.

Using proper array types (@ar2@ and @ar3@) addresses a concern about using raw element pointers (@ar1@), albeit a theoretical one.
TODO xref: the C standard does not claim that @ar1@ may be subscripted,
because no stage of interpreting the construction of @ar1@ establishes that ``there is an \emph{array object} here.''
But both @*ar2@ and the referent of @ar3@ are the results of \emph{typed} @alloc@ calls,
where the type requested is an array, making the result, much more obviously, an array object.

The ``reference to array'' type has its sore spots too.
TODO see also @dimexpr-match-c/REFPARAM_CALL@ (under @TRY_BUG_1@)

TODO: I fixed a bug associated with using an array as a T. I think. Did I really? What was the bug?