Changeset dab9fb93


Timestamp:
Dec 11, 2023, 1:46:50 PM (8 months ago)
Author:
Michael Brooks <mlbrooks@…>
Branches:
master
Children:
40ab446
Parents:
2554f24
Message:

Accept Peter's proofreading and adjustment of examples to current syntax

Location:
doc/theses/mike_brooks_MMath
Files:
5 edited

  • doc/theses/mike_brooks_MMath/array.tex

    r2554f24 rdab9fb93  
    11\chapter{Array}
    22
     3
    34\section{Features Added}
    45
    56The present work adds a type @array@ to the \CFA standard library~\cite{Cforall}.
    67
    7 This array's length is statically governed and dynamically valued.  This static governance achieves argument safety and suggests a path to subscript safety as future work (TODO: cross reference).  In present state, this work is a runtime libray accessed through a system of macros, while section [TODO: discuss C conexistence] discusses a path for the new array type to be accessed directly by \CFA's array syntax, replacing the lifted C array that this syntax currently exposes.
    8 
    9 This section presents motivating examples of the new array type's usage, and follows up with definitions of the notations that appear.
    10 
    11 The core of the new array governance is tracking all array lengths in the type system.  Dynamically valued lengths are represented using type variables.  The stratification of type variables preceding object declarations makes a length referenceable everywhere that it is needed.  For example, a declaration can share one length, @N@, among a pair of parameters and the return.
    12 \lstinputlisting[language=CFA, firstline=50, lastline=59]{hello-array.cfa}
    13 Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated bool array.
    14 
    15 The array type uses the parameterized length information in its @sizeof(-)@ determination, illustrated in the example's call to @alloc@.  That call requests an allocation of type @array(bool, N)@, which the type system deduces from the left-hand side of the initialization, into the return type of the @alloc@ call.  Preexesting \CFA behaviour is leveraged here, both in the return-type-only polymorphism, and the @sized(T)@-aware standard-library @alloc@ routine.  The new @array@ type plugs into this behaviour by implementing the @sized@/@sizeof(-)@ assertion to have the intuitive meaning.  As a result, this design avoids an opportunity for programmer error by making the size/length communication to a called routine implicit, compared with C's @calloc@ (or the low-level \CFA analog @aalloc@) which take an explicit length parameter not managed by the type system.
    16 
    17 A harness for this @f@ function shows how dynamic values are fed into the system.
    18 \lstinputlisting[language=CFA, firstline=100, lastline=119]{hello-array.cfa}
    19 Here, the @a@ sequence is loaded with decreasing values, and the @b@ sequence with amounts off by a constant, giving relative differences within tolerance at first and out of tolerance later.  The driver program is run with two different inputs of sequence length.
    20 
    21 The loops in the driver follow the more familiar pattern of using the ordinary variable @n@ to convey the length.  The type system implicitly captures this value at the call site (@main@ calling @f@) and makes it available within the callee (@f@'s loop bound).
    22 
    23 The two parts of the example show @Z(n)@ adapting a variable into a type-system governed length (at @main@'s declarations of @a@, @b@, and @result@), @z(N)@ adapting in the opposite direction (at @f@'s loop bound), and a passthru use of a governed length (at @f@'s declaration of @ret@.)  It is hoped that future language integration will allow the macros @Z@ and @z@ to be omitted entirely from the user's notation, creating the appearance of seamlessly interchanging numeric values with appropriate generic parameters.
    24 
    25 The macro-assisted notation, @forall...ztype@, participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and in the function body.  So future language integration only sweetens this form and does not seek to elimimate the declaration.  The present form is chosen to parallel, as closely as a macro allows, the existing forall forms:
    26 \begin{lstlisting}
    27   forall( dtype T  ) ...
    28   forall( otype T  ) ...
    29   forall( ztype(N) ) ...
    30 \end{lstlisting}
    31 
    32 The notation @array(thing, N)@ is also macro-assisted, though only in service of enabling multidimensional uses discussed further in section \ref{toc:mdimpl}.  In a single-dimensional case, the marco expansion gives a generic type instance, exactly like the original form suggests.
    33 
    34 
    35 
     8This array's length is statically managed and dynamically valued.  This static management achieves argument safety and suggests a path to subscript safety as future work (TODO: cross reference).
     9
     10This section presents motivating examples of the new array type's usage and follows up with definitions of the notations that appear.
     11
     12The core of the new array management is tracking all array lengths in the type system.  Dynamically valued lengths are represented using type variables.  The stratification of type variables preceding object declarations makes a length referenceable everywhere that it is needed.  For example, a declaration can share one length, @N@, among a pair of parameters and the return.
     13\lstinputlisting[language=CFA, firstline=10, lastline=17]{hello-array.cfa}
     14Here, the function @f@ does a pointwise comparison, checking if each pair of numbers is within half a percent of each other, returning the answers in a newly allocated @bool@ array.
     15
      16The array type uses the parameterized length information in its @sizeof@ determination, illustrated in the example's call to @alloc@.  That call requests an allocation of type @array(bool, N)@, which the type system deduces from the left-hand side of the initialization and propagates into the return type of the @alloc@ call.  Preexisting \CFA behaviour is leveraged here, both in the return-type-only polymorphism, and the @sized(T)@-aware standard-library @alloc@ routine.  The new @array@ type plugs into this behaviour by implementing the @sized@/@sizeof@ assertion to have the intuitive meaning.  As a result, this design avoids an opportunity for programmer error by making the size/length communication to a called routine implicit, compared with C's @calloc@ (or the low-level \CFA analog @aalloc@), which takes an explicit length parameter not managed by the type system.
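As a minimal sketch of the contrast (assuming the @forall( [N] )@ context of the listing above; the @calloc@ line is the C analogue, shown only for comparison):
\begin{cfa}
array(bool, N) & ret = *alloc();          // length N communicated implicitly, via the deduced return type
bool * raw = calloc( N, sizeof(bool) );   // C style: length and element size passed explicitly, unchecked
\end{cfa}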
     17
     18\VRef[Figure]{f:fHarness} shows the harness to use the @f@ function illustrating how dynamic values are fed into the system.
     19Here, the @a@ array is loaded with decreasing values, and the @b@ array with amounts off by a constant, giving relative differences within tolerance at first and out of tolerance later.  The program main is run with two different inputs of sequence length.
     20
     21\begin{figure}
     22\lstinputlisting[language=CFA, firstline=30, lastline=49]{hello-array.cfa}
     23\caption{\lstinline{f} Harness}
     24\label{f:fHarness}
     25\end{figure}
     26
     27The loops in the program main follow the more familiar pattern of using the ordinary variable @n@ to convey the length.  The type system implicitly captures this value at the call site (@main@ calling @f@) and makes it available within the callee (@f@'s loop bound).
     28
     29The two parts of the example show @n@ adapting a variable into a type-system managed length (at @main@'s declarations of @a@, @b@, and @result@), @N@ adapting in the opposite direction (at @f@'s loop bound), and a pass-thru use of a managed length (at @f@'s declaration of @ret@).
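These three uses can be distilled into a short sketch (hypothetical @flags@ function, not the listing code):
\begin{cfa}
forall( [N] )                                 // declares the managed length N
array(bool, N) & flags( array(float, N) & x ) {
	array(bool, N) & ret = *alloc();          // pass-thru use of the managed length N
	for ( i; N ) ret[i] = x[i] > 0.0;         // N adapted back to an ordinary loop bound
	return ret;
}
int main() {
	int n = 8;
	array(float, n) v;                        // ordinary variable n adapted into a managed length
	for ( i; n ) v[i] = i - 2.5;
	array(bool, n) & pos = flags( v );
	free( &pos );
}
\end{cfa}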
     30
      31The @forall( ...[N] )@ clause participates in the user-relevant declaration of the name @N@, which becomes usable in parameter/return declarations and in the function body. The present form is chosen to parallel the existing @forall@ forms:
     32\begin{cfa}
     33forall( @[N]@ ) ... // array kind
     34forall( & T  ) ...  // reference kind (dtype)
     35forall( T  ) ...    // value kind (otype)
     36\end{cfa}
     37
      38In the single-dimensional case, the notation @array(thing, N)@ gives a generic type instance.
    3639In summary:
    37 
    38 \begin{tabular}{p{15em}p{20em}}
    39   @ztype( N )@ & within a forall, declares the type variable @N@ to be a governed length \\[0.25em]
    40   @Z( @ $e$ @ )@ & a type representing the value of $e$ as a governed length, where $e$ is a @size_t@-typed expression \\[0.25em]
    41   @z( N )@ & an expression of type @size_t@, whose value is the governed length @N@ \\[0.25em]
    42   @array( thing, N0, N1, ... )@
    43   &  a type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects
    44 \end{tabular}
    45 
    46 Unsigned integers have a special status in this type system.  Unlike how C++ allows @template< size_t N, char * msg, typename T >...@ declarations, this system does not accommodate values of any user-provided type.  TODO: discuss connection with dependent types.
    47 
     40\begin{itemize}
     41\item
     42@[N]@ -- within a forall, declares the type variable @N@ to be a managed length
     43\item
     44$e$ -- a type representing the value of $e$ as a managed length, where $e$ is a @size_t@-typed expression
     45\item
      46@N@ -- an expression of type @size_t@, whose value is the managed length @N@
     47\item
     48@array( thing, N0, N1, ... )@ -- a type wrapping $\prod_i N_i$ adjacent occurrences of @thing@ objects
     49\end{itemize}
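A small sketch tying these notations together (hypothetical function, for illustration only):
\begin{cfa}
forall( [N] )                              // [N]: declares the managed length N
float corner( array(float, N, 3) & m ) {   // array over two lengths: N * 3 contiguous floats
	return m[N - 1, 2];                    // N also usable as an ordinary size_t value
}
\end{cfa}
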
     50Unsigned integers have a special status in this type system.  Unlike how C++ allows @template< size_t N, char * msg, typename T >...@ declarations, \CFA does not accommodate values of any user-provided type.  TODO: discuss connection with dependent types.
    4851
    4952An example of a type error demonstrates argument safety.  The running example has @f@ expecting two arrays of the same length.  A compile-time error occurs when attempting to call @f@ with arrays whose lengths may differ.
    50 \lstinputlisting[language=CFA, firstline=150, lastline=155]{hello-array.cfa}
    51 As is common practice in C, the programmer is free to cast, to assert knownledge not shared with the type system.
    52 \lstinputlisting[language=CFA, firstline=200, lastline=202]{hello-array.cfa}
    53 
    54 Argument safety, and the associated implicit communication of length, work with \CFA's generic types too.  As a structure can be defined over a parameterized element type, so can it be defined over a parameterized length.  Doing so gives a refinement of C's ``flexible array member'' pattern, that allows nesting structures with array members anywhere within other structures.
    55 \lstinputlisting[language=CFA, firstline=20, lastline=26]{hello-accordion.cfa}
    56 This structure's layout has the starting offest of @cost_contribs@ varying in @Nclients@, and the offset of @total_cost@ varying in both generic paramters.  For a function that operates on a @request@ structure, the type system handles this variation transparently.
    57 \lstinputlisting[language=CFA, firstline=50, lastline=57]{hello-accordion.cfa}
    58 In the example runs of a driver program, different offset values are navigated in the two cases.
    59 \lstinputlisting[language=CFA, firstline=100, lastline=115]{hello-accordion.cfa}
     53\lstinputlisting[language=CFA, firstline=60, lastline=65]{hello-array.cfa}
     54As is common practice in C, the programmer is free to cast, to assert knowledge not shared with the type system.
     55\lstinputlisting[language=CFA, firstline=70, lastline=75]{hello-array.cfa}
     56
     57Argument safety and the associated implicit communication of array length work with \CFA's generic types too.
      58\CFA allows aggregate types to be generalized with multiple type parameters, including a parameterized element type and, likewise, a parameterized length.
      59Doing so gives a refinement of C's ``flexible array member'' pattern that allows nesting structures with array members anywhere within other structures.
     60\lstinputlisting[language=CFA, firstline=10, lastline=16]{hello-accordion.cfa}
     61This structure's layout has the starting offset of @cost_contribs@ varying in @Nclients@, and the offset of @total_cost@ varying in both generic parameters.  For a function that operates on a @request@ structure, the type system handles this variation transparently.
     62\lstinputlisting[language=CFA, firstline=40, lastline=47]{hello-accordion.cfa}
     63In the example, different runs of the program result in different offset values being used.
     64\lstinputlisting[language=CFA, firstline=60, lastline=76]{hello-accordion.cfa}
     6065The output values show that @summarize@ and its caller agree on both offsets (where the callee starts reading @cost_contribs@ and where the callee writes @total_cost@).  Yet the call site still says just, ``pass the request.''
    6166
     
    6772TODO: introduce multidimensional array feature and approaches
    6873
    69 The new \CFA standard library @array@ datatype supports multidimensional uses more richly than the C array.  The new array's multimentsional interface and implementation, follows an array-of-arrays setup, meaning, like C's @float[n][m]@ type, one contiguous object, with coarsely-strided dimensions directly wrapping finely-strided dimensions.  This setup is in contrast with the pattern of array of pointers to other allocations representing a sub-array.  Beyond what C's type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.  C and C++ require a programmer with such a need to manage pointer/offset arithmetic manually.
    70 
    71 Examples are shown using a $5 \times 7$ float array, @a@, loaded with increments of $0.1$ when stepping across the length-7 finely-strided dimension shown on columns, and with increments of $1.0$ when stepping across the length-5 corsely-strided dimension shown on rows.
    72 \lstinputlisting[language=CFA, firstline=120, lastline=128]{hello-md.cfa}
      74The new \CFA standard library @array@ datatype supports multidimensional uses more richly than the C array.  The new array's multidimensional interface and implementation follow an array-of-arrays setup, meaning, like C's @float[n][m]@ type, one contiguous object, with coarsely-strided dimensions directly wrapping finely-strided dimensions.  This setup contrasts with the pattern of an array of pointers to separately allocated sub-arrays.  Beyond what C's type offers, the new array brings direct support for working with a noncontiguous array slice, allowing a program to work with dimension subscripts given in a non-physical order.  C and C++ require a programmer with such a need to manage pointer/offset arithmetic manually.
     75
     76Examples are shown using a $5 \times 7$ float array, @a@, loaded with increments of $0.1$ when stepping across the length-7 finely-strided dimension shown on columns, and with increments of $1.0$ when stepping across the length-5 coarsely-strided dimension shown on rows.
     77\lstinputlisting[language=CFA, firstline=120, lastline=126]{hello-md.cfa}
    7378The memory layout of @a@ has strictly increasing numbers along its 35 contiguous positions.
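Concretely, a sketch of the resulting linear order (values rounded, following the loading loop above):
\begin{cfa}
// a[0,0]=0.0  a[0,1]=0.1  ...  a[0,6]=0.6  a[1,0]=1.0  ...  a[4,6]=4.6
// i.e., element a[i,j] sits at contiguous position i*7 + j
\end{cfa}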
    7479
    75 A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.  Like with the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array, by a prefix of its dimensions.  This action first subscripts away the most coaresly strided dimensions, leaving a result that expects to be be subscripted by the more finely strided dimensions.
      80A trivial form of slicing extracts a contiguous inner array, within an array-of-arrays.  As with the C array, a lesser-dimensional array reference can be bound to the result of subscripting a greater-dimensional array, by a prefix of its dimensions.  This action first subscripts away the most coarsely strided dimensions, leaving a result that expects to be subscripted by the more finely strided dimensions.
    7681\lstinputlisting[language=CFA, firstline=60, lastline=66]{hello-md.cfa}
    77 \lstinputlisting[language=CFA, firstline=140, lastline=140]{hello-md.cfa}
    78 
    79 This function declaration is asserting too much knowledge about its parameter @c@, for it to be usable for printing either a row slice or a column slice.  Specifically, declaring the parameter @c@ with type @array@ means that @c@ is contiguous.  However, the function does not use this fact.  For the function to do its job, @c@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@), with governed length @N@.  The new-array library provides the trait @ix@, so-defined.  With it, the original declaration can be generalized, while still implemented with the same body, to the latter declaration:
     82\lstinputlisting[aboveskip=0pt, language=CFA, firstline=140, lastline=140]{hello-md.cfa}
     83
      84This function declaration asserts too much knowledge about its parameter @c@ for it to be usable for printing either a row slice or a column slice.  Specifically, declaring the parameter @c@ with type @array@ means that @c@ is contiguous.  However, the function does not use this fact.  For the function to do its job, @c@ need only be of a container type that offers a subscript operator (of type @ptrdiff_t@ $\rightarrow$ @float@), with managed length @N@.  The new-array library provides the trait @ix@, so defined.  With it, the original declaration can be generalized, while still implemented with the same body, to the latter declaration:
    8085\lstinputlisting[language=CFA, firstline=40, lastline=44]{hello-md.cfa}
    81 \lstinputlisting[language=CFA, firstline=145, lastline=145]{hello-md.cfa}
     86\lstinputlisting[aboveskip=0pt, language=CFA, firstline=145, lastline=145]{hello-md.cfa}
    8287
    8388Nontrivial slicing, in this example, means passing a noncontiguous slice to @print1d@.  The new-array library provides a ``subscript by all'' operation for this purpose.  In a multi-dimensional subscript operation, any dimension given as @all@ is left ``not yet subscripted by a value,'' implementing the @ix@ trait, waiting for such a value.
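For example (a sketch, reusing the $5 \times 7$ array @a@ and the generalized @print1d@ above):
\begin{cfa}
print1d( a[2] );        // row 2: a contiguous array(float, 7)
print1d( a[all][2] );   // column 2: a noncontiguous slice of 5 floats, still satisfying ix
\end{cfa}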
     
    122127\begin{figure}
    123128    \includegraphics{measuring-like-layout}
    124     \caption{Visualization of subscripting by value and by \lstinline[language=CFA,basicstyle=\ttfamily]{all}, for \lstinline[language=CFA,basicstyle=\ttfamily]{a} of type \lstinline[language=CFA,basicstyle=\ttfamily]{array( float, Z(5), Z(7) )}. The horizontal dimension represents memory addresses while vertical layout is conceptual.}
     129    \caption{Visualization of subscripting by value and by \lstinline[language=CFA,basicstyle=\ttfamily]{all}, for \lstinline[language=CFA,basicstyle=\ttfamily]{a} of type \lstinline[language=CFA,basicstyle=\ttfamily]{array( float, 5, 7 )}. The horizontal dimension represents memory addresses while vertical layout is conceptual.}
    125130    \label{fig:subscr-all}
    126131\end{figure}
    127132
    128 \noindent While the latter description implies overlapping elements, Figure \ref{fig:subscr-all} shows that the overlaps only occur with unused spaces between elements.  Its depictions of @a[all][...]@ show the navigation of a memory layout with nontrivial strides, that is, with ``spaced \_ floats apart'' values that are greater or smaller than the true count of valid indeces times the size of a logically indexed element.  Reading from the bottom up, the expression @a[all][3][2]@ shows a float, that is masquerading as a @float[7]@, for the purpose of being arranged among its peers; five such occurrences form @a[all][3]@.  The tail of flatter boxes extending to the right of a poper element represents this stretching.  At the next level of containment, the structure @a[all][3]@ masquerades as a @float[1]@, for the purpose of being arranged among its peers; seven such occurrences form @a[all]@.  The verical staircase arrangement represents this compression, and resulting overlapping.
    129 
    130 The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.  The private @arpk@ structure (array with explicit packing) is generic over these two types (and more): the contained element, what it is masquerading as.  This structure's public interface is the @array(...)@ construction macro and the two subscript operators.  Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.  Subscrpting by @all@ rearranges the order of masquerading-as types to achieve, in genernal, nontrivial striding.  Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns there element found there, in unmasked form.
     133\noindent While the latter description implies overlapping elements, Figure \ref{fig:subscr-all} shows that the overlaps only occur with unused spaces between elements.  Its depictions of @a[all][...]@ show the navigation of a memory layout with nontrivial strides, that is, with ``spaced \_ floats apart'' values that are greater or smaller than the true count of valid indices times the size of a logically indexed element.  Reading from the bottom up, the expression @a[all][3][2]@ shows a float, that is masquerading as a @float[7]@, for the purpose of being arranged among its peers; five such occurrences form @a[all][3]@.  The tail of flatter boxes extending to the right of a proper element represents this stretching.  At the next level of containment, the structure @a[all][3]@ masquerades as a @float[1]@, for the purpose of being arranged among its peers; seven such occurrences form @a[all]@.  The vertical staircase arrangement represents this compression, and resulting overlapping.
     134
      135The new-array library defines types and operations that ensure proper elements are accessed soundly in spite of the overlapping.  The private @arpk@ structure (array with explicit packing) is generic over these two types (and more): the contained element, and what it is masquerading as.  This structure's public interface is the @array(...)@ construction macro and the two subscript operators.  Construction by @array@ initializes the masquerading-as type information to be equal to the contained-element information.  Subscripting by @all@ rearranges the order of masquerading-as types to achieve, in general, nontrivial striding.  Subscripting by a number consumes the masquerading-as size of the contained element type, does normal array stepping according to that size, and returns the element found there, in unmasked form.
    131136
    132137The @arpk@ structure and its @-[i]@ operator are thus defined as:
     
    138143      ) {
    139144    struct arpk {
    140         S strides[z(N)];        // so that sizeof(this) is N of S
     145        S strides[N];           // so that sizeof(this) is N of S
    141146    };
    142147
     
    148153\end{lstlisting}
    149154
    150 An instantion of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ exapnsion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.  In the base case, @array(E_base)@ is just @E_base@.  Because this construction uses the same value for the generic parameters @S@ and @E_im@, the resulting layout has trivial strides.
    151 
    152 Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @E_im@ chain, which carries @S@ instatiations, intact, to new positions.  Expressed as an operation on types, this rotation is:
     155An instantiation of the @arpk@ generic is given by the @array(E_base, N0, N1, ...)@ expansion, which is @arpk( N0, Rec, Rec, E_base )@, where @Rec@ is @array(E_base, N1, ...)@.  In the base case, @array(E_base)@ is just @E_base@.  Because this construction uses the same value for the generic parameters @S@ and @E_im@, the resulting layout has trivial strides.
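For example, the $5 \times 7$ array used earlier expands, by the rule just described, as:
\begin{cfa}
// array( float, 5, 7 )
//   = arpk( 5, array(float, 7), array(float, 7), float )
//   = arpk( 5, arpk(7, float, float, float), arpk(7, float, float, float), float )
\end{cfa}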
     156
     157Subscripting by @all@, to operate on nontrivial strides, is a dequeue-enqueue operation on the @E_im@ chain, which carries @S@ instantiations, intact, to new positions.  Expressed as an operation on types, this rotation is:
    153158\begin{eqnarray*}
    154159suball( arpk(N, S, E_i, E_b) ) & = & enq( N, S, E_i, E_b ) \\
     
    160165\section{Bound checks, added and removed}
    161166
    162 \CFA array subscripting is protected with runtime bound checks.  Having dependent typing causes the opimizer to remove more of these bound checks than it would without them.  This section provides a demonstration of the effect.
    163 
    164 The experiment compares the \CFA array system with the padded-room system [todo:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.  The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.  The experiment compares with the C++ version to keep access to generated assembly code simple.
    165 
    166 As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.  When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.  But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assemly, ready to catch an occurrence the mistake.
    167 
    168 TODO: paste source and assemby codes
     167\CFA array subscripting is protected with runtime bound checks.  Having dependent typing causes the optimizer to remove more of these bound checks than it would without them.  This section provides a demonstration of the effect.
     168
     169The experiment compares the \CFA array system with the padded-room system [TODO:xref] most typically exemplified by Java arrays, but also reflected in the C++ pattern where restricted vector usage models a checked array.  The essential feature of this padded-room system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.  The experiment compares with the C++ version to keep access to generated assembly code simple.
     170
      171As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and C++ versions.  When the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.  But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence of the mistake.
     172
     173TODO: paste source and assembly codes
    169174
     170175Incorporating reuse among dimension sizes is seen to give \CFA an advantage in optimization.  The case is naive matrix multiplication over a row-major encoding.
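A sketch of the kind of kernel meant (the benchmark's actual code may differ in details):
\begin{cfa}
forall( [N], [M], [P] )
void mat_mult( array(float, N, P) & c, array(float, N, M) & a, array(float, M, P) & b ) {
	for ( i; N )
		for ( j; P ) {
			c[i,j] = 0;
			for ( k; M )
				c[i,j] += a[i,k] * b[k,j];   // every subscript is bounded by a tracked length
		}
}
\end{cfa}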
     
    178183\section{Comparison with other arrays}
    179184
    180 \CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.  Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer deference.  These systems, therefore, ask the programmer to convince the typechecker that every pointer dereference is valid.  \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
      185\CFA's array is the first lightweight application of dependently-typed bound tracking to an extension of C.  Other extensions of C that apply dependently-typed bound tracking are heavyweight, in that the bound tracking is part of a linearly typed ownership system that further helps guarantee statically the validity of every pointer dereference.  These systems, therefore, ask the programmer to convince the type checker that every pointer dereference is valid.  \CFA imposes the lighter-weight obligation, with the more limited guarantee, that initially-declared bounds are respected thereafter.
    181186
     182187\CFA's array is also the first extension of C to use its tracked bounds to generate the pointer arithmetic implied by advanced allocation patterns.  Other bound-tracked extensions of C either forbid certain C patterns entirely, or address the problem of \emph{verifying} that the user's provided pointer arithmetic is self-consistent.  The \CFA array, applied to accordion structures [TODO: cross-reference], \emph{implies} the necessary pointer arithmetic, generated automatically, and not appearing at all in a user's program.
     
    184189\subsection{Safety in a padded room}
    185190
    186 Java's array [todo:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.  Consider the array parameter declarations in:
     191Java's array [TODO:cite] is a straightforward example of assuring safety against undefined behaviour, at a cost of expressiveness for more applied properties.  Consider the array parameter declarations in:
    187192
    188193\begin{tabular}{rl}
     
    191196\end{tabular}
    192197
    193 Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access.  If a value of @i@ outside this range is used, a runtime error is guaranteed.  In these respects, C offers no guarantess at all.  Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only.  Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@ as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention.  Moreover, even if @a[0][0]@ is valid for the purpose intended, C's basic infamous feature is the possibility of an @i@, such that @a[i][0]@ is not valid for the same purpose, and yet, its evaluation does not produce an error.
     198Java's safety against undefined behaviour assures the callee that, if @a@ is non-null, then @a.length@ is a valid access (say, evaluating to the number $\ell$) and if @i@ is in $[0, \ell)$ then @a[i]@ is a valid access.  If a value of @i@ outside this range is used, a runtime error is guaranteed.  In these respects, C offers no guarantees at all.  Notably, the suggestion that @n@ is the intended size of the first dimension of @a@ is documentation only.  Indeed, many might prefer the technically equivalent declarations @float a[][m]@ or @float (*a)[m]@ as emphasizing the ``no guarantees'' nature of an infrequently used language feature, over using the opportunity to explain a programmer intention.  Moreover, even if @a[0][0]@ is valid for the purpose intended, C's basic infamous feature is the possibility of an @i@, such that @a[i][0]@ is not valid for the same purpose, and yet, its evaluation does not produce an error.
    194199
    195200Java's lack of expressiveness for more applied properties means these outcomes are possible:
     
    201206C's array has none of these limitations, nor do any of the ``array language'' comparators discussed in this section.
    202207
    203 This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [todo:cite example], for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array.  The advice is that, while a vector is also more powerful (and quirky) than an arry, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors.  Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.  Allowing this scheme the same referential integrity assumption that \CFA enjoys [todo:xref], this scheme matches Java's safety and expressiveness exactly.  [TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.]
     208This Java level of safety and expressiveness is also exemplified in the C family, with the commonly given advice [TODO:cite example], for C++ programmers to use @std::vector@ in place of the C++ language's array, which is essentially the C array.  The advice is that, while a vector is also more powerful (and quirky) than an array, its capabilities include options to preallocate with an upfront size, to use an available bound-checked accessor (@a.at(i)@ in place of @a[i]@), to avoid using @push_back@, and to use a vector of vectors.  Used with these restrictions, out-of-bound accesses are stopped, and in-bound accesses never exercise the vector's ability to grow, which is to say, they never make the program slow to reallocate and copy, and they never invalidate the program's other references to the contained values.  Allowing this scheme the same referential integrity assumption that \CFA enjoys [TODO:xref], this scheme matches Java's safety and expressiveness exactly.  [TODO: decide about going deeper; some of the Java expressiveness concerns have mitigations, up to even more tradeoffs.]
    204209
    205210\subsection{Levels of dependently typed arrays}
     
    211216    \item a formulation of matrix multiplication, where the two operands must agree on a middle dimension, and where the result dimensions match the operands' outer dimensions
    212217\end{itemize}
    213 Across this field, this expressiveness is not just an avaiable place to document such assumption, but these requirements are strongly guaranteed by default, with varying levels of statically/dynamically checked and ability to opt out.  Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
    214 
    215 
      218Across this field, this expressiveness is not just an available place to document such assumptions; these requirements are strongly guaranteed by default, with varying degrees of static/dynamic checking and ability to opt out.  Along the way, the \CFA array also closes the safety gap (with respect to bounds) that Java has over C.
    216219
     217220Dependent type systems, considered for the purpose of bound-tracking, can be full-strength or restricted.  In a full-strength dependent type system, a type can encode an arbitrarily complex predicate, with bound-tracking being an easy example.  The tradeoff of this expressiveness is complexity in the checker, typically even a potential for nontermination.  In a restricted dependent type system (purposed for bound tracking), the goal is to check helpful properties, while keeping the checker well-behaved; the other restricted checkers surveyed here, including \CFA's, always terminate.  [TODO: clarify how even Idris type checking terminates]
    218221
    219 Idris is a current, general-purpose dependently typed programming language.  Length checking is a common benchmark for full dependent type stystems.  Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.  [todo: finish explaining what Data.Vect is and then the essence of the comparison]
     222Idris is a current, general-purpose dependently typed programming language.  Length checking is a common benchmark for full dependent type systems.  Here, the capability being considered is to track lengths that adjust during the execution of a program, such as when an \emph{add} operation produces a collection one element longer than the one on which it started.  [TODO: finish explaining what Data.Vect is and then the essence of the comparison]
    220223
    221224POINTS:
    222 here is how our basic checks look (on a system that deosn't have to compromise);
     225here is how our basic checks look (on a system that does not have to compromise);
    223226it can also do these other cool checks, but watch how I can mess with its conservativeness and termination
    224227
    225 Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer offer novel contributions concerning similar, restricted dependent types for tracking array length.  Unlike \CFA, both are garbage-collected functional languages.  Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis, that \CFA aims to avoid, is unnecessary.  So, like \CFA, the checking in question is a leightweight bounds-only analysis.  Like \CFA, their checks that are conservatively limited by forbidding arithmetic in the depended-upon expression.
      228Two current, state-of-the-art array languages, Dex\cite{arr:dex:long} and Futhark\cite{arr:futhark:tytheory}, offer novel contributions concerning similar, restricted dependent types for tracking array length.  Unlike \CFA, both are garbage-collected functional languages.  Because they are garbage-collected, referential integrity is built-in, meaning that the heavyweight analysis that \CFA aims to avoid is unnecessary.  So, like \CFA, the checking in question is a lightweight bounds-only analysis.  Like \CFA, their checks are conservatively limited by forbidding arithmetic in the depended-upon expression.
    226229
    227230
     
    231234Dex uses a novel conception of size, embedding its quantitative information completely into an ordinary type.
    232235
    233 Futhark and full-strength dependently typed lanaguages treat array sizes are ordinary values.  Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
      236Futhark and full-strength dependently typed languages treat array sizes as ordinary values.  Futhark restricts these expressions syntactically to variables and constants, while a full-strength dependent system does not.
    234237
     235238\CFA's hybrid presentation, @forall( [N] )@, has @N@ belonging to the type system, yet has no instances.  Belonging to the type system means it is inferred at a call site and communicated implicitly, like in Dex and unlike in Futhark.  Having no instances means there is no type for a variable @i@ that constrains @i@ to be in the range for @N@, unlike Dex [TODO: verify], but like Futhark.
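To make the distinction concrete (a sketch; @idx@ is a hypothetical type that \CFA does not provide):
\begin{cfa}
forall( [N] )
void g( array(float, N) & a ) {
	size_t n = N;       // N is inferred at the call site and usable as a value
	// idx(N) i;        // no such type exists: nothing constrains a variable to the range [0, N)
}
\end{cfa}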
     
    280283If \CFA gets such a system for describing the list of values in a type, then \CFA arrays are poised to move from the Futhark level of expressiveness, up to the Dex level.
    281284
    282 [TODO: indroduce Ada in the comparators]
     285[TODO: introduce Ada in the comparators]
    283286
    284287In Ada and Dex, an array is conceived as a function whose domain must satisfy only certain structural assumptions, while in C, C++, Java, Futhark and \CFA today, the domain is a prefix of the natural numbers.  The generality has obvious aesthetic benefits for programmers working on scheduling resources to weekdays, and for programmers who prefer to count from an initial number of their own choosing.
    285288
    286 This change of perspective also lets us remove ubiquitous dynamic bound checks.  [TODO: xref] discusses how automatically inserted bound checks can often be otimized away.  But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.  To remove the ubiquitious dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
     289This change of perspective also lets us remove ubiquitous dynamic bound checks.  [TODO: xref] discusses how automatically inserted bound checks can often be optimized away.  But this approach is unsatisfying to a programmer who believes she has written code in which dynamic checks are unnecessary, but now seeks confirmation.  To remove the ubiquitous dynamic checking is to say that an ordinary subscript operation is only valid when it can be statically verified to be in-bound (and so the ordinary subscript is not dynamically checked), and an explicit dynamic check is available when the static criterion is impractical to meet.
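A purely hypothetical sketch of the distinction (neither form is claimed to be current \CFA syntax):
\begin{cfa}
x = a[i];               // ordinary subscript: accepted only when i is statically provable in-bound
x = a[checked( i )];    // explicit, dynamically checked subscript (hypothetical notation)
\end{cfa}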
    287290
    288291[TODO, fix confusion:  Idris has this arrangement of checks, but still the natural numbers as the domain.]
     
    296299\end{lstlisting}
    297300
    298 Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.  This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this funciton.  Dex's example is a routine that calculates pointwise differences between two samples.  Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).  In both cases, but with respectively dimensoned interpretations of ``size,'' this function requries the argument sizes to match, and it produces a result of the that size.
    299 
    300 The polymorphism plays out with the pointwise-difference routine advertizing a single-dimensional interface whose domain type is generic.  In the audio instantiation, the duration-of-clip type argument is used for the domain.  In the photograph instantiation, it's the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.  This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements
      301Dex uses this foundation of a trait (as an array type's domain) to achieve polymorphism over shapes.  This flavour of polymorphism lets a function be generic over how many (and the order of) dimensions a caller uses when interacting with arrays communicated with this function.  Dex's example is a routine that calculates pointwise differences between two samples.  Done with shape polymorphism, one function body is equally applicable to a pair of single-dimensional audio clips (giving a single-dimensional result) and a pair of two-dimensional photographs (giving a two-dimensional result).  In both cases, but with respectively dimensioned interpretations of ``size,'' this function requires the argument sizes to match, and it produces a result of that size.
     302
      303The polymorphism plays out with the pointwise-difference routine advertising a single-dimensional interface whose domain type is generic.  In the audio instantiation, the duration-of-clip type argument is used for the domain.  In the photograph instantiation, it is the tuple-type of $ \langle \mathrm{img\_wd}, \mathrm{img\_ht} \rangle $.  This use of a tuple-as-index is made possible by the built-in rule for implementing @Ix@ on a pair, given @Ix@ implementations for its elements:
    301304\begin{lstlisting}
    302305instance {a b} [Ix a, Ix b] Ix (a & b)
  • doc/theses/mike_brooks_MMath/programs/hello-accordion.cfa

    r2554f24 rdab9fb93  
    1 #include "stdlib.hfa"
    2 #include "array.hfa"
     1#include <fstream.hfa>
     2#include <stdlib.hfa>
     3#include <array.hfa>
     4
     5
     6
     7
     8
     9
     10
     11forall( T, [Nclients], [Ncosts] )
     12struct request {
     13    unsigned int requestor_id;
     14    array( T, Nclients ) impacted_client_ids; // nested VLA
     15    array( float, Ncosts ) cost_contribs; // nested VLA
     16    float total_cost;
     17};
     18
     19
     20// TODO: understand (fix?) why these are needed (autogen seems to be failing ... is typeof as struct member nayok?)
     21
     22forall( T, [Nclients], [Ncosts] )
     23        void ?{}( T &, request( T, Nclients, Ncosts ) & this ) {}
     24
     25forall( T &, [Nclients], [Ncosts] )
     26        void ^?{}( request( T, Nclients, Ncosts ) & this ) {}
    327
    428
     
    1539
    1640
    17 
    18 
    19 
    20 forall( ztype(Nclients), ztype(Ncosts) )
    21 struct request {
    22     unsigned int requestor_id;
    23     array( unsigned int, Nclients ) impacted_client_ids;
    24     array( float, Ncosts ) cost_contribs;
    25     float total_cost;
    26 };
    27 
    28 
    29 // TODO: understand (fix?) why these are needed (autogen seems to be failing ... is typeof as struct member nayok?)
    30 
    31 forall( ztype(Nclients), ztype(Ncosts) )
    32 void ?{}( request(Nclients, Ncosts) & this ) {}
    33 
    34 forall( ztype(Nclients), ztype(Ncosts) )
    35 void ^?{}( request(Nclients, Ncosts) & this ) {}
    36 
    37 
    38 
    39 
    40 
    41 
    42 
    43 
    44 
    45 
    46 
    47 
    48 
    49 
    50 forall( ztype(Nclients), ztype(Ncosts) )
    51 void summarize( request(Nclients, Ncosts) & r ) {
     41forall( T, [Nclients], [Ncosts] )
     42void summarize( request( T, Nclients, Ncosts ) & r ) {
    5243    r.total_cost = 0;
    53     for( i; z(Ncosts) )
     44    for( i; Ncosts )
    5445        r.total_cost += r.cost_contribs[i];
    5546    // say the cost is per-client, to make output vary
    56     r.total_cost *= z(Nclients);
     47    r.total_cost *= Nclients;
    5748}
    5849
     
    6859
    6960
     61int main( int argc, char * argv[] ) {
     62        const int ncl = ato( argv[1] );
     63        const int nco = 2;
    7064
     65        request( int, ncl, nco ) r;
     66        r.cost_contribs[0] = 100;
     67        r.cost_contribs[1] = 0.1;
    7168
    72 
    73 
    74 
    75 
    76 
    77 
    78 
    79 
    80 
    81 
    82 
    83 
    84 
    85 
    86 
    87 
    88 
    89 
    90 
    91 
    92 
    93 
    94 
    95 
    96 int main( int argc, char ** argv ) {
    97 
    98 
    99 
    100 const int ncl = atoi(argv[1]);
    101 const int nco = 2;
    102 
    103 request( Z(ncl), Z(nco) ) r;
    104 r.cost_contribs[0] = 100;
    105 r.cost_contribs[1] = 0.1;
    106 
    107 summarize(r);
    108 printf("Total cost: %.1f\n", r.total_cost);
    109 
     69        summarize(r);
     70        sout | "Total cost:" | r.total_cost;
     71}
    11072/*
    111 ./a.out 5
     73$\$$ ./a.out 5
    11274Total cost: 500.5
    113 ./a.out 6
     75$\$$ ./a.out 6
    11476Total cost: 600.6
    11577*/
    116 
    117 
    118 
    119 
    120 }
  • doc/theses/mike_brooks_MMath/programs/hello-array.cfa

    r2554f24 rdab9fb93  
    1 
    2 #include <common.hfa>
    3 #include <bits/align.hfa>
    4 
    5 extern "C" {
    6     int atoi(const char *str);
    7 }
    8 
    9 
    10 #include "stdlib.hfa"
    11 #include "array.hfa" // learned has to come afer stdlib, which uses the word tag
    12 
    13 
    14 
    15 
    16 
    17 
    18 
    19 
    20 
    21 
    22 
    23 
    24 
    25 
    26 
     1#include <fstream.hfa>
     2#include <stdlib.hfa>
      3#include <array.hfa> // learned has to come after stdlib, which uses the word tag
    274
    285// Usage:
     
    318
    329
    33 
    34 
    35 
    36 
    37 
    38 
    39 
    40 
    41 
    42 
    43 
    44 
    45 
    46 
    47 
    48 
    49 
    50 forall( ztype( N ) )
     10forall( [N] ) // array bound
    5111array(bool, N) & f( array(float, N) & a, array(float, N) & b ) {
    52     array(bool, N) & ret = *alloc();
    53     for( i; z(N) ) {
    54         float fracdiff = 2 * abs( a[i] - b[i] )
    55                        / ( abs( a[i] ) + abs( b[i] ) );
    56         ret[i] = fracdiff < 0.005;
     12    array(bool, N) & ret = *alloc(); // sizeof used by alloc
     13    for( i; N ) {
     14        ret[i] = 0.005 > 2 * (abs(a[i] - b[i])) / (abs(a[i]) + abs(b[i]));
    5715    }
    5816    return ret;
     
    6826
    6927
    70 
    71 
    72 
    73 
    74 
    75 
    76 
    77 
    78 
    79 
    80 
    81 
    82 
    83 
    84 
    85 
    86 
    87 
    88 
    89 
    90 
    91 
    92 
    93 
    94 
    95 
    96 
    97 
    9828// TODO: standardize argv
    9929
    100 int main( int argc, char ** argv ) {
    101     int n = atoi(argv[1]);
    102     array(float, Z(n)) a, b;
    103     for (i; n) {
    104         a[i] = 3.14 / (i+1);
     30int main( int argc, char * argv[] ) {
     31    int n = ato( argv[1] );
     32    array(float, n) a, b; // VLA
     33    for ( i; n ) {
     34        a[i] = 3.14 / (i + 1);
    10535        b[i] = a[i] + 0.005 ;
    10636    }
    107     array(bool, Z(n)) & answer = f( a, b );
    108     printf("answer:");
    109     for (i; n)
    110         printf(" %d", answer[i]);
    111     printf("\n");
    112     free( & answer );
     37    array(bool, n) & result = f( a, b ); // call
     38    sout | "result: " | nonl;
     39    for ( i; n )
     40        sout | result[i] | nonl;
     41    sout | nl;
     42    free( &result ); // free returned storage
    11343}
    11444/*
    115 $ ./a.out 5
    116 answer: 1 1 1 0 0
    117 $ ./a.out 7
    118 answer: 1 1 1 0 0 0 0
     45$\$$ ./a.out 5
     46result: true true true false false
     47$\$$ ./a.out 7
     48result: true true true false false false false
    11949*/
    12050
    121 
    122 
    123 
    124 
    125 
    126 
    127 
    128 
    129 
    130 
    131 
    132 
    133 
    134 
    135 
    136 forall( ztype(M), ztype(N) )
    137 void not_so_bad(array(float, M) &a, array(float, N) &b ) {
     51void fred() {
     52        array(float, 10) a;
     53        array(float, 20) b;
    13854    f( a, a );
    13955    f( b, b );
     56    f( a, b );
    14057}
    14158
    142 
    143 
    144 
    145 
    146 
    147 
    14859#ifdef SHOWERR1
    149 
    150 forall( ztype(M), ztype(N) )
     60forall( [M], [N] )
    15161void bad( array(float, M) &a, array(float, N) &b ) {
    15262    f( a, a ); // ok
     
    15464    f( a, b ); // error
    15565}
    156 
    15766#endif
    15867
    15968
    16069
    161 
    162 
    163 
    164 
    165 
    166 
    167 
    168 
    169 
    170 
    171 
    172 
    173 
    174 
    175 
    176 
    177 
    178 
    179 
    180 
    181 
    182 
    183 
    184 
    185 
    186 
    187 
    188 
    189 
    190 
    191 
    192 
    193 
    194 
    195 
    196 forall( ztype(M), ztype(N) )
    197 void bad_fixed( array(float, M) &a, array(float, N) &b ) {
    198    
    199 
    200     if ( z(M) == z(N) ) {
    201         f( a, ( array(float, M) & ) b ); // fixed
     70forall( [M], [N] )
     71void bad_fixed( array(float, M) & a, array(float, N) & b ) {
     72    if ( M == N ) {
     73        f( a, (array(float, M) &)b ); // cast b to matching type
    20274    }
    203 
    20475}
  • doc/theses/mike_brooks_MMath/programs/hello-md.cfa

    r2554f24 rdab9fb93  
    1 #include "array.hfa"
    2 
     1#include <fstream.hfa>
     2#include <array.hfa>
    33
    44
     
    6060forall( [N] )
    6161void print1d_cstyle( array(float, N) & c ) {
    62     for( i; N ) {
    63         printf("%.1f  ", c[i]);
     62    for ( i; N ) {
     63        sout | c[i] | nonl;
    6464    }
    65     printf("\n");
     65    sout | nl;
    6666}
    67 
    6867
    6968
     
    8180void print1d( C & c ) {
    8281    for( i; N ) {
    83         printf("%.1f  ", c[i]);
     82        sout | c[i] | nonl;
    8483    }
    85     printf("\n");
     84    sout | nl;
    8685}
    8786
     
    103102        for ( j; 7 ) {
    104103            a[i,j] = 1.0 * i + 0.1 * j;
    105             printf("%.1f  ", a[i,j]);
     104            sout | a[[i,j]] | nonl;
    106105        }
    107         printf("\n");
     106        sout | nl;
    108107    }
    109     printf("\n");
     108    sout | nl;
    110109}
    111110
    112111int main() {
    113 
    114112
    115113
     
    128126*/
    129127   
     128
     129
    130130
    131131
     
    168168
    169169}
    170 
    171 
    172 
  • doc/theses/mike_brooks_MMath/string.tex

    r2554f24 rdab9fb93  
     1111Earlier work on \CFA [to cite Schluntz] implemented the feature of constructors and destructors.  A constructor is a user-defined function that runs implicitly, when control passes an object's declaration, while a destructor runs at the exit of the declaration's lexical scope.  The feature allows programmers to assume that, whenever a runtime object of a certain type is accessible, the system called one of the programmer's constructor functions on that object, and a matching destructor call will happen in the future.  The feature helps programmers know that their programs' invariants obtain.
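For reference, a minimal sketch of the \CFA lifecycle-function forms (using the @?{}@/@^?{}@ operator names; @File@ is a made-up type):
\begin{cfa}
struct File { int fd; };
void ?{}( File & this ) { this.fd = -1; }            // constructor: runs when control passes the declaration
void ^?{}( File & this ) { /* release this.fd */ }   // destructor: runs at exit of the declaration's scope
\end{cfa}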
    1212
    13 The purposes of such invariants go beyond ensuring authentic values for the bits inside the object.   These invariants can track occurrences of the managed objects in other data structures.  Reference counting is a typical application of the latter invariant type.  With a reference-counting smart pointer, the consturctor and destructor \emph{of the pointer type} track the lifecycles of occurrences of these pointers, by incrementing and decrementing a counter (ususally) on the referent object, that is, they maintain a that is state separate from the objects to whose lifecycles they are attached.  Both the C++ and \CFA RAII systems ares powerful enough to achive such reference counting.
    14 
    15 The C++ RAII system supports a more advanced application.  A lifecycle function has access to the object under managamanet, by location; constructors and destuctors receive a @this@ parameter providing its memory address.  A lifecycle-function implementation can then add its objects to a collection upon creation, and remove them at destruction.  A modulue that provides such objects, by using and encapsulating such a collection, can traverse the collection at relevant times, to keep the objects ``good.''  Then, if you are the user of such an module, declaring an object of its type means not only receiving an authentically ``good'' value at initialization, but receiving a subscription to a service that will keep the value ``good'' until you are done with it.
    16 
    17 In many cases, the relationship between memory location and lifecycle is simple.  But with stack-allocated objects being used as parameters and returns, there is a sender version in one stack frame and a receiver version in another.  C++ is able to treat those versions as distinct objects and guarantee a copy-constructor call for communicating the value from one to the other.  This ability has implications on the language's calling convention.  Consider an ordinary function @void f( Vehicle x )@, which receives an aggregate by value.  If the type @Vehicle@ has custom lifecycle functions, then a call to a user-provided copy constructor occurs, after the caller evaluates its argument expression, after the callee's stack frame exists, with room for its variable @x@ (which is the location that the copy-constructor must target), but before the user-provided body of @f@ begins executing.  C++ achieves this ordering by changing the function signature, in the compiled form, to pass-by-reference and having the callee invoke the copy constructor in its preamble.  On the other hand, if @Vehicle@ is a simple structure then the C calling convention is applied as the code originally appeared, that is, the callsite implementation code performs a bitwise copy from the caller's expression result, into the callee's x.
      13The purposes of such invariants go beyond ensuring authentic values for the bits inside the object.  These invariants can track occurrences of the managed objects in other data structures.  Reference counting is a typical application of the latter invariant type.  With a reference-counting smart pointer, the constructor and destructor \emph{of the pointer type} track the lifecycles of occurrences of these pointers, by incrementing and decrementing a counter (usually) on the referent object; that is, they maintain state that is separate from the objects to whose lifecycles they are attached.  Both the \CC and \CFA RAII systems are powerful enough to achieve such reference counting.
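A sketch of such a reference-counting pointer in \CFA (illustrative only; @Counted@ and @rc_ptr@ are hypothetical names, and error handling is omitted):
\begin{cfa}
struct Counted { int count; };                   // referent carries its own counter
struct rc_ptr { Counted * target; };
void ?{}( rc_ptr & this, Counted * t ) { this.target = t; t->count += 1; }   // acquire
void ?{}( rc_ptr & this, rc_ptr other ) {                                    // copy: one more occurrence
	this.target = other.target;  this.target->count += 1;
}
void ^?{}( rc_ptr & this ) { this.target->count -= 1; }                      // release
\end{cfa}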
     14
      15The \CC RAII system supports a more advanced application.  A lifecycle function has access to the object under management, by location; constructors and destructors receive a @this@ parameter providing its memory address.  A lifecycle-function implementation can then add its objects to a collection upon creation, and remove them at destruction.  A module that provides such objects, by using and encapsulating such a collection, can traverse the collection at relevant times, to keep the objects ``good.''  Then, if you are the user of such a module, declaring an object of its type means not only receiving an authentically ``good'' value at initialization, but receiving a subscription to a service that will keep the value ``good'' until you are done with it.
     16
      17In many cases, the relationship between memory location and lifecycle is simple.  But with stack-allocated objects being used as parameters and returns, there is a sender version in one stack frame and a receiver version in another.  \CC is able to treat those versions as distinct objects and guarantee a copy-constructor call for communicating the value from one to the other.  This ability has implications for the language's calling convention.  Consider an ordinary function @void f( Vehicle x )@, which receives an aggregate by value.  If the type @Vehicle@ has custom lifecycle functions, then a call to a user-provided copy constructor occurs, after the caller evaluates its argument expression, after the callee's stack frame exists, with room for its variable @x@ (which is the location that the copy-constructor must target), but before the user-provided body of @f@ begins executing.  \CC achieves this ordering by changing the function signature, in the compiled form, to pass-by-reference and having the callee invoke the copy constructor in its preamble.  On the other hand, if @Vehicle@ is a simple structure, then the C calling convention is applied as the code originally appeared, that is, the call-site implementation code performs a bitwise copy from the caller's expression result into the callee's @x@.
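A sketch of that lowering, written as commentary rather than claimed compiler output:
\begin{cfa}
// as written:      void f( Vehicle x );    f( v );
// compiled form (conceptually), when Vehicle has lifecycle functions:
//    void f( Vehicle & arg );              // pass by reference instead of by value
//    preamble of f: ?{}( x, arg );         // callee copy-constructs its own x from the argument
\end{cfa}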
    1818
     1919TODO: learn correction to fix inconsistency: this discussion says the callee invokes the copy constructor, but only the caller knows which copy constructor to use!