Index: doc/theses/mike_brooks_MMath/array.tex
===================================================================
--- doc/theses/mike_brooks_MMath/array.tex	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/array.tex	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -952,5 +952,5 @@
 A column is almost ready to be arranged into a matrix;
 it is the \emph{contained value} of the next-level building block, but another lie about size is required.
-At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
+At first, an atom needs to be arranged as if it is bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
 These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
 Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
@@ -1034,9 +1034,19 @@
 The essential feature of this simple system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
 The experiment uses the \CC version to simplify access to generated assembly code.
-While ``\CC'' labels a participant, it is really the simple-safety system (of @vector@ with @.at@) whose limitaitons are being explained, and not limitations of \CC optimization.
+While ``\CC'' labels a participant, it is really the simple-safety system (of @vector@ with @.at@) whose limitations are being explained, and not limitations of \CC optimization.
 
 As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
 Firstly, when the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
-But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence the mistake.
+But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch a mistake.
+
+\VRef[Figure]{f:ovhd-ctl} gives a control-case example of summing values in an array, where each column shows the program in languages C (a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
+The C code has no bounds checking, while the \CFA and \CC versions can have bounds checking.
+The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
+When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
+When compiling for possible incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
+The assembly code excerpts show optimized translations for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
+Because incorrect-bound mode hardcodes 100 iterations, the loop always executes at least once, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
+For C, this is the only (d--g) difference.
+For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference, while \CC has an additional indirection to dereference the vector's auxiliary allocation.
 
 \begin{figure}
@@ -1077,27 +1087,25 @@
 \end{figure}
 
-\VRef[Figure]{f:ovhd-ctl} gives a control-case example, of summing values in an array.
-Each column shows a this program in a different language: C (with no bound checking ever, a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
-The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
-When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
-When compiling for incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
-The assembly code excerpts show optimized translations, for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
-Because incorrect-bound mode hardcodes 100 iterations, the loop always executes a first time, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
-For C, this difference is the only (d--g) difference.
-For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference.
-It is practically so for \CC too, modulo the additional indirection of dereferencing into the vector's auxiliary allocation.
-
 Differences arise when the bound-incorrect programs are passed an array shorter than 100.
-The C version, (g), proceeds with undefined behaviour, reading past the end of the passed array.
+The C version, (g), proceeds with undefined behaviour, reading past the end of the array.
 The \CFA and \CC versions, (h, i), flag the mistake with the runtime check, the @i >= n@ branch.
-This check is provided implicitly by their library types, @array@ and @vector@ respectively.
-The significant result here is these runtime checks being \emph{absent} from the bound-correct translations, (e, f).
-The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \i.e. @i < n@ either way.
-So a repeat of this check, the branch and its consequent code (raise error) are all removable.
-
-Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
-Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.
+This check is provided implicitly by the library types @array@ and @vector@, respectively.
+The significant result is the \emph{absence} of runtime checks from the bound-correct translations, (e, f).
+The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \ie @i < n@ either way.
+So any repeated checks, \ie the branch and its consequent code (raise error), are removable.
+
+% Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
+% Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.
 
 When dimension sizes get reused, \CFA has an advantage over \CC-vector at getting simply written programs well optimized.
+The example case of \VRef[Figures]{f:ovhd-treat-src} and \VRef[]{f:ovhd-treat-asm} is a simple matrix multiplication over a row-major encoding.
+Simple means coding directly to the intuition of the mathematical definition without trying to optimize for memory layout.
+In the assembly code of \VRef[Figure]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip ahead on zero; loop back for next,'' occurs with three nesting levels.
+Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@ (bottom right).
+Here, \VRef[Figure]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
+Like in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code is working through the \CFA @array@ library.
+The optimizer removed the library-imposed checks because the data structure @array@-of-@array@ is constrained by its type to be shaped correctly for the intuitive looping.
+In \CC, the same constraint does not apply to @vector@-of-@vector@.
+Because every individual @vector@ carries its own size, two types of bound mistakes are possible.
 
 \begin{figure}
@@ -1107,11 +1115,10 @@
 \multicolumn{1}{c}{\CFA} &
 \multicolumn{1}{c}{\CC (nested vector)}
-\\[1em]
+\\
 \lstinput{20-37}{ar-bchk/treatment.c} &
 \lstinput{20-37}{ar-bchk/treatment.cfa} &
 \lstinput{20-37}{ar-bchk/treatment.cc}
 \end{tabular}
-\caption{Overhead comparison, differentiation case, source.
-}
+\caption{Overhead comparison, differentiation case, source.}
 \label{f:ovhd-treat-src}
 \end{figure}
@@ -1133,21 +1140,9 @@
 \end{figure}
 
-The example case of \VRef[Figures]{f:ovhd-treat-src} and \VRef{f:ovhd-treat-asm} is simple matrix multiplication over a row-major encoding.
-Simple means coding directly to the intuition of the mathematical definition, without trying to optimize for memory layout.
-In the assembly code of \VRef[Figures]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip aheas on zero; loop back for next,'' occurs with three nesting levels.
-Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@.
-
-Here, \VRef[Figures]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
-Like in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code is working through the the \CFA @array@ library.
-The optimizer removed the library-imposed checks because the data strructure @array@-of-@array@ is constained by its type to be shaped correctly for the intuitive looping.
-In \CC, the same constraint does not apply to @vector@-of-@vector@.
-Because every individual @vector@ carries its own size, two types of bound mistakes are possible.
-
-Firstly, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
+In detail, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
 The \CFA solution guards against this possibility by encoding the minor length (number of columns) in the major element (row) type.
 In @res@, for example, each of the @M@ rows is @array(float, N)@, guaranteeing @N@ cells within it.
 Or more technically, guaranteeing @N@ as the basis for the imposed bound check \emph{of every row.}
-
-The second type of \CC bound mistake is that its types do not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
+As well, the \CC type does not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
 Even assuming away the first issue, \eg that in @lhs@, all minor/cell counts agree, the data format allows the @rhs@ major/row count to disagree.
 Or, the possibility that the row count @res.size()@ disagrees with the row count @lhs.size()@ illustrates this bound-mistake type in isolation.
@@ -1155,21 +1150,20 @@
 This capability lets \CFA escape the one-to-one correspondence between array instances and symbolic bounds, where this correspondence leaves a \CC-vector programmer stuck with a matrix representation that repeats itself.
 
-It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it does become fast (as C) in losing them.
-The dynamic chekcs were dismissed as unnecessary \emph{because} the program was safe to begin with.
-
-To regain performance, a \CC programmer is left needing to state appropriate assertions or assumptions, to convince the optimizer to dismiss the runtime checks.
-Especially considering that two of them are in the inner-most loop.
-The solution is nontrivial.
-It requires doing the work of the inner-loop checks as a preflight step.
-But this work still takes looping; doing it upfront gives too much separation for the optimizer to see ``has been checked already'' in the deep loop.
+It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it becomes fast (as C) in losing them.
+The dynamic checks are dismissed as unnecessary \emph{because} the program is safe to begin with.
+To achieve the same performance, a \CC programmer must state appropriate assertions or assumptions, to allow the optimizer to dismiss the runtime checks.
+% Especially considering that two of them are in the inner-most loop.
+The solution requires doing the work of the inner-loop checks as a \emph{preflight step}.
+But this step still requires looping, and doing it upfront gives too much separation for the optimizer to see ``has been checked already'' in the deep loop.
 So, the programmer must restate the preflight observation within the deep loop, but this time as an unchecked assumption.
 Such assumptions are risky because they introduce further undefined behaviour when misused.
 Only the programmer's discipline remains to ensure this work is done without error.
 
-The \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.
+In summary, the \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.
 
 \begin{comment}
 The ragged-right issue brings with it a source-of-truth difficulty: Where, in the \CC version, is one to find the value of $N$?  $M$ can come from either @res@'s or @lhs@'s major/row count, and checking these for equality is straightforward.  $P$ can come from @rhs@'s major/row count.  But $N$ is only available from columns, \ie minor/cell counts, which are ragged.  So any choice of initial source of truth, \eg 
 \end{comment}
+
 
 \section{Array Lifecycle}
@@ -1340,6 +1334,4 @@
 However, it only needs @float@'s default constructor, as the other operations are never used.
 Current work by the \CFA team aims to improve this situation.
-Therefore, I had to construct a workaround.
-
 My workaround moves from otype (value) to dtype (pointer) with a default-constructor assertion, where dtype does not generate any constructors but the assertion claws back the default otype constructor.
 \begin{cquote}
@@ -1421,4 +1413,5 @@
 
 \section{Array Comparison}
+
 
 \subsection{Rust}
@@ -1540,5 +1533,4 @@
 \label{JavaCompare}
 
-
 Java arrays are references, so multi-dimension arrays are arrays-of-arrays \see{\VRef{toc:mdimpl}}.
 For each array, Java implicitly stores the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
@@ -1632,5 +1624,5 @@
 Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
 While the vector size can grow and shrink dynamically, \vs an unchanging dynamic size with VLAs, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration.
-So, it costs one upfront dynamic allocation and avoids growing the arry through pushing.
+So, it costs one upfront dynamic allocation and avoids growing the array through pushing.
 \begin{c++}
 vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
@@ -1652,6 +1644,6 @@
 } but does not change the fundamental limit of \lstinline{std::array}, that the length, being a template parameter, must be a static value.
 
-\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bound checking.
-When wrapping a vector, bound checking occurs on regular subscripting; one needn't remember to use @.at@.
+\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bounds checking.
+When wrapping a vector, bounds checking occurs on regular subscripting, \ie one need not remember to use @.at@.
 When wrapping a locally declared fixed-size array, bound communication is implicit.
 But it has a soundness gap by offering construction from pointer and user-given length.
@@ -1664,4 +1656,6 @@
 And furthermore, they do not provide any improvement to the C flexible array member pattern, for making a dynamic amount of storage contiguous with its header, as do \CFA's accordions.
 
+
+\begin{comment}
 \subsection{Levels of Dependently Typed Arrays}
 
@@ -1774,39 +1768,47 @@
 
 \subsection{Static Safety in C Extensions}
+\end{comment}
 
 
 \section{Future Work}
 
-\subsection{Array Syntax}
-
-An important goal is to recast @array(...)@ syntax into C-style @[]@.
-The proposal (which is partially implemented) is an alternate dimension and subscript syntax.
-C multi-dimension and subscripting syntax uses multiple brackets.
-\begin{cfa}
-int m@[2][3]@;  // dimension
-m@[0][1]@ = 3;  // subscript
-\end{cfa}
-Leveraging this syntax, the following (simpler) syntax should be intuitive to C programmers with only a small backwards compatibility issue.
-\begin{cfa}
-int m@[2, 3]@;  // dimension
-m@[0, 1]@ = 3;  // subscript
-\end{cfa}
-However, the new subscript syntax is not backwards compatible, as @0, 1@ is a comma expression.
-Interestingly, disallowed the comma expression in this context eliminates an unreported error: subscripting a matrix with @m[i, j]@ instead of @m[i][j]@, which selects the @j@th row not the @i, j@ element.
-Hence, a comma expression in this context is rare.
+% \subsection{Array Syntax}
+
+An important goal is to recast \CFA-style @array(...)@ syntax into C-style array syntax.
+The proposal (which is partially implemented) is an alternate dimension and subscript syntax for multi-dimension arrays.
+C multi-dimension and subscripting syntax uses multiple bracketed constants/expressions.
+\begin{cfa}
+int m@[2][3]@;   // dimension
+m@[0][1]@ = 3;   // subscript
+\end{cfa}
+The alternative \CFA syntax is a comma separated list.
+\begin{cfa}
+int m@[2, 3]@;   // dimension
+m@[0, 1]@ = 3;   // subscript
+\end{cfa}
+which should be intuitive to C programmers and is used in mathematics, $M_{i,j}$, and in other programming languages, \eg PL/I and Fortran.
+With respect to the dimension expressions, C only allows an assignment expression, so a comma-separated dimension is rejected.
+\begin{cfa}
+	int a[i, j];
+test.c:3:16: error: expected ']' before ',' token
+\end{cfa}
+However, there is an ambiguity for a single-dimension array, where the syntax for old and new arrays is the same.
+The solution is to use a terminating comma to denote a \CFA-style single-dimension array.
+\begin{cfa}
+int m[2$\Huge\color{red},$];  // single dimension
+\end{cfa}
+This syntactic form is also used for the (rare) singleton tuple @[y@{\Large\color{red},}@]@.
+The extra comma in the dimension is only mildly annoying, and acts as eye-candy differentiating old and new arrays.
+Hence, \CFA can repurpose the comma expression in this context for a list of dimensions.
+The ultimate goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs, and eliminating the need for the terminating comma.
+With respect to the subscript expression, the comma expression is allowed.
+However, a comma expression in this context is rare, and is most commonly a (silent) mistake: subscripting a matrix with @m[i, j]@ instead of @m[i][j]@ selects the @j@th row not the @i, j@ element.
 Finally, it is possible to write @m[(i, j)]@ in the new syntax to achieve the equivalent of the old @m[i, j]@.
-Note, the new subscript syntax can easily be internally lowered to @[-][-]...@ and handled as regular subscripting.
-The only ambiguity with C syntax is for a single dimension array, where the syntax for old and new are the same.
-\begin{cfa}
-int m[2@,@];  // single dimension
-m[0] = 3;  // subscript
-\end{cfa}
-The solution for the dimension is to use a terminating comma to denote a new single-dimension array.
-This syntactic form is also used for the (rare) singleton tuple @[y@{\color{red},}@]@.
-The extra comma in the dimension is only mildly annoying, and acts as eye-candy differentiating old and new arrays.
-The subscript operator is not an issue as overloading selects the correct single-dimension operation for old/new array types.
-The ultimately goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs, and eliminating the need for the terminating comma.
-
-
+Note, there is no ambiguity for subscripting a single-dimension array, as the subscript operator selects the correct form from the array type.
+Currently, @array@ supports the old and new subscript syntax \see{\VRef{f:ovhd-treat-src}}, including combinations of new and old, @arr[1, 2][3]@.
+Finally, the new syntax is trivially lowered to C-style dimension and subscripting.
+
+
+\begin{comment}
 \subsection{Range Slicing}
 
@@ -1817,5 +1819,4 @@
 
 
-\begin{comment}
 \section{\texorpdfstring{\CFA}{Cforall}}
 
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/control.c
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/control.c	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/control.c	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -22,7 +22,7 @@
         size_t n, float x[] ) {
 	double sum = 0;
-	for( size_t i = 0;
+	for ( size_t i = 0;
             i < BND( n ); i++ )
-        sum += x[i];
+        sum += @x[i]@;
 	return sum;
 }
@@ -43,6 +43,6 @@
 int main() {
 	float x[EXPSZ];
-	for( size_t i = 0; i < EXPSZ; i++ ) x[i] = 0.1 * (i + 1);
-	for( size_t i = 0; i < EXPSZ; i++ ) printf("elm %zd %g\n", i, x[i]);
+	for ( size_t i = 0; i < EXPSZ; i++ ) x[i] = 0.1 * (i + 1);
+	for ( size_t i = 0; i < EXPSZ; i++ ) printf("elm %zd %g\n", i, x[i]);
 	double sum_ret = sum( EXPSZ, x );
 	printf( "sum   %2g\n", sum_ret );
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cc
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cc	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cc	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -22,7 +22,7 @@
         vector<float> & x ) {
 	double sum = 0;
-	for( size_t i = 0;
+	for ( size_t i = 0;
             i < BND( x.size() ); i++ )
-        sum += x.at(i);
+        sum += @x.at(i)@;
 	return sum;
 }
@@ -43,6 +43,6 @@
 int main() {
 	vector<float> x( EXPSZ );
-	for( size_t i = 0; i < EXPSZ; i++ ) x.at(i) = 0.1 * (i + 1);
-	for( size_t i = 0; i < EXPSZ; i++ ) cout << "elm " << i << " " <<  x[i] << endl;
+	for ( size_t i = 0; i < EXPSZ; i++ ) x.at(i) = 0.1 * (i + 1);
+	for ( size_t i = 0; i < EXPSZ; i++ ) cout << "elm " << i << " " <<  x[i] << endl;
 	double sum_ret = sum( x );
 	cout << "sum   " << sum_ret << endl;
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cfa	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cfa	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -22,7 +22,7 @@
 		array(float, N) & x ) {
 	double sum = 0;
-	for( i;
+	for ( i;
 			BND( N ) )
-		sum += x[i];
+		sum += @x[i]@;
 	return sum;
 }
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.c
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.c	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.c	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -24,14 +24,14 @@
         float lhs[m][p],
         float rhs[p][n] ) {
-    for( size_t i = 0;
+    for ( size_t i = 0;
             i < m; i++ )
-        for( size_t j = 0;
+        for ( size_t j = 0;
                 j < n; j++ ) {
             res[i][j] = 0.0;
-            for( size_t k = 0;
+            for ( size_t k = 0;
                     k < p; k++ )
-                res[i][j] +=
-                    lhs[i][k] *
-                    rhs[k][j];
+                @res[i][j] +=@
+                    @lhs[i][k] *@
+                    @rhs[k][j];@
         }
 }
@@ -49,6 +49,6 @@
 
 static void zero( size_t r, size_t c, float mat[r][c] ) {
-    for( size_t i = 0; i < r; i++ )
-        for( size_t j = 0; j < c; j++ )
+    for ( size_t i = 0; i < r; i++ )
+        for ( size_t j = 0; j < c; j++ )
             mat[i][j] = 0.0;
 }
@@ -56,6 +56,6 @@
 
 static void fill( size_t r, size_t c, float mat[r][c] ) {
-    for( size_t i = 0; i < r; i++ )
-        for( size_t j = 0; j < c; j++ )
+    for ( size_t i = 0; i < r; i++ )
+        for ( size_t j = 0; j < c; j++ )
             mat[i][j] = 1.0 * (i + 1) + 0.1 * (j+1);
 }
@@ -63,6 +63,6 @@
 
 static void print( size_t r, size_t c, float mat[r][c] ) {
-    for( size_t i = 0; i < r; i++ ) {
-        for( size_t j = 0; j < c; j++ )
+    for ( size_t i = 0; i < r; i++ ) {
+        for ( size_t j = 0; j < c; j++ )
             printf("%2g ", mat[i][j]);
         printf("\n");
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cc
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cc	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cc	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -24,14 +24,14 @@
         vector<vector<float>> & lhs,
         vector<vector<float>> & rhs ) {
-    for( size_t i = 0;
+    for ( size_t i = 0;
             i < res.size(); i++ )
-        for( size_t j = 0;
+        for ( size_t j = 0;
                 j < res.at(i).size(); j++ ) {
             res.at(i).at(j) = 0.0;
-            for( size_t k = 0;
+            for ( size_t k = 0;
                     k < rhs.size(); k++ )
-                res.at(i).at(j) +=
-                    lhs.at(i).at(k) *
-                    rhs.at(k).at(j);
+                @res.at(i).at(j) +=@
+                    @lhs.at(i).at(k) *@
+                    @rhs.at(k).at(j);@
         }
 }
@@ -49,6 +49,6 @@
 
 static void zero( vector<vector<float>> & mat ) {
-    for( size_t i = 0; i < mat.size(); i++ )
-        for( size_t j = 0; j < mat.at(i).size(); j++ )
+    for ( size_t i = 0; i < mat.size(); i++ )
+        for ( size_t j = 0; j < mat.at(i).size(); j++ )
             mat.at(i).at(j) = 0.0;
 }
@@ -56,6 +56,6 @@
 
 static void fill( vector<vector<float>> & mat ) {
-    for( size_t i = 0; i < mat.size(); i++ )
-        for( size_t j = 0; j < mat.at(i).size(); j++ )
+    for ( size_t i = 0; i < mat.size(); i++ )
+        for ( size_t j = 0; j < mat.at(i).size(); j++ )
             mat.at(i).at(j) = 1.0 * (i + 1) + 0.1 * (j+1);
 }
@@ -63,6 +63,6 @@
 
 static void print( vector<vector<float>> & mat ) {
-    for( size_t i = 0; i < mat.size(); i++ ) {
-        for( size_t j = 0; j < mat.at(i).size(); j++ )
+    for ( size_t i = 0; i < mat.size(); i++ ) {
+        for ( size_t j = 0; j < mat.at(i).size(); j++ )
             cout << mat.at(i).at(j) << " ";
         cout << endl;
Index: doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cfa
===================================================================
--- doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cfa	(revision eb0d9b7d937213cd3bc39235c18121a496cbb52f)
+++ doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cfa	(revision 80e83b6c70ab292c8a3da65fc7273607cea987b3)
@@ -24,14 +24,14 @@
         array(float, M, P) & lhs,
         array(float, P, N) & rhs ) {
-    for( i; M )
+    for ( i; M )
 
-        for( j; N ) {
+        for ( j; N ) {
 
             res[i, j] = 0.0;
-            for( k; P )
+            for ( k; P )
 
-                res[i, j] +=
-                    lhs[i, k] *
-                    rhs[k, j];
+                @res[i, j] +=@
+                    @lhs[i, k] *@
+                    @rhs[k, j];@
         }
 }
