Changeset 80e83b6c
- Timestamp:
- Dec 21, 2025, 1:35:46 PM
- Branches:
- master
- Parents:
- eb0d9b7
- Location:
- doc/theses/mike_brooks_MMath
- Files:
- 7 edited:
  - array.tex (modified) (14 diffs)
  - programs/ar-bchk/control.c (modified) (2 diffs)
  - programs/ar-bchk/control.cc (modified) (2 diffs)
  - programs/ar-bchk/control.cfa (modified) (1 diff)
  - programs/ar-bchk/treatment.c (modified) (4 diffs)
  - programs/ar-bchk/treatment.cc (modified) (4 diffs)
  - programs/ar-bchk/treatment.cfa (modified) (1 diff)
Legend:
- Unmodified lines are prefixed with a space
- Added lines are prefixed with +
- Removed lines are prefixed with -
doc/theses/mike_brooks_MMath/array.tex
--- eb0d9b7
+++ 80e83b6c
@@ -952 +952 @@
 A column is almost ready to be arranged into a matrix;
 it is the \emph{contained value} of the next-level building block, but another lie about size is required.
-At first, an atom needs to be arranged as if it were bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
+At first, an atom needs to be arranged as if it is bigger, but now a column needs to be arranged as if it is smaller (the left diagonal above it, shrinking upward).
 These lying columns, arranged contiguously according to their size (as announced) form the matrix @x[all]@.
 Because @x[all]@ takes indices, first for the fine stride, then for the coarse stride, it achieves the requirement of representing the transpose of @x@.
@@ -1034 +1034 @@
 The essential feature of this simple system is the one-to-one correspondence between array instances and the symbolic bounds on which dynamic checks are based.
 The experiment uses the \CC version to simplify access to generated assembly code.
-While ``\CC'' labels a participant, it is really the simple-safety system (of @vector@ with @.at@) whose limitaitons are being explained, and not limitations of \CC optimization.
+While ``\CC'' labels a participant, it is really the simple-safety system (of @vector@ with @.at@) whose limitations are being explained, and not limitations of \CC optimization.
 
 As a control case, a simple loop (with no reused dimension sizes) is seen to get the same optimization treatment in both the \CFA and \CC versions.
 Firstly, when the programmer treats the array's bound correctly (making the subscript ``obviously fine''), no dynamic bound check is observed in the program's optimized assembly code.
-But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch an occurrence the mistake.
+But when the bounds are adjusted, such that the subscript is possibly invalid, the bound check appears in the optimized assembly, ready to catch a mistake.
+
+\VRef[Figure]{f:ovhd-ctl} gives a control-case example of summing values in an array, where each column shows the program in languages C (a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
+The C code has no bounds checking, while the \CFA and \CC can have bounds checking.
+The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
+When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
+When compiling for possible incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
+The assembly code excerpts show optimized translations for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
+Because incorrect-bound mode hardcodes 100 iterations, the loop always executes a first time, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
+For C, this difference is the only (d--g) difference.
+For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference, while \CC has an additional indirection to dereference the vector's auxiliary allocation.
@@ -1041 +1051 @@
 
 \begin{figure}
@@ -1077 +1087 @@
 \end{figure}
 
-\VRef[Figure]{f:ovhd-ctl} gives a control-case example, of summing values in an array.
-Each column shows a this program in a different language: C (with no bound checking ever, a,~d,~g), \CFA (b,~e,~h), and \CC (c,~f,~i).
-The source-code functions in (a, b, c) can be compiled to have either correct or incorrect uses of bounds.
-When compiling for correct bound use, the @BND@ macro passes its argument through, so the loops cover exactly the passed array sizes.
-When compiling for incorrect bound use (a programming error), the @BND@ macro hardcodes the loops for 100 iterations, which implies out-of-bound access attempts when the passed array has fewer than 100 elements.
-The assembly code excerpts show optimized translations, for correct-bound mode in (d, e, f) and incorrect-bound mode in (g, h, i).
-Because incorrect-bound mode hardcodes 100 iterations, the loop always executes a first time, so this mode does not have the @n == 0@ branch seen in correct-bound mode.
-For C, this difference is the only (d--g) difference.
-For correct bounds, the \CFA translation equals the C translation, \ie~there is no (d--e) difference.
-It is practically so for \CC too, modulo the additional indirection of dereferencing into the vector's auxiliary allocation.
-
 Differences arise when the bound-incorrect programs are passed an array shorter than 100.
-The C version, (g), proceeds with undefined behaviour, reading past the end of the passed array.
+The C version, (g), proceeds with undefined behaviour, reading past the end of the array.
 The \CFA and \CC versions, (h, i), flag the mistake with the runtime check, the @i >= n@ branch.
-This check is provided implicitly by their library types, @array@ and @vector@ respectively.
-The significant result here is these runtime checks being \emph{absent} from the bound-correct translations, (e, f).
-The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \i.e. @i < n@ either way.
-So a repeat of this check, the branch and its consequent code (raise error) are all removable.
-
-Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
-Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.
+This check is provided implicitly by the library types @array@ and @vector@, respectively.
+The significant result is the \emph{absence} of runtime checks from the bound-correct translations, (e, f).
+The code optimizer has removed them because, at the point where they would occur (immediately past @.L28@/@.L3@), it knows from the surrounding control flow either @i == 0 && 0 < n@ or @i < n@ (directly), \ie @i < n@ either way.
+So any repeated checks, \ie the branch and its consequent code (raise error), are removable.
+
+% Progressing from the control case, a deliberately bound-incorrect mode is no longer informative.
+% Rather, given a (well-typed) program that does good work on good data, the issue is whether it is (determinably) bound-correct on all data.
+
 
 When dimension sizes get reused, \CFA has an advantage over \CC-vector at getting simply written programs well optimized.
+The example case of \VRef[Figures]{f:ovhd-treat-src} and \VRef[]{f:ovhd-treat-asm} is a simple matrix multiplication over a row-major encoding.
+Simple means coding directly to the intuition of the mathematical definition without trying to optimize for memory layout.
+In the assembly code of \VRef[Figure]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip ahead on zero; loop back for next,'' occurs with three nesting levels.
+Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@ (bottom right).
+Here, \VRef[Figures]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
+Like in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code is working through the the \CFA @array@ library.
+The optimizer removed the library-imposed checks because the data structure @array@-of-@array@ is constrained by its type to be shaped correctly for the intuitive looping.
+In \CC, the same constraint does not apply to @vector@-of-@vector@.
+Because every individual @vector@ carries its own size, two types of bound mistakes are possible.
 
 \begin{figure}
@@ -1107 +1115 @@
 \multicolumn{1}{c}{\CFA} &
 \multicolumn{1}{c}{\CC (nested vector)}
-\\ [1em]
+\\
 \lstinput{20-37}{ar-bchk/treatment.c} &
 \lstinput{20-37}{ar-bchk/treatment.cfa} &
 \lstinput{20-37}{ar-bchk/treatment.cc}
 \end{tabular}
-\caption{Overhead comparison, differentiation case, source.
-}
+\caption{Overhead comparison, differentiation case, source.}
 \label{f:ovhd-treat-src}
 \end{figure}
@@ -1133 +1140 @@
 \end{figure}
 
-The example case of \VRef[Figures]{f:ovhd-treat-src} and \VRef{f:ovhd-treat-asm} is simple matrix multiplication over a row-major encoding.
-Simple means coding directly to the intuition of the mathematical definition, without trying to optimize for memory layout.
-In the assembly code of \VRef[Figures]{f:ovhd-treat-asm}, the looping pattern of \VRef[Figure]{f:ovhd-ctl} (d, e, f), ``Skip aheas on zero; loop back for next,'' occurs with three nesting levels.
-Simultaneously, the dynamic bound-check pattern of \VRef[Figure]{f:ovhd-ctl} (h,~i), ``Get out on invalid,'' occurs, targeting @.L7@, @.L9@ and @.L8@.
-
-Here, \VRef[Figures]{f:ovhd-treat-asm} shows the \CFA solution optimizing into practically the C solution, while the \CC solution shows added runtime bound checks.
-Like in \VRef[Figure]{f:ovhd-ctl} (e), the significance is the \emph{absence} of library-imposed runtime checks, even though the source code is working through the the \CFA @array@ library.
-The optimizer removed the library-imposed checks because the data strructure @array@-of-@array@ is constained by its type to be shaped correctly for the intuitive looping.
-In \CC, the same constraint does not apply to @vector@-of-@vector@.
-Because every individual @vector@ carries its own size, two types of bound mistakes are possible.
-
-Firstly, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
+In detail, the three structures received may not be matrices at all, per the obvious/dense/total interpretation; rather, any one might be ragged-right in its rows.
 The \CFA solution guards against this possibility by encoding the minor length (number of columns) in the major element (row) type.
 In @res@, for example, each of the @M@ rows is @array(float, N)@, guaranteeing @N@ cells within it.
 Or more technically, guaranteeing @N@ as the basis for the imposed bound check \emph{of every row.}
-
-The second type of \CC bound mistake is that its types do not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
+As well, the \CC type does not impose the mathematically familiar constraint of $(M \times P) (P \times N) \rightarrow (M \times N)$.
 Even assuming away the first issue, \eg that in @lhs@, all minor/cell counts agree, the data format allows the @rhs@ major/row count to disagree.
 Or, the possibility that the row count @res.size()@ disagrees with the row count @lhs.size()@ illustrates this bound-mistake type in isolation.
@@ -1155 +1150 @@
 This capability lets \CFA escape the one-to-one correspondence between array instances and symbolic bounds, where this correspondence leaves a \CC-vector programmer stuck with a matrix representation that repeats itself.
 
-It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it does become fast (as C) in losing them.
-The dynamic chekcs were dismissed as unnecessary \emph{because} the program was safe to begin with.
-
-To regain performance, a \CC programmer is left needing to state appropriate assertions or assumptions, to convince the optimizer to dismiss the runtime checks.
-Especially considering that two of them are in the inner-most loop.
-The solution is nontrivial.
-It requires doing the work of the inner-loop checks as a preflight step.
-But this work still takes looping; doing it upfront gives too much separation for the optimizer to see ``has been checked already'' in the deep loop.
+It is important to clarify that the \CFA solution does not become unsafe (like C) in losing its dynamic checks, even though it becomes fast (as C) in losing them.
+The dynamic checks are dismissed as unnecessary \emph{because} the program is safe to begin with.
+To achieve the same performance, a \CC programmer must state appropriate assertions or assumptions, to allow the optimizer to dismiss the runtime checks.
+% Especially considering that two of them are in the inner-most loop.
+The solution requires doing the work of the inner-loop checks as a \emph{preflight step}.
+But this step requires looping and doing it upfront gives too much separation for the optimizer to see ``has been checked already'' in the deep loop.
 So, the programmer must restate the preflight observation within the deep loop, but this time as an unchecked assumption.
 Such assumptions are risky because they introduce further undefined behaviour when misused.
 Only the programmer's discipline remains to ensure this work is done without error.
 
-The \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.
+In summary, the \CFA solution lets a simply stated program have dynamic guards that catch bugs, while letting a simply stated bug-free program run as fast as the unguarded C equivalent.
 
 \begin{comment}
 The ragged-right issue brings with it a source-of-truth difficulty: Where, in the \CC version, is one to find the value of $N$? $M$ can come from either @res@'s or @lhs@'s major/row count, and checking these for equality is straightforward. $P$ can come from @rhs@'s major/row count. But $N$ is only available from columns, \ie minor/cell counts, which are ragged. So any choice of initial source of truth, \eg
 \end{comment}
+
 
 \section{Array Lifecycle}
@@ -1340 +1334 @@
 However, it only needs @float@'s default constructor, as the other operations are never used.
 Current work by the \CFA team aims to improve this situation.
-Therefore, I had to construct a workaround.
-
 My workaround moves from otype (value) to dtype (pointer) with a default-constructor assertion, where dtype does not generate any constructors but the assertion claws back the default otype constructor.
 \begin{cquote}
@@ -1421 +1413 @@
 
 \section{Array Comparison}
+
 
 \subsection{Rust}
@@ -1540 +1533 @@
 \label{JavaCompare}
 
-
 Java arrays are references, so multi-dimension arrays are arrays-of-arrays \see{\VRef{toc:mdimpl}}.
 For each array, Java implicitly storages the array dimension in a descriptor, supporting array length, subscript checking, and allowing dynamically-sized array-parameter declarations.
@@ -1632 +1624 @@
 Because C arrays are difficult and dangerous, the mantra for \CC programmers is to use @std::vector@ in place of the C array.
 While the vector size can grow and shrink dynamically, \vs an unchanging dynamic size with VLAs, the cost of this extra feature is mitigated by preallocating the maximum size (like the VLA) at the declaration.
-So, it costs one upfront dynamic allocation and avoids growing the arry through pushing.
+So, it costs one upfront dynamic allocation and avoids growing the array through pushing.
 \begin{c++}
 vector< vector< int > > m( 5, vector<int>(8) ); // initialize size of 5 x 8 with 6 dynamic allocations
@@ -1652 +1644 @@
 } but does not change the fundamental limit of \lstinline{std::array}, that the length, being a template parameter, must be a static value.
 
-\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bound checking.
-When wrapping a vector, bound checking occurs on regular subscripting; one needn't remember to use @.at@.
+\CC~20's @std::span@ is a view that unifies true arrays, vectors, static sizes and dynamic sizes, under a common API that adds bounds checking.
+When wrapping a vector, bounds checking occurs on regular subscripting, \ie one need not remember to use @.at@.
 When wrapping a locally declared fixed-size array, bound communication is implicit.
 But it has a soundness gap by offering construction from pointer and user-given length.
@@ -1664 +1656 @@
 And furthermore, they do not provide any improvement to the C flexible array member pattern, for making a dynamic amount of storage contiguous with its header, as do \CFA's accordions.
 
+
+\begin{comment}
 \subsection{Levels of Dependently Typed Arrays}
 
@@ -1774 +1768 @@
 
 \subsection{Static Safety in C Extensions}
+\end{comment}
 
 
 \section{Future Work}
 
-\subsection{Array Syntax}
-
-An important goal is to recast @array(...)@ syntax into C-style @[]@.
-The proposal (which is partially implemented) is an alternate dimension and subscript syntax.
-C multi-dimension and subscripting syntax uses multiple brackets.
-\begin{cfa}
-int m@[2][3]@; // dimension
-m@[0][1]@ = 3; // subscript
-\end{cfa}
-Leveraging this syntax, the following (simpler) syntax should be intuitive to C programmers with only a small backwards compatibility issue.
-\begin{cfa}
-int m@[2, 3]@; // dimension
-m@[0, 1]@ = 3; // subscript
-\end{cfa}
-However, the new subscript syntax is not backwards compatible, as @0, 1@ is a comma expression.
-Interestingly, disallowed the comma expression in this context eliminates an unreported error: subscripting a matrix with @m[i, j]@ instead of @m[i][j]@, which selects the @j@th row not the @i, j@ element.
-Hence, a comma expression in this context is rare.
+% \subsection{Array Syntax}
+
+An important goal is to recast \CFA-style @array(...)@ syntax into C-style array syntax.
+The proposal (which is partially implemented) is an alternate dimension and subscript syntax for multi-dimension arrays.
+C multi-dimension and subscripting syntax uses multiple bracketed constants/expressions.
+\begin{cfa}
+int m@[2][3]@; // dimension
+m@[0][1]@ = 3; // subscript
+\end{cfa}
+The alternative \CFA syntax is a comma separated list.
+\begin{cfa}
+int m@[2, 3]@; // dimension
+m@[0, 1]@ = 3; // subscript
+\end{cfa}
+which should be intuitive to C programmers and is used in mathematics $M_{i,j}$ and other programing languages, \eg PL/I, Fortran.
+With respect to the dimension expressions, C only allows an assignment expression.
+\begin{cfa}
+a[i, j];
+test.c:3:16: error: expected ']' before ',' token
+\end{cfa}
+However, there is an ambiguity for a single dimension array, where the syntax for old and new arrays are the same.
+The solution is to use a terminating comma to denote a \CFA-style single-dimension array.
+\begin{cfa}
+int m[2$\Huge\color{red},$]; // single dimension
+\end{cfa}
+This syntactic form is also used for the (rare) singleton tuple @[y@{\Large\color{red},}@]@.
+The extra comma in the dimension is only mildly annoying, and acts as eye-candy differentiating old and new arrays.
+Hence, \CFA can repurpose the comma expression in this context for a list of dimensions.
+The ultimate goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs, and eliminating the need for the terminating comma.
+With respect to the subscript expression, the comma expression is allowed.
+However, a comma expression in this context is rare, and is most commonly a (silent) mistake: subscripting a matrix with @m[i, j]@ instead of @m[i][j]@ selects the @j@th row not the @i, j@ element.
 Finally, it is possible to write @m[(i, j)]@ in the new syntax to achieve the equivalent of the old @m[i, j]@.
-Note, the new subscript syntax can easily be internally lowered to @[-][-]...@ and handled as regular subscripting.
-The only ambiguity with C syntax is for a single dimension array, where the syntax for old and new are the same.
-\begin{cfa}
-int m[2@,@]; // single dimension
-m[0] = 3; // subscript
-\end{cfa}
-The solution for the dimension is to use a terminating comma to denote a new single-dimension array.
-This syntactic form is also used for the (rare) singleton tuple @[y@{\color{red},}@]@.
-The extra comma in the dimension is only mildly annoying, and acts as eye-candy differentiating old and new arrays.
-The subscript operator is not an issue as overloading selects the correct single-dimension operation for old/new array types.
-The ultimately goal is to replace all C arrays with \CFA arrays, establishing a higher level of safety in C programs, and eliminating the need for the terminating comma.
-
-
+Note, there is no ambiguity for subscripting a single dimensional array, as the subscript operator selects the correct form from the array type.
+Currently, @array@ supports the old amd new subscript syntax \see{\VRef{f:ovhd-treat-src}}, including combinations of new and old, @arr[1, 2][3]@.
+Finally, the new syntax is trivially lowered to C-style dimension and subscripting.
+
+
+\begin{comment}
 \subsection{Range Slicing}
 
@@ -1817 +1819 @@
 
 
-\begin{comment}
 \section{\texorpdfstring{\CFA}{Cforall}}
 
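The array.tex hunks above argue that @vector@-of-@vector@ cannot rule out ragged rows or mismatched operand shapes through its type. A minimal hypothetical \CC sketch of the two bound-mistake types (an editor's illustration with made-up sizes, not part of the thesis programs):

    #include <vector>
    using std::vector;

    int main() {
        // Mistake type 1: nothing forces the rows to stay the same length,
        // so a "matrix" may silently become ragged-right.
        vector<vector<float>> lhs( 3, vector<float>( 4 ) );    // intended 3 x 4
        lhs[1].resize( 2 );                                     // row 1 now has only 2 cells

        // Mistake type 2: nothing ties operand shapes together, so
        // (M x P)(P x N) -> (M x N) is not enforced by the types.
        vector<vector<float>> rhs( 5, vector<float>( 6 ) );     // P disagrees: 4 vs 5
        vector<vector<float>> res( 2, vector<float>( 6 ) );     // M disagrees: 3 vs 2

        // Everything above type-checks; only per-access .at checks (or silent
        // undefined behaviour with operator[]) stand between a multiply and the mistake.
        (void)lhs; (void)rhs; (void)res;
    }

This per-row, per-access exposure is exactly what the \CFA @array(float, N)@ row type rules out by construction.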
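The comparison hunks above also end on @std::span@'s soundness gap: a span adopts whatever length the caller supplies. A small sketch assuming a C++20 compiler (the function @sum@ and the length 100 are made-up illustration values):

    #include <cstddef>
    #include <span>
    #include <vector>

    float sum( std::span<float> s ) {
        float total = 0;
        for ( std::size_t i = 0; i < s.size(); i += 1 ) total += s[i];
        return total;
    }

    int main() {
        std::vector<float> v( 10, 1.0f );
        sum( std::span<float>( v ) );              // length taken from the vector itself
        sum( std::span<float>( v.data(), 100 ) );  // compiles: the user-given length is trusted,
                                                   // so any checking is against the wrong bound
                                                   // and the loop reads past the allocation
    }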
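The future-work hunks revolve around @m[i, j]@ being a comma expression in C, which silently selects a row rather than an element. A tiny sketch of that pitfall, written with explicit parentheses so the same expression is valid in both C and every \CC revision (illustration only):

    #include <cstdio>

    int main() {
        int m[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
        int i = 0, j = 1;

        std::printf( "%d\n", m[i][j] );    // intended element access: prints 2

        // The pitfall: a comma expression evaluates i, discards it, and subscripts
        // with j alone, selecting row j rather than element (i, j).
        int * row = m[(i, j)];             // same as m[j], i.e. row 1
        std::printf( "%d\n", row[0] );     // prints 4
    }

In C, the plain form @m[i, j]@ parses the same way, which is why the proposed \CFA syntax can repurpose the comma here with little practical loss.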
doc/theses/mike_brooks_MMath/programs/ar-bchk/control.c
--- eb0d9b7
+++ 80e83b6c
@@ -22 +22 @@
 size_t n, float x[] ) {
 double sum = 0;
-for ( size_t i = 0;
+for ( size_t i = 0;
 i < BND( n ); i++ )
-sum += x[i];
+sum += @x[i]@;
 return sum;
 }
@@ -43 +43 @@
 int main() {
 float x[EXPSZ];
-for ( size_t i = 0; i < EXPSZ; i++ ) x[i] = 0.1 * (i + 1);
-for ( size_t i = 0; i < EXPSZ; i++ ) printf("elm %zd %g\n", i, x[i]);
+for ( size_t i = 0; i < EXPSZ; i++ ) x[i] = 0.1 * (i + 1);
+for ( size_t i = 0; i < EXPSZ; i++ ) printf("elm %zd %g\n", i, x[i]);
 double sum_ret = sum( EXPSZ, x );
 printf( "sum %2g\n", sum_ret );
doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cc
--- eb0d9b7
+++ 80e83b6c
@@ -22 +22 @@
 vector<float> & x ) {
 double sum = 0;
-for ( size_t i = 0;
+for ( size_t i = 0;
 i < BND( x.size() ); i++ )
-sum += x.at(i);
+sum += @x.at(i)@;
 return sum;
 }
@@ -43 +43 @@
 int main() {
 vector<float> x( EXPSZ );
-for ( size_t i = 0; i < EXPSZ; i++ ) x.at(i) = 0.1 * (i + 1);
-for ( size_t i = 0; i < EXPSZ; i++ ) cout << "elm " << i << " " << x[i] << endl;
+for ( size_t i = 0; i < EXPSZ; i++ ) x.at(i) = 0.1 * (i + 1);
+for ( size_t i = 0; i < EXPSZ; i++ ) cout << "elm " << i << " " << x[i] << endl;
 double sum_ret = sum( x );
 cout << "sum " << sum_ret << endl;
doc/theses/mike_brooks_MMath/programs/ar-bchk/control.cfa
--- eb0d9b7
+++ 80e83b6c
@@ -22 +22 @@
 array(float, N) & x ) {
 double sum = 0;
-for ( i;
+for ( i;
 BND( N ) )
-sum += x[i];
+sum += @x[i]@;
 return sum;
 }
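The three control programs above share a @BND@ macro whose definition lies outside these hunks. The following is a plausible reconstruction from the array.tex prose (an assumption, not the thesis source), sketched in the \CC flavour:

    #include <cstddef>
    #include <vector>

    // Assumed BND definition: correct-bound mode passes the real bound through;
    // incorrect-bound mode hardcodes 100 iterations, over-running shorter arrays.
    #ifdef BOUND_MISTAKE
    #define BND( n ) 100
    #else
    #define BND( n ) ( n )
    #endif

    double sum( std::vector<float> & x ) {
        double total = 0;
        for ( std::size_t i = 0; i < BND( x.size() ); i += 1 )
            total += x.at( i );   // .at's i >= size() check is dominated by the loop condition
                                  // in correct-bound mode, so the optimizer can delete it;
                                  // in BOUND_MISTAKE mode it remains and throws std::out_of_range
        return total;
    }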
doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.c
--- eb0d9b7
+++ 80e83b6c
@@ -24 +24 @@
 float lhs[m][p],
 float rhs[p][n] ) {
-for ( size_t i = 0;
+for ( size_t i = 0;
 i < m; i++ )
-for ( size_t j = 0;
+for ( size_t j = 0;
 j < n; j++ ) {
 res[i][j] = 0.0;
-for ( size_t k = 0;
+for ( size_t k = 0;
 k < p; k++ )
-res[i][j] +=
-lhs[i][k] *
-rhs[k][j];
+@res[i][j] +=@
+@lhs[i][k] *@
+@rhs[k][j];@
 }
 }
@@ -49 +49 @@
 
 static void zero( size_t r, size_t c, float mat[r][c] ) {
-for ( size_t i = 0; i < r; i++ )
-for ( size_t j = 0; j < c; j++ )
+for ( size_t i = 0; i < r; i++ )
+for ( size_t j = 0; j < c; j++ )
 mat[i][j] = 0.0;
 }
@@ -56 +56 @@
 
 static void fill( size_t r, size_t c, float mat[r][c] ) {
-for ( size_t i = 0; i < r; i++ )
-for ( size_t j = 0; j < c; j++ )
+for ( size_t i = 0; i < r; i++ )
+for ( size_t j = 0; j < c; j++ )
 mat[i][j] = 1.0 * (i + 1) + 0.1 * (j+1);
 }
@@ -63 +63 @@
 
 static void print( size_t r, size_t c, float mat[r][c] ) {
-for ( size_t i = 0; i < r; i++ ) {
-for ( size_t j = 0; j < c; j++ )
+for ( size_t i = 0; i < r; i++ ) {
+for ( size_t j = 0; j < c; j++ )
 printf("%2g ", mat[i][j]);
 printf("\n");
doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cc
--- eb0d9b7
+++ 80e83b6c
@@ -24 +24 @@
 vector<vector<float>> & lhs,
 vector<vector<float>> & rhs ) {
-for ( size_t i = 0;
+for ( size_t i = 0;
 i < res.size(); i++ )
-for ( size_t j = 0;
+for ( size_t j = 0;
 j < res.at(i).size(); j++ ) {
 res.at(i).at(j) = 0.0;
-for ( size_t k = 0;
+for ( size_t k = 0;
 k < rhs.size(); k++ )
-res.at(i).at(j) +=
-lhs.at(i).at(k) *
-rhs.at(k).at(j);
+@res.at(i).at(j) +=@
+@lhs.at(i).at(k) *@
+@rhs.at(k).at(j);@
 }
 }
@@ -49 +49 @@
 
 static void zero( vector<vector<float>> & mat ) {
-for ( size_t i = 0; i < mat.size(); i++ )
-for ( size_t j = 0; j < mat.at(i).size(); j++ )
+for ( size_t i = 0; i < mat.size(); i++ )
+for ( size_t j = 0; j < mat.at(i).size(); j++ )
 mat.at(i).at(j) = 0.0;
 }
@@ -56 +56 @@
 
 static void fill( vector<vector<float>> & mat ) {
-for ( size_t i = 0; i < mat.size(); i++ )
-for ( size_t j = 0; j < mat.at(i).size(); j++ )
+for ( size_t i = 0; i < mat.size(); i++ )
+for ( size_t j = 0; j < mat.at(i).size(); j++ )
 mat.at(i).at(j) = 1.0 * (i + 1) + 0.1 * (j+1);
 }
@@ -63 +63 @@
 
 static void print( vector<vector<float>> & mat ) {
-for ( size_t i = 0; i < mat.size(); i++ ) {
-for ( size_t j = 0; j < mat.at(i).size(); j++ )
+for ( size_t i = 0; i < mat.size(); i++ ) {
+for ( size_t j = 0; j < mat.at(i).size(); j++ )
 cout << mat.at(i).at(j) << " ";
 cout << endl;
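The array.tex hunks state that, to make the @vector@-of-@vector@ multiply above optimize like the C and \CFA versions, a \CC programmer needs a preflight shape check plus an unchecked assumption restated inside the deep loop. A hedged sketch of that idea (an editor's illustration, not thesis code; @__builtin_unreachable()@ is the GCC/Clang assumption idiom, and whether the optimizer then drops the @.at@ checks is compiler-dependent):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>
    using std::size_t; using std::vector;

    static void mult( vector<vector<float>> & res,
                      vector<vector<float>> & lhs,
                      vector<vector<float>> & rhs ) {
        if ( res.size() == 0 ) return;
        const size_t M = res.size(), N = res.at(0).size(), P = rhs.size();

        // Preflight: verify once, outside the hot loops, that the shapes really are
        // (M x P)(P x N) -> (M x N) and that no row is ragged.
        if ( lhs.size() != M ) throw std::invalid_argument( "row-count mismatch" );
        for ( size_t i = 0; i < M; i += 1 )
            if ( res.at(i).size() != N || lhs.at(i).size() != P )
                throw std::invalid_argument( "ragged row" );
        for ( size_t k = 0; k < P; k += 1 )
            if ( rhs.at(k).size() != N ) throw std::invalid_argument( "ragged row" );

        for ( size_t i = 0; i < M; i += 1 )
            for ( size_t j = 0; j < N; j += 1 ) {
                res.at(i).at(j) = 0.0;
                for ( size_t k = 0; k < P; k += 1 ) {
                    // Restate the preflight facts as an unchecked assumption, because the
                    // preflight loop is too far away for the optimizer to reuse on its own.
                    // Misstating them is undefined behaviour; only discipline keeps it honest.
                    if ( res.at(i).size() != N || lhs.at(i).size() != P || rhs.at(k).size() != N )
                        __builtin_unreachable();
                    res.at(i).at(j) += lhs.at(i).at(k) * rhs.at(k).at(j);
                }
            }
    }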
doc/theses/mike_brooks_MMath/programs/ar-bchk/treatment.cfa
--- eb0d9b7
+++ 80e83b6c
@@ -24 +24 @@
 array(float, M, P) & lhs,
 array(float, P, N) & rhs ) {
-for ( i; M )
+for ( i; M )
 
-for ( j; N ) {
+for ( j; N ) {
 
 res[i, j] = 0.0;
-for ( k; P )
+for ( k; P )
 
-res[i, j] +=
-lhs[i, k] *
-rhs[k, j];
+@res[i, j] +=@
+@lhs[i, k] *@
+@rhs[k, j];@
 }
 }