Changeset 72b20c9 for doc/theses
- Timestamp: Feb 13, 2019, 3:16:08 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 96bf5781
- Parents: 9e43aff
- Location: doc/theses/aaron_moss_PhD/phd
- Files: 3 added, 4 edited
doc/theses/aaron_moss_PhD/phd/Makefile
r9e43aff → r72b20c9

  GRAPHS = ${addsuffix .tex, \
  	generic-timing \
+ 	tests-completed \
  }
  …
  	${LATEX} ${BASE}

- ${GRAPHS}: generic-timing.gp generic-timing.dat ${BUILD}
+ generic-timing.tex : generic-timing.gp generic-timing.dat ${BUILD}
  	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/generic-timing.gp
+
+ tests-completed.tex : algo-summary.gp algo-summary.dat bu-summary.dat ${BUILD}
+ 	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/algo-summary.gp

  ${BUILD}:
doc/theses/aaron_moss_PhD/phd/evaluation/generic-timing.dat
r9e43aff → r72b20c9

  "clear\npair" 2840 773 748 3511
  "pop\npair" 3025 5414 813 23867
-
doc/theses/aaron_moss_PhD/phd/experiments.tex
r9e43aff → r72b20c9

  The \textsc{bu-dca-bas} and \textsc{bu-dca-per} variants were able to run all 131 test inputs to completion under this restriction, with maximum memory usage of 70~MB and 78~MB, respectively, which validates its selection as an error threshold.
  However, this threshold did eliminate a significant number of algorithm-test variants, with the worst-performing variant, \textsc{td-imm-inc}, only completing 62 test inputs within the memory bound.
- Full results for tests completed by algorithm variant are presented in Figure~\TODO{}.
+ Full results for tests completed by algorithm variant are presented in Figure~\ref{tests-completed-fig}.
+
+ \begin{figure}
+ \centering
+ \input{tests-completed}
+ \caption[Tests completed for each algorithmic variant]{Number of tests completed for each algorithmic variant} \label{tests-completed-fig}
+ \end{figure}
+
+ As can be seen from these results, traversal direction is clearly the dominant variable in memory usage, with the \textsc{bu-*} variants performing better than the \textsc{co-*} variants, which in turn outperform the \textsc{td-*} variants.
+ It can also be seen that the incremental-inheritance (\textsc{inc}) type environment consistently underperforms the other two environment data structures tested.
+
+ To provide a more holistic view of performance, I have considered the results from the 56 test inputs which all algorithms are able to complete within the memory bound.
+ Limiting consideration to these inputs provides an apples-to-apples comparison between algorithms, as the excluded inputs are harder instances which take more time and memory for the algorithms that are able to solve them.
+ Figures~\ref{avg-peak-mem-fig} and~\ref{avg-runtime-fig} show the mean peak memory and runtime, respectively, of each algorithm over the inputs in this data set.
+ These averages are not meaningful in isolation, but they do enable an overall comparison of the relative performance of the different variants.
+ Selecting only these 56 ``easy'' test inputs does bias the average values downward, but has little effect on the relative trends; similar trends can be seen in the graphs of the \textsc{bu-*} algorithms over the 124 (of 131) test inputs which they all complete, omitted to save space.
+
+ \begin{figure}
+ \centering
+ \input{avg-peak-mem}
+ \caption[Average peak memory for each algorithmic variant]{Average peak resident set size for each algorithmic variant over the 56 test inputs all variants complete.} \label{avg-peak-mem-fig}
+ \end{figure}
+
+ \begin{figure}
+ \centering
+ \input{avg-runtime}
+ \caption[Average runtime for each algorithmic variant]{Average runtime for each algorithmic variant over the 56 test inputs all variants complete.} \label{avg-runtime-fig}
+ \end{figure}
+
+ % \begin{figure}
+ % \centering
+ % \input{bu-peak-mem}
+ % \caption[Average peak memory for each \textsc{bu-*} variant]{Average peak resident set size for each \textsc{bu-*} variant over the 124 test inputs all \textsc{bu-*} variants complete.} \label{bu-peak-mem-fig}
+ % \end{figure}
+
+ % \begin{figure}
+ % \centering
+ % \input{bu-runtime}
+ % \caption[Average runtime for each \textsc{bu-*} variant]{Average runtime for each \textsc{bu-*} variant over the 124 test inputs all \textsc{bu-*} variants complete.} \label{bu-runtime-fig}
+ % \end{figure}
+
+ It can be seen from these results that the top-down immediate-assertion-satisfaction variants (\textsc{td-imm-*}) perform particularly poorly, as they check a significant number of assertions without first filtering to determine whether the arguments can be made to fit.
+ It is also clear that the bottom-up (\textsc{bu-*}) traversal order is better than both the top-down and the Bilson-style bottom-up-combined orders.
+
+ With regard to assertion satisfaction, immediate (\textsc{*-imm-*}) satisfaction is an inferior solution, though there is little performance difference between deferred (\textsc{*-def-*}) and deferred-cached (\textsc{*-dca-*}) satisfaction on instances which both can complete.
+ Since the \textsc{dca} algorithm can solve some particularly hard instances which \textsc{def} cannot, it is the recommended approach.
+
+ The \textsc{inc} type environment also often uses upwards of double the memory required by the other variants, in addition to being consistently slower on these easy tests; aside from \textsc{bu-imm-bas} performing worse than \textsc{bu-imm-inc} on average when larger tests are considered, these results hold across the other variants.
+ The persistent union-find (\textsc{per}) type environment, by contrast, generally performs better than the basic (\textsc{bas}) environment, with similar peak memory usage and an average speedup factor of nearly 2, though the \textsc{per} environment's requirements for automatic garbage collection and a shared history for combination make retrofitting it into older code difficult.
  
  \section{Instance Difficulty}
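The added text above compares immediate, deferred, and deferred-cached (\textsc{dca}) assertion satisfaction only by their measured results. As a rough illustration of the idea behind the \textsc{dca} variants, the C++ sketch below defers assertion checks until a candidate interpretation survives resolution and memoizes the outcome of each distinct assertion. All names here (Assertion, Candidate, resolveAssertion, satisfyDeferred) are hypothetical and are not taken from the cfa-cc resolver.

// Hypothetical sketch of deferred assertion satisfaction with caching.
// Illustrates the general strategy only, not the thesis implementation.
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

struct Assertion {
	std::string key;                  // canonical form of the assertion, used as the cache key
};

struct Candidate {
	std::vector<Assertion> deferred;  // assertions collected during resolution but not yet checked
	int cost = 0;                     // accumulated cost of chosen satisfiers
};

// Stand-in for the (expensive, possibly recursive) search for a declaration
// satisfying a single assertion; returns the cost of the match, if any.
std::optional<int> resolveAssertion(const Assertion&) { return 0; }

// Deferred-cached strategy: assertions are only checked for candidates that
// survive to the end of expression resolution, and each distinct assertion is
// resolved at most once thanks to the cache.
bool satisfyDeferred(Candidate& c,
                     std::unordered_map<std::string, std::optional<int>>& cache) {
	for (const Assertion& a : c.deferred) {
		auto it = cache.find(a.key);
		if (it == cache.end()) {
			it = cache.emplace(a.key, resolveAssertion(a)).first;  // cache miss: do the work once
		}
		if (!it->second) return false;  // assertion cannot be satisfied; reject the candidate
		c.cost += *it->second;          // accumulate the cost of the cached satisfier
	}
	return true;
}

An immediate (\textsc{imm}) strategy would instead call resolveAssertion as soon as each candidate is generated, paying the full cost even for candidates that are later discarded, which matches the poor showing of the \textsc{td-imm-*} variants noted above.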
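The closing paragraph credits the \textsc{per} environment's advantage to a union-find structure whose history can be shared between versions. As a loose, hypothetical sketch of that idea (not the thesis's actual persistent data structure, which also relies on automatic garbage collection), the following C++ union-find records each union on a trail so that a later version can be rolled back to a shared ancestor instead of copying the whole environment.

// Hypothetical union-find with an undo trail, illustrating cheap branching by
// rolling back to a shared ancestor state. Illustrative only.
#include <cstddef>
#include <utility>
#include <vector>

class UndoableUnionFind {
	std::vector<std::size_t> parent;
	std::vector<std::size_t> rank_;
	// history of (absorbed root, previous rank of the surviving root)
	std::vector<std::pair<std::size_t, std::size_t>> trail;
public:
	explicit UndoableUnionFind(std::size_t n) : parent(n), rank_(n, 0) {
		for (std::size_t i = 0; i < n; ++i) parent[i] = i;
	}
	// No path compression, so every union is reversible by one trail entry.
	std::size_t find(std::size_t x) const {
		while (parent[x] != x) x = parent[x];
		return x;
	}
	std::size_t mark() const { return trail.size(); }  // snapshot point
	void unite(std::size_t a, std::size_t b) {
		a = find(a); b = find(b);
		if (a == b) return;
		if (rank_[a] < rank_[b]) std::swap(a, b);      // a survives as the root
		trail.emplace_back(b, rank_[a]);
		parent[b] = a;
		if (rank_[a] == rank_[b]) ++rank_[a];
	}
	// Undo all unions made since the given snapshot (LIFO order).
	void rollback(std::size_t snapshot) {
		while (trail.size() > snapshot) {
			auto [child, oldRank] = trail.back();
			trail.pop_back();
			std::size_t root = parent[child];
			parent[child] = child;                     // detach the absorbed root
			rank_[root] = oldRank;                     // restore the surviving root's rank
		}
	}
};

A resolver could call mark() before speculatively combining environments for one candidate and rollback() if that candidate is discarded, which is the kind of shared-history combination the paragraph alludes to when it notes the difficulty of retrofitting \textsc{per} into older code.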
doc/theses/aaron_moss_PhD/phd/generic-types.tex
r9e43aff → r72b20c9

  \centering
  \input{generic-timing}
- \caption{Benchmark timing results (smaller is better)} \label{generic-eval-fig}
+ \caption[Benchmark timing results]{Benchmark timing results (smaller is better)} \label{generic-eval-fig}
  \end{figure}