- Timestamp: Feb 15, 2019, 9:00:06 PM (6 years ago)
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 44eeddd, a6f991f
- Parents: 4c41b17
- Location: doc/theses/aaron_moss_PhD/phd
- Files: 14 added, 2 edited
Legend: in the diffs below, lines carrying both an r4c41b17 and an r060b12d line number are unmodified; lines with only an r060b12d number were added; lines with only an r4c41b17 number were removed.
doc/theses/aaron_moss_PhD/phd/Makefile
r4c41b17    r060b12d
    40          40          tests-completed \
    41          41          per-prob-histo \
                42          per-prob-depth \
    42          43      }
    43          44
     …           …
    73          74          gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob.gp
    74          75
                76      per-prob-depth.tex : per-prob-scatter.gp ${BUILD}
                77          gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob-scatter.gp
                78
    75          79      ${BUILD}:
    76          80          mkdir -p ${BUILD}
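The added rule follows the same pattern as the existing per-prob.gp rule: the build directory is handed to gnuplot as a BUILD variable and the script writes a LaTeX-ready plot fragment into it. As a rough, hypothetical sketch only (the actual per-prob-scatter.gp is not part of this changeset view), a script driven this way could look roughly like:

    # Illustrative sketch only -- not the actual per-prob-scatter.gp.
    # BUILD is supplied on the command line: gnuplot -e BUILD="'<build dir>/'"
    set terminal epslatex                         # terminal choice is an assumption
    set output BUILD.'per-prob-depth.tex'         # matches the Makefile target name
    set logscale xy                               # the figure captions note log scales on both axes
    set xlabel 'Maximum nesting depth'
    set ylabel 'Resolution time'                  # units are not recoverable from the changeset
    plot BUILD.'per-prob-depth.dat' using 1:2 with points notitle   # data file and columns are hypothetical
    unset output

Passing the directory as a gnuplot variable keeps the generated .tex fragments out of the source tree, which is consistent with the separate ${BUILD}: mkdir -p ${BUILD} rule.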
doc/theses/aaron_moss_PhD/phd/experiments.tex
r4c41b17    r060b12d
   173         173
   174         174     Since the top centile of expression resolution instances requires approximately two-thirds of the resolver's time, optimizing the resolver for specific hard problem instances has proven to be an effective technique for reducing overall runtime.
   175                 \TODO{Discuss metrics of difficulty.}
   176
   177                 % TODO: look at overloads
   178                 % TODO: histogram of hard cases
               175     The data below indicates that number of assertions necessary to resolve has the greatest effect on runtime, as seen in
               176     Figure~\ref{per-prob-assns-fig}.
               177     However, since the number of assertions required is only known once resolution is finished, the most-promising pre-resolution metric of difficulty is the nesting depth of the expression; as seen in Figure~\ref{per-prob-depth-fig}, expressions of depth $> 10$ in this dataset are uniformly difficult.
               178     Figure~\ref{per-prob-subs-fig} presents a similar pattern for number of subexpressions, though given that the expensive tail of problem instances occurs at approximately twice the depth values, it is reasonable to believe that the difficult expressions in question are deeply-nested invocations of binary functions rather than wider but shallowly-nested expressions.
               179
               180     % TODO statistics to tease out difficulty? Is ANOVA the right keyword?
               181     % TODO maybe metrics to sum number of poly-overloads invoked
               182
               183     \begin{figure}
               184     \centering
               185     \input{per-prob-assns}
               186     \caption[Top-level expression resolution time by number of assertions resolved.]{Top-level expression resolution time by number of assertions resolved. Note log scales on both axes.} \label{per-prob-assns-fig}
               187     \end{figure}
               188
               189     \begin{figure}
               190     \centering
               191     \input{per-prob-depth}
               192     \caption[Top-level expression resolution time by maximum nesting depth of expression.]{Top-level expression resolution time by maximum nesting depth of expression. Note log scales on both axes.} \label{per-prob-depth-fig}
               193     \end{figure}
               194
               195     \begin{figure}
               196     \centering
               197     \input{per-prob-subs}
               198     \caption[Top-level expression resolution time by number of subexpressions.]{Top-level expression resolution time by number of subexpressions. Note log scales on both axes.} \label{per-prob-subs-fig}
               199     \end{figure}
               200
   179         201
   180         202     \section{\CFA{} Results}
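One reasoning step in the added paragraph is worth spelling out (an illustrative count added here, not text from the commit): why a hard tail whose subexpression counts sit at roughly twice its depth values points to deeply-nested binary calls. Counting subexpressions for the two extreme shapes of an expression built from binary invocations of depth $d$ gives:

% Illustrative count, not part of the changeset.
% Chain of nested binary calls, e.g. f(f(f(x_0,x_1),x_2),x_3), of depth d:
% each level contributes one call node and one new leaf argument.
\[ n_{\text{chain}}(d) = 2d + 1 \approx 2d \]
% Fully balanced tree of binary calls of depth d:
\[ n_{\text{balanced}}(d) = \underbrace{2^{d}-1}_{\text{calls}} + \underbrace{2^{d}}_{\text{leaves}} = 2^{d+1} - 1 \]

Subexpression counts near $2d$ are thus exactly what a chain of binary invocations produces, while balanced, shallowly-nested expressions would show counts growing exponentially in depth; this is the shape argument behind the paragraph's conclusion.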