Changeset 060b12d for doc


Timestamp: Feb 15, 2019, 9:00:06 PM
Author: Aaron Moss <a3moss@…>
Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children: 44eeddd, a6f991f
Parents: 4c41b17
Message: Add scatter plots describing instance difficulty
Location: doc/theses/aaron_moss_PhD/phd
Files: 14 added, 2 edited

  • doc/theses/aaron_moss_PhD/phd/Makefile

    r4c41b17 r060b12d

      tests-completed \
      per-prob-histo \
    + per-prob-depth \
      }

    …

      	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob.gp

    + per-prob-depth.tex : per-prob-scatter.gp ${BUILD}
    + 	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob-scatter.gp
    +
      ${BUILD}:
      	mkdir -p ${BUILD}
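The new `per-prob-depth.tex` rule invokes `per-prob-scatter.gp`, one of the files added in this changeset but not shown in the diff. A minimal sketch of what such a log-log scatter script might contain follows; the output terminal, data-file name, and column layout are all assumptions, not the actual script:

```gnuplot
# Hypothetical sketch of per-prob-scatter.gp; the real script added in
# this changeset is not shown in the diff. Assumes BUILD is passed on
# the command line, as in the Makefile rule:
#   gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob-scatter.gp
set terminal epslatex size 4in,2.5in      # LaTeX output suitable for \input{}
set output BUILD.'per-prob-depth.tex'

set logscale xy                           # log scales on both axes, per the captions
set xlabel 'maximum nesting depth of expression'
set ylabel 'resolution time'

# Assumed two-column data file: nesting depth, per-instance resolution time
plot BUILD.'per-prob-depth.csv' using 1:2 with points notitle
```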
  • doc/theses/aaron_moss_PhD/phd/experiments.tex

    r4c41b17 r060b12d

      Since the top centile of expression resolution instances requires approximately two-thirds of the resolver's time, optimizing the resolver for specific hard problem instances has proven to be an effective technique for reducing overall runtime.
    - \TODO{Discuss metrics of difficulty.}
    -
    - % TODO: look at overloads
    - % TODO: histogram of hard cases
    + The data below indicate that the number of assertions necessary to resolve an expression has the greatest effect on runtime, as seen in Figure~\ref{per-prob-assns-fig}.
    + However, since the number of assertions required is only known once resolution is finished, the most promising pre-resolution metric of difficulty is the nesting depth of the expression; as seen in Figure~\ref{per-prob-depth-fig}, expressions of depth $> 10$ in this dataset are uniformly difficult.
    + Figure~\ref{per-prob-subs-fig} presents a similar pattern for the number of subexpressions, though given that the expensive tail of problem instances occurs at subexpression counts approximately twice the corresponding depth values, it is reasonable to believe that the difficult expressions in question are deeply nested invocations of binary functions rather than wider but shallowly nested expressions.
    +
    + % TODO: statistics to tease out difficulty? Is ANOVA the right keyword?
    + % TODO: maybe metrics to sum number of poly-overloads invoked
    +
    + \begin{figure}
    + \centering
    + \input{per-prob-assns}
    + \caption[Top-level expression resolution time by number of assertions resolved.]{Top-level expression resolution time by number of assertions resolved. Note log scales on both axes.} \label{per-prob-assns-fig}
    + \end{figure}
    +
    + \begin{figure}
    + \centering
    + \input{per-prob-depth}
    + \caption[Top-level expression resolution time by maximum nesting depth of expression.]{Top-level expression resolution time by maximum nesting depth of expression. Note log scales on both axes.} \label{per-prob-depth-fig}
    + \end{figure}
    +
    + \begin{figure}
    + \centering
    + \input{per-prob-subs}
    + \caption[Top-level expression resolution time by number of subexpressions.]{Top-level expression resolution time by number of subexpressions. Note log scales on both axes.} \label{per-prob-subs-fig}
    + \end{figure}
    +

      \section{\CFA{} Results}
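The TODO about statistics to tease out difficulty could begin with gnuplot's built-in `stats` command, which for a two-column dataset reports the correlation coefficient between the columns. A sketch follows; the data-file name and column layout are assumptions, and it presumes strictly positive values so the logarithms are defined:

```gnuplot
# Hypothetical check of the claim that assertion count drives runtime:
# for two columns, gnuplot's `stats` reports their correlation coefficient.
# The data file name and column layout here are assumptions.
set datafile separator ','

# Column 1: assertions resolved; column 2: resolution time (both assumed > 0).
# Correlate in log-log space, matching the log scales of the figures.
stats 'per-prob-assns.csv' using (log($1)):(log($2)) name 'A'
print sprintf('log-log correlation (assertions vs. time): %.3f', A_correlation)
```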