# Changeset 83a09648 for doc

Ignore:
Timestamp:
Feb 19, 2019, 8:01:44 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
7e9fa47
Parents:
049d9a5
Message:

thesis: Add CFA results to experiments section

Location:
doc/theses/aaron_moss_PhD/phd
Files:
3 edited

• ## doc/theses/aaron_moss_PhD/phd/Makefile

 r049d9a5
	per-prob-histo \
	per-prob-depth \
	cfa-time \
}
	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob-scatter.gp

cfa-time.tex : cfa-plots.gp cfa-time.tsv cfa-mem.tsv ${BUILD}
	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/cfa-plots.gp

${BUILD}:
	mkdir -p ${BUILD}
• ## doc/theses/aaron_moss_PhD/phd/experiments.tex

 r049d9a5
\TODO{test performance; shouldn't be too hard to change \texttt{resolveAssertions} to use unification}

\section{Prototype Experiments}
\label{proto-exp-sec}

The primary performance experiments for this thesis were conducted using the resolver prototype on problem instances generated from actual \CFA{} code using the method described in Section~\ref{rp-features-sec}.

\section{\CFA{} Results}

I have integrated most of the algorithmic techniques discussed in this chapter into \CFACC{}. This integration took place over a period of months while \CFACC{} was under active development on a number of other fronts, so it is not possible to completely isolate the effects of the algorithmic changes, but I have generated some data. To generate this data, representative commits from the \texttt{git} history of the project were checked out and compiled, then run on the same machine used for the resolver-prototype experiments discussed in Section~\ref{proto-exp-sec}. To negate the effects of changes to the \CFA{} standard library on the timing results, 55 test files from the test suite of the oldest \CFA{} variant were compiled with the \texttt{-E} flag to inline their library dependencies, and these inlined files were used to test the remaining \CFACC{} versions.

I performed two rounds of modification to \CFACC{}. The first round moved from Bilson's original combined-bottom-up algorithm to an un-combined bottom-up algorithm, denoted \textsc{cfa-co} and \textsc{cfa-bu}, respectively; a top-down algorithm was not attempted in \CFACC{} due to its poor performance in the prototype. The second round of modifications addressed assertion satisfaction, taking Bilson's original \textsc{cfa-imm} algorithm and iteratively modifying it, first to use the deferred approach (\textsc{cfa-def}), then to cache those results in the \textsc{cfa-dca} algorithm.
The new environment data structures discussed in Section~\ref{proto-exp-sec} have not been successfully merged into \CFACC{} due to their dependencies on the garbage-collection framework in the prototype; I spent several months modifying \CFACC{} to use similar garbage collection, but because \CFACC{} was not designed for such memory management, the performance of the modified compiler was non-viable. It is possible that the persistent union-find environment could be modified to use a reference-counted pointer internally without changing the entire memory-management framework of \CFACC{}, but such an attempt is left to future work.

As can be seen in Figures~\ref{cfa-time-fig} and~\ref{cfa-mem-fig}, which show the time and peak-memory results for these five versions of \CFACC{} on the same test suite, assertion resolution dominates total resolution cost, with the \textsc{cfa-def} and \textsc{cfa-dca} variants running consistently faster than the others on the more expensive test cases. The results from \CFACC{} do not exactly mirror those from the prototype; I conjecture this is mostly due to the different memory-management schemes and the sorts of data required to run type-unification and assertion-satisfaction calculations, as \CFACC{} performance has proven particularly sensitive to the amount of heap allocation performed. This data also shows a noticeable regression in compiler performance in the eleven months between \textsc{cfa-bu} and \textsc{cfa-imm}; this regression is not due to expression resolution, as no integration work happened in this time, but I am unable to ascertain its actual cause. It should also be noted with regard to the peak-memory results in Figure~\ref{cfa-mem-fig} that peak memory usage does not always occur during the resolution phase of the compiler.

\begin{figure}
\centering
\input{cfa-time}
\caption[\CFACC{} runtime against \textsc{cfa-co} baseline.]{\CFACC{} runtime against \textsc{cfa-co} baseline. Note log scales on both axes.}
\label{cfa-time-fig}
\end{figure}

\begin{figure}
\centering
\input{cfa-mem}
\caption[\CFACC{} peak memory usage against \textsc{cfa-co} baseline runtime.]{\CFACC{} peak memory usage against \textsc{cfa-co} baseline runtime. Note log scales on both axes.}
\label{cfa-mem-fig}
\end{figure}

% use Jenkins daily build logs to rebuild speedup graph with more data
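The runtime figure plots each \CFACC{} variant against the \textsc{cfa-co} baseline. A minimal sketch of that normalization step, assuming hypothetical per-variant timings (the numbers below are illustrative placeholders, not measured results):

```python
# Hypothetical sketch: normalize per-variant runtimes against the
# cfa-co baseline, as in the runtime figure. All timing values are
# illustrative placeholders, NOT measured results from the thesis.

baseline = "cfa-co"

# seconds to compile one (hypothetical) test file, per compiler variant
runtimes = {
    "cfa-co":  12.0,
    "cfa-bu":   9.0,
    "cfa-imm": 10.5,
    "cfa-def":  6.0,
    "cfa-dca":  4.0,
}

def speedup_vs_baseline(times, base):
    """Return {variant: base_time / variant_time}; values > 1.0 are faster than the baseline."""
    base_time = times[base]
    return {variant: base_time / t for variant, t in times.items()}

print(speedup_vs_baseline(runtimes, baseline))
```

Plotting these ratios on log-scaled axes, as the figure captions note, keeps both small and large speedups readable on one chart.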