\chapter{Experiments} \label{expr-chap}

I have implemented a prototype system to test the practical effectiveness of the various algorithms described in Chapters~\ref{resolution-chap} and~\ref{env-chap}. This prototype system essentially implements just the expression resolution pass of \CFACC{}, with a simplified version of the \CFA{} type system and a parser to read in problem instances. The prototype system allows for quicker iteration on algorithms due to its simpler language model and lack of a requirement to generate runnable code, yet captures enough of the nuances of \CFA{} to have some predictive power for the runtime performance of algorithmic variants in \CFACC{} itself.

There are three sources of problem instances for the resolver prototype. The first is small, hand-written tests designed to check the expressive power and correctness of the prototype. These tests are valuable for regression testing, but not time-consuming enough to be useful performance tests. The second sort of problem instance is procedurally generated according to a set of parameters (distributions of polymorphic versus monomorphic functions, number of function arguments, number of types, \etc{}); by varying the input parameters, the procedural problem generator can be used to explore the behaviour of an algorithm with respect to certain sorts of problem instances. I have implemented a \CFACC{} pass, enabled by a compiler flag, that outputs information which can be used to initialize the procedural generator's parameters to realistic values. The final sort of problem instance is derived from actual \CFA{} code. The prototype has a rich enough representation of \CFA{} that actual instances of expression resolution can be expressed with good fidelity, and I have implemented a compiler pass for \CFACC{} which can generate such instances from \CFA{} code. Since at this juncture all development in \CFA{} is done by our research team, I have tested the prototype system on all \CFA{} code currently extant, primarily the standard library and compiler test suite.

\section{Resolver Prototype Features} \label{rp-features-sec}

The resolver prototype can express most of the \CFA{} features described in Chapter~\ref{cfa-chap}. It supports both monomorphic and polymorphic functions, with type assertions for polymorphic functions. Traits are not explicitly represented, but \CFACC{} inlines traits before the resolver pass, so this is a faithful representation of the existing compiler problem. The prototype system supports variable declarations as well as function declarations, and has a lexical-scoping scheme that obeys \CFA{}-like overloading and overriding rules.

The type system of the resolver prototype also captures the key aspects of the \CFA{} type system. \emph{Concrete types} represent the built-in arithmetic types of \CFA{}, along with the implicit conversions between them. Each concrete type is represented by an integer ID, and the conversion cost from $x$ to $y$ is $|y-x|$, a safe conversion if $y > x$, or an unsafe conversion if $y < x$. This is markedly simpler than the graph of conversion costs in \CFA{}, but captures the essentials of the design well. For simplicity, !zero_t! and !one_t!, the types of !0! and !1!, are represented by the type corresponding to !int!.
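As an illustration, this cost model is essentially the following (a minimal C++ sketch with hypothetical names, not the prototype's actual interface):

\begin{lstlisting}[language=C++]
// Sketch of the concrete-type conversion cost model: each concrete type is
// an integer ID, and converting x to y costs abs(y - x), counted as safe
// (widening) if y > x or unsafe (narrowing) if y < x.
struct Cost {
    unsigned safe;    // number of safe conversions
    unsigned unsafe;  // number of unsafe conversions
};

Cost conversionCost( int x, int y ) {
    if ( y >= x ) return Cost{ unsigned(y - x), 0 };  // safe (free if y == x)
    return Cost{ 0, unsigned(x - y) };                // unsafe
}
\end{lstlisting}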
\emph{Named types} are the analogues of \CFA{} aggregates, such as structs and unions; aggregate fields are encoded as unary functions from the aggregate type to the field type, named after the field. Named types also support type parameters, and as such can represent generic types. Generic named types are also used to represent the built-in parameterized types of \CFA{}; \eg{} !T*! is encoded as \texttt{\#\$ptr}. \CFA{} arrays are likewise represented as pointers, to simulate array-to-pointer decay, while top-level reference types are replaced by their referents to simulate the variety of reference conversions. Function types have first-class representation in the prototype as the type of both function declarations and function pointers, though the function type in the prototype system loses information about type assertions, so polymorphic function pointers cannot be expressed. Void and tuple types are also supported in the prototype, to express the multiple-return-value functions of \CFA{}, though varargs functions and !ttype! tuple-typed type variables are absent from the prototype system. The prototype system also does not represent type qualifiers (\eg{} !const!, !volatile!), so all such qualifiers are stripped during conversion to the prototype's representation.

The resolver prototype supports three sorts of expressions in its input language. The simplest are \emph{value expressions}, which are expressions declared to be of a certain type; these implement literal expressions in \CFA{} and, already being typed, are passed through the resolver unchanged. The second sort, the \emph{name expression}, represents a variable expression in \CFA{}; it contains the name of a variable or function, and is matched by the resolver to an appropriate declaration overloading that name. The third sort, the \emph{function expression}, represents a call to a function, with a name and zero or more argument subexpressions. As is usual in \CFA{}, operators are represented as function calls; moreover, as mentioned above, the prototype system represents field access expressions !a.f! as function expressions as well.
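A minimal sketch of these three expression sorts, in C++ with hypothetical class names rather than the prototype's actual types, is:

\begin{lstlisting}[language=C++]
#include <memory>
#include <string>
#include <vector>

struct Type;  // concrete, named, or function type

struct Expr { virtual ~Expr() = default; };

// Value expression: already typed, passed through resolution unchanged.
struct ValueExpr : Expr { const Type * ty; };

// Name expression: matched by the resolver to a declaration overloading the name.
struct NameExpr : Expr { std::string name; };

// Function expression: a named call with zero or more arguments; operators
// and field accesses (e.g. a.f as a unary call to f) are encoded this way too.
struct FuncExpr : Expr {
    std::string name;
    std::vector< std::unique_ptr<Expr> > args;
};
\end{lstlisting}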
The main area for future expansion in the design of the resolver prototype is conversions. Cast expressions are implemented in the output language of the resolver, but cannot be expressed in the input. The only implicit conversions supported are between the arithmetic-like concrete types, which captures most, but not all, of \CFA{}'s built-in implicit conversions\footnote{Notable absences include \lstinline{void*} to other pointer types, or \lstinline{0} to pointer types.}. Future work should include a way to express implicit (and possibly explicit) conversions in the input language, with an investigation of the most efficient way to handle implicit conversions, and potentially a design for user-defined conversions.

\section{Resolver Prototype Design}

As discussed above, the resolver prototype works over a simplified version of the \CFA{} type system for speed of development. The build system for the resolver prototype uses a number of conditional compilation flags to switch between algorithm variants while retaining maximal shared code. A different executable name is also generated for each algorithmic variant so that distinct variants can be more easily tested against each other.

The primary architectural difference between the resolver prototype and \CFACC{} is that the prototype system uses a simple mark-and-sweep garbage collector for memory management, while \CFACC{} takes a manual memory-management approach. This decision was made to speed development iteration, but has proved to be a significant performance benefit as well. \CFACC{} frequently needs to make deep clones of significant object graphs to ensure memory ownership (followed by eventual deletion of these clones), an unnecessarily time-consuming process. The prototype, on the other hand, only needs to clone modified nodes, and can share identical subsets of the object graph. The key design decision enabling this sharing is that all subnodes are held by !const! pointer, and thus cannot be mutated once they have been stored in a parent node. With minimal programming discipline, it can thus be ensured that any expression is either mutable or shared, but never both; the Dotty research compiler for Scala takes a similar architectural approach\cit{}. % this citation would be "personal correspondence"
The tree mutator abstraction is designed to take advantage of this property, only creating new nodes if a node must actually be mutated.
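The essence of this mutate-or-share pattern is sketched below; the sketch substitutes reference counting for the prototype's tracing collector, and all names are hypothetical.

\begin{lstlisting}[language=C++]
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct Node {
    // children are held by pointer-to-const, so they cannot be mutated
    // after being stored in a parent and may be freely shared
    std::vector< std::shared_ptr<const Node> > children;
};

// Produce a node with child i replaced, sharing all unmodified subtrees;
// a new node is allocated only if a mutation actually occurs.
std::shared_ptr<const Node> replaceChild(
        const std::shared_ptr<const Node> & n, std::size_t i,
        std::shared_ptr<const Node> newChild ) {
    if ( n->children[i] == newChild ) return n;  // unchanged: share existing node
    auto copy = std::make_shared<Node>( *n );    // shallow copy, shares grandchildren
    copy->children[i] = std::move( newChild );
    return copy;
}
\end{lstlisting}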
I attempted to port this garbage collector to \CFACC{}, but without success. The GC could be used for memory management with few changes to the codebase, but without a substantial rewrite to enforce the same ``!const! children'' discipline, \CFACC{} could not take advantage of the potential to share sub-objects; without such sharing, the GC variant of \CFACC{} performed all the same allocations and deletions as before, and the garbage-collector overhead degraded performance unacceptably (though it did fix some known memory leaks introduced by failures of the existing manual memory-management scheme).

Another minor architectural difference between \CFACC{} and the prototype system is that \CFACC{} makes extensive use of the pointer-chasing !std::list!, !std::set!, and !std::map! data structures, while the prototype uses the array-based !std::vector! and the hash-based !unordered_! variants of !set! and !map! instead. \TODO{investigate performance difference by testing a resolver prototype variant with List etc. redefined}

The final difference between \CFACC{} and the resolver prototype is that, as an experiment in language usability, the prototype performs resolution-based rather than unification-based assertion satisfaction, as discussed in Section~\ref{resn-conclusion-sec}. This enables coding patterns not available in \CFACC{}, \eg{} a more flexible approach to type assertion satisfaction and better handling of functions returning polymorphic type variables that do not exist in the parameter list. \TODO{test performance; shouldn't be too hard to change \texttt{resolveAssertions} to use unification}

\section{Prototype Experiments}

The primary performance experiments for this thesis were conducted using the resolver prototype on problem instances generated from actual \CFA{} code using the method described in Section~\ref{rp-features-sec}. The prototype was compiled in 24 variants over 3 variables, with variants identified by the hyphen-separated concatenation of their short codes, \eg{} \textsc{bu-imm-bas} for bottom-up traversal, immediate assertion satisfaction, basic type environment. The variables and their values are as follows:

\begin{description}
\item[Traversal direction] The order in which arguments are matched with parameters, as discussed in Section~\ref{arg-parm-matching-sec}.
	\begin{description}
	\item[Bottom-up] (\textsc{bu}) Baker-style bottom-up pass, searching for function candidates based on the available argument interpretations.
	\item[Combined] (\textsc{co}) Bilson-style bottom-up pass, where argument interpretations are combined into a single combination interpretation.
	\item[Top-down] (\textsc{td}) Cormack-style top-down pass, searching for argument interpretations based on function candidate parameter types. The \textsc{td-*} variants of the resolver prototype implement a caching system to avoid recomputation of the same argument interpretation with the same type.
	\end{description}
\item[Assertion satisfaction] The algorithm for finding satisfying declarations for type assertions, as discussed in Section~\ref{assn-sat-sec}.
	\begin{description}
	\item[Immediate] (\textsc{imm}) All assertions are checked for satisfaction immediately upon generating a candidate interpretation. The techniques discussed in Section~\ref{assn-sat-sec} for environment combination and level-by-level consideration of recursive assertions are applied here.
	\item[Deferred] (\textsc{def}) As in \textsc{-imm-}, but waits to check assertions until a top-level interpretation has been generated.
	\item[Cached] (\textsc{dca}) As in \textsc{-def-}, but uses the caching optimization discussed in Section~\ref{assn-sat-sec}.
	\end{description}
\item[Type environment] The type environment data structure used, as discussed in Chapter~\ref{env-chap}.
	\begin{description}
	\item[Basic] (\textsc{bas}) Bilson-style type environment with hash-based equivalence-class storage, as discussed in Section~\ref{naive-env-sec}.
	\item[Incremental inheritance] (\textsc{inc}) Incremental-inheritance variant, sharing unmodified common parent information between environments, as discussed in Section~\ref{inc-env-sec}.
	\item[Persistent union-find] (\textsc{per}) Union-find-based environment, using the persistent variant discussed in Section~\ref{env-persistent-union-find} for backtracking and combination. This environment requires common root environments for combination, which is incompatible with the caching used in the top-down traversal direction, and thus no \textsc{td-*-per} algorithms are tested.
	\end{description}
\end{description}

To test the various algorithms, the resolver prototype was compiled with each of the 24 valid combinations of variables\footnote{Namely, all combinations except \textsc{td-*-per}.}, and then timed running each of the \CFA{}-derived test inputs. Terminal output was suppressed for all tests to avoid confounding factors in the timing results, and all tests were run three times in series, with the median result reported in all cases. The medians are representative data points; considering test cases that took at least 0.2~s to run, the average run was within 2\% of the reported median runtime, and no run diverged from the median runtime by more than 20\% or 5.5~s. The memory results are even more consistent, with no run exceeding a 2\% difference from the median in peak resident set size, and 93\% of tests not recording any difference within the 1~KB granularity of the measurement software. All tests were run on a machine with 128~GB of RAM and 64 cores running at 2.2~GHz.

As a matter of experimental practicality, test runs which exceeded 8~GB of peak resident memory usage were excluded from the data set. This is a reasonable real-world restriction, as a compiler which is merely slow may be accommodated with patience, but one which uses in excess of 8~GB of RAM may be impossible to run on many currently deployed computer systems. The \textsc{bu-dca-bas} and \textsc{bu-dca-per} variants were able to run all 131 test inputs to completion under this restriction, with maximum memory usage of 70~MB and 78~MB, respectively, which validates the selection of 8~GB as an exclusion threshold.
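For reference, peak resident set size is available on POSIX systems via !getrusage!; on Linux its !ru_maxrss! field is reported in kilobytes, which matches the 1~KB granularity noted above. The following sketch is illustrative only, not the actual measurement tooling used:

\begin{lstlisting}[language=C++]
#include <cstdio>
#include <sys/resource.h>

// Report this process's peak resident set size; on Linux, ru_maxrss
// is measured in kilobytes.
void reportPeakMemory() {
    rusage ru;
    getrusage( RUSAGE_SELF, &ru );
    std::printf( "peak RSS: %ld KB\n", ru.ru_maxrss );
}
\end{lstlisting}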
However, this threshold did eliminate a significant number of algorithm-test pairs, with the worst-performing variant, \textsc{td-imm-inc}, completing only 62 test inputs within the memory bound. Full results for tests completed by algorithm variant are presented in Figure~\ref{tests-completed-fig}.

\begin{figure}
\centering
\input{tests-completed}
\caption[Tests completed for each algorithmic variant]{Number of tests completed for each algorithmic variant} \label{tests-completed-fig}
\end{figure}

As can be seen from these results, traversal direction is clearly the dominant variable in memory usage, with the \textsc{bu-*} variants performing better than the \textsc{co-*} variants, which in turn outperform the \textsc{td-*} variants. It can also be seen that the incremental-inheritance (\textsc{inc}) type environment consistently under-performs the other two environment data structures tested.

To provide a more holistic view of performance, I have considered the results from the 56 test inputs which all algorithms are able to complete within the memory bound. Limiting consideration to these inputs provides an apples-to-apples comparison among algorithms, as the excluded inputs are harder instances which take more time and memory for the algorithms which are able to solve them. Figures~\ref{avg-peak-mem-fig} and~\ref{avg-runtime-fig} show the mean peak memory and runtime, respectively, of each algorithm over the inputs in this data set. These averages are not themselves meaningful, but do enable an overall comparison of the relative performance of the different variants. Selecting only these 56 ``easy'' test inputs does bias the average values downward, but has little effect on the relative trends; similar trends can be seen in the graphs of the \textsc{bu-*} algorithms over the 124 (of 131) test inputs which they all complete, omitted to save space.

\begin{figure}
\centering
\input{avg-peak-mem}
\caption[Average peak memory for each algorithmic variant]{Average peak resident set size for each algorithmic variant over the 56 test inputs all variants complete.} \label{avg-peak-mem-fig}
\end{figure}

\begin{figure}
\centering
\input{avg-runtime}
\caption[Average runtime for each algorithmic variant]{Average runtime for each algorithmic variant over the 56 test inputs all variants complete.} \label{avg-runtime-fig}
\end{figure}

% \begin{figure}
% \centering
% \input{bu-peak-mem}
% \caption[Average peak memory for each \textsc{bu-*} variant]{Average peak resident set size for each \textsc{bu-*} variant over the 124 test inputs all \textsc{bu-*} variants complete.} \label{bu-peak-mem-fig}
% \end{figure}

% \begin{figure}
% \centering
% \input{bu-runtime}
% \caption[Average runtime for each \textsc{bu-*} variant]{Average runtime for each \textsc{bu-*} variant over the 124 test inputs all \textsc{bu-*} variants complete.} \label{bu-runtime-fig}
% \end{figure}

It can be seen from these results that the top-down, immediate-assertion-satisfaction variants (\textsc{td-imm-*}) perform particularly poorly, as they check a significant number of assertions without filtering to determine whether the arguments can be made to fit. It is also clear that the bottom-up (\textsc{bu-*}) traversal order is better than both the top-down and the Bilson-style bottom-up-combined orders.
With regard to assertion satisfaction, immediate (\textsc{*-imm-*}) satisfaction is an inferior solution, though there is little performance difference between deferred (\textsc{*-def-*}) and deferred-cached (\textsc{*-dca-*}) on instances which both can complete. Since the \textsc{dca} algorithm can solve some particularly hard instances which \textsc{def} cannot, it is the recommended approach. The \textsc{inc} type environment also often uses upwards of double the memory required by the other variants, in addition to being consistently slower on these easy tests; aside from \textsc{bu-imm-bas} performing worse than \textsc{bu-imm-inc} on average when larger tests are considered, these results hold across the other variants. Finally, the persistent union-find (\textsc{per}) type environment generally performs better than the basic (\textsc{bas}) environment, with similar peak memory usage and an average speedup factor of nearly 2, though the requirements of the \textsc{per} environment for automatic garbage collection and a shared history for combination make retrofitting it into older code difficult.

\section{Instance Difficulty}

To characterize the difficulty of expression resolution problem instances, the test suites must be explored at a finer granularity. As discussed in Section~\ref{resn-analysis-sec}, a single top-level expression is the fundamental problem instance for resolution, yet the test inputs discussed above are composed of thousands of top-level expressions, like the actual source code they are derived from. To isolate the effects of these individual problems, I instrumented the resolver prototype to time the resolution of each expression, and also to report some relevant properties of the expression. This instrumented resolver was then run on a set of difficult test instances; to limit the data-collection task, these runs were limited to the best-performing \textsc{bu-dca-per} algorithm and to test inputs which that algorithm took more than 1~s to complete.
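This instrumentation amounts to wrapping each top-level resolution in a timer and accumulating the result into an order-of-magnitude bin; a minimal C++ sketch, with the resolver entry point abstracted as a callable, is:

\begin{lstlisting}[language=C++]
#include <algorithm>
#include <chrono>
#include <cmath>
#include <map>
#include <utility>

// bin index (floor of log10 of runtime in ms) -> (count, total ms)
std::map< int, std::pair<unsigned, double> > bins;

// Time a single top-level resolution and accumulate it into its
// order-of-magnitude bin.
template< typename Resolve >
void timeOne( Resolve && resolve ) {
    auto start = std::chrono::steady_clock::now();
    resolve();
    std::chrono::duration<double, std::milli> elapsed =
        std::chrono::steady_clock::now() - start;
    double ms = std::max( elapsed.count(), 1e-6 );  // clamp to avoid log10(0)
    int bin = int( std::floor( std::log10( ms ) ) );
    auto & entry = bins[bin];
    entry.first += 1;
    entry.second += ms;
}
\end{lstlisting}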
The 13 test inputs thus selected contain 20632 top-level expressions among them, which are separated into order-of-magnitude bins by runtime in Figure~\ref{per-prob-histo-fig}. As can be seen from this figure, overall runtime is dominated by a few particularly difficult problem instances: the 60\% of expressions which resolve in under 0.1~ms collectively take less time to resolve than any single one of the 0.2\% of expressions which take at least 100~ms to resolve. On the other hand, the 46 expressions in that 0.2\% account for 38\% of the overall time in this difficult test suite, while the 201 expressions that take between 10 and 100~ms to resolve consume another 30\%.

\begin{figure}
\centering
\input{per-prob-histo}
\caption[Histogram of top-level expressions]{Histogram of top-level expression resolution runtime, binned by order of magnitude. The left series counts the expressions in each bin according to the left axis, while the right series reports the summed runtime of resolution for all expressions in that bin. Note that both y-axes are log-scaled.} \label{per-prob-histo-fig}
\end{figure}

Since the top centile of expression resolution instances requires approximately two-thirds of the resolver's time, optimizing the resolver for specific hard problem instances has proven to be an effective technique for reducing overall runtime.

\TODO{Discuss metrics of difficulty.}
% TODO: look at overloads
% TODO: histogram of hard cases

\section{\CFA{} Results}
% use Jenkins daily build logs to rebuild speedup graph with more data
% look back at Resolution Algorithms section for threads to tie up "does the algorithm look like this?"