# Changeset 8d752592 for doc/theses/aaron_moss_PhD

Ignore:
Timestamp:
Feb 4, 2019, 10:20:43 PM (3 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr
Children:
902b123
Parents:
0e6a0be
Message:

thesis: discuss design differences between cfa-cc and resolv-proto

Location:
doc/theses/aaron_moss_PhD/phd
Files:
2 edited

### Legend:

Unmodified
 r0e6a0be The first is small, hand-written tests designed to test the expressive power and correctness of the prototype. These tests are valuable for regression testing, but not time-consuming enough to be useful performance tests. The second sort of problem instance is procedurally generated according to a set of parameters (\eg{} distributions of polymorphic versus monomorphic functions, number of function arguments, number of types, \etc{}); the procedural problem generator can be used to explore the behaviour of an algorithm with respect to certain sorts of problem instances by varying the input parameters. I have implemented a flagged \CFACC{} pass which outputs information that can be used to initialize the procedural generator's parameters to realistic values. The final sort of problem instance is derived from actual \CFA{} code. The prototype has a rich enough representation of \CFA{} that actual instances of expression resolution can be expressed with good fidelity, and I have implemented a compiler pass for \CFACC{} which can generate instances from \CFA{} code. Since at this juncture all development in \CFA{} is done by our research team, I have tested the prototype system on all \CFA{} code currently extant, primarily the standard library and compiler test suite.
\section{Resolver Prototype Design}

% talk about GC here, associated visitor/AST design
As discussed above, the resolver prototype works over a simplified version of the \CFA{} type system, for speed of development. The build system for the resolver prototype uses a number of conditional compilation flags to switch between algorithm variants while retaining maximal shared code. A different executable name is also generated for each algorithmic variant so that distinct variants can be more easily tested against each other.

% talk about shared type representations
The primary architectural difference between the resolver prototype and \CFACC{} is that the prototype system uses a simple mark-and-sweep garbage collector for memory management, while \CFACC{} takes a manual memory management approach. This decision was made for the purpose of faster development iteration, but has proved to be a significant performance benefit as well. \CFACC{} frequently needs to make deep clones of significant object graphs to ensure memory ownership (followed by eventual deletion of these clones), an unnecessarily time-consuming process. The prototype, on the other hand, only needs to clone modified nodes, and can share identical subsets of the object graph. The key design decision enabling this is that all subnodes are held by !const! pointer, and thus cannot be mutated once they have been stored in a parent node. With minimal programming discipline, it can thus be ensured that any expression is either mutable or shared, but never both; the Dotty research compiler for Scala takes a similar architectural approach\cit{}. % this citation would be "personal correspondence"
The tree mutator abstraction is designed to take advantage of this, only creating new nodes if a node must actually be mutated. I attempted to port this garbage collector to \CFACC{}, but without success.
The GC could be used for memory management with few changes to the codebase, but without a substantial re-write to enforce the same ``!const! children'' discipline, \CFACC{} could not take advantage of the potential to share sub-objects; without sharing of sub-objects, the GC variant of \CFACC{} must perform all the same allocations and deletions, and the added garbage-collector overhead degraded performance unacceptably (though it did fix some known memory leaks introduced by failures of the existing manual memory management scheme).

Another minor architectural difference between \CFACC{} and the prototype system is that \CFACC{} makes extensive use of the pointer-chasing !std::list!, !std::set!, and !std::map! data structures, while the prototype uses the array-based !std::vector! and the hash-based !unordered_! variants of !set! and !map! instead. \TODO{investigate performance difference by testing a resolver prototype variant with List etc. redefined}

The final difference between \CFACC{} and the resolver prototype is that, as an experiment in language usability, the prototype performs resolution-based rather than unification-based assertion satisfaction, as discussed in Section~\ref{resn-conclusion-sec}. This enables coding patterns not available in \CFACC{}, \eg{} a more flexible approach to type assertion satisfaction and better handling of functions returning polymorphic type variables that do not exist in the parameter list. \TODO{test performance; shouldn't be too hard to change \texttt{resolveAssertions} to use unification}

\section{Experimental Results}