Changeset e99e43f for doc/theses
- Timestamp:
- Jan 10, 2019, 3:50:34 PM (7 years ago)
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, no_list, persistent-indexer, pthread-emulation, qualifiedEnum
- Children:
- d97c3a4
- Parents:
- aeb8f70 (diff), 08222c7 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- git-author:
- Aaron Moss <a3moss@…> (01/10/19 14:46:09)
- git-committer:
- Aaron Moss <a3moss@…> (01/10/19 15:50:34)
- Location:
- doc/theses
- Files:
-
- 23 added
- 6 edited
-
doc/theses/aaron_moss_PhD/phd/Makefile
raeb8f70 re99e43f
1 1 BUILD = build
2 2 BIBDIR = ../../../bibliography
3 EVALDIR = evaluation
3 4 TEXLIB = .:${BUILD}:${BIBDIR}:
4 5 
5 LATEX = TEXINPUTS=${TEXLIB} && export TEXINPUTS && pdflatex -interaction= -output-directory=${BUILD}
6 # LATEX = TEXINPUTS=${TEXLIB} && export TEXINPUTS && pdflatex -interaction=nonstopmode -halt-on-error -output-directory=${BUILD}
7 LATEX = TEXINPUTS=${TEXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${BUILD}
6 8 BIBTEX = BIBINPUTS=${TEXLIB} && export BIBINPUTS && bibtex
9 
10 VPATH = ${EVALDIR}
7 11 
8 12 BASE = thesis
… …
23 27 }
24 28 
29 GRAPHS = ${addsuffix .tex, \
30 	generic-timing \
31 }
32 
25 33 .PHONY : all rebuild-refs clean wc
26 34 
… …
33 41 	wc ${SOURCES}
34 42 
35 ${DOCUMENT} : ${SOURCES} ${BUILD}
36 	${LATEX} ${BASE}
37 	${LATEX} ${BASE}
43 ${DOCUMENT} : ${BASE}.ps
44 	ps2pdf ${BUILD}/$<
38 45 
39 rebuild-refs : ${SOURCES} ${BIBFILE} ${BUILD}
46 ${BASE}.ps : ${BASE}.dvi
47 	dvips ${BUILD}/$< -o ${BUILD}/$@
48 
49 ${BASE}.dvi : Makefile ${SOURCES} ${GRAPHS} ${BIBFILE} ${BUILD}
40 50 	${LATEX} ${BASE}
41 51 	${BIBTEX} ${BUILD}/${BASE}
… …
43 53 	${LATEX} ${BASE}
44 54 
55 ${GRAPHS} : generic-timing.gp generic-timing.dat ${BUILD}
56 	gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/generic-timing.gp
57 
45 58 ${BUILD}:
46 59 	mkdir -p ${BUILD}
-
doc/theses/aaron_moss_PhD/phd/generic-types.tex
raeb8f70 re99e43f
135 135 
136 136 \CC{}, Java, and other languages use \emph{generic types} to produce type-safe abstract data types.
137 Design and implementation of generic types for \CFA{} is the first major contribution of this thesis, a summary of which is published in \cite{Moss18}.
137 Design and implementation of generic types for \CFA{} is the first major contribution of this thesis, a summary of which is published in \cite{Moss18} and on which this chapter is closely based.
138 138 \CFA{} generic types integrate efficiently and naturally with the existing polymorphic functions in \CFA{}, while retaining backward compatibility with C in layout and support for separate compilation.
139 139 A generic type can be declared in \CFA{} by placing a !forall! specifier on a !struct! or !union! declaration, and instantiated using a parenthesized list of types after the generic name.
… …
179 179 \subsection{Related Work}
180 180 
181 One approach to the design of generic types is that taken by \CC{} templates\cit{}.
181 One approach to the design of generic types is that taken by \CC{} templates\cite{C++}.
182 182 The template approach is closely related to the macro-expansion approach to C polymorphism demonstrated in Figure~\ref{macro-generic-fig}, but where the macro-expansion syntax has been given first-class language support.
183 183 Template expansion has the benefit of generating code with near-optimal runtime efficiency, as distinct optimizations can be applied for each instantiation of the template.
… …
185 185 The most significant restriction of the \CC{} template model is that it breaks separate compilation and C's translation-unit-based encapsulation mechanisms.
186 186 Because a \CC{} template is not actually code, but rather a sort of ``recipe'' to generate code, template code must be visible at its call site to be used.
187 Furthermore, \CC{} template code cannot be type-checked without instantiating it, a time-consuming process with no hope of improvement until \CC{} concepts\cite{C++Concepts} are standardized in \CCtwenty{}.
187 188 C code, by contrast, only needs a !struct! or function declaration to call that function or use (by-pointer) values of that type, a desirable property to maintain for \CFA{}.
188 189 
189 Java\cit{} has another prominent implementation for generic types, based on a significantly different approach than \CC{}.
190 Java\cite{Java8} has another prominent implementation for generic types, introduced in Java~5 and based on a significantly different approach than \CC{}.
190 191 The Java approach has much more in common with the !void*!-polymorphism shown in Figure~\ref{void-generic-fig}; since in Java nearly all data is stored by reference, the Java approach to polymorphic data is to store pointers to arbitrary data and insert type-checked implicit casts at compile-time.
191 192 This process of \emph{type erasure} has the benefit of allowing a single instantiation of polymorphic code, but relies heavily on Java's object model and garbage collector.
192 193 To use this model, a more C-like language such as \CFA{} would be required to dynamically allocate internal storage for variables, track their lifetime, and properly clean them up afterward.
193 194 
194 \TODO{Talk about Go, maybe Rust, Swift, etc. as well; specifically mention ``fat pointer'' polymorphism}
195 
196 \TODO{Talk about Cyclone as well, and why my generics are more powerful}
195 Cyclone\cite{Grossman06} is another language extending C, and also provides capabilities for polymorphic functions and existential types, similar to \CFA{}'s !forall! functions and generic types.
196 Cyclone existential types can include function pointers in a construct similar to a virtual function table, but these pointers must be explicitly initialized at some point in the code, which is tedious and error-prone compared to \CFA{}'s implicit assertion satisfaction.
197 Furthermore, Cyclone's polymorphic functions and types are restricted to abstraction over types with the same layout and calling convention as !void*!, \ie{} only pointer types and !int!.
198 In the \CFA{} terminology discussed in Section~\ref{generic-impl-sec}, all Cyclone polymorphism must be dtype-static.
199 While the Cyclone polymorphism design provides the efficiency benefits discussed in Section~\ref{dtype-static-sec} for dtype-static polymorphism, it is more restrictive than the more general model of \CFA{}.
200 
201 Many other languages include some form of generic types.
202 As a brief survey, ML\cite{ML} was the first language to support parametric polymorphism, but unlike \CFA{} does not support the use of assertions and traits to constrain type arguments.
203 Haskell\cite{Haskell10} combines ML-style polymorphism with the notion of type classes, similar to \CFA{} traits, but requiring an explicit association with their implementing types, unlike \CFA{}.
204 Objective-C\cite{obj-c-book} is an extension to C which has had some industrial success; however, it did not support type-checked generics until recently\cite{xcode7}, and its garbage-collected, message-passing object-oriented model is a radical departure from C.
205 Go\cite{Go} and Rust\cite{Rust} are modern compiled languages with abstraction features similar to \CFA{} traits, \emph{interfaces} in Go and \emph{traits} in Rust.
206 Go has implicit interface implementation and uses a ``fat pointer'' construct to pass polymorphic objects to functions, similar in principle to \CFA{}'s implicit forall parameters.
207 Go does not, however, allow user code to define generic types, restricting Go programmers to the small set of generic types defined by the compiler.
208 Rust has powerful abstractions for generic programming, including explicit implementation of traits and options for both separately-compiled virtual dispatch and template-instantiated static dispatch in functions.
209 On the other hand, the safety guarantees of Rust's \emph{lifetime} abstraction and borrow checker impose a distinctly idiosyncratic programming style and steep learning curve; \CFA{}, with its more modest safety features, allows direct ports of C code while maintaining the idiomatic style of the original source.
197 210 
198 211 \subsection{\CFA{} Generics}
… …
212 225 
213 226 In this example, !with_len! is defined at the same scope as !pair!, but it could be called from any context that can see the definition of !pair! and a declaration of !with_len!.
214 If its return type was !pair(const char*, int)*!, callers of !with_len! would only need the declaration !forall(otype R, otype S) struct pair ;! to call it, in accordance with the usual C rules for opaque types.
227 If its return type was !pair(const char*, int)*!, callers of !with_len! would only need the declaration !forall(otype R, otype S) struct pair!
to call it, in accordance with the usual C rules for opaque types. 215 228 216 229 !with_len! is itself a monomorphic function, returning a type that is structurally identical to !struct { const char* first; int second; }!, and as such could be called from C given an appropriate redeclaration and demangling flags. … … 234 247 \end{cfa} 235 248 236 \section{Implementation} 237 238 % forall constraints on struct/union constrain default constructor (TODO check with Rob) 239 240 % TODO discuss layout function algorithm, application to separate compilation 241 % TODO put a static const field in for _n_fields for each generic, describe utility for separate compilation (actually no, you need to be able to see the type for it to be sized) 242 243 % mention that tuples are implemented on top of a per-arity generic type 244 245 \section{Performance Experiments} 246 247 % TODO pull benchmarks from Moss et al. 249 \CFA{} generic types also support the type constraints from !forall! functions. 250 For example, the following declaration of a sorted set type ensures that the set key implements equality and relational comparison: 251 252 \begin{cfa} 253 forall(otype Key | { int ?==?(Key, Key); int ?<?(Key, Key); }) struct sorted_set; 254 \end{cfa} 255 256 These constraints are implemented by applying equivalent constraints to the compiler-generated constructors for this type. 257 258 \section{Implementation} \label{generic-impl-sec} 259 260 The ability to use generic types in polymorphic contexts means that the \CFA{} implementation in \CFACC{} must support a mechanism for accessing fields of generic types dynamically at runtime. 261 While \CFACC{} could in principle use this same mechanism for accessing fields of all generic types, such an approach would throw away compiler knowledge of static types and impose an unnecessary runtime cost, limiting the utility of the generic type design. 262 Instead, my design for generic type support in \CFACC{} distinguishes between \emph{concrete} generic types that have a fixed memory layout regardless of type parameters and \emph{dynamic} generic types that may vary in memory layout depending on their type parameters. 263 A \emph{dtype-static} type has polymorphic parameters but is still concrete. 264 Polymorphic pointers are an example of dtype-static types; given some type variable !T!, T is a polymorphic type, but !T*! has a fixed size and can therefore be represented by a !void*! in code generation. 265 In particular, generic types where all parameters are un-!sized! (\ie{} they do not conform to the built-in !sized! trait because the compiler does not know their size and alignment) are always concrete, as there is no possibility for their layout to vary based on type parameters of unknown size and alignment. 266 More precisely, a type is concrete if and only if all of its !sized! type parameters are concrete, and a concrete type is dtype-static if any of its type parameters are (possibly recursively) polymorphic. 267 To illustrate, the following code using the !pair! type from above \TODO{test this} has each use of !pair! 
commented with its class:
268 
269 \begin{cfa}
270 // dynamic, layout varies based on T
271 forall(otype T) T value( pair(const char*, T) p ) { return p.second; }
272 
273 // dtype-static, F* and T* are concrete but recursively polymorphic
274 forall(dtype F, otype T) T value( pair(F*, T*) p ) { return *p.second; }
275 
276 pair(const char*, int) p = {"magic", 42}; $\C[2.5in]{// concrete}$
277 int i = value(p);
278 pair(void*, int*) q = {0, &p.second}; $\C[2.5in]{// concrete}$
279 i = value(q);
280 double d = 1.0;
281 pair(double*, double*) r = {&d, &d}; $\C[2.5in]{// concrete}$
282 d = value(r);
283 \end{cfa}
284 
285 \subsection{Concrete Generic Types}
286 
287 The \CFACC{} translator template expands concrete generic types into new structure types, affording maximal inlining.
288 To enable interoperation among equivalent instantiations of a generic type, \CFACC{} saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate.
289 In particular, tuple types are implemented as a single compiler-generated generic type definition per tuple arity, and can be instantiated and reused according to the usual rules for generic types.
290 A function declaration that accepts or returns a concrete generic type produces a declaration for the instantiated structure in the same scope, which all callers may reuse.
291 As an example, the concrete instantiation for !pair(const char*, int)! is\footnote{This omits the field name mangling performed by \CFACC{} for overloading purposes.\label{mangle-foot}}
292 
293 \begin{cfa}
294 struct _pair_conc0 { const char * first; int second; };
295 \end{cfa}
296 
297 A concrete generic type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations.
298 In the example above, the !pair(F*, T*)! parameter to !value! is such a type; its expansion is below\footref{mangle-foot}, and it is used as the type of the variables !q! and !r! as well, with casts for member access where appropriate.
299 
300 \begin{cfa}
301 struct _pair_conc1 { void* first; void* second; };
302 \end{cfa}
303 
304 \subsection{Dynamic Generic Types}
305 
306 In addition to this efficient implementation of concrete generic types, \CFA{} also offers flexibility with powerful support for dynamic generic types.
307 In the pre-existing compiler design, !otype! (and all !sized!) type parameters come with implicit size and alignment parameters provided by the caller.
308 The design for generic types presented here adds an \emph{offset array} containing structure-member offsets for dynamic generic !struct! types.
309 A dynamic generic !union! needs no such offset array, as all members are at offset 0, but size and alignment are still necessary.
310 Access to members of a dynamic structure is provided at runtime via base-displacement addressing using the structure pointer and the member offset (similar to the !offsetof! macro), moving a compile-time offset calculation to runtime.
311 
312 The offset arrays are statically generated where possible.
313 If a dynamic generic type is passed or returned by value from a polymorphic function, \CFACC{} can safely assume that the generic type is complete (\ie{} has a known layout) at any call site, and the offset array is passed from the caller; if the generic type is concrete at the call site, the elements of this offset array can even be statically generated using the C !offsetof! macro.
314 As an example, the body of the second !value! function above is implemented as
315 
316 \begin{cfa}
317 _assign_T( _retval, p + _offsetof_pair[1] ); $\C[2in]{// return *p.second}$
318 \end{cfa}
319 
320 Here, !_assign_T! is passed in as an implicit parameter from !otype T! and takes two !T*! (!void*! in the generated code), a destination and a source, and !_retval! is the pointer to a caller-allocated buffer for the return value, the usual \CFA{} method to handle dynamically-sized return types.
321 !_offsetof_pair! is the offset array passed into !value!; this array is generated at the call site as
322 
323 \begin{cfa}
324 size_t _offsetof_pair[] = {offsetof(_pair_conc0, first), offsetof(_pair_conc0, second)};
325 \end{cfa}
326 
327 \subsubsection{Layout Functions}
328 
329 In some cases, the offset arrays cannot be statically generated.
330 For instance, modularity is generally provided in C by including an opaque forward declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately-compiled \texttt{.c} file.
331 \CFA{} supports this pattern for generic types, implying that the caller of a polymorphic function may not know the actual layout or size of a dynamic generic type and only holds it by pointer.
332 \CFACC{} automatically generates \emph{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed into a function from that function's caller.
333 These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all !sized! parameters to the generic structure.
334 Un!sized! parameters are not passed because they are forbidden from being used in a context that affects layout by C's usual rules about incomplete types.
335 Similarly, the layout function can only safely be called from a context where the generic type definition is visible, because otherwise the caller will not know how large to allocate the array of member offsets.
336 
337 The C standard does not specify a memory layout for structs, but the POSIX ABI for x86\cit{} does; this memory layout is common for C implementations, but is a platform-specific issue for porting \CFA{}.
338 This algorithm, sketched below in pseudo-\CFA{}, is a straightforward mapping of consecutive fields into the first properly-aligned offset in the !struct! layout; layout functions for !union! types omit the offset array and simply calculate the maximum size and alignment over all union variants.
339 Since \CFACC{} generates a distinct layout function for each type, constant-folding and loop unrolling are applied.
340 
341 \begin{cfa}
342 forall(dtype T1, dtype T2, ... | sized(T1) | sized(T2) | ...)
343 void layout(size_t* size, size_t* align, size_t* offsets) {
344 	// initialize values
345 	*size = 0; *align = 1;
346 	// set up members
347 	for ( int i = 0; i < n_fields; ++i ) {
348 		// pad to alignment
349 		size_t off_align = *size % alignof(field[i]);
350 		if ( off_align != 0 ) { *size += alignof(field[i]) - off_align; }
351 		// mark member, increase size, and fix alignment
352 		offsets[i] = *size;
353 		*size += sizeof(field[i]);
354 		if ( *align < alignof(field[i]) ) { *align = alignof(field[i]); }
355 	}
356 	// final padding to alignment
357 	size_t off_align = *size % *align;
358 	if ( off_align != 0 ) { *size += *align - off_align; }
359 }
360 \end{cfa}
361 
362 Results of layout function calls are cached so that they are only computed once per type per function.
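To make the offset-array mechanism concrete, the following standalone C sketch mimics the generated code described above; the !pair_conc! structure and !offsets! array are illustrative stand-ins for the generated !_pair_conc0! and !_offsetof_pair!, not actual \CFACC{} output.

\begin{cfa}
#include <stddef.h>	// offsetof, size_t
#include <stdio.h>
#include <string.h>	// memcpy

// illustrative monomorphic instantiation, standing in for _pair_conc0
struct pair_conc { const char * first; int second; };

int main(void) {
	// offset array, as a caller that can see the complete type would generate it
	size_t offsets[] = { offsetof(struct pair_conc, first),
	                     offsetof(struct pair_conc, second) };

	struct pair_conc p = { "magic", 42 };

	// base-displacement access: the second member is reached through its runtime
	// offset, as the generated polymorphic code does with _offsetof_pair[1]
	int second;
	memcpy( &second, (char*)&p + offsets[1], sizeof(int) );
	printf( "%d\n", second );	// prints 42
	return 0;
}
\end{cfa}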
363 Layout functions also allow generic types to be used in a function definition without reflecting them in the function signature, an important implementation-hiding constraint of the design.
364 For instance, a function that strips duplicate values from an unsorted !list(T)! likely has a reference to the list as its only explicit parameter, but uses some sort of !set(T)! internally to test for duplicate values.
365 This function could acquire the layout for !set(T)! by calling its layout function, providing as an argument the layout of !T! implicitly passed into that function.
366 
367 Whether a type is concrete, dtype-static, or dynamic is decided solely on the basis of the type arguments and !forall! clause type parameters.
368 This design allows opaque forward declarations of generic types, \eg{} !forall(otype T) struct Box;!. Like in C, all uses of !Box(T)! can be separately compiled, and callers from other translation units know the proper calling conventions to use.
369 In an alternate design where the definition of a structure type is included in deciding whether a generic type is dynamic or concrete, some further types may be recognized as dtype-static --- \eg{} !Box! could be defined with a body !{ T* p; }!, and would thus not depend on !T! for its layout.
370 However, the existence of an !otype! parameter !T! means that !Box! \emph{could} depend on !T! for its layout if this definition is not visible, and we judged preserving separate compilation (and the associated C compatibility) in the implemented design to be an acceptable trade-off.
371 
372 \subsection{Applications of Dtype-static Types} \label{dtype-static-sec}
373 
374 The reuse of dtype-static structure instantiations enables useful programming patterns at zero runtime cost.
375 The most important such pattern is using !forall(dtype T) T*! as a type-checked replacement for !void*!, \eg{} creating a lexicographic comparison function for pairs of pointers.
376 
377 \begin{cfa}
378 forall(dtype T)
379 int lexcmp( pair(T*, T*)* a, pair(T*, T*)* b, int (*cmp)(T*, T*) ) {
380 	int c = cmp( a->first, b->first );
381 	return c ? c : cmp( a->second, b->second );
382 }
383 \end{cfa}
384 
385 Since !pair(T*, T*)! is a concrete type, there are no implicit parameters passed to !lexcmp!; hence, the generated code is identical to a function written in standard C using !void*!, yet the \CFA{} version is type-checked to ensure members of both pairs and arguments to the comparison function match in type.
386 
387 Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \emph{tag structures}.
388 Sometimes, information is only used for type checking and can be omitted at runtime.
389 In the example below, !scalar! is a dtype-static type; hence, all uses have a single structure definition containing !unsigned long! and can share the same implementations of common functions like !?+?!.
390 These implementations may even be separately compiled, unlike \CC{} template functions.
391 However, the \CFA{} type checker ensures matching types are used by all calls to !?+?!, preventing nonsensical computations like adding a length to a volume.
392 
393 \begin{cfa}
394 forall(dtype Unit) struct scalar { unsigned long value; };
395 struct metres {};
396 struct litres {};
397 
398 forall(dtype U) scalar(U) ?+?(scalar(U) a, scalar(U) b) {
399 	return (scalar(U)){ a.value + b.value };
400 }
401 
402 scalar(metres) half_marathon = { 21098 };
403 scalar(litres) pool = { 2500000 };
404 scalar(metres) marathon = half_marathon + half_marathon;
405 `marathon + pool;` $\C[4in]{// compiler ERROR, mismatched types}$
406 \end{cfa}
407 
408 \section{Performance Experiments} \label{generic-performance-sec}
409 
410 To validate the practicality of this generic type design I have conducted microbenchmark-based tests against a number of comparable code designs in C and \CC{}, first published in \cite{Moss18}.
411 Since all these languages are compiled with the same compiler backend and share a subset essentially comprising standard C, maximal-performance benchmarks should show little runtime variance, differing only in length and clarity of source code.
412 A more illustrative comparison measures the costs of idiomatic usage of each language's features.
413 The code below shows the \CFA{} benchmark tests for a generic stack based on a singly-linked list; the test suite is equivalent for the other languages.
414 The experiment uses element types !int! and !pair(short, char)! and pushes $N = 40M$ elements on a generic stack, copies the stack, clears one of the stacks, and finds the maximum value in the other stack.
415 
416 \begin{cfa}
417 int main() {
418 	int max = 0, val = 42;
419 	stack( int ) si, ti;
420 
421 	REPEAT_TIMED( "push_int", N, push( si, val ); )
422 	TIMED( "copy_int", ti{ si }; )
423 	TIMED( "clear_int", clear( si ); )
424 	REPEAT_TIMED( "pop_int", N, int x = pop( ti ); if ( x > max ) max = x; )
425 
426 	pair( short, char ) max = { 0h, '\0' }, val = { 42h, 'a' };
427 	stack( pair( short, char ) ) sp, tp;
428 
429 	REPEAT_TIMED( "push_pair", N, push( sp, val ); )
430 	TIMED( "copy_pair", tp{ sp }; )
431 	TIMED( "clear_pair", clear( sp ); )
432 	REPEAT_TIMED( "pop_pair", N, pair(short, char) x = pop( tp );
433 		if ( x > max ) max = x; )
434 }
435 \end{cfa}
436 
437 The four versions of the benchmark implemented are C with !void*!-based polymorphism, \CFA{} with parametric polymorphism, \CC{} with templates, and \CC{} using only class inheritance for polymorphism, denoted \CCV{}.
438 The \CCV{} variant illustrates an alternative object-oriented idiom where all objects inherit from a base !object! class, mimicking a Java-like interface; in particular, runtime checks are necessary to safely downcast objects.
439 The most notable difference among the implementations is the memory layout of generic types: \CFA{} and \CC{} inline the stack and pair elements into corresponding list and pair nodes, while C and \CCV{} lack such capability and, instead, must store generic objects via pointers to separately allocated objects.
440 Note that the C benchmark uses unchecked casts as C has no runtime mechanism to perform such checks, whereas \CFA{} and \CC{} provide type safety statically.
441 
442 Figure~\ref{generic-eval-fig} and Table~\ref{generic-eval-table} show the results of running the described benchmark.
443 The graph plots the median of five consecutive runs of each program, with an initial warm-up run omitted.
444 All code is compiled at \texttt{-O2} by gcc or g++ 6.4.0, with all \CC{} code compiled as \CCfourteen{}.
445 The benchmarks are run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency. 446 I conjecture that these results scale across most uses of generic types, given the constant underlying polymorphism implementation. 447 448 \begin{figure} 449 \centering 450 \input{generic-timing} 451 \caption{Benchmark timing results (smaller is better)} \label{generic-eval-fig} 452 \end{figure} 453 454 \begin{table} 455 \caption{Properties of benchmark code} \label{generic-eval-table} 456 \centering 457 \newcommand{\CT}[1]{\multicolumn{1}{c}{#1}} 458 \begin{tabular}{lrrrr} 459 & \CT{C} & \CT{\CFA} & \CT{\CC} & \CT{\CCV} \\ 460 maximum memory usage (MB) & 10\,001 & 2\,502 & 2\,503 & 11\,253 \\ 461 source code size (lines) & 201 & 191 & 125 & 294 \\ 462 redundant type annotations (lines) & 27 & 0 & 2 & 16 \\ 463 binary size (KB) & 14 & 257 & 14 & 37 \\ 464 \end{tabular} 465 \end{table} 466 467 The C and \CCV{} variants are generally the slowest and have the largest memory footprint, due to their less-efficient memory layout and the pointer indirection necessary to implement generic types in those languages; this inefficiency is exacerbated by the second level of generic types in the pair benchmarks. 468 By contrast, the \CFA{} and \CC{} variants run in roughly equivalent time for both the integer and pair because of the equivalent storage layout, with the inlined libraries (\ie{} no separate compilation) and greater maturity of the \CC{} compiler contributing to its lead. 469 \CCV{} is slower than C largely due to the cost of runtime type checking of downcasts (implemented with !dynamic_cast!); the outlier for \CFA{}, pop !pair!, results from the complexity of the generated-C polymorphic code. 470 The gcc compiler is unable to optimize some dead code and condense nested calls; a compiler designed for \CFA{} could more easily perform these optimizations. 471 Finally, the binary size for \CFA{} is larger because of static linking with \CFA{} libraries. 472 473 \CFA{} is also competitive in terms of source code size, measured as a proxy for programmer effort. 474 The line counts in Table~\ref{generic-eval-table} include implementations of !pair! and !stack! types for all four languages for purposes of direct comparison, although it should be noted that \CFA{} and \CC{} have prewritten data structures in their standard libraries that programmers would generally use instead. 475 Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA{} and \CC{} code to 39 and 42 lines, respectively. 476 The difference between the \CFA{} and \CC{} line counts is primarily declaration duplication to implement separate compilation; a header-only \CFA{} library is similar in length to the \CC{} version. 477 On the other hand, due to the language shortcomings mentioned at the beginning of the chapter, C does not have a generic collections library in its standard distribution, resulting in frequent reimplementation of such collection types by C programmers. 478 \CCV{} does not use the \CC{} standard template library by construction, and, in fact, includes the definition of !object! and wrapper classes for !char!, !short!, and !int! in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library. 
479 I justify the given line count by noting that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or wrapper types, which may be similarly verbose. 480 481 Line count is a fairly rough measure of code complexity; another important factor is how much type information the programmer must specify manually, especially where that information is not type-checked. 482 Such unchecked type information produces a heavier documentation burden and increased potential for runtime bugs and is much less common in \CFA{} than C, with its manually specified function pointer arguments and format codes, or \CCV{}, with its extensive use of un-type-checked downcasts, \eg{} !object! to !integer! when popping a stack. 483 To quantify this manual typing, the ``redundant type annotations'' line in Table~\ref{generic-eval-table} counts the number of lines on which the known type of a variable is respecified, either as a format specifier, explicit downcast, type-specific function, or by name in a !sizeof!, !struct! literal, or !new! expression. 484 The \CC{} benchmark uses two redundant type annotations to create new stack nodes, whereas the C and \CCV{} benchmarks have several such annotations spread throughout their code. 485 The \CFA{} benchmark is able to eliminate \emph{all} redundant type annotations through use of the return-type polymorphic !alloc! function in the \CFA{} standard library. 248 486 249 487 \section{Future Work} 250 488 251 % mention future work adding non-type generic parameters, like ints 252 253 % taking advantage of generic layout functions to provide field assertions in forall qualifiers 254 255 % mention packed generic layouts (significantly more complex layout function, but possible) 489 The generic types design presented here is already sufficiently expressive to implement a variety of useful library types. 490 However, some other features based on this design could further improve \CFA{}. 491 492 The most pressing addition is the ability to have non-type generic parameters. 493 C already supports fixed-length array types, \eg{} !int[10]!; these types are essentially generic types with unsigned integer parameters, and allowing \CFA{} users the capability to build similar types is a requested feature. 494 More exotically, the ability to have these non-type parameters depend on dynamic runtime values rather than static compile-time constants opens up interesting opportunities for type-checking problematic code patterns. 495 For example, if a collection iterator was parameterized over the pointer to the collection it was drawn from, then a sufficiently powerful static analysis pass could ensure that that iterator was only used for that collection, eliminating one source of hard-to-find bugs. 496 497 The implementation mechanisms behind this generic types design can also be used to add new features to \CFA{}. 498 One such potential feature would be to add \emph{field assertions} to the existing function and variable assertions on polymorphic type variables. 499 Implementation of these field assertions would be based on the same code that supports member access by dynamic offset calculation for dynamic generic types. 
500 Simulating field access can already be done more flexibly in \CFA{} by declaring a trait containing an accessor function to be called from polymorphic code, but these accessor functions impose some overhead both to write and call, and directly providing field access via an implicit offset parameter would be both more concise and more efficient.
501 Of course, there are language design trade-offs to such an approach, notably that providing the two similar features of field and function assertions would impose a burden of choice on programmers writing traits, with field assertions more efficient, but function assertions more general; given this open design question we have deferred a decision on field assertions until we have more experience using \CFA{}.
502 If field assertions are included in the language, a natural extension would be to provide a structural inheritance mechanism for every !struct! type that simply turns the list of !struct! fields into a list of field assertions, allowing monomorphic functions over that type to be generalized to polymorphic functions over other similar types with added or reordered fields.
503 \CFA{} could also support a packed or otherwise size-optimized representation for generic types based on a similar mechanism --- the layout function would need to be re-written, but nothing in the use of the offset arrays implies that the field offsets need be monotonically increasing.
504 
505 With respect to the broader \CFA{} polymorphism design, the experimental results in Section~\ref{generic-performance-sec} demonstrate that though the runtime impact of \CFA{}'s dynamic virtual dispatch is low, it is not as low as the static dispatch of \CC{} template inlining.
506 However, rather than subject all \CFA{} users to the compile-time costs of ubiquitous template expansion, we are considering more targeted mechanisms for performance-sensitive code.
507 Two promising approaches are an !inline! annotation at polymorphic function call sites to create a template specialization of the function (provided the code is visible), and a different !inline! annotation on polymorphic function definitions to instantiate a specialized version of the function for some set of types.
508 These approaches are not mutually exclusive and allow performance optimizations to be applied only when necessary, without suffering global code bloat.
509 In general, the \CFA{} team believes that separate compilation works well with loaded hardware caches by producing smaller code, which may offset the benefit of larger inlined code.
-
doc/theses/aaron_moss_PhD/phd/introduction.tex
raeb8f70 re99e43f 5 5 6 6 \begin{table}[h] 7 \label{tiobe-table} 8 \caption[TIOBE index over time]{Current top 5 places in the TIOBE index averaged over years} 7 \caption[TIOBE index over time]{Current top 5 places in the TIOBE index averaged over years} \label{tiobe-table} 9 8 10 9 \centering -
doc/theses/aaron_moss_PhD/phd/macros.tex
raeb8f70 re99e43f 12 12 \newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17} % C++17 symbolic name 13 13 \newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20} % C++20 symbolic name 14 \newcommand{\CCV}{\rm C\kern-.1em\hbox{+\kern-.25em+}obj} % C++ virtual symbolic name 14 15 \newcommand{\Csharp}{C\raisebox{-0.7ex}{\Large$^\sharp$}} % C# symbolic name 15 16 … … 19 20 \newcommand{\etal}{\textit{et~al.}} 20 21 22 \newcommand{\myset}[1]{\left\{#1\right\}} 23 21 24 \newcommand{\TODO}[1]{\textbf{TODO:} \textit{#1}} 22 25 \newcommand{\cit}{\textsuperscript{[citation needed]}} -
doc/theses/aaron_moss_PhD/phd/thesis.tex
raeb8f70 re99e43f 21 21 22 22 \usepackage{amsmath,amssymb,amstext} % Lots of math symbols and environments 23 \usepackage[pdftex]{graphicx} % For including graphics N.B. pdftex graphics driver 23 % \usepackage[pdftex]{graphicx} % For including graphics N.B. pdftex graphics driver 24 \usepackage{graphicx} 25 26 \usepackage{amsthm} % for theorem environment 27 \newtheorem{theorem}{Theorem} 28 29 \usepackage{footmisc} % for double refs to the same footnote 24 30 25 31 % Hyperlinks make it very easy to navigate an electronic document. … … 28 34 % Use the "hyperref" package 29 35 % N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE 30 \usepackage[pdftex,pagebackref=false]{hyperref} % with basic options 36 %\usepackage[pdftex,pagebackref=false]{hyperref} % with basic options 37 \usepackage[pagebackref=false]{hyperref} 31 38 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing. 32 39 … … 120 127 \input{background} 121 128 \input{generic-types} 129 \input{resolution-heuristics} 122 130 \input{type-environment} 123 \input{resolution-heuristics}124 131 \input{conclusion} 125 132 -
doc/theses/aaron_moss_PhD/phd/type-environment.tex
raeb8f70 re99e43f
2 2 \label{env-chap}
3 3 
4 Talk about the type environment data structure. Pull from your presentation.
4 One key data structure for expression resolution is the \emph{type environment}.
5 As discussed in Chapter~\ref{resolution-chap}, being able to efficiently determine which type variables are bound to which concrete types or whether two type environments are compatible is a core requirement of the resolution algorithm.
6 Furthermore, expression resolution involves a search through many related possible solutions, so being able to re-use shared subsets of type environment data and to switch between environments quickly is desirable for performance.
7 In this chapter I discuss and empirically compare a number of type environment data structure variants, including some novel variations on the union-find\cite{Galler64} data structure introduced in this thesis.
8 
9 \section{Definitions} \label{env-defn-sec}
10 
11 For purposes of this chapter, a \emph{type environment} $T$ is a set of \emph{type classes} $\myset{T_1, T_2, \cdots, T_{|T|}}$.
12 Each type class $T_i$ contains a set of \emph{type variables} $\myset{v_{i,1}, v_{i,2}, \cdots, v_{i,|T_i|}}$; note that the sets of variables contained in two distinct classes in the same environment must be disjoint.
13 Each individual type class $T_i$ may also be associated with a \emph{bound}, $b_i$; this bound contains the \emph{bound type} which the variables in the type class are replaced with, but also includes other information in \CFACC{}, including whether type conversions are permissible on the bound type and what sort of type variables are contained in the class (data types, function types, or variadic tuples).
14 
15 \begin{table}
16 \caption[Type environment operation summary]{Summary of type environment operations.}
17 \label{env-op-table}
18 \centering
19 \begin{tabular}{r@{\hskip 0.25em}ll}
20 $find(T, v_{i,j})$ & $\rightarrow T_i | \bot$ & Locate class for variable \\
21 $report(T_i)$ & $\rightarrow \{ v_{i,j} \cdots \}$ & List variables for class \\
22 $bound(T_i)$ & $\rightarrow b_i | \bot$ & Get bound for class \\
23 $insert(T, v_{i,1})$ & & New single-variable class \\
24 $add(T_i, v_{i,j})$ & & Add variable to class \\
25 $bind(T_i, b_i)$ & & Set or update class bound \\
26 $unify(T, T_i, T_j)$ & $\rightarrow \top | \bot$ & Combine two type classes \\
27 $split(T, T_i)$ & $\rightarrow T'$ & Revert the last $unify$ operation on $T_i$ \\
28 $combine(T, T')$ & $\rightarrow \top | \bot$ & Merge two environments \\
29 $save(T)$ & $\rightarrow H$ & Get handle for current state \\
30 $backtrack(T, H)$ & & Return to handle state
31 \end{tabular}
32 \end{table}
33 
34 Given this basic structure, type environments in \CFACC{} need to support eleven basic operations, summarized in Table~\ref{env-op-table}.
35 The first six operations are straightforward queries and updates on these data structures:
36 The lookup operation $find(T, v_{i,j})$ produces $T_i$, the type class in $T$ which contains variable $v_{i,j}$, or an invalid sentinel value for no such class.
37 The other two query operations act on type classes, where $report(T_i)$ produces the set $\myset{v_{i,1}, v_{i,2}, \cdots, v_{i,|T_i|}}$ of all type variables in a class $T_i$ and $bound(T_i)$ produces the bound $b_i$ of that class, or a sentinel indicating no bound is set.
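For concreteness, the operations of Table~\ref{env-op-table} can be pictured as a C-style interface along the following lines; these signatures are a hypothetical sketch for exposition, not the actual \CFACC{} API, which is written in \CC{} and differs in detail.

\begin{cfa}
#include <stdbool.h>
#include <stddef.h>

typedef struct TypeEnv   TypeEnv;		// environment T
typedef struct TypeClass TypeClass;		// type class T_i
typedef struct TypeVar   TypeVar;		// type variable v_i,j
typedef struct Bound     Bound;			// class bound b_i
typedef struct EnvHandle EnvHandle;		// saved state H

TypeClass * env_find( TypeEnv * T, const TypeVar * v );	// NULL if no class
size_t class_report( const TypeClass * c, const TypeVar ** vars_out );
const Bound * class_bound( const TypeClass * c );		// NULL if unbound
void env_insert( TypeEnv * T, TypeVar * v );			// new singleton class
void class_add( TypeClass * c, TypeVar * v );
void class_bind( TypeClass * c, Bound * b );
bool env_unify( TypeEnv * T, TypeClass * a, TypeClass * b );	// false on bound clash
TypeEnv * env_split( TypeEnv * T, TypeClass * c );		// revert last unify on c
bool env_combine( TypeEnv * T, TypeEnv * other );		// false on bound clash
EnvHandle * env_save( TypeEnv * T );
void env_backtrack( TypeEnv * T, EnvHandle * h );
\end{cfa}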
38 39 The update operation $insert(T, v_{i,1})$ creates a new type class $T_i$ in $T$ that contains only the variable $v_{i,1}$ and no bound; due to the disjointness property $v_{i,1}$ cannot belong to any other type class in $T$. 40 The $add(T_i, v_{i,j})$ operation adds a new type variable $v_{i,j}$ to class $T_i$; again, $v_{i,j}$ cannot exist elsewhere in $T$. 41 $bind(T_i, b_i)$ mutates the bound for a type class, setting or updating the current bound. 42 43 The $unify$ operation is the fundamental non-trivial operation a type environment data structure must support. 44 $unify(T, T_i, T_j)$ merges a type class $T_j$ into another $T_i$, producing a failure result and leaving $T$ in an invalid state if this merge fails. 45 It is always possible to unify the type variables of both classes by simply taking the union of both sets; given the disjointness property, no checks for set containment are required, and the variable sets can simply be concatenated if supported by the underlying data structure. 46 $unify$ depends on an internal $unifyBound$ operation which may fail. 47 In \CFACC{}, $unifyBound(b_i, b_j) \rightarrow b'_i|\bot$ checks that the type classes contain the same sort of variable, takes the tighter of the two conversion permissions, and checks if the bound types can be unified. 48 If the bound types cannot be unified (\eg{} !struct A! with !int*!), then $unifyBound$ fails, while other combinations of bound types may result in recursive calls. 49 For instance, unifying !R*! with !S*! for type variables !R! and !S! will result in a call to $unify(T, find($!R!$), find($!S!$))$, while unifying !R*! with !int*! will result in a call to $unifyBound$ on !int! and the bound type of the class containing !R!. 50 As such, a call to $unify(T, T_i, T_j)$ may touch every type class in $T$, not just $T_i$ and $T_j$, collapsing the entirety of $T$ into a single type class in extreme cases. 51 For more information on \CFA{} unification, see \cite{Bilson03}. 52 The inverse of $unify$ is $split(T, T_i)$, which produces a new environment $T'$ which is the same as $T$ except that $T_i$ has been replaced by two classes corresponding to the arguments to the previous call to $unify$ on $T_i$. 53 If there has been no call to $unify$ on $T_i$ (\ie{} $T_i$ is a single-element class) $T_i$ is absent in $T'$. 54 55 Given the nature of the expression resolution problem as backtracking search, caching and concurrency are both useful tools to decrease runtime. 56 However, both of these approaches may produce multiple distinct descendants of the same initial type environment, which have possibly been mutated in incompatible ways. 57 As such, to effectively employ either concurrency or caching, the type environment data structure must support an efficient method to check if two type environments are compatible and merge them if so. 58 $combine(T,T')$ attempts to merge an environment $T'$ into another environment $T$, producing $\top$ if successful or leaving $T$ in an invalid state and producing $\bot$ otherwise. 59 The invalid state of $T$ on failure is not important, given that a combination failure will result in the resolution algorithm backtracking to a different environment. 60 $combine$ proceeds by calls to $insert$, $add$, and $unify$ as needed, and can be roughly thought of as calling $unify$ on every pair of classes in $T$ that have variables $v'_{i,j}$ and $v'_{i,k}$ in the same class $T'_i$ in $T'$. 
61 Like $unify$, $combine$ can always find a mutually-consistent partition of type variables into classes (in the extreme case, all type variables from $T$ and $T'$ in a single type class), but may fail due to inconsistent bounds on merged type classes. 62 63 Finally, the backtracking access patterns of the compiler can be exploited to reduce memory usage or runtime through use of an appropriately designed data structure. 64 The set of mutations to a type environment across the execution of the resolution algorithm produce an implicit tree of related environments, and the backtracking search typically focuses only on one leaf of the tree at once, or at most a small number of closely-related nodes as arguments to $combine$. 65 As such, the ability to save and restore particular type environment states is useful, and supported by the $save(T) \rightarrow H$ and $backtrack(T, H)$ operations, which produce a handle for the current environment state and mutate an environment back to a previous state, respectively. 66 These operations can be naively implemented by a deep copy of $T$ into $H$ and vice versa, but have more efficient implementations in persistency-aware data structures. 67 68 \section{Approaches} 69 70 \subsection{Na\"{\i}ve} 71 72 The type environment data structure used in Bilson's\cite{Bilson03} original implementation of \CFACC{} is a straightforward translation of the definitions in Section~\ref{env-defn-sec} to \CC{} code; a !TypeEnvironment! contains a list of !EqvClass! type equivalence classes, each of which contains the type bound information and a tree-based sorted set of type variables. 73 This approach has the benefit of being easy to understand and not imposing life-cycle or inheritance constraints on its use, but, as can be seen in Table~\ref{env-bounds-table}, does not support many of the desired operations with any particular efficiency. 74 Some variations on this structure may improve performance somewhat; for instance, replacing the !EqvClass! variable storage with a hash-based set would reduce search and update times from $O(\log n)$ to amortized $O(1)$, while adding an index for the type variables in the entire environment would remove the need to check each type class individually to maintain the disjointness property. 75 These improvements do not change the fundamental issues with this data structure, however. 76 77 \subsection{Incremental Inheritance} 78 79 One more invasive modification to this data structure which I investigated is to support swifter combinations of closely-related environments in the backtracking tree by storing a reference to a \emph{parent} environment within each environment, and having that environment only store type classes which have been modified with respect to the parent. 80 This approach provides constant-time copying of environments, as a new environment simply consists of an empty list of type classes and a reference to its (logically identical) parent; since many type environments are no different than their parent, this speeds backtracking in this common case. 81 Since all mutations made to a child environment are by definition compatible with the parent environment, two descendants of a common ancestor environment can be combined by iteratively combining the changes made in one environment then that environment's parent until the common ancestor is reached, again re-using storage and reducing computation in many cases. 
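A minimal C sketch of this parent-linked layout follows; the structure and names are assumptions made for illustration, not the classes used in \CFACC{}.

\begin{cfa}
#include <stdlib.h>

struct TypeClass;	// class representation, as elsewhere in this sketch

// singly-linked list of the classes added or modified in this environment
typedef struct LocalClass {
	struct TypeClass  * cls;
	struct LocalClass * next;
} LocalClass;

// parent-linked incremental environment: a child records only the classes it
// has changed, and defers all other lookups to its (logically identical) parent
typedef struct IncEnv {
	const struct IncEnv * parent;	// ancestor environment, or NULL for the root
	LocalClass          * local;	// changes relative to the parent
} IncEnv;

// constant-time "copy": a fresh child with no local changes yet
IncEnv * env_branch( const IncEnv * parent ) {
	IncEnv * child = calloc( 1, sizeof(IncEnv) );
	if ( child ) child->parent = parent;
	return child;
}
\end{cfa}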
82 83 For this environment I also employed a lazily-generated index of type variables to their containing class, which could be in either the current environment or an ancestor. 84 Any mutation of a type class in an ancestor environment would cause that class to be copied into the current environment before mutation, as well as added to the index, ensuring that all local changes to the type environment are listed in its index. 85 However, not adding type variables to the index until lookup or mutation preserves the constant-time environment copy operation in the common case in which the copy is not mutated from its parent during its life-cycle. 86 87 This approach imposes some performance penalty on $combine$ if related environments are not properly linked together, as the entire environment needs to be combined rather than just the diff, but is correct as long as the ``null parent'' base case is properly handled. 88 The life-cycle issues are somewhat more complex, as many environments may descend from a common parent, and all of these need their parent to stay alive for purposes of lookup. 89 These issues can be solved by ``flattening'' parent nodes into their children before the parents leave scope, but given the tree structure of the inheritance graph it is more straightforward to store the parent nodes in reference-counted or otherwise automatically garbage-collected heap storage. 90 91 \subsection{Union-Find} \label{env-union-find-approach} 92 93 Given the nature of the classes of type variables as disjoint sets, another natural approach to implementing a type environment is the union-find disjoint set data structure\cite{Galler64}. 94 Union-find efficiently implements two operations over a partition of a collection of elements into disjoint sets; $find(x)$ locates the \emph{representative} of $x$, the element which canonically names its set, while $union(r, s)$ merges two sets represented by $r$ and $s$, respectively. 95 The union-find data structure is based on providing each element with a reference to its parent element, such that the root of a tree of elements is the representative of the set of elements contained in the tree. 96 $find$ is then implemented by a search up to the parent, generally combined with a \emph{path compression} step that links nodes more directly to their ancestors to speed up subsequent searches. 97 $union$ involves making the representative of one set a child of the representative of the other, generally employing a rank- or size-based heuristic to ensure that the tree remains somewhat balanced. 98 If both path compression and a balancing heuristic are employed, both $union$ and $find$ run in amortized $O(\alpha(n))$ worst-case time; this bound by the inverse Ackermann function is a small constant for all practical values of $n$. 99 100 The union-find $find$ and $union$ operations have obvious applicability to the $find$ and $unify$ type environment operations in Table~\ref{env-op-table}, but the union-find data structure must be augmented to fully implement the type environment operations. 101 In particular, the type class bound cannot be easily included in the union-find data structure, as the requirement to make it the class representative breaks the balancing properties of $union$, and requires too-close integration of the type environment $unifyBound$ internal operation. 102 This issue can be solved by including a side map from class representatives to the type class bound. 
103 If placeholder values are inserted in this map for type classes without bounds then this also has the useful property that the key set of the map provides an easily obtainable list of all the class representatives, a list which cannot be derived from the union-find data structure without a linear search for class representatives through all elements.
104 
105 \subsection{Union-Find with Classes} \label{env-union-find-classes-approach}
106 
107 Another type environment operation not supported directly by the union-find data structure is $report$, which lists the type variables in a given class, and similarly $split$, which reverts a $unify$ operation.
108 Since the union-find data structure stores only links from children to parents and not vice-versa, there is no way to reconstruct a class from one of its elements without a linear search over the entire data structure, with $find$ called on each element to check its membership in the class.
109 The situation is even worse for the $split$ operation, which would require extra information to maintain the order that each child was added to its parent node.
110 Unfortunately, the literature\cite{Tarjan84,Galil91,Patwary10} on union-find does not present a way to keep references to children without breaking the asymptotic time bounds of the algorithm; I have discovered a method to do so which, despite its simplicity, seems to be novel.
111 
112 \TODO{port figure from slideshow}
113 
114 The core idea of this ``union-find with classes'' data structure and algorithm is to keep the members of each class stored in a circularly-linked list.
115 Aho, Hopcroft, and Ullman also include a circularly-linked list in their 1974 textbook~\cite{Aho74}.
116 However, the algorithm presented by Aho~\etal{} has an entirely flat class hierarchy, where all elements are direct children of the representative, giving constant-time $find$ at the cost of linear-time $union$ operations.
117 In my version, the list data structure does not affect the layout of the union-find tree, maintaining the same asymptotic bounds as union-find.
118 In more detail, each element is given a !next! pointer to another element in the same class; this !next! pointer initially points to the element itself.
119 When two classes are unified, the !next! pointers of the representatives of those classes are swapped, splicing the two circularly-linked lists together.
120 Importantly, though this approach requires an extra pointer per element, it does maintain the linear space bound of union-find, and because it only requires updating the two root nodes in $union$ it does not asymptotically increase runtime either.
121 The basic approach is compatible with all path-compression techniques, and allows the members of any class to be retrieved in time linear in the size of the class simply by following the !next! pointers from any element.
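The following self-contained C sketch illustrates the idea; the names and exact layout are illustrative rather than the \CFACC{} implementation. The !parent! links form the usual union-find tree, while the !next! links form the circular member list, spliced by swapping the two representatives' !next! pointers during $union$.

\begin{cfa}
#include <stddef.h>

typedef struct UfcNode {
	struct UfcNode * parent;	// union-find tree; the root represents the class
	struct UfcNode * next;		// circular list of all members of the class
	unsigned rank;			// union-by-rank heuristic
} UfcNode;

void ufc_init( UfcNode * x ) { x->parent = x; x->next = x; x->rank = 0; }

UfcNode * ufc_find( UfcNode * x ) {	// find with path compression
	if ( x->parent != x ) x->parent = ufc_find( x->parent );
	return x->parent;
}

void ufc_union( UfcNode * a, UfcNode * b ) {
	UfcNode * ra = ufc_find( a ), * rb = ufc_find( b );
	if ( ra == rb ) return;		// already in the same class
	if ( ra->rank < rb->rank ) { UfcNode * t = ra; ra = rb; rb = t; }
	rb->parent = ra;		// usual union by rank
	if ( ra->rank == rb->rank ) ra->rank++;
	// splice the two circular member lists by swapping the roots' next pointers
	UfcNode * t = ra->next; ra->next = rb->next; rb->next = t;
}
\end{cfa}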
122 
123 If the path-compression optimization is abandoned, union-find with classes also encodes a reversible history of all the $union$ operations applied to a given class.
124 Theorem~\ref{env-reverse-thm} demonstrates that the !next! pointer of the representative of a class always points to a leaf from the last-added subtree.
125 This property is sufficient to reverse the most-recent $union$ operation by finding the ancestor of that leaf that is an immediate child of the representative, breaking its parent link, and swapping the !next! pointers back\footnote{Union-by-size may be a more appropriate approach than union-by-rank in this instance, as adding two known sizes is a reversible operation, but the rank increment operation cannot be reliably reversed.}.
126 Once the $union$ operation has been reversed, Theorem~\ref{env-reverse-thm} still holds for the reduced class, and the process can be repeated recursively until the entire set is split into its component elements.
127 
128 \begin{theorem} \label{env-reverse-thm}
129 The !next! pointer of a class representative in the union-find with classes algorithm without path compression points to a leaf from the most-recently-added subtree.
130 \end{theorem}
131 
132 \begin{proof}
133 By induction on the height of the tree. \\
134 \emph{Base case:} A height 1 tree by definition includes only a single item. In such a case, the representative's !next! pointer points to itself by construction, and the representative is the most-recently-added (and only) leaf in the tree. \\
135 \emph{Inductive case:} By construction, a tree $T$ of height greater than 1 has children of the root (representative) node that were representative nodes of classes merged by $union$. By definition, the most-recently-added subtree $T'$ has a smaller height than $T$, thus by the inductive hypothesis before the most-recent $union$ operation the !next! pointer of the root of $T'$ pointed to one of the leaf nodes of $T'$; by construction the !next! pointer of the root of $T$ points to this leaf after the $union$ operation.
136 \end{proof}
137 
138 On its own, union-find, like the na\"{\i}ve approach, has no special constraints on life-cycle or inheritance, but it can be used as a building block in more sophisticated type environment data structures.
139 
140 \subsection{Persistent Union-Find}
141 
142 Given the backtracking nature of the resolution algorithm discussed in Section~\ref{env-defn-sec}, the abilities to quickly switch between related versions of a type environment and to de-duplicate shared data between environments are both assets to performance.
143 Conchon and Filli\^{a}tre~\cite{Conchon07} present a persistent union-find data structure based on the persistent array of Baker~\cite{Baker78,Baker91}.
144 
145 \TODO{port figure from slideshow}
146 
147 In Baker's persistent array, an array reference contains either a pointer to the array or a pointer to an \emph{edit node}; these edit nodes contain an array index, the value in that index, and another array reference pointing either to the array or a different edit node.
148 In this manner, a tree of edits is formed, rooted at the actual array.
149 Reads from the actual array at the root can be performed in constant time, as with a non-persistent array.
150 The persistent array can be mutated in constant time by directly modifying the underlying array, then replacing its array reference with an edit node containing the mutated index, the previous value at that index, and a reference to the mutated array. If the current array reference is not the root, mutation consists simply of constructing a new edit node encoding the change and referring to the current array reference.
151 The mutation algorithm at the root is in some sense a special case of the key operation on persistent arrays, $reroot$.
152 
153 A rerooting operation takes any array reference and makes it the root node of the array.
A rerooting operation takes any array reference and makes it the root node of the array.
This is accomplished by tracing the path from that edit node to the root node of the array (always the underlying array), recursively applying the edits to the underlying array and replacing each edit node's successor with the inverse edit.
In this way, any previous state of the persistent array can be restored in time proportional to the number of edits between that state and the current state of the array.
While $reroot$ does maintain the same value mapping in every version of the persistent array, the internal mutations it performs mean that it is not thread-safe, and it must be used behind a lock in a concurrent context.
Also, the root node holding the actual array may in principle be anywhere in the tree, and it has no information about its leaf nodes, so some form of automatic garbage collection is generally required for the data structure.
Since the graph of edit nodes is tree-structured, reference-counting approaches suffice for garbage collection; Conchon and Filli\^{a}tre~\cite{Conchon07} also observe that if the only $reroot$ operations are for backtracking then the tail of inverse edit nodes may be elided, suggesting the possibility of stack-based memory management.

While Conchon and Filli\^{a}tre~\cite{Conchon07} implement their persistent union-find data structure over a universe of integer elements in the fixed range $[1,N]$, the type environment problem needs more flexibility.
In particular, an arbitrary number of type variables must be added to the environment.
As such, a persistent hash table is a more suitable structure than a persistent array, providing the same expected asymptotic time bounds while allowing a dynamic number of elements.
Besides replacing the underlying array with a hash table, the other major change in this approach is to replace the two types of array references, !Array! and !Edit!, with four node types, !Table!, !Edit!, !Add!, and !Remove!, where !Add! adds a new key-value pair, !Remove! removes a key, and !Edit! mutates an existing key-value pair.
In this variant of \CFACC{}, this persistent hash table is used as the side map discussed in Section~\ref{env-union-find-approach} for class bounds.
The actual union-find data structure is slightly modified from this approach, with a !Base! node containing the root union-find data structure, !Add! nodes adding new elements, !AddTo! nodes defining the union of two type classes, and !Remove! and !RemoveFrom! nodes as inverses of the previous two, for purposes of maintaining the edit list.
Making !AddTo! and !RemoveFrom! single nodes shortens the edit path for improved performance, while also providing semantic information missing from the raw array updates in Conchon and Filli\^{a}tre's data structure.
The single-node approach does, however, break under most path-compression algorithms; !RemoveFrom! can be applied to the underlying data structure using the ``leaf of last union'' approach discussed in Section~\ref{env-union-find-classes-approach}; this was judged an acceptable trade-off for the added semantic information and shortened paths.
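
To make the preceding description concrete, the following \CC{} sketch shows one possible encoding of these edit-node variants; the layout and identifiers are hypothetical and are not the \CFACC{} declarations.
\begin{lstlisting}[language=C++]
// illustrative edit-node variants for the persistent union-find; hypothetical layout
#include <memory>
#include <string>
#include <unordered_map>

using TypeVar = std::string;  // stand-in for the real type-variable representation

struct UnionFindNode {
	enum class Kind {
		Base,       // holds the root union-find data structure
		Add,        // a new type variable was added as a singleton class
		Remove,     // inverse of Add
		AddTo,      // the class rooted at `child` was merged into the class rooted at `root`
		RemoveFrom  // inverse of AddTo: split off the most-recently-added subtree
	} kind;
	std::unordered_map<TypeVar, TypeVar> parents;  // used only by Base
	TypeVar var;                                   // used by Add/Remove
	TypeVar root, child;                           // used by AddTo/RemoveFrom
	std::shared_ptr<UnionFindNode> next;           // version this edit is relative to
};
\end{lstlisting}
A parallel set of !Table!, !Edit!, !Add!, and !Remove! node types plays the same role for the persistent hash table of class bounds; because an !AddTo! node records an entire $union$ as a single edit, reversing it with !RemoveFrom! relies on the ``leaf of last union'' property rather than on a log of individual parent-pointer updates.
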
Maintaining explicit information on $union$ operations in the persistent union-find edit tree in the form of !AddTo! and !RemoveFrom! nodes exposes a new option for combining type environments.
If the type environments are part of the same edit tree, one environment $T'$ can be combined with another $T$ by testing only the edits on the path from $T'$ to $T$, in both the persistent union-find data structure describing the classes and the persistent hash table containing the class bounds.
This is generally more efficient than testing the compatibility of all type classes in $T'$, as only those that actually differ from those in $T$ must be considered.

The procedure for $combine(T, T')$ based on edit paths is as follows:
The shared edit trees for classes and bindings are rerooted at $T$, and the path from $T'$ to $T$ is followed to create a list of actual edits.
By tracking the state of each element, redundant changes such as an !Edit! followed by another !Edit! of the same key can be reduced to their form in $T'$ by dropping the later (closer to $T$) !Edit!; !Add! and !Remove! edits of the same key cancel similarly.
This procedure is repeated for both the class edit tree and the binding edit tree.
When the list of net changes to the environment has been produced, the additive changes are applied to $T$.
For example, if a type class exists in $T'$ but not $T$, the corresponding !Add! edit is applied to $T$, but in the reverse situation the !Remove! edit is not applied to $T$, as the intention is to produce a new environment representing the union of the two sets of type classes; similarly, !AddTo! edits are applied to unify type classes in $T$ that are united in $T'$, but !RemoveFrom! edits that split type classes are not.
The new environment, $T''$, can always be constructed with a consistent partitioning of type variables; in the extreme case, all variables from both $T$ and $T'$ are united in a single type class in $T''$.
Where $combine$ can fail is in unifying the bound types; if any class in $T'$ has a class bound that does not unify with the bound of the merged class in $T''$, then $combine$ fails.
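
The edit-collapsing step can be sketched in \CC{} as follows; this is an illustrative reduction over an already-extracted edit list, not the \CFACC{} implementation, the identifiers are hypothetical, and the !Add!/!Remove! cancellation and the union-find side of the procedure are omitted for brevity.
\begin{lstlisting}[language=C++]
// illustrative collapse of a binding edit path ordered from T' toward T
#include <string>
#include <unordered_set>
#include <vector>

struct BindingEdit {
	enum class Kind { Add, Remove, Edit } kind;
	std::string key;    // type-class key
	std::string bound;  // bound type (unused for Remove)
};

// keep only the T'-most edit for each key: any later (closer to T) edit for the
// same key is redundant, since the earlier edit already gives the key's state in T'
std::vector<BindingEdit> collapseEdits( const std::vector<BindingEdit> & path ) {
	std::vector<BindingEdit> net;
	std::unordered_set<std::string> seen;
	for ( const BindingEdit & e : path ) {
		if ( seen.insert( e.key ).second ) net.push_back( e );
	}
	return net;
}
\end{lstlisting}
Only the additive changes from the resulting net list are then applied to $T$, as described above.
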
\section{Analysis}

In this section I present asymptotic analyses of the various approaches to a type environment data structure discussed in the previous section.

\begin{table}
\caption[Type environment operation bounds]{Worst-case analysis of type environment operations. $n$ is the number of type classes, $m$ the maximum size of a type class, and $p$ the edit distance between two environments or a single environment and the empty environment; $u(n)$ captures the recursive cost of class unification.}
\label{env-bounds-table}
\centering
\begin{tabular}{rllll}
\hline
            & \textbf{Na\"{\i}ve}  & \textbf{Incremental} & \textbf{Union-Find}       & \textbf{U-F with Classes} \\
\hline
$find$      & $O(n)$               & $O(p)$               & $O(\alpha(m))$            & $O(\log m)$ \\
$report$    & $O(m)$               & $O(m)$               & $O(n \log m)$             & $O(m)$ \\
$bound$     & $O(1)$               & $O(1)$               & $O(1)$                    & $O(1)$ \\
$insert$    & $O(1)$               & $O(1)$               & $O(1)$                    & $O(1)$ \\
$add$       & $O(1)$               & $O(1)$               & $O(1)$                    & $O(1)$ \\
$bind$      & $O(1)$               & $O(1)$               & $O(1)$                    & $O(1)$ \\
$unify$     & $O(m + u(n))$        & $O(m + u(n))$        & $O(\log m + u(n))$        & $O(\log m + u(n))$ \\
$split$     & ---                  & ---                  & ---                       & $O(\log m)$ \\
$combine$   & $O(nm \cdot u(n))$   & $O(pm \cdot u(n))$   & $O(n \log m \cdot u(n))$  & $O(p \log m \cdot u(n))$ \\
$save$      & $O(nm)$              & $O(1)$               & $O(nm)$                   & $O(1)$ \\
$backtrack$ & $O(nm)$              & $O(pm)$              & $O(nm)$                   & $O(p)$ \\
\hline
\end{tabular}
\end{table}

% Future work: design multi-threaded version of C&F persistent map --- core idea is some sort of thread-boundary edit node