Changeset 266f36d for doc/theses
- Timestamp: Jan 17, 2019, 5:23:31 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 28c120b
- Parents: 3c54a71
- File: 1 edited
doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex
\section{Conversion Cost}

\TODO{Open with discussion of C's ``usual arithmetic conversions''}

Loosely defined, the conversion cost counts the implicit conversions used by an interpretation.
More precisely, the cost is a lexicographically-ordered tuple, where each element corresponds to a particular kind of conversion.
In the original \CFA{} design by Bilson~\cite{Bilson03}, conversion cost is a three-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degrees of safe (widening) conversions.
The following example lists the Bilson-model cost of calling each of these functions with two !int! arguments:

\begin{cfa}
void f(char, long); $\C{// (1,0,1)}$
forall(otype T) void f(T, long); $\C{// (0,1,1)}$
void f(long, long); $\C{// (0,0,2)}$
void f(int, unsigned long); $\C{// (0,0,2)}$
void f(int, long); $\C{// (0,0,1)}$
\end{cfa}

Note that safe and unsafe conversions are handled differently: \CFA{} counts the ``distance'' of safe conversions (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), but only counts the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1).
As part of adding reference types to \CFA{} (see Section~\ref{type-features-sec}), Schluntz added a new $reference$ element to the cost tuple, which counts the number of implicit reference-to-rvalue conversions performed, so that candidate interpretations can be distinguished by how closely they match the nesting of reference types; since references are meant to act almost indistinguishably from lvalues, this $reference$ element is the least significant in the lexicographic comparison of cost tuples.

I have also refined the \CFA{} cost model as part of this thesis work.
Bilson's \CFA{} cost model includes the cost of polymorphic type bindings from a function's type assertions in the $poly$ element of the cost tuple; this has the effect of making more-constrained functions more expensive than less-constrained functions.
However, type assertions actually make a function \emph{less} polymorphic, and as such functions with more type assertions should be preferred in type resolution.
As an example, some iterator-based algorithms can work on a forward iterator that only provides an increment operator, but are more efficient on a random-access iterator that can be incremented by an arbitrary number of steps in a single operation.
The random-access iterator has more type constraints, but should be chosen whenever those constraints can be satisfied.
As such, I have added a $specialization$ element to the \CFA{} cost tuple, the values of which are always negative.
Each type assertion subtracts 1 from $specialization$, so that more-constrained functions are chosen over less-constrained functions, all else being equal.
A more sophisticated design would define a partial order over sets of type assertions by set inclusion (\ie{} one function would only cost less than another if it had all of the same assertions and more, rather than simply a larger total number of assertions), but I did not judge the added complexity of computing and testing this order to be worth the gain in specificity.
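As a concrete sketch of this preference (the overloads of !g! below are purely illustrative, and the assertions implicitly carried by !otype! are ignored, being identical for both candidates), when both of the following declarations otherwise match a call, the second is chosen because its explicit assertion gives it $specialization$ cost $-1$ rather than $0$:

\begin{cfa}
forall(otype T) void g(T, T); $\C{// specialization 0}$
forall(otype T | { T ?+?(T, T); }) void g(T, T); $\C{// specialization -1, preferred}$
\end{cfa}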
\TODO{Discuss $var$ element, bring in Ditchfield 4.4.4 (p.89), add discussion of type specialization}

% Discuss changes to cost model, as promised in Ch. 2