Index: doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex
===================================================================
--- doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex	(revision af00c4589447fcba274fd7a68bac3d284e211d76)
+++ doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex	(revision 266f36db4f5b143e0d0daccf69e320d688c5c0d1)
@@ -9,5 +9,50 @@
 \section{Conversion Cost}
 
+\TODO{Open with discussion of C's ``usual arithmetic conversions''}
 
+Loosely defined, the conversion cost counts the implicit conversions used by an interpretation.
+More precisely, the cost is a lexicographically-ordered tuple, where each element corresponds to a particular kind of conversion.
+In the original \CFA{} design by Bilson~\cite{Bilson03}, conversion cost is a three-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degrees of safe (widening) conversions.
+The example below lists the Bilson-model cost of calling each of these functions with two !int! arguments:
+
+\begin{cfa}
+void f(char, long); $\C{// (1,0,1)}$
+forall(otype T) void f(T, long); $\C{// (0,1,1)}$
+void f(long, long); $\C{// (0,0,2)}$
+void f(int, unsigned long); $\C{// (0,0,2)}$
+void f(int, long); $\C{// (0,0,1)}$
+\end{cfa}
+
+Note that safe and unsafe conversions are handled differently: \CFA{} counts the ``distance'' of a safe conversion (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), but counts only the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1).
+As part of adding reference types to \CFA{} (see Section~\ref{type-features-sec}), Schluntz added a new $reference$ element to the cost tuple, which counts the number of implicit reference-to-rvalue conversions performed, so that candidate interpretations can be distinguished by how closely they match the nesting of reference types.
+Since references are meant to act almost indistinguishably from lvalues, this $reference$ element is the least significant in the lexicographic comparison of cost tuples.
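+
+As a sketch of this effect, consider resolving the call !g(x)! against the following hypothetical overloads, writing the cost as a four-tuple $(unsafe, poly, safe, reference)$ purely for illustration:
+\begin{cfa}
+int x; $\C{// illustrative declarations, not library code}$
+void g(int &); $\C{// (0,0,0,0), binds directly to the lvalue x}$
+void g(int); $\C{// (0,0,0,1), one reference-to-rvalue conversion}$
+\end{cfa}
+Both candidates are otherwise free of conversions, so the least-significant $reference$ element alone selects !g(int &)!.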
+
+I have also refined the \CFA{} cost model as part of this thesis work. 
+Bilson's \CFA{} cost model includes the cost of polymorphic type bindings from a function's type assertions in the $poly$ element of the cost tuple; this has the effect of making more-constrained functions more expensive than less-constrained functions. 
+However, type assertions actually make a function \emph{less} polymorphic, and as such, functions with more type assertions should be preferred in type resolution.
+As an example, some iterator-based algorithms can work on a forward iterator that only provides an increment operator, but are more efficient on a random-access iterator that can be incremented by an arbitrary number of steps in a single operation. 
+The random-access iterator has more type constraints, but should be chosen whenever those constraints can be satisfied. 
+I have therefore added a $specialization$ element to the \CFA{} cost tuple, the values of which are always negative.
+Each type assertion subtracts 1 from $specialization$, so that more-constrained functions are chosen over less-constrained functions, all else being equal.
+A more sophisticated design would define a partial order over sets of type assertions by set inclusion (\ie{} one function would only cost less than another if it had all of the same assertions, plus more, rather than just more total assertions), but I did not judge the added complexity of computing and testing this order to be worth the gain in specificity.
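+
+For instance, returning to the iterator example, the following hypothetical pair of !skip! functions (illustrative declarations, not library code) is distinguished purely by assertion count:
+\begin{cfa}
+forall(otype T | { void advance(T &); }) $\C{// forward-iterator variant}$
+void skip(T & it, int n); $\C{// 1 assertion: specialization cost -1}$
+forall(otype T | { void advance(T &); void advance(T &, int); }) $\C{// random-access variant}$
+void skip(T & it, int n); $\C{// 2 assertions: specialization cost -2}$
+\end{cfa}
+For any argument type satisfying both assertion sets, the second !skip! is chosen, since its $specialization$ cost of $-2$ is lower.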
+
+\TODO{Discuss $var$ element, bring in Ditchfield 4.4.4 (p.89), add discussion of type specialization}
 
 % Discuss changes to cost model, as promised in Ch. 2
