# Changeset 902b123 for doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex

Timestamp:
Feb 8, 2019, 5:55:43 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr
Children:
0f78f3c7
Parents:
8d752592
Message:

thesis: expand details of conversion cost based on recent exploration with Peter

File:
1 edited

## doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex

\subsection{Conversion Cost} \label{conv-cost-sec}

C does not have an explicit cost model for implicit conversions, but the ``usual arithmetic conversions''\cite[\S{}6.3.1.8]{C11} used to decide which arithmetic operators to use define one implicitly. The only context in which C has name overloading is the arithmetic operators, and the usual arithmetic conversions define a \emph{common type} for mixed-type arguments to binary arithmetic operators. Since for backward-compatibility purposes the conversion costs of \CFA{} must produce an equivalent result to these common type rules, it is appropriate to summarize \cite[\S{}6.3.1.8]{C11} here:
\begin{itemize}
\item If either operand is a floating-point type, the common type is the largest floating-point type. If either operand is !_Complex!, the common type is also !_Complex!.
\item If both operands are of integral type, the common type has the same size\footnote{Technically, the C standard defines a notion of \emph{rank}, a distinct value for each \lstinline{signed} and \lstinline{unsigned} pair; integral types of the same size thus may have distinct ranks. For instance, if \lstinline{int} and \lstinline{long} are the same size, \lstinline{long} will have greater rank. The standard-defined types are declared to have greater rank than any types added as compiler extensions.} as the larger type.
\item If the operands have opposite signs, the common type is !signed! if the !signed! operand is strictly larger, or !unsigned! otherwise.
\item If the operands have equal signs, the common type has that sign as well.
\end{itemize}
Beginning with the work of Bilson~\cite{Bilson03}, \CFA{} has defined a \emph{conversion cost} for each function call in a way that generalizes C's conversion rules.
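The integral half of the common type rules above can be sketched as a small executable model. This is an illustration only: the rank values and the `rank + sign' representation are assumptions made for the example, not structures taken from the thesis or from \CFACC{}.

```c
#include <stdbool.h>

/* Toy model of the integral common-type rules of the usual arithmetic
   conversions (C11 6.3.1.8).  A type is modelled as (rank, is_signed);
   the ranks used in examples are illustrative assumptions. */
typedef struct { int rank; bool is_signed; } Ty;

/* Common type of two integral operands:
   - equal signs: the larger-ranked type, keeping that sign;
   - opposite signs: the signed type wins only if it is strictly
     larger, otherwise the result is unsigned. */
Ty common_type(Ty a, Ty b) {
    Ty big   = (a.rank >= b.rank) ? a : b;
    Ty small = (a.rank >= b.rank) ? b : a;
    if (a.is_signed == b.is_signed) return big;
    if (big.is_signed && big.rank > small.rank) return big;
    return (Ty){ big.rank, false };
}
```

For instance, with assumed ranks !int! = 3, !long! = 4, this model yields !unsigned int! for !int! + !unsigned int! and !long! for !int! + !long!, matching the summary above.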
Loosely defined, the conversion cost counts the implicit conversions utilized by an interpretation. More specifically, the cost is a lexicographically-ordered tuple, where each element corresponds to a particular kind of conversion. In Bilson's \CFA{} design, conversion cost is a 3-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degrees of safe (widening) conversions. The degree of a safe conversion is calculated as path weight in a weighted directed graph of safe conversions between types; the current version of this graph is shown in Figure~\ref{safe-conv-graph-fig}. The safe conversion graph is designed such that the common type $c$ of two types $u$ and $v$ is compatible with the C standard definitions from \cite[\S{}6.3.1.8]{C11} and can be calculated as the unique type minimizing the sum of the path weights of $\overrightarrow{uc}$ and $\overrightarrow{vc}$. The following example lists the cost in the Bilson model of calling each of the following functions with two !int! parameters. Note that safe and unsafe conversions are handled differently; \CFA{} counts the distance of safe conversions (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), while only counting the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1).
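The lexicographic ordering of Bilson-style cost tuples can be sketched as follows. The comparison logic reflects the description above; the concrete cost values in the examples (\eg{} a safe distance of 2 for !int! to !unsigned long!) are assumptions for illustration, not values computed by \CFACC{}.

```c
/* Bilson-style conversion cost: a lexicographically ordered 3-tuple
   (unsafe, poly, safe).  Sketch only, not CFA's implementation. */
typedef struct { int unsafe, poly, safe; } Cost;

/* Returns <0, 0, or >0 like strcmp; earlier elements dominate, so a
   single unsafe conversion outweighs any amount of safe distance. */
int cost_cmp(Cost a, Cost b) {
    if (a.unsafe != b.unsafe) return a.unsafe - b.unsafe;
    if (a.poly   != b.poly)   return a.poly   - b.poly;
    return a.safe - b.safe;
}
```

Under this ordering, an interpretation costing $(0,0,1)$ (say, !int! to !long!) beats $(0,0,2)$ (say, !int! to !unsigned long!), and both beat $(1,0,0)$ (any narrowing, \eg{} !int! to !char!), because $unsafe$ is compared first.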
As part of adding reference types to \CFA{} (see Section~\ref{type-features-sec}), Schluntz added a new $reference$ element to the cost tuple, which counts the number of implicit reference-to-rvalue conversions performed, so that candidate interpretations can be distinguished by how closely they match the nesting of reference types; since references are meant to act almost indistinguishably from lvalues, this $reference$ element is the least significant in the lexicographic comparison of cost tuples. Since the user programmer provides parameters, but cannot provide guidance on return type, specialization cost is not counted for the return type list. Since both $vars$ and $specialization$ are properties of the declaration rather than any particular interpretation, they are prioritized less than the interpretation-specific conversion costs from Bilson's original 3-tuple. A final refinement I have made to the \CFA{} cost model is with regard to choosing between arithmetic conversions. The C standard states that the common type of !int! and !unsigned int! is !unsigned int! and that the common type of !int! and !long! is !long!, but does not provide guidance for making a choice between those two conversions. Bilson's \CFACC{} used conversion costs based on a graph similar to that in Figure~\ref{safe-conv-graph-fig}, but with arcs selectively removed to disambiguate the costs of such conversions. However, the arc removal in Bilson's design resulted in inconsistent and somewhat surprising costs, with conversion to the next-larger same-sign type generally (but not always) double the cost of conversion to the !unsigned! type of the same size. In my redesign, for consistency with the approach of the usual arithmetic conversions, which select a common type primarily based on size, but secondarily on sign, the costs of arcs in the new graph are defined to be $1$ to go to a larger size, but $1 + \varepsilon$ to change the sign.
This means that sign conversions are approximately the same cost as widening conversions, but slightly more expensive (as opposed to less expensive in Bilson's graph). The $\varepsilon$ portion of the arc cost is implemented by adding a new $sign$ cost, ordered lexicographically after $safe$, which counts sign conversions.

\begin{figure}
\centering
\includegraphics{figures/safe-conv-graph}
\caption[Safe conversion graph.]{Safe conversion graph; plain arcs have cost $1$ while dashed sign-conversion arcs have cost $1 + \varepsilon$. The arc from \lstinline{unsigned long} to \lstinline{long long} is deliberately omitted, as on the presented system \lstinline{sizeof(long) == sizeof(long long)}.}
\label{safe-conv-graph-fig}
\end{figure}

With these modifications, the current \CFA{} cost tuple is as follows:
\begin{equation*}
(unsafe, poly, safe, sign, vars, specialization, reference)
\end{equation*}
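The effect of the $1 + \varepsilon$ arc weight can be sketched by tracking the $safe$ and $sign$ elements separately and comparing them lexicographically, as the paragraph above describes. The arc choices in the example are illustrative assumptions about the graph of Figure~\ref{safe-conv-graph-fig}.

```c
#include <stdbool.h>

/* The 1 + epsilon arc weight, modelled lexicographically: every safe
   arc adds 1 to safe; sign-changing (dashed) arcs also add 1 to the
   subordinate sign element.  Illustrative sketch, not CFA's code. */
typedef struct { int safe, sign; } SafeCost;

SafeCost take_arc(SafeCost c, bool changes_sign) {
    c.safe += 1;
    if (changes_sign) c.sign += 1;
    return c;
}

/* safe dominates; sign only breaks ties among equal safe distances. */
int safe_cost_cmp(SafeCost a, SafeCost b) {
    if (a.safe != b.safe) return a.safe - b.safe;
    return a.sign - b.sign;
}
```

Starting from !int!, one widening arc to !long! yields $(1, 0)$ while one sign-conversion arc to !unsigned int! yields $(1, 1)$, so the same-sign widening wins the tie: size is decided first, sign second, matching the usual arithmetic conversions.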