Changeset f728971 for doc/theses


Timestamp: Feb 20, 2019, 2:00:37 PM (5 years ago)
Author: Aaron Moss <a3moss@…>
Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children: a2971cc
Parents: 95c0ebe (diff), 7e9fa47 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
Message: Merge branch 'aaron-thesis' of plg.uwaterloo.ca:software/cfa/cfa-cc into aaron-thesis

Location: doc/theses/aaron_moss_PhD/phd
Files: 27 added, 6 edited

  • doc/theses/aaron_moss_PhD/phd/Makefile

    r95c0ebe rf728971  
    3939generic-timing \
    4040tests-completed \
     41per-prob-histo \
     42per-prob-depth \
     43cfa-time \
    4144}
    4245
     
    6972        gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/algo-summary.gp
    7073
     74per-prob-histo.tex : per-prob.gp per-prob.tsv ${BUILD}
     75        gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob.gp
     76
     77per-prob-depth.tex : per-prob-scatter.gp ${BUILD}
     78        gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/per-prob-scatter.gp
     79
     80cfa-time.tex : cfa-plots.gp cfa-time.tsv cfa-mem.tsv ${BUILD}
     81        gnuplot -e BUILD="'${BUILD}/'" ${EVALDIR}/cfa-plots.gp
     82
    7183${BUILD}:
    7284        mkdir -p ${BUILD}
  • doc/theses/aaron_moss_PhD/phd/experiments.tex

    r95c0ebe rf728971  
    6868\TODO{test performance; shouldn't be too hard to change \texttt{resolveAssertions} to use unification}
    6969
    70 \section{Prototype Experiments}
     70\section{Prototype Experiments} \label{proto-exp-sec}
    7171
    7272The primary performance experiments for this thesis were conducted using the resolver prototype on problem instances generated from actual \CFA{} code using the method described in Section~\ref{rp-features-sec}.
     
    9898Terminal output was suppressed for all tests to avoid confounding factors in the timing results, and all tests were run three times in series, with the median result reported in all cases.
    9999The medians are representative data points; considering test cases that took at least 0.2~s to run, the average run was within 2\% of the reported median runtime, and no run diverged by more than 20\% of median runtime or 5.5~s.
    100 The memory results are even more consistent, with no run exceeding 2\% difference from median in peak resident set size, and 93\% of tests not recording any difference within the 1~KB granularity of the measurement software.
     100The memory results are even more consistent, with no run exceeding 2\% difference from median in peak resident set size, and 93\% of tests not recording any difference within the 1~KB granularity of the measurement software.
     101All tests were run on a machine with 128~GB of RAM and 64 cores running at 2.2~GHz.
    101102
    102103As a matter of experimental practicality, test runs which exceeded 8~GB of peak resident memory usage were excluded from the data set.
     
    156157\section{Instance Difficulty}
    157158
    158 \section{\CFA{} Results}
      159To characterize the difficulty of expression resolution problem instances, the test suites must be explored at a finer granularity.
     160As discussed in Section~\ref{resn-analysis-sec}, a single top-level expression is the fundamental problem instance for resolution, yet the test inputs discussed above are composed of thousands of top-level expressions, like the actual source code they are derived from.
     161To pull out the effects of these individual problems, I instrumented the resolver prototype to time resolution for each expression, and also report some relevant properties of the expression.
      162This instrumented resolver was then run on a set of difficult test instances; to keep the data-collection task manageable, these runs were limited to the best-performing \textsc{bu-dca-per} algorithm and to test inputs on which that algorithm took more than 1~s to complete.
     163
     164The 13 test inputs thus selected contain 20632 top-level expressions between them, which are separated into order-of-magnitude bins by runtime in Figure~\ref{per-prob-histo-fig}.
     165As can be seen from this figure, overall runtime is dominated by a few particularly difficult problem instances --- the 60\% of expressions which resolve in under 0.1~ms collectively take less time to resolve than any of the 0.2\% of expressions which take at least 100~ms to resolve.
     166On the other hand, the 46 expressions in that 0.2\% take 38\% of the overall time in this difficult test suite, while the 201 expressions that take between 10 and 100~ms to resolve consume another 30\%.
     167
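As a rough cross-check, the fractions quoted in this section follow directly from the reported counts:
\begin{equation*}
\frac{46}{20632} \approx 0.22\%, \qquad \frac{46+201}{20632} \approx 1.2\%, \qquad 38\% + 30\% = 68\% \approx \frac{2}{3}
\end{equation*}
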
     168\begin{figure}
     169        \centering
     170        \input{per-prob-histo}
     171        \caption[Histogram of top-level expressions]{Histogram of top-level expression resolution runtime, binned by order-of-magnitude. The left series counts the expressions in each bin according to the left axis, while the right series reports the summed runtime of resolution for all expressions in that bin. Note that both y-axes are log-scaled.} \label{per-prob-histo-fig}
     172\end{figure}
     173
     174Since the top centile of expression resolution instances requires approximately two-thirds of the resolver's time, optimizing the resolver for specific hard problem instances has proven to be an effective technique for reducing overall runtime.
      175The data below indicates that the number of assertions necessary to resolve has the greatest effect on runtime, as seen in
     176Figure~\ref{per-prob-assns-fig}.
     177However, since the number of assertions required is only known once resolution is finished, the most-promising pre-resolution metric of difficulty is the nesting depth of the expression; as seen in Figure~\ref{per-prob-depth-fig}, expressions of depth $> 10$ in this dataset are uniformly difficult.
     178Figure~\ref{per-prob-subs-fig} presents a similar pattern for number of subexpressions, though given that the expensive tail of problem instances occurs at approximately twice the depth values, it is reasonable to believe that the difficult expressions in question are deeply-nested invocations of binary functions rather than wider but shallowly-nested expressions.
     179
     180% TODO statistics to tease out difficulty? Is ANOVA the right keyword?
     181% TODO maybe metrics to sum number of poly-overloads invoked
     182
     183\begin{figure}
     184\centering
     185\input{per-prob-assns}
     186\caption[Top-level expression resolution time by number of assertions resolved.]{Top-level expression resolution time by number of assertions resolved. Note log scales on both axes.} \label{per-prob-assns-fig}
     187\end{figure}
     188
     189\begin{figure}
     190\centering
     191\input{per-prob-depth}
     192\caption[Top-level expression resolution time by maximum nesting depth of expression.]{Top-level expression resolution time by maximum nesting depth of expression. Note log scales on both axes.} \label{per-prob-depth-fig}
     193\end{figure}
     194
     195\begin{figure}
     196\centering
     197\input{per-prob-subs}
     198\caption[Top-level expression resolution time by number of subexpressions.]{Top-level expression resolution time by number of subexpressions. Note log scales on both axes.} \label{per-prob-subs-fig}
     199\end{figure}
     200       
     201
     202\section{\CFA{} Results} \label{cfa-results-sec}
     203
     204I have integrated most of the algorithmic techniques discussed in this chapter into \CFACC{}.
     205This integration took place over a period of months while \CFACC{} was under active development on a number of other fronts, so it is not possible to completely isolate the effects of the algorithmic changes, but I have generated some data.
     206To generate this data, representative commits from the \texttt{git} history of the project were checked out and compiled, then run on the same machine used for the resolver prototype experiments discussed in Section~\ref{proto-exp-sec}.
     207To negate the effects of changes to the \CFA{} standard library on the timing results, 55 test files from the test suite of the oldest \CFA{} variant were compiled with the \texttt{-E} flag to inline their library dependencies, and these inlined files were used to test the remaining \CFACC{} versions.
     208
     209I performed two rounds of modification to \CFACC{}; the first round moved from Bilson's original combined-bottom-up algorithm to an un-combined bottom-up algorithm, denoted \textsc{cfa-co} and \textsc{cfa-bu}, respectively.
     210A top-down algorithm was not attempted in \CFACC{} due to its poor performance in the prototype.
      211The second round of modifications addressed assertion satisfaction, taking Bilson's original \textsc{cfa-imm} algorithm and iteratively modifying it, first to use the deferred approach (\textsc{cfa-def}), then to cache those satisfaction results (\textsc{cfa-dca}).
      212The new environment data structures discussed in Section~\ref{proto-exp-sec} have not been successfully merged into \CFACC{} due to their dependencies on the garbage-collection framework in the prototype; I spent several months modifying \CFACC{} to use similar garbage collection, but because \CFACC{} was not designed for such memory management, the performance of the modified compiler was non-viable.
     213It is possible that the persistent union-find environment could be modified to use a reference-counted pointer internally without changing the entire memory-management framework of \CFACC{}, but such an attempt is left to future work.
     214
     215As can be seen in Figures~\ref{cfa-time-fig} and~\ref{cfa-mem-fig}, which show the time and peak memory results for these five versions of \CFACC{} on the same test suite, assertion resolution dominates total resolution cost, with the \textsc{cfa-def} and \textsc{cfa-dca} variants running consistently faster than the others on more expensive test cases.
     216The results from \CFACC{} do not exactly mirror those from the prototype; I conjecture this is mostly due to the different memory-management schemes and sorts of data required to run type unification and assertion satisfaction calculations, as \CFACC{} performance has proven to be particularly sensitive to the amount of heap allocation performed.
      217This data also shows a noticeable regression in compiler performance in the eleven months between \textsc{cfa-bu} and \textsc{cfa-imm}; this regression is not due to expression resolution, as no integration work happened in this time, but I am unable to ascertain its actual cause.
     218It should also be noted with regard to the peak memory results in Figure~\ref{cfa-mem-fig} that the peak memory usage does not always occur during the resolution phase of the compiler.
     219
     220\begin{figure}
     221\centering
     222\input{cfa-time}
     223\caption[\CFACC{} runtime against \textsc{cfa-co} baseline.]{\CFACC{} runtime against \textsc{cfa-co} baseline. Note log scales on both axes.} \label{cfa-time-fig}
     224\end{figure}
     225
     226\begin{figure}
     227\centering
     228\input{cfa-mem}
     229\caption[\CFACC{} peak memory usage against \textsc{cfa-co} baseline runtime.]{\CFACC{} peak memory usage against \textsc{cfa-co} baseline runtime. Note log scales on both axes.} \label{cfa-mem-fig}
     230\end{figure}
    159231
    160232% use Jenkins daily build logs to rebuild speedup graph with more data
     
    167239
    168240% look back at Resolution Algorithms section for threads to tie up "does the algorithm look like this?"
     241
     242\section{Conclusion}
     243
      244As can be seen from the prototype results, the per-expression benchmarks, and the \CFACC{} experiments, the dominant factor in the cost of \CFA{} expression resolution is assertion satisfaction.
      245Reducing the total number of assertion satisfaction problems solved, as in the deferred satisfaction algorithm, is consistently effective at reducing runtime, and caching the results of these satisfaction problems has shown promise in the prototype system.
     246The results presented here also demonstrate that a bottom-up approach to expression resolution is superior to top-down, settling an open question from Baker~\cite{Baker82}.
      247The persistent union-find type environment introduced in Chapter~\ref{env-chap} has also been demonstrated to provide a modest performance improvement over the na\"{\i}ve approach.
     248
     249Given the consistently strong performance of the \textsc{bu-dca-imm} and \textsc{bu-dca-per} variants of the resolver prototype, the results in this chapter demonstrate that it is possible to develop a \CFA{} compiler with acceptable runtime performance for widespread use, an important and previously unaddressed consideration for the practical viability of the language.
      250However, the less-marked improvement in Section~\ref{cfa-results-sec} from retrofitting these algorithmic changes onto the existing compiler leaves the actual development of a performant \CFA{} compiler to future work.
      251Characterization and elimination of the performance deficits in the existing \CFACC{} have proven difficult, though runtime is generally dominated by the expression resolution phase; as such, building a new \CFA{} compiler based on the resolver prototype contributed by this work may prove to be an effective strategy.
  • doc/theses/aaron_moss_PhD/phd/resolution-heuristics.tex

    r95c0ebe rf728971  
    2828\begin{itemize}
    2929\item If either operand is a floating-point type, the common type is the size of the largest floating-point type. If either operand is !_Complex!, the common type is also !_Complex!.
    30 \item If both operands are of integral type, the common type has the same size\footnote{Technically, the C standard defines a notion of \emph{rank}, a distinct value for each \lstinline{signed} and \lstinline{unsigned} pair; integral types of the same size thus may have distinct ranks. For instance, if \lstinline{int} and \lstinline{long} are the same size, \lstinline{long} will have greater rank. The standard-defined types are declared to have greater rank than any types of the same size added as compiler extensions.} as the larger type.
     30\item If both operands are of integral type, the common type has the same size\footnote{Technically, the C standard defines a notion of \emph{rank} in \cite[\S{}6.3.1.1]{C11}, a distinct value for each \lstinline{signed} and \lstinline{unsigned} pair; integral types of the same size thus may have distinct ranks. For instance, if \lstinline{int} and \lstinline{long} are the same size, \lstinline{long} will have greater rank. The standard-defined types are declared to have greater rank than any types of the same size added as compiler extensions.} as the larger type.
    3131\item If the operands have opposite signedness, the common type is !signed! if the !signed! operand is strictly larger, or !unsigned! otherwise. If the operands have the same signedness, the common type shares it.
    3232\end{itemize}
    3333
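As a minimal illustration of these rules (assuming a system where !int! is smaller than !long!), the following C expressions have the indicated common types:

\begin{cfa}
int i;  unsigned int u;  long l;  double d;
i + d;  $\C{// common type double: floating-point operand dominates}$
i + l;  $\C{// common type long: larger integral size}$
i + u;  $\C{// common type unsigned int: same size, opposite signedness}$
u + l;  $\C{// common type long: signed operand is strictly larger}$
\end{cfa}
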
    34 Beginning with the work of Bilson\cite{Bilson03}, \CFA{} has defined a \emph{conversion cost} for each function call in a way that generalizes C's conversion rules.
     34Beginning with the work of Bilson\cite{Bilson03}, \CFA{} defines a \emph{conversion cost} for each function call in a way that generalizes C's conversion rules.
    3535Loosely defined, the conversion cost counts the implicit conversions utilized by an interpretation.
    3636With more specificity, the cost is a lexicographically-ordered tuple, where each element corresponds to a particular kind of conversion.
    37 In Bilson's \CFA{} design, conversion cost is a 3-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degree of safe (widening) conversions.
    38 Degree of safe conversion is calculated as path weight in a weighted directed graph of safe conversions between types; the current version of this graph is in Figure~\ref{safe-conv-graph-fig}.
    39 The safe conversion graph designed such that the common type $c$ of two types $u$ and $v$ is compatible with the C standard definitions from \cite[\S{}6.3.1.8]{C11} and can be calculated as the unique type minimizing the sum of the path weights of $\overrightarrow{uc}$ and $\overrightarrow{vc}$.
     37In Bilson's design, conversion cost is a 3-tuple, $(unsafe, poly, safe)$, where $unsafe$ is the count of unsafe (narrowing) conversions, $poly$ is the count of polymorphic type bindings, and $safe$ is the sum of the degree of safe (widening) conversions.
     38Degree of safe conversion is calculated as path weight in a directed graph of safe conversions between types; both Bilson's version and the current version of this graph are in Figure~\ref{safe-conv-graph-fig}.
     39The safe conversion graph is designed such that the common type $c$ of two types $u$ and $v$ is compatible with the C standard definitions from \cite[\S{}6.3.1.8]{C11} and can be calculated as the unique type minimizing the sum of the path weights of $\overrightarrow{uc}$ and $\overrightarrow{vc}$.
    4040The following example lists the cost in the Bilson model of calling each of the following functions with two !int! parameters:
    4141
    4242\begin{cfa}
    4343void f(char, long); $\C{// (1,0,1)}$
     44void f(short, long); $\C{// (1,0,1)}$
    4445forall(otype T) void f(T, long); $\C{// (0,1,1)}$
    4546void f(long, long); $\C{// (0,0,2)}$
     
    4849\end{cfa}
    4950
    50 Note that safe and unsafe conversions are handled differently; \CFA{} counts distance of safe conversions (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), while only counting the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1).
      51Note that safe and unsafe conversions are handled differently; \CFA{} counts distance of safe conversions (\eg{} !int! to !long! is cheaper than !int! to !unsigned long!), while only counting the number of unsafe conversions (\eg{} !int! to !char! and !int! to !short! both have unsafe cost 1, as in the first two declarations above).
      52These costs are summed over the parameters in a call; in the example above, the costs of the two !int! to !long! conversions for the fourth declaration sum to the same total as the single !int! to !unsigned long! conversion in the fifth.
    5153
    5254As part of adding reference types to \CFA{} (see Section~\ref{type-features-sec}), Schluntz added a new $reference$ element to the cost tuple, which counts the number of implicit reference-to-rvalue conversions performed so that candidate interpretations can be distinguished by how closely they match the nesting of reference types; since references are meant to act almost indistinguishably from lvalues, this $reference$ element is the least significant in the lexicographic comparison of cost tuples.
    5355
    5456I have also refined the \CFA{} cost model as part of this thesis work.
    55 Bilson's \CFA{} cost model includes the cost of polymorphic type bindings from a function's type assertions in the $poly$ element of the cost tuple; this has the effect of making more-constrained functions more expensive than less-constrained functions.
     57Bilson's \CFA{} cost model includes the cost of polymorphic type bindings from a function's type assertions in the $poly$ element of the cost tuple; this has the effect of making more-constrained functions more expensive than less-constrained functions, as in the following example:
     58
     59\begin{cfa}
     60forall(dtype T | { T& ++?(T&); }) T& advance(T&, int);
      61forall(dtype T | { T& ++?(T&); T& ?+=?(T&, int); }) T& advance(T&, int);
     62\end{cfa}
     63
     64In resolving a call to !advance!, the binding to the !T&! parameter in the assertions is added to the $poly$ cost in Bilson's model.
    5665However, type assertions actually make a function \emph{less} polymorphic, and as such functions with more type assertions should be preferred in type resolution.
    57 As an example, some iterator-based algorithms can work on a forward iterator that only provides an increment operator, but are more efficient on a random-access iterator that can be incremented by an arbitrary number of steps in a single operation.
    58 The random-access iterator has more type constraints, but should be chosen whenever those constraints can be satisfied.
    59 As such, I have added a $specialization$ element to the \CFA{} cost tuple, the values of which are always negative.
    60 Each type assertion subtracts 1 from $specialization$, so that more-constrained functions will cost less and thus be chosen over less-constrained functions, all else being equal.
     66In the example above, the more-constrained second function can be implemented more efficiently, and as such should be chosen whenever its added constraint can be satisfied.
      67Accordingly, a $specialization$ element is now included in the \CFA{} cost tuple, the values of which are always negative.
     68Each type assertion subtracts 1 from $specialization$, so that more-constrained functions cost less, and thus are chosen over less-constrained functions, all else being equal.
    6169A more sophisticated design would define a partial order over sets of type assertions by set inclusion (\ie{} one function would only cost less than another if it had a strict superset of assertions,  rather than just more total assertions), but I did not judge the added complexity of computing and testing this order to be worth the gain in specificity.
    6270
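As a sketch of the assertion contribution to this new element (annotations added for illustration; only the assertion count is considered here), the two !advance! declarations above are distinguished as follows:

\begin{cfa}
forall(dtype T | { T& ++?(T&); }) T& advance(T&, int); $\C{// one assertion: specialization -1}$
forall(dtype T | { T& ++?(T&); T& ?+=?(T&, int); }) T& advance(T&, int); $\C{// two assertions: specialization -2}$
\end{cfa}

All else being equal, the second declaration therefore costs less and is chosen whenever its additional assertion can be satisfied.
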
    63 I have also incorporated an unimplemented aspect of Ditchfield's earlier cost model.
     71I also incorporated an unimplemented aspect of Ditchfield's earlier cost model.
    6472In the example below, adapted from \cite[p.89]{Ditchfield92}, Bilson's cost model only distinguished between the first two cases by accounting extra cost for the extra set of !otype! parameters, which, as discussed above, is not a desirable solution:
    6573
     
    7179\end{cfa}
    7280
    73 I account for the fact that functions with more polymorphic variables are less constrained by introducing a $var$ cost element that counts the number of type variables on a candidate function.
      81To account for the fact that functions with more polymorphic variables are less constrained, the new cost model introduces a $var$ cost element that counts the number of type variables on a candidate function.
    7482In the example above, the first !f! has $var = 2$, while the remainder have $var = 1$.
    75 My new \CFA{} cost model also accounts for a nuance un-handled by Ditchfield or Bilson, in that it makes the more specific fourth function above cheaper than the more generic third function.
     83The new cost model also accounts for a nuance un-handled by Ditchfield or Bilson, in that it makes the more specific fourth function above cheaper than the more generic third function.
    7684The fourth function is presumably somewhat optimized for handling pointers, but the prior \CFA{} cost model could not account for the more specific binding, as it simply counted the number of polymorphic unifications.
    7785
    78 In my modified model, each level of constraint on a polymorphic type in the parameter list results in a decrement of the $specialization$ cost element.
    79 Thus, all else equal, if both a binding to !T! and a binding to !T*! are available, \CFA{} will pick the more specific !T*! binding.
    80 This process is recursive, such that !T**! produces a -2 specialization cost, as opposed to the -1 cost for !T*!.
    81 This works similarly for generic types, \eg{} !box(T)! also has specialization cost -1.
     86In the modified model, each level of constraint on a polymorphic type in the parameter list results in a decrement of the $specialization$ cost element, which is shared with the count of assertions due to their common nature as constraints on polymorphic type bindings.
     87Thus, all else equal, if both a binding to !T! and a binding to !T*! are available, the model chooses the more specific !T*! binding with $specialization = -1$.
     88This process is recursive, such that !T**! has $specialization = -2$.
     89This calculation works similarly for generic types, \eg{} !box(T)! also has specialization cost -1.
    8290For multi-argument generic types, the least-specialized polymorphic parameter sets the specialization cost, \eg{} the specialization cost of !pair(T, S*)! is -1 (from !T!) rather than -2 (from !S!).
    83 Since the user programmer provides parameters, but cannot provide guidance on return type, specialization cost is not counted for the return type list.
     91Specialization cost is not counted on the return type list; since $specialization$ is a property of the function declaration, a lower specialization cost prioritizes one declaration over another.
     92User programmers can choose between functions with varying parameter lists by adjusting the arguments, but the same is not true of varying return types, so the return types are omitted from the $specialization$ element.
    8493Since both $vars$ and $specialization$ are properties of the declaration rather than any particular interpretation, they are prioritized less than the interpretation-specific conversion costs from Bilson's original 3-tuple.
    8594
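The following sketch summarizes these structural contributions to $specialization$ for some hypothetical declarations !g!, with the costs following the rules just described:

\begin{cfa}
forall(otype T) void g(T);  $\C{// specialization 0}$
forall(otype T) void g(T*);  $\C{// specialization -1}$
forall(otype T) void g(T**);  $\C{// specialization -2}$
forall(otype T) void g(box(T));  $\C{// specialization -1}$
forall(otype T, otype S) void g(pair(T, S*));  $\C{// specialization -1, from T}$
\end{cfa}
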
    86 A final refinement I have made to the \CFA{} cost model is with regard to choosing between arithmetic conversions.
    87 The C standard states that the common type of !int! and !unsigned int! is !unsigned int! and that the common type of !int! and !long! is !long!, but does not provide guidance for making a choice between those two conversions.
    88 Bilson's \CFACC{} used conversion costs based off a graph similar to that in Figure~\ref{safe-conv-graph-fig}, but with arcs selectively removed to disambiguate the costs of such conversions.
    89 However, the arc removal in Bilson's design resulted in inconsistent and somewhat surprising costs, with conversion to the next-larger same-sign type generally (but not always) double the cost of conversion to the !unsigned! type of the same size.
    90 In my redesign, for consistency with the approach of the usual arithmetic conversions,which select a common type primarily based on size, but secondarily on sign, the costs of arcs in the new graph are defined to be $1$ to go to a larger size, but $1 + \varepsilon$ to change the sign.
      95A final refinement I have made to the \CFA{} cost model concerns the choice among arithmetic conversions.
      96The C standard \cite[\S{}6.3.1.8]{C11} states that the common type of !int! and !unsigned int! is !unsigned int! and that the common type of !int! and !long! is !long!, but does not provide guidance for choosing between such conversions when either is possible.
      97Bilson's \CFACC{} uses conversion costs based on the left graph in Figure~\ref{safe-conv-graph-fig}.
     98However, Bilson's design results in inconsistent and somewhat surprising costs, with conversion to the next-larger same-sign type generally (but not always) double the cost of conversion to the !unsigned! type of the same size.
      99In the redesign, for consistency with the usual arithmetic conversions, which select a common type primarily by size and secondarily by sign, arcs in the new graph are annotated with whether they represent a sign change, and such sign changes are summed in a new $sign$ cost element that lexicographically succeeds $safe$.
    91100This means that sign conversions are approximately the same cost as widening conversions, but slightly more expensive (as opposed to less expensive in Bilson's graph).
    92 The $\varepsilon$ portion of the arc cost is implemented by adding a new $sign$ cost lexicographically after $safe$ which counts sign conversions.
    93101
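To illustrate the effect, consider a hypothetical pair of overloads !g! called with an !int! argument; using the arc costs from Figure~\ref{safe-conv-graph-fig} and the lexicographic ordering above, the same-sign widening is preferred:

\begin{cfa}
void g(long);  $\C{// safe 1, sign 0: chosen}$
void g(unsigned int);  $\C{// safe 1, sign 1}$
\end{cfa}
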
    94102\begin{figure}
    95103        \centering
    96104        \includegraphics{figures/safe-conv-graph}
    97         \caption[Safe conversion graph.]{Safe conversion graph; plain arcs have cost $1$ while dashed sign-conversion arcs have cost $1+ \varepsilon$. The arc from \lstinline{unsigned long} to \lstinline{long long} is deliberately omitted, as on the presented system \lstinline{sizeof(long) == sizeof(long long)}.}
     105        \caption[Safe conversion graphs.]{Safe conversion graphs; Bilson's on the left, the extended graph on the right. In both graphs, plain arcs have cost $safe = 1, sign = 0$ while dashed sign-conversion arcs have cost $safe = 1, sign = 1$. As per \cite[\S{}6.3.1.8]{C11}, types promote to types of the same signedness with greater rank, from \lstinline{signed} to \lstinline{unsigned} with the same rank, and from \lstinline{unsigned} to \lstinline{signed} with greater size. The arc from \lstinline{unsigned long} to \lstinline{long long} is deliberately omitted in the modified graph, as on the presented system \lstinline{sizeof(long) == sizeof(long long)}.}
    98106        \label{safe-conv-graph-fig}
    99107\end{figure}
     
    105113\end{equation*}
    106114
    107 \subsection{Expression Cost}
     115\subsection{Expression Cost} \label{expr-cost-sec}
    108116
    109117The mapping from \CFA{} expressions to cost tuples is described by Bilson in \cite{Bilson03}, and remains effectively unchanged modulo the refinements to the cost tuple described above.
     
    189197
    190198To resolve the outermost !wrap!, the resolver must check that !pair(pair(int))! unifies with itself, but at three levels of nesting, !pair(pair(int))! is more complex than either !pair(T)! or !T!, the types in the declaration of !wrap!.
    191 Accordingly, the cost of a single argument-parameter unification is $O(d)$, where !d! is the depth of the expression tree, and the cost of argument-parameter unification for a single candidate for a given function call expression is $O(pd)$, where $p$ is the number of parameters.
     199Accordingly, the cost of a single argument-parameter unification is $O(d)$, where $d$ is the depth of the expression tree, and the cost of argument-parameter unification for a single candidate for a given function call expression is $O(pd)$, where $p$ is the number of parameters.
    192200
    193201Implicit conversions are also checked in argument-parameter matching, but the cost of checking for the existence of an implicit conversion is again proportional to the complexity of the type, $O(d)$.
     
    364372
    365373% Mention relevance of work to C++20 concepts
     374
     375% Mention more compact representations of the (growing) cost tuple