# Changeset 46b4259

Timestamp: Aug 14, 2016, 8:26:13 AM (6 years ago)
Branches: aaron-thesis, arm-eh, cleanup-dtors, ctor, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children: 1d4580a
Parents: bf1a2bf (diff), 473d8c82 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message: Merge branch 'master' of plg2:software/cfa/cfa-cc

Files: 5 edited

• ## doc/LaTeXmacros/common.tex

 rbf1a2bf	{\abbrevFont{etc}.\xspace}%
 }%
 \newcommand{\etal}{%
 	\@ifnextchar{.}{\abbrevFont{et al}}%
 	{\abbrevFont{et al}.\xspace}%
 }%
 \makeatother
• ## doc/aaron_comp_II/comp_II.tex

 rbf1a2bf	\subsection{Argument-Parameter Matching}
 The first axis for consideration is argument-parameter matching direction --- whether the type matching for a candidate function to a set of candidate arguments is directed by the argument types or the parameter types.
 For programming languages without implicit conversions, argument-parameter matching is essentially the entirety of the expression resolution problem, and is generally referred to as ``overload resolution'' in the literature.
 All expression resolution algorithms form a DAG of interpretations, some explicitly, some implicitly; in this DAG, arcs point from function-call interpretations to argument interpretations, as below:
 \begin{figure}[h]
 double *f(int*, int*);  // $f_d$
 char *f(char*, char*);  // $f_c$
 char *f(char*, int*);   // $f_c$
 f( f( p, p ), p );
 Deciding when to switch between bottom-up and top-down resolution to minimize wasted work in a hybrid algorithm is a necessarily heuristic process, and though finding good heuristics for which subexpressions to switch matching strategies on is an open question, one reasonable approach might be to set a threshold $t$ for the number of candidate functions, and to use top-down resolution for any subexpression with fewer than $t$ candidate functions, to minimize the number of unmatchable argument interpretations computed, but to use bottom-up resolution for any subexpression with at least $t$ candidate functions, to reduce duplication in argument interpretation computation between the different candidate functions.
 Ganzinger and Ripken~\cite{Ganzinger80} propose an approach (later refined by Pennello~\etal~\cite{Pennello80}) that uses a top-down filtering pass followed by a bottom-up filtering pass to reduce the number of candidate interpretations; they prove that for the Ada programming language a small number of such iterations is sufficient to converge to a solution for the expression resolution problem.
 Persch~\etal~\cite{PW:overload} developed a similar two-pass approach where the bottom-up pass is followed by the top-down pass.
 These algorithms differ from the hybrid approach under investigation in that they take multiple passes over the expression tree to yield a solution, and also in that they apply both filtering heuristics to all expression nodes; \CFA's polymorphic functions and implicit conversions make the approach of filtering out invalid types taken by all of these algorithms infeasible.
 \subsubsection{Common Subexpression Caching}
 With any of these argument-parameter approaches, it may be a useful optimization to cache the resolution results for common subexpressions; in Figure~\ref{fig:res_dag} this optimization would result in the list of interpretations $[p_c, p_i]$ for ©p© only being calculated once, and re-used for each of the three instances of ©p©.
 \subsection{Implicit Conversion Application}
 Baker's and Cormack's algorithms do not account for implicit conversions\footnote{Baker does briefly comment on an approach for handling implicit conversions, but does not provide an implementable algorithm.}; both assume that there is at most one valid interpretation of a given expression for each distinct type.
 Integrating implicit conversion handling into their algorithms provides some choice of implementation approach.
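The common-subexpression caching described in the hunk above amounts to memoizing a per-subexpression list of interpretations. The C++ sketch below is illustrative only: `Expr`, `Interpretation`, and the `resolveUncached` hook are assumed placeholder names rather than the CFA resolver's actual interfaces, and keying by node pointer assumes repeated subexpressions share an AST node (a structural hash would be needed otherwise).

```cpp
#include <unordered_map>
#include <vector>

struct Expr;                               // placeholder AST node type (assumed)
struct Interpretation { int cost; };       // placeholder (type, cost) candidate (assumed)
using InterpList = std::vector<Interpretation>;

class SubexprCache {
    // one entry per distinct subexpression node that has already been resolved
    std::unordered_map<const Expr*, InterpList> cache;
public:
    // Compute the candidate interpretations for `e` at most once; repeated
    // occurrences of the same node (e.g. the three uses of `p` above) reuse
    // the cached list instead of being re-resolved.
    const InterpList& resolve(const Expr* e,
                              InterpList (*resolveUncached)(const Expr*)) {
        auto it = cache.find(e);
        if (it == cache.end()) {
            it = cache.emplace(e, resolveUncached(e)).first;
        }
        return it->second;
    }
};
```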
 With the exception of Bilson, the authors mentioned above do not account for implicit conversions in their algorithms\footnote{Baker does briefly comment on an approach for handling implicit conversions, but does not provide an implementable algorithm.}; all assume that there is at most one valid interpretation of a given expression for each distinct type.
 Integrating implicit conversion handling into the presented argument-parameter matching algorithms thus provides some choice of implementation approach.
 Inference of polymorphic type variables can be considered a form of implicit conversion application, where monomorphic types are implicitly converted to instances of some polymorphic type\footnote{This ``conversion'' may not be implemented in any explicit way at runtime, but does need to be handled by the expression resolver as an inexact match between argument and parameter types.}.
 This form of implicit conversion is particularly common in functional languages; Haskell's type classes~\cite{typeclass} are a particularly well-studied variant of this inference.
 However, type classes arguably do not allow name overloading, as (at least in the Haskell implementation) identifiers belonging to type classes may not be overloaded in any other context than an implementation of that type class; this provides a single (possibly polymorphic) interpretation of any identifier, simplifying the expression resolution problem relative to \CFA.
 \CC~\cite{ANSI98:C++} includes both name overloading and implicit conversions in its expression resolution specification, though unlike \CFA it does complete type-checking on a generated monomorphization of template functions, where \CFA simply checks a list of type constraints.
 The upcoming Concepts standard~\cite{C++concepts} defines a system of type constraints similar in principle to \CFA's.
 Cormack and Wright~\cite{Cormack90} present an algorithm which integrates overload resolution with a polymorphic type inference approach very similar to \CFA's.
 However, their algorithm does not account for implicit conversions other than polymorphic type binding and their discussion of their overload resolution algorithm is not sufficiently detailed to classify it with the other argument-parameter matching approaches\footnote{Their overload resolution algorithm is possibly a variant of Ganzinger and Ripken~\cite{Ganzinger80} or Pennello~\etal~\cite{Pennello80}, modified to allow for polymorphic type binding.}.
 \subsubsection{On Parameters}
 Bilson does account for implicit conversions in his algorithm, but it is unclear the approach is optimal.
 Bilson does account for implicit conversions in his algorithm, but it is unclear if the approach is optimal.
 His algorithm integrates checking for valid implicit conversions into the argument-parameter-matching step, essentially trading more expensive matching for a smaller number of argument interpretations.
 This approach may result in the same subexpression being checked for a type match with the same type multiple times, though again memoization may mitigate this cost, and this approach does not generate implicit conversions that are not useful to match the containing function.
 \subsection{Candidate Set Generation}
 Cormack, Baker and Bilson all generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
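A rough illustration of the "on parameters" strategy discussed above, where implicit-conversion checking is folded into argument-parameter matching rather than expanding each argument into all of its convertible forms up front. The toy `Type` universe, `conversionCost`, and `matchCandidate` are assumptions made for this sketch, not Bilson's actual implementation or the CFA resolver's API.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Toy type universe standing in for the resolver's real type representation.
enum class Type { Char, Int, Double };
using Cost = int;

// Assumed stand-in for a conversion-cost query: the cost of implicitly
// converting `from` to `to`, or no value when no such conversion exists.
std::optional<Cost> conversionCost(Type from, Type to) {
    if (from == to) return 0;                              // exact match is free
    if (from == Type::Char && to == Type::Int) return 1;   // toy safe conversions
    if (from == Type::Int && to == Type::Double) return 1;
    return std::nullopt;                                   // otherwise not convertible
}

struct ArgInterp { Type type; Cost cost; };  // one candidate interpretation of an argument

// Match one candidate function's parameter types against per-argument interpretation
// lists; the conversion check happens here, per parameter, so only conversions that
// some parameter actually needs are ever considered.
std::optional<Cost> matchCandidate(const std::vector<Type>& params,
                                   const std::vector<std::vector<ArgInterp>>& args) {
    if (params.size() != args.size()) return std::nullopt;
    Cost total = 0;
    for (std::size_t i = 0; i < params.size(); ++i) {
        std::optional<Cost> best;
        for (const ArgInterp& a : args[i]) {
            if (auto c = conversionCost(a.type, params[i])) {
                Cost cand = a.cost + *c;
                if (!best || cand < *best) best = cand;
            }
        }
        if (!best) return std::nullopt;      // this parameter cannot be satisfied
        total += *best;
    }
    return total;                            // total conversion cost of this candidate
}
```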
 All the algorithms discussed to this point generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
 However, given that the top-level expression interpretation that is ultimately chosen is the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is in some sense wasted work.
 Under the assumption that programmers generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the next higher-cost argument interpretations.
 \subsubsection{Eager}
 Within the eager approach taken by Cormack, Baker and Bilson, there are still variants to explore.
 Within the eager approach taken by the existing top-down and bottom-up algorithms, there are still variants to explore.
 Cormack and Baker do not account for implicit conversions, and thus do not account for the possibility of multiple valid interpretations with distinct costs; Bilson, on the other hand, sorts the list of interpretations to aid in finding minimal-cost interpretations.
 Sorting the lists of argument or function call interpretations by cost at some point during resolution may provide useful opportunities to short-circuit expression evaluation when a minimal-cost interpretation is found, though it is unclear if this short-circuiting behaviour justifies the cost of the sort.
 This proposed project should provide valuable data on how to implement a performant compiler for modern programming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions.
 This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming ``Concepts'' standard~\cite{C++concepts} for \CC template constraints, for instance.
 This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming Concepts standard~\cite{C++concepts} for \CC template constraints, for instance.
 \appendix
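The deferred, cost-layered candidate generation proposed above can be outlined in a small control-flow sketch. `Interpretation`, `generateTier`, and `tryResolveCall` are hypothetical hooks standing in for the real resolver steps, so this shows only the retry-on-failure structure, not any existing algorithm's implementation.

```cpp
#include <functional>
#include <optional>
#include <vector>

struct Interpretation { int cost; };   // placeholder candidate interpretation (assumed)
using InterpList = std::vector<Interpretation>;

// Assumed hooks: `generateTier(k)` yields the argument interpretations of cost tier k
// (empty when that tier has none), and `tryResolveCall` attempts to build a valid
// call interpretation from everything generated so far.
std::optional<Interpretation> resolveDeferred(
        const std::function<InterpList(int)>& generateTier,
        const std::function<std::optional<Interpretation>(const InterpList&)>& tryResolveCall,
        int maxTier) {
    InterpList available;                          // argument interpretations generated so far
    for (int tier = 0; tier <= maxTier; ++tier) {
        InterpList next = generateTier(tier);      // cheapest interpretations first
        if (next.empty()) continue;                // nothing new at this cost tier
        available.insert(available.end(), next.begin(), next.end());
        if (auto found = tryResolveCall(available)) {
            return found;                          // valid call found; costlier tiers never generated
        }
    }
    return std::nullopt;                           // no valid interpretation within the cost bound
}
```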
• ## doc/bibliography/cfa.bib

 rbf1a2bf	}

 @article{Ganzinger80,
     contributer = {a3moss@uwaterloo.ca},
     author      = {Ganzinger, Harald and Ripken, Knut},
     title       = {Operator Identification in {ADA}: Formal Specification, Complexity, and Concrete Implementation},
     journal     = {SIGPLAN Notices},
     issue_date  = {February 1980},
     volume      = {15},
     number      = {2},
     month       = feb,
     year        = {1980},
     issn        = {0362-1340},
     pages       = {30--42},
     numpages    = {13},
     url         = {http://doi.acm.org/10.1145/947586.947589},
     doi         = {10.1145/947586.947589},
     publisher   = {ACM},
     address     = {New York, NY, USA}
 }

 @article{Ford82,
     keywords    = {},
 }

 @article{Pennello80,
     contributer = {a3moss@uwaterloo.ca},
     author      = {Pennello, Tom and DeRemer, Frank and Meyers, Richard},
     title       = {A Simplified Operator Identification Scheme for {Ada}},
     journal     = {SIGPLAN Notices},
     issue_date  = {July-August 1980},
     volume      = {15},
     number      = {7 and 8},
     month       = jul,
     year        = {1980},
     issn        = {0362-1340},
     pages       = {82--87},
     numpages    = {6},
     url         = {http://doi.acm.org/10.1145/947680.947688},
     doi         = {10.1145/947680.947688},
     publisher   = {ACM},
     address     = {New York, NY, USA},
 }

 @inproceedings{Dice10,
     keywords    = {hardware, synchronization, transactional memory},
• ## src/tests/.expect/32/extension.txt

 rbf1a2bf	((void)((__extension__ __a__i_2 , __extension__ __b__i_2) , __extension__ __c__i_2));
 }
 __attribute__ ((constructor(),)) static void _init_extension(void){
     int _global_init0;
     ((void)((*((int *)(&__a__i_1)))=_global_init0) /* ?{} */);
     int _global_init1;
     ((void)((*((int *)(&__b__i_1)))=_global_init1) /* ?{} */);
     int _global_init2;
     ((void)((*((int *)(&__c__i_1)))=_global_init2) /* ?{} */);
 }
 __attribute__ ((destructor(),)) static void _destroy_extension(void){
     ((void)((*((int *)(&__c__i_1)))) /* ^?{} */);
     ((void)((*((int *)(&__b__i_1)))) /* ^?{} */);
     ((void)((*((int *)(&__a__i_1)))) /* ^?{} */);
 }
• ## src/tests/.expect/32/gccExtensions.txt

 rbf1a2bf	struct s4 __y2__3ss4_2;
 ((void)___constructor__F_P3ss4_autogen___2(((struct s4 *)(&__y2__3ss4_2))));
 int __m1__A0i_2[((long unsigned int )10)];
 int __m2__A0A0i_2[((long unsigned int )10)][((long unsigned int )10)];
 int __m3__A0A0i_2[((long unsigned int )10)][((long unsigned int )10)];
 int __m1__A0i_2[((unsigned int )10)];
 int __m2__A0A0i_2[((unsigned int )10)][((unsigned int )10)];
 int __m3__A0A0i_2[((unsigned int )10)][((unsigned int )10)];
 int _retVal0 = { 0 };
 ((void)(_retVal0=0) /* ?{} */);