Changeset 46b4259
- Timestamp: Aug 14, 2016, 8:26:13 AM (8 years ago)
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, ctor, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: 1d4580a
- Parents: bf1a2bf (diff), 473d8c82 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files: 5 edited
Legend:
- Unmodified
- Added
- Removed
- doc/LaTeXmacros/common.tex

  rbf1a2bf → r46b4259
  153 153   {\abbrevFont{etc}.\xspace}%
  154 154   }%
      155 + \newcommand{\etal}{%
      156 + \@ifnextchar{.}{\abbrevFont{et al}}%
      157 + {\abbrevFont{et al}.\xspace}%
      158 + }%
  155 159   \makeatother
  156 160
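The hunk above adds an \etal macro parallel to the file's existing \etc definition: \@ifnextchar suppresses the macro's own period when the next character is already a period. A usage sketch (the surrounding sentence fragments are invented; the citation form mirrors the comp_II.tex hunks below):

```latex
% Followed by ~, \etal emits ``et al.'' plus \xspace:
later refined by Pennello~\etal~\cite{Pennello80}
% Followed directly by a period, \etal emits only ``et al'',
% so the result is ``... et al.'' rather than ``... et al..'':
later refined by Pennello~\etal.
```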
- doc/aaron_comp_II/comp_II.tex

  rbf1a2bf → r46b4259
  413 413   \subsection{Argument-Parameter Matching}
  414 414   The first axis for consideration is argument-parameter matching direction --- whether the type matching for a candidate function to a set of candidate arguments is directed by the argument types or the parameter types.
      415 + For programming languages without implicit conversions, argument-parameter matching is essentially the entirety of the expression resolution problem, and is generally referred to as ``overload resolution'' in the literature.
  415 416   All expression resolution algorithms form a DAG of interpretations, some explicitly, some implicitly; in this DAG, arcs point from function-call interpretations to argument interpretations, as below:
  416 417   \begin{figure}[h]
  … …
  422 423
  423 424   double *f(int*, int*); // $f_d$
  424     - char *f(char*, char*); // $f_c$
      425 + char *f(char*, int*); // $f_c$
  425 426
  426 427   f( f( p, p ), p );
  … …
  463 464   Deciding when to switch between bottom-up and top-down resolution to minimize wasted work in a hybrid algorithm is a necessarily heuristic process, and though finding good heuristics for which subexpressions to swich matching strategies on is an open question, one reasonable approach might be to set a threshold $t$ for the number of candidate functions, and to use top-down resolution for any subexpression with fewer than $t$ candidate functions, to minimize the number of unmatchable argument interpretations computed, but to use bottom-up resolution for any subexpression with at least $t$ candidate functions, to reduce duplication in argument interpretation computation between the different candidate functions.
  464 465
      466 + Ganzinger and Ripken~\cite{Ganzinger80} propose an approach (later refined by Pennello~\etal~\cite{Pennello80}) that uses a top-down filtering pass followed by a bottom-up filtering pass to reduce the number of candidate interpretations; they prove that for the Ada programming language a small number of such iterations is sufficient to converge to a solution for the expression resolution problem.
      467 + Persch~\etal~\cite{PW:overload} developed a similar two-pass approach where the bottom-up pass is followed by the top-down pass.
      468 + These algorithms differ from the hybrid approach under investigation in that they take multiple passes over the expression tree to yield a solution, and also in that they apply both filtering heuristics to all expression nodes; \CFA's polymorphic functions and implicit conversions make the approach of filtering out invalid types taken by all of these algorithms infeasible.
      469 +
  465 470   \subsubsection{Common Subexpression Caching}
  466 471   With any of these argument-parameter approaches, it may be a useful optimization to cache the resolution results for common subexpressions; in Figure~\ref{fig:res_dag} this optimization would result in the list of interpretations $[p_c, p_i]$ for ©p© only being calculated once, and re-used for each of the three instances of ©p©.
  467 472
  468 473   \subsection{Implicit Conversion Application}
  469     - Baker's and Cormack's algorithms do not account for implicit conversions\footnote{Baker does briefly comment on an approach for handling implicit conversions, but does not provide an implementable algorithm.}; both assume that there is at most one valid interpretation of a given expression for each distinct type.
  470     - Integrating implicit conversion handling into their algorithms provides some choice of implementation approach.
      474 + With the exception of Bilson, the authors mentioned above do not account for implicit conversions in their algorithms\footnote{Baker does briefly comment on an approach for handling implicit conversions, but does not provide an implementable algorithm.}; all assume that there is at most one valid interpretation of a given expression for each distinct type.
      475 + Integrating implicit conversion handling into the presented argument-parameter matching algorithms thus provides some choice of implementation approach.
      476 +
      477 + Inference of polymorphic type variables can be considered a form of implicit conversion application, where monomorphic types are implicitly converted to instances of some polymorphic type\footnote{This ``conversion'' may not be implemented in any explicit way at runtime, but does need to be handled by the expression resolver as an inexact match between argument and parameter types.}.
      478 + This form of implicit conversion is particularly common in functional languages; Haskell's type classes~\cite{typeclass} are a particularly well-studied variant of this inference.
      479 + However, type classes arguably do not allow name overloading, as (at least in the Haskell implmentation) identifiers belonging to type classes may not be overloaded in any other context than an implementation of that type class; this provides a single (possibly polymorphic) interpretation of any identifier, simplifing the expression resolution problem relative to \CFA.
      480 + \CC~\cite{ANSI98:C++} includes both name overloading and implicit conversions in its expression resolution specification, though unlike \CFA it does complete type-checking on a generated monomorphization of template functions, where \CFA simply checks a list of type constraints.
      481 + The upcoming Concepts standard~\cite{C++concepts} defines a system of type constraints similar in principle to \CFA's.
      482 + Cormack and Wright~\cite{Cormack90} present an algorithm which integrates overload resolution with a polymorphic type inference approach very similar to \CFA's.
      483 + However, their algorithm does not account for implicit conversions other than polymorphic type binding and their discussion of their overload resolution algorithm is not sufficiently detailed to classify it with the other argument-parameter matching approaches\footnote{Their overload resolution algorithm is possibly a variant of Ganzinger and Ripken~\cite{Ganzinger80} or Pennello~\etal~\cite{Pennello80}, modified to allow for polymorphic type binding.}.
  471 484
  472 485   \subsubsection{On Parameters}
  473     - Bilson does account for implicit conversions in his algorithm, but it is unclear the approach is optimal.
      486 + Bilson does account for implicit conversions in his algorithm, but it is unclear if the approach is optimal.
  474 487   His algorithm integrates checking for valid implicit conversions into the argument-parameter-matching step, essentially trading more expensive matching for a smaller number of argument interpretations.
  475 488   This approach may result in the same subexpression being checked for a type match with the same type multiple times, though again memoization may mitigate this cost, and this approach does not generate implicit conversions that are not useful to match the containing function.
  … …
  484 497
  485 498   \subsection{Candidate Set Generation}
  486     - Cormack, Baker and Bilson all generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
      499 + All the algorithms discussed to this point generate the complete set of candidate argument interpretations before attempting to match the containing function call expression.
  487 500   However, given that the top-level expression interpretation that is ultimately chosen is the minimal-cost valid interpretation, any consideration of non-minimal-cost interpretations is in some sense wasted work.
  488 501   Under the assumption that that programmers generally write function calls with relatively low-cost interpretations, a possible work-saving heuristic is to generate only the lowest-cost argument interpretations first, attempt to find a valid top-level interpretation using them, and only if that fails generate the next higher-cost argument interpretations.
  489 502
  490 503   \subsubsection{Eager}
  491     - Within the eager approach taken by Cormack, Baker and Bilson, there are still variants to explore.
      504 + Within the eager approach taken by the existing top-down and bottom-up algorithms, there are still variants to explore.
  492 505   Cormack and Baker do not account for implict conversions, and thus do not account for the possibility of multiple valid interpretations with distinct costs; Bilson, on the other hand, sorts the list of interpretations to aid in finding minimal-cost interpretations.
  493 506   Sorting the lists of argument or function call interpretations by cost at some point during resolution may provide useful opportunities to short-circuit expression evaluation when a minimal-cost interpretation is found, though it is unclear if this short-circuiting behaviour justifies the cost of the sort.
  … …
  559 572
  560 573   This proposed project should provide valuable data on how to implement a performant compiler for modern programming languages such as \CFA with powerful static type-systems, specifically targeting the feature interaction between name overloading and implicit conversions.
  561     - This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming ``Concepts'' standard~\cite{C++concepts} for \CC template constraints, for instance.
      574 + This work is not limited in applicability to \CFA, but may also be useful for supporting efficient compilation of the upcoming Concepts standard~\cite{C++concepts} for \CC template constraints, for instance.
  562 575
  563 576   \appendix
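The comp_II.tex hunks above describe bottom-up resolution over a DAG of interpretations, using the figure's example f( f( p, p ), p ) with overloads $f_d$ and $f_c$ and an ambiguous variable p. A toy Python sketch of that bottom-up direction (an illustration only, not code from the changeset; it ignores implicit conversions and interpretation costs):

```python
# Toy bottom-up overload resolution over the figure's example (illustration only).
# Each overload maps a tuple of parameter types to a return type.
overloads = {
    "f": [
        (("int*", "int*"), "double*"),   # f_d: double *f(int*, int*)
        (("char*", "int*"), "char*"),    # f_c: char *f(char*, int*)
    ],
}
# The variable p is overloaded with two interpretations, as in the figure.
variables = {"p": {"char*", "int*"}}

def interpret(expr):
    """Return every result type of an expression tree, computed bottom-up:
    argument interpretations are built first, then matched against each
    candidate function (exact matches only in this sketch)."""
    if isinstance(expr, str):                       # leaf: an overloaded variable
        return variables[expr]
    name, *args = expr                              # call node: ("f", arg, ...)
    arg_types = [interpret(a) for a in args]
    return {ret
            for params, ret in overloads[name]
            if len(params) == len(args)
            and all(p in ts for p, ts in zip(params, arg_types))}

print(interpret(("f", ("f", "p", "p"), "p")))       # only f_c matches at the top level
```

Only one interpretation survives at the top level: the inner call yields {double*, char*}, neither of which is the int* required by $f_d$'s first parameter. A real resolver would additionally rank surviving interpretations by conversion cost, as the diffed text discusses.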
- doc/bibliography/cfa.bib

  rbf1a2bf → r46b4259
  4685 4685   }
  4686 4686
       4687 + @article{Ganzinger80,
       4688 +     contributer = {a3moss@uwaterloo.ca},
       4689 +     author = {Ganzinger, Harald and Ripken, Knut},
       4690 +     title = {Operator Identification in {ADA}: Formal Specification, Complexity, and Concrete Implementation},
       4691 +     journal = {SIGPLAN Notices},
       4692 +     issue_date = {February 1980},
       4693 +     volume = {15},
       4694 +     number = {2},
       4695 +     month = feb,
       4696 +     year = {1980},
       4697 +     issn = {0362-1340},
       4698 +     pages = {30--42},
       4699 +     numpages = {13},
       4700 +     url = {http://doi.acm.org/10.1145/947586.947589},
       4701 +     doi = {10.1145/947586.947589},
       4702 +     publisher = {ACM},
       4703 +     address = {New York, NY, USA}
       4704 + }
       4705 +
  4687 4706   @article{Ford82,
  4688 4707       keywords = {},
  … …
  5851 5870   }
  5852 5871
       5872 + @article{Pennello80,
       5873 +     contributer = {a3moss@uwaterloo.ca},
       5874 +     author = {Pennello, Tom and DeRemer, Frank and Meyers, Richard},
       5875 +     title = {A Simplified Operator Identification Scheme for {Ada}},
       5876 +     journal = {SIGPLAN Notices},
       5877 +     issue_date = {July-August 1980},
       5878 +     volume = {15},
       5879 +     number = {7 and 8},
       5880 +     month = jul,
       5881 +     year = {1980},
       5882 +     issn = {0362-1340},
       5883 +     pages = {82--87},
       5884 +     numpages = {6},
       5885 +     url = {http://doi.acm.org/10.1145/947680.947688},
       5886 +     doi = {10.1145/947680.947688},
       5887 +     publisher = {ACM},
       5888 +     address = {New York, NY, USA},
       5889 + }
       5890 +
  5853 5891   @inproceedings{Dice10,
  5854 5892       keywords = {hardware, synchronization, transactional memory},
- src/tests/.expect/32/extension.txt

  rbf1a2bf → r46b4259
  100 100   ((void)((__extension__ __a__i_2 , __extension__ __b__i_2) , __extension__ __c__i_2));
  101 101   }
  102     - __attribute__ ((constructor(),)) static void _init_extension(void){
  103     -     int _global_init0;
  104     -     ((void)((*((int *)(&__a__i_1)))=_global_init0) /* ?{} */);
  105     -     int _global_init1;
  106     -     ((void)((*((int *)(&__b__i_1)))=_global_init1) /* ?{} */);
  107     -     int _global_init2;
  108     -     ((void)((*((int *)(&__c__i_1)))=_global_init2) /* ?{} */);
  109     - }
  110     - __attribute__ ((destructor(),)) static void _destroy_extension(void){
  111     -     ((void)((*((int *)(&__c__i_1)))) /* ^?{} */);
  112     -     ((void)((*((int *)(&__b__i_1)))) /* ^?{} */);
  113     -     ((void)((*((int *)(&__a__i_1)))) /* ^?{} */);
  114     - }
- src/tests/.expect/32/gccExtensions.txt

  rbf1a2bf → r46b4259
  151 151   struct s4 __y2__3ss4_2;
  152 152   ((void)___constructor__F_P3ss4_autogen___2(((struct s4 *)(&__y2__3ss4_2))));
  153     - int __m1__A0i_2[((long unsigned int )10)];
  154     - int __m2__A0A0i_2[((long unsigned int )10)][((long unsigned int )10)];
  155     - int __m3__A0A0i_2[((long unsigned int )10)][((long unsigned int )10)];
      153 + int __m1__A0i_2[((unsigned int )10)];
      154 + int __m2__A0A0i_2[((unsigned int )10)][((unsigned int )10)];
      155 + int __m3__A0A0i_2[((unsigned int )10)][((unsigned int )10)];
  156 156   int _retVal0 = { 0 };
  157 157   ((void)(_retVal0=0) /* ?{} */);