Changeset 7ff30d07

Timestamp:
Jun 14, 2016, 1:23:18 PM (6 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, ctor, deferred_resn, demangler, enum, forall-pointer-decay, gc_noraii, jacob/cs343-translation, jenkins-sandbox, master, memory, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
545ef59, c738ca4, ee51534
Parents:
6cbc25a (diff), c8c03683 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Conflicts:

Makefile.in
aclocal.m4
configure
src/Makefile.in
src/Tests/Context.c
src/driver/Makefile.in
src/examples/Makefile.in
src/examples/ctxts.c
src/examples/esskaykay.c
src/libcfa/Makefile.in

Files:
36 edited

• doc/LaTeXmacros/common.tex

• doc/user/Cdecl.fig

 r6cbc25a (raw Xfig drawing data omitted; the diff replaces the figure's text-label objects set at font size 9 with equivalent objects at font size 10)
• doc/user/user.tex

• doc/working/resolver_design.md

 r6cbc25a ## Conversion Costs ## Each possible resolution of an expression has a _cost_ consisting of four integer components: _unsafe_ conversion cost, _polymorphic_ specialization cost, _safe_ conversion cost, and a _count_ of conversions. These components form a lexically-ordered tuple which can be summed element-wise; summation starts at (0, 0, 0, 0). Each possible resolution of an expression has a _cost_ tuple consisting of the following components: _unsafe_ conversion cost, _polymorphic_ specialization cost, _safe_ conversion cost, a count of _explicit_ conversions, and _qualifier_ conversion cost. These components are lexically-ordered and can be summed element-wise; summation starts at (0, 0, 0, 0, 0). ### Lvalue and Qualifier Conversions ### programmers. ## Resolver Architecture ## ### Function Application Resolution ### Our resolution algorithm for function application expressions is based on Baker's[3] single-pass bottom-up algorithm, with Cormack's[4] single-pass top-down algorithm applied where appropriate as an optimization. Broadly speaking, the cost of this resolution per expression will be proportional to i^d, where i is the number of interpretations of each program symbol, and d is the maximum depth of the expression DAG. Since d is determined by the user programmer (in general, bounded by a small constant), opportunities for resolver optimization primarily revolve around minimizing i, the number of interpretations of each symbol that are considered. [3] Baker, Theodore P. A one-pass algorithm for overload resolution in Ada. ACM Transactions on Programming Languages and Systems (1982) 4:4 p.601-614 [4] Cormack, Gordon V. An algorithm for the selection of overloaded functions in Ada. 
SIGPLAN Notices (1981) 16:2 p.48-52 Unlike Baker, our system allows implicit type conversions for function arguments and return types; the problem then becomes to find the valid interpretation for an expression that has the unique minimal conversion cost, if such exists. Interpretations can be produced both by overloaded names and implicit conversions applied to existing interpretations; we have proposals to reduce the number of interpretations considered from both sources. To simplify the problem for this discussion, we will consider application resolution restricted to a domain of functions applied to variables, possibly in a nested manner (e.g. f( g( x ), y ), where x and y are variables and f and g are functions), and possibly in a typed context such as a variable initialization (e.g. int i = f( x );); the other aspects of Cforall type resolution should be able to be straightforwardly mapped into this model. The types of the symbol tables used for variable and function declarations look somewhat like the following: variable_table = name_map( variable_name, variable_map ) function_table = name_map( function_name, function_map ) variable_map = multi_index( by_type( variable_type ), variable_decl_set ) function_map = multi_index( by_int( n_params ), by_type( return_type ), function_decl_set ) variable_name and function_name are likely simple strings, with name_map a hash table (or perhaps trie) mapping string keys to values. variable_decl_set and function_decl_set can be thought of for the moment as simple bags of typed declarations, where the declaration types are linked to the graph of available conversions for that type. In a typed context both the variable_decl_set and the function_decl_set should be able to be selected upon by type; this is accomplished by the by_type index of both variable_map and function_map. 
The by_int index of function_map also provides a way to select functions by their number of parameters; this index may be used to swiftly discard any function declaration which does not have the appropriate number of parameters for the argument interpretations being passed to it; given the likely small number of entries in this map, it is possible that a binary search of a sorted vector or even a linear search of an unsorted vector would be more efficient than the usual hash-based index. Given these data structures, the general outline of our algorithm follows Baker, with Cormack's algorithm used as a heuristic filter in typed contexts. In an untyped context, we use a variant of Baker's bottom-up algorithm. The leaves of the interpretation DAG are drawn from the variable symbol table, with entries in the table each producing zero-cost interpretations, and each implicit conversion available to be applied to the type of an existing entry producing a further interpretation with the same cost as the conversion. As in Baker, if two or more interpretations have the same type, only the minimum cost interpretation with that type is produced; if there is no unique minimum cost interpretation then resolution with that type is ambiguous, and not permitted. It should be relatively simple to produce the list of interpretations sorted by cost by producing the interpretations via a breadth-first search of the conversion graph from the initial interpretations provided in the variable symbol table.
To match a function at one of the internal nodes of the DAG, we first look up the function's name in the function symbol table, then select the appropriate number of parameters for the arguments that are provided through the by_int index of the returned function_map, then go through the resulting function_decl_set searching for functions where the parameter types can unify with the provided argument lists; any such matching function produces an interpretation with a cost that is the sum of its argument costs. Though this is not included in our simplified model, this unification step may include binding of polymorphic variables, which introduces a cost for the function binding itself which must be added to the argument costs. Also, checking of function assertions would likely be done at this step as well, possibly eliminating some possible matching functions (if no suitable assertions can be satisfied), or adding further conversion costs for the assertion satisfaction. Once the set of valid function interpretations is produced, these may also be expanded by the graph of implicit conversions on their return types, as the variable interpretations were. This implicit conversion-based expansion of interpretations should be skipped for the top-level expression if used in an untyped (void) context, e.g. for f in f( g( x ) ); or x in x;. On the other hand, if the top-level expression specifies a type, e.g. in int i = f( x );, only top level expressions that return that type are relevant to the search, so the candidates for f can be filtered first by those that return int (or a type convertible to it); this can be accomplished by performing a top-down filter of the interpretations of f by the by_type index of the function_map in a manner similar to Cormack's[4] algorithm.
In a typed context, such as an initialization expression T x = f( g( y ), z );, only interpretations of f( g( y ), z ) which have type T are valid; since there are likely to be valid interpretations of f( g( y ), z ) which cannot be used to initialize a variable of type T, we can use this information to reduce the number of interpretations considered. Drawing from Cormack[4], we first search for interpretations of f where the return type is T; by breadth-first-search of the conversion graph, it should be straightforward to order the interpretations of f by the cost to convert their return type to T. We can also filter out interpretations of f with fewer than two parameters, since both g( y ) and z must produce at least one parameter; we may not, however, rule out interpretations of f with more than two parameters, as there may be a valid interpretation of g( y ) as a function returning more than one value (if the expression was f( y, z ) instead, we could use an exact parameter count, assuming that variables of tuple type don't exist). For each compatible interpretation of f, we can add the type of the first parameter of that interpretation of f to a set S, and recursively search for interpretations of g( y ) that return some type Si in S, and similarly for interpretations of z that match the type of any of the second parameters of some f. Naturally, if no matching interpretation of g( y ) can be found for the first parameter of some f, the type of the second parameter of that f will not be added to the set of valid types for z. Each node in this interpretation DAG is given a cost the same way it would be in the bottom-up approach, with the exception that when going top-down there must be a final bottom-up pass to sum the interpretation costs and sort them as appropriate. If a parameter type for some f is a polymorphic type variable that is left unbound by the return type (e.g.
forall(otype S) int f(S x, int y)), the matching arguments should be found using the bottom-up algorithm above for untyped contexts, because the polymorphic type variable does not sufficiently constrain the available interpretations of the argument expression. Similarly, it would likely be an advantage to use top-down resolution for cast expressions (e.g. (int)x), even when those cast expressions are subexpressions of an otherwise untyped expression. It may also be fruitful to switch between the bottom-up and top-down algorithms if the number of valid interpretations for a subexpression or valid types for an argument exceeds some heuristic threshold, but finding such a threshold (if any exists) will require experimental data. This hybrid top-down/bottom-up search provides more opportunities for pruning interpretations than either a bottom-up or top-down approach alone, and thus may be more efficient than either. A top-down-only approach, however, devolves to linear search through every possible interpretation in the solution space in an untyped context, and is thus likely to be inferior to a strictly bottom-up approach, though this hypothesis needs to be empirically validated. Both Baker and Cormack explicitly generate all possible interpretations of a given expression; thinking of the set of interpretations of an expression as a list sorted by cost, this is an eager evaluation of the list. However, since we generally expect that user programmers will not often use high-cost implicit conversions, one potentially effective way to prune the search space would be to first find the minimal-cost interpretations of any given subexpression, then to save the resolution progress of the subexpressions and attempt to resolve the superexpression using only those subexpression interpretations. 
If no valid interpretation of the superexpression can be found, the resolver would then repeatedly find the next-most-minimal cost interpretations of the subexpressions and attempt to produce the minimal cost interpretation of the superexpression. This process would proceed until all possible subexpression interpretations have been found and considered. A middle ground between the eager and lazy approaches can be taken by considering the lexical order on the cost tuple; essentially, every interpretation in each of the classes below will be strictly cheaper than any interpretation in the class after it, so if a min-cost valid interpretation can be found while only generating interpretations in a given class, that interpretation is guaranteed to be the best possible one:

1. Interpretations without polymorphic functions or implicit conversions
2. Interpretations without polymorphic functions using only safe conversions
3. Interpretations using polymorphic functions without unsafe conversions
4. Interpretations using unsafe conversions

In this lazy-eager approach, all the interpretations in one class would be eagerly generated, while the interpretations in the next class would only be considered if no match was found in the previous class. ## Appendix A: Partial and Total Orders ## The `<=` relation on integers is a commonly known _total order_, and

• src/CodeGen/CodeGenerator.h

 r6cbc25a // Created On       : Mon May 18 07:44:20 2015 // Last Modified By : Peter A. Buhr // Last Modified On : Wed Mar  2 17:32:24 2016 // Update Count     : 28 // Last Modified On : Thu Jun  9 13:15:58 2016 // Update Count     : 29 // std::ostream& operator()(std::ostream & os); }; void extension( Expression *expr ) { if ( expr->get_extension() ) { output << "__extension__ "; } // if } // extension private:
• src/Common/utility.h

 r6cbc25a // Author           : Richard C. Bilson // Created On       : Mon May 18 07:44:20 2015 // Last Modified By : Rob Schluntz // Last Modified On : Thu Apr 28 13:18:24 2016 // Update Count     : 16 // Last Modified By : Peter A. Buhr // Last Modified On : Wed Jun  8 17:33:59 2016 // Update Count     : 22 // } template struct maybeBuild_t { static T * doit( const U *orig ) { if ( orig ) { return orig->build(); } else { return 0; } // if } }; template< typename T, typename U > static inline T * maybeBuild( const U *orig ) { if ( orig ) { return orig->build(); } else { return 0; } // if return maybeBuild_t::doit(orig); }
• src/Parser/ExpressionNode.cc

 r6cbc25a // Created On       : Sat May 16 13:17:07 2015 // Last Modified By : Peter A. Buhr // Last Modified On : Fri Apr  8 15:43:05 2016 // Update Count     : 296 // Last Modified On : Mon Jun 13 14:46:17 2016 // Update Count     : 307 // using namespace std; ExpressionNode::ExpressionNode() : ParseNode(), argName( 0 ) {} ExpressionNode::ExpressionNode( const string *name ) : ParseNode( name ), argName( 0 ) {} ExpressionNode::ExpressionNode( const ExpressionNode &other ) : ParseNode( other.name ) { ExpressionNode::ExpressionNode() : ParseNode() {} ExpressionNode::ExpressionNode( const string *name ) : ParseNode( name ) {} ExpressionNode::ExpressionNode( const ExpressionNode &other ) : ParseNode( other.name ), extension( other.extension ) { if ( other.argName ) { argName = other.argName->clone(); Expression *DesignatorNode::build() const { Expression * ret = get_argName()->build(); Expression * ret = maybeBuild(get_argName()); if ( isArrayIndex ) { "Cond", "NCond", // diadic "SizeOf", "AlignOf", "OffsetOf", "Attr", "CompLit", "?+?", "?-?", "?*?", "?/?", "?%?", "||", "&&", "SizeOf", "AlignOf", "OffsetOf", "Attr", "?+?", "?-?", "?*?", "?/?", "?%?", "||", "&&", "?|?", "?&?", "?^?", "Cast", "?<>?", "??", "?<=?", "?>=?", "?==?", "?!=?", "?=?", "?*=?", "?/=?", "?%=?", "?+=?", "?-=?", "?<<=?", "?>>=?", "?&=?", "?^=?", "?|=?", if ( ! 
( op = dynamic_cast( function ) ) ) { // function as opposed to operator return new UntypedExpr( function->build(), args, maybeBuild< Expression >( get_argName() )); return new UntypedExpr( maybeBuild(function), args, maybeBuild< Expression >( get_argName() )); } // if if ( dynamic_cast< VoidType* >( targetType ) ) { delete targetType; return new CastExpr( expr_node->build(), maybeBuild< Expression >( get_argName() ) ); return new CastExpr( maybeBuild(expr_node), maybeBuild< Expression >( get_argName() ) ); } else { return new CastExpr( expr_node->build(),targetType, maybeBuild< Expression >( get_argName() ) ); return new CastExpr( maybeBuild(expr_node),targetType, maybeBuild< Expression >( get_argName() ) ); } // if } assert( var ); if ( ! get_args()->get_link() ) { return new AttrExpr( var->build(), ( Expression*)0); return new AttrExpr( maybeBuild(var), ( Expression*)0); } else if ( TypeValueNode * arg = dynamic_cast( get_args()->get_link()) ) { return new AttrExpr( var->build(), arg->get_decl()->buildType()); return new AttrExpr( maybeBuild(var), arg->get_decl()->buildType()); } else { return new AttrExpr( var->build(), args.back()); } // if } case OperatorNode::CompLit: throw UnimplementedError( "C99 compound literals" ); // the short-circuited operators return new AttrExpr( maybeBuild(var), args.back()); } // if } case OperatorNode::Or: case OperatorNode::And: Expression *AsmExprNode::build() const { return new AsmExpr( maybeBuild< Expression >( inout ), (ConstantExpr *)constraint->build(), operand->build() ); return new AsmExpr( maybeBuild< Expression >( inout ), (ConstantExpr *)maybeBuild(constraint), maybeBuild(operand) ); } Expression *ValofExprNode::build() const { return new UntypedValofExpr ( get_body()->build(), maybeBuild< Expression >( get_argName() ) ); return new UntypedValofExpr ( maybeBuild(get_body()), maybeBuild< Expression >( get_argName() ) ); } Expression *CompoundLiteralNode::build() const { Declaration * newDecl = type->build();           
               // compound literal type Declaration * newDecl = maybeBuild(type); // compound literal type if ( DeclarationWithType * newDeclWithType = dynamic_cast< DeclarationWithType * >( newDecl ) ) { // non-sue compound-literal type return new CompoundLiteralExpr( newDeclWithType->get_type(), kids->build() ); return new CompoundLiteralExpr( newDeclWithType->get_type(), maybeBuild(kids) ); // these types do not have associated type information } else if ( StructDecl * newDeclStructDecl = dynamic_cast< StructDecl * >( newDecl )  ) { return new CompoundLiteralExpr( new StructInstType( Type::Qualifiers(), newDeclStructDecl->get_name() ), kids->build() ); return new CompoundLiteralExpr( new StructInstType( Type::Qualifiers(), newDeclStructDecl->get_name() ), maybeBuild(kids) ); } else if ( UnionDecl * newDeclUnionDecl = dynamic_cast< UnionDecl * >( newDecl )  ) { return new CompoundLiteralExpr( new UnionInstType( Type::Qualifiers(), newDeclUnionDecl->get_name() ), kids->build() ); return new CompoundLiteralExpr( new UnionInstType( Type::Qualifiers(), newDeclUnionDecl->get_name() ), maybeBuild(kids) ); } else if ( EnumDecl * newDeclEnumDecl = dynamic_cast< EnumDecl * >( newDecl )  ) { return new CompoundLiteralExpr( new EnumInstType( Type::Qualifiers(), newDeclEnumDecl->get_name() ), kids->build() ); return new CompoundLiteralExpr( new EnumInstType( Type::Qualifiers(), newDeclEnumDecl->get_name() ), maybeBuild(kids) ); } else { assert( false );

• src/Parser/StatementNode.cc

 r6cbc25a // Created On       : Sat May 16 14:59:41 2015 // Last Modified By : Peter A. Buhr // Last Modified On : Thu Jul 30 14:39:39 2015 // Update Count     : 130 // Last Modified On : Thu Jun  9 14:18:46 2016 // Update Count     : 132 // branches.pop_front(); } // if return new IfStmt( labs, notZeroExpr( get_control()->build() ), thenb, elseb ); return new IfStmt( labs, notZeroExpr( maybeBuild(get_control()) ), thenb, elseb ); } case While: assert( branches.size() == 1 ); return new WhileStmt( labs, notZeroExpr( get_control()->build() ), branches.front() ); return new WhileStmt( labs, notZeroExpr( maybeBuild(get_control()) ), branches.front() ); case Do: assert( branches.size() == 1 ); return new WhileStmt( labs, notZeroExpr( get_control()->build() ), branches.front(), true ); return new WhileStmt( labs, notZeroExpr( maybeBuild(get_control()) ), branches.front(), true ); case For: { Expression *cond = 0; if ( ctl->get_condition() != 0 ) cond = notZeroExpr( ctl->get_condition()->build() ); cond = notZeroExpr( maybeBuild(ctl->get_condition()) ); Expression *incr = 0; if ( ctl->get_change() != 0 ) incr = ctl->get_change()->build(); incr = maybeBuild(ctl->get_change()); return new ForStmt( labs, init, cond, incr, branches.front() ); } case Switch: return new SwitchStmt( labs, get_control()->build(), branches ); return new SwitchStmt( labs, maybeBuild(get_control()), branches ); case Choose: return new ChooseStmt( labs, get_control()->build(), branches ); return new ChooseStmt( labs, maybeBuild(get_control()), branches ); case Fallthru: return new FallthruStmt( labs ); case Case: return new CaseStmt( labs, get_control()->build(), branches ); return new CaseStmt( labs, maybeBuild(get_control()), branches ); case Default: return new CaseStmt( labs, 0, branches, true ); if ( get_target() == "" ) {                                     // computed goto assert( get_control() != 0 ); return new BranchStmt( labs, get_control()->build(), BranchStmt::Goto ); return new 
BranchStmt( labs, maybeBuild(get_control()), BranchStmt::Goto ); } // if
• src/Parser/parser.cc

 r6cbc25a /* Line 1806 of yacc.c  */ #line 432 "parser.yy" { (yyval.en) = (yyvsp[(2) - (2)].en); } { (yyval.en) = (yyvsp[(2) - (2)].en)->set_extension( true ); } break; /* Line 1806 of yacc.c  */ #line 685 "parser.yy" { (yyval.sn) = new StatementNode( (yyvsp[(2) - (2)].decl) ); } { (yyval.sn) = new StatementNode( (yyvsp[(2) - (2)].decl) )/*->set_extension( true )*/; } break; /* Line 1806 of yacc.c  */ #line 1475 "parser.yy" { (yyval.decl) = (yyvsp[(2) - (3)].decl); } { (yyval.decl) = (yyvsp[(2) - (3)].decl)/*->set_extension( true )*/; } break; /* Line 1806 of yacc.c  */ #line 1478 "parser.yy" { (yyval.decl) = (yyvsp[(2) - (3)].decl); } { (yyval.decl) = (yyvsp[(2) - (3)].decl)/*->set_extension( true )*/; } break; /* Line 1806 of yacc.c  */ #line 1994 "parser.yy" { (yyval.decl) = (yyvsp[(2) - (2)].decl); } { (yyval.decl) = (yyvsp[(2) - (2)].decl)/*->set_extension( true )*/; } break;
• src/Parser/parser.yy

 r6cbc25a // Created On       : Sat Sep  1 20:22:55 2001 // Last Modified By : Peter A. Buhr // Last Modified On : Tue Jun  7 08:08:31 2016 // Update Count     : 1560 // Last Modified On : Mon Jun 13 15:00:23 2016 // Update Count     : 1578 // { $$= 1; } | EXTENSION cast_expression // GCC {$$ = $2; } { $$= 2->set_extension( true ); } | ptrref_operator cast_expression // CFA {$$ = new CompositeExprNode($1, $2 ); } { $$= new StatementNode( 1 ); } | EXTENSION declaration // GCC {$$ = new StatementNode($2 ); } { $$= new StatementNode( 2 )/*->set_extension( true )*/; } | function_definition {$$ = new StatementNode( $1 ); } new_field_declaring_list ';' // CFA, new style field declaration | EXTENSION new_field_declaring_list ';' // GCC { $$= 2; } {$$ =$2/*->set_extension( true )*/; } | field_declaring_list ';' | EXTENSION field_declaring_list ';'                            // GCC { $$= 2; } {$$ = $2/*->set_extension( true )*/; } ; } | EXTENSION external_definition { $$= 2; } {$$ =$2/*->set_extension( true )*/; } ;
• src/ResolvExpr/AlternativeFinder.cc

 r6cbc25a // Author           : Richard C. Bilson // Created On       : Sat May 16 23:52:08 2015 // Last Modified By : Rob Schluntz // Last Modified On : Wed Apr 20 14:24:03 2016 // Update Count     : 24 // Last Modified By : Peter A. Buhr // Last Modified On : Mon Jun 13 16:13:54 2016 // Update Count     : 25 // for ( std::list< DeclarationWithType* >::iterator i = declList.begin(); i != declList.end(); ++i ) { VariableExpr newExpr( *i, nameExpr->get_argName() ); newExpr.set_extension( nameExpr->get_extension() ); alternatives.push_back( Alternative( newExpr.clone(), env, Cost() ) ); PRINT(
• src/SynTree/Expression.cc

 r6cbc25a // Author           : Richard C. Bilson // Created On       : Mon May 18 07:44:20 2015 // Last Modified By : Rob Schluntz // Last Modified On : Fri May 13 13:23:11 2016 // Update Count     : 40 // Last Modified By : Peter A. Buhr // Last Modified On : Mon Jun 13 16:03:39 2016 // Update Count     : 42 // Expression::Expression( Expression *_aname ) : env( 0 ), argName( _aname ) {} Expression::Expression( const Expression &other ) : env( maybeClone( other.env ) ), argName( maybeClone( other.get_argName() ) ) { Expression::Expression( const Expression &other ) : env( maybeClone( other.env ) ), argName( maybeClone( other.get_argName() ) ), extension( other.extension ) { cloneAll( other.results, results ); } os << std::string( indent, ' ' ) << "with designator:"; argName->print( os, indent+2 ); } // if if ( extension ) { os << std::string( indent, ' ' ) << "with extension:"; } // if }
• src/SynTree/Expression.h

 r6cbc25a // Author           : Richard C. Bilson // Created On       : Mon May 18 07:44:20 2015 // Last Modified By : Rob Schluntz // Last Modified On : Wed Apr 27 17:06:49 2016 // Update Count     : 21 // Last Modified By : Peter A. Buhr // Last Modified On : Wed Jun  8 17:05:30 2016 // Update Count     : 22 // Expression *get_argName() const { return argName; } void set_argName( Expression *name ) { argName = name; } bool get_extension() const { return extension; } void set_extension( bool exten ) { extension = exten; } virtual Expression *clone() const = 0; TypeSubstitution *env; Expression* argName; // if expression is used as an argument, it can be "designated" by this name bool extension = false; };
• src/Tests/Abstype.c

 r6cbc25a type T | { T x( T ); }; otype T | { T x( T ); }; T y( T t ) { } forall( type T ) lvalue T *?( T * ); forall( otype T ) lvalue T *?( T * ); int ?++( int * ); int ?=?( int *, int ); forall( dtype DT ) DT * ?=?( DT **, DT * ); type U = int *; otype U = int *; U x( U u ) {
• src/Tests/Attributes.c

 r6cbc25a //    @max // // 2. a direct application to a manifest type // 2. a direct application to a manifest otype // //    @max( int ) // // 3. constraining a type variable; the application is implicitly performed at the call site as in (2) // 3. constraining a otype variable; the application is implicitly performed at the call site as in (2) // //    forall( type T | { T @max( T ); } ) T x( T t ); //    forall( otype T | { T @max( T ); } ) T x( T t ); // // //    x = (*attr_var); // // 2. an indirect application to a manifest type // 2. an indirect application to a manifest otype // //    (*attr_var)( int ) // // 3. a direct application to a type variable // 3. a direct application to a otype variable // //    @max( T ) // 3. polymorphic // //    forall( type T | constraint( T ) ) int @attr( T ); //    forall( otype T | constraint( T ) ) int @attr( T ); int @max = 3; int main() { int x; type @type(type t);                                                                 // compiler intrinsic type @widest(type t); @type(x) *y;                                                                                // gcc: typeof(x) *y; const @widest(double) *w;                                                   // gcc: const typeof(x) *w; * @type(3 + 4) z;                                                                   // cfa declaration syntax otype @otype(otype t);                                                                      // compiler intrinsic otype @widest(otype t); @otype(x) *y;                                                                               // gcc: otypeof(x) *y; const @widest(double) *w;                                                   // gcc: const otypeof(x) *w; * @otype(3 + 4) z;                                                                  // cfa declaration syntax y = @max; z = @max(x) + @size(int);
• src/Tests/InferParam.c

 r6cbc25a double ?=?( double*, double ); forall( type T, type U | { U f(T); } ) U g(T); forall( otype T, otype U | { U f(T); } ) U g(T); float f( int ); double f( int ); } context has_f_and_j( type T, type U ) { context has_f_and_j( otype T, otype U ) { U f( T ); U j( T, U ); float j( int, float ); forall( type T, type U | has_f_and_j( T, U ) ) U k( T ); forall( otype T, otype U | has_f_and_j( T, U ) ) U k( T ); void l() {

• src/tests/Array.c

 r6cbc25a //Testing array declarations int a1[]; int a2[*]; double a4[3.0]; //int a2[*]; //double a4[3.0]; int m1[][3]; int m2[*][*]; //int m2[*][*]; int m4[3][3]; int fred() { int a1[]; int a2[*]; //      int a1[]; //      int a2[*]; int a4[3]; int T[3];
• src/tests/Forall.c

 r6cbc25a void g1() { forall( type T ) T f( T ); forall( otype T ) T f( T ); void f( int ); void h( void (*p)(void) ); void g2() { forall( type T ) void f( T, T ); forall( type T, type U ) void f( T, U ); forall( otype T ) void f( T, T ); forall( otype T, otype U ) void f( T, U ); int x; } typedef forall ( type T ) int (*f)( int ); typedef forall ( otype T ) int (*f)( int ); forall( type T ) forall( otype T ) void swap( T left, T right ) { T temp = left; } context sumable( type T ) { context sumable( otype T ) { const T 0; T ?+?(T, T); }; type T1 | { const T1 0; T1 ?+?(T1, T1); T1 ?++(T1); [T1] ?+=?(T1,T1); }, T2(type P1, type P2 ), otype T1 | { const T1 0; T1 ?+?(T1, T1); T1 ?++(T1); [T1] ?+=?(T1,T1); }, T2(otype P1, otype P2 ), T3 | sumable(T3); type T2(type P1, type P2) | sumable(T2(P1,P2)) = struct { P1 i; P2 j; }; otype T2(otype P1, otype P2) | sumable(T2(P1,P2)) = struct { P1 i; P2 j; }; T2(int, int) w1; typedef T2(int, int) w2; w2 g2; type w3 = T2(int, int); otype w3 = T2(int, int); w3 g3; forall( type T | sumable( T ) ) forall( otype T | sumable( T ) ) T sum( int n, T a[] ) { T total = 0; } forall( type T | { const T 0; T ?+?(T, T); T ?++(T); [T] ?+=?(T,T); } ) forall( otype T | { const T 0; T ?+?(T, T); T ?++(T); [T] ?+=?(T,T); } ) T twice( T t ) { return t + t; } forall( type T | { const T 0; int ?!=?(T, T); int ?
• src/tests/Functions.c

 r6cbc25a int ((*f12())[])[3] {} // "implicit int" type specifier (not ANSI) // "implicit int" otype specifier (not ANSI) fII1( int i ) {}
• src/tests/GccExtensions.c

 r6cbc25a __signed s2; __typeof(s1) t1; __typeof__(s1) t2; __otypeof(s1) t1; __otypeof__(s1) t2; __volatile int v1;
• src/tests/Scope.c

 r6cbc25a typedef float t; y z; type u = struct { int a; double b; }; otype u = struct { int a; double b; }; int f( int y ); y q; y w( y y, u v ) { type x | { x t(u); }; otype x | { x t(u); }; u u = y; x z = t(u); y p; context has_u( type z ) { context has_u( otype z ) { z u(z); }; forall( type t | has_u( t ) ) forall( otype t | has_u( t ) ) y q( t the_t ) { t y = u( the_t );
• src/tests/Subrange.c

 r6cbc25a // A small context defining the notion of an ordered type.  (The standard // A small context defining the notion of an ordered otype.  (The standard // library should probably contain a context for this purpose.) context ordered(type T) { context ordered(otype T) { int ?
• src/tests/TypeGenerator.c

 r6cbc25a context addable( type T ) { context addable( otype T ) { T ?+?( T,T ); T ?=?( T*, T); }; type List1( type T | addable( T ) ) = struct { T data; List1( T ) *next; } *; otype List1( otype T | addable( T ) ) = struct { T data; List1( T ) *next; } *; typedef List1( int ) ListOfIntegers; //List1( int ) li; [int] h( * List1( int ) p );                                                    // new declaration syntax struct( type T ) S2 { T i; };                                                   // actual definition struct( otype T ) S2 { T i; };                                                  // actual definition struct( int ) S3 v1, *p;                                                                // expansion and instantiation struct( type T )( int ) S24 { T i; } v2;                                // actual definition, expansion and instantiation struct( type T )( int ) { T i; } v2;                                    // anonymous actual definition, expansion and instantiation struct( otype T )( int ) S24 { T i; } v2;                               // actual definition, expansion and instantiation struct( otype T )( int ) { T i; } v2;                                   // anonymous actual definition, expansion and instantiation struct( type T | addable( T ) ) node { T data; struct( T ) node *next; }; type List( type T ) = struct( T ) node *; struct( otype T | addable( T ) ) node { T data; struct( T ) node *next; }; otype List( otype T ) = struct( T ) node *; List( int ) my_list; type Complex | addable( Complex ); otype Complex | addable( Complex ); int main() {
• src/tests/Typedef.c

 r6cbc25a a c; typedef typeof(3) x, y;  // GCC typedef otypeof(3) x, y;  // GCC x p; int main() { typedef typeof(3) z, p; typedef otypeof(3) z, p; z w; p x;
• src/tests/Typeof.c

 r6cbc25a int main() { int *v1; typeof(v1) v2; typeof(*v1) v3[4]; otypeof(v1) v2; otypeof(*v1) v3[4]; char *v4[4]; typeof(typeof(char *)[4]) v5; typeof (int *) v6; typeof( int ( int, int p ) ) *v7; typeof( [int] ( int, int p ) ) *v8; otypeof(otypeof(char *)[4]) v5; otypeof (int *) v6; otypeof( int ( int, int p ) ) *v7; otypeof( [int] ( int, int p ) ) *v8; }
• src/tests/io.c

 r6cbc25a // Created On       : Wed Mar  2 16:56:02 2016 // Last Modified By : Peter A. Buhr // Last Modified On : Thu May 26 10:06:00 2016 // Update Count     : 28 // Last Modified On : Wed Jun  8 22:52:04 2016 // Update Count     : 30 // long double _Complex ldc; char s1[10], s2[10]; int x = 3, y = 5, z = 7; sout | x * 3 | y + 1 | z << 2 | x == y | (x | y) | (x || y) | (x > z ? 1 : 2) | endl; sout | 1 | 2 | 3 | endl; sout | '1' | '2' | '3' | endl; sout | 1 | "" | 2 | "" | 3 | endl; sout | endl; ifstream in;                                                                                            // create / open file